AI Goes to College: In the Classroom and Beyond

Tuesday, September 24, 2024 - Much of the buzz around artificial intelligence centers on its potential to transform the college of tomorrow, but many schools are making meaningful change with this technology today. On this episode, we go deep on the applications of AI, from recruitment to instruction to supporting post-grad success. We sit down with Lev Gonick, Chief Information Officer at Arizona State University, and Ashley Budd, Senior Marketing Director at Cornell University, to dig into the ways their colleges are leveraging the power of AI. This episode is made with support from CollegeVine.


Links We Mentioned

AI-powered educational experiences underway at ASU

Four Singularities for Research by Ethan Mollick 

Reading Ease Calculator created by Todd Rogers and Jessica Lasky-Fink, authors of Writing for Busy Readers

Chapters

00:00 - Intro
01:19 - A Brief Recent History of AI
05:05 - AI Partnerships at ASU
08:29 - An Admonition on Privacy
10:56 - Classroom and Administrative Applications of AI
15:46 - Prioritizing Projects
18:15 - ASU’s Approach to Tech Partnerships
22:35 - AI in the Year Ahead
25:50 - AI’s Impact on Research
30:11 - Diversifying the Project Portfolio
33:55 - AI and Stanford’s Conference Decision
35:27 - AI’s Applications in Recruitment and Admissions
44:06 - Standardizing the Transcript
48:51 - The AI Arms Race
54:20 - Transactional or Transformational?

Transcript

Jeff Selingo:

The last episode of Future U provided a 50,000 foot view of the impact artificial intelligence will have in the near and long term on colleges and universities, especially when it comes to what they teach and how they teach. On today's episode, we're going to take a deeper dive on how it's already changing the way colleges operate, from admissions to marketing and enrollment to research.

Michael Horn:

And to help us, we've brought in two different experts who work at universities on the cutting edge of where AI can make changes in those areas. That's all ahead on this special episode of Future U.

Sarah:

This episode is sponsored by CollegeVine. CollegeVine has created higher ed's first autonomous AI agents to modernize all aspects of the student recruitment process. You can find out more about what they're doing at www.collegevine.com/recruit. Subscribe to Future U wherever you get your podcasts, and if you enjoy the show, send it along to a friend so others can discover the conversations we're having about higher education.

Michael Horn:

I'm Michael Horn. 

A Brief Recent History of AI

Jeff Selingo:

And I'm Jeff Selingo. Okay, so, Michael, before we introduce our guests, I want to make sure folks know what we're talking about here. Artificial intelligence, AI, you know, it's been around for a while now. The hype even preceded ChatGPT's much-heralded release in 2022. Just think back to IBM's Deep Blue in the 1990s. And many applications in higher education have been using AI for quite a while, from things like adaptive math programs to chatbots that answer different student questions. But when OpenAI, and I must say it's a very much different company now than when it released the original 3.5 version of ChatGPT in November 2022, when I don't think anyone knew about ChatGPT or OpenAI, released that product, it really caught the world on fire. 100 million people rushed to start using it within months, which is really kind of an unprecedented rate of adoption.

Jeff Selingo:

And the reason everybody kind of rushed to it is that it was really the first time we all had open access to a certain strand of AI, namely large language models. So you're going to hear these letters a lot: LLMs, right? These models stand in contrast to the machine learning approach that was behind things like Deep Blue and, later on, things like the Google search bar, right, which would use search data over time to improve future results. These LLMs are built very differently. They attempt to create a model with many more layers that can learn over time from many, many more inputs. And I mention all of this because ChatGPT is far from the only large language model out there; there are many others, as we're going to hear today. And what they're increasingly enabling us to do is build applications on top of those models so that the models themselves can be more useful, more accurate, and, I think this is the most important piece of it, more intuitive, so they can become better learners and produce better outputs over time.

Michael Horn:

Or fewer hallucinations, Jeff, would also be a good thing for all of us. But where you left off is the territory where we start when we bring in our first guest, Jeff, because he's going to talk to us about these different models and why, as faculty use them, certain safeguards can be quite important given the sensitivity of the data that they're using, and how, frankly, without those safeguards, companies like OpenAI would be using that data to improve those models. And that's the fundamental tension here. I think we want the models to improve, but we have sensitive data that we've got to be careful around. And I think there are questions about whether we want that data used commercially or whether there are other use cases. Now, before we get into all of this and more, one note from us. If you're just tuning in, or if you are a loyal listener, there are a couple of things you can do to really help the show: like us, of course, on whatever platform you're listening to us on, and subscribe on YouTube, Apple Podcasts, Spotify, all the podcast players. It's a big help.

Michael Horn:

And if you have a minute, of course, leave us a review. You can also email the show. We get a lot of pitches for guests, but we frankly like to hear what's on your mind and questions you might have for us. So just go to futureupodcast.com and click contact. And by the way, while you're there, subscribe to our newsletter. We're aiming to be much more active this year, which is going to be fun, and it's a great way to communicate with all of you. All right, with all that housekeeping out of the way, Jeff, today we have two guests who form the meat of today's episode. First up is Lev Gonick.

AI Partnerships at ASU

Michael Horn:

Lev is the Chief Information Officer at Arizona State University. He also chairs the Sun Corridor Network, which supports the state's research and education computer network. Prior to that, he was the CIO at Case Western Reserve University. So, Jeff, he is no stranger to higher ed technology and information systems, and as you'll hear, much, much more. Lev, welcome to Future U. Good to see you.

Lev Gonick:

Thanks, Michael. Good to see you and Jeff.

Michael Horn:

Yeah. So earlier this year, Arizona State University became the first university, I believe, to form a deal with OpenAI. What does the partnership mean for practical purposes? Help us understand what's behind it, right?

Lev Gonick:

So January 18 is a date inscribed not in infamy, but as a starting point in our journey with OpenAI, part of a much longer journey at ASU on all things machine learning and AI. Specifically, the way we framed the quote-unquote deal was to work together to help OpenAI understand the needs of an enterprise higher education customer at that time. Truth is, and to this day, it's an emergent set of offerings using a very significant large language model infrastructure, which needed to be optimized in terms of how it might be used in education, but also, specifically, three things. One, we wanted to make sure that our student data was not going to get shared. Two, in configuring that requirement with them, we also made sure, as part of the then-new OpenAI enterprise product, that there would be no sharing of any of our intellectual property, that is to say, from our research agenda. And then finally, third, we have very specific requirements, as certainly does every university and many enterprises, for security. Those pieces needed to be specifically attended to in order to pull it all together. So that was on the terms of helping them shape an enterprise product. And then we offered ASU as a kind of development partner, a research and development partner, for what the offering might actually look like in higher education, a go-to-market strategy for education. And that started with our proposal of working essentially to define what could be some impact activities that would actually be meaningful and compelling for what matters to ASU, namely our student success, of course, research in the public interest, and then trying to figure out how we could all work better, or if not better then certainly differently, with this new technology.
And then finally, we did arm wrestle on some terms around the financial picture, and on their commitment, if it was all going well, to work with ASU on a white paper that would describe ASU's journey in partnership. And in fact, that particular piece just came out about ten days ago.

An Admonition on Privacy

Michael Horn:

Very cool. We'll find a way to link to that in the show notes as well. A few other universities have since followed by building their own sandboxes for faculty, staff, and students to experiment with AI. For folks listening who aren't experts in this, talk about the difference between having access to a university sandbox for AI tools versus accessing these models just like the rest of the public does, and how faculty, staff, and students ought to be thinking about this.

Lev Gonick:

Right. So one is a little bit of a warning, an admonition, I think. In that first year of, again, either utopian or dystopian interest in what this new generative AI thing might look like, we were taking a lot of significantly naive risks in using the consumer version of the technology. I had a chance to literally move all over the country to try to make the case that we need to be very careful, speaking to my counterparts, speaking to security officials, speaking to presidents, because everyone wanted to dive into the deep end right away without understanding sort of what the parameters needed to be. So that's the first piece. The consumer experience is great. Just be extremely mindful that it's basically a public bulletin board outside your office door, and the machines are going to get tuned along the way. The actual LLM, again, this is not specific to OpenAI or anybody else, this is just the way the machine learning part of the generative AI offering actually works: it needs data to train and get tuned. Here at ASU, OpenAI, as I said at the top, is actually one of 34 different large language models that we've made available in our sandbox. And a lot of what I think the community needs to know now is that it's still extremely early days. We're in the middle of, or heading towards, the playoffs in baseball, so to use that analogy, it's still just the bottom of the first inning, and we have to be very careful where we place bets.

Classroom and Administrative Applications of AI

Jeff Selingo:

Yeah. So, Lev, I think that's a perfect transition to a question I wanted to ask you. What are the questions we do want answered in higher ed? And particularly, where should colleges and universities place their bets? There's been so much discussion in the press about AI in the classroom and for research. But it's interesting: I've spoken to senior leadership teams and trustees at several different types of institutions just over the last couple of weeks, and one of the exercises I often do with them is, okay, get up, and if you think AI is gonna have more impact in the classroom, move to this side of the room, and if you think AI is gonna have more impact in the short term on services and the administrative side of the house, move to that side of the room. And what's really interesting is that the overwhelming response, at least from these few universities I've spoken to, from the administrators and from the trustees, is that they really think the impact is going to be on the administrative and services side, at least in the short term, not on the classroom and learning side. So what do you think? I know this is not, you know, either/or here. But what do you think?

Lev Gonick:

It's certainly not binary. Yeah, and Jeff, what I'd share with you is that the area where we're likely to see the nearest-term impact is actually in research. So I know this is only a subset of all the schools and colleges that are out there, but with the use of AI, both in its application for discovery and in the workflow associated with researchers submitting grants and applications, there's already a high market motivation to begin to leverage these tools for that kind of outcome. And so, at least for a research university, and the truth is for independent researchers, no matter whether you are at a liberal arts college or at a research-intensive institution, the moment is not too soon to be really leveraging the technology. In terms of the other legs of the stool, I actually do agree with the idea, and we're working at this here at ASU, because it is a complex use of the technology for enterprise use, not for an individual course or even, perhaps, an individual degree, but for the whole enterprise. It's an early moment and it's a complicated moment. And so what we are doing, and this is consistent with at least part of the folks in the room that you have moved from side to side, is actually focusing in on the administrative side of the house, not at all to the exclusion of any uses of AI that are facing our students and their families, but in particular to try to actually get the value, to outline the value, of AI. So, for example, at ASU we have 2 million calls a year into our Experience Center, which is our call center. We've just come through the beginning of the school year, and financial aid generates over a quarter of a million inbound calls. We were able to use AI in two ways. One was for our call attendants to be able to use a generative AI interface to all of the knowledge base articles that were there, rather than saying, I know where to click, click, click, click to get the answer.
So it was literally a dialogue that the attendants used, and then there was an opportunity also for students to begin to have more of a generative experience querying the tier-one kinds of challenges. And we knocked out 34% of all of the inbound calls using basically this technology as a way of engaging. So there's the idea of creating concierge infrastructure for all of the back of the house of the institution; we're not unique in higher education, and that's where a lot of effort is going right now. But we're also using it for all kinds of classroom and instructional purposes. At ASU, we actually went out to the campus community and asked them what challenges they wanted to solve. A relatively small subset of them were around the back of the house, if you will, the administrative functions. We've had over 500 grant submissions, and of the 254 that are in flight, well over 200 of them are actually classroom focused. And they're individual faculty working with either their research teams or their students to actually advance work: things like question banks, syllabi, competency frameworks, all of those kinds of tools that are being used to advance that part of administering the classroom experience.

Prioritizing Projects 

Michael Horn:

You've received several hundred proposals from faculty about how to use AI, asking for support and so forth. You said you're working on a couple hundred of them at the moment. I'm just curious: you gave us a view of what a lot of them are about, but how do you sort through which ones to support? That's the demand side, if you will, of the faculty and what they want from you. How do you all prioritize what you're looking for in them and where you give support? And frankly, if a proposal, say, is not accepted, I imagine they can still go on themselves and do things with it. So just help us with the mechanics of the process and your own prioritization.

Lev Gonick:

Sure. What we've done is carefully engage the faculty community early on, in partnership with our provost and her team, engaging significantly in discovery work. For example, we have a faculty ethics committee that works on all the hard questions that we have. We have a faculty advisory committee that works on second and third time horizons, not just, like, helping us right now, but what else we should be thinking about. And so what we really did is we went to that faculty community and said, the Challenge Grant is: where do you think you can actually deliver impact? The impact question. Now, what's come back, as you say, is hundreds and hundreds of proposals. And what we provide in enterprise technology at ASU is, of course, licensing to the product, but we also provide, through a team that I've helped to assemble, ASU's AI acceleration team, technical support, handholding, to actually help faculty take their concepts and begin to create those minimally viable products or proofs of concept. And if they don't fit the OpenAI challenge, we have 33 other large language models. And so a lot of that has been working, for example, with our research community in ASU's Knowledge Enterprise on a number of very intensive uses, or proposed intensive uses, that were not resource-optimized for OpenAI; we have several other large language models which we'd already identified as more likely to be of value for the intensive requirements of a computationally driven engagement.

ASU’s Approach to Tech Partnerships 

Jeff Selingo:

So we've been talking a lot about the inside of ASU here, and I want to ask you, because ASU has long been known for partnering with edtech companies that help it scale solutions to serving tens of thousands of students online or in person. And when I go to any higher ed conference these days, as you do, you know, every company says it's an AI company, right? So when it comes to AI and outside vendors, how do you think about partnerships on, you know, everything from enrollment management to teaching and learning to advancement to research? How are you thinking about partnering with these outside vendors?

Lev Gonick:

Right. Just to underscore, to be Captain Obvious here, press releases today in the edtech space cannot have anything other than AI: something AI-enabled, AI-driven, AI transformation, AI silver bullet. The truth is, a lot of that is essentially marketing hype. So buyer beware; do the due diligence. There are actually no silver bullets for any of the perennial challenges that we have in higher education. Let's just understand that more times than not, those are the wizard behind the screen in The Wizard of Oz. In terms of how ASU approaches partnerships, for us, and this has certainly been my experience over nearly 40 years of working in this space, it's to find the rare organization that understands that while they want to transact business and we want to develop a set of solutions relevant to the pain point or the issue we're trying to deal with, that is only the vendor transactional relationship. A lot of vendors will say that's the partnership we're after. Necessary, but insufficient, certainly in my terms and certainly for ASU, because ASU has been doing good partnership work since well before I arrived on the scene. Those relatively few are really folks who want to essentially co-design and co-develop with ASU, either core products or customizations of their existing product sets, to meet our definition of who the learner and who the student is, rather than, as it were, the one that they've been using for 25 years: who is a student and how do you license software to a student? Well, at ASU today, we have traditional students and we have literally millions of learners who come to ASU to take either credentials or short courses and the like. And we still need to license that software.
Well, you have to educate the vendor community: if you want to enter into a partnership with ASU, we're going to have to work together to come to both the technical requirements to support this kind of work and a different licensing agreement. On the creative side, if you want to, as we have with OpenAI, and by the way also with Google, and as we're aspirationally trying to do with Apple and with Microsoft, all of whom are in that small set, we try to actually talk about your product. People want to get to market quickly. We're saying, knock your socks off. It's not price competitive today. It may or may not actually meet the needs. But if you want to design for the needs of, for example, STEM education and generative AI, ASU stands ready to work with you on how to solve the math conundrum in colleges. Who's ready to not only sell us a solution that we know is not likely to work on day one, but who wants to work with us to actually drive toward higher efficacy together? And if it works for ASU, the value is that you get ASU as a strategic customer and an opportunity for us to go to the market together and say, we actually worked on this together. And there are many, many cases where ASU has been successful in this. And just to be honest, obviously we haven't always been successful, but that's our approach to partnership.

AI in the Year Ahead 

Jeff Selingo:

Right. And I think that your goals are the same on both of those sides; it's just that the time horizon might be different, right? Because they need to have a sale this quarter or next quarter or whatever that might be. So just one quick closing question, because this is moving so fast and I'm kind of curious: if we had you on the show again a year from now, what do you think is the one question we might ask you about AI that we didn't even think about asking today?

Lev Gonick:

How did it have so much disruptive impact in such a short period of time? I think we're at that moment where we can only see incremental ways of using it to tweak. It won't be long, maybe it's a little bit more than a year from now, but in the next short time horizon, call it 18 months or so, we're going to be able to begin to imagine and then design for some of the hardest questions that we face in education. Once we're beyond the idea that a car has got to be a buggy without a horse, that kind of moment in time, we're going to be ready to redesign what the actual experience will look like, the ways in which we can actually leverage the tools to really do disruptive work. It's just a little too hard just yet. And by the way, the technology isn't quite ready for prime time. And then we do have regulatory challenges that, lest we forget, can get in the way. But in the relatively near period of time, I think we're going to begin to see it. And so you can ask the question: how is it that it had so much disruptive impact when a year ago we were just talking about tinkering around the edges?

Jeff Selingo:

Yeah, Lev, just fascinating stuff on how quickly this is moving. If we could only predict the future, it would be much easier. But thank you so much for joining us today on Future U. We'll be right back. We've talked a lot about how AI will transform higher ed learning and development, but what about administrative work, like recruiting new students? For our break, I'm bringing in a new guest to the show. [Phone ringing] Hi, this is Jeff.

Sarah:

Hi, Jeff. It's great to be on the show. I'm Sarah. I and over 60 other AI recruiters work with CollegeVine's partners. We pair one-to-one with prospective students to finally personalize outreach at scale.

Jeff Selingo:

Well, Sarah, that's fascinating, but I think if I were a student talking to you, I might be pretty difficult. So how would you respond if I asked what the ROI on a communications degree might be?

Sarah:

That's a great question, Jeff. The ROI on a communications degree can vary based on several factors, like the industry you enter, location, and how you leverage networking opportunities. At Ithaca College, where you studied, the strong alumni network and practical experience provided through internships can significantly enhance a graduate's career prospects. This hands-on approach helps maximize the return on investment for the degree.

Jeff Selingo:

Wow. That's pretty impressive. So where can people go to learn more about you and other AI recruiters?

Sarah:

You can go to www.collegevine.com/recruit. That's Vine with a V, as in Victor. Thanks, Jeff.

Jeff Selingo:

Thanks, Sarah.

Sarah:

You're welcome, Jeff. Take care.

AI’s Impact on Research

Jeff Selingo:

So welcome back to Future U. We're going to bring in another guest in a minute, but before we do, Michael, I really wanted to get your reaction to a couple of points that Lev made. One of the interesting things he said was that there's this low-hanging fruit of using AI in the research function, which frankly we haven't really talked a lot about; when we talk about AI, it's so focused on what happens in the classroom. So I wanted to get your take on that, on using AI in the research function.

Michael Horn:

Well, first, I love that he brought that up, Jeff, because research is one of those core functions of certain universities, and it's something that we don't talk about nearly enough. We talk about the student experience. But universities are an important contributor to society through the research they do, particularly at a place like Arizona State, which contributes so much valuable research in so many fields important to Arizona and its local environment, but increasingly globally as well. And I thought his points were really good ones. I thought I'd bring up, Jeff, something from back in May: Ethan Mollick at Wharton actually wrote about this, and we'll link to it in the show notes. He framed his piece around a crisis, as he put it, in research that predates AI. The crisis was the rapidly declining productivity and pace of innovation in research, which he measured in everything from cancer research to agriculture. And his take was that AI is simultaneously assisting with but also exacerbating this challenge in four distinct ways. I just thought I'd mention two of them here. First, he was pointing out, obviously, and it's sort of the corollary to what Lev was talking about: you can do a lot more writing of applications. Well, the pace of writing your research up can also go up significantly with AI. Now, that might not be good writing, but the pace is increasing either way, right? And here's the tension with that: academic journals and the peer review process are already taking a really long time to get the flood of papers they have coming in vetted, through peer review, and out to publication. We're measuring this in years, sometimes up to a decade. I suspect research you've done, depending on your field, might even be out of date by the time it actually appears.
And now you imagine, with the pace of writing potentially increasing, that all of a sudden we're going to have even more things coming to our doorstep. So how will journals and the faculty who do peer review deal with this? Well, Ethan says, here again, AI actually can help with peer review. It can do some of the work, but then, of course, that raises a whole bunch of integrity questions: how do we trust it, and so forth. So that's the first thing. The second thing: again, Lev talked about how faculty can use AI to increase the speed of research, of the applications on the front end, but we can also use AI on the back end to not just produce academic writing faster, but actually make it more accessible to a wider audience. Now, we know that academic writing is, I'm going to use the word dense to be charitable. And we also know that vanishingly few people read most papers. Part of that has to do with the narrowness, in some cases, of what people are researching, but some of it also clearly has to do with how they're written. And so I think a question is, can AI help make the writing clearer? I think the answer there is a clear yes, Jeff. One other section is just on my mind, because a lot of the administrative functions, Jeff, that he mentioned, where AI can make an impact, are those we've actually seen at institutions like ASU for some time. They didn't feel new to me. It just seems like we're seeing an improvement of those now: better helping students get answers to questions, for example, and improving the student experience more generally so that, as you always like to point out, a student doesn't have to physically go to four or five different places on campus to get an answer to something that in any other service industry they'd just ask the chatbot about and get a simple answer.

Diversifying the Project Portfolio

Jeff Selingo:

Yeah. Michael, so many good points from you and Lev on that front. And one of the other things he talked about that I know really sparked your interest was how, as a community, they decide which of the hundreds of proposed projects they got around AI to actively support.

Michael Horn:

Yeah. And I was really glad I asked the question, because I learned a lot from his answer. And I think the point that he made is that the projects coming in are very different types of projects. He mentioned this notion of time horizons, the horizon one, two, and three framework, for those that don't know it. Basically, horizon one projects are more incremental in nature. They also have faster timelines, they're less risky, and they're more of a sure thing. Horizon three projects are sort of those moonshots: transformational projects, long timelines, much riskier, probably a much bigger level of investment as well. And horizon two sort of sits in between. And what I liked about what Lev said is that one of the first things you do here is have a faculty committee to help not just filter, hey, we're going to do this project versus that one, but actually look at all the proposals and bucket them into these three different buckets, so that you're not comparing a horizon one proposal that's super small, incremental cost, maybe, you know, a couple hundred thousand dollars, versus a horizon three one that may end up costing several million dollars before you're done. Because they have totally different levels of potential impact, risk, and needed resources, it just doesn't make sense to pit them against each other. It's very difficult to say, how do I judge this versus that? And ideally, and I imagine from his comments that ASU does this, you'd say, look, and I'm just going to make up a hypothetical here, but we want to have the capacity to support, say, 150 horizon one projects this year, 75 horizon two projects, and 35 beginnings of horizon three projects. And so we'll take all the proposals that are horizon three and we're only going to judge them against the other horizon three ones to figure out what those 35 are. The point is, we don't know exactly what we're going to do ahead of time.
Our faculty from the grassroots have great ideas. We'll figure out what to do based on what they submit. But we are going to commit to a deliberate strategy in terms of the mix of innovation and the percentages in each of those buckets, so that as a university we make sure we're hitting our strategic plan milestones, we've got a nice mix of those incremental year-to-year things as well as those bigger moonshots, and we're prepared to actually support them with the level of resources, processes, and team structures that they're going to need over the long run if they really work out. So, yeah, I think it's a really important thing that they're doing there, Jeff.

Jeff Selingo:

Yeah. And I like, Michael, how you laid out this deliberate strategy of mixing innovation types, because I don't think that necessarily happens. In fact, I don't think it happens at all in most higher education institutions. Most institutions have one uber-strategy, and they say, okay, the window on the strategy is going to be five or seven years, and everything fits in that one bucket. So I think this is really good advice on how to design innovation, particularly around AI, and probably a good place to leave this conversation with Lev. And while Lev talked about the low-hanging fruit of research, Michael, one place in higher ed where I see a lot of action right now around AI is where there's, as we all know, a real pressure point for colleges and universities, and that's enrollment, and probably the student journey in general, right? The retention and engagement of students. And to help us sort that out, we're pleased to welcome Ashley Budd to the show.

AI and Stanford’s Conference Decision

Jeff Selingo:

Ashley I've known for probably about ten years now. She's a consultant and full-time marketing director at Cornell University who writes and speaks about marketing and digital innovation, mostly in the advancement and development sector. She's co-author of a new book called Mailed It, a guide to crafting emails that build relationships and get results. Ashley, welcome to Future U.

Ashley Budd:

Thank you so much.

Jeff Selingo:

When we talked earlier, I know that you were telling me about some use cases at universities that you have worked with, but also something that happened at Stanford. Can you tell us a little bit more about that?

Ashley Budd:

Yeah, I think another really great use case is being able to respond quicker and receive information and feedback from your audience in a much tighter feedback loop. So, yeah, there are examples from colleagues that have been using anonymized constituent data in a lot of cases to survey their audience and then be able to rapidly respond. And the Stanford example came about at a time when they were switching athletic conferences and wanted to really quickly understand what that meant for their alumni body and be able to report back to their leadership: what is the sentiment right now? To be able to turn around something like that overnight is really exciting, when it might have taken months and a lot of manual number crunching in the past.

AI’s Applications in Recruitment and Admissions

Jeff Selingo:

Yeah. So I want to talk specifically then about AI and the student experience in higher ed, particularly that funnel that gets students into the university, gets them through the university, and of course then keeps them as engaged alums, which I know is an area you've worked on a lot. That really seems to be the hat trick in higher ed, to use a hockey analogy, if you get all three of those goals right. So how do you see higher ed using AI to help in marketing to prospective students? It seems like a lot of the work now, as you just described, is used as a tool, right? It's used as a tool to save time by writing personalized emails and things like that, but not in transforming how we recruit and match students to institutions necessarily.

Ashley Budd:

This is really exciting. I'm excited for the college admissions process to get a major process upgrade. We're going to be able to leverage AI and predictive analytics to find students that are more likely to be successful on our campuses, and it's the same technology that can be used to improve yield. And I'm also thinking about, outside of what the campuses are doing, these outside influences that AI is going to bring to us. It's going to be extremely helpful in the college search. Not only will this technology be available to students as their own assistants, they're going to be able to feed new tools some really specific, unique criteria and be served well-informed, highly personalized recommendations. So I just see this as a major upgrade for students, as well as all of these, like you mentioned, process and productivity and effectiveness upgrades that we're going to be seeing in admissions offices themselves.

Michael Horn:

It's exciting. It sort of blows up the notion of one-size-fits-all rankings, and you get your own tailored, personalized advice. Yeah. So we talk often on the show about how complicated, frankly, the student journey can be through higher ed. So much of it is this quote-unquote hidden curriculum to many students. How might AI, in your view, help us communicate better to students to really get them connected to the experiences that are ultimately going to engage them and lead to success?

Ashley Budd:

Yeah, there's another huge benefit here. GPTs are really good at simplifying language, and I advocate for emails and websites to be written at a 7th to 9th grade reading level. And this is really hard for academia. Even when you show them data that says 39% of low-income students don't know what the word undergraduate means, they still want to use some very postgraduate, very complex language. And now GPT assistants are going to force this clarity. Either we learn to communicate better, or students will use AI assistants to translate our complexities for them.

Jeff Selingo:

Yeah, I just love this idea of trying to not only improve the feedback loop and make it much faster, but also this idea of personalizing things. If we make things faster and personalize them, I can imagine what we're able to do in higher ed. Ashley, thank you so much for being with us today on Future U.

Ashley Budd:

Thanks for having me.

Michael Horn:

And Jeff, I want to turn right back to you here for this last segment of the show, because as Ashley was talking, I had just a couple of questions that I would love your quick takes on. But before I let you opine, I have a quick tip, because Ashley mentioned writing communications at an 8th grade level, and obviously she's written her book about that. I'll recommend one other book, by Todd Rogers and Jessica Lasky-Fink out of Harvard, called Writing for Busy Readers. What was cool about it, I thought, was that they also have this website, writingforbusyreaders.com (we'll link to it in the show notes), that uses AI to tell you at what grade level a passage of your writing sits and to help you improve its clarity and simplicity if you want. I didn't use the latter function, but for my new book, Job Moves, I was constantly checking my writing to see what grade level it was at, because I wanted to keep it super simple. But with that said, Jeff, let me ask the first question, which is: you spend a lot of time in this space.

Michael Horn:

What else are you seeing in how AI is beginning to reshape that student experience, starting with the admissions office?

Jeff Selingo:

Yeah. Well, first of all, Michael, I'm glad that you used that website. I'm just kind of curious: did you end up at a 12th grade level, an 8th grade level? Where did you end up most of the time?

Michael Horn:

I was pitching around a 7th or 8th grade level for the book.

Jeff Selingo:

That's good. It's interesting. I was talking to Michael Crow, president of ASU, about this. He's written a couple of books, and he said some of his books were at like a 13th or 14th grade level. So he realizes that next time he writes, he has to bring it down a little bit for most people to really understand what's happening in higher ed. But to get back to your question around what is happening on the admissions side in particular, right now it's what you had talked about earlier: it's mostly on the transactional side. Think of the questions that prospective students and current students ask. It's really around developing chatbots for those questions and using AI to help answer them. I think that's really the ground floor now for many admissions offices, and even registrar's offices and financial aid offices, to answer those kinds of transactional questions. I hosted a webinar a few months ago, and we had Michael Bettersworth, who's the Chief Marketing Officer at Texas State Technical College. He was saying that its bot in one month engaged in more than 1,200 chats. But what was interesting is that it had a 95% success rate in answering the questions, so people found a successful answer to their questions. Only 5% of the time did they have to shift that student to a human being. And he said it saved approximately 2,700 minutes of staff time in the period they studied. So I just think it's incredible the amount of time savings you could get with that. And then I think the question is, what do you do with that staff? Does that reduce staff headcount in higher ed, or do you then redeploy that staff to do other things? Now, chatbots are great, but they don't really seem groundbreaking to me. That still seems like AI 1.0, as you pointed out earlier. But what follows, I think, is more groundbreaking in terms of how it can improve outcomes and save time.
So let me go through some other things that I see already happening, being experimented with, and probably going to be put in place, I would imagine, even this academic year. One is reducing the work in processing applications, and two is determining which students are more likely to succeed at the institution. To that end, the University of Miami is piloting AI this year to review the essays that this fall's applicants are sending in. What the university did was build models using essays from two falls ago, the fall 2022 admission cycle, and the experiment is going to target students who remained enrolled from that fall as well as those who left the university. They want to see if there's any content in those essays that can be identified to determine which students stuck at Miami. So it's really from a retention standpoint: okay, is the student who wrote this essay more likely to stick at Miami? And then Emory...

Michael Horn:

Sorry, can I interrupt you super quick there? I'm just curious, because in your book, Who Gets In and Why, the admissions essays, people think that they're like the most valuable thing, but they often count for very little. Might this make them more valuable if we actually find out that there's some connection to persistence?

Jeff Selingo:

I think it could. I also think, and I'm not quite sure how big these models are going to be, that Miami probably has a pretty good retention rate, so you're probably going to have to judge this over a couple of years. And of course, this is happening at a time, speaking of AI, when most people are worried that AI is doing too much of the work on admissions essays. So what will be interesting to me is, how are they really able to identify whether the student actually wrote the essay? I would be kind of interested to see what Miami comes up with in that study that they're doing.

Standardizing the Transcript

Jeff Selingo:

Meanwhile, at Emory, they're experimenting with AI to bring more uniformity to the transcript. And probably, Michael, as I wrote in my book, this is really the most time-consuming piece of the admissions process. Institutions, for example, recalculate the GPA, and that part has been done by machine for quite some time now, so that's not new. But at many universities, they're still going to look at course selection. They might count up the number of courses taken across disciplines, or the number of AP courses, for example. All that stuff takes time, and machines can really help do that. That's one thing Emory is looking at in terms of AI: trying to make the transcript much more uniform across the 20,000 to 25,000 high schools we have in the US alone. And then the other piece of this, I think, is the data that colleges know about students coming from high school. The thing to remember, especially at bigger, more selective institutions, is that college applicants tend to travel in clumps from high school, right? So a college is probably going to enroll a number of graduates from a particular high school, and they're probably going to do that year after year. And then they can follow those students once they get on campus. So what they could really try to figure out is, okay, what does a 3.9 high school GPA at, you know, Wyoming Valley West High School, where I went, in Pennsylvania, really mean at our college? This could actually provide some guidance to them, especially when everyone's worried about grade inflation, so that when they're looking at Michael Horn from Whitman High School in Bethesda and Jeff Selingo from Valley West High School in Plymouth, Pennsylvania, they can make those comparisons more easily with this data. And institutions are already using this.
When I was inside the admissions process at the University of Washington, application readers there used GPA comparison tools for nearly every high school in the state, because they obviously had a lot of in-state applicants, though they had some from out of state as well. The tool basically compared the average GPA of prior students who enrolled from a specific high school with their GPA after freshman year at the university. The data could then indicate, for those reviewing the applications, how grades in one high school translated into grades at the university. And again, I think this process is just much faster using AI. Those are the ways I think it's already being used, or at least being experimented with, now. But there's one other thing I wanted to mention here, Michael, because I think the real next frontier in admissions is around data and how we use it with large language models and AI models. Colleges, as we know, have vast amounts of data on enrollment and student success. So I think the next step is using AI to figure out, okay, what segments of students are more likely to enroll and persist here? Then you can customize the enrollment process with tailor-made communications and engagement. You can include personalized financial aid awards to improve yield, for example. Now, I think there may be some worry out there when people hear that: well, does that mean they're going to reject a student if they know he's definitely not going to succeed at the university? And that does worry me. But I'm wondering if this also could be turned a different way. Could you know, and I'll pick on me rather than you, Michael, that somebody like Jeff Selingo is just not going to succeed at this university, but there are so many things we like about his application, so we're going to bring him in?
But we know from day one that he may need help in these areas. So many universities now have those red, yellow, and green lights around student success, trying to be much more attuned to it once students are on campus. But what if you're attuned to student success even before students arrive on campus? Imagine what that could do. And then just one last piece: what if AI analyzed real-time labor market data to assist colleges in building new academic programs and skills-based credentials that certify learning? Just imagine again how AI can help us design these new programs much more quickly than we do now.

The AI Arms Race 

Michael Horn:

It's super interesting, Jeff. I can imagine you take those last couple of strands you mentioned and do something else, which is maybe a college could be more differentiated and say, hey, you are the right fit for us. I get the worries of, are we rejecting someone because of some bias or something? But maybe we say, hey, actually, with your application, we know these five other colleges would be great places for you to look. Maybe we're helping them out, as long as we're not undermatching them. But it gets very interesting as we go in. And it relates to something else I'm curious about as I hear you wax poetic about these possibilities, which is whether you think there's going to be a situation of haves and have-nots in these areas. We heard what Stanford did with AI. You're just talking about these correlations, and maybe causation at some point, where we can figure out, gee, this student took this experience and didn't do well, and therefore they're not going to be a fit here. How cool is that? We heard what ASU is doing. How many institutions, Jeff, are going to be able to keep up? Is there going to be an arms race? And if so, what are the implications for the have-nots, in your view?

Jeff Selingo:

Yeah. And I think the short answer is yes. You know, Michael, I think I told you this last year when I took a separate trip to the University of Michigan, not when we were there for the campus tour. I got to tour its relatively new Center for Academic Innovation with its executive director, whom many of our listeners probably know, James DeVaney. It's a 47,000-square-foot facility that's housed in the original Borders bookstore, by the way, the defunct chain that was founded in Ann Arbor. They actually found a way to repurpose it, from a bookstore to academic innovation. The center is just absolutely gorgeous in terms of its design, and it's really the university's hub for building new online programs, new credentials, and tech tools that are then licensed to other universities. It's really, as I say, its master control for its efforts around AR and VR. It has these huge studios to do AR and VR for college courses. And I bring that up because as I walked through the center with James, I realized just how much further ahead Michigan was on academic technology, because it had the resources, it had the money to invest. And now the same is true with AI, right? Whether it's Stanford or Michigan or Cornell or ASU, these are all universities with either the connections or the money to make these investments in AI. And when you were asking me about this, I was thinking, okay, what is the solution to this problem? Because I don't think we want a growing divide. We already have a growing divide on so many fronts in higher ed. Do we want to add yet another one to it? And I was thinking about the history of the Internet, of the development of the Internet in higher ed.
And when the Internet gained popularity in the late 1990s, colleges and universities were really among the first entities to outgrow its bandwidth limitations, because, you remember, researchers were sharing a lot of data and other things over the very early stages of the Internet at that time. So what was created was something later called Internet2. It received money and support from both the National Science Foundation, the NSF, and, at that time, from MCI, the former big telecommunications firm we haven't heard from in a while. If you remember from being in DC, the original arena that was built downtown was the MCI Center, then the Verizon Center, and now, I guess, Capital One; it's gone through many, many name changes. But what I found fascinating about that, Michael, is that the NSF and MCI formed a great public-private partnership, and eventually this came under the modern-day EDUCAUSE, which, of course, is the big IT association in higher ed. And I was starting to wonder, could the same thing happen today? Think about it this way: there's a lot of discussion right now around regulation of AI companies. What if part of a bill, and of course this is assuming that we have a functional Congress, so let's put that aside for a second, but what if part of a bill involved a little horse trading? We know AI companies don't want regulation. But maybe you say to the AI companies, okay, we'll do this and this, but in turn you have to build these sandboxes, and maybe make these sandboxes equitable around the country, so there's some geography around them, and maybe you allow different types of institutions, so HBCUs and regional publics and community colleges, to be part of that.
And in some ways, it's similar to what NSF is doing around these regional centers of excellence under the CHIPS Act, where, to qualify for that money, you have to bring in community groups, you have to bring in corporations, you have to bring in higher education institutions. There seem to be models out there that we could copy, because I think if we continue down this road, where it's the ASUs and Stanfords and the Cornells and the Michigans of the world, I do fear that we're going to basically have the elites and the big publics and then kind of everybody else. And as we know, the everybody else is where most students go to college, and those institutions serve most communities in the US. If they're left on the sidelines and not able to be part of this, it really does worry me.

Transactional or Transformational?

Michael Horn:

Yeah. Well, thinking out loud for a second, with all the dysfunction in Congress, I think AI is actually a place where you might see some bipartisan agreement. And if you do what you just described, where there's something in it for local constituents within Congress, I think you could see something. And maybe it's a bulwark against what I worry about with AI regulation, which is sort of freezing the incumbents in place as the leaders at the expense of some folks with fewer resources. But last question: as we close out the second episode in this two-part series on AI and higher ed, let's zoom back out again. We've gone deep over the last many minutes. Bigger picture, Jeff: is this a transactional technology, or is this going to be a transformational technology? What do you think? Is this the desktop computer, which simply shifted how higher ed was transacted, but nothing transformational? Or is this the Internet, where you actually saw a transformation of real models of education itself in many, many ways?

Jeff Selingo:

Yeah, Michael, I've been thinking a lot about this because of our last couple of guests. We had Cal Newport on in that first episode last week, and I think he saw it as potentially transformational, but right now, as he said, it's more iterative and probably more transactional in how it's being used. And then we have Lev, who was all in on its transformative power. And I'm going to say I'm more of an early adopter. I must say that I tend to get pretty excited about technology in my household. My wife, not so much, so she tends to pull me back from jumping all in on new technology. So I'll say I'm probably more in the Lev camp when it comes to transformation. You know, Michael, a lot has been written about this period that we're living in right now, this AI period, and how it compares to the development of the Internet, something we have talked about on these last couple of shows. And a lot has been written about how this period compares to the industrial revolution. Let's stay with that comparison. Think about the steam power that was developed during the industrial revolution. If all we had thought was, okay, it's just going to change the way the process happens, so we had a process of making a product and now we're just going to put steam behind it, and it's just going to change how that product is made but not really transform how it's made, we really wouldn't be where we are today. There's this quote from Deirdre McCloskey, who's an economist: the industrial revolution was neither the age of steam nor the age of cotton, nor the age of iron. It was the age of progress. We really thought of it in a very different way.
And I still think that we're thinking too much about AI as being transactional, and not thinking about how it can change not only how we do things, but how we educate students, how we design institutions, how we do our work. I think that's where Cal was going. I think he sees that; he just thinks that we're not there yet. And I really do think the same thing is happening now. Perhaps I'm overstating it, but I think one of the reasons there is so much political unrest right now around the world is that we all realize that the world of work, how we sustain ourselves, is really shifting in ways we don't yet understand. None of us were alive for the industrial revolution, but it probably feels the same way now that it felt back then to those people who saw that the world of work was changing in ways they didn't understand. And if you go back to the work of the Nobel Prize winner Claudia Goldin, we know there are these periods of huge social unrest after massive technology changes, especially after the industrial revolution. And as she and Lawrence Katz have written, those times of social unrest are usually followed by prosperity, and that prosperity is usually driven by advances in education. If you take the dystopian view of this, we may never reach that age of prosperity. But I, again, tend to be an optimist. So I think that where higher ed should be thinking right now is: what type of education are we going to need in the future? There are going to be a lot of changes in the coming years, not all of them good. What parts of degreed jobs are going to change? What types of jobs are going to go away because of AI? And as Cal Newport said, higher ed isn't always great about predicting the future, but where I think it can play a role is being ready to move fast when that future is created.
And there will be a new future after this very disruptive period that we're going to live through for the next couple of years.

Michael Horn:

Great points, all. I will say, here's to an age of progress, if we use that mantra. I'd also recommend the economist Carlota Perez's work on a lot of this. But what you said echoes a lot of my thinking. The reality, I think, is that AI is going to be used both as a sustaining innovation, and we may see it as a disruptive innovation where, like the Internet, it's the enabler, in effect, of radically different business models of education. That's certainly what our friend Paul LeBlanc, former Southern New Hampshire University president, now CEO of a new venture that I'm not going to name, because I think he's hopefully going to rebrand it, is hoping. And I think Cal is also right that, at least as far as higher ed is concerned, we're still searching for product-market fit. I spend a lot of time in K-12 and higher ed, and I see a ton of entrepreneurial activity right now, Jeff, in K-12 around AI. I also see, frankly, a lot of entrepreneurial activity around AI in workforce and upskilling. In what we'd call higher ed, I don't see nearly as much entrepreneurial activity around AI. It is striking how different it is. I won't get into why I think that is for now, but just to say that the difference seems really, really big to me. But let's leave it there. We've gone long enough, but I think the longer show was worth it because we got a good deep dive on this topic. So a thank you to CollegeVine for sponsoring this miniseries on AI and higher education that allowed us to venture into these territories. A thank you, of course, to our guests today, Lev Gonick and Ashley Budd. And a thank you, of course, to all of you, our listeners. And a reminder to follow Future U Podcast, JSelingo, and Michael B. Horn on your favorite social media channels. And with that, we'll see you next time on Future U.
