Searching for Fit: The Impacts of AI in Higher Ed

Tuesday, September 17, 2024 - On this episode, Jeff and Michael tackle the question everyone is asking: how will AI transform higher ed? For help in finding the answer, they turn to bestselling author and professor of computer science at Georgetown University, Cal Newport. They discuss AI’s academic and operational implications, its ethical and practical considerations, and the stages and timeline over which we can expect this technological transformation to unfold. This episode is made with support from CollegeVine.

Links We Shared

“Bad Bets,” Lightcast

“Good Jobs in Bad Times,” Future U

Key Moments

(4:06) Contextualizing AI in Higher Ed History

(7:03) Factors Delaying Implementation

(8:50) How AI is Changing Knowledge Work

(11:19) Should we Be Teaching about AI?

(18:45) Educating Students on AI’s Ethical Implications

(21:51) Differential Effects on Coding and Writing

(23:46) How AI Could Impact Higher Ed Inside and Outside the Classroom

(29:21) The Development of AI We’re Worried About

(33:12) Parallels with the Days of the Early Internet

(40:56) AI’s Impacts on Writing

(43:15) Adaptations Required to Integrate AI in Higher Ed

Transcript

Cal Newport

When an AI-based tool can empty my inbox for me, right? I think that's the functional Turing test that matters when we're thinking about transformation in the knowledge work sphere. A language model, by itself, can't do this. A language model can digest an email; it can, in some sense, capture semantically what's being said in that email. So it could resummarize it. It could put it in different languages or words. It can pull out the main points. But to act on an email requires things like forward-looking planning, predicting the future. It requires you to have a functional model of other people and other people's minds that's more explicit than what's just implicitly captured in a language model.

Jeff Selingo

That was New York Times bestselling author and computer science professor Cal Newport, who writes extensively about the intersection of technology and work. And today, he joins us to talk about an issue that's on all of our minds: artificial intelligence. That's on this episode of Future U.

Sponsor

This episode is sponsored by CollegeVine. CollegeVine has created higher ed's first autonomous AI agents to modernize all aspects of the student recruitment process. You can find out more about what they're doing at www.collegevine.com/recruit. Subscribe to Future U wherever you get your podcasts, and if you enjoy the show, send it along to a friend so others can discover the conversations we're having about higher education.

Michael Horn

I'm Michael Horn.

Jeff Selingo

And I'm Jeff Selingo. 

So Michael, we've obviously covered the topic of AI a few times on the show, but the subject and its impact on higher ed is so big and so important that we're returning to it in this episode and the next as we lead off season eight of Future U. These episodes are sponsored exclusively by CollegeVine, which, as you'll hear more about during the break, is bringing AI to the college recruitment process in a new way. Now, in this first of two episodes on AI, I view it really as helping us set the larger picture. I think for many who work in higher ed, probably, heck, any of us, we're kind of heads down in our work, and when we look up to see the larger context of AI, it's probably a bit dizzying and a lot confusing. So where does AI fit into the landscape of other technological revolutions? What's its impact, now and in the future, going to be on work? And what are the implications for higher ed, broadly speaking?

Michael Horn

Yeah, Jeff, and to help us answer those questions and put all of this into broader perspective, we're thrilled to welcome Cal Newport to the show. As you mentioned at the top, Jeff, Cal is a computer scientist and a bestselling author. He's a professor at Georgetown University. He writes often about the intersections of technology, work, and the quest to find depth in our distracted world. And he wrote, I will tell you, Jeff, one of the best early pieces in the New Yorker helping people understand just how ChatGPT really worked, with simple, scaled-down examples to help us all imagine its actual capabilities. With all that as background, you can see why we thought Cal was the perfect guest to help us out here. So Cal, welcome to Future U.

Cal Newport

Yeah, thanks for having me. I've been looking forward to this, to actually talk shop about the world of universities. So often I'm talking about sort of corporate knowledge work, so it's like, yes, we can get into my bread and butter here.

Contextualizing AI in Higher Ed History 

Michael Horn

Well, it's like doubling up on your expertise, I think. But let's start with where you've been writing some for the New Yorker and so forth, which is really to put the development of AI, and perhaps large language models specifically, into some historical context for those in higher ed. If we think of the big developments that have changed higher ed, we could go all the way back, of course, to the printing press. But then there's the Land Grant Act in the United States, the GI Bill. And if we're thinking of technology, of course, the internet. Where does AI fit into all of that?

Cal Newport

Well, we don't know yet. And I think what's important is that we recognize that two things are true at the same time when it comes to AI. One, we recognize that the new generation of generative AI models is really doing something amazing. The leaps they took in terms of semantic comprehension and semantically coherent text generation were major leaps, and I think everyone accepts that. That's why it caught so much attention. On the other hand, while we know it's amazing, we don't yet know what the impacts are going to be, and we don't know if, in the short term, those are going to be amazing or not. In other words, we can be amazed by the technology and yet be pretty patient about grounding our reactions in the actual developments on the ground, which I think we are now really beginning to realize are slower and less dramatic in the short term than people were prognosticating. AI is probably going to have to go through the same relatively long product cycle as other transformative technologies before it really has its high-impact hits. There was this thought that maybe ChatGPT, by itself, would change the world, but no, the picture is probably going to be more like it was with the early web. It still took a decade for that transformative technology to be filtered through the experimentation of countless companies and failures and products and fits and adjustments until it really assumed that dominant role in our lives. So I'm prefacing everything I'm going to say about AI on that reality: I'm trying to ground my reactions in what's happening, not what could happen. And so right now, we don't fully know what the scope of impact is going to be. But we can lay out the scale here, right? On the far skeptic side: maybe it'll be even less than the internet. Maybe it'll be like word processors, or when we started using word processors at college. And then we have the other side, which would be more like the Ethan Mollick position, which is that college as we know it is done. I think today, the day we're recording, he was reemphasizing his homework apocalypse frame in his newsletter: how can we ever give an assignment to anyone ever again? Everything has changed. So we have, I think, a pretty big scale to calibrate this on, and I'm probably somewhere in the middle on that now.

Factors Delaying Implementation

Michael Horn

Gotcha. Well, before we go into prognostication for you personally, I'm just curious: is the fact that it's rolling out, in terms of ultimate implications, slower than maybe the hype suggested it would, in your view, a technology limitation? Or is it really the organizations, the business models, the people themselves and how they interact with or incorporate the technology into their workflows?

Cal Newport

I think it's actually product fit, right? So I think we have the underlying technology, what happens when you make transformer-based language models this big, and that has all this potential. But there is going to be a non-trivial cycle required to figure out how to actually best integrate this into people's lives. The current form factor for this technology is a chat box, right? You go to a website and there's a chat box you can talk to. This technology, accessible through that form factor, is not having the disruption that some people thought. So in terms of what's actually going to unlock bigger disruption, I don't think it's necessarily that organizational barriers are holding it back from the impact it could have. I actually think it's the product form factor. Figuring out the right form factor, how we integrate this properly into tools in a way that's going to have a big impact, just takes experimentation. That's going to take a lot of VC funding. It's going to take a lot of failed companies. We have to have our equivalent of the web's early days, our Webvan, right? We're going to have to have our AOL-Time Warner merger that doesn't work out. You know, some big AI company is going to go up in flames after it burns through $100 billion in capital. I just think it's going to take a few more years than we thought to start to get the real disruption.

How AI is Changing Knowledge Work

Michael Horn

That's a useful reminder and also history lesson for those tuning in. Before we get into higher ed itself, let's look at the knowledge economy more generally, because obviously you think about the use of AI in lots of different industries. I'm curious for you to prognosticate. You said you're sort of in between the two poles, right, the skeptics on the one hand, the Ethan Mollicks on the other. What's your best guess or thinking, at the moment, on how, 15 years from now, it will have changed knowledge work?

Cal Newport

I mean, the updated Turing test I care about is when an AI-based tool can empty my inbox for me, right? I think that's the functional Turing test that matters when we're thinking about transformation in the knowledge work sphere. In order to do that, there are a lot of things AI is going to have to be able to do. I wrote a New Yorker piece about this last spring. A language model, by itself, can't do this. A language model can digest an email. It can, in some sense, capture semantically what's being said in that email. So it could resummarize it. It could put it in different languages or words. It can pull out the main points. But to act on an email requires things like forward-looking planning, predicting the future. It requires you to have a functional model of other people and other people's minds that's more explicit than what's just implicitly captured in language models. And so what that's going to require, and this is what I wrote about in the New Yorker, is multimodal systems, where you have a language model to deal with the language, you have a theory-of-mind model, and you have a future-prediction model, kind of based on the way we play games with computers now, to try to simulate the impact of various decisions and find the right one. All of these are going to work together, and that, I think, is going to be pretty interesting. Because if you have an AI that can answer an email, you now have a functional assistant that is digital, and that, I think, is transformative. Of course, a big part of my writing about digital knowledge work is the degree to which we have added too many total things, and too much of a diversity of things, onto the plate of knowledge workers, and the human brain is suffering. So I really am closely following those types of movements: how do we get closer to a functional AI assistant? The language model piece was a big problem, and they solved it. So now we have to focus on the other pieces, which I actually think are easier. It's just a matter of people actually starting to do that work.
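To make the multi-model architecture Newport describes concrete, here is a minimal Python sketch. Every class and function name is hypothetical, invented for illustration; the point is only the composition he outlines: a language model for semantics, a theory-of-mind model for intent, and a planning model that chooses an action.

```python
# A minimal sketch of the multi-model architecture Newport describes:
# a language model handles the text, while separate theory-of-mind and
# planning components decide what to actually *do* with an email.
# Every name here is hypothetical; this is not a real system or library.

from dataclasses import dataclass


@dataclass
class Email:
    sender: str
    body: str


class LanguageModel:
    """Captures semantics: summarize, rephrase, pull out main points."""

    def summarize(self, email: Email) -> str:
        return f"Summary of message from {email.sender}"  # stub


class TheoryOfMindModel:
    """An explicit model of what the sender knows, wants, and expects."""

    def infer_intent(self, summary: str) -> str:
        return "sender expects a scheduling decision"  # stub


class PlanningModel:
    """Forward-looking: scores simulated outcomes of candidate actions."""

    def best_action(self, intent: str) -> str:
        candidates = ["reply_accept", "reply_decline", "delegate", "archive"]
        return candidates[0]  # stub: a real system would simulate futures


def handle_email(email: Email) -> str:
    """Compose the three components to act on an email, not just parse it."""
    summary = LanguageModel().summarize(email)
    intent = TheoryOfMindModel().infer_intent(summary)
    return PlanningModel().best_action(intent)


print(handle_email(Email("dean@example.edu", "Can we move Friday's meeting?")))
```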

Should we Be Teaching about AI?

Jeff Selingo

Well, Cal, since my kids make fun of me for my inbox when they see the number on my phone, which I won't even say, how embarrassing it is, that idea of emptying my inbox, I'm looking forward to that day, that's for sure. So I want to talk a little bit more about how these changes might impact, or should impact, higher ed. And let's start with a question I'm getting a lot these days from college leaders: should we be explicitly teaching students about AI?

Cal Newport

Yeah, so my take on this, and again, this gets me pushback, is: I say not yet, because I don't think this first-generation form factor is going to be the way, five years from now, that we're interacting with these tools. I'm in this interesting place in this debate where maybe it's not on the scale, it's orthogonal to it. I agree, in some sense, with the Ethan Mollicks of the world that there might be huge impacts coming. We've made a leap here that enables big impacts, but I disagree on the time scale and what's actually required for that to happen. I think there's a more complicated path to get to those impacts than simply, let's make the parameter count larger, give OpenAI another year, let's get to GPT-5.0, and suddenly everything's going to be solved by a language model. I think that path is more complicated. By the way, a lot of computer scientists feel the same way. It's why you tend to see less of the "ChatGPT, by itself, is going to change the world" from them. So, if you started teaching right now, and I have been thinking about this, I'm on Georgetown's university-wide task force on the pedagogical implications of AI, I've been speaking about it, and I'm working on a big piece on it right now. If you started teaching students tomorrow prompt crafting, prompt engineering relevant to the current generation of chat models, I just don't think that's going to be the relevant skill for using AI two years from now. So I have a "let's wait a little bit" position when it comes to training. On the other side, though, we can't avoid the fact that these tools are already intersecting with academia, that they are being deployed in certain instances in ways we can't ignore. So we're in this interesting situation where we have to react to it right now. We can't ignore it, but we're not quite ready yet to say, let's reorient around these tools and make sure that we're teaching you well how to use them, because I don't think we know what using these tools well looks like. For example, I think the chat interface is going to be seen as just an early, transitional step toward the way we ultimately use these things. So we're in this weird place. We cannot ignore it as universities, but we're not ready yet to say, great, let's codify and teach a new generation how to use these things, because what these things are is changing too rapidly.

Jeff Selingo

Okay, so we're in this weird place, maybe not yet. But as you know, colleges, maybe not Georgetown, but a lot of colleges, have become very transactional in nature, right? Students are voting with their feet around majors. They're majoring in things that they see particularly lead to jobs. A lot of colleges are talking about teaching students skills rather than broadly educating them; there's a lot more talk about skills than majors on a lot of college campuses these days. So are there specific skills that colleges should be teaching students so they can better leverage AI in their work, or when they're ready to? How can we think about this? You've written that AI might not totally kill jobs, like some have thought. But are there other implications for careers that colleges should be aware of but perhaps aren't thinking about? Because again, maybe beyond the top 25 or top 50 most selective colleges in the US, most colleges and a lot of students are looking at the college experience as very transactional, and so they want to make sure they're getting the skills they need to get a job on the other side.

Cal Newport

So my theory there is, you have to ground that instruction in the ground truth of the actual commercial world, but then be willing to move rather quickly. In other words, the university shouldn't get out ahead and say, we could imagine it would be useful in the future, in this type of job, to be good at ChatGPT, so we're going to teach you how to do it. It needs to actually see how the tool is being used in that job. Okay, now we can move fast and go back and actually teach about it.

Jeff Selingo

I’m laughing because, as you know, colleges don't necessarily move fast, right? 

Cal Newport

We don't. But okay, this is very relevant in computer science, right? I'm a computer scientist. So there's this sense that these models are good at producing code. So how do we teach this? Or how should we be teaching this? What's relevant? And really the right answer is: how are companies right now, like this month, using language models in their development? That's what we should be teaching, not a prediction that, in the future, we think most code will be produced by these models, and so in the intro courses we should have students using these models to produce the code. No, we should ask: what are they actually doing? Are they heavily making use of Microsoft Copilot? How are they making use of Microsoft Copilot? We should teach them how to do exactly that. So I'm a big believer in reacting to the grounded realities of AI right now, as opposed to potentials or predictions. I think this is actually the biggest rift we're having in sort of small-scale policy making: whether we're reacting to grounded realities. Okay, this tool is now being used in this way here. What do we want to do about that? How do we prepare for it? How do we react to it? Versus: I could imagine this tool being used in this way; now let's react to that hypothetical. So much of the discussion, I think, has been reacting to the hypotheticals, and even some of the grounding we have to be careful with. In that same newsletter I was reading this morning, for example, and I respect Ethan Mollick's work a lot, by the way, I really liked his book, and we're at the same publisher, there was a throwaway line where he said, look, these tools are now used basically universally among students, and he cites a survey. But you look at the survey, and he was saying, yeah, 82% of students are using these. Well, actually the survey said 18% of students had never used it. So he's right in the sense that 82% of the students surveyed had used ChatGPT before. But really, the bar you care about is "use once a week or more," and that was actually pretty small. And it matters, right? It's not true right now that college students are universally using this and that this is how they're handling all their assignments. They might be soon, and we should keep an eye on it. But it's so tempting to jump past the grounding to the hypothetical that's interesting and react to that, because that's more interesting than when I talk to developers who are using Microsoft Copilot as they program. It's like, look, the code's a little sloppy, but honestly, here's the ground truth of how, in the first six months of this product, programmers are using it. They're using it as, like, advanced documentation. Instead of looking through a manual or Googling, they ask, what's the function name for what I'm trying to do here? And Copilot is great: oh, I know what you mean, it's this, and here are the parameters. Or, can you write me a little bit of code that uses this, so they can see how to call it, and then they write their own code. It's not as exciting as "programming is dead, writing is dead." So that's been my big advocacy right now: we have to react to grounded realities, because there are too many hypotheticals possible, and we'll get drowned in them.
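To make the "advanced documentation" pattern Newport describes concrete, here is a small Python illustration. The prompt in the comments is an invented example of what a programmer might ask an assistant, not a real Copilot transcript; the call it points to, datetime.fromisoformat, is part of the standard library, and the programmer then writes their own code around it.

```python
# The pattern Newport describes: instead of paging through a manual or
# Googling, the programmer asks the assistant, "what's the function name
# for what I'm trying to do?", e.g.:
#
#   prompt: "parse an ISO 8601 timestamp string in Python"
#
# A typical answer names the standard-library call, its parameters, and
# a minimal usage snippet:

from datetime import datetime

# assistant-suggested call: datetime.fromisoformat(date_string)
ts = datetime.fromisoformat("2024-09-17T10:30:00")

# ...and then the programmer writes their own code around it:
print(f"Episode released on {ts:%A, %B %d, %Y}")
```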

Educating Students on AI’s Ethical Implications 

Jeff Selingo

Yeah. Okay, so Cal, that's the technology side of this. But how about the other side: the legal, ethical, social, and moral ramifications, and the critical thinking skills that will be required to navigate this world? It's still hypothetical, I get it. But how should higher education be talking to and educating students about those implications? How should we be thinking about that side of it?

Cal Newport

Yeah, I think writing is something I've been looking into, and I think this is a good case study for these issues. How do we think about, for example, text produced by a model from your prompting, versus text you produced from scratch, versus text you got from somewhere else? Where does that actually fall, ontologically, on the authorship spectrum? I think that's an interesting question. Which, by the way, is how most students who are using this to write right now operate: for non-trivial assignments, for the most part, they aren't producing and submitting raw model text. They're interacting with the model, which produces bad text, and then they're editing it into good text. So how do we feel about that? That, for example, is actually a new pedagogical activity that didn't exist before. It's easier to edit bad text, just from a cognitive functioning perspective, than to face the blank page. So how do we think about that? Those are the big questions. Now, I think we have some historical cases here that are relevant. I think by far the biggest comparable disruption really was the internet, plus a usable way of accessing the internet, basically Google. We have, just through familiarity, memory-holed a little bit how disruptive that was. I was a student back when that happened, and a lot of changes came with it. You could suddenly get access to text. Before, we didn't really have to worry as much about plagiarism in writing, because it was like, you're copying text out of a textbook or someone's professional academic book that they turned their dissertation into; no one's going to believe the sophomore wrote that. But now you could have access to lots and lots of text. It was a big crisis in computer science, because you could Google code. It really became a whole thing: professors got good at saying, you clearly copied and pasted this code in. We never had to worry about that before, when I was studying computer science, and man, I wish that were still the case now that I'm a computer science professor. The exercises in the back of the book are how you practice. Professors would hone the problems in the back of the book over, like, a generation. These things had gone through years and years of polishing and adjustment, and they were fantastic problems. And then you get one year into the Google universe, and you can just find all the sample solutions. It was the end of being able to use problems from the back of the book. It changed the way we talked about plagiarism. It introduced the notion of plagiarism to computer science, which didn't exist before. It changed the way we had to create assignments. This seems comparable to me. I think this is like Google's arrival at the university, take two, at least at this early stage.

Differential Effects on Coding and Writing

Jeff Selingo

But your point is that computer scientists are not worse off today for that, right? They're better off for it in some ways, because of that advancement. And is that kind of the same thing you potentially see for writing, for example?

Cal Newport

Yeah, that'll be interesting to see. Okay, so let's use those two cases. Google made computer scientists better, right? That's for sure, because what actually happened was a Stack Overflow effect. Stack Overflow is an online bulletin board where people post questions about code and other people answer them. The rhythm of being a computer programmer became: you Google whatever, how do I write this function, how do I do whatever, and it finds a Stack Overflow page where someone has given that code. That became how computer scientists learn new programming languages. That's how computer scientists learned how to use new features or libraries. It really made things more efficient. On the writing side, Google introduced new problems as well, because you could now plagiarize, et cetera. I don't think it made people better writers. I think it made research easier, for sure, but it didn't make people better writers. And then the other secondary effects of the internet writ large, in particular the distractions, the algorithmic content curation, probably made people worse writers. So it's interesting: you had these two parallel academic fields, let's say computer science and writing, both heavily disrupted by the internet. With one, we ended up more flexible and better because of this technology. With the other, no one would say people are faster writers or better writers, or that the quality of writing has improved. And I think that's very telling, because they started from a similar place, and they're sort of similar fields, producing textual stuff from scratch. And one, at least in my assessment, had a much different long-distance impact than the other.

How AI Could Impact Higher Ed Inside and Outside the Classroom

Michael Horn

So let's maybe move beyond the classroom itself. And Cal, I'd love a first-person reflection on what's going on at Georgetown University as a proxy. How is AI affecting other work streams at the university that you've observed? And are you surprised by how it has or hasn't been applied on campus so far?

Cal Newport

I would say, as far as I know, it's not having a big out-of-classroom impact. I don't think, as an organization, we're using it heavily outside the classroom. Inside the classroom, something Georgetown is doing, which I think is smart, and this is what the task force I was on did, is they said: we're going to make a bunch of money available if you're a professor and you want to mess around with this, if you want to experiment, like, I want to use AI in some interesting way in my classroom. We'll give you money to do that. Maybe you need to hire someone to build a model, or you want some summer salary so you can work out something new. So it's definitely a "let's get ideas" moment; this is a time to get ideas. So I haven't seen a big out-of-classroom impact. The in-classroom impact is there, and it depends on what you're teaching. I teach, for example, discrete mathematics for the computer science major. It's sort of an introductory discrete mathematics course, and that's pretty AI-solvable, so it is changing the way I'm thinking about it. I don't have evidence that most of my students are doing this with AI. These are Georgetown students; they want to learn it, and they follow rules, or whatever. But it's certainly something I have to think a lot about. For a while post-pandemic I was like, hey, this is great. Why do we all have to gather to do exams? The way we did it during the pandemic is really good, because you can do them from the library or at home, and we'll all log in to Zoom together to hear questions. It's much easier to deal with accommodations, and with someone traveling at the end of the semester. And then I realized, post-ChatGPT: oh, I can't do that anymore. No, we've got to be in the same room, because I just put all my exam questions into ChatGPT, and, uh oh, it did a little bit too well. So I think there are some classroom impacts, but not, at least that I know of, a ton of back-office impacts yet.

Jeff Selingo

Well, okay, so not yet. But we know, for example, that, as a faculty member, there are all these things you have to fill out, and grant work, and there's just a lot of work as a faculty member that kind of takes away from the classroom and research. And I would imagine there's a lot of administrative work happening on campus outside the faculty, of course. Most colleges and universities are having trouble hiring people right now to do a lot of that work. I'm not expecting you to know everything about how a university operates, but could you imagine that there's stuff we could leverage AI for that's potentially being overlooked right now? Or, again, is that something we just don't quite know yet?

Cal Newport

I think it's two years out. Here's what I'm waiting for. I don't think the web-based chat interface is the right form factor. What I'm waiting for, and what's being worked on, is probably going to be a voice interface. We talk at a much faster rate than we can type; that's a much richer information stream. And critically, and this is the point I've been arguing, which I don't think has been picked up enough: where I think language models are really going to be powerful is when they are interacting with other types of systems as well. What a language model can do is take what you said and translate it into a very specific language that another program can understand. That's where they're really powerful. That's what's happening when you see these plugins, for example, where ChatGPT can work with your Excel spreadsheet or book a flight for you or whatever. What's really happening is it's taking what you said, translating it into a very stylized language, which is then passed on to another program that doesn't have to understand human language. That program gets a rigorously formatted request, it can act on it, and then the model can explain back what happened. I think voice interface, multi-application, is where it's going. The way I imagine it, my use case in my mind, is: okay, we're working on a syllabus here. Create a new Canvas page. I want it to be similar to that page over there. Okay, go copy over the page from the last time I offered this course. All right, I want you to adjust everywhere it says…and I'm just voice interfacing, and this is going across multiple applications. That's where I think we're going to begin to see the first big productivity booms. It's exactly what they're working on, but I think it's like a year out for good prototypes and two years out until it's ubiquitous. We're two years out from a form factor, which, by the way, is partially why I say there's not a lot to teach people about this right now. The technologies that have been most disruptive in the past 30 years have required, on average, like, 17 seconds to learn. Email: I wrote a whole book about email. Email was incredibly disruptive. It changed the entire rhythm of how work unfolded, in a way that actually had a lot of negative externalities. It spread like wildfire because it was incredibly intuitive. I put the address I want to talk to where it says "to," and then I write what I want to say, and then I hit send. I didn't have to be trained on it. Google is the same thing. It's so intuitive, I don't need to be trained how to use Google, and it really had a big impact. That's probably where the high-impact tools are going to be with AI: you're not going to need an intro computer science class to use them. The reason they're going to be so impactful is that it's like using Google or email. It takes you nine seconds, and you get it.
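To make the plugin pattern Cal describes concrete, here is a minimal Python sketch under stated assumptions: the "language model" is stubbed out with one hard-coded output, and the JSON command format and the Canvas-flavored names are invented for illustration, not a real plugin API. The point is the division of labor: the model turns free-form speech into a rigorously formatted request, and ordinary application code acts on it without having to understand human language.

```python
# Hypothetical sketch of the plugin pattern: natural language in, a
# rigorously formatted request out, which a plain program can execute.

import json


def language_model_translate(utterance: str) -> str:
    """Stand-in for the LLM: turn speech into a structured command.

    A real model would generate this JSON; we hard-code one plausible output.
    """
    return json.dumps({
        "action": "copy_page",
        "source_course": "DISCRETE-MATH-2023",  # invented course IDs
        "target_course": "DISCRETE-MATH-2024",
        "page": "syllabus",
    })


def canvas_execute(command_json: str) -> str:
    """Ordinary application code: no language understanding required."""
    cmd = json.loads(command_json)
    if cmd["action"] == "copy_page":
        return (f"Copied '{cmd['page']}' from {cmd['source_course']} "
                f"to {cmd['target_course']}")
    raise ValueError(f"unknown action: {cmd['action']}")


spoken = "Copy over the syllabus page from the last time I offered this course."
print(canvas_execute(language_model_translate(spoken)))
```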

The Development of AI We’re Worried About 

Jeff Selingo

It's a great, great analogy. So as we get out of here, Cal, what haven't we talked about? What's the question we should have asked you that we didn't?

Cal Newport

So I'm very curious, in the back of my mind, about something that's not relevant now, and most people really aren't thinking about it, except for the sort of P-doomers in Silicon Valley. I'm still fascinated by the development of the AI that we're worried about. Instead of trying to define consciousness, I want to just use the subjective feeling of: this thing makes me uncomfortable; maybe I'm a little bit nervous about turning it off. Somehow it's hitting some sort of note, and that's still kind of lingering out there. No one's really working on it; it's not an interesting technological use case. And, as I wrote in an article a couple of years ago, it's not something that a language model getting big enough will create. That's not an architecture that creates something like this. But I've just been fascinated by it, and so I've been wanting to write about it. What would it take, if you wanted to build an AI that really felt alive? What would you do? What are the pieces we're missing? What would that research program look like? I have not actually found someone who's thinking about that, which I think is a good sign, but it's an interesting issue, because I also have this sense that it might be more about bringing the right parts together than some sort of major technological breakthrough in size or magnitude, like needing 10 times the computational power. I have this suspicion that the right collection of models connected in the right way could create something we're uncomfortable with sooner than we believe. I think it's good we're not talking about that now, because it's not the relevant issue, but it's still an interesting issue out there. I'm happy that I'm having a hard time writing about it, because the opposite, I think, would be a problem: if I could find lots of people working on this and making a lot of progress. But it's a really interesting point, and it's something I'm thinking about in the back of my head.

Jeff Selingo

Well, Cal, I think this is one of those moments where we're going to say we're going to have you back on in two years to see who was right on this stuff. I covered the internet in 1995 as a young reporter on the technology beat at the Arizona Republic, and when I look back on those articles and the predictions people were making on all fronts, the people who said we were going to move too fast, the people who said we were going to move too slow, it was just fascinating how we covered things that were moving the way AI is now. So Cal, thanks for joining us on Future U, and we'll be right back.

We've talked a lot about how AI will transform higher ed learning and development. But what about administrative work, like recruiting new students? For our break, I'm bringing in a new guest to the show.

[Phone ringing] 

Hi, this is Jeff. 

Sarah from CollegeVine

Hi, Jeff, it's great to be on the show. I'm Sarah. I and over 60 other AI recruiters work with CollegeVine's partners. We pair one-to-one with prospective students to finally personalize outreach at scale.

Jeff Selingo

Well, Sarah, that's fascinating. But I think if I were a student talking to you, I might be pretty difficult. So how would you respond if I asked what the ROI on a communications degree might be?

Sarah from CollegeVine

That's a great question, Jeff. The ROI on a communications degree can vary based on several factors, like the industry you enter, location, and how you leverage networking opportunities. At Ithaca College, where you studied, the strong alumni network and practical experience provided through internships can significantly enhance a graduate's career prospects. This hands-on approach helps in maximizing the return on investment for their degree. 

Jeff Selingo

Wow, that's pretty impressive. So where can people go to learn more about you and other AI recruiters?

Sarah from CollegeVine

You can go to www.collegevine.com/recruit. That's Vine with a V, for Victor. Thanks, Jeff.

Jeff Selingo

Thanks, Sarah. 

Sarah from CollegeVine

You're welcome, Jeff. Take care.

Parallels with the Days of the Early Internet

Michael Horn

Welcome back to Future U after that deep dive with Cal Newport. And better than asking ChatGPT about all these things, dare I say, Jeff, was asking Cal for his perspective on them. I really appreciated what I would call Cal's essentially third way through all these questions. He helps us understand both sides of what is sometimes a polarized conversation about the current and future impact of AI, and then, Jeff, in my view, he gives what feels to me like a dose of reality that sits somewhere in between, or sometimes orthogonal to, the extremes of that conversation, if you will. At the same time, I felt like his admonition that we've all developed a bit of a, as he called it, memory hole of what the early internet was actually like is a really important point for us to keep in mind. And to help us refresh our collective memories, I thought I might ask you to wind back the clock to your days as a cub reporter, if you will, fresh out of college in the mid-90s at the Arizona Republic. Would you mind going there, Jeff, for viewers and listeners?

Jeff Selingo

Yeah, so as Cal was talking, it just brought me back to that moment. It was the summer after I graduated in 1995, and I showed up at the Arizona Republic for a 10-week fellowship. I was assigned to the business desk, and the first day I show up, the business editor takes one look at me and says, well, you're young, you must know something about technology. Our tech reporter is on leave for the summer, so you're now our tech reporter. Now, I must admit, Michael, I was a little disappointed, because I wanted to be on the city desk or covering politics. But if you were to read the history of 1995 and its role in technology, I think it might be one of the most pivotal years, because it really was the summer just after the birth of the commercial internet, when it really started to become popular. And I remember being in Phoenix that summer, driving around the Phoenix area, and everywhere you went you saw billboards; you would see the letters "www" on the bottom of any business's advertising. So one of the first pieces I wrote that summer was an entire piece on what the letters "www" mean. Now, just to prove this to our viewers who are watching us on YouTube: seriously, could you imagine a headline now that says, "Clients swarm world web, millions sign up for addresses on homepage concept"? It is really like looking through the history books. I'm not going to read the lede, but when we put up this podcast, we'll definitely link to that piece. Just a fascinating time. I wrote pieces on internet cafes that summer; those were places where you could go get coffee and actually get on the internet, not like today, where you would bring your own laptop or phone. Southwest Airlines, which was the first airline on the internet: you could look up flights, but you still had to call the 1-800 number to actually book the flight. I wrote about chat rooms on AOL, which were kind of the precursor to today's Reddit. And also in that summer of '95 was Windows 95, right? The hype around its new interface really set up the modern desktop and eventually, of course, set up Microsoft's own web browser, because, remember, Apple at this time was in the midst of its real dark days. But as I listened to Cal talk about the transition from the internet to, eventually, Google as a useful tool to actually use the internet, I thought back to those early days, because the quotes from my stories back then are just kind of crazy now as I go back and read them. Here's one from that story I just showed you. It came from Peter Krivkovich, who, by the way, went on to become CEO of Cramer-Krasselt, which is one of the largest marketing firms in the US. Well, he told me, get this, in July 1995, quote: "The internet will stay around, but whether it will become a significant commercial force is questionable." Questionable. "I don't picture the web replacing television or newspaper advertising in the next decade."

Michael Horn

Now, he might not have been off on the time frame.

Jeff Selingo

Yeah, maybe, you know, 2005, right? He did say the next decade, to give him a little slack there. But it really just goes to show, I think, that none of us are great at the prediction game this early in a technology shift. It also shows that it just takes time to find that product-market fit, as Cal said. So let's go back to the internet again. On my own journey through college, freshman year, my work-study job was as a computer consultant, basically a glorified help desk in a writing computer lab on campus, you know, those massive rooms of desktop computers. So that's like '91, '92. Then sophomore year, '92, '93, I get my first email address, so that's the product, right? Email. Then junior year, AOL really takes off, and I start reading the news on my first laptop. And then in 1994, Marc Andreessen starts Netscape and releases Netscape Navigator. And then we finally have this commercial web that takes off, which I write about in 1995. So that period between, say, 1993 and 1995 wasn't a trivial period. It was a real critical cycle in trying to figure out this thing called the internet and how to actually use it. And it seems like the same thing is happening now, as Cal said: how to best integrate this thing called AI into people's lives. Now, will that cycle be two or three years, like it was in the 1990s for the internet? I kind of doubt it, because the clock speed of tech development is just so much faster now, and, as we know, there's so much massive money in the market behind this technology. But there will be a cycle, and we're in that cycle now.

Michael Horn

Yeah, I think that's right, Jeff. I mean, just as an example, I was in London over the summer for 10 days on vacation, and I was walking around trying to figure out where the old internet cafes were, where I used to get online, because we didn't have it in the dorms when I was studying abroad at the time. And I think this was the year 2000. So, to your consequential '93 to '95 window: we still didn't have Google at the end of it, and we still weren't wired everywhere by 2000. This was still a process. And I think this memory-hole point is so important, because we can easily forget these transitions. I was listening to a book recently, actually, arguing that disruptive innovations are just different now from the way they used to be, that they're amazing from the get-go. And that's not actually true when I reflect upon it. It's still the case that most true disruptive innovations start as pretty primitive, Jeff, and then they improve from there. They're seeking their use case; they don't come out of the gate, in many cases, fully baked. The internet and the businesses it spawned: clearly no different. Uber: frankly, no different. You can go down the list; they all take some time. Online learning in higher education: certainly no different. We're still not there, I would argue. And ChatGPT is no different as well, I think.

AI’s Impacts on Writing 

Jeff Selingo

Yeah, right. So Michael, I recently talked with the faculty and staff at Seneca Polytechnic in Toronto to kick off the academic year. And during the Q&A, a faculty member in the fine arts asked me whether writing, and particularly finding your voice, is a basic skill, or one that, in this new learning economy with the advances of AI, will essentially go away. So I wanted to share this clip from my answer to that faculty member:

“I actually think that writing in your authentic voice is kind of the superpower of this new economy, rather than just writing itself. I mean, I will admit I use AI in my own writing, in my own editing. If I'm stuck on something, I'll put in a bunch of prompts to try to say, how should I finish this paragraph? How should I finish this sentence? And then, by the way, it prompts me to think about something that I could put in my own voice. It actually helps accelerate my own writing and my own learning process. That is a higher level of thinking that we need to instill in our students. This is where I think AI can help us with that foundational, basic skill. And then we, as professors and faculty members teaching writing, can help people advance their skills beyond that, so that they can learn how to write in their own voice.”

You know, so the bottom line is that ChatGPT, for writing at least, is no different than a human on the other side helping you brainstorm, but it's not the true writer yet. You know, Michael, I got feedback on some of the chapters of my book from my editor the other day, and I must say, I was doing the happy dance, because he was pleased with what he'd read so far. You know how nerve-wracking it is when you send off a few chapters and you're waiting for that feedback. And I texted my wife to tell her the great news, and she said, okay, well, now you could either stop and have a very short book, because I only shared a few chapters, or, she said, you know, ChatGPT could write the rest. And I laughed, because that's the piece of functionality ChatGPT doesn't have yet. It can't literally write the rest of the book, at least not yet.

Adaptations Required to Integrate AI in Higher Ed 

Michael Horn

At least not yet, I might say, Jeff. At least, hopefully never. But maybe I'm betraying my own sense of insecurity there. And also, congrats on the progress and affirmation; that's important. But I think it also gets us to another set of points Cal made, Jeff, that I want to highlight and get your take on. You mentioned or alluded to it a little before in your answer about the Arizona Republic experience. I think when he was talking, we both glommed on to his point about the updated Turing test. For those who don't know, the original Turing test was proposed in 1950 by Alan Turing. He basically said, look, if we have a computer on one side of a cloak, and you're on the other side having a conversation with it, the question is: are you unable to distinguish whether it is, in fact, a computer or a person? If the computer is able to convince you that it is, in fact, a person, it has passed the Turing test. And so the question around AI has often been just that: can it pass the Turing test, in essence? And Cal, I thought, made an interesting point, which is that the updated Turing test shouldn't just be, can you fake a human being, but can you empty my inbox for me? And I know you're dying for that support, but the answer, clearly, is no for most people at the moment, unless your inbox is just spam, I suppose. We don't have that true functional assistant yet, despite all the predictions, the hype, the talk around Copilot at your wing and all the rest. But for me, Jeff, my thinking has been that for AI to be truly transformational, it's really going to require new business models wrapped around it. And the analogy that's been in my head is when electricity transformed factories in manufacturing. It really only made that transformation once new factory models emerged that realized you didn't have to center the factory around the steam power, that not everyone had to be moving at the exact same rate as governed by a centrally located steam engine. You could now decentralize power, in effect, to each individual workstation through electricity. So it was really the change in model, literally redesigning or building from scratch, in most cases, that made electricity so valuable in the manufacturing sector. And as a result, it actually took a long time for electricity to remake manufacturing. I guess that had been my sense here as well, and it still may be. But I think what Cal is also saying is, hey, the form factor itself is a major limitation right now. It's still primitive. In the future we're not going to be focusing on how to do language prompting in exactly the right way, asking it to pretend it's Steve Jobs to get an innovative idea, or something like that. It's going to get more natural. It's perhaps going to be able to understand the intent behind words, and tone, and things of that nature. And where it really needs to get to, in Cal's view, is the same place email got to at some point: so obvious we just knew how to use it, although there's a bit of a memory hole there too, I suspect. We need to get to that same point with large language models. But I'd love to hear your take on all that, because the implication is that higher ed sort of should wait, wait, wait, and then hurry up.

And you sort of laughed when he made the jump to that part: hey, higher ed, don't teach prompt engineering yet. Don't start with hypothetical use cases yet. Don't imagine that it will wipe out writing. Wait till we see the commercial use cases, and then let's move fast. That's where I think you laughed, and so I would just love to get your thoughts. What was the track in your head as he was describing that?

Jeff Selingo

Well, I laughed for two reasons. Sometimes we see these press releases: oh, we're going to offer prompt engineering now as a major, right? And I also laughed because he said, okay, patience, and then, speed up. And higher ed is not good at either one, especially the speed-up part. And I think that's a critical point where he was advocating for patience. Because, you know, Michael, there's this hot mess of an old-style message board in DC called DC Urban Moms and Dads. Mostly it focuses on moms. It's essentially Reddit, but not as good, I will be honest, but I admit I get sucked into reading some of the threads, because I'm a DC urban dad, right? And there was one the other day in the college and university thread, and I'm often in there seeing what people are talking about in terms of colleges and universities. It had this title, quote, "AI and What the Heck to Major In, If At All," end quote. It was this conversation among parents about what their kids should do because of AI, basically parents freaking out about their kids' majors and how AI will make them all irrelevant. And then someone asked if there was an AI major anywhere that their kids could major in. And of course there is: according to my research, Carnegie Mellon had one way back in 2018; you could get a BS in artificial intelligence. Most of these, of course, are in computer science, but I think the parents were really talking about AI in every other discipline. Should there be such a degree? And so there's another history lesson here that I think is important for our listeners. We all may recall the Great Recession of 2008, and coming out of it, every survey we saw said that teenagers were going to college to get a job. That hasn't changed since then, obviously, even more so; we know from our previous episodes in the last couple of seasons, it's all about the job. And so in 2011, 2012, you started seeing colleges creating all these practical programs. Four or five years later, you would expect most of those practical programs to be cycling through graduates; if these programs are so practical that they get students a job, you would think students would enroll in them and graduate from them. Well, what really happened? You would actually be hard pressed to find graduates of many of these programs, according to an analysis that Burning Glass did at the time. Burning Glass, of course, is now Lightcast. What it found in that report, and we'll link to it in our show notes, is that almost half of all degree programs that first graduated students in 2012 or 2013 conferred fewer than five degrees in 2018. So, five years later, five degrees. Thirty percent of these programs graduated no students, zero students, five or six years after their first graduating classes. And about two-thirds of all programs included in the analysis produced 10 or fewer graduates in 2018. You know, it's funny, because Burning Glass called that report "Bad Bets," and I think that's the case: higher ed doesn't have a great track record of jumping ahead of the market. So this may be a case where higher ed moving more slowly might be a good thing.

Michael Horn

It's so interesting, Jeff, and I'm pretty sure we covered that report on a past episode of Future U. But I will say, and I've said it before, I think this is a big argument for experiential learning. The actual use cases are going to move fast. It's going to be really hard to codify them in curriculum. And so the only way, I think, faculty and institutions will be able to keep up is to let the students actually experience it as, you know, a project in the course of the academic work. And that's all a plus, I think, for apprenticeships, work-based learning, and the like, Jeff: tightly coupling work with the learning, in other words, because they are unpredictably interdependent. Even as I think Cal is right that the techno-optimists like Ethan Mollick, yes, have a lot of wind at their backs at the moment, the pattern is probably going to repeat itself. You referenced that story where the executive said, in the next 10 years, we're not going to see the ad market change. That actually was pretty reasonable in some ways, and I think it's consistent: our expectations are almost always disappointed in the short run, but they're actually often overly conservative in the longer run. I suspect the same pattern repeats with AI. One other thing, though, that made Cal's statement interesting to me: here's a Georgetown University professor, and he is implicitly saying, hey, faculty of higher ed, being relevant to the job market is important; how companies are actually using technology is important as you teach. And we know that is not a popular sentiment with many faculty, Jeff, on campuses.

Jeff Selingo

Yeah, Michael, let me just repeat what he told us, and I'm paraphrasing here: instruction has to be grounded in the truth of the actual commercial world, but then higher education has to be willing to move rather quickly. And now, to quote him, he said the university shouldn't get out ahead and say, we could imagine it would be useful in the future, in this type of job, to be good at ChatGPT, so we're going to teach you how to do it. It needs to actually see how this tool is being used in this job. Okay, now, once we know that, we can move fast and actually teach about it. So, look at the job market and see what's happening. And this brings me back to something, Michael, that we're going to be talking about a lot more in the second episode on AI next week, and that is whether the true impact of AI right now on universities will be on the classroom side or on the administrative and services side. Because the academic side obviously is getting all the oxygen now. If you look at any article about AI and higher ed, it's all about what's happening on the teaching and learning side. And it's interesting, Michael, because I've done some trustee and senior leadership retreats this past August, and one of the things we did was the spectrum exercise, where people get up from their chairs and move around the room. Our friend, and one of the early producers of Future U, Lauren Dibble, gave me this idea. You ask these groups lots of questions about the future, and then you say, okay, if you think this is going to happen, move to this side of the room; if you think that is going to happen, move to that side of the room. And I will tell you that on most of the questions about the future, people were pretty divided. But when I asked them about AI and where its impact is going to be, whether on the administrative side or on the teaching and learning side, almost everybody in the room, in some cases everybody, went to the administrative and services side. And these are people running colleges and universities, whether they're trustees or administrators. So I found it interesting that when we asked Cal about this, and he put on his faculty member hat at Georgetown, he really hasn't seen its impact on the administrative side, at least not yet. And I like how he framed the idea that whatever comes out on either side of this equation in higher ed, whether it's on the teaching and learning side or on the administrative and services side, it has to be intuitive, just like email and Google were. Email, as he said, changed the entire rhythm of how we worked because anybody could use it. To, from, subject line, right? Very simple. So next week, Michael, we're going to dig a lot deeper into how AI is changing higher ed, with a focus on not only the classroom and research but also marketing, enrollment, and student success. So we're going to dive a little deeper on that administrative and services side. Until then, be sure to follow us at Future U Podcast on X and LinkedIn and on the web at futureupodcast.com, where you can sign up for our newsletters. Also follow us now on our YouTube channel, because you can actually see us, not only hear us.

Michael Horn

Not sure if that's a perk, Jeff. 

Jeff Selingo

I'm not sure either. But, you know, my wife, and once again I'm quoting her twice on this podcast, is not quite sure why people watch podcasts. But I don't want to cast aspersions on anyone who watches podcasts, because sometimes I do too. So you can watch us on YouTube at Future U Podcast. You can follow me, at JSelingo, and Michael, at MichaelBHorn, on different social media channels, including LinkedIn, X, and Instagram. And we'll see you next time on Future U.
