Tuesday, April 25, 2023 - ChatGPT and other AI tools have put the future of education on the front page yet again. But beyond large language models, what else can AI do in education? And should we worry about students cheating themselves out of an education—or will that education just fundamentally change? Jeff and Michael pose these questions and more to computer science experts Charles Isbell of Georgia Tech and Michael Littman of Brown University. This episode is made possible with sponsorship from Course Hero.
Get notified about special content and events.
Relevant Links
Khanmigo https://www.khanacademy.org/khan-labs
Transcript
What We're Missing When It Comes to Colleges and AI
Jeff Selingo:
Michael, there's no question that even as the mainstream media talks about inflation or bank bailouts or climate change, there is one story they're also talking about that is in our wheelhouse here on Future U. That's the launch last November of OpenAI's ChatGPT.
Michael Horn:
There's no question about it, Jeff. The AI chatbot garnered anywhere from 50 to 100 million users within weeks, depending on whose numbers you believe, and it truly announced, I think, the arrival of generative artificial intelligence in the minds of the public. It wowed and frankly cowed people, and it's put education back onto the front page as people have wondered everything from whether this represents the end of writing or heralds rampant cheating to whether it ushers in a new age of boosting the quality of student work. So today on Future U, we're interviewing two faculty members who specialize in artificial intelligence to help us understand the broader implications of AI and what's coming that perhaps people in higher ed have yet to consider.
Sponsor:
Earn continuing education units this spring with teaching practice, an online faculty development program from Course Hero. Over a series of asynchronous courses, you'll uncover new ways to leverage tech in the classroom and build inclusive curriculum, all while supporting your own wellbeing. Plus, you'll get weekly office-hours support from leading instructors. Enroll for free today at education.coursehero.com. Subscribe to Future U wherever you get your podcasts, and if you enjoy the show, share it with your friends so [inaudible 00:01:46] can discover the conversations we're having about higher education.
Jeff Selingo:
I'm Jeff Selingo.
Michael Horn:
And I'm Michael Horn. When it burst onto the scene in November 2022, OpenAI's ChatGPT turned heads in education. Its clear and thorough written responses to user-generated prompts sparked widespread discussion. What it might mean for higher education was one area of speculation. Some worried about the potential for plagiarism, with students dishonestly passing off computer-generated work as their own creative product. Some viewed that threat as particularly formidable, pointing to three attributes that make ChatGPT different from past tools. First, ChatGPT generates responses on demand. That means students can receive a complete essay tailored to their prompt in a matter of seconds. Second, ChatGPT is not repetitive. It answers multiple submissions of the same question with responses that differ from one another, with different arguments and even different phrasing. And finally, its output is untraceable.
The essays that it produces, the answers that it provides, literally are not stored in any publicly accessible place on the internet. That in turn has led to lots of hand-wringing about the impact of ChatGPT, and AI more generally, on higher education. According to one survey from BestColleges in March of 2023, 43% of college students say they've used AI tools like ChatGPT. Roughly 20% say they've used them to work on assignments or even exams. And roughly a third of students knew that there were rules prohibiting the use of AI at their colleges, but over half said their professors hadn't even talked about AI or ChatGPT. Now, ChatGPT is what's known as a large language model. That means, in the words of ChatGPT itself, it has been trained on a huge amount of text data, such as books, articles, and web pages.
It has learned how to understand and generate human language by analyzing patterns and relationships in this data. That's the end of the quote from ChatGPT, but it does this in a probabilistic way, essentially predicting what the next word should be in a sequence. Michael Littman is a computer science professor at Brown University who studies machine learning and works to engage the public broadly about the applications and implications of artificial intelligence. His take is that public engagement with these core ideas of computing and artificial intelligence is critical to the health of our society, because these tools are a powerful force, and they can be a powerful force for good or harm depending on how they're used. As Michael pointed out, so much of the focus in the higher education conversation has been about these large language models, but that's not all higher education should be thinking about when it comes to AI.
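To make that "predict the next word" idea concrete, here is a minimal sketch in Python of the mechanism being described. It is only an illustration: real large language models use neural networks trained on enormous corpora, while this toy uses simple bigram counts over an invented example corpus.

```python
# Toy illustration (not how ChatGPT itself is built): a language model assigns
# probabilities to possible next words and generates text by repeatedly
# sampling from that distribution. This sketch uses bigram counts over a tiny,
# made-up corpus.
import random
from collections import Counter, defaultdict

corpus = (
    "students write essays . professors grade essays . "
    "students ask questions . professors answer questions ."
).split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def sample_next(word):
    """Pick the next word in proportion to how often it followed `word`."""
    counts = next_word_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

def generate(start, length=8):
    """Generate text one predicted word at a time, as an LLM does at vastly larger scale."""
    out = [start]
    for _ in range(length):
        if out[-1] not in next_word_counts:
            break
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("students"))
```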
Michael L. Littman:
I think one of the things we're seeing, part of the sky falling, is, wait a second, these models are too powerful, and the other part of the sky that is falling is, oh my God, these models are not powerful enough. And so there are a lot of mixed signals, I think, that we're getting along those lines. But at the moment, we shouldn't be too surprised that these models are having a difficult time being useful, because they're not trained to be useful. They're really trained to predict the next word. And so looking forward, my hope is that people will be thinking, instead of generally, let's just make a model that knows everything about everything, to say, okay, we need something in education, we need something to support scientists, we need something to work on the power grid, and that they'll focus the data on how to exploit what we have, what we know, for some particular problem where you can actually measure the performance.
Michael Horn:
Those are a lot of different kinds of models that Michael is proposing folks in higher ed should think about, models that are trained potentially on smaller data sets than large language models are, and models that might do other things. For example, AI can perform specific tasks such as recognizing objects in images or making predictions based on data. As ChatGPT told me, "These tasks typically require specific inputs and outputs, whereas language understanding and generation is more complex and requires a broader understanding of context, grammar and meaning." That's the end of the quote, but in other words, the inputs into these models are not necessarily language, and they are trained by humans to solve specific types of problems.
Charles Isbell is the Dean of the College of Computing at Georgia Tech. He was a guest during a stop on the Future U campus tour at Georgia Tech back in June of 2022, and his research passion is artificial intelligence. He was one of the architects of Georgia Tech's low-cost online master's degree program in computer science, which at the time it launched cost an eighth of what its rivals charged and which has produced extremely good student outcomes. Charles told us that the limitations of today's large language models shouldn't surprise us as much as they seem to have.
Charles Isbell:
It's a large language model, and as Michael says, it's really about the next word. So here's a bunch of words, what's the next word that you're likely to see? It is not, here's a bunch of knowledge and things that have happened recently, what's the next thing that is going to happen, how should I change my view of the world? It's really words to words. Because people talk in a particular way, and because languages have a lot of predictability and there's a lot of data out there, you can appear to know what you're talking about, you can appear to be smart. I was having a conversation with someone recently about this, and they were saying, well, as soon as you push it, it starts making things up. But that's not really true. It's always making things up. It just so happens that the things it makes up sound reasonable most of the time if you keep it in its sort of narrow lane.
Michael L. Littman:
Charles and I have both worked in the field of reinforcement learning, and one of the fundamental premises of that field is that the learning agent needs to actually have access to the world, to interact with it, to learn from it, to make a hypothesis and then test that hypothesis in the world. These language models can't do that. They make a hypothesis, and the only way they can test that hypothesis is by observing somewhere else in the corpus whether there's some language that maybe bears on it. And in some cases the answer is no. No one's talked about that. There are no examples of that. The only way to really learn it is to say something in a social setting and then have everybody look at you like you're crazy, and then you realize, oh, that was the wrong thing to say. And so I think there's a fundamental limitation in language as our entry point to all of intelligence.
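Michael's point about hypotheses tested through interaction is the core premise of reinforcement learning. Here is a minimal sketch of that act-observe-update loop, the thing he contrasts with text-only learning; the three-action environment and all of the numbers are invented for illustration, not anything from the episode.

```python
# A learning agent can only find out which action is best by acting in the
# world and observing what comes back; a text corpus alone cannot answer that.
import random

true_payoffs = [0.2, 0.5, 0.8]   # hidden reward probabilities of 3 actions
estimates = [0.0, 0.0, 0.0]      # the agent's current "hypotheses" about each action
counts = [0, 0, 0]

for step in range(1000):
    # Mostly exploit the current best hypothesis, sometimes explore.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])
    # Test the hypothesis against the world: act, then observe a reward.
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # running average

print("learned estimates:", [round(e, 2) for e in estimates])
```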
Michael Horn:
As a result, Charles then argued, these tools are actually more limited than they might seem at present. They're really good at translation, in other words, but they're not designed to build new ideas that don't yet exist. And the discussion around them also sidesteps the real question, which in Charles's mind isn't worries about plagiarism or the end of writing. Indeed, he called back to a breakthrough in AI in the area of grading written essays from many years ago, even before he helped launch Georgia Tech's low-cost online master's degree program in computer science.
Charles Isbell:
Using vector space models of words, you could predict the same score for an AP English essay as a human grader would give, and it worked remarkably well. And some people are like, oh, this is amazing, but of course there was nothing going on there. It was just a collection of words. What you were really capturing is that people who write really good essays for AP English happen to use certain words. There was nothing about this that captured proper order. So you could in fact take an essay that was worth a five and completely scramble the words so they were nonsense, and you would get the same score, because it didn't understand anything about word order.
And what's happening now in some ways is very similar. It's just a lot more subtle than just looking at the words. So I don't really worry about that. I think there are ways to deal with it. I think the real question we have to ask ourselves is, if we think of this as a tool as opposed to as an enemy, what does that actually look like for us, and how can we use it to effectively educate students and to make our lives easier otherwise?
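Charles's scrambled-essay example is easy to see in code. Here is a minimal sketch of a bag-of-words representation; the essay text and the per-word scoring weights are invented stand-ins for the trained model he describes, and the point is only that the features, and therefore the score, never change when word order does.

```python
# Word-order blindness: a pure bag-of-words representation produces identical
# features for an essay and for the same words scrambled into nonsense, so any
# score computed from those features is identical too.
import random
from collections import Counter

essay = "the evidence clearly supports the author's central argument".split()
scrambled = essay[:]
random.shuffle(scrambled)

def bag_of_words(words):
    return Counter(words)

# Hypothetical per-word weights standing in for a trained scoring model.
weights = {"evidence": 1.5, "clearly": 1.0, "supports": 1.2, "argument": 1.3}

def score(words):
    bow = bag_of_words(words)
    return sum(weights.get(w, 0.1) * n for w, n in bow.items())

print("same features:", bag_of_words(essay) == bag_of_words(scrambled))        # True
print("same score:", round(score(essay), 6) == round(score(scrambled), 6))     # True
```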
Michael Horn:
So if these are some of the challenges with AI in higher education and large language models in particular, what are some of the opportunities? Where might they have impact? Jeff had some immediate ideas that led us all to brainstorm some others.
Jeff Selingo:
I just came back from Arizona State, and they're using it to rewrite course descriptions, especially for the catalog, especially for students, and then they're showing it back to the professor, and the professor says, "Wow, that's a better description of my course than I wrote." And it's also, by the way, using words that people will use to describe the course. It's also helpful for the online courses, to get people to actually register for them and sign up for them.
Charles Isbell:
And one of the things that I discovered is I'm much happier being an editor. I don't want to write the first draft from the blank page if I can help it. I'm very good at writing introductions. I'm very good at sort of bringing things down to their essence and listing contributions, and it's the kind of thing you teach students to be good at, but you want to start from somewhere if you can. And I think it's actually quite useful to use tools like this to just get started and get past that first blank page, but that's not the same thing as doing your work for you.
It's not the same thing as explaining what is important about what you do. So I think it's perfectly fine, and I would be very happy to throw something in as a bunch of notes and then have ChatGPT clean it up and make it sound slightly better. I would, I'm sure, do another pass at it to make it sound like me, but I think that's a wonderful thing. I'm no more bothered by that than I'm bothered by Google Docs or Pages or Word trying to correct my grammar. They're wrong most of the time, but when they're right, I'm perfectly happy with that.
Michael Horn:
So those are just a few of the positive use cases. It's not hard to imagine many more. In addition to being your editor, these tools could serve as your research assistant, a way to uplevel all work around campus, and something to push and prod and test your own thinking. Indeed, since we recorded this conversation, ChatGPT has started to integrate with other tools, calculators and other computing tools, that allow it to do many more things. But all this then raises another question. If folks like Charles are less concerned about plagiarism from these tools, might there still be bad uses for them? The answer is certainly yes, but Charles and Michael both believe that deeper clarity around the goals of using these tools, and more broadly around what we are trying to accomplish in any specific facet of education, would be a helpful starting point.
Charles Isbell:
I think here you have to be very careful, and I think a lot of the language and discussion around this sort of falls into the trap of not being clear about what the goals actually are, and so you end up deciding it's going to be an unalloyed good or an unalloyed evil. But of course it isn't either; it's a tool. Hammers are very good for driving in nails, and they can also be misused in terrible ways, and so you just have to figure out how you're going to use it.
Michael Horn:
And perhaps those alternative uses might not be so terrible, Michael suggested.
Michael L. Littman:
Could they actually cheat themselves out of an education, or do they end up in the long run just getting a different kind of education? So I wonder whether even the undergrads, if they're cheating, but they're cheating very consistently and deeply, might come away with some skills that will pay off in the long run, like editing. Like Charles said, once you get to the pinnacle of your profession, like Charles, mostly what you're doing is editing.
Michael Horn:
These reflections then raise the question: how might we start to tune these models in the context of higher education so that they can be more and more useful? And perhaps this is comforting, but as the conversation shifts to how to tune these models to a specific problem, and as we consider how to get clearer both about the goal of how we use these AI tools and about the goals of any educational activity, that's going to mean more reliance on, and input from, human beings.
Michael L. Littman:
In particular, the notion of tuning it to some particular problem, and not just saying, okay, we've got this giant model, it does everything, therefore it does everything well. You see this in human society, that people specialize, right? And when you are a computer scientist trying to do biology and you don't know any biology, you do bad biology. The way you do good biology is you bring in someone who's a specialist, who actually understands that area extremely well, knows what the subtle distinctions are, knows where the nuance lives, knows where the pitfalls are, and can really help scope a description or the work or whatever kind of biological stuff you're doing from that perspective.
And so I absolutely agree with you, or I guess I'm on board with the idea you're describing, that honing these tools... it's not like we push a button, we create this great AI, and therefore it solves all human problems and we're done. It's always going to have the same issues that we have, which is that to become an expert in an area, you need to study that area. Now, I guess an AI can study more areas in more detail than a person can, but with less comprehension. So at the end of the day, you really still need to do the work of applying it to a particular problem.
Charles Isbell:
The place where I think AI will actually have an impact, if we do it right, is on building models of individual learners, based on the way that they interact with the systems and the data you're able to get. We know, for example, that you can intervene with students in their first semester or two and significantly raise the chances that they will remain in school if you notice they're not going to class or that they're not doing well in a particular subject. And there are all kinds of things you can do where a very simple intervention will allow you to change the trajectory of their academic career.
So imagine doing that at the individual level, and doing it at the level of understanding and learning, and automating that and making it work at scale. So I think the sort of data about groups of people, and then data about individuals, that allows us to tailor things at key moments during their educational journey is a thing we ought to be able to do, which doesn't necessarily have anything to do with ChatGPT or large language models, but it has a lot to do with simply predicting, from what we know about human beings, when they are falling off the cliff or sort of drifting out.
I think that would be easy to do, and it's something we don't do enough of, at least not at the level of individual courses and learning. Imagine what a difference it would make if, as a teacher, you were being told, here are three or four students who are having this particular problem, maybe you should bring this up in your next lecture, or maybe you should look at this quiz question because it seems to predict something else. It would change the way you presented a lecture, or the way you did your final exam, or the way you changed an assignment. Those things can make huge differences.
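As a rough illustration of the early-warning idea Charles sketches here, consider a minimal Python example that combines a few per-student indicators into a risk estimate and surfaces the riskiest students to the instructor. The student records, the indicators, and the weights are all invented; a real system would be trained and validated on institutional data.

```python
# Sketch of an early-warning signal: combine simple indicators per student
# (attendance, quiz scores, late assignments) into a risk score, then flag the
# highest-risk students for an instructor check-in.
students = {
    "A": {"attendance": 0.95, "quiz_avg": 0.88, "late_assignments": 0},
    "B": {"attendance": 0.60, "quiz_avg": 0.55, "late_assignments": 3},
    "C": {"attendance": 0.80, "quiz_avg": 0.40, "late_assignments": 1},
}

def risk(record):
    """Higher is riskier; a crude weighted sum standing in for a trained model."""
    return (
        (1 - record["attendance"]) * 0.4
        + (1 - record["quiz_avg"]) * 0.4
        + min(record["late_assignments"], 5) / 5 * 0.2
    )

flagged = sorted(students, key=lambda s: risk(students[s]), reverse=True)[:2]
print("Consider checking in with:", flagged)
```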
Michael Horn:
These insights then bleed into a place where AI might perhaps have a much bigger impact than what the conversation in higher education is focusing on at the moment, and that's relationships.
Michael L. Littman:
One of the things that we've learned in research on education is that it really matters that people build a relationship with the teacher. It's not just a matter of, we'll give you a lot of facts and then you'll know a lot of things. People need the motivation, they need the positive role model, they need that relationship to happen. At the end of the day, we're going to see, I think, a benefit from this technology to the extent that it can form healthy relationships with people and get them excited about learning. So I think that's a scary thing, because I don't know that I want people to have relationships with machines; that undercuts what a relationship is.
Charles Isbell:
Does it? We do that now. We have virtual TAs, people sort of bond with them, and they learn, and the TAs do a pretty good job of answering the common questions and making things easier for the rest of us. You could argue that even if you ignore the virtual TAs, you and I already benefit from this, right? You teach a thousand people in a class and you have an army of TAs, and the TAs are the ones interacting with the students. The students don't necessarily build a relationship with you, but they think they do.
So Michael and I have created two online classes together, one in machine learning and one in reinforcement learning. I continue to teach them; each one of those classes has a thousand students in any given semester. And the students see video of us, and they feel very strongly a connection to both me and Michael, even though Michael isn't there and doesn't interact with them at all. But they feel this strong connection, and at least the feedback we get is that because of the way we did this, they feel more motivated to learn, they enjoy it, it's actually working. And I don't know why that wouldn't be true of a virtual assistant or something like that.
Michael L. Littman:
That's very threatening to me.
Michael Horn:
But on that, I've seen some early AI that can create actor-like humans that deliver content, that you can reprogram, and that make it much cheaper to create content that feels like you're being talked to, and so forth. Is that going to change the nature of online programs and connections online, or what else can be done from an editing perspective through AI?
Charles Isbell:
Exactly what you say. I think you can create these things, and people will connect to them, and it'll help them learn, or it won't. Somebody's going to have to intervene and be a part of it, but it may make the common case easier to deal with so that you can focus on the more difficult ones.
Michael Horn:
So at the end of the day, will the introduction of AI in education be a good thing or a bad thing? Charles argued that the ultimate impact comes back to relationships, and Michael suggested that the goals we set for AI will be what matters most.
Charles Isbell:
We always overestimate the short-term impact of technology and underestimate the long-term impact. And that's what we're doing here. Everybody's freaking out, and they're imagining worst-case and best-case scenarios. And I absolutely guarantee you that most of the things we're concerned about right now either won't come to pass or can be easily mitigated. It's the long-term, decades-long cultural changes to the way we think about education and the relationship between students and teachers that are going to be impacted, and that's what we're going to have to think about. That's generational, not just what's going to happen on my midterm exam in the next three weeks.
Michael L. Littman:
One of the fears about AI more generally, not just in higher education, is that if we make it too powerful and give it the wrong goals, it's going to do terrible things to us, because it's going to pursue those goals in a way that's to our detriment. And I think arguably that is the case with people too. Somehow humanity is wired so that when we pursue our goals, we often, in the short term, do terrible things, but somehow in the longer term we fix them. There's some force that gets us back on track; somehow we don't accept when things are so terrible, we try to find a way to make them okay again. And I see that in this setting as well: yeah, it's a little bit of a bumpy ride right now, but we're going to make it okay again, we're going to figure out how to make this benefit people broadly.
Michael Horn:
On that hopeful note, we'll be right back on Future U. Do you want your course content to be engaging, or do you want it to be pedagogically sound? You probably want both, right? But knowing how to leverage all the teaching tools at your disposal can feel like a never-ending learning curve, especially when it comes to technology. At Course Hero, teachers with diverse backgrounds come together to create rich and engaging learning experiences using online tools and applications. From learning how to create a more engaging syllabus to building a more inclusive curriculum, Course Hero is a virtual gathering place for teachers who want to level up their digital pedagogy. Join Course Hero's Teaching Community, where digital innovators and classroom change makers connect and share actionable insights for the future of education. Create your free account today at coursehero.com/educators. Members get access to the faculty newsletter filled with teaching tips from fellow faculty, eBooks, and early-bird registration to upcoming events and workshops. Join today at coursehero.com/educators. That's coursehero.com/educators.
Jeff Selingo:
Welcome back to Future U, and thanks Michael for putting together that package from our conversation with Charles and Michael, the other Michael.
Michael Horn:
You bet, Jeff. And I want to dive in right away with a question for you, actually, because I'm curious. You get to spend a lot of time around faculty and administrators in higher ed, and I mentioned some stats about student usage of ChatGPT, but what are you hearing from the faculty and administrators about their reactions? Are they worried? Are they optimistic?
Jeff Selingo:
I think the reactions, to be honest with you, Michael, are all over the place, because so many people, like many of us, are still trying to figure this out. And what I like about it is that where they are figuring it out, they're at least being transparent about it with each other, with administrators and faculty, but also with students. So for example, I was at Tarrant County College in Texas in December, soon after ChatGPT came out, and the faculty members there were having open conversations with students about the tool, about academic integrity, but also about how the tool can be used as an academic aid to explain concepts that were really hard to explain in the classroom, and to ask better questions, because it's generative in that way, so you could continue to go back to it to make your questions better.
And I really like that, because it was involving the students in their learning, it was involving the students in how the tool can be used, it was involving the students in questions about the ethics of the tool. And so I think this is just something that college and university leaders and faculty members have to be really transparent about with their students. I recently hosted a dinner for several alumni of the ASU-Georgetown program that I helped start, the Academy for Innovative Higher Education Leadership. And it was interesting, because at least one of the attendees runs a teaching and learning center, and she emphasized how such centers really need to bring faculty together to learn from each other, because what we're also seeing, I think, is how faculty in different disciplines might be using the tool in different ways.
And I think, again, this is where teaching and learning centers could be really front and center on campuses in bringing folks together. And then one final thought on this piece: we know so many people are running detection tools to catch cheating with ChatGPT, and as we know, some are not very accurate. I really think it comes down to this core belief that the power of these tools should be in helping students, not punishing them. And I really hope that no matter what happens over the next six to 12 months, because I have a feeling this won't be the last time we're talking about this, that that remains at the core of what colleges and universities do: doing what Tarrant County College is doing and making students part of the learning process, rather than bringing this tablet down from on high and saying, oh, faculty, this is how we're going to use it, and oh, students, this is how we're going to punish you if you misuse it.
Michael Horn:
Jeff, I totally want to follow up and just agree, agree, agree with that point, because, A, we know that active learning helps students, and so when you're involving them as active creators, they're going to get a boost. But also, you remember we had Sean Michael Morris on the show at the beginning of the season from Course Hero, and I thought he made the point that we really have to stop playing this game of trying to catch students cheating and instead figure out how to trust them more, in essence, and bring them in as partners in the learning process rather than adversaries. Those weren't his exact words, but I think that's the sentiment. And frankly, it seemed to be Charles and Michael's take to an extent as well, that plagiarism and cheating are just not the most interesting questions at stake here. And I guess instead the question is, okay, what should higher ed be doing? So then I get to pivot and ask you, what's your take on what higher ed should be doing with regard to this stuff if it isn't focusing on the cheating and the plagiarism?
Jeff Selingo:
Well, what's interesting to me about this is that AI in higher ed is not new. It's been around for some time now, particularly in the hands of administrators with administrative tools. So for example, we had [inaudible 00:27:45] at Georgia State, which was driven by technology from Mainstay, formerly AdmitHub, and that was a chatbot that was integrated into course content. And we know from studies they did at Georgia State, a randomized controlled trial where half the students were selected to receive messages related to the classroom through the chatbot and half did not, that the students who received the messages earned grades of B or above at a rate that was 16% higher than those not on the chatbot. So here's clearly a way where AI helped. Or then you have St. Louis University putting Alexa in dorms, again helping answer basic questions.
This is obviously pre-ChatGPT and pre-generative AI, but clearly still driven by AI. And what's interesting to me is that now suddenly students have access to the tools directly, and suddenly we think it's a problem and that it needs to be monitored. So I don't want to repeat ourselves here, but again, it goes back to who has control of the tools. When the administrators had control of the tools, it was fine to use them on students the way they wanted to use them. But now that we have open-access AI tools, suddenly, oh, it's something where the students have to be controlled. We have to watch them, they have to be monitored. And I think it goes back to your point about what Sean Michael Morris told us. And just one other final thought on this, Michael: we know the future of work is partially about the future of AI.
It's clear that AI is going to take over parts of our jobs. I know here on Future U, we hope that AI kind of books our guests, perhaps even writes our scripts for us, so we could free up a little bit of our own time. But whatever, it's going to take over parts of jobs. And so we really need to teach students how to use these tools, how to fail with these tools, and what ethical things we should be thinking about. So the more that we make these tools part of the learning process, it not only makes learning more engaging for students, but more than that, it teaches them how to properly use these tools in the future, when they're going to be confronted with them every day, I would imagine, in their jobs, right?
Michael Horn:
Jeff, I couldn't agree more. I think you're exactly right. In this cat-and-mouse game of schools and technology, with schools trying to catch the cheating and so forth, the technology's going to win. Students are getting savvier, they are using the models to make their essays sound more like them, and the technology's going to keep getting better. This is just not the right focus, and we need to move toward active learning. My favorite, mastery-based learning, where it becomes more about the mastery of the material instead of the game, if you will, of just trying to produce the artifact of an essay. Did I really learn? And really helping students, frankly, gain the confidence to put their own thoughts on the page, even if that means they might fail occasionally. And that's okay, because now we're going to view it as an iterative process in which we're using AI as a tool to help them get better.
Jeff Selingo:
So let's finish up with this then, Michael. We just got back from ASUGSV, and there were so many panels there about AI-powered tools. So what's the coolest AI-powered thing you've seen in education lately? Because we know, again from being at ASUGSV, that every entrepreneur right now is pitching everything with AI in the title. So what are you most excited about?
Michael Horn:
Yeah, so I'm not going to name one of the for-profit ventures that was using AI in the deck. I'm going to go with two nonprofits, actually. Quill.org is one; they use AI as a tool to actually teach writing, Jeff. They basically have developed essay prompts based on passages that students have read and so forth, and then they trained the model on all of the possible answers students could give. And so you start off your essay with a topic sentence, and the AI jumps in and says, "Hey Jeff, you should be more specific in the topic sentence and outline the major points you're going to hit." And then you start to get into your evidence, and it says, "Hey Jeff, you ought to cite X, Y, and Z." And it's really coaching you along. It's a very cool writing tool.
And then the second one that I think a lot of folks are buzzing about is Khanmigo, which is your friend in Khan Academy. It's basically Khan Academy partnering with OpenAI's GPT-4 to create this tutor alongside you. We'll link to it in the show notes; Sal Khan did a demo of what it looks like. But it's really cool: you're doing your exercises, your quizzes, your problems in Khan Academy, and there's a tutor right alongside you now, giving you little prompts and so forth. I frankly think it's pretty revolutionary, Jeff.
Jeff Selingo:
Yeah, it's interesting, Michael. I just think we're going to see so many revolutionary tools coming on this front in the next couple of years. And as we said, I don't think this is the last time we're going to be talking about generative AI on this show. So before we wrap up this episode, we do have one of our favorite segments, our question segment, brought to you this season by Course Hero. This one comes from Karla Carter, an associate professor of cybersecurity at Bellevue University in Nebraska, which, Michael, as you remember, is where many, many moons ago we first came up with the idea to do this podcast together, when we both appeared at Bellevue University. Here's her question.
Karla Carter:
How can we ensure through policy and practice that the use of AI tools in higher education is ethical and equitable for everyone? For example, students, faculty, staff, the public, et cetera. Thank you.
Jeff Selingo:
So what's your take on that, Michael?
Michael Horn:
I think it's a great question. To me, and you said it also, AI is going to become part of everything in the work and the future in which we're living. So look, do not ban it, that's the first rule. There have been a lot of school districts in the K-12 realm that have banned AI, and universities in some cases, I think, have that inkling. Don't ban it; instead incorporate it, make it available, create exposure opportunities really as the default, so that it's part of the course and part of the learning. And this is important, I think, so it doesn't create extra fees for the students. It needs to be something we assume is universal and that universities, I think, are going to provide as part of the financial aid packages, if you will, just as they might help out with courseware, where they have gotten a lot better, rather than sticking students who can't legitimately afford textbooks with the bill. They really need to start to account for these tools in the same way, Jeff. And with just a big thanks to Karla for the question, and a thank you to Charles, making his second appearance on Future U, and to Michael as well out of Brown, for a really interesting conversation. It feels like the news around AI is changing daily at the moment, so we will certainly stay tuned. We look forward to your thoughts, and we'll see you next time on Future U.