
ChatGPT – how this one-year-old child is changing the higher education system

Chris Middleton – November 30, 2023
Summary:
A one-year-old finds itself at university – ChatGPT. But should education embrace it? That was the question for a Westminster policy conference.


A lot has happened in education circles since ChatGPT was launched on 30 November 2022. Over the past 12 months, various surveys have claimed that anywhere from 30% to 90% of students are using generative AI and Large Language Models (LLMs) to complete their assignments. Other surveys have suggested that younger pupils and their parents believe these tools are more effective than personal tutors.

Coupled with recent UK findings that the COVID pandemic has, in many cases, broken the centuries-old bond between students and the education system, some might see these developments as troubling. But are they? Might they, in fact, be an opportunity to make education better, and more forward-looking and inclusive?

Others have pointed out that the arrival of generative AI, hot on the heels of disruptive COVID lockdowns, has pushed students towards an environment of independent inquiry, personal learning, and self-discovery, all of which may have countless benefits – assuming they can avoid misinformation.

In acknowledgement of that, some educators now believe that schools, colleges, and universities need to offer more than just lessons and shift towards pastoral care and well-being – a challenge when local authority budgets are restricted and traditionalists would be up in arms.

But there is another aspect to all this: the extraordinary speed at which AI has filled a void that many people didn’t realise existed. ChatGPT was launched a year ago today, yet has found its way to the epicenter of both higher academia and schooling. Whatever its merits, the haste with which many people have both adopted and trusted AI might seem unwise.

As the head of one AI company put it this month, one problem is that generative AIs are neither “truth machines” nor trained to be. At the very least, students and teachers need to understand that. Models trained on data scraped from the pre-2021 internet are no guarantee of accuracy or lack of bias.

Misinformation rising

So, how can policymakers grapple with these issues – particularly in a world of rising misinformation? And how can the education system ensure academic integrity in the age of AI? Those were two questions posed by a Westminster Higher Education Forum this month.

Professor Michael Grove is Deputy Pro Vice-Chancellor, Education Policy and Standards, at the University of Birmingham. He told delegates that when students at Brown University were given permission to use ChatGPT in one module, there was a surprising outcome:

Their use dropped off quite markedly after the first week or two. What was really interesting was when they spoke to the students about why they weren't using these tools, there were two reasons. The first was perhaps not surprising: the fear of breaking course rules, despite being given permission. 

But the second was that they felt using LLMs would actually detract from their learning, and they didn't want that to be compromised. I find great comfort in the fact that students recognized that they do want to learn.

Cerys Evans is President of the Students’ Union at Lancaster University. She emphasized there is no cause for alarm about the use of LLMs and generative AI tools:

We’ve done research with our students. And we found that, yes, a lot of them are using it. But they're not using it how we thought they were! What we heard – really loudly – from students is that they're using it to plug the gaps in their provision. They're using it as a way to cover areas that their lecture might not have done, or not covered in a way that they understood. Or they might not have a reading list item on a topic they are particularly interested in.

In short, AI is enabling them to pursue some elements of their courses more intuitively, and for themselves. Evidence of inquiring minds, in fact – surely what higher education is aiming for?  She continued:

Students aren't generally interested in cheating their way through degrees. A majority of the time we go to university because we love our subjects, and we want to become subject experts. We want to become useful members of society who know things, and can engage with critical thinking and education processes. 

I think it's a big mindset shift that we need to have, in how we encourage students to use AI in a way that works for them. How can we learn to ask better questions of AI and get better answers? I think that's something that has been a fantastic development in some courses, where lecturers are really taking the initiative to ask and teach students how to engage with AI in a way that's more effective.

Dr Ailsa Crum is Director of Membership, Quality Enhancement and Standards, at the Quality Assurance Agency. She raised a useful and perhaps overlooked point about why some other students might feel obliged to take shortcuts:

Why is it that some students tend to be over-represented in academic misconduct cases? What is it about some students that makes them more vulnerable? Well, this might be students whose first language isn't English or students with disabilities [in both cases facing additional pressures and obstacles].  We also find that students who have a lot of commitments outside of their educational experience can become pressured into taking what we might regard as undesirable shortcuts in their studies.

Put another way, many students find themselves mired in debt or struggling to make ends meet, so take on long hours of casual work when they would otherwise be studying or socializing. She added:

More importantly, what can we do about that? What are the tools and activities we can engage with? These people have given up huge amounts of financial resources, they're investing their time, and they do want to get positive things out of it. But there can be a lot of pressures on their time. And that can lead to less desirable behaviors.

Plugging the gaps

Excellent points. But is something more fundamental now happening in education too – something beyond simply using AI to plug the gaps in traditional learning? 

Conrad Wolfram is CEO of Wolfram Research. The venerable technologist is a longstanding proponent of the idea that education – in particular maths – needs to be completely rethought for the computer age. (His brother Stephen founded the company, and is the prime mover behind the Wolfram Alpha knowledge engine and Mathematica software.) The CEO told delegates:

I would describe [the AI age] as the most quintessentially human of industrial revolutions, and probably the fastest moving ever. And the key questions for us in all of this are, What is it to be human? And what do we need to learn? And, of course, that greatly affects how universities should conduct what they do. And their purpose.

So, let's be clear, there are two discrepant effects of AI on higher education. There's what we need to learn, and how we learn it. But most of the discussion is about the second of these. It’s all been about how we optimise learning, and not about what we learn.

AI drives a need for urgent change in this regard, he said, while also supplying the tools that can provide it:

What GPT and the wider LLMs have done this year is create a bridge between the computational and human worlds. Now, potentially, we have bridged those in a very clean way.

For Wolfram, a good comparison for AI in the history of human progress would be the harnessing of electricity in the 19th century:

Back in the 1870s, electricity was probably thought of as a rather narrow construct. There were lightbulbs, and there were motors, but people didn't see the big picture of how it would change many, many things about life.

In the education system specifically, Wolfram believes that computation will soon be built into more and more subjects on a fundamental level:

Computational biology, computational history, computational linguistics. It's very important to think about the computational aspects of virtually every subject. This doesn't wipe out traditional aspects, but it does add an extra layer. And we must reflect this new dimension of thinking.

He added:

Earlier this year, I published a blog entitled ‘Now computers talk human, humans need to talk computer.’ So, I think there's going to be a very interesting juxtaposition now that generative AI can produce code, for example. I think we're going to see much more of a need for editing than for straight coding in future, and this will fundamentally change what can be done.

So, what did he make of the belief, doubtless held by many traditionalists, that students’ use of generative AI amounts to little more than laziness or cheating? He said:

Funnily enough, this came up a lot with Wolfram Alpha back in 2009. In my view, we need people to be learning for a future in which we have all of these tools – a mixed AI/human future. So, it's very important that they get very used to what the tools can do, so that when they get fooled by them, they can take charge. But what we don't want is dishonesty – to pretend that you didn't use the tool. Somehow, in between those two points, we need to encourage innovation, but also make sure that we encourage honesty.

In my view, a key test is, Would you reasonably use this thing in real life, for what you're trying to achieve? If the answer is yes, then we ought to set up courses and assessments to encourage that. And to make sure the best use can be made of it, while encouraging that person to learn what they need to learn. But that is not necessarily what they would have learned in the past.

Then the next question is, will AI tutoring and assessment become a reality? And particularly, can we get cost-effective, mass, personalised learning – the Holy Grail? It is exciting to see what will be possible.

A fair point. AI can certainly be used to learn about the student, and their individual needs and preferred learning methods. Then Wolfram added:

Another key thing to think about is, can we explicitly talk about what thinking means in the AI age? Often in curricula and in courses, we don't pull out the thinking, because we are so engrossed in the subject itself. We don't think about what the thinking part is!

We should be pulling out the abstraction of thinking techniques, intertwining creativity and process. There's this very interesting idea that you want a certain amount of process linked to creativity, and there's a funny human creativity that will, I think, always be different from AI creativity. Intertwining those often gets the best results.

But what about questions of trust and speed – the haste with which many people are adopting these tools and trusting them to provide accurate answers? He said:

Trust in general, I think that is a critical issue for our societies. And I think that's a critical thing for universities to think hard about too. But I'm not sure that, in societies, we're doing that very well.  And of course, as new technology comes along, it becomes difficult to keep up. Also, there is confidence, which is intermixed with trust. Giving people confidence that they can really achieve new things and trust what they're presented with.

Also, we need to think about the survival aspects we will need in the AI age. Whether you can have a decent job. And what's the real value added? Who are the top people who are going to drive our societies forward? There are things we could fix right now, such as this problem that we have with mass education, which is driving a lot of unhappiness. And the belief that we need more of it, but it's not delivering. That, I think, is critical.

He concluded:

What we need above all is a computationally literate society – and I compare this to mass literacy in the 1800s. Mass computational literacy is what we need now, but we also need computational thinkers who are specialists at the top end.

My take

An important debate. But one that carries its own risks too, including that some in society – and in the higher education system – will be left out. As Wolfram noted, there may be a motivational gap for some. He said:

There'll be a cohort that gets frightened and is left behind. So, what we've got to do is develop, as much as possible, confidence in what they can do to engage positively with AI. Part of that is changing what we're learning, so that we don't give people things that they can just trivially cheat on, and just go round without really thinking.

Might AI encourage a new approach to mass education in which students are obliged to think critically and for themselves? That might be a useful and unexpected outcome, after years in which education has not favoured the critical thinker.
