Editorial/Opinion

Sir Anthony Seldon: Bursting The AI Dam

In an exclusive interview with TeachingTimes, Sir Anthony Seldon outlines his vision for the future of AI in the education sector.

TeachingTimes: Your book, ‘The Fourth Education Revolution’, was published in 2018. In it, you referred to education as the ‘Cinderella’ subject of AI – as in, something that was unjustly overlooked. Do you think this is still the case in 2024? 

Sir Anthony Seldon: Yes, I do, and I think there’s a real risk that we are going to fall behind other countries. Firstly, I think there is a risk that we’re going to miss the benefits of the technologies in helping young people, particularly in subjects like mathematics and science where the technology is more advanced. 

It is easier for AI to help students learn in STEM subjects than it is in some of the arts subjects, and still more so the performing arts subjects. It so happens that AI is most advanced in some of the greatest areas of teacher specialist shortage. So, the second risk is that we’re going to miss out on that.

Thirdly, I think there is a risk involving tech companies. Digital technology has brought some benefits to young people, while also doing them immeasurable harm. Schools never got on top of social media and were never part of the conversation. Unless we use this time now to safeguard the interests of the child – of the learner – then tech companies will do what tech companies everywhere around the world do, which is to maximise their sales and their revenue. There’s nothing good or bad about that, it’s just the way they operate. 

I think a bright sign is that the government at last have a minister, Baroness Diana Barran, who has been empowered by the education secretary to lead on this subject, and there are some gifted officials in the DfE who at last have been given the opportunity to lead on this. For 12 years, since 2010, the Conservative government really wasn’t interested in AI, despite a lot of encouragement to become so. That neglect, I think, has done enormous damage. We have lost huge opportunities and are now playing catch-up.

Speaking of Baroness Barran, can you tell us more about what she has done regarding AI?

She is a Tory junior minister who has done more to understand AI than any minister in history. If you look on the DfE website, she has overseen some excellent advice about how schools can adapt to AI. She engages in debate, she talks at conferences, she meets people, she is thoroughly engaged and open in thinking about how government can help schools anticipate what’s happening and decide what is best and what isn’t.

Going back to the idea of playing catch-up, the past few years have seen an explosion in the availability of generative AI. If we were to look at the example of a headteacher who isn’t an expert in EdTech but is interested in what generative AI like large language models (LLMs) could do for their classrooms, where would you advise them to start?

I would advise them to look at the website AI in Education, which I started with a group of state school teachers. It is a not-for-profit organisation which some 1,200 schools and colleges are part of. It is entirely free, and we offer objective advice on where, how and why AI can help.

For instance, AI can help with teacher shortages. It can help to deliver material in interesting ways for students. It can help with lesson planning. It can help with ideas for lesson delivery. It can help provide motivation for students themselves, not least in mathematics and science.

Is this possible to achieve with the LLMs already publicly available?

Absolutely. They are also extraordinarily helpful in assessment and in giving real-time, formative feedback. We are still in the relatively early stages, but things are beginning to build up. It is not unlike a dam, where the water is building up higher and higher. At some point, that dam is going to burst and the products will flood onto the market, and we will either ensure that they are in the interest of genuine learning, that ethical considerations are placed first, that the mental health of young people is factored in and that teachers and teaching assistants are involved ethically and responsibly in the process, or we won’t.

That’s the point: that we need to move ahead. It’s not just in the classroom that this technology comes into play; it’s also extraordinarily helpful for assessing student attendance and helping to maximise utility of space, as well as helping with the organisation and finances of schools, or helping in recruitment. It is a technology which is still embryonic enough to be able to shape and form, but before one knows it it will be everywhere in schools, and I think the supreme challenge is to ensure that it is shaped – unlike pre-AI digital technologies – in the interests of genuine learning, of young people and of teachers and making teaching a more rewarding career.

What do you see as the greatest ethical challenges of AI at the moment? In a concrete sense, what do you see as the potential harm that this technology could do to young people, and how can we look ahead to mitigate this?


As we already know, digital technology is able to map itself onto young people. It personalises itself in terms of their viewing habits, shopping habits and topics of interest. It is extraordinarily clever at worming its way inside the minds of young people and therefore taking hold of them and taking them away from what’s real and true. 

There are risks of the manipulation of young people, not least through deepfakes, which are getting more and more subtle by the week. There are risks to privacy. There are risks of online abuse. There are risks of plagiarism, and, when using AI, it’s very hard to tell what is true.

There are also risks of infantilisation: that these technologies will take out the hard stuff and do all of the difficult things we ask of them. Education is most valuable when young people struggle and when they have challenges – it’s what young people love, the moment when they suddenly ‘get it’. They’re excited when they suddenly understand something in maths or any other subject, and this technology can make life too easy because it does everything that’s difficult for us.

So there is a whole realm of risks with AI, but there are also massive opportunities that it will give young people, schools and education. As I was saying six years ago, when that book was published, we have to get ahead of the advantages and milk them for all they’re worth. Your question was ‘How do we mitigate this?’ We mitigate by making people aware of what the risks are.

In regard to mitigating the risk of infantilisation, then, would you suggest that it would be a good idea to guard against permitting the use of AI among students?

I would certainly be teaching about AI from Year 1 onwards and teaching young people about machine learning, machine intelligence and human intelligence, and I would be grounding young people in what is real and human and humane. But I don’t think we can control this technology; we can just shape it.

I think it’s going to be incredibly difficult to control, like illegal drugs or alcohol. We are just going to have to educate people on how to use this technology and how to avoid being used by it. That’s the key to it. Any university student using social media to speak to their parent from 200 miles away will recognise it as a great asset that their parents never had. There are so many benefits to social media, but so many drawbacks. The point is that the education sector never got on top of it. I am saying that we have to get on top of AI.

We have to start moving on it now, not bury our heads in the sand as the DfE did for many years until Baroness Barran stepped in. This isn’t a political point; Labour’s shadow ministers are very alert to AI. But it’s still a Cinderella, and we are still way behind in the use of AI in many senses. This shouldn’t be the case.

The idea of AI as a force for infantilisation was a core theme of your book as well. Can you tell us more about what this might look like in practice?

I think we can look at the impact of the satnav on people’s sense of geography and ability to negotiate their way around cities or countryside and use maps and read physical signs. People don’t do that anymore. Young people at the school where I’m the Head will just slap in their destination and be taken to it. 

Now, I’ve learned to use maps. I love maps and I love working out how different parts of towns and cities relate to each other, how the rivers work, how the different layers of history fit on top of each other and where the original roads and highways were before cars. That’s all part of my own architecture, but for those young people who have never had to do that, never had to use an A-Z for London or other cities or use OS maps, it’s a different proposition. I think that part of them is being infantilised. 

We all go for easy things. Personally, I’m lazy. I will always go for the easy and quick option. So that’s a practical example of AI infantilising people writ large. If we don’t have to learn foreign languages or grammatical formulae because the technology will do it for us, that is another infantilisation. The great thing about education is challenge.

We would define infantilisation in this case as losing out on skills one might have developed, were it not for the more convenient tool, then?

Absolutely. It is losing out on the development of life skills and challenge and difficulty.

So the optimal implementation of AI would be to have it introduced in such a way that it does not necessarily impact the development of the skills that went into training the AI?

Yes.

Perhaps the bigger question is the idea that AI is going to be a revolution on par with institutionalised learning, as discussed in your book. What has convinced you that AI is going to be such a revolution?

Because the second education revolution, which was the founding of schools, colleges and universities, was dictated by the technology of the time. You couldn’t learn at home. Before schools and colleges, before learning ‘settled down’, it took place within the family and the community. Anthropologically, you can see this in some of the forms of civilisation that have not yet been urbanised; learning is passed down from elders. 

People gathered together in schools because the teacher’s voice could only travel 30 feet. Now, you don’t need to come to a school to learn. You don’t even need to come to a university. Lectures? You can learn more by watching a recording of a person giving a lecture from your phone or at home, where you can pause and rewind it. Seminars? You can do them as well (and in some ways better) online. Libraries? You can use them online as well, more easily than physically going into a library.


Why would you want to have people travelling across the world, polluting towns and cities, other than for the social experience? Even there, you can have valid social experiences online, and the technology is only just getting started in terms of the intimacy with which remote human contact can take place. Now, I am a big believer in human contact and shared human space. I’m saying that we may well lose that when young people don’t have the learning experiences as they had in pre-second revolution models, where people were learning within the family and community without going to institutions. 

I notice that the examples of remote learning that you use are things that we are already seeing with a minimum of AI input. Why is it AI specifically that is going to drive this revolution?

In the book, I refer to the fourth educational revolution as a period in which machine learning and generative AI feed into VR, blockchain, mixed reality and augmented reality; part of a wider sense of learning whereby we can conduct scientific experiments that we never could in a laboratory. These experiments could involve economic modelling, geography modelling, finding out what causes the turning points. We are still not yet fully comprehending quite how revolutionary it is going to be. It’s hard to think of anything that cannot be more imaginatively portrayed.

Consider all the ways that AI can personalise and individuate learning and make remote experiences real. One can walk inside a glacier, or along the ocean floor, or along the rim of an exploding volcano. One can converse with Shakespeare or be part of the cast of Hamlet. One can be a participant in the Battle of Thermopylae. It means a form of learning that challenges what is reality and what is human intelligence, and we are not fully comprehending it.

To those people whose feet are on the ground, I would warn that the storm – the winds, the gales that are going to blow in – will sweep you off the ground unless we adapt now to what is happening.

Earlier, you compared AI to social media as something that took schools in the UK by surprise and which has had in some senses a deleterious impact on the development of young people. You mentioned that schools have never ‘got on top’ of social media and really engaged with students about it. 

To be precise there, I think that, as we can see at the moment in the debate about whether young people should be allowed to have smartphones in schools, this is all reactive stuff. The damage has been done and is still being done by social media to young people and their emotional, psychological and cognitive development. What I’m saying about AI, which is potentially so much bigger than this, is that we need all the more to get ahead of it. 

Digitalisation is an extraordinarily big, important thing, and we need to get ahead of it. We are on the back foot on social media and, had the government 20 years ago thought more about how to ensure that the rich benefits of the technology could be maximised and the downsides could be mitigated, then I think that young people would have been in a much better position than they are today.

So what I’m saying about AI is: let’s get ahead of it. That’s what AI in Education is all about; let’s try to work together with people of goodwill and experience putting the interests of young people first and especially those of the least advantaged. 

And to prevent, as you mentioned in the book, the disproportionate use of technology by the elite to advance their own station?

Absolutely. The tech companies are already well off. We see them today making even more money and employing fewer and fewer people. We should be asking: how can we ensure that AI and the fourth education revolution will give opportunities for social mobility, for lifelong education for all people, for teachers to spend more time interacting with young people rather than on marking and administration? How can we ensure that the technology allows young people to move at the pace at which they are ready to move in each and every subject, rather than the pace of the factory model we have now?

As you have stressed, we are currently behind in acclimating to the use of AI in the current moment. But to finish, what is an example of a really positive use of AI that you’re seeing in secondary-level education at the moment?

Century’s products are really helpful in providing young people with individual learning experiences. That would be one example, but I also think that young people researching and asking questions through generative AI engines is helpful in expanding their learning.

Broadly, however, I think we’re in this lull period that could last two to four years before the applications have been worked out and the dam bursts. This is the time we will look back on and rue if we don’t get it sorted. 

Sir Anthony Seldon is a leading author, educationalist, political commentator and contemporary historian. He is the Head of Epsom College and former Vice-Chancellor at the University of Buckingham, as well as honorary historical adviser to 10 Downing Street. He has also authored or edited over 40 books on contemporary history, politics and education, including ‘The Fourth Education Revolution: Will Artificial Intelligence Liberate or Infantilise Humanity?’ in 2018.
