Using AI to help with schoolwork has become commonplace among students. Our survey of 1,000 secondary school students revealed that two-thirds (67%) are using generative AI to assist with their studies. Some are using it to solve maths problems, while others are using it to draft English essays or as a translation tool. Yet many teachers lack the training, support and guidance to identify AI usage and introduce good practice into the classroom. The same study interviewed 500 teachers and found that one in ten could not tell the difference between AI-generated text and a pupil's own work.
With AI technology becoming more sophisticated and the gulf between teachers' and students' usage only widening, a significant knowledge gap is emerging. This not only impedes teachers' ability to educate students on using the technology effectively and realising its many benefits, but it can also raise safeguarding concerns. Without a full awareness of the technology, it becomes even more difficult for teachers to protect students' safety online.
Steps have been taken to address this at a governmental level, with the Online Safety Bill including numerous provisions covering AI chatbots, such as ChatGPT, to help protect users from the harmful content these tools can produce. However, with AI advancing quickly, these provisions must be kept up to date if they are to protect those they are intended to support.
Following the Department for Education's call for evidence on the use of generative AI in education and the UK's leading role in the global AI Safety Summit, it's clear that technology is high on the government's agenda. Yet regulation beyond the Online Safety Bill continues to lag. For the education sector, this poses a challenge: without firm guidance on using AI, it becomes difficult to navigate both the threats and the opportunities it presents.
Removing the administrative burden