
Deepening Human Intelligence Alongside AI

Navigating the new landscape of Artificial Intelligence has its challenges, but Professor Rose Luckin believes it’s possible for humans and machines to successfully co-exist, as long as we keep our humanity in the foreground.
Engineering with AI

Artificial Intelligence (AI) is not just the stuff of movies and science fiction; it is here now, and many of us use it every day—for example, when we search the internet, use a voice-activated assistant like Apple’s Siri or Amazon’s Alexa, or pass through an e-passport gate at the airport. AI is here, it is not going away, and it will have an impact on education.

A basic definition describes AI as ‘technology that is capable of actions and behaviours that require intelligence when done by humans’. The desire to create machines in our own image is not new; we have, for example, been building mechanical ‘human’ automata for centuries. However, the concept of AI as a field was really born in the summer of 1956, when ten scientists gathered at Dartmouth College in New Hampshire to work on creating AI.

Following on from this there were some early successes, such as expert systems used for tasks like medical diagnosis. These systems were built from a series of rules through which the symptoms a patient presented could be matched to potential diseases or causes, enabling the doctor, aided by the AI, to make a decision about treatment. They were relatively successful, but limited, because they could not learn: all of the knowledge an expert system could use to make decisions had to be written down at the time the program was created. If new information was discovered about a particular disease or its symptoms, the system’s rule base had to be changed by hand before the new knowledge could be used. Useful systems were built in the 1980s and 90s, but we were nowhere near the dreams of the 1956 Dartmouth College workshop.
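The rule-matching idea behind those expert systems can be sketched in a few lines of Python. The symptoms, conditions and rules below are invented purely for illustration (this is a toy, not medical knowledge): each rule pairs a set of required symptoms with a candidate diagnosis, and the system simply reports every rule whose symptoms are all present.

```python
# A minimal sketch of a rule-based expert system.
# Rules and conditions here are hypothetical, for illustration only.

# Each rule maps a set of required symptoms to a candidate diagnosis.
RULES = [
    ({"fever", "cough", "fatigue"}, "flu"),
    ({"sneezing", "runny nose"}, "common cold"),
    ({"headache", "light sensitivity"}, "migraine"),
]

def diagnose(symptoms):
    """Return every diagnosis whose required symptoms are all present."""
    observed = set(symptoms)
    return [condition for required, condition in RULES
            if required <= observed]  # set containment: rule fully matched

print(diagnose(["fever", "cough", "fatigue", "sneezing"]))  # ['flu']
```

Note how the article’s point about these systems’ limitation shows up directly in the code: the program cannot learn a new disease or symptom on its own—someone has to edit `RULES` by hand.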

Then, in March 2016, came a game-changing breakthrough, one built on many years of research: Google DeepMind produced an AI system called AlphaGo that beat Lee Sedol, the world ‘Go’ champion. This was an amazing feat, and one that could seem like magic. But while many of the techniques behind these machine-learning algorithms are very sophisticated, such systems are not magic and they do have limitations. Smart as AlphaGo is, the real breakthrough came from a combination one might describe as a perfect storm: our ability to capture huge amounts of data, the development of very sophisticated machine-learning algorithms that can process that data, and affordable computing power and memory. Combined, these three factors made it possible to build a system that could beat the world Go champion.

Each element of that perfect storm, the data, the sophisticated AI algorithms, and the computing power and memory, matters: together they are the power that enables us to build AI that can learn and improve. But just like any other technology we might use in education, we need to use AI judiciously and make sure it addresses the educational needs of our institutions, teachers and learners.


AI as EdTech

We can certainly use AI to tackle some of the big educational challenges and to support teaching and learning.
