Assessing the dangers of AI applications in education

March 2, 2018 Tony Bates


Lynch, J. (2017) How AI will destroy education, buZZrobot, 13 November

I’m a bit slow catching up on this (I have a large backlog of articles and books to review), but this is the best critique I have seen of the potential dangers of AI applications in education.

Don’t be put off by the title – it’s not totally anti-AI but thoughtfully criticises some of the current thinking about AI applications in education.

It’s worth reading in full (an 8-minute read), but here’s a quick summary to encourage you to have the full meal rather than a snack, with my bits of flavouring on top:

Measuring the wrong things

Most data collected about student learning is indirect, inauthentic, lacking demonstrable reliability or validity, and reflecting unrealistic retention timelines. And current examples of AIEd often rely on these poor proxies for learning, using data that is easily collectable rather than educationally meaningful.

Yes, but don’t educators do that too?

(re)Discovering bad ways to teach

AIEd solutions frequently incorporate false and/or unsupported educational ideas reflecting the biases of their developers….If AIEd is going to benefit education, it will require strengthening the connection between AI developers and experts in the learning sciences. Otherwise, AIEd will simply ‘discover’ new ways to teach poorly and perpetuate erroneous ideas about teaching and learning.

I hope the good folks at MIT are reading this because this is exactly what happened with their early MOOCs.

Prioritising adaptivity over quality

The ubiquity of poor quality content means AIEd technologies often simply recommend the ‘best’ piece of (crappy) content or identify students at risk of failing a (crappy) online course early in the semester….Improving and evaluating the quality of instructional content is neither easy nor cheap, it also isn’t something any AIEd solution is going to do. 

This comes down to the criteria AI uses to make recommendations: replacing criteria such as the number of hits or likes with more educational ones, such as clarity and reliability. Not easy, but not impossible. And we still need to improve the quality of content, whether we use AI or not.

Swapping affect for efficiency

Maybe one day AIEd will be capable of effectively identifying and nurturing student emotions during learning, but until then we must be careful not to offload educational tasks that, on the surface, may appear menial or routine, but critically depend on emotion and meaningful human connections to be optimally beneficial.

AI advocates often argue that they are not trying to replace teachers but to make their lives easier or more efficient. Don’t believe them: the key driver of AI applications is cost reduction, which means reducing the number of teachers, as teachers are the main cost in education. In fact, the key lesson from all AI developments is that we will need to pay increased attention to the affective and emotional aspects of life in a robot-heavy society, so teachers will become even more important.

Comment

One problem with being old is that you keep seeing the same old hype going round and round. I remember the same arguments back in the 1980s over artificial intelligence. Millions of dollars went into AI research at the time, including into educational applications, with absolutely no payoff.

There have been some significant developments in AI since then, in particular pattern recognition, access to and analysis of big data sets, and formalized decision-making within limited boundaries. The trick, though, is to recognise exactly what kinds of applications these new AI developments are good for, and what they cannot do well. In other words, the context in which AI is used matters and needs to be taken into account. Hence the importance of Lynch’s comment about involving learning scientists/educators in the design of AI applications in education.

I believe there will be some useful applications of AI in education, but only if there is continuing dialogue between AI developers and ‘learning scientists’/educators as new developments in AI become available. But that will require being very clear about the purpose of AI applications in education and being wide awake to the unintended consequences.
