I read this article The largest internet company in 2030? (link via Martin King @timekord) which predicts that by 2030 the biggest internet company will be an automated teaching provider.
From a business standpoint it might have some merit: one only has to look at the traffic to Wikipedia to see there’s a market for easily accessible information. Once you take out the Star Trek plot summaries and celebrity biographies there might even be some learning going on.
From a technical standpoint, though, the article is completely wrongheaded.
AI is literally AI
It pins its argument on the sudden fashion for all things AI. It references Google’s DeepMind and IBM’s Watson as if these are evidence of a step change in AI. They’re not. They represent progress but only a bit.
Their recent successes in solving real-world problems (like playing Atari VCS Breakout, a labour saving device the world has been waiting for, obviously) are due not to major advances in the state of the art of AI but to advances in the amount of optimised math processing we are able to throw at the problem.
Neural nets are, when it comes down to it, optimising recognisers. You give them a big pile of input data and the correct answers and you get back a really big, but opaque, machine. This machine will, one hopes, suggest what the correct answer might be for a piece of input data that wasn’t in the example set (because it shares similarities with the examples).
By opaque, I mean that the inner workings of the machine are arbitrary. It is simply an arrangement of mathematical operations that happens to give the right results. It’s not intelligent, it gives an artificial impression of intelligence.
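This "pile of examples in, opaque machine out" process can be sketched in a few lines. Everything below is invented for illustration (the data, the single-neuron model, the numbers are not from the article): a lone artificial neuron is trained to separate two clusters of points, and the resulting "machine" is nothing but three numbers that happen to give the right answers.

```python
import random

random.seed(0)

# Toy training data: points near (0, 0) are class 0, points near (2, 2) are class 1.
examples = [((random.gauss(0, 0.3), random.gauss(0, 0.3)), 0) for _ in range(50)] + \
           [((random.gauss(2, 0.3), random.gauss(2, 0.3)), 1) for _ in range(50)]

w1 = w2 = b = 0.0   # the entire "machine" is these three numbers
lr = 0.1
for _ in range(20):                       # a few passes over the examples
    for (x1, x2), target in examples:
        pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        err = target - pred               # classic perceptron update rule
        w1 += lr * err * x1
        w2 += lr * err * x2
        b  += lr * err

def recognise(x1, x2):
    """Suggest an answer for input that wasn't in the example set."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Unseen points are classified purely by similarity to the examples:
print(recognise(1.9, 2.1))   # lands near the class-1 cluster
print(recognise(0.1, -0.2))  # lands near the class-0 cluster
```

Nothing in the trained weights is inspectable as "understanding"; they are simply an arrangement of arithmetic that happens to give the right results, which is the opacity described above.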
Putting real into the artificial
Aside from compute power, recent advances have been largely in how nets are trained. Rather than just pouring in increasing volumes of data (although we’re doing this too thanks to the advent of the Internet), we have developed techniques to intervene in the effect this data has.
We still can’t make sense of what’s inside each neural net but we can split them into sections. We can dictate that one section is good at spotting horizontal lines, one at spotting circles, one at spotting bright areas and so forth. We can dictate that sections are good at combining these features.
What’s happened here is that we have developed techniques to push a little human intelligence into an artificial intelligence. The AI didn’t learn a good way to recognise, for example, cars in photographs; it was built that way by an intelligent human.
We can build a machine to perform a task more effectively but we’ve not changed the basics of what’s going on.
What does artificial teaching do?
The article suggests that an AI teacher would present material to the student and, over time, adapt the presentation to achieve the best results. There are two things to pick apart here.
First, “present material”. This is the crux of what it’s doing. It’s not teaching any more than Wikipedia teaches. It’s a great resource but it’s not a fundamental change in teaching: it’s just a really, really big book with a fabulous index. It is not originating insights. It is not creating teaching material; it is a delivery system for extant knowledge.
Second, “best results”. This means test results. Somebody, hopefully an intelligent human, has sat down and decided what the optimal output is – the questions and the answers. Again the AI does not originate this, it just facilitates.
So now we have an AI, an optimising recogniser, which can present information in the manner most likely to result in a student achieving the best test result. Using the AI, we have a student whose aim is to do well in the test and who is being trained to do exactly that, and only that, like a wired-up, logged-in Pavlovian dog.
It’s possible to argue that this isn’t a bad thing. The quality of the outcome, however, is entirely dependent on the test. The ideal outcome, one that billions of dollars of highly attuned computer power is directed to achieve, is that the student learns exactly what’s in the test, nothing more, nothing less.
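The adaptive loop described above needs no intelligence at all: a simple bandit algorithm will do it. The sketch below is entirely hypothetical (the presentation names, the "student" and the pass rates are invented): an "AI teacher" that doesn't create material or tests, it just learns which pre-written presentation maximises a test score.

```python
import random

random.seed(1)

presentations = ["video", "text", "quiz-first"]

# Simulated student: each presentation has a hidden pass probability.
# The optimiser has no idea *why* one works better; it only sees scores.
hidden_pass_rate = {"video": 0.5, "text": 0.6, "quiz-first": 0.8}

counts = {p: 0 for p in presentations}
scores = {p: 0.0 for p in presentations}

def choose(eps=0.1):
    """Epsilon-greedy: mostly pick the best-scoring presentation so far."""
    if random.random() < eps or all(counts[p] == 0 for p in presentations):
        return random.choice(presentations)
    return max(presentations, key=lambda p: scores[p] / max(counts[p], 1))

for _ in range(2000):
    p = choose()
    passed = 1 if random.random() < hidden_pass_rate[p] else 0  # test outcome
    counts[p] += 1
    scores[p] += passed

best = max(presentations, key=lambda p: scores[p] / max(counts[p], 1))
print(best)  # converges on whatever the test rewards, nothing more
```

The loop optimises the test score and only the test score, which is exactly the Pavlovian setup described above: the quality of the outcome is entirely a property of the test someone else wrote.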
This is not teaching, it’s programming. The student and the AI are peers in a purely mechanistic process.
Super tests
Is it possible to build tests that go beyond end-of-course checking?
Can we mitigate the precision training machine by broadening the target? Could we build a recogniser for a “good life” and train students accordingly? Could an AI teacher recognise behaviour patterns on Facebook and teach in order to modify them?
Maybe, but not any time soon, and even if we could, perhaps we shouldn’t.
Being more charitable
There are some aspects of recent AI developments which are undoubtedly useful in a teaching context.
The ability to search data based on a human language question is improving. Mechanisms to spot common modes of error in exercises would be a useful application. Chatbots may provide an excellent context-aware way to explore information.
What these things are is not AI teaching, but better methods of accessing the information that’s been produced by intelligent people.
Are real people just real AIs?
If an AI is just an opaque recognising machine what extra thing does a person have to differentiate them? Could an AI ever be a good teacher? Is the article just 100, 200 years ahead of itself?
This is a good question and nobody knows the answer. However, like the idea of teaching a “good life”, we are so far away from being able to build that AI that it doesn’t matter right now.
I fear that the UK education system at least is being driven by the tech industry to go down the automation route.
It’s got worrying implications: education is a way of shaping minds, and do we want them shaped by automata?
There is another way … getting out of the managed learning rut and moving down a path that develops the human soft skills of empathy, creativity, inventiveness and imagination, and most of all the ability for independent continuous learning, self-discovery and identity … these will be crucial in the years ahead, which could be quite bleak in terms of traditional employment.
Education should look at its social function rather than just its economic function.
Hopefully … there will be some form of balance, with AI augmenting/assisting the routine stuff … finding out facts etc … and with teachers facilitating creative applications … I fear the worst though, with education policy makers going for automation.