Artificial intelligence is (mostly) not intelligent

This is not an artificial intelligence, even though it’s about to forecast the weather.

I last wrote about artificial intelligence here in February 2014. Four and a half years ago, it wasn’t something that many people were paying attention to. Artificial intelligence (AI) had been fashionable in computing circles back in the mid-1980s, but its popularity as a mainstream topic was long gone. Cognitive scientists and psychologists also appeared to have given up on the topic. For example, the Open University removed the chapters on cognitive modelling and connectionism from the final few presentations of DD303 sometime around 2011. Fortunately, this was after I’d taken the course.

However, you can’t help but notice that there’s been a huge surge in software companies jumping onto the AI bandwagon recently. Probably the most irritating manifestation of this trend is the shouty chap on the Microsoft TV advert. While what he’s peddling is interesting, it’s not a definition of AI that I recognise.

By these same standards, the camera on your smartphone isn’t using AI to take better photographs, regardless of manufacturer claims. Chess-playing computers aren’t AIs. And self-driving cars – no, they’re not using AI to avoid obstacles.

All of these examples are simply using the vast computing power we have available today to scan for patterns in ever-larger datasets. Domain-specific algorithms are then used to obtain a result: algorithms that enable them to play chess, avoid obstacles and take better photographs. The more computing power there is, the more options these algorithms can evaluate, and the more intelligent they seem. But they use the power of brute-force computing rather than anything resembling an artificial human – or biological – intelligence to obtain results.
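To make the brute-force point concrete, here is a minimal sketch of the kind of game-tree search a chess program relies on. The game interface it assumes (legal_moves, apply, is_over, evaluate) is hypothetical, and real engines add many refinements, but the only “intelligence” on display is exhaustive enumeration plus a hand-written scoring rule; extra depth simply means burning more computing power.

```python
# A minimal negamax game-tree search: domain-specific and brute force.
# The `game` object is a hypothetical interface; any two-player,
# zero-sum game (chess included) could sit behind it.

def negamax(state, depth, game):
    """Score `state` for the player to move by exhaustively searching
    `depth` plies ahead. No learning, no understanding: just enumeration
    plus a hand-written evaluation heuristic."""
    if depth == 0 or game.is_over(state):
        return game.evaluate(state)
    best = float("-inf")
    for move in game.legal_moves(state):
        child = game.apply(state, move)
        best = max(best, -negamax(child, depth - 1, game))
    return best

def choose_move(state, depth, game):
    """Pick the legal move with the best negamax score. More depth
    (i.e. more computing power) makes the program look smarter
    without changing what it fundamentally does."""
    return max(game.legal_moves(state),
               key=lambda m: -negamax(game.apply(state, m), depth - 1, game))
```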

If you ask your camera phone to play chess, you won’t get very far. Likewise, you’ll not find a self-driving car that can diagnose illnesses. There are people who can do both – maybe even simultaneously – as well as avoid obstacles while driving a car, figure out that Brexit is a bad idea, and so on.

Having said all of that, these examples are still better uses of computing resources and power than cryptocurrency mining. At the time of writing, that activity is consuming as much electricity as the whole of Austria and adding incrementally to climate change.

So if my earlier examples aren’t AI, what is?

The term AI should be reserved for systems that (a) simulate human cognition and (b) can subsequently be used to explain how human cognition works. An AI system should also not be inherently domain-specific. In other words, the computing framework (hardware plus software) used should be capable of being retrained to deliver solutions in multiple domains, potentially simultaneously, just as a person can.

Without such rigour being applied to the definition, almost any computer program could be called AI. Much as I love the algorithm I wrote for my premium bond simulator a few days ago, it’s not an AI. Neither is my weather forecaster.
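To illustrate just how un-intelligent that simulator is, here is a simplified sketch of the sort of thing it does. The odds and prize value below are illustrative placeholders rather than the real NS&I figures, but the structure is nothing more than a repeated random draw.

```python
import random

# A sketch of a premium bond simulator: twelve monthly prize draws,
# each bond being an independent entry. The odds and prize value are
# illustrative placeholders, not the real NS&I figures.

def simulate_year(num_bonds, monthly_odds=1 / 24_500, prize=25):
    """Return the total winnings from one simulated year."""
    winnings = 0
    for _ in range(12):                  # one draw per month
        for _ in range(num_bonds):       # each bond is a separate chance
            if random.random() < monthly_odds:
                winnings += prize
    return winnings

# Example: estimate the average annual return on a 1,000-bond holding.
runs = [simulate_year(1_000) for _ in range(500)]
print(f"average winnings: £{sum(runs) / len(runs):.2f}")
```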

I’m not trying to argue about the number of angels that will fit on a pin-head here. I have a real concern about the misuse of the term AI. There is genuinely interesting research being performed in artificial intelligence. SpiNNaker at Manchester University appears to be one such example.

However, nothing will stop the flow of funding to valuable AI research faster than the inevitable perception (within three years, I predict) that AI has failed. This will happen because software marketeers don’t understand what AI is and don’t really care anyway. For them, AI is simply a means to shift more computing “stuff”. When it is no longer a hot topic, it will be unceremoniously dumped and rubbished in favour of the next “big thing”.

Think I’m exaggerating? Take a look at the rise and fall of any big computing trend of the last 40 years. Object databases in the mid-1990s, for example. Computing has always been the equivalent of the fashion business for nerds (like me).

Passing the Turing test is an achievement – but the Lovelace test is terrifying!

Congratulations are due to the creators of Eugene Goostman, the first computer program to pass the “Turing test”. It’s a remarkably difficult thing to create a program that can reliably imitate conversation and convince even a minority of observers that a human is on the other side of the screen. Problems of artificial intelligence were what first sparked my interest in psychology back in the early 1980s, so the steady progress made towards this goal over the last 64 years demonstrates to me how creative we are as a species.

Fewer plaudits are due to the many journalists who have reported this achievement in somewhat breathless tones, often forecasting in apocalyptic terms the end of human society as we know it. The Independent’s reporting is typical of how the test has been misunderstood. They report:

Computing pioneer Alan Turing said that a computer could be understood to be thinking if it passed the test, which requires that a computer dupes 30 per cent of human interrogators in five-minute text conversations.

Unfortunately for the Independent (and doom-merchants everywhere), Turing’s 1950 paper(*) makes no such claim. While he starts the first paragraph of his paper asking the question “Can machines think?”, he quickly changes his focus to discuss whether a machine might be able to imitate a human in a conversation (the imitation game). His famous test is formulated to assess that specific proposition, not whether a computer program can be said to be truly thinking for itself – a proposition that he believed to be “too meaningless to deserve discussion”. Instead, what Turing writes is:

I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

Turing was out in his prediction by around 14 years (a mere twinkle in time, of course), and modern computers have somewhat greater storage capacity than 10^9 words or bytes, but it was a remarkably accurate prediction if you take the long view of human history.
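For the record, the arithmetic is simple: Turing’s 70 per cent chance of a correct identification is the same claim as the Independent’s 30 per cent dupe rate, and his fifty-year horizon from 1950 points at the year 2000, some fourteen years before the 2014 result.

```latex
% The Independent's "30 per cent" and Turing's "70 per cent" are the same figure:
P(\text{interrogator fooled}) \;\ge\; 1 - 0.70 \;=\; 0.30
% And the size of the miss in Turing's timing prediction:
1950 + 50 = 2000, \qquad 2014 - 2000 = 14\ \text{years}
```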

So, unlike much of the mainstream media, I’m not in the least worried that a computer program has finally passed his famous test.

However, there is a test that, should a computer program ever pass it, will definitely send me off in search of stocks of corned beef, bottled water and a safe bolthole in the Derbyshire hills. It’s known as the Lovelace test. Simply put, it says that if the human designer(s) of a computer program are unable to account for the output it produces, then it can truly be said to have become conscious. Now that’s a scary thought!

(*) Turing, A.M. (1950). Computing Machinery and Intelligence. Mind, 59, 433–460.

This article was originally written for the University of Leicester Student Blogs, 8th June 2014.