Artificial intelligence is (mostly) not intelligent

This is not an artificial intelligence, even though it’s about to forecast the weather.

I last wrote about artificial intelligence here in February 2014. Four and a half years ago it wasn’t something that very many people were paying attention to. Artificial intelligence (AI) had been fashionable in computing circles back in the mid 1980s, but its popularity as a mainstream topic was long gone. Cognitive scientists and psychologists also appeared to have given up on the topic. For example, the Open University removed the chapters on cognitive modelling and connectionism from the final few presentations of DD303 sometime around 2011. Fortunately, this was after I’d taken the course.

However, you can’t help but notice that there’s been a huge surge in software companies jumping onto the AI bandwagon recently. Probably the most irritating manifestation of this trend is the shouty chap on the Microsoft TV advert. While what he’s peddling is interesting, it’s not a definition of AI that I recognise.

By these same standards, the camera on your smartphone isn’t using AI to take better photographs, regardless of manufacturer claims. Chess-playing computers aren’t AIs. And self-driving cars – no, they’re not using AI to avoid obstacles.

All of these examples are simply using the vast computing power we have available today to scan for patterns in ever-larger datasets. Domain-specific algorithms are then used to obtain a result: algorithms that enable them to play chess, avoid obstacles and take better photographs. The more computing power there is, the more options these algorithms can run, and the more intelligent they seem. But they use the power of brute force computing rather than anything resembling an artificial human – or biological – intelligence to obtain results.
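To make that concrete, here’s a toy sketch (mine, not from any of the products mentioned) of what “brute force, not intelligence” looks like: a tic-tac-toe player that simply enumerates every possible future position and applies a domain-specific scoring rule. Give it more computing power and it searches deeper and seems smarter, but nothing in it resembles cognition – and it certainly can’t be retrained to forecast the weather.

```python
# A toy "brute force, not intelligence" player: exhaustively search the
# entire tic-tac-toe game tree and score positions with a hard-coded,
# domain-specific rule. Seems clever; is just enumeration.

def winner(board):
    """Return 'X', 'O', or None for a 3x3 board stored as a 9-char string."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by searching every continuation: +1 if X wins
    with best play, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0  # board full: draw
    scores = []
    for i, cell in enumerate(board):
        if cell == ' ':
            nxt = board[:i] + player + board[i + 1:]
            scores.append(minimax(nxt, 'O' if player == 'X' else 'X'))
    return max(scores) if player == 'X' else min(scores)

# With perfect play from both sides, tic-tac-toe is a draw:
print(minimax(' ' * 9, 'X'))  # 0
```

Ask this program to take a photograph, or even play noughts and crosses on a 4×4 board, and you get nothing – which is exactly the domain-specificity point.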

If you ask your camera phone to play chess, you won’t get very far. Likewise, you’ll not find a self-driving car that can diagnose illnesses. There are people who can do both – maybe even simultaneously – and avoid obstacles while driving a car, figure out that Brexit is a bad idea and so on.

Having said all of that, these examples are still better uses of computing resources and power than cryptocurrency mining. At the time of writing, this activity is consuming as much electricity as the whole of Austria and adding incrementally to climate change.

So if my earlier examples aren’t AI, what is?

The term AI should be reserved for systems that (a) simulate human cognition and (b) can subsequently be used to explain how human cognition works. An AI system should also not be inherently domain-specific. In other words, the computing framework (hardware plus software) used should be capable of being retrained to deliver solutions in multiple domains, potentially simultaneously, just as a person can.

Without such rigour being applied to the definition of AI, any or all computer programs could be called AI. Much as I love the algorithm I wrote for my premium bond simulator a few days ago, it’s not an AI. Neither is my weather forecaster.
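For what it’s worth, the premium bond simulator is nothing more exotic than a Monte Carlo loop – here’s a minimal hypothetical sketch (the real code isn’t shown here, and the 1-in-24,500 monthly odds per £1 bond are illustrative rather than official):

```python
import random

def simulate_wins(bonds, months, odds=24_500, seed=42):
    """Count prize wins for a holding of `bonds` one-pound bonds over
    `months` monthly draws, where each bond wins independently with
    probability 1/odds per draw. Odds are illustrative, not official."""
    rng = random.Random(seed)  # seeded for a repeatable simulation
    wins = 0
    for _ in range(months):
        for _ in range(bonds):
            if rng.random() < 1 / odds:
                wins += 1
    return wins

# A £50,000 maximum holding simulated over one year:
print(simulate_wins(50_000, 12))
```

Dice-rolling in a loop, in other words. Useful, even fun – but calling it AI would be exactly the kind of label inflation this post is complaining about.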

I’m not trying to argue about the number of angels that will fit on a pin-head here. I have a real concern about the misuse of the term AI. There is genuinely interesting research being performed in artificial intelligence. SpiNNaker at Manchester University appears to be one such example.

However, nothing will stop the flow of funding to valuable AI research faster than the inevitable perception (I predict within 3 years) that AI has failed. This will happen because software marketeers don’t understand what AI is and don’t really care anyway. For them, AI is simply a means to shift more computing “stuff”. When it is no longer a hot topic it will be unceremoniously dumped and rubbished in favour of the next “big thing”.

Think I’m exaggerating? Take a look at the rise and fall of any big computing trend of the last 40 years. Object databases in the mid 1990s, for example. Computing has always been the equivalent of the fashion business for nerds (like me).

4 thoughts on “Artificial intelligence is (mostly) not intelligent”

  1. Thought-provoking as always – love the succinct honesty. Could have done with you reinforcing an event VSL has in London today covering similar ground. These reality checks are essential – your stat about Austria is sobering. I have shared with my LinkedIn contacts. Thanks for posting

    1. Thanks Andrew. There’s some really clever stuff out there at the moment, but calling everything AI is a nonsense. Far better to reserve the term for genuine attempts at cognitive modelling.

      I did toy with the idea of introducing a fourth test for real AI – that of consciousness. After all, it is an emergent property of biological brains. Can we really say we have artificial intelligence until we have machines with something that approximates human consciousness?

      1. I would say that is essential to the test, Tim, but until we reach that level of potential ‘responsibility’ we are running ever-increasing risks of unregulated algorithms used on a self-serving basis for competitive advantage (or worse), which is reminiscent of the wild west.

        There are public calls for something akin to the FDA in the US to enforce registration and potentially regulation of algorithms. I am not sure that is the mitigation to the risk, since it also implies policing. But I understand why the calls are there.

        And then there is the exponential compounding when algorithms get together – flash crash scenarios don’t directly endanger life yet, but the fallout is potentially as significant as that.

  2. AI ‘things’ are just machines with past computer learning and human intelligence in them. As you say, your camera cannot play chess. All technology is human-created. It is us who will create. Once we lost interest in going to the moon, and a ‘rebirth’ is only just appearing. As you say, interest can drop off. Then in later decades it will revive, with more new human knowledge producing it.

Your thoughts?