Do robo-pets dream of electric sheep?

Philip K. Dick would be proud. Twenty robo-pets have been “installed” in Lancashire care homes. Definitely not AI, but an interesting use of robotics nonetheless.

The council’s champion for older people is quoted as saying:

These robo-pets are fantastic because they look and act like the real thing. The dogs bark when they hear you, the cats purr when you stroke them.


UKAuthority reports that there are plans for more.

The brain is (mostly) not a computer

I recently had my attention drawn to this essay from May 2016 – The Empty Brain – written by psychologist Robert Epstein (thanks Andrew). In it, Epstein argues that the dominant information processing (IP) model of the brain is wrong. He states that human brains do not use symbolic representations of the world and do not process information like a computer. Instead, the IP model is one chained to our current level of technological sophistication. It is just a metaphor, with no biological validity.

Epstein points out that no-one now believes that the human brain works like a hydraulic system. However, this was the dominant model of intelligence from 300 BCE to the 1300s. It was based on the technology of the times. Similarly, no-one now argues that the brain works like a telegraph. This model was popularised by physicist Hermann von Helmholtz in the mid 1800s. The IP model of the brain can be traced back to the mid 20th century. Epstein cites John von Neumann (mathematician) and George Miller (psychologist) as being particularly influential in its development. His conclusion is that it is as misguided as the hydraulic and telegraphy models of earlier times.

If Epstein is correct, his argument has significant implications for the world of artificial intelligence. If humans are not information processors, with algorithms, data, models, memories and so on, then how could computing technology be programmed to become artificially intelligent? Is it even possible with current computing architectures? (*) There has been no successful ‘human brain project’ so far using such a model. I’m convinced (as both a computer scientist and psychologist) that there never will be.

However, I disagree with what I interpret as Epstein’s (applied) behaviourist view of human intelligence. The argument that we act solely on combinations of stimuli reinforced by the rewards or punishments that follow has been thoroughly debunked (+). There is a difference between explaining something and explaining it away. The behaviourist habit of explaining away mental events, rather than attempting to explain them, is a serious barrier to progress, every bit as serious as the obsession with the IP model, to the exclusion of other possibilities, exhibited by many cognitive scientists.

Living together in perfect harmony on my bookshelf – some of the many psychological traditions.

Just because we can’t currently say how the brain changes in response to learning something, or how we later re-use this knowledge, doesn’t mean that the task will always be impossible. It certainly doesn’t mean that our brains don’t have biological analogues of memories or rules. Declarative and procedural knowledge exists, even if there isn’t a specific collection of neurons assigned to each fact or process we know.

Furthermore, the limits of our current understanding of brain architecture don’t invalidate the IP paradigm per se, at least as a partial explanation of human intelligence. We shouldn’t be surprised at this. After all, blood circulates around the body (and brain) using hydraulics, so that earlier model of how the brain functions isn’t completely invalid, at least at a low level. It may therefore turn out that the IP model of intelligence is at least partly correct too.

Epstein finishes his essay by asserting: “We are organisms, not computers. Get over it.” He’s right, up to a point. But the explanations (or explaining away) he offers are partial at best. Psychologists from all traditions have something to add to the debate about human intelligence. Discarding one approach solely on the grounds that it can’t explain everything that makes up human intelligence is just silly. And that’s something which Epstein definitely needs to get over.

 

(*) I asked the same question at the end of Brainchildren – Exploding robots and AI. I’m still not ready to answer it!

(+) For example, see Dennett’s essay Skinner Skinned in Brainstorms.

Brainchildren – Exploding robots and AI

I’ve recently been dipping into Brainchildren – essays on designing minds, by the philosopher Daniel C. Dennett. The essays in the book were written between the mid 1980s and 1998. There’s a whole section dedicated to artificial intelligence, hence my interest. It’s instructive to look at this topic from a philosophical rather than a purely technological perspective. It certainly makes a pleasant change from being constantly bombarded with the frenzied marketing half-truths of the last couple of years. I mean you, shouty Microsoft man.

Part of the cover of Brainchildren, by Daniel C. Dennett

My conclusion from reading Brainchildren is that many of the problems with AI, known in the 80s, have not been addressed. They’ve simply been masked by the rapidly increasing computer power (and decreasing costs) of the last three decades. Furthermore, the problems that beset AI are unlikely to be resolved in the near future without a fundamental shift in architectural approaches.

Exploding Robots – The Frame Problem

One such hard problem for AI is known as the frame problem. How do you get a computer program (controlling a robot, for example) to represent its world efficiently and to plan and execute its actions appropriately?

Dennett imagines a robot with a single task – to fend for itself. The robot is told that the spare battery it relies on is in a room with a bomb in it. It quickly decides to pull the cart its battery sits on out of the room. The robot acts and is destroyed, as the bomb is also on the cart. It failed to realise a crucial side effect of its planned action.

A rebuilt (and slightly dented) robot is programmed with the requirement to consider all potential side effects of its actions. It is set the same task and decides to pull the cart out of the room. However, it then spends so much time evaluating all of the possible implications of this act – Will it change the colour of the walls? What if the cart’s wheels need to rotate more times than it has wheels? – that the bomb explodes before it has had time to do anything.

The third version of the robot is designed to ignore irrelevant side effects. It is set the same task, decides on the same plan, but then appears to freeze. The robot is so busy ignoring all of the millions of irrelevant side effects that it fails to find the important one before the bomb explodes.
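The dilemma is easier to see in code. Here’s a minimal Python sketch of my own (the million-fact world, the timings and the function names are all invented for illustration; Dennett’s version is purely a thought experiment). Both the exhaustive robot and the “ignore the irrelevant” robot still have to touch every fact they know about, and the clock doesn’t wait:

```python
# Toy illustration of the frame problem. The world is a pile of facts; the
# robot has already chosen its action (pull the cart out) and must decide,
# before the bomb timer runs out, which facts that action affects.
import time

FACTS = [f"fact_{i}" for i in range(1_000_000)]  # almost everything is irrelevant
FACTS.append("bomb_is_on_cart")                  # the one fact that matters
RELEVANT = {"bomb_is_on_cart"}

def robot_v1():
    """Acts immediately and never checks side effects: fast, but gets blown up."""
    return "pulls cart out of room (with the bomb still on it)"

def robot_v2(deadline):
    """Deduces every possible consequence of the action: runs out of time."""
    checked = 0
    for fact in FACTS:
        if time.time() > deadline:
            return f"BOOM, after pondering only {checked:,} of its deductions"
        _ = f"does pulling the cart change {fact}?"  # stand-in for a real deduction
        checked += 1
    return "pulls cart out of room, having considered everything"

def robot_v3(deadline):
    """Labels each fact relevant or irrelevant first: the labelling itself costs time."""
    labelled = 0
    for fact in FACTS:
        if time.time() > deadline:
            return f"BOOM, after labelling {labelled:,} facts as irrelevant"
        _ = fact in RELEVANT
        labelled += 1
    return "pulls cart out of room, leaving the bomb behind"

if __name__ == "__main__":
    print(robot_v1())
    print(robot_v2(deadline=time.time() + 0.05))  # each robot gets 50 ms before the bomb
    print(robot_v3(deadline=time.time() + 0.05))
```

The numbers are arbitrary, but the shape of the problem isn’t: however the robot frames it, deciding what not to think about is itself work that grows with everything it knows.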

AI is impossible to deliver using 20th century technologies

Dennett concludes that an artificially intelligent program needs to be capable of ignoring most of what it knows or can deduce. As the robot thought experiments show, this can’t be achieved by exhaustively ruling out possibilities. In other words, not by the brute-force algorithms commonly used by chess-playing programs and presumably by this fascinating system used in the NHS for identifying the extent of cancer tumours.
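For the avoidance of doubt, by “brute force” I mean something like the generic exhaustive game-tree search sketched below, not the actual code of any chess engine or of the NHS system (the function arguments are placeholders of my own):

```python
# A generic exhaustive (minimax-style) game-tree search. Its apparent skill
# scales with how many positions it can afford to evaluate, not with any
# judgement about which branches are worth looking at.
def search(position, depth, maximising, legal_moves, apply_move, evaluate):
    """Score `position` by expanding every legal move down to `depth` plies."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    scores = [
        search(apply_move(position, move), depth - 1, not maximising,
               legal_moves, apply_move, evaluate)
        for move in moves  # no pruning and no notion of relevance
    ]
    return max(scores) if maximising else min(scores)
```

With a branching factor of b and a search depth of d, this evaluates on the order of b^d positions, which is why extra hardware makes such programs look cleverer without changing how they “decide” anything.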

The hardest problem for an AI isn’t finding enough data about its world. It’s making good decisions (*), efficiently, about the 99% of the data it holds that isn’t relevant.

Human brains do this filtering task incredibly efficiently, using a fraction of the computing power available to your average mobile ‘phone. Artificial “brains”, unless ridiculously constrained, simply don’t perform with anything like the flexibility required. My belief is that the key problem lies with the underlying computing architectures used for current “AI” systems. These architectures have been fundamentally unchanged since the 1940s. An entirely new approach to system architecture (hardware and software) is required, as the computational paradigm is unsuitable for the task.

 

(*) Decisions as good as, and ideally better than, those a trained person would make.

Artificial intelligence is (mostly) not intelligent

This is not an artificial intelligence, even though it’s about to forecast the weather.

I last wrote about artificial intelligence here in February 2014. Four and a half years ago it wasn’t something that very many people were paying attention to. Artificial intelligence (AI) had been fashionable in computing circles back in the mid 1980s, but its popularity as a mainstream topic was long gone. Cognitive scientists and psychologists also appeared to have given up on the topic. For example, the Open University removed the chapters on cognitive modelling and connectionism from the final few presentations of DD303 sometime around 2011. Fortunately, this was after I’d taken the course.

However, you can’t help but notice that there’s been a huge surge in software companies jumping onto the AI bandwagon recently. Probably the most irritating manifestation of this trend is the shouty chap on the Microsoft TV advert. While what he’s peddling is interesting, it’s not a definition of AI that I recognise.

By these same standards, the camera in your smartphone isn’t using AI to take better photographs, regardless of manufacturer claims. Chess-playing computers aren’t AIs. And self-driving cars? No, they’re not using AI to avoid obstacles.

All of these examples simply use the vast computing power we have available today to scan for patterns in ever-larger datasets. Domain-specific algorithms are then applied to obtain a result: algorithms that enable them to play chess, avoid obstacles or take better photographs. The more computing power there is, the more options these algorithms can evaluate, and the more intelligent they seem. But they use the power of brute-force computing, rather than anything resembling an artificial human (or biological) intelligence, to obtain results.
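As a caricature of that “scan for patterns, then apply a domain-specific rule” recipe, here is a hypothetical nearest-neighbour sketch of mine (not any vendor’s actual code): more data and more compute make it look smarter, but it has no use outside its single domain.

```python
# A caricature of pattern scanning plus a domain-specific rule: nearest-
# neighbour lookup over stored examples. Results improve with more data
# and more compute, but the program is locked to one domain.
from math import dist

def nearest_label(sample, dataset):
    """Return the label of the stored example closest to `sample`.

    `dataset` is a list of (feature_vector, label) pairs; every query is a
    full scan, so more examples cost more compute and give better answers.
    """
    _, label = min(dataset, key=lambda item: dist(item[0], sample))
    return label

# A "camera" tuned on exposure examples cannot be asked to play chess:
exposures = [((0.1, 0.9), "boost shadows"), ((0.8, 0.2), "reduce highlights")]
print(nearest_label((0.2, 0.7), exposures))  # -> "boost shadows"
```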

If you ask your camera phone to play chess, you won’t get very far. Likewise, you’ll not find a self-driving car that can diagnose illnesses. There are people who can do both – maybe even simultaneously – and avoid obstacles while driving a car, figure out that Brexit is a bad idea and so on.

Having said all of that, these examples are still better uses of computing resources and power than cryptocurrency mining. At the time of writing this activity is consuming as much electricity as the whole of Austria and adding incrementally to climate change.

So if my earlier examples aren’t AI, what is?

The term AI should be reserved for systems that (a) simulate human cognition and (b) can subsequently be used to explain how human cognition works. An AI system should also not be inherently domain-specific. In other words, the computing framework (hardware plus software) used should be capable of being retrained to deliver solutions in multiple domains, potentially simultaneously, just as a person can.

Without such rigour being applied to the definition of AI, any or all computer programs could be called AI. Much as I love the algorithm I wrote for my premium bond simulator a few days ago, it’s not an AI. Neither is my weather forecaster.
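I haven’t published the simulator, but it is nothing grander than a Monte Carlo draw along these lines (a hypothetical sketch of my own; the odds and prize table below are illustrative placeholders, not NS&I’s actual figures and not my actual code):

```python
# A hypothetical premium bond simulator: a plain Monte Carlo draw, no AI.
# ODDS_PER_BOND, PRIZES and PRIZE_WEIGHTS are illustrative placeholders only.
import random

ODDS_PER_BOND = 1 / 24_500            # assumed chance of any win, per bond, per month
PRIZES = [25, 50, 100, 500, 1_000]    # illustrative prize ladder
PRIZE_WEIGHTS = [85, 8, 5, 1.5, 0.5]  # illustrative relative frequencies

def simulate_year(bonds_held, months=12):
    """Return total winnings from one simulated year of monthly draws."""
    total = 0
    for _ in range(months):
        # Each bond is an independent trial for "any prize this month".
        wins = sum(random.random() < ODDS_PER_BOND for _ in range(bonds_held))
        total += sum(random.choices(PRIZES, weights=PRIZE_WEIGHTS, k=wins))
    return total

print(simulate_year(bonds_held=10_000))
```

No learning, no model of cognition: just arithmetic repeated very quickly, which is exactly why it doesn’t qualify under the definition above.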

I’m not trying to argue about the number of angels that will fit on a pin-head here. I have a real concern about the misuse of the term AI. There is genuinely interesting research being performed in artificial intelligence. SpiNNaker at Manchester University appears to be one such example.

However, nothing will stop the flow of funding to valuable AI research faster than the inevitable perception (I predict within 3 years) that AI has failed. This will happen because software marketeers don’t understand what AI is and don’t really care anyway. For them, AI is simply a means to shift more computing “stuff”. When it is no longer a hot topic it will be unceremoniously dumped and rubbished for the next “big thing”.

Think I’m exaggerating? Take a look at the rise and fall of any big computing trend of the last 40 years. Object databases in the mid 1990s, for example. Computing has always been the equivalent of the fashion business for nerds (like me).

Passing the Turing test is an achievement – but the Lovelace test is terrifying!

Congratulations are due to the creators of Eugene Goostman, the first computer program to pass the “Turing test”. It’s a remarkably difficult thing to create a program that is able to imitate conversation reliably and convince even a minority of observers that a human is on the other side of the screen. Problems of artificial intelligence are what sparked my first interests in psychology back in the early 1980s, so the steady progress that has been made towards this goal over the last 64 years demonstrates to me how creative we are as a species.

Lesser plaudits need to go to many journalists who have reported this achievement in somewhat breathless tones, often forecasting in apocalyptic terms the end of human society as we know it. The Independent’s reporting is typical of how the test has been misunderstood. They report:

Computing pioneer Alan Turing said that a computer could be understood to be thinking if it passed the test, which requires that a computer dupes 30 per cent of human interrogators in five-minute text conversations.

Unfortunately for the Independent (and doom-merchants everywhere), Turing’s 1950 paper (*) makes no such claim. While he starts the first paragraph of his paper asking the question “Can machines think?”, he quickly changes his focus to discuss whether a machine might be able to imitate a human in a conversation (the imitation game). His famous test is formulated to assess that specific proposition, not whether a computer program can be said to be truly thinking for itself – a proposition that he believed to be “too meaningless to deserve discussion”. Instead, what Turing writes is:

I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

Turing was out in his prediction by around 14 years (a mere twinkle in time, of course), and modern computers have a rather greater storage capacity than his 10^9 binary digits (roughly 125 megabytes), but it was a remarkably accurate prediction if you take the long view of human history.

So, unlike much of the mainstream media, I’m not in the least worried that a computer program has finally passed his famous test.

However, there is a test that, should a computer program ever pass it, will definitely send me off in search of stocks of corned beef, bottled water and a safe bolthole in the Derbyshire hills. It’s known as the Lovelace test. Simply put, it says that if the human designer(s) of a computer program are unable to account for the output it produces, then it can truly be said to have become conscious. Now that’s a scary thought!

(*) Turing, A.M. (1950). Computing Machinery and Intelligence. Mind, 59, 433–460.

This article was originally written for the University of Leicester Student Blogs, 8th June 2014.