Brainchildren – Exploding robots and AI

I’ve recently been dipping into Brainchildren: Essays on Designing Minds, by the philosopher Daniel C. Dennett. The essays in the book were written between the mid 1980s and 1998. There’s a whole section dedicated to artificial intelligence, hence my interest. It’s instructive to look at this topic from a philosophical rather than a purely technological perspective. It certainly makes a pleasant change from being constantly bombarded with the frenzied marketing half-truths of the last couple of years. I mean you, shouty Microsoft man.

Part of the cover of Brainchildren, by Daniel C. Dennett

My conclusion from reading Brainchildren is that many of the problems with AI, known in the 80s, have not been addressed. They’ve simply been masked by the rapidly increasing computer power (and decreasing costs) of the last three decades. Furthermore, the problems that beset AI are unlikely to be resolved in the near future without a fundamental shift in architectural approaches.

Exploding Robots – The Frame Problem

One such hard problem for AI is known as the frame problem. How do you get a computer program (controlling a robot, for example) to represent its world efficiently and to plan and execute its actions appropriately?

Dennett imagines a robot with a single task – to fend for itself. The robot is told that the spare battery it relies on is in a room with a bomb in it. It quickly decides to pull the cart its battery sits on out of the room. The robot acts and is destroyed, as the bomb is also on the cart. It failed to realise a crucial side effect of its planned action.

A rebuilt (and slightly dented) robot is programmed with the requirement to consider all potential side effects of its actions. It is set the same task and decides to pull the cart out of the room. However, it then spends so much time evaluating all of the possible implications of this act – Will it change the colour of the walls? What if the cart’s wheels need to rotate more times than it has wheels? – that the bomb explodes before it has had time to do anything.

The third version of the robot is designed to ignore irrelevant side effects. It is set the same task, decides on the same plan, but then appears to freeze. The robot is so busy ignoring all of the millions of irrelevant side effects that it fails to find the important one before the bomb explodes.

AI is impossible to deliver using 20th century technologies

Dennett concludes that an artificially intelligent program needs to be capable of ignoring most of what it knows or can deduce. As the robot thought experiments show, this can’t be achieved by exhaustively ruling out possibilities – in other words, not by the brute-force algorithms commonly used by chess-playing programs and presumably by this fascinating system used in the NHS for identifying the extent of cancer tumours.

The hardest problem for an AI isn’t finding enough data about its world. It’s about making good decisions (*) – efficiently – about the 99% of data held that isn’t relevant.

Human brains perform this filtering task incredibly efficiently, using a fraction of the computing power available to your average mobile ‘phone. Artificial “brains”, unless ridiculously constrained, simply don’t perform with anything like the flexibility required. My belief is that the key problem lies with the underlying computing architectures used for current “AI” systems, which have been fundamentally unchanged since the 1940s. An entirely new approach to system architecture (hardware and software) is required, as the current computational paradigm is unsuitable for the task.


(*) Decisions as good as – and ideally better than – those a trained person would make.

Artificial intelligence is (mostly) not intelligent

This is not an artificial intelligence, even though it’s about to forecast the weather.

I last wrote about artificial intelligence here in February 2014. Four and a half years ago it wasn’t something that very many people were paying attention to. Artificial intelligence (AI) had been fashionable in computing circles back in the mid 1980s, but its popularity as a mainstream topic was long gone. Cognitive scientists and psychologists also appeared to have given up on the topic. For example, the Open University removed the chapters on cognitive modelling and connectionism from the final few presentations of DD303 sometime around 2011. Fortunately, this was after I’d taken the course.

However, you can’t help but notice that there’s been a huge surge in software companies jumping onto the AI bandwagon recently. Probably the most irritating manifestation of this trend is the shouty chap on the Microsoft TV advert. While what he’s peddling is interesting, it’s not a definition of AI that I recognise.

By these same standards, the camera on your smartphone isn’t using AI to take better photographs, regardless of manufacturer claims. Chess-playing computers aren’t AIs. And self-driving cars – no, they’re not using AI to avoid obstacles.

All of these examples are simply using the vast computing power we have available today to scan for patterns in ever-larger datasets. Domain-specific algorithms are then used to obtain a result – algorithms that enable them to play chess, avoid obstacles and take better photographs. The more computing power there is, the more options these algorithms can evaluate, and the more intelligent they seem. But they use the power of brute-force computing rather than anything resembling an artificial human – or biological – intelligence to obtain results.

If you ask your camera phone to play chess, you won’t get very far. Likewise, you’ll not find a self-driving car that can diagnose illnesses. There are people who can do both – maybe even simultaneously – and avoid obstacles while driving a car, figure out that Brexit is a bad idea and so on.

Having said all of that, these examples are still better uses of computing resources and power than cryptocurrency mining. At the time of writing this activity is consuming as much electricity as the whole of Austria and adding incrementally to climate change.

So if my earlier examples aren’t AI, what is?

The term AI should be reserved for systems that (a) simulate human cognition and (b) can subsequently be used to explain how human cognition works. An AI system should also not be inherently domain-specific. In other words, the computing framework (hardware plus software) used should be capable of being retrained to deliver solutions in multiple domains, potentially simultaneously, just as a person can.

Without such rigour being applied to the definition of AI, any or all computer programs could be called AI. Much as I love the algorithm I wrote for my premium bond simulator a few days ago, it’s not an AI. Neither is my weather forecaster.

I’m not trying to argue about the number of angels that will fit on a pin-head here. I have a real concern about the misuse of the term AI. There is genuinely interesting research being performed in artificial intelligence. SpiNNaker at Manchester University appears to be one such example.

However, nothing will stop the flow of funding to valuable AI research faster than the inevitable perception (I predict within 3 years) that AI has failed. This will happen because software marketeers don’t understand what AI is and don’t really care anyway. For them, AI is simply a means to shift more computing “stuff”. When it is no longer a hot topic it will be unceremoniously dumped and rubbished for the next “big thing”.

Think I’m exaggerating? Take a look at the rise and fall of any big computing trend of the last 40 years. Object databases in the mid 1990s, for example. Computing has always been the equivalent of the fashion business for nerds (like me).

Premium bond mythbusting

One of yesterday’s budget announcements was the lowering of the minimum premium bond purchase from £100 to £25 by March 2019. Inevitably the usual conspiracy theorists and/or people who don’t understand probability came out to play on various forums.

Some facts:

  • Every bond in every draw has an equal chance of winning a prize. Currently, these odds are 24,500 to 1 against.
  • If you hold a single £1 bond, then with average luck you’ll win a prize once every 24,500 months – or once every 2,041 years.
  • A £100 holding would improve these odds to once every 20 years or so.
  • Someone with the maximum holding of £50,000 could therefore expect to win around two prizes a month. 2.04, to be precise.

However, two myths seem to be in common circulation. The widespread belief in these myths perhaps explains why 42% still think that the NHS will get £350m/week extra after Brexit. (Also a myth – along with the idea that a Brexit of any type will deliver a dividend to the UK).

  • Myth 1 – blocks of consecutive bond numbers stand a better chance of winning than widely scattered bond numbers.
  • Myth 2 – a newer bond has a better chance of winning than an older bond.

Neither is true – as every bond in the draw has an equal chance of winning a prize.

Myth 1

It makes no difference whether you hold a single block of consecutive bonds or if they are scattered. Believing otherwise is as fallacious as suggesting that the sequence 6,6,6,6,6,6 is more or less likely than 1,2,3,4,5,6 when rolling a die six times. Assuming a fair die, any six number sequence is as likely as any other, as a die has no memory for what was rolled previously. The same is true for premium bonds – there’s no memory for which numbers have been drawn.

Myth 2

The old number / new number myth probably stems from the observation that new bonds seem to win more frequently than older ones – if you just look at lists of prize winners. However, this neglects the obvious point that older bonds are more likely to have been cashed in than newer ones. Regardless of when a bond was bought, it still has an equal chance of winning a prize. This myth is especially pernicious, as someone who withdraws older bonds to purchase newer ones loses at least a month of opportunity. This is because a bond bought (say) in November won’t be entered into a draw until the following January.

Crunching the numbers

Now, I realise that if you’re still struggling to see past these myths, some proof might be useful. So, as part of my mental recovery from chemotherapy, I’ve written a premium bond simulator this morning. Its aim is to dispel these two myths.

It works by simulating 82 years’ worth of bond draws. 81 years is the average UK life expectancy; the extra year stems from the rule that bonds can be left in the draw for up to a year after the holder’s death.

To make the program run in a reasonable length of time, the number of bonds in each draw has been scaled back from 60 billion (approximately the number in circulation) to 6 million. Maximum bond holdings have been scaled back proportionately – from 50,000 to 5. This means that the expected outcome for a maximum holder – an average of 2.04 prizes per month – is maintained in line with the real NS&I draw.

There are four bondholder types defined. Someone with a block of consecutive numbers, someone with widely scattered numbers, a holder who has old bonds (represented by low numbers) and a holder with new bonds (represented by high numbers).

Here’s the output of a couple of runs from earlier on this afternoon. They demonstrate that every bondholder type has an approximately equal chance of winning over a lifetime of bondholding.

On this run, the bondholder with scattered single bonds won more times than the bondholder with a block of consecutive numbers.
On this run, older bonds outperformed new bonds.

If you’re still not convinced, here’s the source code so you can play with it yourself.

#include <stdio.h>
#include <stdbool.h>
#include <time.h>
#include <stdlib.h>

#define TOTALBONDS 6000000
#define WINODDS    2.45
#define BONDSHELD  5
#define TOTALDRAWS 984

/* Return a pseudorandom number between 0 and 1 */
double rand0to1(void)
{
        return ((double)rand() / (double)RAND_MAX);
}

int main(void)
{
        int prizes, winner, i, drawnumber, prizesallocated;
        int blocktotal = 0, singletotal = 0, oldtotal = 0, newtotal = 0;

        /* Different bondholder types and their bond numbers */
        int blockbonds[BONDSHELD]  = {200000, 200001, 200002, 200003, 200004};
        int singlebonds[BONDSHELD] = {10534, 248491, 1000302, 4000522, 5200976};
        int oldbonds[BONDSHELD]    = {1, 10, 100, 1000, 10000};
        int newbonds[BONDSHELD]    = {5999900, 5999910, 5999920, 5999930, 5999940};

        /* static, as 6 million bools is too large for the stack */
        static bool allbonds[TOTALBONDS];

        /* Seed the pseudorandom number generator */
        srand((unsigned)time(NULL));

        /* Total prizes are calculated using current NS&I odds of 24,500 to 1 per bond held,
           scaled to 2.45 to 1 per bond held for this simulation */
        /* The total number of bonds in circulation is also scaled back by the same proportion,
           from around 60 billion to 6 million */
        /* Therefore the maximum holding in this simulation is 5 bonds, equivalent to 50,000 in the real draw */
        /* Average luck implies each bondholder should win 2.04 times per draw - the same as in the real draw */

        /* Run the draw multiple times - 12 draws are equivalent to 1 year's worth of real draws */
        for (drawnumber = 0; drawnumber < TOTALDRAWS; drawnumber++) {
                /* Set up the draw - no-one has won yet */
                for (i = 0; i < TOTALBONDS; i++)
                        allbonds[i] = false;

                /* Work out the total number of prizes */
                prizes = (int)(TOTALBONDS / WINODDS);

                /* Draw a new bond to win until all prizes are allocated */
                prizesallocated = 0;
                while (prizesallocated < prizes) {
                        winner = (int)(rand0to1() * TOTALBONDS);
                        if (winner == TOTALBONDS) winner = TOTALBONDS - 1;
                        /* The NS&I rules state that the same bond cannot win twice in the same draw */
                        /* prizesallocated is not incremented in this event and the prize is redrawn */
                        if (!allbonds[winner]) {
                                allbonds[winner] = true;
                                prizesallocated++;
                        }
                }

                /* Check each bondholder against the draw, and increment the total number of times they have won */
                printf("\nWinners for draw %d\n", drawnumber + 1);
                for (i = 0; i < BONDSHELD; i++) {
                        if (allbonds[blockbonds[i]])  { printf("Block bond %d wins!\n", blockbonds[i]); ++blocktotal; }
                        if (allbonds[singlebonds[i]]) { printf("Single bond %d wins!\n", singlebonds[i]); ++singletotal; }
                        if (allbonds[oldbonds[i]])    { printf("Old bond %d wins!\n", oldbonds[i]); ++oldtotal; }
                        if (allbonds[newbonds[i]])    { printf("New bond %d wins!\n", newbonds[i]); ++newtotal; }
                }
        }

        /* Calculate what the average luck was for each of the bondholders */
        printf("\nSummary of results\n");
        printf("Block bond holder won on average %.2f times per draw\n", (float)blocktotal / TOTALDRAWS);
        printf("Single bond holder won on average %.2f times per draw\n", (float)singletotal / TOTALDRAWS);
        printf("Old bond holder won on average %.2f times per draw\n", (float)oldtotal / TOTALDRAWS);
        printf("New bond holder won on average %.2f times per draw\n", (float)newtotal / TOTALDRAWS);

        return 0;
}

Pester goes – but that isn’t the biggest surprise of the TSB debacle

Paul Pester, CEO of TSB, has finally left the business after the bank’s customers suffered another long weekend of failing processes and systems. That he resigned today “with immediate effect” isn’t the most surprising aspect of the story, even after he grimly clung on for months after the initial failures in April.

BBC report that TSB has lost just 6,000 net customers as a result of April's process and systems failures
From the BBC report on Paul Pester’s resignation

The BBC report that 26,000 TSB customers (out of 4.5 million claimed on their LinkedIn page) closed their accounts as a result of the TSB’s botched project to change their banking systems.

I find it astonishing that nearly 199 in 200 customers chose to stay with the bank. It’s even more astonishing that 20,000 people have actively decided to become customers since April, making a net loss of just 6,000. At this rate TSB will have more customers by the end of 2018 than they had at the beginning.

On this basis, having a high-profile business failure looks like a recipe for success. British financial consumers, no matter how badly they’ve been treated, seem unlikely to change their allegiance. As it happens, it’s also the trait that the government are banking on to push through Brexit.

What to do if you get a scam internet service provider call

Over the last few days I’ve received a number of calls from scammers posing as my internet service provider (ISP). Which? has also noted an uptick in activity from these parasites.

If you do answer a call from them, the best advice is to hang up and block the number they called from. Whatever you do, don’t be fooled into providing them with your account details or into installing and running any software on your computer. The one call I did answer, rather than letting it go through to voicemail, was amateurish in the extreme, but might easily have fooled a vulnerable person.

Scam phone calls are on the rise

Numbers that I’ve caught and blocked in the last three days include:

  • 0151 327 3276 and 0151 329 0986

This appears to be an Indian outfit routing calls via a Liverpool number. Their gambit was to suggest that Nigerian scammers had compromised my router. Because my router has no keyboard(!), they needed remote access to my computer to change its address.

Even if the claim was credible, changing the (external) address on a router is usually as simple as restarting it. This is because UK retail ISPs dynamically allocate your address and in some cases, the same one is shared by multiple consumers. I managed to waste 10 minutes of their time by feigning incompetence. They eventually got bored with me and hung up.

  • 020 9637 1427

This is an automated message, claiming that your router has been compromised and threatening disconnection within 12 hours if you don’t press 1 to speak to a technician. Don’t press 1 – hang up!

  • +33 883 571 187, +33 983 215 066, +33 998 826 457, +33 239 932 929, +33 537 542 458, +33 307 433 634, +33 248 324 733 and +33 913 120 655

All of these (French) numbers rang within a few hours of each other. The chances are the numbers are being spoofed and all the calls originate from the same scammers’ operation.

Finally, some general advice:

  • Your ISP will never call you from an “unknown” number.
  • Your ISP will never call and randomly ask for your personal information, such as your account number, bank details, date of birth etc.
  • If you’re unsure if a call is genuine, hang up and contact your ISP on their official number.
  • If you’ve been scammed or an attempt has been made, contact Action Fraud online or on 0300 123 2040.

Raspberry Pi weather forecaster

In my ongoing struggle to overcome chemo-brain, I’ve recently created a rudimentary (*) weather forecaster. It’s based on a Raspberry Pi Zero W that came as part of a gift subscription to the MagPi. It incorporates a BME280 sensor (reporting pressure, temperature and humidity) and an LCD display.

The current prototype

Not pretty, but functional. It requires just 3 hours of historical data before it makes a forecast! Somehow I don’t think anyone will claim that professional weather forecasters are no longer needed … unless you’re someone who still believes in unicorns. (+)

Prototype Raspberry Pi weather forecaster
Prototype Raspberry Pi weather forecaster. The Pi is hidden under the multiplexing board.
The technical stuff

The LCD display and BME280 sensor are both connected to the Pi using the I2C bus exposed by GPIO (physical) pins 3 (data) and 5 (clock). Power to the LCD is provided through a 5v pin (2 or 4) and to the BME280 through a 3.3v pin (1 or 17). The final wire to each component is the ground. Any of GPIO pins 6, 9, 14, 20, 25, 30, 34 or 39 will do.

The coding tasks (in C and FORTRAN) were greatly simplified by the use of the excellent wiringPi library.

Release 1.0 of the code is available on github should anyone want to enjoy a good laugh at my legacy coding skills.


  • BME280 – £5.45 (I bought two for £10.90, just in case my soldering skills let me down).
  • LCD display – £7.99.
  • Raspberry Pi Zero W – free as part of the current MagPi subscription offer, or around £10–£15 if you need to buy one and a 40-pin GPIO header.

Next steps

  • Finding a suitably sized waterproof project box with a transparent lid to house the electronics (sadly far harder than it used to be since the demise of Maplin earlier this year).
  • Tidying up the wiring by using a piece of Veroboard (other brands are available) so I don’t need to use my only multiplexing board (by far the most expensive component in the current build).
  • Creating a better forecasting model. In FORTRAN, naturally. I acknowledge that it could take me some time to become as good as the Met Office …



(*) “Rudimentary” makes it sound better than it is. Here’s the current forecasting model it uses for those who don’t want to wade through all the code on github.

C                                                                      C
C     PDIFF - PRESSURE CHANGE (HPA) OVER THE LAST 3 HOURS              C
C     CIND  - CHANGE INDICATOR STRING                                  C
C     CFORE - FORECAST STRING                                          C
C                                                                      C
C     AUTHOR: TJH 26-07-2018                                           C
C                                                                      C
      IF (PDIFF.LE.-0.5 .AND. PDIFF.GE.-3.0) THEN
         CIND="v "
         CFORE=" Some rain possible "
      ELSE IF (PDIFF.LT.-3.0 .AND. PDIFF.GE.-6.0) THEN
         CIND="vv"
         CFORE="Wind,rain;fine later"
      ELSE IF (PDIFF.LT.-6.0) THEN
         CIND="vv"
         CFORE="    ** STORMY **    "
      ELSE IF (PDIFF.GE.0.5 .AND. PDIFF.LE.3.0) THEN
         CIND="^ "
         CFORE="Fine, becoming drier"
      ELSE IF (PDIFF.GT.3.0) THEN
         CIND="^^"
         CFORE="Becoming dry & windy"
      ELSE
         CIND="- "
         CFORE="No change in weather"
      END IF


(+) For example, someone who still believes that Brexit is a really good idea. So maybe I should approach the disgraced former defence secretary, Liam Fox, to promote it for me.

B&Q = Do It Yourself integration

Earlier on this week I ordered some new fence panels for delivery from B&Q. This was the message I received during checkout:

DIY “digital” at B&Q

A DIY (not very) digital experience. Fortunately, the part of their instructions about the letter was wrong: I received an email from their supplier some hours later to organise delivery. This (naturally, given the paucity of the shopping experience so far) involved re-typing a B&Q order number into their website, even though they could have provided me with a unique URL to follow in their email.

If only there was a company that provided market-leading business-to-business integration software …

Whetstone FORTRAN benchmark: Raspberry Pi 3B+

Following the launch on 14th March, a shiny new Raspberry Pi 3B+ landed on my doormat yesterday. I purchased it from The Pi Hut for the princely sum of £32 + £2.50 p&p. I was delighted that it arrived the day after I ordered it, despite my not having paid extra to guarantee delivery.

Naturally(+) the first thing I did after setting it up (and deciding on the somewhat unoriginal hostname of ‘custard’) was to install a copy of gfortran to compile and run the Whetstone double precision FORTRAN benchmark(*).

When I was a young programmer in the 1980s, the Whetstone benchmarks were acknowledged as being the standard for assessing general computing performance. I believe that they first appeared in the early 1970s, written in ALGOL. This was way before multi-core processors became the norm, so the benchmark doesn’t give a true reflection of the total computing power available on the Pi 3B+. It’s easy enough to multiply by four to get an estimate, of course, so I’m sticking with it!

Running on a single core, the Pi 3B+ performs the benchmark approximately 33% faster than the Pi 3B (an average over 10 runs of 530,348 KIPS vs 399,858 KIPS). Compared with the original (single core) Raspberry Pi (150,962 KIPS), the improvement in speed is around 3.5x (i.e. 14x, if it had been possible for this benchmark to use all four cores on the Pi 3B+).

Custard running the Whetstone double precision FORTRAN benchmark

Whetstone double precision benchmarks - Raspberry Pi 3B+ and predecessors
Comparing Raspberry Pi models using the Whetstone double precision benchmark (single core performance)



(+) Naturally for me, that is.

(*) My previous posts about this benchmark and the results on earlier Raspberry Pi models are here:

Raspberry Pi 3, Raspberry Pi 2, Raspberry Pi Zero and the original Raspberry Pi.

Before the EU single market and the disaster of a hard Brexit

Yesterday an opinion poll suggested that a majority of voters want the UK to remain in the single market. It’s encouraging that the majority take this view, as I’m old enough to remember the difficulties of trading without it.

In the 1980s, before the EU single market, I worked for a UK software company based in Nottingham. One of our partners was the French computer manufacturer, Bull. We had an agreement with Bull to support our software on their hardware – the SPS9 and SPS7. In a world before high-speed networks, this meant physically having the machines on loan in our offices, which required a heavily bureaucratic customs process known as a carnet. The machines had to be shipped back to France every year (“for a holiday”, as my director put it) and updated models returned. We were unable to carry out work for our Bull customers while the lengthy process of satisfying customs regulations took place.

One year, the machines were shipped back from France and held at the Port of Dover for inspection. At best, this process took a couple of days, but on this occasion the days turned into a week, and then almost two. Eventually, our shipping agent suggested that I give the customs people a call, as he was making no progress. After I got through to the right office I was met first with hostility, but once I’d turned on the East Midlands charm, the officer agreed to look into the problem for me.

The problem was simple – the carnet was in French, and the person in the customs office dealing with my shipment who spoke French was on holiday. They were due back in a couple of days. I sighed, as arguing with customs is a pointless exercise. Two days later after his colleague had returned, the computers were released and returned to Nottingham. However, this delay eventually contributed to the loss of a large contract.

Any Brexit agreement that fails to keep the UK in the single market will be a drag on the economy. And let’s not fool ourselves – the no-agreement, hard Brexit promoted by the extremists in the Conservative and Labour parties will be even more disastrous, especially for funding the public services we all rely on, like the NHS.

There’s a great opportunity on June 8th to stop this economic vandalism happening. We need to make sure that there are strong voices in the next parliament that will fight for our place in the single market.

The best way to ensure that this happens is to vote for Liberal Democrats.

Raspberry Pi – camera box

This weekend, I was finally happy that I’d managed to implement a reasonable temperature and humidity project as well as a motion detecting camera for my Raspberry Pi. I decided to invest £19 in a ModMyPi camera box to consolidate them onto my Pi 3. It arrived today, and after an evening’s fun this is the result.

Pi Camera Box
Lid open for testing. The DHT22 temperature and humidity sensor is at the front, with the motion sensor and camera mounted in the lid.

Pi Camera Box – installed in the garage
In situ.

I think it looks much better than my original attempt, even if the rather fiddly assembly took a couple of hours (with testing) rather than the 10 minutes claimed by the manufacturer! It also means that as my camera is now mounted the correct way up, I no longer need to rotate the image by 180 degrees in my code …

Update: after I’d installed this in the garage, I started to get a large number of false positives. A change back to my Pi 2 made little difference (although the original version I’d put together without the DHT22 had worked well). Finally, soldering a 10k resistor between the data and ground wires of the PIR detector seems to have resolved the issue of the data pin going high without any movement being sensed.