Smart metering – 1973 style

Forty-six years on from this quirky piece from Tomorrow’s World, 12 million or so first-generation smart meters are installed. Second-generation meters are supposed to be ubiquitous by the end of 2020. But by January of this year, just 250,000 had been installed, and the £11bn project is running years late and at least £500m over budget. It seems unlikely the target will be met. The “and then a miracle happens” graphs in this House of Commons Library article bear this pessimistic view out.

The forty pence per year to read each meter in 1973 is around £4.80 in today’s money. Assuming that there are 48 million domestic meters, the programme will cost at least £240 per meter. That’s break-even in 50 years (£240 ÷ £4.80 a year) – if meters were still read 1973-style and they were capable of lasting anything like that long. But at least you won’t find Michael Rodd rummaging through your cupboards.
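
For anyone who wants to check my back-of-envelope sums, here’s a minimal sketch in C using the same assumptions as above – 48 million domestic meters, and the £11bn budget plus the £500m overrun:

#include <stdio.h>

int main(void)
{
    const double annual_reading_cost = 4.80; /* 40p in 1973, inflated to today's money */
    const double programme_cost = 11.5e9;    /* £11bn budget plus £500m overrun */
    const double meters = 48e6;              /* assumed number of domestic meters */

    double per_meter = programme_cost / meters;
    printf("Cost per meter: £%.0f\n", per_meter);
    printf("Break-even vs 1973-style meter reading: %.0f years\n",
           per_meter / annual_reading_cost);
    return 0;
}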

Note: For the computer history geeks, the ‘small computer’ shown in the clip is a Digital Equipment Corporation PDP-8.

The brain is (mostly) not a computer

I recently had my attention drawn to this essay from May 2016 – The Empty Brain – written by psychologist Robert Epstein (thanks Andrew). In it, Epstein argues that the dominant information processing (IP) model of the brain is wrong. He states that human brains do not use symbolic representations of the world and do not process information like a computer. Instead, the IP model is one chained to our current level of technological sophistication. It is just a metaphor, with no biological validity.

Epstein points out that no-one now believes that the human brain works like a hydraulic system. However, this was the dominant model of intelligence from 300 BCE to the 1300s. It was based on the technology of the times. Similarly, no-one now argues that the brain works like a telegraph. This model was popularised by physicist Hermann von Helmholtz in the mid 1800s. The IP model of the brain can be traced back to the mid 20th century. Epstein cites John von Neumann (mathematician) and George Miller (psychologist) as being particularly influential in its development. His conclusion is that it is as misguided as the hydraulic and telegraphy models of earlier times.

If Epstein is correct, his argument has significant implications for the world of artificial intelligence. If humans are not information processors, with algorithms, data, models, memories and so on, then how could computing technology be programmed to become artificially intelligent? Is it even possible with current computing architectures? (*) There has been no successful ‘human brain project’ so far using such a model. I’m convinced (as both a computer scientist and psychologist) that there never will be.

However, I disagree with what I interpret as Epstein’s (applied) behaviourist view of human intelligence. The argument that we act solely on combinations of stimuli reinforced by the rewards or punishments that follow has been thoroughly debunked (+). There is a difference between explaining something and explaining it away. The behaviourist obsession with explaining away, rather than attempting explanations of, mental events is a serious obstacle to progress. As serious as the obsession with the IP model, to the exclusion of other possibilities, exhibited by many cognitive scientists.

Living together in perfect harmony on my bookshelf – some of the many psychological traditions.

Just because we can’t currently say how the brain changes in response to learning something, or how we later re-use this knowledge, doesn’t mean that the task will always be impossible. It certainly doesn’t mean that our brains don’t have biological analogues of memories or rules. Declarative and procedural knowledge exists, even if there isn’t a specific collection of neurons assigned to each fact or process we know.

Furthermore, the limits of our current understanding of brain architecture don’t invalidate the IP paradigm per se – at least for partly explaining human intelligence. We shouldn’t be surprised at this. After all, blood circulates around the body – and brain – using hydraulics. The earlier hydraulic model of how the brain functions isn’t completely invalid – at least at a low level. It may therefore turn out that the IP model of intelligence is at least partly correct too.

Epstein finishes his essay by asserting: “We are organisms, not computers. Get over it.” He’s right – up to a point. But the explanations (or explaining away) he offers are partial at best. Psychologists from all traditions have something to add to the debate about human intelligence. Discarding one approach solely on the grounds that it can’t explain everything that makes up human intelligence is just silly. And that’s something which Epstein definitely needs to get over.


(*) I asked the same question at the end of Brainchildren – Exploding robots and AI. I’m still not ready to answer it!

(+) For example, see Dennett’s essay Skinner Skinned in Brainstorms.

Brainchildren – Exploding robots and AI

I’ve recently been dipping into Brainchildren – essays on designing minds, by the philosopher Daniel C. Dennett. The essays in the book were written between the mid 1980s and 1998. There’s a whole section dedicated to artificial intelligence, hence my interest. It’s instructive to look at this topic from a philosophical rather than a pure technology perspective. It certainly makes a pleasant change from being constantly bombarded with the frenzied marketing half-truths of the last couple of years. I mean you, shouty Microsoft man.

Part of the cover of Brainchildren, by Daniel C. Dennett

My conclusion from reading Brainchildren is that many of the problems with AI, known in the 80s, have not been addressed. They’ve simply been masked by the rapidly increasing computer power (and decreasing costs) of the last three decades. Furthermore, the problems that beset AI are unlikely to be resolved in the near future without a fundamental shift in architectural approaches.

Exploding Robots – The Frame Problem

One such hard problem for AI is known as the frame problem. How do you get a computer program (controlling a robot, for example) to represent its world efficiently and to plan and execute its actions appropriately?

Dennett imagines a robot with a single task – to fend for itself. The robot is told that the spare battery it relies on is in a room with a bomb in it. It quickly decides to pull the cart its battery sits on out of the room. The robot acts and is destroyed, as the bomb is also on the cart. It failed to realise a crucial side effect of its planned action.

A rebuilt (and slightly dented) robot is programmed with the requirement to consider all potential side effects of its actions. It is set the same task and decides to pull the cart out of the room. However, it then spends so much time evaluating all of the possible implications of this act – Will it change the colour of the walls? What if the cart’s wheels need to rotate more times than it has wheels? – that the bomb explodes before it has had time to do anything.

The third version of the robot is designed to ignore irrelevant side effects. It is set the same task, decides on the same plan, but then appears to freeze. The robot is so busy ignoring all of the millions of irrelevant side effects that it fails to find the important one before the bomb explodes.

AI is impossible to deliver using 20th century technologies

Dennett concludes that an artificially intelligent program needs to be capable of ignoring most of what it knows or can deduce. As the robot thought experiments show, this can’t be achieved by exhaustively ruling out possibilities. In other words, not by the brute-force algorithms commonly used by chess-playing programs and presumably by this fascinating system used in the NHS for identifying the extent of cancer tumours.
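
To get a feel for why exhaustive checking is hopeless, here’s a toy illustration (mine, not Dennett’s): a planner that treats every subset of its known facts as potentially relevant faces 2^n combinations to evaluate.

#include <stdio.h>

int main(void)
{
    const double checks_per_second = 1e9; /* a generously fast robot */

    for (int facts = 10; facts <= 60; facts += 10) {
        double subsets = 1.0;
        for (int i = 0; i < facts; i++)
            subsets *= 2.0;               /* 2^facts possible combinations of facts */
        printf("%2d facts -> %.1e subsets, %.1e seconds to check them all\n",
               facts, subsets, subsets / checks_per_second);
    }
    return 0;
}

By 60 facts the exhaustive robot needs over 30 years per decision – and a real robot’s world contains far more than 60 facts.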

The hardest problem for an AI isn’t finding enough data about its world. It’s making good decisions (*) – efficiently – about the 99% of the data it holds that isn’t relevant.

Human brains do this filtering task incredibly efficiently, using a fraction of the computing power available to your average mobile ‘phone. Artificial “brains”, unless ridiculously constrained, simply don’t perform with anything like the flexibility required. My belief is that the key problem lies with the underlying computing architectures used for current “AI” systems. These architectures have been fundamentally unchanged since the 1940s. An entirely new approach to system architecture (hardware and software) is required, as the current computational paradigm is unsuitable for the task.


(*) Decisions as good as, and ideally better than, those a trained person would make.

Artificial intelligence is (mostly) not intelligent

This is not an artificial intelligence, even though it’s about to forecast the weather.

I last wrote about artificial intelligence here in February 2014. Four and a half years ago it wasn’t something that very many people were paying attention to. Artificial intelligence (AI) had been fashionable in computing circles back in the mid 1980s, but its popularity as a mainstream topic was long gone. Cognitive scientists and psychologists also appeared to have given up on the topic. For example, the Open University removed the chapters on cognitive modelling and connectionism from the final few presentations of DD303 sometime around 2011. Fortunately, this was after I’d taken the course.

However, you can’t help but notice that there’s been a huge surge in software companies jumping onto the AI bandwagon recently. Probably the most irritating manifestation of this trend is the shouty chap on the Microsoft TV advert. While what he’s peddling is interesting, it’s not a definition of AI that I recognise.

By these same standards, the camera on your smartphone isn’t using AI to take better photographs, regardless of manufacturer claims. Chess-playing computers aren’t AIs. And self-driving cars – no, they’re not using AI to avoid obstacles.

All of these examples are simply using the vast computing power we have available today to scan for patterns in ever-larger datasets. Domain-specific algorithms are then used to obtain a result – algorithms that enable them to play chess, avoid obstacles and take better photographs. The more computing power there is, the more options these algorithms can run, and the more intelligent they seem. But they use the power of brute-force computing rather than anything resembling an artificial human – or biological – intelligence to obtain results.

If you ask your camera phone to play chess, you won’t get very far. Likewise, you’ll not find a self-driving car that can diagnose illnesses. There are people who can do both – maybe even simultaneously – and avoid obstacles while driving a car, figure out that Brexit is a bad idea and so on.

Having said all of that, these examples are still better uses of computing resources and power than cryptocurrency mining. At the time of writing, this activity is consuming as much electricity as the whole of Austria and adding incrementally to climate change.

So if my earlier examples aren’t AI, what is?

The term AI should be reserved for systems that (a) simulate human cognition and (b) can subsequently be used to explain how human cognition works. An AI system should also not be inherently domain-specific. In other words, the computing framework (hardware plus software) used should be capable of being retrained to deliver solutions in multiple domains, potentially simultaneously, just as a person can.

Without such rigour being applied to the definition of AI, any or all computer programs could be called AI. Much as I love the algorithm I wrote for my premium bond simulator a few days ago, it’s not an AI. Neither is my weather forecaster.

I’m not trying to argue about the number of angels that will fit on a pin-head here. I have a real concern about the misuse of the term AI. There is genuinely interesting research being performed in artificial intelligence. SpiNNaker at Manchester University appears to be one such example.

However, nothing will stop the flow of funding to valuable AI research faster than the inevitable perception (I predict within 3 years) that AI has failed. This will happen because software marketeers don’t understand what AI is and don’t really care anyway. For them, AI is simply a means to shift more computing “stuff”. When it is no longer a hot topic it will be unceremoniously dumped and rubbished for the next “big thing”.

Think I’m exaggerating? Take a look at the rise and fall of any big computing trend of the last 40 years. Object databases in the mid 1990s, for example. Computing has always been the equivalent of the fashion business for nerds (like me).

Premium bond mythbusting

One of yesterday’s budget announcements was the lowering of the minimum premium bond purchase from £100 to £25 by March 2019. Inevitably the usual conspiracy theorists and/or people who don’t understand probability came out to play on various forums.

Some facts:

  • Every bond in every draw has an equal chance of winning a prize. Currently, these odds are 24,500 to 1 against.
  • If you hold a single £1 bond, then with average luck you’ll win a prize once every 24,500 months – or once every 2,041 years.
  • A £100 holding would improve these odds to once every 20 years or so.
  • Someone with the maximum holding of £50,000 could therefore expect to win around two prizes a month. 2.04, to be precise – see the quick check below.

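Since these numbers are often doubted, here’s a minimal check in C that derives them all from the single 24,500 to 1 figure:

#include <stdio.h>

int main(void)
{
    const double odds = 24500.0; /* per-bond odds against winning, per monthly draw */

    printf("£1 holding: one prize every %.0f months (about %d years)\n",
           odds, (int)(odds / 12.0));
    printf("£100 holding: one prize every %.1f years\n", odds / 100.0 / 12.0);
    printf("£50,000 holding: %.2f prizes per month\n", 50000.0 / odds);
    return 0;
}
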
However, two myths seem to be in common circulation. The widespread belief in these myths perhaps explains why 42% still think that the NHS will get £350m/week extra after Brexit. (Also a myth – along with the idea that a Brexit of any type will deliver a dividend to the UK).

  • Myth 1 – blocks of consecutive bond numbers stand a better chance of winning than widely scattered bond numbers.
  • Myth 2 – a newer bond has a better chance of winning than an older bond.

Neither is true – as every bond in the draw has an equal chance of winning a prize.

Myth 1

It makes no difference whether you hold a single block of consecutive bonds or if they are scattered. Believing otherwise is as fallacious as suggesting that the sequence 6,6,6,6,6,6 is more or less likely than 1,2,3,4,5,6 when rolling a die six times. Assuming a fair die, any six number sequence is as likely as any other, as a die has no memory for what was rolled previously. The same is true for premium bonds – there’s no memory for which numbers have been drawn.

Myth 2

The old number / new number myth probably stems from the observation that new bonds seem to win more frequently than older ones – if you just look at lists of prize winners. However, this neglects the obvious point that older bonds are more likely to have been cashed in than newer ones. Regardless of when a bond was bought, it still has an equal chance of winning a prize. This myth is especially pernicious, as someone who withdraws older bonds to purchase newer ones loses at least a month of opportunity. This is because a bond bought (say) in November won’t be entered into a draw until the following January.

Crunching the numbers

Now, I realise that if you’re still struggling to see past these myths, some proof might be useful. So, as part of my mental recovery from chemotherapy, I’ve written a premium bond simulator this morning. Its aim is to dispel these two myths.

It works by simulating 82 years’ worth of monthly bond draws: 81 is the average UK life expectancy, and the extra year stems from the rule that bonds can be left in the draw for up to a year after someone’s death.

To make the programme run in a reasonable length of time, the number of bonds in each draw has been scaled back from 60 billion (approximately the number in circulation) to 6 million, and maximum bond holdings proportionately from 50,000 to 5. The per-bond odds are shortened from 24,500 to 1 to 2.45 to 1, so that the outcome – an average of 2.04 prizes per month for a maximum holder – matches the real NS&I draw.

There are four bondholder types defined. Someone with a block of consecutive numbers, someone with widely scattered numbers, a holder who has old bonds (represented by low numbers) and a holder with new bonds (represented by high numbers).

Here’s the output of a couple of runs from earlier on this afternoon. They demonstrate that every bondholder type has an approximately equal chance of winning over a lifetime of bondholding.

On this run, the bondholder with scattered single bonds won more times than the bondholder with a block of consecutive numbers.
On this run, older bonds outperformed new bonds.

If you’re still not convinced, here’s the source code so you can play with it yourself.

#include <stdio.h> 
#include <stdbool.h>
#include <stdint.h>
#include <time.h>
#include <stdlib.h>
 
#define TOTALBONDS 6000000
#define WINODDS    2.45
#define BONDSHELD  5
#define TOTALDRAWS 984
 
 
double rand0to1()
{
        /* Return a pseudorandom number in [0,1). Dividing by RAND_MAX+1.0 */
        /* ensures the result is always strictly less than 1, so a scaled  */
        /* bond index can never fall outside the allbonds array.           */
        return ((double)rand()/((double)RAND_MAX+1.0));
}
 
int main(void)
{
        int prizes, winner, i, drawnumber, prizesallocated; 
        int blocktotal=0, singletotal=0, oldtotal=0, newtotal=0;
 
        /* Different bondholder types and their bond numbers */
        int blockbonds[BONDSHELD]={200000,200001,200002,200003,200004};
       	int singlebonds[BONDSHELD]={10534,248491,1000302,4000522,5200976};
        int oldbonds[BONDSHELD]={1,10,100,1000,10000};
        int newbonds[BONDSHELD]={5999900,5999910,5999920,5999930,5999940};
 
        /* static: six million bools would risk overflowing the stack */
        static bool allbonds[TOTALBONDS];
 
        /* Seed the pseudorandom number generator */
        srand(time(NULL));
 
        /* Total prizes are calculated using current NS&I odds of 24,500 to 1 per bond held, scaled to 2.45 to 1 per bond held for this simulation */
        /* The total number of bonds in circulation is also scaled back by the same proportion, from around 60 billion to 6 million */
        /* Therefore the maximum holding in this simulation is 5 bonds, equivalent to 50,000 in the real draw */
        /* Average luck implies each bondholder should win 2.04 times per draw - the same as in the real draw */
 
        /* Run the draw TOTALDRAWS times - 984 monthly draws is equivalent to the 82 years simulated */
 
        for (drawnumber=0; drawnumber<TOTALDRAWS; drawnumber++) {
 
       		/* Set up the draw - no-one has won yet */
        	for (i=0;i<TOTALBONDS;i++)  { 
			allbonds[i]=false; 
        	}
 
        	/* Work out the total number of prizes */
		prizes = (int) (TOTALBONDS / WINODDS);
 
        	/* Draw a new bond to win until all prizes are allocated */
        	prizesallocated=0;
        	while (prizesallocated<prizes) {
            		winner = (int) (rand0to1()*TOTALBONDS);
            		/* The NS&I rules state that the same bond cannot win twice in the same draw */
           		/* prizesallocated is not incremented in this event and the prize is reallocated */
            		if (!allbonds[winner]) {
               			allbonds[winner] = true;
               	 		++prizesallocated;
            		}
        	}
 
        	/* Check each bondholder against the draw, and increment the total number of times they have won */
                printf("\nWinners for draw %d\n",drawnumber+1);
		for (i=0;i<BONDSHELD;i++) {
  			if (allbonds[blockbonds[i]]) { printf("Block bond %d wins!\n",blockbonds[i]); ++blocktotal; }
  			if (allbonds[singlebonds[i]]) { printf("Single bond %d wins!\n",singlebonds[i]); ++singletotal; }
  			if (allbonds[oldbonds[i]])  { printf("Old bond %d wins!\n",oldbonds[i]); ++oldtotal; }
  			if (allbonds[newbonds[i]]) { printf("New bond %d wins!\n",newbonds[i]); ++newtotal; }
        	}
	}
 
        /* Calculate what the average luck was for each of the bondholders */
 
	printf("\nSummary of results\n");
        printf("Block bond holder won on average %.2f times per draw\n", (float) blocktotal/TOTALDRAWS);
        printf("Single bond holder won on average %.2f times per draw\n", (float) singletotal/TOTALDRAWS);
        printf("Old bond holder won on average %.2f times per draw\n", (float) oldtotal/TOTALDRAWS);
        printf("New bond holder won on average %.2f times per draw\n", (float) newtotal/TOTALDRAWS);
 
        return 0;
}

Pester goes – but that isn’t the biggest surprise of the TSB debacle

Paul Pester, CEO of TSB, has finally left the business after the bank’s customers suffered another long weekend of failing processes and systems. That he resigned today “with immediate effect” isn’t the most surprising aspect of the story, even after he grimly clung on for months after the initial failures in April.

From the BBC report on Paul Pester’s resignation

The BBC report that 26,000 TSB customers (out of 4.5 million claimed on their LinkedIn page) closed their accounts as a result of the TSB’s botched project to change their banking systems.

I find it astonishing that nearly 199 in 200 customers chose to stay with the bank. It’s even more astonishing that 20,000 people have actively decided to become customers since April, making a net loss of just 6,000. At this rate TSB will have more customers by the end of 2018 than they had at the beginning.

On this basis, having a high-profile business failure looks like a recipe for success. British financial consumers, no matter how badly they’ve been treated, seem unlikely to change their allegiance. As it happens, it’s also the trait that the government are banking on to push through Brexit.

What to do if you get a scam internet service provider call

Over the last few days I’ve received a number of calls from scammers posing as my internet service provider (ISP). Which? has also noted an uptick in activity from these parasites.

If you do answer a call from them, the best advice is to hang up and block the number they called from. Whatever you do, don’t be fooled into providing them with your account details or installing and running any software on your computer. The one call I did answer, rather than letting it go through to voicemail, was amateurish in the extreme, but could easily have fooled a vulnerable person.

Numbers that I’ve caught and blocked in the last three days include:

  • 0151 327 3276 and 0151 329 0986

This appears to be an Indian outfit routing calls via a Liverpool number. Their gambit was to suggest that Nigerian scammers had compromised my router. Because my router has no keyboard(!), they needed remote access to my computer to change its address.

Even if the claim were credible, changing the (external) address on a router is usually as simple as restarting it. This is because UK retail ISPs allocate your address dynamically and, in some cases, the same one is shared by multiple consumers. I managed to waste 10 minutes of their time by feigning incompetence. They eventually got bored with me and hung up.

  • 020 9637 1427

This is an automated message, claiming that your router has been compromised and threatening disconnection within 12 hours if you don’t press 1 to speak to a technician. Don’t press 1 – hang up!

  • +33 883 571 187, +33 983 215 066, +33 998 826 457, +33 239 932 929, +33 537 542 458, +33 307 433 634, +33 248 324 733 and +33 913 120 655

All of these (French) numbers rang within a few hours of each other. The chances are that the numbers are being spoofed and that all the calls originate from the same scammers’ operation.

Remember:
  • Your ISP will never call you from an “unknown” number.
  • Your ISP will never call and randomly ask for your personal information, such as your account number, bank details, date of birth etc.
  • If you’re unsure if a call is genuine, hang up and contact your ISP on their official number.
  • If you’ve been scammed or an attempt has been made, contact Action Fraud online or on 0300 123 2040.

Raspberry Pi weather forecaster

In my ongoing struggle to overcome chemo-brain, I’ve recently created a rudimentary (*) weather forecaster. It’s based on a Raspberry Pi Zero W that came as part of a gift subscription to the MagPi. It incorporates a BME280 sensor (for reporting pressure, temperature and humidity) and an LCD display.

The current prototype

Not pretty, but functional. It requires just 3 hours of historical data before it makes a forecast! Somehow I don’t think anyone will claim that professional weather forecasters are no longer needed … unless you’re someone who still believes in unicorns. (+)

Prototype Raspberry Pi weather forecaster. The Pi is hidden under the multiplexing board.
The technical stuff

The LCD display and BME280 sensor are both connected to the Pi using the I2C bus exposed by GPIO (physical) pins 3 (data) and 5 (clock). Power to the LCD is provided through a 5v pin (2 or 4) and to the BME280 through a 3.3v pin (1 or 17). The final wire to each component is the ground. Any of GPIO pins 6, 9, 14, 20, 25, 30, 34 or 39 will do.

The coding tasks (in C and FORTRAN) were greatly simplified by the use of the excellent wiringPi library.

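For a flavour of what the wiringPi I2C calls look like, here’s a minimal sketch (not the code from the repo) that simply checks the BME280 is answering. The 0x76 device address and the 0xD0 chip-ID register are the datasheet defaults, so treat them as assumptions if your board is wired differently:

#include <stdio.h>
#include <wiringPiI2C.h>

#define BME280_ADDR 0x76 /* default I2C address - some breakout boards use 0x77 */
#define REG_CHIP_ID 0xD0 /* chip-ID register - a BME280 should return 0x60 */

int main(void)
{
    int fd = wiringPiI2CSetup(BME280_ADDR);
    if (fd < 0) {
        fprintf(stderr, "Failed to open the I2C device\n");
        return 1;
    }

    int id = wiringPiI2CReadReg8(fd, REG_CHIP_ID);
    printf("Chip ID: 0x%02X (expected 0x60 for a BME280)\n", id);
    return 0;
}

Compile with gcc -o bmecheck bmecheck.c -lwiringPi and run it before wading into the temperature, pressure and humidity registers.
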
Release 1.0 of the code is available on github should anyone want to enjoy a good laugh at my legacy coding skills.

Costs

BME280 – £5.45 (I bought two for £10.90, just in case my soldering skills let me down).

LCD display – £7.99

Raspberry Pi Zero W – free as part of the current MagPi subscription offer, or around £10-£15 if you need to buy one and a 40 pin GPIO header.

Next steps

Finding a suitably sized waterproof project box with a transparent lid to house the electronics (sadly far harder than it used to be with the demise of Maplins earlier on this year).

Tidying up the wiring by using a piece of Veroboard (other brands are available) so I don’t need to use my only multiplexing board (by far the most expensive component in the current build).

Creating a better forecasting model. In FORTRAN, naturally. I acknowledge that it could take me some time to become as good as the Met Office …


(*) “Rudimentary” makes it sound better than it is. Here’s the current forecasting model it uses, for those who don’t want to wade through all the code on github.

      SUBROUTINE MFCAST(PDIFF,CIND,CFORE)
C----------------------------------------------------------------------C
C                                                                      C
C     MAKE FORECAST BASED ON 3 HOURLY PRESSURE DIFFERENCE              C
C                                                                      C
C     PDIFF - PRESSURE DIFFERENCE IN LAST THREE HOURS                  C
C     CIND  - CHANGE INDICATOR STRING                                  C
C     CFORE - FORECAST STRING                                          C
C                                                                      C
C     AUTHOR: TJH 26-07-2018                                           C
C                                                                      C
C----------------------------------------------------------------------C
      REAL PDIFF
      CHARACTER*2 CIND
      CHARACTER*20 CFORE
C
      IF (PDIFF.LE.-0.5 .AND. PDIFF.GE.-3.0) THEN
         CIND="v "
         CFORE=" Some rain possible "
      ELSE IF (PDIFF.LT.-3.0 .AND. PDIFF.GE.-6.0) THEN
         CIND="vv"
         CFORE="Wind,rain;fine later"
      ELSE IF (PDIFF.LT.-6.0) THEN
         CIND="vv"
         CFORE="    ** STORMY **    "
      ELSE IF (PDIFF.GE.0.5 .AND. PDIFF.LE.6.0) THEN
         CIND="^ "
         CFORE="Fine, becoming drier"
      ELSE IF (PDIFF.GT.6.0) THEN
         CIND="^^"
         CFORE="Becoming dry &amp; windy"
      ELSE
         CIND="--"
         CFORE="No change in weather"
      END IF
      RETURN
      END


(+) For example, someone who still believes that Brexit is a really good idea. So maybe I should approach the disgraced former defence secretary, Liam Fox, to promote it for me.

B&Q (diy.com) = Do It Yourself integration

Earlier on this week I ordered some new fence panels for delivery from diy.com (B&Q). This was the message I received during checkout:

DIY "digital" at B&Q
DIY “digital” at B&Q

A DIY (not very) digital experience. Fortunately, the part of their instructions about the letter was wrong: I received an email from their supplier some hours later to organise delivery. This (naturally, given the paucity of the shopping experience so far) involved re-typing a B&Q order number into their website, even though they could have provided me with a unique URL to follow in their email.

If only there were a company that provides market-leading business-to-business integration software …

Whetstone FORTRAN benchmark: Raspberry Pi 3B+

Following the launch on 14th March, a shiny new Raspberry Pi 3B+ landed on my doormat yesterday. I purchased it from The Pi Hut for the princely sum of £32 + £2.50 p&p. I was delighted that it arrived the day after I ordered it, despite not having paid extra to guarantee delivery.

Naturally(+) the first thing I did after setting it up (and deciding on the somewhat unoriginal hostname of ‘custard’) was to install a copy of gfortran to compile and run the Whetstone double precision FORTRAN benchmark(*).

When I was a young programmer in the 1980s, the Whetstone benchmarks were acknowledged as being the standard for assessing general computing performance. I believe that they first appeared in the early 1970s, written in ALGOL. This was way before multi-core processors became the norm, so the benchmark doesn’t give a true reflection of the total computing power available on the Pi 3B+. It’s easy enough to multiply by four to get an estimate, of course, so I’m sticking with it!

Running on a single core, the Pi 3B+ performs the benchmark approximately 33% faster than the Pi 3B (an average over 10 runs of 530,348 KIPS vs 399,858 KIPS). Compared with the original (single core) Raspberry Pi (150,962 KIPS), the improvement in speed is around 3.5x (i.e. 14x, if it had been possible for this benchmark to use all four cores on the Pi 3B+).
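
If you want to check those ratios yourself, a few lines of C reproduce them from the raw KIPS figures:

#include <stdio.h>

int main(void)
{
    /* Average KIPS over 10 runs, as reported above */
    const double pi3bplus = 530348.0, pi3b = 399858.0, pi1 = 150962.0;

    printf("Pi 3B+ vs Pi 3B: %.0f%% faster\n", (pi3bplus / pi3b - 1.0) * 100.0);
    printf("Pi 3B+ vs original Pi: %.1fx on one core (~%.0fx across all four)\n",
           pi3bplus / pi1, 4.0 * pi3bplus / pi1);
    return 0;
}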

Custard running the Whetstone double precision FORTRAN benchmark

Comparing Raspberry Pi models using the Whetstone double precision benchmark (single core performance)


(+) Naturally for me, that is.

(*) My previous posts about this benchmark and the results on earlier Raspberry Pi models are here:

Raspberry Pi 3, Raspberry Pi 2, Raspberry Pi Zero and the original Raspberry Pi.