Do robo-pets dream of electric sheep?

Philip K. Dick would be proud. Twenty robo-pets have been “installed” in Lancashire care homes. Definitely not AI, but an interesting use of robotics nonetheless.

The council’s champion for older people is quoted as saying:

These robo-pets are fantastic because they look and act like the real thing. The dogs bark when they hear you, the cats purr when you stroke them.

Part of the cover of Brainchildren, by Daniel C. Dennett

UKAuthority reports that there are plans for more.

A Zambretti weather forecaster

When I was recovering from my stem cell transplant last year, I built a weather forecaster. It uses a Raspberry Pi, a BME280 sensor and a 20×4 character LCD screen. The forecasting algorithm I’d written for it was rudimentary, to say the least. However, earlier on this year I came across a device known as a Zambretti forecaster. These were made by Negretti and Zambra for the UK market in the 1920s.

The Zambretti device uses air pressure, the direction of change, season and wind direction to make a forecast. Depending on what you believe on t’internet, a forecast accuracy of 90% is possible. You can buy replicas from a popular forest-based e-commerce site if you want to. I didn’t, but with the help of a search engine and a number of people who’ve been down this route before, I wrote my own Zambretti forecasting algorithm. In FORTRAN 77, naturally.

The results so far have been encouraging. However, I’m of the opinion that the accuracy I’m perceiving may be due to the Forer effect, rather than the goodness of the algorithm. It’s true that different barometric conditions do produce different forecasts. However, I remain unsure as to the real difference between Fine : showers possible, Fair : showers likely and Fairly fine : showers. Not much I suspect.  

Anyway, it was producing good enough results to invest a few more pounds in a second LCD display. This retrieves the forecast made by the Raspberry Pi and sensor covered in cobwebs in the garage and displays it in more comfortable surroundings. This time I’ve stuck to C as my language of choice.

Raspberry Pi 4B plus a 20x4 LCD screen showing a Zambretti weather forecast
A Raspberry Pi 4B plus a 20×4 LCD screen, showing a Zambretti weather forecast all the way from my garage.

The current release of my Zambretti forecaster with remote display screen, with instructions, is available on GitHub. Some (most) of the code could definitely do with improvement …

Government Gateway chronicles part 1: “Microsoft are our common enemy”

In business it’s been my experience that chance happenings, hard work and good luck lead to success more often than detailed strategic planning. This was definitely true when I found myself involved in Software AG’s efforts around the Government Gateway in the early 2000s.

I joined Software AG in the summer of 2001 from a web content management startup (Mediasurface). It was fortunate that I did. Mediasurface was haemorrhaging venture capital at an alarming rate. Some weeks after I left, Mediasurface was downsized drastically. I still have my share option certificate and occasionally wonder what on earth I’d have done with all the riches it was supposed to have bestowed on me.

The invitation to join Software AG came from two former Computer Associates’ colleagues. I knew of Software AG because of a faintly ridiculous encounter I’d had with them in the mid-1980s. I’d asked an employee if the Adabas database (like Ingres) had an embedded SQL interface. This was answered in the form of a (very) long lecture on why SQL was the spawn of the devil and why Adabas was the only true way. A consistent feature of Software AG during its 50 years has been passionate advocacy for our (often unique) approach to software engineering.

I was at Mediasurface the next time I bumped into Software AG. This happened during the 2001 Socitm Spring Conference, where XML was being positioned as the key enabler for e-government. By lunchtime I’d become rather tired of being asked if the Mediasurface product was based on XML (it wasn’t) and had gone brochure hunting instead. I picked up one for Tamino – the XML database – and asked a Software AG representative who on earth would need such a thing. You can guess how long it was before I managed to prise myself away …

One of the early assignments I had after joining Software AG was working on our bid for a local government ‘pathfinder’ project at Sedgemoor District Council. (It wasn’t my first assignment – this was for Leeds City Council, who remain the only client I’ve ever worked with who insisted on recording our meetings.) The Sedgemoor ‘virtual service provider’ project was the first time that I’d seen the Government Gateway mentioned outside of the press. It was a ‘negotiated’ procurement process and in November 2001 we were informed that we’d not won it. One of the pieces of feedback we received was that they believed they needed a piece of Microsoft technology known as a DIS (Departmental Interface Server) to work with the Gateway. “You’re not a Microsoft partner, so you can’t meet this requirement” was the gist of what was said.

This was intensely annoying for a number of reasons. Firstly, as the Gateway used documented XML standards it was more than possible for us to work with it. We’d shown that we could, using Tamino X-bridge (later renamed EntireX XML Mediator) against the Inland Revenue’s ISV test Government Gateway. Secondly, our corporate tagline at the time was ‘The XML Company’, so senior management took a dim view of any suggestion that we didn’t do XML as well as someone else. Thirdly, we’d had lots of success that year in selling XML middleware to UK local government. If we were good enough for Birmingham City Council, we were good enough for anyone! One of my colleagues remarked that we should build our own DIS to demonstrate that they were wrong. At the time I laughed …

2002 arrived and Software AG was struggling. Worldwide sales had dropped from around €600m in FY2000 to just over €400m in FY2001, primarily due to difficulties in re-integrating the US business. The XML database market hadn’t grown in the way the analysts had predicted. The partner channel was also underperforming expectations, so there was renewed focus on trying to encourage business through that route. Alex Campbell, a long-time Software AG employee, was working as our UK partner manager at the time. One rainy lunchtime towards the end of April he happened to be walking past my desk and asked what I thought he should talk to the Sun local government team about. Having known Sun salespeople for most of my career (my first job after university was porting the PAFEC DOGS CAD software onto a Sun 2/50), I suggested that he might want to provoke them. This is what I came up with.

Our corporate offer wasn’t that exciting – as slide 3 of the presentation I spent that afternoon crafting shows.

Slide 3 of a presentation given to Sun in May 2002.

I predicted that by the time he’d finished talking through it, Sun’s salespeople would have already switched off. Hence slide 4. Marmite time. Alex was either going to love it or hate it. Given the legendary antipathy between Sun and Microsoft, I hoped that at the very least it would spark a discussion. It did.

The slide that started it all ...

By the time I got to the “architecture” slide Alex was sold on the concept and he pitched it to Sun in early May.

The first 'Sun Software AG DIS' "architecture" slide
The first ‘Sun Software AG DIS’ “architecture” slide. IESD = Integrated Electronic Service Delivery – a Tamino-based CRM product we sold to UK local government customers from 1999 onwards.

I wasn’t able to be at that first meeting with Sun, but the slides had the desired effect. I’m fairly sure that some people at Sun were thinking along similar lines too, but this certainly galvanised the effort. We agreed to jointly approach the Office of the e-Envoy to explore the idea further. But without the Marmite, obviously. Because it was nonsense (as well as grammatically incorrect). Without Microsoft, there would have been no gateway in the first place, and no opportunity for us.

To follow soon: Government Gateway chronicles part 2: The Gateway Interface Project gets the green light

Raspberry Pi 4B review

I’m now the proud owner of a  Raspberry Pi 4B. Naturally, I wanted to see how it performed using the Whetstone double precision benchmark. In FORTRAN, obviously. Over ten runs it averaged a single core performance of 1,259,871 KIPS. This is 2.4x faster than its predecessor, the 3B+, and 8.3x faster than the original model B, released in 2012.

Raspberry Pi Performance, 2012 - 2019
Whetstone double precision benchmark performance of Raspberry Pi models released between 2012 and 2019. (Single core, with no compiler optimisation flags set).

I’ve not yet decided what to do permanently with the latest addition to my collection. The others are used as a weather station, security cameras and for general tinkering. The graphics performance of the Pi 4B isn’t quite good enough to wean me off my Windows 10 PC for general office work and image editing. It’s not too far off being acceptable however. At £76.50 for the 4GB version (with a case, 3A power supply and Micro HDMI lead) it’s definitely better value.

The Pi 4B does get warm in use. vcgencmd reports a CPU temperature of 60 to 65 degrees Celsius when not under load. By way of contrast, my 3B+ idles at 50 degrees and the Pi Zero at 35 degrees. A heatsink or fan would seem like a good investment.

A Raspberry Pi 4B

I’m currently playing with the gfortran OpenMP compiler directives. I’ve already figured out the first two gotchas. The first is that gfortran wants the source file extension to be .f90 rather than .f (otherwise it ignores the OpenMP parallelisation directives in the code). The second is that the GNU implementation of FORTRAN 90 breaks backwards compatibility for traditional FORTRAN comments. Both were simple enough to fix once I’d worked out what was happening.

The compiler optimisation flags (-O1, -O2, and -O3) make a significant difference to performance. For benchmarking purposes I’ve not used them, but for any compute-intensive work they’re worth experimenting with. However, I still have nightmares about compiler optimisation settings breaking my code in the 1980s, hence my caution. Old habits die hard. The remaining challenge is figuring out which loops to parallelise. I have lots of not so lovely segmentation faults happening at the moment. Oh well.

The Raspberry Pi is one of the few things that make me feel proud to be British at the moment. Jo Swinson in her pitch to become the leader of the Liberal Democrats stresses the importance of the UK investing in technological leadership. She’s right, but we’ll need hundreds of similar successes. This is difficult enough to see happening while we’re still in the EU, let alone if we end up outside.

Smart metering – 1973 style

Forty-six years on from this quirky piece from Tomorrow’s World, 12 million or so first generation smart meters have been installed. Second generation meters are supposed to be ubiquitous by the end of 2020. But by January of this year, just 250,000 had been installed. The £11bn project is running years late and at least £500m over budget. It seems unlikely this target will be met. The “and then a miracle happens” graphs in this House of Commons Library article bear this pessimistic view out.

The forty pence per year to read each meter in 1973 is around £4.80 in today’s money. Assuming that there are 48 million domestic meters, the programme will cost at least £240 per meter. Break-even in 50 years – if meters were still read 1973-style and they were capable of lasting anything like that long. But at least you won’t find Michael Rodd rummaging through your cupboards.

Note: For the computer history geeks, the ‘small computer’ shown in the clip is a Digital Equipment Corporation PDP-8.

The brain is (mostly) not a computer

I recently had my attention drawn to this essay from May 2016 – The Empty Brain – written by psychologist Robert Epstein (thanks Andrew). In it, Epstein argues that the dominant information processing (IP) model of the brain is wrong. He states that human brains do not use symbolic representations of the world and do not process information like a computer. Instead, the IP model is one chained to our current level of technological sophistication. It is just a metaphor, with no biological validity.

Epstein points out that no-one now believes that the human brain works like a hydraulic system. However, this was the dominant model of intelligence from 300 BCE to the 1300s. It was based on the technology of the times. Similarly, no-one now argues that the brain works like a telegraph. This model was popularised by physicist Hermann von Helmholtz in the mid 1800s. The IP model of the brain can be traced back to the mid 20th century. Epstein cites John von Neumann (mathematician) and George Miller (psychologist) as being particularly influential in its development. His conclusion is that it is as misguided as the hydraulic and telegraphy models of earlier times.

If Epstein is correct, his argument has significant implications for the world of artificial intelligence. If humans are not information processors, with algorithms, data, models, memories and so on, then how could computing technology be programmed to become artificially intelligent? Is it even possible with current computing architectures? (*) There has been no successful ‘human brain project’ so far using such a model. I’m convinced (as both a computer scientist and psychologist) that there never will be.

However, I disagree with what I interpret as Epstein’s (applied) behaviourist view of human intelligence. The argument that we act solely on combinations of stimuli reinforced by the rewards or punishment that follow has been thoroughly debunked (+). There is a difference between explaining something and explaining away something. The behaviourist obsession with explaining away rather than attempting explanations of mental events is a serious blind spot to progress. As serious as the obsession with the IP model, to the exclusion of other possibilities, exhibited by many cognitive scientists.

Living together in perfect harmony on my bookshelf - some of the many psychological traditions.
Living together in perfect harmony on my bookshelf – some of the many psychological traditions.

Just because we can’t currently say how the brain changes in response to learning something, or how we later re-use this knowledge, doesn’t mean that the task will always be impossible. It certainly doesn’t mean that our brains don’t have biological analogues of memories or rules. Declarative and procedural knowledge exists, even if there isn’t a specific collection of neurons assigned to each fact or process we know.

Furthermore, the limits of our current understanding of brain architecture don’t invalidate the IP paradigm per se – at least for partly explaining human intelligence. We shouldn’t be surprised at this. After all, blood circulates around the body – and brain – using hydraulics. This earlier model of how the brain functions therefore isn’t completely invalid – at least, at a low level. It may therefore turn out that the IP model of intelligence is at least partly correct too.

Epstein finishes his essay by asserting “We are organisms, not computers. Get over it.” He’s right – up to a point. But the explanations (or explaining away) he offers are partial at best. Psychologists from all traditions have something to add to the debate about human intelligence. Discarding one approach solely on the grounds that it can’t explain everything that makes up human intelligence is just silly. And that’s something which Epstein definitely needs to get over.


(*) I asked the same question at the end of Brainchildren – Exploding robots and AI. I’m still not ready to answer it!

(+) For example, see Dennett’s essay Skinner Skinned in Brainstorms.

Brainchildren – Exploding robots and AI

I’ve recently been dipping into Brainchildren – essays on designing minds, by the philosopher Daniel C. Dennett. The essays in the book were written between the mid 1980s and 1998. There’s a whole section dedicated to artificial intelligence, hence my interest. It’s instructive to look at this topic from a philosophical rather than a pure technology perspective. It certainly makes a pleasant change from being constantly bombarded with the frenzied marketing half-truths of the last couple of years. I mean you, shouty Microsoft man.

Part of the cover of Brainchildren, by Daniel C. Dennett

My conclusion from reading Brainchildren is that many of the problems with AI, known in the 80s, have not been addressed. They’ve simply been masked by the rapidly increasing computer power (and decreasing costs) of the last three decades. Furthermore, the problems that beset AI are unlikely to be resolved in the near future without a fundamental shift in architectural approaches.

Exploding Robots – The Frame Problem

One such hard problem for AI is known as the frame problem. How do you get a computer program (controlling a robot, for example) to represent its world efficiently and to plan and execute its actions appropriately?

Dennett imagines a robot with a single task – to fend for itself. The robot is told that the spare battery it relies on is in a room with a bomb in it. It quickly decides to pull the cart its battery sits on out of the room. The robot acts and is destroyed, as the bomb is also on the cart. It failed to realise a crucial side effect of its planned action.

A rebuilt (and slightly dented) robot is programmed with the requirement to consider all potential side effects of its actions. It is set the same task and decides to pull the cart out of the room. However, it then spends so much time evaluating all of the possible implications of this act – Will it change the colour of the walls? What if the cart’s wheels need to rotate more times than it has wheels? – that the bomb explodes before it has had time to do anything.

The third version of the robot is designed to ignore irrelevant side effects. It is set the same task, decides on the same plan, but then appears to freeze. The robot is so busy ignoring all of the millions of irrelevant side effects that it fails to find the important one before the bomb explodes.

AI is impossible to deliver using 20th century technologies

Dennett concludes that an artificially intelligent program needs to be capable of ignoring most of what it knows or can deduce. As the robot thought experiments show, this can’t be achieved by exhaustively ruling out possibilities. In other words, not by the brute-force algorithms commonly used by chess playing programs and presumably by this fascinating system used in the NHS for identifying the extent of cancer tumours.

The hardest problem for an AI isn’t finding enough data about its world. It’s about making good decisions (*) – efficiently – about the 99% of data held that isn’t relevant.

Human brains do this qualification task incredibly efficiently, using a fraction of the computing power available to your average mobile ‘phone. Artificial “brains”, unless ridiculously constrained, simply don’t perform with anything like the flexibility required. My belief is that the key problem lies with the underlying computing architectures used for current “AI” systems. These architectures have been fundamentally unchanged since the 1940s. An entirely new approach to system architecture (hardware and software) is required, as the computational paradigm is unsuitable for the task.


(*) Decisions as good as, and ideally better than, those a trained person would make.

Artificial intelligence is (mostly) not intelligent

This is not AI-powered, even though it's about to forecast the weather.
This is not an artificial intelligence, even though it’s about to forecast the weather.

I last wrote about artificial intelligence here in February 2014. Four and a half years ago it wasn’t something that very many people were paying attention to. Artificial intelligence (AI) had been fashionable in computing circles back in the mid 1980s, but its popularity as a mainstream topic was long gone. Cognitive scientists and psychologists also appeared to have given up on the topic. For example, the Open University removed the chapters on cognitive modelling and connectionism from the final few presentations of DD303 sometime around 2011. Fortunately, this was after I’d taken the course.

However, you can’t help but notice that there’s been a huge surge in software companies jumping onto the AI bandwagon recently. Probably the most irritating manifestation of this trend is the shouty chap on the Microsoft TV advert. While what he’s peddling is interesting, it’s not a definition of AI that I recognise.

By these same standards, the camera on your smartphone isn’t using AI to take better photographs, regardless of manufacturer claims. Chess playing computers aren’t AIs. And self-driving cars – no, they’re not using AI to avoid obstacles.

All of these examples are simply using the vast computing power we have available today to scan for patterns in ever-larger datasets. Domain-specific algorithms are then used to obtain a result. Algorithms that enable them to play chess, avoid obstacles and take better photographs. The more computing power there is, the more options these algorithms can run, and the more intelligent they seem. But they use the power of brute force computing rather than anything resembling an artificial human – or biological – intelligence to obtain results.

If you ask your camera phone to play chess, you won’t get very far. Likewise, you’ll not find a self-driving car that can diagnose illnesses. There are people who can do both – maybe even simultaneously – and avoid obstacles while driving a car, figure out that Brexit is a bad idea and so on.

Having said all of that, these examples are still better uses of computing resources and power than cryptocurrency mining. At the time of writing this activity is consuming as much electricity as the whole of Austria and adding incrementally to climate change.

So if my earlier examples aren’t AI, what is?

The term AI should be reserved for systems that (a) simulate human cognition and (b) can subsequently be used to explain how human cognition works. An AI system should also not be inherently domain-specific. In other words, the computing framework (hardware plus software) used should be capable of being retrained to deliver solutions in multiple domains, potentially simultaneously, just as a person can.

Without such rigour being applied to the definition of AI, any or all computer programs could be called AI. Much as I love the algorithm I wrote for my premium bond simulator a few days ago, it’s not an AI. Neither is my weather forecaster.

I’m not trying to argue about the number of angels that will fit on a pin-head here. I have a real concern about the misuse of the term AI. There is genuinely interesting research being performed in artificial intelligence. SpiNNaker at Manchester University appears to be one such example.

However, nothing will stop the flow of funding to valuable AI research faster than the inevitable perception (I predict within 3 years) that AI has failed. This will happen because software marketeers don’t understand what AI is and don’t really care anyway. For them, AI is simply a means to shift more computing “stuff”. When it is no longer a hot topic it will be unceremoniously dumped and rubbished for the next “big thing”.

Think I’m exaggerating? Take a look at the rise and fall of any big computing trend of the last 40 years. Object databases in the mid 1990s, for example. Computing has always been the equivalent of the fashion business for nerds (like me).

Premium bond mythbusting

One of yesterday’s budget announcements was the lowering of the minimum premium bond purchase from £100 to £25 by March 2019. Inevitably the usual conspiracy theorists and/or people who don’t understand probability came out to play on various forums.

Some facts:

  • Every bond in every draw has an equal chance of winning a prize. Currently, these odds are 24,500 to 1 against.
  • If you hold a single £1 bond, then with average luck you’ll win a prize once every 24,500 months – or once every 2,041 years.
  • A £100 holding would improve these odds to once every 20 years or so.
  • Someone with the maximum holding of £50,000 could therefore expect to win around two prizes a month. 2.04, to be precise.

However, two myths seem to be in common circulation. The widespread belief in these myths perhaps explains why 42% still think that the NHS will get £350m/week extra after Brexit. (Also a myth – along with the idea that a Brexit of any type will deliver a dividend to the UK).

  • Myth 1 – blocks of consecutive bond numbers stand a better chance of winning than widely scattered bond numbers.
  • Myth 2 – a newer bond has a better chance of winning than an older bond.

Neither is true – as every bond in the draw has an equal chance of winning a prize.

Myth 1

It makes no difference whether you hold a single block of consecutive bonds or if they are scattered. Believing otherwise is as fallacious as suggesting that the sequence 6,6,6,6,6,6 is more or less likely than 1,2,3,4,5,6 when rolling a die six times. Assuming a fair die, any six number sequence is as likely as any other, as a die has no memory for what was rolled previously. The same is true for premium bonds – there’s no memory for which numbers have been drawn.

Myth 2

The old number / new number myth probably stems from the observation that new bonds seem to win more frequently than older ones – if you just look at lists of prize winners. However, this neglects the obvious point that older bonds are more likely to have been cashed in than newer ones. Regardless of when a bond was bought, it still has an equal chance of winning a prize. This myth is especially pernicious, as someone who withdraws older bonds to purchase newer ones loses at least a month of opportunity. This is because a bond bought (say) in November won’t be entered into a draw until the following January.

Crunching the numbers

Now, I realise that if you’re still struggling to see past these myths, some proof might be useful. So as part of my mental recovery from chemotherapy, I’ve written a premium bond simulator this morning. Its aim is to dispel these two myths.

It works by simulating 82 years’ worth of bond draws. 81 is the average UK life expectancy. The extra year stems from the rule that bonds can be left in the draw for up to a year after someone’s death.

To make the programme run in a reasonable length of time, the number of bonds in each draw has been scaled back from 60 billion (approximately the number in circulation) to 6 million. Maximum bond holdings have been scaled back proportionately – from 50,000 to 5 – as have the per-bond odds, from 24,500 to 1 down to 2.45 to 1. This means that the outcome – an average of 2.04 prizes per month for a maximum holder – is maintained in line with the real NS&I draw.

There are four bondholder types defined. Someone with a block of consecutive numbers, someone with widely scattered numbers, a holder who has old bonds (represented by low numbers) and a holder with new bonds (represented by high numbers).

Here’s the output of a couple of runs from earlier on this afternoon. They demonstrate that every bondholder type has an approximately equal chance of winning over a lifetime of bondholding.

On this run, the bondholder with scattered single bonds won more times than the bondholder with consecutive numbers
On this run, the bondholder with scattered single bonds won more times than the bondholder with a block of consecutive numbers.
On this run, older bonds outperformed new bonds.
On this run, older bonds outperformed new bonds.

If you’re still not convinced, here’s the source code so you can play with it yourself (github release).

#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>
#include <time.h>

#define TOTALBONDS 6000000
#define WINODDS    2.45
#define BONDSHELD  5
#define TOTALDRAWS 984

/* Return a pseudorandom number between 0 and 1 */
double rand0to1(void)
{
    return ((double)rand() / (double)RAND_MAX);
}

int main(void)
{
    int prizes, winner, i, drawnumber, prizesallocated;
    int blocktotal = 0, singletotal = 0, oldtotal = 0, newtotal = 0;

    /* Different bondholder types and their bond numbers */
    int blockbonds[BONDSHELD]  = {200000, 200001, 200002, 200003, 200004};
    int singlebonds[BONDSHELD] = {10534, 248491, 1000302, 4000522, 5200976};
    int oldbonds[BONDSHELD]    = {1, 10, 100, 1000, 10000};
    int newbonds[BONDSHELD]    = {5999900, 5999910, 5999920, 5999930, 5999940};

    /* static, as 6 million bools is too large for the stack */
    static bool allbonds[TOTALBONDS];

    /* Seed the pseudorandom number generator */
    srand((unsigned)time(NULL));

    /* Total prizes are calculated using current NS&I odds of 24,500 to 1 per bond held,
       scaled to 2.45 to 1 per bond held for this simulation */
    /* The total number of bonds in circulation is also scaled back by the same proportion,
       from around 60 billion to 6 million */
    /* Therefore the maximum holding in this simulation is 5 bonds, equivalent to 50,000
       in the real draw */
    /* Average luck implies each bondholder should win 2.04 times per draw - the same as
       in the real draw */

    /* Run the draw multiple times - 12 is equivalent to 1 year's worth of real draws */
    for (drawnumber = 0; drawnumber < TOTALDRAWS; drawnumber++) {
        /* Set up the draw - no-one has won yet */
        for (i = 0; i < TOTALBONDS; i++)
            allbonds[i] = false;

        /* Work out the total number of prizes */
        prizes = (int)(TOTALBONDS / WINODDS);

        /* Draw a new bond to win until all prizes are allocated */
        prizesallocated = 0;
        while (prizesallocated < prizes) {
            winner = (int)(rand0to1() * TOTALBONDS);
            if (winner == TOTALBONDS)
                winner = TOTALBONDS - 1;  /* guard the rand() == RAND_MAX edge case */
            /* The NS&I rules state that the same bond cannot win twice in the same draw */
            /* prizesallocated is not incremented in this event and the prize is redrawn */
            if (!allbonds[winner]) {
                allbonds[winner] = true;
                prizesallocated++;
            }
        }

        /* Check each bondholder against the draw, and increment the total number of
           times they have won */
        printf("\nWinners for draw %d\n", drawnumber + 1);
        for (i = 0; i < BONDSHELD; i++) {
            if (allbonds[blockbonds[i]])  { printf("Block bond %d wins!\n", blockbonds[i]);   ++blocktotal;  }
            if (allbonds[singlebonds[i]]) { printf("Single bond %d wins!\n", singlebonds[i]); ++singletotal; }
            if (allbonds[oldbonds[i]])    { printf("Old bond %d wins!\n", oldbonds[i]);       ++oldtotal;    }
            if (allbonds[newbonds[i]])    { printf("New bond %d wins!\n", newbonds[i]);       ++newtotal;    }
        }
    }

    /* Calculate what the average luck was for each of the bondholders */
    printf("\nSummary of results\n");
    printf("Block bond holder won on average %.2f times per draw\n",  (float)blocktotal / TOTALDRAWS);
    printf("Single bond holder won on average %.2f times per draw\n", (float)singletotal / TOTALDRAWS);
    printf("Old bond holder won on average %.2f times per draw\n",    (float)oldtotal / TOTALDRAWS);
    printf("New bond holder won on average %.2f times per draw\n",    (float)newtotal / TOTALDRAWS);

    return 0;
}