Like many others, I’ve been eagerly awaiting the release of WordPress 5.0 and its new Gutenberg editor. The project, however, appears to have run into problems. The release date has been moved twice – it currently sits as “TBD”.
I’m hoping that the people running the project have read “The Mythical Man Month”. To get the release back on track, Brooks recommends:
“Take no small slips … allow enough time in the new schedule to ensure that the work can be carefully and thoroughly done, and that rescheduling will not have to be done again.”
“Trim the task … In practice this tends to happen anyway … only alternatives are to trim it formally and carefully, to reschedule, or to watch the task get silently trimmed by hasty design and incomplete testing.” (No-one in their right mind would want the last type of trimming to take place).
Not adding more people to an already late project. “Brooks’ Law: Adding manpower to a late software project makes it later.”
I’m looking forward to seeing WordPress 5.0 in the wild, but I’m happy to wait. In the words written on the menu of Antoine’s restaurant in New Orleans:
Good cooking takes time. If you are made to wait, it is to serve you better, and to please you.
I’ve recently been dipping into Brainchildren: Essays on Designing Minds, by the philosopher Daniel C. Dennett. The essays in the book were written between the mid 1980s and 1998. There’s a whole section dedicated to artificial intelligence, hence my interest. It’s instructive to look at this topic from a philosophical rather than a purely technological perspective. It certainly makes a pleasant change from being constantly bombarded with the frenzied marketing half-truths of the last couple of years. I mean you, shouty Microsoft man.
My conclusion from reading Brainchildren is that many of the problems with AI, known in the 80s, have not been addressed. They’ve simply been masked by the rapidly increasing computer power (and decreasing costs) of the last three decades. Furthermore, the problems that beset AI are unlikely to be resolved in the near future without a fundamental shift in architectural approaches.
Exploding Robots – The Frame Problem
One such hard problem for AI is known as the frame problem. How do you get a computer program (controlling a robot, for example) to represent its world efficiently and to plan and execute its actions appropriately?
Dennett imagines a robot with a single task – to fend for itself. The robot is told that the spare battery it relies on is in a room with a bomb in it. It quickly decides to pull the cart its battery sits on out of the room. The robot acts and is destroyed, as the bomb is also on the cart. It failed to realise a crucial side effect of its planned action.
A rebuilt (and slightly dented) robot is programmed with the requirement to consider all potential side effects of its actions. It is set the same task and decides to pull the cart out of the room. However, it then spends so much time evaluating all of the possible implications of this act – Will it change the colour of the walls? What if the cart’s wheels need to rotate more times than it has wheels? – that the bomb explodes before it has had time to do anything.
The third version of the robot is designed to ignore irrelevant side effects. It is set the same task, decides on the same plan, but then appears to freeze. The robot is so busy ignoring all of the millions of irrelevant side effects that it fails to find the important one before the bomb explodes.
AI is impossible to deliver using 20th century technologies
Dennett concludes that an artificially intelligent program needs to be capable of ignoring most of what it knows or can deduce. As the robot thought experiments show, this can’t be achieved by exhaustively ruling out possibilities. In other words, not by the brute-force algorithms commonly used by chess-playing programs, and presumably by this fascinating system used in the NHS for identifying the extent of cancer tumours.
The hardest problem for an AI isn’t finding enough data about its world. It’s about making good decisions (*) – efficiently – about the 99% of data held that isn’t relevant.
Human brains do this qualification task incredibly efficiently, using a fraction of the computing power available to your average mobile ‘phone. Artificial “brains”, unless ridiculously constrained, simply don’t perform with anything like the flexibility required. My belief is that the key problem lies with the underlying computing architectures used for current “AI” systems. These architectures have been fundamentally unchanged since the 1940s. An entirely new approach to system architecture (hardware and software) is required, as the computational paradigm is unsuitable for the task.
(*) Decisions as good as, and ideally better than, those a trained person would make.
I last wrote about artificial intelligence here in February 2014. Four and a half years ago it wasn’t something that very many people were paying attention to. Artificial intelligence (AI) had been fashionable in computing circles back in the mid 1980s, but its popularity as a mainstream topic was long gone. Cognitive scientists and psychologists also appeared to have given up on the topic. For example, the Open University removed the chapters on cognitive modelling and connectionism from the final few presentations of DD303 sometime around 2011. Fortunately, this was after I’d taken the course.
However, you can’t help but notice that there’s been a huge surge in software companies jumping onto the AI bandwagon recently. Probably the most irritating manifestation of this trend is the shouty chap on the Microsoft TV advert. While what he’s peddling is interesting, it’s not a definition of AI that I recognise.
By these same standards, the camera on your smartphone isn’t using AI to take better photographs, regardless of manufacturer claims. Chess-playing computers aren’t AIs. And self-driving cars – no, they’re not using AI to avoid obstacles.
All of these examples are simply using the vast computing power we have available today to scan for patterns in ever-larger datasets. Domain-specific algorithms are then used to obtain a result – algorithms that enable them to play chess, avoid obstacles and take better photographs. The more computing power there is, the more options these algorithms can evaluate, and the more intelligent they seem. But they use the power of brute-force computing, rather than anything resembling an artificial human – or biological – intelligence, to obtain results.
If you ask your camera phone to play chess, you won’t get very far. Likewise, you’ll not find a self-driving car that can diagnose illnesses. There are people who can do both – maybe even simultaneously – and avoid obstacles while driving a car, figure out that Brexit is a bad idea and so on.
Having said all of that, these examples are still better uses of computing resources and power than cryptocurrency mining. At the time of writing, this activity is consuming as much electricity as the whole of Austria and adding incrementally to climate change.
So if my earlier examples aren’t AI, what is?
The term AI should be reserved for systems that (a) simulate human cognition and (b) can subsequently be used to explain how human cognition works. An AI system should also not be inherently domain-specific. In other words, the computing framework (hardware plus software) used should be capable of being retrained to deliver solutions in multiple domains, potentially simultaneously, just as a person can.
Without such rigour being applied to the definition of AI, any or all computer programs could be called AI. Much as I love the algorithm I wrote for my premium bond simulator a few days ago, it’s not an AI. Neither is my weather forecaster.
I’m not trying to argue about the number of angels that will fit on a pin-head here. I have a real concern about the misuse of the term AI. There is genuinely interesting research being performed in artificial intelligence. SpiNNaker at Manchester University appears to be one such example.
However, nothing will stop the flow of funding to valuable AI research faster than the inevitable perception (I predict within 3 years) that AI has failed. This will happen because software marketeers don’t understand what AI is and don’t really care anyway. For them, AI is simply a means to shift more computing “stuff”. When it is no longer a hot topic it will be unceremoniously dumped and rubbished for the next “big thing”.
Think I’m exaggerating? Take a look at the rise and fall of any big computing trend of the last 40 years. Object databases in the mid 1990s, for example. Computing has always been the equivalent of the fashion business for nerds (like me).
In my ongoing struggle to overcome chemo-brain, I’ve recently created a rudimentary (*) weather forecaster. It’s based on a Raspberry Pi Zero W that came as part of a gift subscription to the MagPi. It incorporates a BME280 sensor (for reporting pressure, temperature and humidity) and an LCD display.
The current prototype
Not pretty, but functional. It requires just 3 hours of historical data before it makes a forecast! Somehow I don’t think anyone will claim that professional weather forecasters are no longer needed … unless you’re someone who still believes in unicorns. (+)
The technical stuff
The LCD display and BME280 sensor are both connected to the Pi using the I2C bus exposed by GPIO (physical) pins 3 (data) and 5 (clock). Power to the LCD is provided through a 5v pin (2 or 4) and to the BME280 through a 3.3v pin (1 or 17). The final wire to each component is the ground. Any of GPIO pins 6, 9, 14, 20, 25, 30, 34 or 39 will do.
The coding tasks (in C and FORTRAN) were greatly simplified by the use of the excellent wiringPi library.
      SUBROUTINE MFCAST(PDIFF,CIND,CFORE)
C----------------------------------------------------------------------C
C                                                                      C
C     MAKE FORECAST BASED ON 3 HOURLY PRESSURE DIFFERENCE              C
C                                                                      C
C     PDIFF - PRESSURE DIFFERENCE IN LAST THREE HOURS                  C
C     CIND  - CHANGE INDICATOR STRING                                  C
C     CFORE - FORECAST STRING                                          C
C                                                                      C
C     AUTHOR: TJH 26-07-2018                                           C
C                                                                      C
C----------------------------------------------------------------------C
      REAL PDIFF
      CHARACTER*(*) CIND,CFORE
C
      IF (PDIFF.LE.-0.5 .AND. PDIFF.GE.-3.0) THEN
         CFORE=" Some rain possible "
      ELSE IF (PDIFF.LT.-3.0 .AND. PDIFF.GE.-6.0) THEN
C        (forecast string for this band lost in transcription)
      ELSE IF (PDIFF.LT.-6.0) THEN
         CFORE="    ** STORMY **    "
      ELSE IF (PDIFF.GE.0.5 .AND. PDIFF.LE.6.0) THEN
         CFORE="Fine, becoming drier"
      ELSE IF (PDIFF.GT.6.0) THEN
         CFORE="Becoming dry & windy"
      ELSE
         CFORE="No change in weather"
      END IF
      RETURN
      END
(+) For example, someone who still believes that Brexit is a really good idea. So maybe I should approach the disgraced former defence secretary, Liam Fox, to promote it for me.
I’ve been following the news on the scandal surrounding Volkswagen with keen interest. It appears that at the heart of the matter is a piece of software, written for or by Volkswagen, that forms part of their engine management system. As a software professional of more than 30 years’ standing, it angers me that an apparently reputable organisation (of which I was a customer for many years) thinks that it is acceptable to misuse code in this way. While there’s clearly a need for those in charge of VW to take responsibility, there is also a need for the individual software professionals involved to examine their conduct. So I’ve been pleased to see that the British Computer Society CEO, Paul Fletcher, has published a blog article on this topic today.
Software is no longer confined to large computers in purpose-built rooms – it’s everywhere
In it, Paul calls for all technologists to work to a strong professional code of conduct. Naturally, the BCS has a code of conduct that it expects its members to conform to. However, in my opinion, it’s not as strongly worded or as visible as it needs to be, particularly when you compare it to those of other professional bodies, such as the British Psychological Society’s code of ethics and conduct. Professional qualifications and membership really mean something in psychology – but despite rising membership numbers and the BCS’s best efforts, the equivalent professional qualifications and membership for software professionals carry a fraction of the weight that they ought to.
Sadly, even if the code of conduct were stronger and more visible, the BCS would need far more clout than it has today to promote it widely. Even more importantly, there is no government-backed regulatory framework to ensure that the BCS can support members who are put under undue pressure to act unethically.
I believe we should be just as interested in ensuring that people who write and implement software are as well-regulated and ethically aware as professional psychologists. After all, unethical behaviour in software development can have potentially devastating effects on the environment, health, wealth … in fact, on any aspect of society touched by software.
Which, as society is becoming increasingly aware, is all of it.
The best thing that could come out of the VW scandal is that we all start to pay far more attention to ensuring that technologists, especially software developers, understand their ethical duty to society and that they have the necessary professional and regulatory backing to be able to stand up to rogue employers.
Nothing better illustrates car insurers preying on loyal customers than Sarah Cooper’s tweet. “My car insurance renewal is £1,200. New policy with same company is £690. How do they justify this?” They don’t. They just do it.
I’ve had my car insurance renewal notice from Allianz today. Comparing it with last year’s premium, they want an additional 51%! Nothing has changed – except that I’ve had another claim-free year, bringing my total to 10. A quick check of a couple of price comparison websites showed that, for the same cover, the cheapest quotation was around £15 less than I’d paid this year, with tens of quotations clustered around £10–£20 more expensive than last year. There were three or four (out of a hundred or so) that were more expensive than the Allianz renewal, but they were offering free unicorns as well. (OK, I’m fibbing about the unicorns).
I rang Allianz up. I was calm. I politely explained the situation. I was reasonable and persuasive. I asked that they consider renewing my policy at around the same price as last year, or perhaps at around the median quotation I’d found for this year.
Their call handler was lovely, but her response was:
We don’t price match sir. I could re-quote you, but the result would be the same.
They wouldn’t budge by even a penny. I hate being taken for a fool, and her excuses became less and less convincing as I suggested that they were guilty of sharp practice. I’ve cancelled my policy with them and I’ll do everything I possibly can to make sure that I don’t use Allianz again any time soon.
So if customer loyalty is as worthless as it appears to be from this example, I wonder why so many software companies are marketing customer experience management and customer loyalty solutions?
Perhaps they’d be better off trying to sell customer disloyalty solutions instead.
One of the many unsubstantiated claims surrounding open source approaches to software development is that they are inherently less prone to errors than closed source development, because there are more eyes able to inspect the underlying code. It’s an irrational claim: it’s not the number of eyes that matters, but the quality of the brains behind them and whether they’re looking in the first place. No particular software business model has a monopoly on talent, so it has never been credible to claim that open source is somehow likely to be better in this respect.
“Any fool can know. The point is to understand.” ― Albert Einstein
More from CeBIT – David Cameron talks to Software AG CEO Karl-Heinz Streibich and German Chancellor Angela Merkel about the benefits of digital technologies such as those provided by Software AG.
This is very important for the future of Europe, because we need to be more competitive, we need to be more productive, we need to keep our costs down and digital technology can be a very big part of that. We shouldn’t just talk about it, we need to make sure that Angela and I at the European Union, that we make sure we complete the digital single market – that must be one of the tasks for the next Commission, for the next Council. It’s something that we’ve dedicated ourselves to doing and we will make sure that happens. It will benefit your company and these sorts of technologies. Thank you very much.
Recently, there’s been quite a bit of banter in the comments of a post I wrote a couple of years ago about my first employer, PAFEC Ltd.
It’s got me thinking. It would be great to try to re-create a working copy of their most famous software package, DOGS (Design Office Graphics System) on (say) a modern Linux platform such as the Raspberry Pi, for conservation reasons.
It was one of the first general-purpose CAD packages on the market that didn’t require specialist CAD hardware to run (it was first released around 1979, if my memory serves me correctly), as well as being the leading British CAD software package of the 1980s – so it would be a shame not to try, I think.
I’ve no idea who owns the rights to the software today, but if they’d like to get in touch I’d be very interested in putting a small team together to start a conservation effort – assuming that they still have access to its source code.