Computing’s great leap forward not matched by safeguards

Software errors can be perilous – as the airline and motor industries have discovered

Some 75 years ago, there were but a few electromechanical computers worldwide, used to break military encryption. Even 40 years ago, it seemed audacious that Microsoft's Bill Gates would envisage "a computer on every desk and in every home". Today, almost every individual, company, government and society worldwide is directly or indirectly influenced by computers.

The early machines were hand-programmed using raw patterns of bits. Alan Turing, the cryptanalyst who broke German codes during the Second World War, noted in 1945 that the world would consequently need a great many mathematicians of ability, because of the complexity involved in programming machines. Since then, the history of computing has been a drive towards ever more efficient mechanisation of software.

The pervasiveness of computers has not, in fact, been held back by a scarcity of able mathematicians to program them. Instead, computers have become so accessible that they can be controlled and directed by ordinary members of the public. And now deep learning is increasingly leading us to the point at which computers can control and direct themselves, in ways which humans may not be capable of understanding.

In 1979, the author Douglas Adams satirised the supercomputer Deep Thought, which calculated, taking 7.5 million years to do so, that the answer to the Ultimate Question of Life, the Universe, and Everything is 42. Unfortunately, no one knows what the Ultimate Question is that leads to that answer.

Today’s deep learning systems remind us of this enigma: they can give us near-perfect answers, beat the world’s best board game players and assert new mathematical truths, but how they came to their results, and what questions they solved along the way to reach their conclusions, is often completely intractable to the human mind.

Paradoxically, on the one hand we have created software technology to make machines much easier to program, requiring less highly skilled human capital and less time; on the other, we have made software much more difficult to understand. Computers are now much easier to program, but they are also pervasive across industry, government and society; consequently, there are so many more of them to be programmed.

While business managers expect computer solutions to be delivered at low expense, software developers are under constant pressure for higher productivity to build ever more software solutions.

And so the software industry is continuously stressed to meet deadlines and respond to business demands. Computer engineers developed the Boeing 737 Max MCAS system, which apparently led two experienced flight crews to dive their loaded passenger aircraft to their doom. Software developers at Volkswagen built emissions systems to defeat regulatory testing standards for diesel engines, potentially seriously damaging the health of populations. Data scientists at Cambridge Analytica enabled democracy to be hacked in multiple jurisdictions, including Australia, India, Kenya, Malta, Mexico, the United Kingdom and the United States.

Given that computing now runs our society, and given business expectations of rapid software development, it seems almost inevitable that sooner or later there will be a major software disaster. Mistakes by medical professionals, who take an oath to first do no harm, can certainly lead to death. But a software mistake or omission could feasibly result in the deaths of thousands.

The professional software associations, such as the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers, have in recent years developed codes of ethics for the software industry. These emphasise the highest professional judgment and standards, including ensuring quality and minimising risk to society. However, the sanction for an ethical failure is limited to disqualification from membership.

In Ireland, workers are safeguarded by protected disclosures legislation. This protects employees, contractors and trainees who bring concerns to the attention of senior management, including in particular any activity which can reasonably be believed to endanger health and safety, whether in Ireland or anywhere else. I am aware that some software professionals here have recently used this mechanism to flag concerns over work undertaken for their employers, in some cases overseas.

While an unsafe software system could clearly create liability for a company, some early legislation is emerging that could create personal liability for individual software engineers. For example, the US Commodity Futures Trading Commission (CFTC) has noted that “the appropriate question is whether these software developers could reasonably foresee, at the time they created the code, that it would likely be used by US persons in a manner violative of CFTC regulations”.

Society may move to requiring software systems which have the potential to endanger life or property to be certified as safe by appropriately qualified and chartered software professionals. Equally, the insurance industry may spot a new opportunity in offering professional indemnity for software professionals. The contradiction underlying the ease with which software can now be rapidly developed, often with inadequate training in best professional practice, is that software errors can sometimes be downright dangerous.