The ‘no-code’ approach is very popular but it comes with a dilemma
A new approach to software is needed that not only makes it convenient to build but automatically ensures it is error-free
Telephone operators receiving long distance calls at a London exchange in 1946. More telephones meant even more switchboard operators. Photograph: Getty Images
In 1950 the popular adoption of a new technology was increasingly threatened by an imminent staff shortage. The rapid growth of the telephone was bringing societies and economies together, but the telephone network suffered a diseconomy of scale: every new customer multiplied the number of potential connections. To route each call, an operator had to physically plug a cable into the correct pair of sockets on the switchboard panel. More telephones meant even more switchboard operators.
In the United States in 1950 just under 8 per cent of working women were switchboard operators. It was obvious that within a decade telecommunications would stall: there simply would not be enough people to employ as operators. New technology was needed so that customers could place calls themselves, without intermediary operators.
By 1960 almost all exchanges had become automated, and telephone operators became largely redundant.
In the 1970s computers were no longer the preserve of government research laboratories and elite universities; they had become affordable for business and commerce. However, businesses, and the computer industry itself, grew concerned by a shortage of skilled programmers to write software. Central computing departments faced growing backlogs of requests from business managers for yet more software to provide the analyses and reports needed to run the business.
While a student at Harvard Business School in 1979, Dan Bricklin realised during a class that a financial model being presented on the blackboard, with each cell in the table holding a number or formula, could be reproduced in a computer. With his business partner, Bob Frankston, he founded a startup and together they built VisiCalc, the first electronic spreadsheet.
It represented a new way of thinking about computing. Instead of a program written conventionally as a sequence of steps, a change to any spreadsheet cell immediately and automatically propagated through the rest of the table.
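The idea can be sketched in a few lines of code. This is an illustrative toy, not how any real spreadsheet is implemented: the `Sheet` class and cell names are hypothetical, and real products use dependency graphs rather than brute-force recalculation.

```python
# Toy "spreadsheet": formula cells are recomputed automatically
# whenever any cell they depend on changes.

class Sheet:
    def __init__(self):
        self.values = {}    # cell name -> current value
        self.formulas = {}  # cell name -> function of the sheet's values

    def set_value(self, name, value):
        self.values[name] = value
        self._recalculate()

    def set_formula(self, name, fn):
        self.formulas[name] = fn
        self._recalculate()

    def _recalculate(self):
        # Naive approach: re-evaluate every formula enough times for
        # chains of dependent cells (A1 -> A3 -> A4 ...) to settle.
        for _ in range(len(self.formulas) + 1):
            for name, fn in self.formulas.items():
                self.values[name] = fn(self.values)

sheet = Sheet()
sheet.set_value("A1", 100)
sheet.set_value("A2", 250)
sheet.set_formula("A3", lambda v: v["A1"] + v["A2"])  # like =A1+A2
print(sheet.values["A3"])   # 350

sheet.set_value("A1", 150)  # editing one cell updates the total
print(sheet.values["A3"])   # 400
```

The user states only relationships between cells; the recalculation, the "sequence of steps", happens behind the scenes.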
The spreadsheet largely freed business units from having to wait for their central computing department to write new software. Instead business staff could themselves quickly build financial models, analyses and reports without having to learn any computer language or to write any software code.
The “no-code” approach naturally became extremely popular, but a dilemma resulted. As non-programmers mastered the computer, it also became much easier for them to alter the work of others. Spreadsheets multiplied, sometimes with many derivatives of the original version, and sometimes with incorrect data and formulae. It became difficult to verify that the software was accurate and correct. Spreadsheet proliferation remains a challenge for many companies.
Today governments and corporations continue to fret that there are insufficient programmers. Software has become even more pervasive with the internet, smartphones and consumer devices, smart vehicles, business intelligence and government services.
If the invention of the spreadsheet showed a way forward, why can't a similar “no-code” approach be adopted to reduce the endless demand for trained software developers?
In fact there are already numerous “no-code” products available. Slide presentations can easily be made with tools such as PowerPoint or Keynote. If you want to build your own e-commerce website, tools such as Shopify or Squarespace can greatly assist.
There are “no-code” tools that help you build your own smartphone app. You simply drag and drop predefined widgets (a form, calendar, map, chart, authorisation, email, product catalogue, payment processing and so on) into your overall design and then interconnect them with arrows to indicate flow.
Machine learning is also moving closer to a “no-code” approach. A number of start-ups, as well as majors such as Apple, Google and Microsoft, offer predefined machine-learning models which you can adapt to your own requirements. By supplying examples you can quite simply teach a templated model to categorise data automatically, including images, audio and text, or train it to make predictions and forecasts from your data.
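What "teaching by examples" means can be shown with a deliberately simple stand-in for those templated models: a nearest-centroid classifier written from scratch. The data and labels here are invented for illustration; commercial no-code services use far more sophisticated models, but the workflow, supply labelled examples and then ask for predictions, is the same.

```python
# Minimal "teach by examples" sketch: a nearest-centroid classifier.
import math

def train(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest to the given features."""
    return min(centroids,
               key=lambda label: math.dist(features, centroids[label]))

# Hypothetical labelled examples: (height_cm, weight_kg) -> animal
examples = [
    ([30, 4], "cat"), ([32, 5], "cat"),
    ([60, 25], "dog"), ([65, 30], "dog"),
]
model = train(examples)
print(predict(model, [31, 4]))    # cat
print(predict(model, [62, 28]))   # dog
```

No rules are written down anywhere; the "program" is entirely inferred from the examples supplied, which is precisely what makes the approach accessible to non-programmers.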
But if “no-code” is increasingly viable, why is there still such strong demand for professional software developers? In part it is because no-code solutions, like the spreadsheet, have become so successful: if almost anyone can easily create one, then many go right ahead and do so. A groundswell builds, with results at varying stages of completion and quality that do not always work as expected.
Much of software engineering is just too dangerous to get wrong: you would probably not wish to drive a smart car whose intelligence had been built using “no-code”, or invest in a pension fund whose investment decisions had been automated by a “drag and drop” no-code approach.
Much as automated exchanges made the telephone more convenient to use and much less prone to connection errors, a new approach to software is needed that not only makes it convenient to build but, much more importantly, automatically ensures that it is robust and error-free.