... plus c'est la même code

Having spent the last decade and a half working as a commercial computer programmer, two main things strike me. The first is that, despite the phenomenal technological advances, the basic programming function has not changed that much. The second is that an awful lot of "advances" have a familiar ring.

Since the early 1980s, I have worked on 16-, 32- and now 64-bit systems; I have seen magnetic disk capacities expand by orders of magnitude; the typical PC nowadays has more memory than minicomputers had disk space several years ago; and processor speeds have increased enormously.

The PC is ubiquitous and graphical user interfaces (GUIs) have become standard. Relational databases are the norm. The move towards object-oriented software development is gathering pace. Client-server computing is well established and the Internet has exploded onto the scene. Yet, at its most basic, every computer system takes in some information, processes it, and outputs it. Input, whether it comes from a VDU, a PC screen or a barcode scanner, is still input.

The processing of a payroll, a stock control system or an insurance claim is still processing. Output, whether to a PostScript document, an Excel worksheet or an HTML page, is still output. Today I develop new systems using Microsoft's Visual Basic language on a PC. However, behind the GUI, the code does not differ greatly from the kind I wrote 15 years ago.
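
A rough sketch of that shape, written here in Python purely for brevity and with invented file names, still amounts to read, transform, write:

    # Read input, process it, write output - the basic shape of any system.
    # "hours.txt" and "payslips.txt" are made-up file names for this sketch.
    HOURLY_RATE = 12.50

    with open("hours.txt") as source:                            # input
        hours_worked = [float(line) for line in source if line.strip()]

    gross_pay = [hours * HOURLY_RATE for hours in hours_worked]  # processing

    with open("payslips.txt", "w") as output:                    # output
        for pay in gross_pay:
            output.write(f"{pay:.2f}\n")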

Most of the systems I work on use a major relational database, accessed via SQL queries. Many of these systems run on Novell and Windows NT networks. Nevertheless, despite this "state of the art" client-server environment, data access is not greatly different from retrieving data from flat files. The programmer still retrieves whatever data is required for the job in hand. In a client-server environment, the data may be stored on a remote server, but from a programming point of view the flow of data is the same.
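
A sketch of what I mean, again in Python and with invented file and table names (the sqlite3 module merely stands in for the big relational server, to keep the example self-contained): either way, the programmer asks for the rows the job needs and loops over them.

    import csv
    import sqlite3

    def process(row):
        # Stands in for whatever the job in hand actually does with a record.
        print(row)

    # Flat-file version: read the required records from a delimited file.
    with open("orders.csv", newline="") as flat_file:
        for row in csv.reader(flat_file):
            process(row)

    # Client-server version: ask the database for the same records via SQL.
    connection = sqlite3.connect("orders.db")
    for row in connection.execute("SELECT order_id, customer, amount FROM orders"):
        process(row)
    connection.close()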

For users, the arrival of GUIs, with the Apple Macintosh and Microsoft Windows, has been a tremendous advance. But while an Excel bar chart of, say, company sales figures may be a far better presentation tool than a boring old graph, it contains the same information.

The one development that looked like a fundamental change in computer programming was the introduction of object-oriented programming (OOP). At its simplest, the difference between OOP and traditional procedural languages such as COBOL or BASIC is that the functional flow of a procedural language is replaced by message passing between objects. Several features of these OOP languages, such as inheritance and polymorphism, are based on very powerful concepts.

However, if you look behind the new terminology, the changes are not that fundamental. In procedural languages, blocks of computer code are called modules, or functions. In OOP, these have become object methods. An argument passed to a procedural language function is called a parameter, while in OOP it is an object property.
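
Put the same pay calculation side by side in each style (a Python sketch with invented names, not taken from any real system) and the renaming is plain to see:

    # Procedural style: a block of code is a function, and it takes parameters.
    def net_pay(gross, tax_rate):
        return gross * (1 - tax_rate)

    # Object-oriented style: the parameters become properties, the function a method.
    class Employee:
        def __init__(self, gross, tax_rate):
            self.gross = gross               # property
            self.tax_rate = tax_rate         # property

        def net_pay(self):                   # method
            return self.gross * (1 - self.tax_rate)

    print(net_pay(2000.0, 0.2))              # calling a function
    print(Employee(2000.0, 0.2).net_pay())   # sending a message to an object

The arithmetic is identical; only the labels have moved.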

On the analysis side of the programmer/analyst's job, the changes have, by comparison, been almost sedate. Some of the older methods of systems design, such as flowcharting, are long gone.

More modern methods, such as pseudocode and the Structured Systems Analysis and Design Method (SSADM), are well established. However, you still sit down with the users to analyse their requirements, then design a system to meet them.

At college I was taught that programming was a craft. I still see it that way. Despite the rapid evolution of technology, and the changing tools, the craft remains the same.

Just as programming methods have changed less than might be expected, the same is true of the technology itself. When computers were first used in business, data processing was based on central mainframes.

Over time, with technological advances, minicomputer systems evolved; this allowed the same processing power to be distributed over a number of machines. A conundrum arose: to keep costs down, you need to centralise support skills; to give flexibility, you need to distribute decision-making to the users.

With the arrival of PCs, the problem has grown more serious. The replacement of "dumb" terminals by networked PCs has involved a huge increase in the support workload. Now every user has decision-making capacity, including the opportunity to make incorrect decisions. This has led to the evolution of "thin clients", a lovely name for a dumb PC. Many of these thin clients will connect to very powerful server systems, often called "enterprise servers", allowing centralised control of the network of users. Are these not mainframes by another name? Have we come full circle?

Conor Horgan: chorgan@irish-times.ie