Complexity: are easily-checked problems also easily solved?
That’s Maths: Many problems are hard to solve but easy to verify, such as Sudoku puzzles
The term algorithm comes from the name of Persian mathematician al-Khwarizmi. Photograph courtesy of Prof Irfan Shahid
From the name of the Persian polymath al-Khwarizmi, who flourished in the early ninth century, comes the term algorithm. An algorithm is a set of simple steps that lead to the solution of a problem.
An everyday example is a baking recipe, with instructions on what to do with ingredients (input) to produce a cake (output). For a computer algorithm, the inputs are the known numerical quantities and the output is the required solution.
Some problems are very difficult to solve, with the solution time increasing exponentially with the number of variables. A famous example is the travelling salesman problem (TSP): finding the shortest round trip through a set of cities. First studied in the 1800s by Irish mathematician William Rowan Hamilton, TSP is one of the toughest problems in computer science – what is called an NP-hard problem.
As the number of cities increases, an exact solution of TSP becomes impractical. There is probably no efficient method of solving TSP that always finds the shortest route. Heuristic approaches give approximate solutions quickly, but these are not guaranteed to be optimal.
TSP does not stand in isolation; it is one of a large number of problems that come under the major challenge known as “P = NP”. This is one of seven “Millennium Problems” posed by the Clay Mathematics Institute, and a prize of $1 million (about €820,000) is on offer for a solution. The idea is that many problems are hard to solve but easy to verify. A simple example is a Sudoku puzzle, hard to solve but easy to check. A more important instance is factorisation: it is trivial to multiply two large prime numbers but, given only the product, finding the original numbers is computationally infeasible when they are large enough. This asymmetry underlies modern cryptography.
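The solve/verify asymmetry can be sketched in a few lines of Python. The primes below are small illustrative values chosen for this sketch (real cryptographic primes are hundreds of digits long); multiplying them is a single fast operation, while recovering them from the product by trial division takes roughly the square root of the product in steps.

```python
# Sketch of the solve/verify asymmetry behind factorisation.
# Illustrative primes only -- far too small for real cryptography.
p, q = 104_723, 104_729
n = p * q                      # multiplying: one fast operation

def smallest_factor(n):
    """Recover the smallest prime factor by trial division.

    The loop runs up to sqrt(n), so the work grows rapidly
    with the size of n -- this is the 'hard' direction.
    """
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n

# Verifying a claimed factorisation is instant:
assert p * q == n
# Finding a factor from n alone takes tens of thousands of divisions:
print(smallest_factor(n))  # prints 104723
```

Checking the claimed answer is one multiplication; finding it is a long search. That, in miniature, is the gap between verifying and solving.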
Efficiency of algorithms is of fundamental importance in computing: there is no value in having an algorithm that requires centuries to generate a solution if the answer is needed today. TSP can be solved by a brute-force method: check every possible route that visits all the cities. But the number of routes increases super-exponentially with the number of cities. Starting from a fixed city, four cities give six routes; eight cities give 5,040; and 12 cities give almost 40 million. Clearly, less costly algorithms must be found.
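As a sketch, assuming a fixed starting city (so n cities give (n − 1)! ordered routes) and a small made-up distance table, the brute-force method takes only a few lines of Python:

```python
import itertools
import math

def tour_length(route, dist):
    """Total length of a closed tour visiting every city once."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

def brute_force_tsp(dist):
    """Check every route starting from city 0.

    The number of permutations tried is (n-1)! for n cities,
    which is why this approach fails beyond a handful of cities.
    """
    cities = list(range(1, len(dist)))
    best = min(itertools.permutations(cities),
               key=lambda p: tour_length([0, *p], dist))
    return [0, *best]

# Tiny illustrative distance matrix (hypothetical values):
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(brute_force_tsp(dist))   # the shortest round trip found
print(math.factorial(8 - 1))   # 5040 routes for eight cities
```

With four cities the search tries only six permutations; doubling the city count multiplies the work by tens of thousands, which is the super-exponential growth described above.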
Methods with cost growing as some power of the number of cities, or polynomial-time algorithms, are needed. The P in “P = NP” stands for problems that can be solved in polynomial time. The NP stands for problems for which a given solution can be verified in polynomial time. The problem of factoring a large number is in NP, but no polynomial-time algorithm is known; we don’t know if it is in class P. In fact, we don’t know of any problem in the class NP of easily-verified problems that has been proved to have no polynomial-time algorithm.
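To make the gap between polynomial and super-exponential cost concrete, here is a small illustrative comparison. The polynomial cost n² stands in for an efficient algorithm (a made-up stand-in, not a known TSP algorithm), while (n − 1)! counts the routes brute force must check from a fixed start:

```python
import math

# Compare a polynomial cost with the factorial cost of brute force.
for n in (4, 8, 12, 20):
    polynomial = n ** 2                # illustrative polynomial-time cost
    routes = math.factorial(n - 1)     # routes checked by brute-force TSP
    print(f"{n:>2} cities: polynomial {polynomial:>5,}  vs  routes {routes:,}")
```

At 20 cities the polynomial cost is 400, while the number of routes already exceeds 10^17: this is why only polynomial-time algorithms count as “efficient”.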
The study of the travelling salesman problem has brought about major breakthroughs in mathematics and computing and has enabled advances in solving several practical problems. Applications include traffic routing, designing circuit boards, flight operations and DNA sequencing. But the big question remains: is there an efficient algorithm for solving the TSP for an arbitrary set of cities?
Could it be possible that the classes P and NP are the same? In 1971, computer scientist Stephen Cook proved that certain problems in NP are “complete”: if an efficient algorithm is found for any one of them, then every problem in NP, including TSP, can be solved efficiently. Thus, if an efficient algorithm to solve TSP could be found, it would be of great benefit for a wide range of other problems. Most people believe that NP is much bigger than P and that “P = NP” is false, but this has never been proved and $1 million hangs on the answer.