Essay: Undecidability in Mathematics

To many people, mathematics is synonymous with absolute certainty; once a theorem is correctly proved, the result holds for all time. This apparent immutability of mathematical logic also helps describe much of our physical world. By the early 19th century, the French scholar and polymath Pierre-Simon de Laplace was so impressed by the success of Newton's classical physics that he asserted that the future of the universe could be determined forever with the aid of mathematics.

But Laplace's confidence was misplaced. Implicit in his assertion was the belief that every mathematical problem is decidable: either a statement can be proved or it can be refuted. There are no shades of grey. As it turns out, there are indeed shades of grey at the very heart of mathematics. The story of how this insight came to be touches not only mathematics but also physics and computer science at the most fundamental level.

Quantum mechanics delivered the first blow to Laplace's optimism. This is the theory that is key to our understanding of the constituents of matter, such as protons, electrons, and other fundamental particles. In quantum mechanics, the outcomes of measurements and observations are random rather than deterministic. Therefore, to describe the behavior of particles, one has to employ the language of probability, something that isn't necessary in the world of Newtonian physics. The great physicist Albert Einstein was so disturbed by the random character of quantum mechanics that he insisted that God does not play dice.

Quantum mechanics was not the only affront to determinism. A few decades after it caused a sensation, the study of non-linear dynamics surprised many scientists by showing that even the classical physics of Newton had randomness and unpredictability at its core. A field of study known as "chaos theory" emerged, exploring systems that are entirely governed by simple, deterministic equations yet exhibit wildly unpredictable outcomes depending on their initial conditions. Even the tiniest change in any of those initial conditions could, it was discovered, lead to vastly different outcomes. Examples of chaos are everywhere – the waft of cigarette smoke, ocean and weather turbulence, to name just a few. Suddenly, randomness was beginning to look like a unifying principle in physics. In the midst of all these revelations, mathematics stood nervously as the last bastion of certainty. Or so we thought.

Beginning in the 1960s, Gregory Chaitin, an Argentinian-American mathematician who made fundamental contributions to theoretical computer science, showed that there are theorems in number theory that cannot be proved because, when we ask the appropriate questions, we obtain results that are equivalent to the random toss of a coin. His results challenged the beliefs of some of the most distinguished mathematicians of the 19th century, including the great David Hilbert. Hilbert famously gave a lecture in 1900 in which he proposed a list of 23 problems as a challenge to the new century. His 6th problem had to do with establishing the fundamental universal truths, or axioms, of physics. One of the points in this question concerned probability theory. To Hilbert, we resort to probability only because we have a limited amount of information; once we have all the information needed to solve a problem, we can derive a definitive answer to it.


Another question Hilbert discussed was his 10th problem, which was connected with solving 'diophantine' equations, named after the Greek mathematician Diophantus. These are polynomial equations with integer coefficients, for which only whole-number (integer) solutions are sought. Hilbert asked: "Is there a way to decide whether or not such an equation has a solution in whole numbers?"

Implicit in Hilbert's thinking is that every mathematical problem has a solution. We may not be smart enough to solve a given problem but, in principle, it should be solvable. As Chaitin showed years later, even clear, simple mathematical questions do not always have clear answers. Questions involving diophantine equations, for instance, can give answers that are completely random. Instead of a black or white situation, we are forced to look at grey. This is because the only way to prove that such a diophantine equation has a solution is to posit that solution as an independent axiom. This is like saying a butterfly is yellow because its wings contain only yellow pigments. The circularity of the argument implies that there is no algorithm that can correctly decide the existence of integer solutions for all equations in this family. Einstein would be horrified to know that not only does God play dice in quantum mechanics, He does the same in mathematics.
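The asymmetry here can be made concrete with a short computational sketch. Brute-force search can confirm that a diophantine equation has a solution, but no amount of searching can ever certify that none exists, because a solution might always lie just beyond the bound. The equation, bound, and helper below are illustrative choices of mine, not part of Chaitin's argument.

```python
from itertools import product

def search_integer_solutions(f, bound):
    """Search for integer solutions of f(x, y) == 0 with |x|, |y| <= bound.

    Finding a solution settles the question for good; finding none proves
    nothing, since a solution may exist outside the searched range.
    """
    rng = range(-bound, bound + 1)
    return [(x, y) for x, y in product(rng, rng) if f(x, y) == 0]

# Pell's equation x^2 - 2y^2 = 1 has integer solutions such as (3, 2).
pell = lambda x, y: x * x - 2 * y * y - 1
solutions = search_integer_solutions(pell, 10)
```

For Pell's equation the search quickly finds (±1, 0) and (±3, ±2). For an equation with no integer solutions, the same loop would simply run on as the bound grows, and it is exactly this "searching forever" that no general algorithm can rule out in advance.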

Gregory Chaitin (b. 1947) is credited as a co-founder of algorithmic information theory, which studies the information content and complexity of computer programs.

Hilbert's confidence in the certainty of mathematics rested on his assertion that once you come up with a consistent list of axioms (fundamental assumptions), you cannot prove both a result and its contrary. If the system is also complete, then every assertion can be proved either true or false. The problem posed by diophantine equations challenged this view, but it was just the tip of the iceberg. In the 1930s, the famed logician Kurt Gödel proved his "incompleteness theorem", an earth-shattering result that demolished all hopes that mathematics is immune to randomness. Independently, working in what would become computer science, the British genius Alan Turing showed that it is impossible to have both a consistent and complete axiomatic theory of mathematics and an algorithm for deciding whether an arbitrary mathematical assertion is true or false.

Turing's argument is easier to follow than Gödel's, so I will say a bit more about his insights. Turing used the language of the computer – the instructions or programs a computer needs to work out problems – to show that no algorithm exists for deciding whether an arbitrary program will ever finish its computation and halt.

To show that this "halting problem" can never be solved, we set a program running on a Turing machine (a mathematical idealization of a digital computer with no time limit). The program must be self-contained, with all its data wrapped up inside it. Then we ask: "Will the program halt, or will it go on forever?"

Turing showed that there is no set of instructions you can give the computer – no algorithm – that will decide in advance whether a given program will ever halt. Gödel's incompleteness theorem follows immediately: if there were a complete and consistent set of underlying axioms, we could run through all possible proofs, one by one, until we found a proof that a given program halts or a proof that it never does. That would give us a procedure for deciding the halting problem (albeit a very slow one), which Turing showed cannot exist.
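The core of Turing's proof is a diagonal argument, and it can be sketched in a few lines of code. The names and framing below are mine, not Turing's: given any claimed halting-decider, we build a program that does the opposite of whatever the decider predicts about it.

```python
def contrary_program(claimed_halts):
    """Given any claimed halting-decider, build a program it must get wrong."""
    def contrary():
        if claimed_halts(contrary):
            while True:   # decider said "halts" -> loop forever
                pass
        # decider said "loops forever" -> halt immediately
    return contrary

# A (necessarily wrong) decider that claims every program loops forever:
pessimist = lambda program: False
prog = contrary_program(pessimist)
prog()  # returns at once, contradicting the decider's prediction
```

Whatever answer the decider gives about `contrary`, the program does the reverse, so no decider can be right about every program – which is the contradiction at the heart of Turing's result.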

Instead of asking whether or not a specific program halts, we can look at a collection, or ensemble, of all possible programs, assigning to each program a probability that it will be chosen. Each bit of information in the random program is chosen by tossing a coin, so that a program containing, say, N bits of information has probability 2^{-N}. This was Gregory Chaitin's thought experiment in the 1960s. He asked: "What is the total probability that those programs will halt?" He calls this halting probability Omega. Omega wraps up Turing's question of whether a program halts into a single number between 0 and 1. If every program halted, Omega would be 1; if none halted, it would be 0. Just as a computer expresses numbers in binary notation, we can describe Omega as a string of 0s and 1s, and ask whether the Nth bit in the string is a 0 or a 1. It turns out that, echoing Gödel's and Turing's results, this question cannot be answered, because Omega cannot be computed. In fact, Chaitin proved that the sequence of 0s and 1s is random: in his words, the halting probability is algorithmically random. Algorithmic randomness corresponds to Turing's assertion that the halting problem is undecidable. It also injected randomness into number theory, the bedrock and very heart of pure mathematics.
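To see how the weighted sum behind Omega is formed, here is a toy version. The two-instruction machine below is deliberately trivial – its halting problem is easy to decide, unlike a universal machine's – and its encoding is not prefix-free, as Chaitin's actual definition requires. It exists only to show each N-bit program contributing 2^{-N} to the total.

```python
from itertools import product

def halts_within(bits, max_steps=1000):
    """Toy machine: bit 0 advances the program counter; bit 1 jumps back to
    the start. The program halts when the counter falls off the end; if it
    is still running after max_steps, we treat it as non-halting."""
    pc, steps = 0, 0
    while pc < len(bits):
        if steps >= max_steps:
            return False
        pc = pc + 1 if bits[pc] == 0 else 0
        steps += 1
    return True

# Sum 2^(-N) over every N-bit program that halts, for N up to 7.
omega_estimate = sum(
    2.0 ** -n
    for n in range(1, 8)
    for bits in product((0, 1), repeat=n)
    if halts_within(bits)
)
```

On this toy machine only the all-zero programs halt (any 1 sends control back to the start forever), so the sum over lengths 1 to 7 works out to 1 - 2^-7, about 0.992. For a universal machine no such closed-form answer exists, and no computation can pin down the bits of the true Omega.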

That randomness hides in the heart of mathematics is not necessarily a bad thing. Practical science and engineering are not hampered by this fact. At a more abstract level, randomness in number theory may just be the thing that jibes with quantum mechanics, where the properties of nature's most fundamental particles are also random, and can only be fathomed through the language of probability.

Note:

Parts of this post are adapted from an essay by Gregory Chaitin, entitled "Known Unknowns", published in Chance, New Scientist, London, 2015.
