The central aim of the Enlightenment (also called the Age of Reason) was to achieve a moral inversion: to place humanity above Deity and, correspondingly, reason above Revelation. Reason alone was sufficient, according to the Enlightenment philosophers, to decide all matters. Descartes said in his Discourse on the Method that he would accept nothing as true except that which he “knew to be such.” This is to say that the mind of man, aided by its principal faculty of reason, is the vessel in which the verity of transcendental truths would be established. Their ideological project was to demonstrate that the instruments of reason, observation and logic, were powerful enough to encompass all of reality. Leibniz struggled with this for years while professing the virtue of the infinitesimal calculus. The testimony of history is that the finitude and insufficiency of reason have been repeatedly established across many disciplines, even in times before the Enlightenment. The most recent, and perhaps most decisive, arguments are Gödel’s theorems about the inherent limits of mathematics.
Gödel’s theorems on the incompleteness and undecidability of mathematical systems are among the deepest and most significant discoveries of the twentieth century. They represent a dramatic failure of one of the fondest hopes of the European Enlightenment philosophers: their core faith that all human knowledge can be obtained by using observations and logic, and that, in particular, revelation, tradition, and received wisdom are nothing but an accretion of superstitions which must be discarded to make progress. Our goal in this essay is to explain how Gödel’s results represent a mathematical tombstone for these hopes.
Geometry, developed by the ancient Greeks, was the first rigorous intellectual discipline. It is a testament to their brilliance that Euclid’s methods are still taught to our children, some twenty-three centuries after their discovery. The axiomatic-deductive methodology of mathematics leads to logical certainty without requiring empirical confirmation – we do not assess the validity of the Pythagorean Theorem by drawing triangles and measuring their sides. It was entirely natural that the Greeks adopted the same axiomatic-deductive methodology to study natural science. Unfortunately, this turned out to be a big mistake. Unlike mathematics, scientific hypotheses require empirical observations for validation.
Axiomatic-deductive methods led to a deadlock in a controversy that lasted for centuries: does the eye generate the light with which we see objects, or does light come from the object to our eyes? Mathematical-style proofs were available for both propositions, and there seemed to be no logical way to resolve the controversy. Then Ibn al-Haytham (born 965) used a dazzling series of observations, including the fact that eyes are damaged by staring at the sun, to definitively resolve the dispute in favour of light travelling from object to eye. Replacing logic by observations laid the basis of the scientific method, which the novelist Richard Powers has called the most important discovery of the second millennium.
The natural methodology for science is empirical and inductive – it is based on observing patterns in nature and guessing at the causes which create these patterns. Scientific hypotheses (like gravity) represent our best guesses at explaining what we observe (like the falling apple). It is only after we abandon the quest for logical certainty that it becomes possible to make progress in scientific knowledge. Even though Aristotle was among the most brilliant humans ever to walk this planet – his writings are still studied at leading universities today – he failed to understand this difference between natural science and mathematics. After reaching the wrong conclusion that heavier stones fall faster than lighter ones, he never picked up two stones and dropped them to test his theory. Observational tests are essential for science, but they are not part of the methodology of geometry.
The bitter conflict between science and the Catholic Church – exemplified by the burning of Bruno at the stake and the trial of Galileo – led to an extreme antipathy to religion among European scientists. A concerted effort was made to prove that science led to certainty, whereas religion was mere superstition. This effort, which became known as the “philosophy of science”, initially concentrated on the problem of induction. If we observe a pattern in the real world, can we be sure that this pattern will continue? For example, having observed sunrise every day throughout recorded history, can we confidently predict sunrise tomorrow? After much effort, it was discovered that this problem cannot be solved. Despite repeated strong empirical confirmations of a pattern, exceptional and unexpected events – sometimes called Black Swans – can always arise. After centuries of stability, a one-time earthquake or volcano can destroy everything.
Failure to solve the problem of induction led the “logical positivists” in the early twentieth century to a new approach to proving the certainty and superiority of scientific knowledge. They argued that science appears to be based on induction, but that we can reformulate it to follow the axiomatic-deductive methodology of mathematics, which does lead to certainty. Logical positivism was spectacularly successful and came to dominate the philosophy of science in the second quarter of the twentieth century. In the second half of the century, it had an equally spectacular crash, when many of its basic ideas were disproven. Even A. J. Ayer, one of the most enthusiastic exponents of positivism, eventually had to admit that “it was all wrong”. The current consensus among philosophers of science is that uncertainty is an inherent feature of scientific theories.
Among the many fronts on which logical positivism failed, one of the most crucial was mathematics itself. Discoveries in physics led to the understanding that the real world is wild and wacky, with particles and phenomena that defy common sense: quantum jumps from one state to the next without passing through intermediate states, spontaneous emergence of matter from nothing, backwards motion in time, particles randomly choosing slits to pass through, and many other baffling concepts are routinely used by theoretical physicists. While logic and observations might fail in this wild real world, surely in the stable and sedate world of the natural numbers 1, 2, 3, 4, … logical reasoning would lead us to certainty and complete truth? The attempt to prove this intuition engaged the efforts of several mathematicians and logicians in the early part of the twentieth century.
The Austrian logician Kurt Gödel finally achieved spectacular and entirely unexpected results in this area. His first result was the Incompleteness Theorem. It showed that no matter how we formulate the axiomatic-deductive machinery, there will always exist true statements about numbers which this machinery cannot prove. This means that the “whole” truth about numbers will forever remain beyond the grasp of logical reasoning. The second phenomenon is undecidability: logic alone cannot decide the truth or falsity of certain statements. A famous illustration – one that in fact predates Gödel – is Euclid’s Parallel Postulate, which is independent of the other axioms of geometry. Whether to accept it is a matter of choice, not logic. If we choose to deny this postulate, we obtain a non-Euclidean geometry which has its own valid and useful insights, quite different from the Euclidean world we studied in school.
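For readers curious about the formal shape of the argument, the heart of the Incompleteness Theorem can be sketched in a single line (an informal sketch, not Gödel’s full construction). For any consistent formal system F strong enough to express arithmetic, Gödel built a sentence G_F which, through an arithmetical coding of provability, asserts its own unprovability:

\[
G_F \;\longleftrightarrow\; \neg\,\mathrm{Prov}_F\!\bigl(\ulcorner G_F \urcorner\bigr)
\]

If F could prove G_F, then G_F would be false, and F would be proving a falsehood about numbers; so a consistent F cannot prove G_F – which means G_F is true, yet unprovable within F.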
The Enlightenment hope that man could reach truth purely by observations and logic cannot be fulfilled even in the limited domain of mathematics. Gödel proved what poets have always known: that transcendental truths are beyond the reach of reason:
Iqbal easily transcended the realms of logic
But he could not plumb the depths of the mysteries of love.
(free translation of a couplet by Allama Iqbal, Poet Laureate of the East)
Published in The Express Tribune, April 12th, 2015, by Dr. Asad Zaman.