MathOverflow: a miscellany (Part VI)

Nice quote-worthy answers to select questions from MathOverflow every Tuesday!

Why do we care about rigor?

Any pure mathematician will from time to time discuss, or think about, the question of why we care about proofs, or to put the question in a more precise form, why we seem to be so much happier with statements that have proofs than we are with statements that lack proofs but for which the evidence is so overwhelming that it is not reasonable to doubt them.

That is not the question I am asking here, though it is definitely relevant. What I am looking for is good examples where the difference between being pretty well certain that a result is true and actually having a proof turned out to be very important, and why. I am looking for reasons that go beyond replacing 99% certainty with 100% certainty. The reason I’m asking the question is that it occurred to me that I don’t have a good stock of examples myself.

The best outcome I can think of for this question, though whether it will actually happen is another matter, is that in a few months’ time if somebody suggests that proofs aren’t all that important one can refer them to this page for lots of convincing examples that show that they are.

Added after 13 answers: Interestingly, the focus so far has been almost entirely on the “You can’t be sure if you don’t have a proof” justification of proofs. But what if a physicist were to say, “OK I can’t be 100% sure, and, yes, we sometimes get it wrong. But by and large our arguments get the right answer and that’s good enough for me.” To counter that, we would want to use one of the other reasons, such as the “Having a proof gives more insight into the problem” justification. It would be great to see some good examples of that. (There are one or two below, but it would be good to see more.)

Further addition: It occurs to me that my question as phrased is open to misinterpretation, so I would like to have another go at asking it. I think almost all people here would agree that proofs are important: they provide a level of certainty that we value, they often (but not always) tell us not just that a theorem is true but why it is true, they often lead us towards generalizations and related results that we would not have otherwise discovered, and so on and so forth. Now imagine a situation in which somebody says, “I can’t understand why you pure mathematicians are so hung up on rigour. Surely if a statement is obviously true, that’s good enough.” One way of countering such an argument would be to give justifications such as the ones that I’ve just briefly sketched. But those are a bit abstract and will not be convincing if you can’t back them up with some examples. So I’m looking for some good examples.

What I hadn’t spotted was that an example of a statement that was widely believed to be true but turned out to be false is, indirectly, an example of the importance of proof, and so a legitimate answer to the question as I phrased it. But I was, and am, more interested in good examples of cases where a proof of a statement that was widely believed to be true and was true gave us much more than just a certificate of truth. There are a few below. The more the merrier.


I once got a letter from someone who had overwhelming numerical evidence that the sum of the reciprocals of primes is slightly bigger than 3 (he may have conjectured the limit was π). The sum is in fact infinite, but diverges so slowly (like log log n) that one gets no hint of this by computation.
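To see how misleading the computations were: by Mertens' second theorem the partial sums grow like log log N plus the Meissel–Mertens constant M ≈ 0.2615, so even summing 1/p over every prime below a million leaves the total below 3. A quick illustrative sketch (mine, not the letter writer's computation; the function name is made up):

```python
import math

def prime_reciprocal_sum(limit):
    """Sum 1/p over all primes p < limit, using a sieve of Eratosthenes."""
    is_prime = bytearray([1]) * limit
    is_prime[0] = is_prime[1] = 0
    total = 0.0
    for p in range(2, limit):
        if is_prime[p]:
            total += 1.0 / p
            for multiple in range(p * p, limit, p):
                is_prime[multiple] = 0
    return total

N = 10**6
s = prime_reciprocal_sum(N)
mertens_approx = math.log(math.log(N)) + 0.2615  # Meissel-Mertens constant M
print(round(s, 3), round(mertens_approx, 3))  # both close to 2.887, still below 3
```

With every prime below a billion the sum is still under 3.3, so no feasible computation hints at divergence.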


I would like to preface this long answer by a few philosophical remarks. As noted in the original posting, proofs play multiple roles in mathematics: for example, they assure that certain results are correct and give insight into the problem.

A related aspect is that in the course of proving an intuitively obvious statement, it is often necessary to create a theoretical framework, i.e. definitions that formalize the situation and new tools that address the question, which may lead to vast generalizations in the course of the proof itself or in the subsequent development of the subject; often it is the proof, not the statement itself, that generalizes, hence it becomes valuable to know multiple proofs of the same theorem that are based on different ideas. Sometimes the greatest insight is gained from proofs that subtly modify an original statement which turned out to be wrong or incomplete. Sometimes a whole subject may spring forth from the proof of a key result, which is especially true for proofs of impossibility statements.

Most examples below, chosen among different fields and featuring general interest results, illustrate this thesis.

  1. Differential geometry. a. It had been known since ancient times that it is impossible to create a perfect (i.e. undistorted) map of the Earth. The first proof was given by Gauss and relies on the notion of intrinsic curvature, which Gauss introduced for exactly this purpose. Although Gauss's proof of the Theorema Egregium was complicated, the tools he used became standard in the differential geometry of surfaces.
     b. The isoperimetric property of the circle has been known in some form for over two millennia. Part of the motivation for Euler's and Lagrange's work on the calculus of variations came from the isoperimetric problem. Jakob Steiner devised several different synthetic proofs that contributed technical tools (Steiner symmetrization, the role of convexity), even though they didn't settle the question, because they relied on the existence of an absolutely minimizing shape. Steiner's assumption led Weierstrass to consider the general question of existence of solutions to variational problems (later taken up by Hilbert, as mentioned below) and to give the first rigorous proof. Further proofs gained new insight into the isoperimetric problem and its generalizations: for example, Hurwitz's two proofs using Fourier series exploited abelian symmetries of closed curves; the proof by Santaló using integral geometry established the more general Bonnesen inequality; E. Schmidt's 1939 proof works in n dimensions. The full solution of related lattice packing problems led to such important techniques as Dirichlet domains, Voronoi cells, and the geometry of numbers.
  2. Algebra. a. For more than two and a half centuries after Cardano's Ars Magna, no one was able to devise a formula expressing the roots of a general quintic equation in radicals. The Abel–Ruffini theorem and Galois theory not only proved the impossibility of such a formula and provided an explanation for the success and failure of earlier methods (cf. Lagrange resolvents and the casus irreducibilis), but, more significantly, put the notion of a group on the mathematical map.
     b. Systems of linear equations were considered already by Leibniz. Cramer's rule gave the formula for a solution in the n×n case, and Gauss developed a method for obtaining the solutions, which yields the least squares solution in the overdetermined case. But none of this work yielded a criterion for the existence of a solution. Euler, Laplace, Cauchy, and Jacobi all considered the problem of diagonalization of quadratic forms (the principal axis theorem). However, the work prior to 1850 was incomplete because it required genericity assumptions (in particular, the arguments of Jacobi et al. didn't handle singular matrices or forms). Proofs that encompass all linear systems, matrices, and bilinear/quadratic forms were devised by Sylvester, Kronecker, Frobenius, Weierstrass, Jordan, and Capelli as part of the program of classifying matrices and bilinear forms up to equivalence. Thus we got the notion of the rank of a matrix, the minimal polynomial, the Jordan normal form, and the theory of elementary divisors, all of which became cornerstones of linear algebra.
  3. Topology. a. Attempts to rigorously prove the Euler formula V − E + F = 2 led to the discovery of non-orientable surfaces by Möbius and Listing.
     b. Brouwer's proof of the Jordan curve theorem and of its generalization to higher dimensions was a major development in algebraic topology. Although the theorem is intuitively obvious, it is also very delicate, because various plausible-sounding related statements are actually wrong, as demonstrated by the Lakes of Wada and the Alexander horned sphere.
  4. Analysis. The work on existence, uniqueness, and stability of solutions of ordinary differential equations, and on well-posedness of initial and boundary value problems for partial differential equations, gave rise to tremendous insights into theoretical, numerical, and applied aspects. Instead of imagining a single transition from a 99% ("obvious") to a 100% ("rigorous") confidence level, it would be more helpful to think of a series of progressive sharpenings of statements that become natural or plausible after the last round of work.
     a. Picard's proof of the existence and uniqueness theorem for a first order ODE with Lipschitz right hand side, Peano's proof of existence for a continuous right hand side (where uniqueness may fail), and Lyapunov's proof of stability introduced key methods and technical assumptions (the contraction mapping principle, compactness in function spaces, the Lipschitz condition, Lyapunov functions and characteristic exponents).
     b. Hilbert's proof of the Dirichlet principle for elliptic boundary value problems and his work on eigenvalue problems and integral equations form the foundation of linear functional analysis.
     c. The Cauchy problem for hyperbolic linear partial differential equations was investigated by a whole constellation of mathematicians, including Cauchy, Kowalevski, Hadamard, Petrovsky, L. Schwartz, Leray, Malgrange, Sobolev, and Hörmander. The "easy" case of analytic coefficients is addressed by the Cauchy–Kowalevski theorem. The concepts and methods developed in the course of the proof in more general cases, such as the characteristic variety, well-posed problems, weak solutions, Petrovsky lacunas, Sobolev spaces, hypoelliptic operators, and pseudodifferential operators, span a large part of the theory of partial differential equations.
  5. Dynamical systems. Universality for one-parameter families of unimodal continuous self-maps of an interval was experimentally discovered by Feigenbaum and, independently, by Coullet and Tresser in the late 1970s. It states that the ratio of the lengths of the intervals in parameter space between successive period-doubling bifurcations tends to a limiting value δ ≈ 4.669201 that is independent of the family. This could be explained by the existence of a nonlinear renormalization operator R on the space of all maps with a unique fixed point g, such that all but one of the eigenvalues of its linearization at g lie in the open unit disk, while the exceptional eigenvalue is δ and corresponds to the period-doubling transformation. Later, computer-assisted proofs of this assertion were given, so while Feigenbaum universality had initially appeared mysterious, by the late 1980s it had moved into the "99% true" category. The full proof of universality for quadratic-like maps by Lyubich followed this strategy, but it also required very elaborate ideas and techniques from complex dynamics due to a number of people (Douady–Hubbard, Sullivan, McMullen), and it yielded hitherto unknown information about the combinatorics of non-chaotic quadratic maps of the interval and the local structure of the Mandelbrot set.
  6. Number theory. Agrawal, Kayal, and Saxena proved that PRIMES is in P, i.e. primality testing can be done deterministically in polynomial time. While the result had been widely expected, their work was striking in at least two respects: it used very elementary tools, such as variations on Fermat's little theorem, and it was carried out by a computer science professor and two undergraduate students. The sociological effect of the proof may have been even greater than its numerous consequences for computational number theory.
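The Feigenbaum ratio in item 5 is easy to reproduce numerically. The sketch below (mine, not part of the answer; all function names are made up) locates the "superstable" parameters s_n of the logistic map f_r(x) = r·x·(1−x), i.e. the r at which the critical point x = 1/2 has period 2^n, by Newton's method, and then takes ratios of successive gaps, which approach δ:

```python
def logistic_iterate(r, n):
    """Apply f_r(x) = r*x*(1-x) to the critical point 1/2, 2**n times."""
    x = 0.5
    for _ in range(2 ** n):
        x = r * x * (1.0 - x)
    return x - 0.5  # zero exactly when 1/2 has period dividing 2**n

def superstable(n, guess):
    """Newton's method (with a numerical derivative) for the parameter s_n."""
    r = guess
    for _ in range(60):
        h = 1e-9
        deriv = (logistic_iterate(r + h, n) - logistic_iterate(r - h, n)) / (2 * h)
        step = logistic_iterate(r, n) / deriv
        r -= step
        if abs(step) < 1e-13:
            break
    return r

def feigenbaum_estimate(depth=7):
    s = [2.0, 1.0 + 5.0 ** 0.5]  # exact superstable r for periods 1 and 2
    for n in range(2, depth + 1):
        # seed Newton by extrapolating the previous gap with the expected ratio
        guess = s[-1] + (s[-1] - s[-2]) / 4.669
        s.append(superstable(n, guess))
    return (s[-2] - s[-3]) / (s[-1] - s[-2])

print(round(feigenbaum_estimate(), 2))  # close to delta = 4.6692...
```

This is exactly the kind of evidence that made universality "99% true" before Lyubich's proof: the ratios visibly settle near 4.669, but the computation says nothing about why, or about other families of maps.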


When I teach our “Introduction to Mathematical Reasoning” course for undergraduates, I start out by describing a collection of mathematical “facts” that everybody “knew” to be true, but which, with increasing standards of rigor, were eventually proved false. Here they are:

  1. Non-Euclidean geometry: The geometry described by Euclid is the only possible “true” geometry of the real world.
  2. Zeno’s paradox: It is impossible to add together infinitely many positive numbers and get a finite answer.
  3. Cardinality vs. dimension: There are more points in the unit square than there are in the unit interval.
  4. Space-filling curves: A continuous parametrized curve in the unit square must miss “most” points.
  5. Nowhere-differentiable functions: A continuous real-valued function on the unit interval must be differentiable at “most” points.
  6. The Cantor Function: A function that is continuous and satisfies f'(x)=0 almost everywhere must be constant.
  7. The Banach-Tarski paradox: If a bounded solid in R^3 is decomposed into finitely many disjoint pieces, and those pieces are rearranged by rigid motions to form a new solid, then the new solid will have the same volume as the original one.
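Item 4 in this list can be made tangible with the Hilbert curve: its finite stages visit every cell of a 2^k × 2^k grid in unit steps, and the limit is a continuous surjection onto the square. A small sketch of the standard bit-manipulation construction (my illustration, not part of the course material; `d2xy` is a conventional name for this mapping):

```python
def d2xy(n, d):
    """Map index d in [0, n*n) to the d-th cell of the order-n Hilbert curve.

    n must be a power of two; this is the classic bit-twiddling construction.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate/reflect the quadrant when needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The stage-4 curve visits all 256 cells of a 16x16 grid, one step at a time:
path = [d2xy(16, d) for d in range(16 * 16)]
print(len(set(path)))  # 256: every cell is hit exactly once
```

Each refinement quadruples the resolution, so the limiting curve passes within any prescribed distance of every point of the square, contradicting the intuition that a curve must miss "most" points.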


  1. Nonexistence theorems cannot be demonstrated with numerical evidence. For example, the impossibility of the classical geometric construction problems (trisecting the angle, doubling the cube) could only be shown with a proof that the efforts in the positive direction were futile. Or consider the equation x^n + y^n = z^n with n > 2. [EDIT: Strictly speaking my first sentence is not true. For example, the primality of a number is a kind of nonexistence theorem — this number has no nontrivial factorization — and one could prove the primality of a specific number by just trying out all the finitely many numerical possibilities, whether by naive trial division or a more efficient rigorous primality test. Probabilistic primality tests, such as the Solovay–Strassen or Miller–Rabin tests, allow one to present a short amount of compelling numerical evidence, without a proof, that a number is quite likely to be prime. What I should have written is that nonexistence theorems are usually not (or at least some of them are not) demonstrable by numerical evidence, and the geometric impossibility theorems which I mentioned illustrate that. I don't see how one can give real evidence for those theorems short of a proof. Lack of success in making the constructions is not convincing: the Greeks couldn't construct a regular 17-gon by their rules, but Gauss showed much later that it can be done.]
  2. You can't apply a theorem to all commutative rings unless you have a proof of the result which works that broadly. Otherwise math just becomes conjectures upon conjectures, or you have awkward hypotheses: "For a ring whose nonzero quotients all have maximal ideals, etc." Emmy Noether revolutionized abstract algebra by replacing her predecessors' tedious computational arguments in polynomial rings with short conceptual proofs valid in any Noetherian ring, which not only gave a better understanding of what was done before but revealed a much broader terrain where earlier work could be used. Or consider the true scope of harmonic analysis: it can be carried out not just in Euclidean space or Lie groups, but in any locally compact group. Why? Because, to get things started, Weil's proof of the existence of Haar measure works that broadly. How are you going to collect 99% numerical evidence that all locally compact groups have a Haar measure? (In number theory and representation theory one integrates over the adeles, which are in no sense like Lie groups, so the "topological group" concept, rather than just "Lie group", is really crucial.)
  3. Proofs tell you why something works, and knowing that explanatory mechanism can give you the tools to generalize the result to new settings. For example, consider the classification of finitely generated torsion-free abelian groups, finitely generated torsion-free modules over any PID, and finitely generated torsion-free modules over a Dedekind domain. The last classification is very useful, but I think its statement is too involved to believe it is valid as generally as it is without having a proof.
  4. Proofs can show in advance how certain unsolved problems are related to each other. For instance, there are tons of known consequences of the generalized Riemann hypothesis because the proofs show how GRH leads to those other results. (Along the same lines, Ribet showed how modularity of elliptic curves would imply FLT, which at the time were both open questions, and that work inspired Wiles.)
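The probabilistic primality tests mentioned in item 1 can be sketched concretely. Below is the standard Miller–Rabin strong-pseudoprime test (my illustration; the function name is made up). With the fixed witness bases used here the test is known to be correct for all n below roughly 3.3 × 10^24; with a handful of random bases instead, it yields only overwhelming, not certain, evidence of primality:

```python
def is_probable_prime(n):
    """Miller-Rabin test with a fixed set of witness bases."""
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n in witnesses:
        return True
    if any(n % p == 0 for p in witnesses):
        return False
    # write n - 1 = 2**s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # the base a proves n composite
    return True

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probable_prime(561))        # False: a Carmichael number
```

Note the asymmetry the answer points out: a failed witness is a rigorous proof of compositeness, while passing the test is merely strong evidence of primality unless the base set is known to be deterministic for the given range.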


Mathematical thought often proceeds from a confused search for what is true to a valid insight into the correct answer. The next step is a careful attempt to organize the ideas in order to convince others. BOTH STEPS ARE ESSENTIAL. Some mathematicians are great at insight but bad at organization, while some have no original ideas, but can play a valuable role by carefully organizing convincing proofs. There is a problem in deciding what level of detail is necessary for a convincing proof—but that is very much a matter of taste.

The final test is certainly to have a solid proof. All the insight in the world can’t replace it. One cautionary tale is Dehn’s Lemma. This is a true statement, with a false proof that was accepted for many years. When the error was pointed out, there was again a gap of many years before a correct proof was constructed, using methods that Dehn never considered.

It would be more interesting to have an example of a false statement which was accepted for many years; but I can’t provide an example.


I was not going to write anything, as I am a latecomer to this masterful troll question and not many are likely going to scroll all the way down, but Paul Taylor’s call for Proof mining and Realizability (or Realisability as the Queen would write it) was irresistible.

Nobody asks whether numbers are just a ritual, or at least not very many mathematicians do. Even the most anti-scientific philosopher can be silenced with ease by a suitable application of rituals and theories of social truth to the number that is written on his paycheck. At that point the hard reality of numbers kicks in with all its might, be it Platonic, Realistic, or just Mathematical.

So what makes numbers so different from proofs that mathematicians will fight a meta-war just for the right to attack the heretical idea that mathematics could exist without rigor, but would have long abandoned this question as irrelevant if it had asked instead "are numbers just a ritual that most mathematicians wish to get rid of"? We may search for an answer in the fields of sociology and philosophy, and by doing so we shall learn important and sad facts about the way the mathematical community operates in a world driven by profit, but as mathematicians we shall never find a truly satisfactory answer there. Isn't philosophy the art of never finding the answers?

Instead, as mathematicians we can and should turn inwards. How are numbers different from proofs? The answer is this: proofs are irrelevant but numbers are not. This is at the same time a joke and a very serious observation about mathematics. I tell my students that proofs serve two purposes:

  1. They convince people (including ourselves) that statements are true.
  2. They convey intuitions, ideas and techniques.

Both are important, and we have had some very nice quotes about this fact in other answers. Now ask the same question about numbers. What role do numbers play in mathematics? You might hear something like "they are what mathematics is (also) about" or "that's what mathematicians study", etc. Notice the difference? Proofs are for people but numbers are for mathematics. We admit numbers into the mathematical universe as first-class citizens, but we do not take seriously the idea that proofs themselves are also mathematical objects. We ignore proofs as mathematical objects. Proofs are irrelevant.

Of course you will say that logic takes proofs very seriously indeed. Yes, it does, but in a very limited way:

  • It mostly ignores the fact that we use proofs to convey ideas and focuses just on how proofs convey truth. Such practice not only hinders progress in logic, but is also actively harmful because it discourages the mathematization of about 50% of mathematical activity. If you do not believe me, try getting funding for research on "mathematical beauty".
  • It considers proofs as syntactic objects. This puts logic where analysis used to be when mathematicians thought of functions as symbolic expressions, probably sometime before the 19th century.
  • It is largely practiced in isolation from “normal” mathematics, by which it is doubly handicapped, once for passing over the rest of mathematics and once for passing over the rest of mathematicians.
  • Consequently, even very basic questions, such as "when are two proofs equal?", puzzle many logicians. This is a ridiculous state of affairs.

But these are rather minor technical deficiencies. The real problem is that mainstream mathematicians are mostly unaware of the fact that proofs can and should be first-class mathematical objects. I can anticipate the response: proofs are in the domain of logic, they should be studied by logicians, but normal mathematicians cannot gain much by doing proof theory. I agree, normal mathematicians cannot gain much by doing traditional proof theory. But did you know that proofs and computation are intimately connected, and that every time you prove something you have also written a program, and vice versa? That proofs have a homotopy-theoretic interpretation that has been discovered only recently? That proofs can be “mined” for additional, hidden mathematical gems? This is the stuff of new proof theory, which also goes under names such as Realizability, Type theory, and Proof mining.

Imagine what will happen with mathematics if logic gets boosted by the machinery of algebra and homotopy theory, if the full potential of “proofs as computations” is used in practice on modern computers, if completely new and fresh ways of looking at the nature of proof are explored by the brightest mathematicians who have vast experience outside the field of logic? This will necessarily represent a major shift in how mathematics is done and what it can accomplish.

Because mathematicians have not reached the level of reflection which would allow them to accept proof relevant mathematics, they seek security in the mathematically and socially inadequate dogma that a proof can only be a finite syntactic entity. This makes us feeble and weak and unable to argue intelligently with a well-versed sociologist who can wield the weapons of social theories, anthropology and experimental psychology. So the best answer to the question "is rigor just a ritual" is to study rigor as a mathematical concept, to quantify it, to abstract it, and to turn it into something new, flexible and beautiful. Then we will laugh at our old fears, wonder how we ever could have thought that rigor is absolute, and we will become the teachers of our critics.


I guess the question "Is rigor just a ritual?" has got enough answers, so I'll address another one:

Has something happened in the world of mathematics that I am not aware of?

My answer is: yes, if you replace “aware of” by “consciously aware of”. Of course, what I’ll say will be “subjective and argumentative”.

1) There are far too many people who call themselves "mathematicians" or "mathematical education specialists". Many of them are just street peddlers who make their living by selling their "results" and "theories" and whose mentality is that of an egg seller at the flea market. The goal is to get as good a price as possible while keeping the production costs as low as possible. One also has to maintain good relationships with nearby sellers and with market authorities, and to keep an eye on the latest consumer trends. It would be nice to get a better place for the stand, etc. The question of the quality of the eggs has to be addressed only if an angry mob of people is approaching. Otherwise, anything that is oval-shaped and white or brown in color will do.

2) The professor-student relationship is no longer that of a master and an apprentice but that of a service person and a client. The result is most abominable. I'll abstain from discussing what it means for professors, but for the students it ultimately means that they are treated as subhuman beings, i.e., they are considered to have almost no intelligence whatsoever, so instead of lifting the students to the level of the craft, the craft is lowered to their level. This happened in the arts when primitive ancient drawings were declared masterpieces akin to the paintings of Renaissance masters. Just as the primitivization of art led to all the monstrosities that fill the halls of "modern art" museums, which make me doubt that most modern artists can draw or sculpt at all, this primitivization of mathematics (whose main expression is presenting mathematics as a mere taxonomy, a bunch of simple algorithms, and the art of pushing calculator buttons) will inevitably revert the craft to its pre-Greek level. Moreover, I have read a couple of math. education papers that, once you remove all the fancy buzzwords from them, advocate exactly this transition.

3) Many mathematicians lost all pride and turned into mere beggars for money (grants, salary increases) and recognition (competition for prizes, publications in top journals, etc). I’ve recently heard some amazing new terminology like “the submission-rejection cycle” (you submit to a journal, get rejected, submit to another one, get rejected, etc.).

4) There is no hope for fundamentally new weapons that can be developed soon using further advances in pure math. This removed the need for rigorous mathematical education for military purposes and made the math. education a purely political issue. Despite all my disgust towards the wars, I have to grant the military the basic common sense: they have a clear goal to beat the enemy and whatever can serve this goal will be promoted and maintained at the operational level. The politicians need only to please the electorate for whom they coined the wonderful name “taxpayers”. It doesn’t matter how much a “taxpayer” knows about the science. As long as it is done on his money, he is the boss and he is the one to tell the right from the wrong. Moreover, even when the taxpayers do have common sense, their representatives in the legislature usually don’t.

5) The Platonic idea of mathematics as an objective (super)reality was replaced by the idea of mathematics as sociological and cultural phenomenon. Note the words “mathematics is on the edge of a philosophical breakdown since there are different ways of convincing and journals only accept one way, that is, proof”. They show clearly that the person saying them has lost all sense of an explorer of an unknown land whose task is to find out what is there and to make sure that what he sees is not a fata morgana. His goal now is merely to “convince other people of something”.

I can continue, but I guess you got the idea by now. We are no longer viewed as high priests, or explorers, or technical experts, but rather as street sellers of strange and hardly digestible goods, both by the general public (which would still be tolerable) and by ourselves (which is suicidal, IMHO).

There is still a simple remedy: behave with pride and teach the craft properly whenever you can do it without immediately losing your means of living. I have little hope that this remedy will be applied widely, but you can always do it locally. And the last piece of advice: do not lose your sleep over the opinions of other people and do not argue with them. Look instead at what real results they have achieved with their approach. If they have nothing to put on the table, just consider them a bunch of flies. The fly buzz can be quite irritating and some flies can deliver a venomous bite, but still a fly is a fly and a human is a human (not because a human has two eyes and a nose and the fly has a pair of wings, as the modern humanists try to convince us, but because a human can absorb the whole Universe and transcend his temporal and spatial limits and his self-centeredness, while the fly will always see only the piece of honey or shit it can feed on at the next moment).


An analogous question in physics might be: Is relativity just a ritual that most physicists wish to get rid of if they could? When we’re going about our daily lives, most of the time people don’t care about relativity: Newtonian physics explains everything we’re going to see, it’s simpler, and it’s intuitive. We wouldn’t bother to set up a relativistic calculation to decide when the bus is going to arrive. But in situations where our intuition is lacking, and/or it’s really important to us that our answer is correct, then we need to incorporate relativity (and sometimes we learn that our intuition isn’t always dependable!).

In math, when we’re going about our daily lives, most of the time people don’t care about rigor: intuitive arguments, exhibiting a few terms in a pattern, and arguing from experience and approximation work pretty well. We wouldn’t bother setting up an integral to calculate how far our car gets on a tank of gas. But in situations where our intuition is lacking, and/or it’s really important to us that our answer is correct, then we need to incorporate rigor (and sometimes we learn that our intuition isn’t always dependable!).


Here is an "uncommon" answer in favor of rigour from an old paper of Atiyah, "How research is carried out" (Bull. Inst. Math. App. 10 (1974), 232–234). In particular, I like the first part of the quote, which is heard less often (hence "uncommon").

Now you may well ask what is the point of rigour? Some of you may define rigour as "rigor mortis" and believe that pure mathematics comes along to stifle the activities of people who really know how to get the answers. Again, I think, we ought to bear in mind that mathematics is a human activity and our aim is not only to discover things but to pass this information on. Now somebody like Euler, who knows how to write down divergent series and get correct answers, must have a good feeling of what ought to be done and what ought not to be done. Euler had an intuition built up out of a great variety of experience, and this kind of intuition is very hard to convey. So the next generation will come along and will not know how it is done, and the point of having a rigorous mathematical statement is so that something which in the first place is subjective and depends very much on personal intuition, becomes objective and capable of transmission. I have no wish at all to deny the advantages of having this kind of intuition, but only to emphasize that in order for this to be conveyed to other people it must eventually be presented in such a way that it is unambiguous and capable of being understood by someone who does not necessarily have the same kind of insight as the originator. Beyond this, of course, as long as you deal with a certain range of problems then your intuition is quite capable of leading to the right answer although you may not be sure how to justify it. But when you go to the next stage of development and start to build a more elaborate problem on the structure you already have, it becomes more and more important that the initial groundwork should be fairly firmly understood. So the necessity for having rigorous arguments is again because you are going to be building, and if you do not build on solid foundations the whole structure will be in danger.


Rigor is a relative notion. E.g., I work in formal verification, and by my community's standards most published mathematics isn't rigorous, since the proofs aren't machine-checked. But mathematicians still know what they're talking about, even when we don't know how to fully formalize their proofs! The same is true for science and engineering: they have lots of good conceptual ideas and techniques which we mathematicians don't know how to formalize (such as Feynman diagrams). IMO, the right reaction is not to laugh, but to view this as a research opportunity.

