# Why the nagging feeling in some that 0.999… isn’t “really” equal to 1?

“In asking “how can two things that are unequal be equal”, you confuse the map with the territory. Two different labels are given to the same number.”

— Quora’s answer wiki, “Why is 0.999… equal to 1?”

Convincing beginning students of math that 0.999… = 1 turns out to be an interestingly challenging and rewarding exercise in pedagogy. Asserting that it is “true by definition” is completely unilluminating; I personally feel that it shouldn’t even count as an answer in the first place if the question is genuine and the intention is to convince. (It’s probably a good teacher’s password though, as is the next argument.) The standard “proof” (technically correct but also completely unenlightening if you’re not convinced in the first place) uses the representation 1/3 = 0.333… and the fact that 1 is three times 1/3; students report that it seems more like arithmetic sleight-of-hand, in the manner of the “proof” that 0 = 1. Sometimes even the assertion that “two numbers are equal if the difference between them is zero”, combined with the fact that 1 − 0.999… = 0 (assuming the student doesn’t feel that there’s a “lingering 1” at the end of 0.000… in the first place), isn’t enough; there’s a feeling that there should be a non-zero yet non-measurable distance between 0.999… and 1, a feeling that stubbornly persists in the face of the arguments presented against it.

As usual, Quora serves up a ton of good answers that I’m reproducing below in full, mostly for future reference.

Before I go on, here’s a highly-upvoted plausibility argument by Michael Hochster:

Maybe you are not so sure what the expression 0.999… means. But even without knowing what it means, you can conclude it must be 1 from a few plausible assumptions:

1. 0.999… is a number.
2. 0.999… is no bigger than 1.
3. 0.999… is bigger than any number of the form 0.99…99.

From #1 and #2, 0.999… is either 1 or a number smaller than 1. Any number smaller than 1 has a number of the form 0.99…99 bigger than it (this follows from the fact that any positive number has a number of the form 0.00…01 smaller than it).

So 0.999… can’t be smaller than 1, or there would be a number of the form 0.99…99 bigger than it, violating #3.

So 0.999… must be 1.
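Hochster’s middle step — any number smaller than 1 has some finite 0.99…99 bigger than it — is easy to check with exact rational arithmetic. A minimal sketch (the function names are mine, not from the answer):

```python
from fractions import Fraction

def truncated_nines(n):
    """The finite decimal 0.99...9 with n nines, as an exact fraction."""
    return Fraction(10**n - 1, 10**n)

def first_nines_above(x):
    """Smallest n such that 0.99...9 (n nines) exceeds a given x < 1."""
    n = 1
    while truncated_nines(n) <= x:
        n += 1
    return n

# Any number strictly below 1 is eventually overtaken by some 0.99...9:
print(first_nines_above(Fraction(99999999, 10**8)))  # → 9
```

If 0.999… were itself smaller than 1, the same loop would terminate for it too, contradicting assumption #3.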

Uri Zarfaty makes a good remark on what a good explanation should be like:

…the best explanations describe both the essence of the paradox (that there is a difference between the notation and the thing being notated) as well as its details (what decimal notation actually means).

That the notation itself is different from what the notation represents – this is just a specific instance of not conflating the map with the territory – is something that doesn’t get enough emphasis.

More from Uri:

Convincing people that 0.999… and 1 are identical is an interesting exercise in pedagogy: many of the proofs suggested here, though logically sound, result in frustration when the listener fails to reconcile the proof with their preconceptions about decimals and infinitesimals. In my opinion, the best explanations describe both the essence of the paradox (that there is a difference between the notation and the thing being notated) as well as its details (what decimal notation actually means). The former is easier to explain than the latter.

For an interesting summary of some of the research into the teaching of this paradox, see Does 0.999… Really Equal 1? from The Mathematics Educator.

Proof by black magic

Probably the most common proof provided is the simple algebraic manipulation:

0.333… = 1/3
0.999… = 3 × 0.333… = 3 × 1/3 = 1

Despite its simplicity, this is unsatisfactory for many people: it looks like one of those trick proofs that 1 = 0. Smarter listeners may explicitly question whether the algebraic manipulation above is valid or if it’s a ‘trick’; alternatively they may question whether 0.333… is genuinely equal to 1/3. Without a better explanation of decimals, they have to take the response on trust; and because the proof provides no intuition of why 0.999… and 1 are identical, that’s not very satisfactory.

The key point

Explaining the essence of the paradox involves highlighting the difference between notation and the thing being notated. This can be done with analogies to other more familiar scenarios where there is more than one way of writing the same number. For example:

1. Integers. Most integers can be written in just one way: e.g. 4 represents ‘four’, -1 represents ‘negative one’. However, the integers 0 and -0 are two representations of the same number, ‘zero’. Any weak feeling that -0 is ‘slightly smaller’ than 0 is typically overridden by the stronger intuition that ‘I owe you nothing’ is exactly the same as ‘you owe me nothing’.
2. Rationals. All rational numbers can be written in multiple ways: e.g. the fractions 1/2 and 2/4 are two representations of the same ratio, ‘half’. Again, there may be a weak feeling that 1/2 and 2/4 represent different ‘situations’, but it’s easy to persuade listeners that 1/2 and 2/4 are the same: “half a cake” is the same amount of cake as “two quarters of a cake” or “a quarter of two cakes”. This is particularly convincing with real cake.

The essence, then, of why 0.999… and 1 behave the same when you divide them by 3, for example, is that, just like -0 and 0, they are just two different ways of writing the same number, ‘one’. One of these ways is clearer (just as zero is usually written 0, not -0, and half is usually written 1/2, not 2/4) but neither is more accurate (just as 2/4 is as much ‘half’ as 1/2).

Understanding why there are two ways of representing the number ‘one’ requires an understanding of infinite decimals.

Infinite decimals

People are taught to use infinite decimals at school, but rarely to understand what they mean. It’s therefore natural to think of them simply as a sequence of digits, or the result of a mechanical calculation, rather than as a representation of an actual number. That last viewpoint is essential, as either of the other two makes it ‘obvious’ that 0.999… and 1 are different.

A good starting point is to consider the fraction 1/3. It’s easy to understand why 1/3 can’t be represented by a precise decimal fraction, and to show that 0.3, 0.33, 0.333, etc. are increasingly good estimates for it (for example using long division). However, introducing 0.333… as shorthand for these estimates is not quite enough. What does it mean to say that 0.333… is 1/3? After all, none of the estimates is precisely 1/3; in fact, all are strictly less than it. The point is that not only do the estimates get closer and closer to 1/3, they get arbitrarily close. To say that 0.333… is 1/3 is simply to say exactly that.

Note that with this definition 0.333… can’t refer to any number other than 1/3, as it would have to get arbitrarily close to that too. If there were any gap between that number and 1/3 then this would clearly be impossible. If there’s no gap, then what does it mean to say that the number and 1/3 are different?
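The “arbitrarily close but never equal” behaviour of the estimates can be seen exactly: the truncation with n threes falls short of 1/3 by precisely 1/(3·10ⁿ). A quick check with exact rationals (my sketch, not part of the quoted answer):

```python
from fractions import Fraction

# Each truncation 0.3, 0.33, 0.333, ... is strictly below 1/3,
# short of it by exactly 1/(3 * 10^n): always nonzero, but shrinking past any bound.
third = Fraction(1, 3)
for n in range(1, 8):
    estimate = Fraction(10**n - 1, 3 * 10**n)  # 0.33...3 with n threes
    gap = third - estimate
    assert 0 < gap == Fraction(1, 3 * 10**n)
print("every truncation falls short by exactly 1/(3*10^n)")
```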

Another good example is Zeno’s dichotomy paradox: to cover any distance you must first cover half the distance. Hence to cover a distance of 1, you must first cover one of 1/2, then by the same reasoning one of 1/4, then 1/8, and so on. The sequence of distances covered is 1/2, 3/4, 7/8, etc. Again, all these distances are strictly less than 1, yet the sequence gets arbitrarily close to it. Due to the physical presentation of the paradox, people are sometimes more receptive to the idea that ‘at the end’ the entire distance is covered.

The same logic holds for 0.999…  and 1. The sequence 0.9, 0.99, 0.999, etc gets arbitrarily close to 1. So trivially does the sequence 1.0, 1.00, 1.000, etc. The fact that one of the sequences is always less than one while the other is always equal to it doesn’t actually matter. Neither does the fact that 0.999… has lots of 9s in it. It’s simply the accidental result of the meaning of infinite decimal notation.

Michael Hamburg makes the point that mathematicians use the alternative (infinitely-repeating) decimal representation of 1 to be able to write fractions in decimal form, even when no terminating decimal suffices.

Sridhar Ramesh points out that mathematicians use infinite decimal notation for a number in the following manner:

[the] number which is >= the rounding downs of the infinite decimal at each decimal place, and <= the rounding ups of the infinite decimal at each decimal place. This is the definition of what infinite decimal notation means; it’s true because we say it is, just as the three letter word “dog” refers to a particular variety of four-legged animal because we say it does.

So, for example, when a mathematician says “0.166666…”, what they mean, by definition, is “The number which is >= 0, and also >= 0.1, and also >= 0.16, and also >= 0.166, and so on, AND also <= 1, and also <= 0.2, and also <= 0.17, and also <= 0.167, and so on.” What number satisfies all these properties? 1/6 satisfies all these properties. Thus, when a mathematician says “0.16666…”, what they mean, by this definition, is 1/6.

Similarly, when a mathematician says “0.9999…”, what they mean, by that same definition, is “The number which is >= 0, and also >= 0.9, and also >= 0.99, and also >= 0.999, and so on, AND also <= 1, and also <= 1.0, and also <= 1.00, and also <= 1.000, and so on.” What number satisfies all these properties? 1 satisfies all these properties. Thus, when a mathematician says “0.9999…”, what they mean, by definition, is 1.
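Sridhar’s sandwich definition can be checked mechanically with exact rational arithmetic. The sketch below (my code, not his) verifies that 1/6 satisfies every round-down/round-up constraint for “0.1666…”, and that 1 satisfies every constraint for “0.9999…”, for the first several decimal places:

```python
from fractions import Fraction

x = Fraction(1, 6)  # the claimed value of "0.16666..."
for n in range(1, 15):
    # rounding "0.1666..." down / up at the n-th decimal place
    down = Fraction(x.numerator * 10**n // x.denominator, 10**n)  # 0.1, 0.16, 0.166, ...
    up = down + Fraction(1, 10**n)                                # 0.2, 0.17, 0.167, ...
    assert down <= x <= up
    # for "0.9999..." the round-downs are 0.9, 0.99, ... and every round-up is exactly 1
    assert Fraction(10**n - 1, 10**n) <= 1 <= Fraction(1, 1)
print("1/6 and 1 pass every sandwich constraint checked")
```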

Finally, here’s Mark Eichenlaub as usual, wrapping up by clarifying the notion of a limit. A cursory think-through by someone unacquainted with limits might lead to the feeling that this explanation doesn’t suffice; after all, 0.999… “only” approaches 1 arbitrarily closely, but “never really reaches it” à la Zeno. The point (per Mark) is that arbitrary closeness is good enough. It’s definitely good enough for the entire field of analysis to make sense, although this relies on the fact that the number system chosen fits the purpose (see below this quote). More from Mark:

It is true that we can “prove” .999… =  1 using algebraic tricks, but most people who are dubious of .999… = 1 do not have enough background to know whether those tricks are right.  A young student has every right to call into question something like 10*0.999… = 9.999… as unjustified based on the mathematics they already understand.

Instead ask, “What is meant by infinitely-many repeating nines?”  Surely, it doesn’t mean that we’re literally supposed to write down “9” infinitely many times.  That is impossible.

The answer is subtle; in fact it took mathematicians a long time to come up with a solid definition of what something like .999…. means.

.999… is a mathematical limit.  Loosely speaking, it is

$.999\ldots = .9 + .09 + .009 + \cdots$

But this still leaves us wondering what is meant by the dots.

Noting that we can write the above as

$\frac{9}{10} + \frac{9}{10^2} + \frac{9}{10^3} + \cdots$

we can write .999… rigorously as

$.999\ldots = \lim_{n \to \infty} \sum_{k=1}^{n} \frac{9}{10^k}$

The limit has a definite mathematical meaning.

$\lim_{n \to \infty} a_n = L$

is read as “the limit of $a_n$ as $n$ goes to infinity is $L$.”  It does not mean that $n$ is infinity, or becomes infinity, since infinity is not a number.  It also does not mean that $a_n$ is ever equal to $L$.  (That may or may not be true.)  What it means is that by choosing a minimum value for $n$ that is large enough, we can get $a_n$ as close as we want to $L$.

For example,

$1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 2,$

as you can convince yourself.  Mathematically, we write this as

$\lim_{n \to \infty} \sum_{k=0}^{n} \frac{1}{2^k} = 2.$
Suppose someone comes along and says, “that’s not right.  The limit is not actually 2.  It’s something a tiny bit less.”  You then say, “okay, how much less?”

They retort, “well, I’m not sure, but the real limit is less than two by at least one over a billion, so that sum eventually strays permanently more than .000000001 away from two.”  You can prove them wrong by saying, “No, that’s not right.  If my sum contains at least 30 terms, its distance from two will definitely be less than one billionth.”  (As long as you prove that, which you can.)

Then maybe they try one trillionth, but as long as you go to 40 terms you’re fine for that, too.  No matter how tiny a window they give you around 2, you can always find a certain number of terms so that the sum fits in that window from there on out.

This is what is meant by a limit; for any small (but non-zero) window around the limit, you can find a certain number of terms of the series such that after that point, it stays inside the window.  This particular series never becomes equal to the limit – no matter how many terms you add you never get to 2 exactly – but that simply is not required by the definition of a limit.
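This “window game” is easy to play out exactly in code. The sketch below (the function name is mine, not Mark’s) finds, for any window size, how many terms of 1 + 1/2 + 1/4 + … are needed before the partial sums stay inside the window around 2; counted this way, a billionth-sized window actually takes 31 terms rather than 30, but the point is unchanged — some finite cutoff always exists:

```python
from fractions import Fraction

def terms_needed(eps):
    """Smallest number of terms of 1 + 1/2 + 1/4 + ... after which the
    partial sums stay within eps of the limit 2 (exact arithmetic)."""
    n, partial = 0, Fraction(0)
    while 2 - partial >= eps:       # gap after n terms is exactly 2**(1 - n)
        partial += Fraction(1, 2**n)
        n += 1
    return n

print(terms_needed(Fraction(1, 10**9)))   # → 31: a billionth-sized window
print(terms_needed(Fraction(1, 10**12)))  # terms needed for a trillionth window
```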

This may sound like a dubious definition, and it is almost certainly a confusing one if you haven’t heard it before.  Nonetheless, it turns out to be a good definition.  The entire field of analysis is built around it (perhaps with some exaggeration). You can check out the Wikipedia article on this definition of a limit here: http://en.wikipedia.org/wiki/(%C…

Now we know what it means.  We conjecture that it’s equal to 1.

To prove it, we first evaluate the sum explicitly in terms of $n$.  This comes to $\sum_{k=1}^{n} \frac{9}{10^k} = 1 - 10^{-n}$, which you can prove using induction.

To prove the limit is 1, we must assume there is a window around 1 with arbitrary size $\epsilon > 0$.  If we carry out enough terms that $10^{-n} < \epsilon$, we will be inside our window.  This is always possible, so the limit is indeed 1.  Further, the limit is not any other number, because a general theorem tells us that limits are unique.

That is what is meant when we say

$.999\ldots = 1$

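Mark’s closing ε-argument can likewise be checked with exact rationals: the partial sum with n nines is exactly 1 − 10⁻ⁿ, and for any window size some n lands inside the window. A sketch under those definitions (the function name is mine):

```python
from fractions import Fraction

def nines_needed(eps):
    """Smallest n for which the n-nines partial sum 0.99...9 lies
    strictly within eps of 1; the gap after n terms is exactly 10**-n."""
    n = 0
    while Fraction(1, 10**n) >= eps:
        n += 1
    return n

n = nines_needed(Fraction(1, 10**6))
# the partial sum 0.9 + 0.09 + ... with n terms really is 1 - 10**-n:
partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))
assert partial == 1 - Fraction(1, 10**n)
print(n)  # → 7
```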
Aaron Hosford mentions that this equality only holds true in the reals; I feel this is a technicality best dealt with after one has really understood why the equality holds in the reals first. He’s worth quoting anyway:

The statement that .999… = 1 is predicated on the assumption that we are working with a particular number system with a particular set of properties — the real numbers. There are alternative systems (some of them extensions to the reals, much as reals are an extension to the rationals and rationals are an extension to integers) which admit the possibility of infinitesimal numbers and do not necessarily equate .999… with 1, since .999… becomes ambiguous within these systems. Our digital representation is inadequate for distinguishing certain numbers from others infinitesimally close to them. While the many proofs given here make sense within the “standard” real number system, there is no proof that this is the best system to use, because that choice is a matter of taste and/or utility.

You may find these links interesting:
Infinitesimal
Non-standard analysis
Hyperreal number
Superreal number
