
The way I was taught decimals in school (in Romania) always made 0.99... seem like an absurdity to me: we were always taught that fractions are the "real" representation of rational numbers, and decimal notation is just a shorthand. Doing arithmetic with decimal numbers was seen as suspect, and never allowed for decimals with infinite expansions. So, for example, if a test asked you to calculate 2 × 0.2222... [which we notated as 2 × 0,(2)], then the right solution was to expand it:

  2 × 0.2222...
   = 2 × 2/9 
   = 4/9 
   = 0.444...
Once you're taught that this is how the numbers work, it's easy(ish) to accept that 0.999... is just a notational trick. At the very least, you're "immune" to certain legit-looking operations, like

  0.33... + 0.66...
    = 1/3 + 2/3
    = 3/3
    = 1
Instead of

  0.33... + 0.66...
    = 0.99...
So, in this view, 0.3 or 0.333... are not numbers in the proper sense, they're just a convenient notation for 3/10 and 1/3 respectively. And there simply is no number whose notation would be 0.999..., it's just an abuse of the decimal notation.
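
For what it's worth, the same "go through the fraction first" rule is easy to sketch in Python with the standard fractions module (here I'm encoding 0,(2) as 2/9 by hand, since that's exactly what the notation means):

  from fractions import Fraction

  # 2 × 0.(2): rewrite the repeating decimal as 2/9 first
  print(2 * Fraction(2, 9))               # 4/9, i.e. 0.(4)

  # 0.(3) + 0.(6): rewrite as 1/3 + 2/3
  print(Fraction(1, 3) + Fraction(2, 3))  # 1, never "0.(9)"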



Both ways are just notation. There’s nothing more real about 3/10 compared to 0.3.

Telling you otherwise might have worked as an educational “shorthand”, but there are no mathematical difficulties as long as you use good definitions of what you mean when you write them down.

The issues people have with 0.333… and 0.999… are due to two things: not understanding what the notation means and not understanding sequences and limits.


I agree that ultimately both are just notations. I do think the fractional notation has some definite advantages and few disadvantages, so I think it's better to regard it as more canonical.

I disagree though that it's necessary or even useful to think of 0.99... or 0.33... as sequences or limits. It's of course possible, but it complicates a very simple concept, in my opinion, and muddies the waters of how we should be using notions such as equality or infinity.

For example, it's generally seen as a bad idea to declare that some infinite sum is actually equal to its limit, because that only applies when the sequence of partial sums converges. It's more rigorous to say that sigma(1/2^n) for n from 1 going to infinity converges to 1, not that it is equal to 1; or to say that lim(sigma(1/2^n)) for n from 1 to infinity = 1.

So, to say that 0.xxx... = sigma(x/10^n) for n from 1 to infinity, and to show that this is equal to 1 for x = 9, muddies the waters a bit. It still gives this impression that you need to do an infinite addition to show that 0.999... is equal to 1, when it's in fact just a notation for 9/9 = 1.

It's better in my opinion to show how to calculate the repeating decimal expansion of a fraction, and to show that there exists no fraction whose decimal expansion is 0.9... repeating.


> The issues people have with 0.333… and 0.999… is due to two things: not understanding what the notation means and not understanding sequences and limits.

Also a possible third thing: not enjoying working in a Base that makes factors of 3 hard to write. Thirds seem like common enough fractions "naturally" but decimal (Base-10) makes them hard to write. It's one of the reasons there are a lot of proponents of Base-12 as a better base for people, especially children, because it has a factor of 3 and thirds have nice clean duodecimal representations. (Base-60 is another fun option; it's also Babylonian approved and how we got 60 minutes and 60 seconds as common unit sizes.)


You get the same problem with 0.44... + 0.55... - I don't think that makes it any easier to anyone who is confused. It's more likely just that 0.33... and 0.66... are very common and simple repeating decimals that lead to this issue.


Sure, I was just pointing out that the Base you use for your math does affect how common repeating digits are, based on the available factors in that base.

In Base-12 math, 1/3 = 0.4 and 2/3 = 0.8. With the tradeoff that 1/5 is 0.2497 repeating (the entire 2497 has the repeating over-bar).

Base-10 only has the two prime factors 2 and 5, so repeating expansions are much more common in decimal representation, making this overall problem much more common than in duodecimal/dozenal/Base-12 (or even hexadecimal/Base-16). It's interesting that this is a trade-off directly related to the base we choose to express rational numbers in.
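
For what it's worth, here's a rough Python sketch of that idea (expand is just an illustrative helper, not a standard function): it does the long division of a fraction in a chosen base and marks the repeating cycle, which shows directly how the base's factors decide what repeats:

  def expand(num, den, base=10, max_digits=30):
      # Long division of num/den (with 0 < num < den), marking the
      # repeating cycle in parentheses. Digit symbols cover bases up to 12.
      digits, seen, rem = [], {}, num
      while rem and rem not in seen and len(digits) < max_digits:
          seen[rem] = len(digits)
          rem *= base
          digits.append("0123456789AB"[rem // den])
          rem %= den
      if rem in seen:
          i = seen[rem]
          return "0." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"
      return "0." + "".join(digits)

  print(expand(1, 3, base=10))  # 0.(3)
  print(expand(1, 3, base=12))  # 0.4
  print(expand(1, 5, base=12))  # 0.(2497)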


The fact that people are still debating whether 0.9999.... = 1 suggests that one notation is less confusing than the other.

Nobody debates whether 9/9 = 1.


I was taught something of the same.

But I think it was misguided. I'll note that 1/3 is not a number, it's a calculation. So more complicated.

And fractions are generally much more complicated than the decimal system. Beyond some simple fractions that you're bound to experience in your everyday life, I don't think it makes sense to drill fractions. In the end, when you actually need to know the answer to a computation as a number, you're more likely to make a mistake because you spend your time juggling fractions instead of handling numerical instability.

Decimal notation used to be impractical because calculating with multiple digits was slow and error-prone. But that's no longer the case.


> I'll note that 1/3 is not a number, it's a calculation. So more complicated.

1/3 is a calculation the same way 42 is a calculation (4*10^1 + 2*10^0). Nothing is real except sets containing sets! /j


Yes, true. *BUT* 1/3 is a fraction with denominator 3. 1/5 is a fraction with another denominator, and 1/7 has yet another. So how much is 1/3 + 1/5 + 1/7? You can't just add them up; you first have to multiply to get to common ground. The decimal expansions of these use the same base and are readily comparable.


How would you add 0.(3) + 0.2 + 0.(142857)?

Computations are exactly the place where fractions shine over (repeating) decimals.

The disadvantage of fractions is that there is an infinite number of ways to represent each rational number - 1/3 is the same number as 2/6 or as 818171/2454513. Also, comparisons are harder as well. It's easy to tell that 1/3 is bigger than 1/7, but is 2/3 greater or smaller than 5/7? Here you have to do a computation to really tell.

Irrational numbers have a similar problem, btw. The square root of two is the same number as the fourth root of 8, but you can't tell this without performing some computations.
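
A quick illustration of both points with Python's standard fractions module (the numbers are just the ones from this sub-thread):

  from fractions import Fraction

  # 0.(3) + 0.2 + 0.(142857), done exactly via the fraction forms
  total = Fraction(1, 3) + Fraction(1, 5) + Fraction(1, 7)
  print(total)         # 71/105 (exact)
  print(float(total))  # 0.6761904761904762 (only an approximation)

  # Comparing fractions needs a computation: cross-multiply, or let Fraction do it
  print(Fraction(2, 3) < Fraction(5, 7))  # True, since 2*7 = 14 < 15 = 3*5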


> How would you add 0.(3) + 0.2 + 0.(142857)?

Well, I don't suggest that adding 1/3, 1/5 and 1/7 isn't more precisely done by keeping them as fractions and multiplying them out to get (35+21+15)/105 = 71/105. In this case it's relatively easy to get an idea of the resulting value, but the next computation with other fractions could give me something like 83742/36476, as you say, which is harder to judge without doing the division as well. So the price for keeping things as non-decimal fractions is that, to get the sum of three numbers, I'll generally have to perform 2 multiplications (ab)c to find a common denominator, then do 2*3=6 different multiplications with the three different numerators to get the normalized numerators, then do 2 additions to get the resulting numerator, and finally 1 division to get a normalized answer. That's a whopping 8 multiplications, 2 additions and 1 division for the sum of three numbers.

If I am given the fractions as above I could also do three divisions to get their decimal expansions, followed by 2 additions to get the decimal expansion of the normalized result; this result will be imprecise in the general case but it can be as precise as I want to.

If I am given the decimal expansions right away then I can do 3+2+1=6 immediately to get 0.6 as an approximate answer, which is not bad given that the correct answer is more like 0.6761904..., and all I did was look at the figures. The slightly harder 33+20+14 is already much closer with its result, 0.67. There's no denying the fact that many mathematical problems are better done with fractions than with decimals, but when doing things like physical measurements, decimal expansions are IMO more practical.


No, they aren't. Adding periodic decimals can yield terrible results. Just... don't.


This is ultimately a matter of definitions, and neither defining the fractions nor the decimals as the "true" representation of rationals is more or less correct.

But, operations on fractions are definitely easier than operations on decimals. And fractions have the nice property that they have finite representations for all rational numbers, whereas decimal representations are infinite even for very simple numbers, such as 1/3.

Also, if you are going to do arithmetic with infinite decimal representations, then you have to be aware that the rules are more complex than simply doing digit-by-digit operations. That is, 0.77... + 0.44... ≠ 1.11... even though 7+4 = 11. And it gets even more complex for more complicated repeating patterns, such as 0.123123123... + 0.454545... (that is, 123/999 + 45/99). I doubt there is any reason whatsoever to attempt to learn the rules for these things, given that the arithmetic of fractions is much simpler and follows from the rules for division. The fact that a handful of simple cases work in simple ways doesn't make it a good idea to try.
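
To make the point concrete, here's what those two examples look like when done through fractions in Python (a sketch using the standard fractions module):

  from fractions import Fraction

  # 0.77... + 0.44... is 7/9 + 4/9 = 11/9 = 1.222..., not 1.11...
  print(Fraction(7, 9) + Fraction(4, 9))        # 11/9

  # 0.123123... + 0.454545... is 123/999 + 45/99
  print(Fraction(123, 999) + Fraction(45, 99))  # 2116/3663, i.e. 0.(577668), period 6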


Rationals are numbers, not calculations. They can evaluate to themselves as members from a set.


> Doing arithmetic with decimal numbers was seen as suspect, and never allowed for decimals with infinite expansions.

With that attitude how do you handle e.g. pi or sqrt(2), which it's perfectly legitimate to do arithmetic with?


This is a very strange question. With repeating decimals, it is technically possible, though very complicated, to do arithmetic directly on the representations. You have to remember a bunch of extra rules, but it can be done.

However, with numbers that have non-repeating infinite decimal expansions, it is completely impossible to do arithmetic in the decimal notation. I'm not exaggerating: it's literally physically impossible to represent on paper the result of doing 3pi in decimal notation in an unambiguous form other than 3pi. It's also completely impossible to use the decimal expansion of pi to compute that pi / pi = 1.

Here, I'll show you what it would be like to try:

  pi / pi
    = 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172...
Now, of course you can do arithmetic with certain approximations of pi. For example, I can do this:

  pi / pi
    ≈ 3.1415 / 3.1415
    = 1
Or even

  3 × pi 
   ≈ 3 × 3
   = 9
But this is not doing arithmetic with the decimal expansion of pi, this is doing arithmetic with rational numbers that are close enough to pi for some purpose (that has to be defined).


pi/pi would evaluate to 1 as most proper languages would deal with pi symbolically and not arithmetically.


My point was only that even trivial arithmetic is impossible to do with the infinite decimal representations of irrational numbers.


in python

  import math

  result = math.pi / math.pi
  print(result)     #1.0
bit more long winded than raku, but nearly right

fwiw I want my pi/pi to be 1 (ie an Int) not 1.0 but then I’m a purist


in raku

  say pi/pi;   #1


Sadly, that's just Num.gist showing 1.0 as "1" though.

say (pi/pi).^name; # Num


lol … my bad I should have realized that pi is a Num since it’s an Irrational and a Num over a Num is a Num


You don't. You keep them in symbolic form until they simplify and you do arithmetic at the last possible moment.


Sure, but when you reach that "last possible moment", what then?


Then you calculate the decimal expansion to the desired number of decimal places. This avoids accumulation of roundoff errors in intermediate results.

Note that writing sqrt(2) as 1.41 or 1.41421 or any other decimal expansion you might want to write is incorrect: you will always get some roundoff error. If you want to calculate that sqrt(2)*sqrt(2)=2 then you can’t do so by multiplying the decimal expansions.
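
To see the roundoff concretely in ordinary double-precision floats (Python here, but any binary floating point behaves the same way):

  import math

  x = math.sqrt(2)   # 1.4142135623730951, already rounded
  print(x * x)       # 2.0000000000000004, not exactly 2
  print(x * x == 2)  # False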


You never evaluate symbols until you're giving a numerical equivalent.

Sure, if a question asks for the escape velocity from Jupiter this has an approximate numerical value, but you don't just start by throwing numbers at a wall; you get the simplest equation which represents the value you're interested in and then evaluate it once you have a single equation for that parameter.

Yes, sqrt(2)*pi has a numerical approximation, but you don't want that right at the start of talking about something like spin orbitals or momenta of spinning disks. Doing the latter compounds errors.

It's no different to keeping around "i"/"j" until you need to express a phase or angle as it's cleaner and avoids compounding accuracy errors.


If it's a maths problem you just leave it as symbols. If it's a science or engineering problem you expand it to a decimal approximation with the precision needed for the specific problem you are dealing with.


Note that even for an engineering problem, you don't necessarily use a decimal representation. You may well want to represent pi as 3 or 4 or 22/7 or any other approximation that is good for your particular use case. Or you may even have use cases where you do things the opposite way - you may want to approximate 1 as pi/3 or something like that for certain kinds of problems (e.g. if you're going to take the sin of your result).


Once you're dealing with irrational numbers you have to understand that all results are approximations.


Well, sure, but you should still be able to ask and answer questions like "Is pi + sqrt(2) less than or greater than 4.553?"


It's important to understand that this was a non-trivial question for thousands of years. The ancient Babylonians would have probably believed this to be false (their best known approximation had pi ≈ 25/8, which is too small). The right way to approach this problem from first principles would be to construct some geometrical objects that have these lengths and try to compare them (for example by taking the perimeter of a square inscribing a unit circle and a square inscribed in a unit circle as the upper and lower bounds for pi, though that may not be good enough for this particular problem).

When you're doing something like pi + sqrt(2) ≈ 3.14159 + 1.41421 = 4.5558, you're taking known good approximations of these two real numbers and adding them up. The heavy lifting was done over thousands of years to produce these good approximations. It's not the arithmetic on the decimal representations that's doing the heavy lifting, it's the algorithms needed to produce these good approximations in the first place that are the magic here.

And it would be just as easy to compute this if I told you that pi ≈ 314159/100000, and sqrt(2) ≈ 141421/100000, so that their sum is 455580/100000, which is clearly larger than 4553/1000.


> their best known approximation had pi ≈ 25/8, which is too small

I'm curious if they had a better one that we don't know of yet—their best known approximation of sqrt(2) is significantly more accurate.


When you have those sorts of problems the best way is to approach them using inequalities.

   3.1415 < pi < 3.1416 and 1.4142 < sqrt(2) < 1.4143  =>  4.5557 < pi + sqrt(2) < 4.5559
   =>  4.553 < 4.5557 < pi + sqrt(2)  =>  4.553 < pi + sqrt(2)
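
If you want, you can even make that check exact by reading the finite decimal bounds as rationals; a small Python sketch with the standard fractions module:

  from fractions import Fraction

  # Lower bounds from above, treated as exact rationals
  pi_lo, sqrt2_lo = Fraction("3.1415"), Fraction("1.4142")

  # If even the sum of the lower bounds exceeds 4.553, then so does pi + sqrt(2)
  print(pi_lo + sqrt2_lo > Fraction("4.553"))  # True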


Finite decimal representations being of course the most convenient notation for inequalities like this.


In that case you know how many decimal places you want to expand them to, in order to compare.


Note that "expanding them to some number of decimal places" gives a somewhat misleading idea about how this works. What you're actually doing is computing a good enough approximation of pi, and expressing that as a decimal. But this is not the same kind of simple process that naturally gives decimals as it is for a rational fraction. Instead, you have to find some series with rational elements which converges to pi, and then compute enough terms of that series that you have a good enough approximation of pi for your purpose. Ideally, since you're interested in an inequality, you'd pick a series which is monotoniclaly increasing or decreasing, so that you know that computing more terms can't put you below or above the target number after you've reached a conclusion. But there is no canonical answer, there are numerous series which converge to pi that you could use, and they would givw you different decimal expansions as you are computing them.


Less approximations and more representations of complex things at times. (Just my opinion)

I prefer comparing it to complex numbers where I can't have "i" apples but I can calculate the phase difference between 2 power supplies in a circuit using such notation.

Nobody really cares about the 3rd decimal place when talking about a speeding car at a turn, but they do when talking about electrons in an accelerator, so accuracy and precision always feel mucky to talk about when dealing with irrationals (again my opinion).


Well, 3.141 is an approximation of pi, not a representation of it, insomuch as you use it in an arithmetic expression. Of course, you can write 3.141... to just represent pi, but you can't easily use that in an arithmetic expression. For example, I can't tell you from "mechanical" operations if 3.141... - 3.1417 > 0, I have to look up how big pi actually is.


Not really. Like the sibling comment said - you simply keep the symbolic values. I.e. instead of 4.442882938158... you write π√2, just like you would write ⅚ and not 0.8333...; in both cases you preserve the exact values. Decimal (or any other numbering system, really) approximations are only useful when you never want to do any further arithmetic with the result.


> Decimal (or any other numbering system, really) approximations are only useful when you never want to do any further arithmetic with the result.

What? The opposite is the case. Anything you want to do something with, you can only measure inaccurately; arithmetic doesn't have any use if you can't apply it to inaccurate measurements. That's what we use it for!


So I take it you never wrote any numerical simulations or did symbolic calculations then?

Catastrophic cancellation and other failures are serious issues to consider when doing numerical analysis and can often be avoided completely by using symbolic calculation instead. You can easily end up with wrong results, especially when composing calculations. This would make it difficult to, for example, match your theoretical model against actual measurement results; particularly if the model includes expressions that don't have closed-form solutions.
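
A classic toy example of catastrophic cancellation, in plain Python floats (the formulas are standard illustrations, not anything from the parent comments): (1 - cos x)/x^2 should approach 0.5 for small x, but the subtraction destroys it:

  import math

  x = 1e-8
  naive  = (1 - math.cos(x)) / x**2       # 1 - cos(x) cancels to exactly 0.0 in doubles
  stable = 2 * math.sin(x / 2)**2 / x**2  # algebraically identical, no cancellation
  print(naive, stable)                    # 0.0 versus roughly 0.5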


in American math classes (as opposed to science classes) you almost never expand PI or sqrt(2), you either cancel them out or leave them in the answer until the end. Maybe if it's a word problem you sub them in the very last step but the problem itself is almost certainly going to be designed so it's not an issue.


>in American math classes (as opposed to science classes) you almost never expand PI

Except we have some fascination with memorizing the digits of pi and having competitions for doing so for some reason.


There is no school math test that will require you to know the digits of Pi, except as a silly extra credit question.

The fascination is just dick measuring. "I'm smarter than you", for memorizing a longer string? It's quite dumb, but American media loves to use the dumbest possible ways of demonstrating that a character is intelligent, because uh it's really really hard to demonstrate "This person is very intelligent" to a subset of the population that is mostly at a middle school reading level and barely comprehends basic arithmetic, let alone algebra.


And that's useless for actual math.


>And that's useless for actual math.

Agreed. The schools always seem to have these learning adjacent things that are theoretically supposed to make subjects engaging, but in reality are so disconnected from the subject that they are meaningless.


Even the games/puzzles from Martin Gardner are a much better solution than memorizing a random... string. Because pi is not about 3.1415... but a proportion.


Just for fun.


You would use rational approximations good enough for different scales and roundings.


Forth and Lisp users often try to use rationals first and floats later. In Scheme Lisps, you have exact->inexact and inexact->exact functions which convert rationals to floats and vice versa.


>So, in this view, 0.3 or 0.333... are not numbers in the proper sense, they're just a convenient notation for 3/10 and 1/3 respectively. And there simply is no number whose notation would be 0.999..., it's just an abuse of the decimal notation.

I wonder if students from Romania are hamstrung in more advanced mathematics from being taught this way.


I don't think they would be; I think they might even have an advantage. They'd understand that the only numbers that have infinite repeating expansions are rationals, and that decimals are, in general, just approximations.


I don't think decimals, especially repeating decimals or other infinite decimal expansions, show up much, if at all, in any advanced math subjects (beyond the study of themselves, of course). Higher math is almost exclusively symbolic. You're more likely to need to learn that "1" is just a notation for the set which contains the empty set than to learn that it's OK to add 0.22... + 0.44... = 0.66...


> we were always taught that fractions are the "real" representation of rational numbers

There is no "real" representation of rational numbers, and fractions are no more real—or fake—than decimals.

> And there simply is no number whose notation would be 0.999...

There is, though. It's 1.



