I agree that ultimately both are just notations. I do think the fractional notation has some definite advantages and few disadvantages, so I think it's better to regard it as more canonical.
I disagree, though, that it's necessary or even useful to think of 0.99... or 0.33... as sequences or limits. It's of course possible, but it complicates a very simple concept, in my opinion, and muddies the waters of how we should be using notions such as equality or infinity.
For example, it's generally seen as a bad idea to declare that some infinite sum is actually equal to its limit, because that only applies when the sequence of partial sums converges. It's more rigorous to say that sigma(1/2^n) for n from 1 to infinity converges to 1, not that it is equal to 1; or to say that lim(sigma(1/2^n)) for n from 1 to infinity = 1.
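The distinction between a partial sum and its limit is easy to see numerically. Here is a minimal sketch (variable names are my own) using the convergent geometric series sigma(1/2^n): every finite partial sum falls strictly short of 1, and only the limit equals 1.

```python
# Partial sums of the geometric series sum(1/2^n) for n = 1..N.
# Every finite partial sum is strictly less than 1; the limit equals 1.
s = 0.0
for n in range(1, 11):
    s += 1 / 2**n
    print(f"N={n:2d}: partial sum = {s}")

# After N = 10 terms, s = 1 - 1/2**10 = 1023/1024, still short of 1.
```

No finite amount of adding terms ever produces 1; "equals 1" is a statement about the limit, which is exactly why conflating a sum with its limit deserves care.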
So, to say that 0.xxx... = sigma(x/10^n) for n from 1 to infinity, and to show that this equals 1 for x = 9, muddies the waters a bit. It gives the impression that you need to perform an infinite addition to show that 0.999... equals 1, when it's in fact just a notation for 9/9 = 1.
It's better in my opinion to show how to calculate the repeating decimal expansion of a fraction, and to show that there exists no fraction whose decimal expansion is 0.9... repeating.
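That calculation can be sketched as ordinary long division: the expansion of p/q repeats exactly when a remainder recurs, and since there are only q - 1 possible nonzero remainders, every fraction either terminates or enters a cycle. This is a hedged sketch (the function name and interface are my own, not anything from the discussion above):

```python
def repeating_expansion(p, q):
    """Long-division decimal expansion of p/q with 0 < p < q.
    Returns (prefix, cycle): the non-repeating digits and the repeating block."""
    digits = []
    seen = {}          # remainder -> position where it first appeared
    r = p % q
    while r and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // q))   # next decimal digit
        r %= q                       # next remainder
    if not r:
        return "".join(digits), ""   # expansion terminates
    i = seen[r]                      # cycle starts where the remainder recurred
    return "".join(digits[:i]), "".join(digits[i:])

print(repeating_expansion(1, 3))   # ('', '3')       i.e. 0.(3)
print(repeating_expansion(1, 7))   # ('', '142857')  i.e. 0.(142857)
print(repeating_expansion(1, 4))   # ('25', '')      i.e. 0.25
```

Running this over every p/q never yields the pure cycle '9': a repeating block of nines would require a remainder step where r*10 // q = 9 forever, which forces r/q = 1, contradicting p < q. That, rather than an infinite summation, is the cleaner way to see that no fraction has expansion 0.999....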