Asymptotically, that's true.

In reality, O(n) and O(2n) can be quite different for small n. As can O(n)+k0 and O(2n)+k1. Or worse, O(n^2)+k2, where sufficiently large k0 and k1 make the quadratic algorithm the better choice because its k2 constant is so much smaller. Setup time matters.

Nowadays, you rarely have enough elements that the asymptotic behavior is the defining performance characteristic.
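
To make that concrete, a rough sketch in Python (exact numbers and the crossover point depend on the machine and interpreter): a pure-Python insertion sort is O(n^2) but does almost no work per step, while a pure-Python merge sort is O(n log n) with recursion and slicing overhead, so for small inputs the quadratic one usually wins.

    import random, timeit

    def insertion_sort(a):
        # O(n^2) comparisons, but very little work per step.
        a = list(a)
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def merge_sort(a):
        # O(n log n), but pays for recursion, slicing and merging.
        if len(a) <= 1:
            return list(a)
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    for n in (8, 32, 128, 512):
        data = [random.random() for _ in range(n)]
        t_ins = timeit.timeit(lambda: insertion_sort(data), number=2000)
        t_mrg = timeit.timeit(lambda: merge_sort(data), number=2000)
        print(f"n={n:4d}  insertion: {t_ins:.3f}s  merge: {t_mrg:.3f}s")

On a typical machine the crossover sits somewhere in the tens of elements, which is exactly why hybrid sorting routines exist.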

O notation is asymptotic by definition. The phrase "O(n) for small n" is completely meaningless. O(n) can never be different from O(2n): they denote the exact same set of functions.
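
Spelled out with the usual definition (one standard formulation; some texts phrase it with limits instead), the constant factor just gets absorbed into c:

    O(g) = \{\, f \mid \exists c > 0,\ \exists n_0,\ \forall n \ge n_0 : |f(n)| \le c\,|g(n)| \,\}

    f(n) \le c \cdot 2n \iff f(n) \le (2c) \cdot n \quad\Longrightarrow\quad O(2n) = O(n)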


Theoretically yes, but practically no. Big O notation is formally defined in terms of behaviour as n goes to infinity, but in practice there's no such thing as infinity: saying that an algorithm's runtime is in O(n) implies certain things about its behaviour for finite n, and that's kind of the point. It's also perfectly possible for an algorithm to be defined piecewise on the input size, so saying it has runtime O(n) for small n is a statement with practical significance.
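
Standard library sorts are the textbook example: they are literally defined piecewise on the input size. A minimal sketch of the idea in Python (the cutoff of 16 is an arbitrary illustration; real implementations such as Timsort or introsort tune theirs empirically):

    def insertion_sort(a, lo, hi):
        # Sort a[lo:hi] in place; cheap for short ranges.
        for i in range(lo + 1, hi):
            key = a[i]
            j = i - 1
            while j >= lo and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key

    def hybrid_sort(a, lo=0, hi=None, cutoff=16):
        # Piecewise by size: quadratic routine below the cutoff,
        # recursive merge sort above it.
        if hi is None:
            hi = len(a)
        if hi - lo <= cutoff:
            insertion_sort(a, lo, hi)
            return
        mid = (lo + hi) // 2
        hybrid_sort(a, lo, mid, cutoff)
        hybrid_sort(a, mid, hi, cutoff)
        merged, i, j = [], lo, mid
        while i < mid and j < hi:
            if a[i] <= a[j]:
                merged.append(a[i]); i += 1
            else:
                merged.append(a[j]); j += 1
        merged += a[i:mid] + a[j:hi]
        a[lo:hi] = merged

    data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
    hybrid_sort(data)
    print(data)  # [0, 1, 2, ..., 9]

Asymptotically hybrid_sort is still O(n log n); the cutoff exists purely because the constants differ, which is the point being argued above.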
