
Recursive fibonacci is highly artificial, but the fact that a compiler can greatly transform the program is an important advantage of compilers over interpreters.
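(For context, the benchmark in question is the classic naive doubly-recursive fibonacci — a minimal Python sketch, since the thread doesn't show the code itself:)

```python
def fib(n: int) -> int:
    # Naive recursion: each call spawns two more, so the call
    # tree grows exponentially in n (~phi^n calls).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(20))  # 6765
```

It's this exponential call tree that gives compilers (inlining, partial unrolling of the recursion) so much room to shine relative to interpreters on this particular toy.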


That's sort of true, in the abstract, but it wouldn't explain what's going on here. Even JavaScript (with a JIT, but not a separate compilation step, and certainly no type annotations) is faster at this than compiled Crystal. There are clearly other factors that are much more significant than simply having a compiler.


When you're doing something like writing your own routine for calculating fibonacci numbers, perhaps. But that's not the kind of optimization that's likely to be significant to most developers writing business applications.

I would also argue that the ubiquity of things like BLAS wrappers largely blurs these lines. For example, one of the big reasons I choose Python over Java (my company's primary language) for my work is that, thanks to numpy, Python absolutely smokes Java at crunching numbers, for my purposes. This despite Java being a compiled static language and Python being an interpreted dynamic language.
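(To illustrate the point about BLAS-style wrappers blurring the interpreted/compiled line — a hypothetical toy comparison, not the commenter's actual workload:)

```python
import numpy as np

xs = list(range(1_000))

# Interpreted path: one trip through the Python bytecode
# interpreter per element.
squares_py = [x * x for x in xs]

# NumPy path: a single call that dispatches into a compiled
# C loop over the whole array at once.
squares_np = np.array(xs) ** 2

assert squares_py == squares_np.tolist()
```

The "interpreted" language here spends almost none of its time in the interpreter for numeric work; it's dispatching to the same kind of compiled code a static language would generate.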


I had the (mistaken) impression that the GCC code significantly transformed the algorithm because there is only one recursive call instruction, but looking at it closer it's still exponential. I still think it's a pretty silly idea to use such an unrealistic benchmark, especially since it might be possible to make a compiler turn the exponential algorithm linear.


I'd be very surprised if there's a commercial compiler that would automatically optimize this into linear time. Maybe some obscure functional research language? But I certainly don't think LLVM is capable of dynamic programming.
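(For reference, the linear-time rewrite a compiler would have to discover on its own is ordinary memoization or bottom-up dynamic programming — a sketch of what that transformation produces, not something any mainstream compiler is known to derive automatically:)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Top-down memoization: each distinct n is computed once,
    # so the total number of calls is O(n) instead of O(phi^n).
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_iter(n: int) -> int:
    # Bottom-up dynamic programming: O(n) time, O(1) space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_memo(90) == fib_iter(90) == 2880067194370816120
```

Deriving either form automatically requires the compiler to prove the function is pure and to introduce a cache or accumulator — a whole-algorithm transformation, which is why it stays in research territory rather than LLVM's optimizer.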



