
I read the paper a few years ago, and I agree that for such an incredible algorithmic improvement, it's not trivial to find a use case, since you still need to maintain a separate (albeit asymptotically insignificant) lookup table. When I read it, I (mistakenly) hoped it could be used for indexing on-disk structures without ever hitting the disk for things like B-tree internal nodes. To get its sweet, sweet algorithmic complexity, the trick up its sleeve is (as I recall) once again the age-old one of rebuilding the structure whenever the size doubles, which makes it much less efficient than it sounds for most practical use cases. One good use case might be compressing database indexes, where you need to maintain a separate structure anyway and the space savings can be worth it.
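
To make the "separate lookup table" concrete, here is a toy Python sketch of a dereference-table interface. This is not the paper's actual construction; it only illustrates the basic idea that the key selects a small bucket and the tiny pointer only has to index within it. The class name, bucket size, hashing, and collision handling are all made-up placeholders.

    import hashlib

    class ToyDereferenceTable:
        # Toy sketch: keys hash to a small bucket, and the "tiny pointer"
        # is just the slot index inside that bucket, so it needs
        # log2(bucket_size) bits instead of log2(n) bits. The real scheme
        # load-balances across buckets and handles overflow properly.
        def __init__(self, n_slots, bucket_size=16):
            self.bucket_size = bucket_size
            self.n_buckets = max(1, n_slots // bucket_size)
            self.slots = [[None] * bucket_size for _ in range(self.n_buckets)]

        def _bucket(self, key):
            h = int.from_bytes(hashlib.blake2b(key.encode()).digest()[:8], "big")
            return h % self.n_buckets

        def allocate(self, key, value):
            # Store value; return a tiny pointer (slot index within the
            # key's bucket), or None if the bucket is full.
            bucket = self.slots[self._bucket(key)]
            for i, slot in enumerate(bucket):
                if slot is None:
                    bucket[i] = value
                    return i  # fits in log2(bucket_size) bits
            return None

        def dereference(self, key, tiny_ptr):
            # Recover the value from the key plus its tiny pointer.
            return self.slots[self._bucket(key)][tiny_ptr]

    t = ToyDereferenceTable(n_slots=1 << 20)
    p = t.allocate("user:42", "payload")
    print(p, t.dereference("user:42", p))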


I didn't invest a lot of time in it because when a paper like this shows no practical applications, I always assume the constant factors are too high, especially when no concrete values for those constants are given. It's still in the back of my mind to implement it one day and work out the nitty-gritty details.


Back-of-the-envelope calculations from the abstract: an 8-bit tiny pointer would be sufficient to reference an array of size 2^2^2^3 ≈ 1e77 (≈ the number of atoms in the visible universe) with fullness 31/32 ≈ 97%, or one of size 65536 with fullness 63/64. For 16-bit tiny pointers, there's no point in going bigger than the universe; fullness goes to 99.98%. Like you, I'd love to see such an example worked out.
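
For what it's worth, a minimal Python sketch of that arithmetic, assuming the pointer size behaves roughly like log2(k) + log2(log2(log2(n))) bits at load factor 1 - 1/k (my reading of the bound in the abstract, with constant terms dropped):

    import math

    def tiny_pointer_bits(n, k):
        # Assumed bound: ~log2(k) + log2(log2(log2(n))) bits for an
        # n-slot array at load factor 1 - 1/k (constants omitted).
        return math.log2(k) + math.log2(math.log2(math.log2(n)))

    # 8-bit pointer into 2^256 ~ 1e77 slots at fullness 31/32 (~97%)
    print(tiny_pointer_bits(2**256, 32))   # 8.0

    # 8-bit pointer into 65536 slots at fullness 63/64 (~98%)
    print(tiny_pointer_bits(2**16, 64))    # 8.0

    # A 16-bit budget at universe scale leaves 13 bits for log2(k),
    # so k = 2^13 and fullness = 1 - 1/8192
    print(1 - 1 / 2**13)                   # 0.99987...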




