
I wish authors would actually test their performance claims before publishing them.

I quickly benchmarked the two code snippets:

    arr.slice(10, 20).filter(el => el < 10).map(el => el + 5)
and

    arr.values().drop(10).take(10).filter(el => el < 10).map(el => el + 5).toArray()
but scaled up to much larger arrays. On V8, both allocated almost exactly the same amount of memory, but the latter snippet took 3-4x as long to run.
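
For anyone who wants to reproduce something like it, here's a minimal Node sketch along those lines (the exact array size and the scaled-up ranges are illustrative, not my actual harness):

    // Minimal benchmark sketch (Node/V8). Run with --expose-gc for stabler
    // numbers. The 10M size and the widened ranges are illustrative choices.
    const arr = Array.from({ length: 10_000_000 }, (_, i) => i % 20);

    function bench(label, fn) {
      globalThis.gc?.();                      // only available with --expose-gc
      const heapBefore = process.memoryUsage().heapUsed;
      const t0 = performance.now();
      const result = fn();
      const elapsed = performance.now() - t0;
      const heapDelta = process.memoryUsage().heapUsed - heapBefore;
      console.log(label, elapsed.toFixed(1) + 'ms',
                  (heapDelta / 1e6).toFixed(1) + 'MB', result.length);
    }

    bench('array',    () => arr.slice(10, arr.length - 10)
      .filter(el => el < 10).map(el => el + 5));
    bench('iterator', () => arr.values().drop(10).take(arr.length - 20)
      .filter(el => el < 10).map(el => el + 5).toArray());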



Over time, these performance characteristics are very likely to change in Iterator’s favour. (To what extent, I will not speculate.)

JavaScript engines have put a lot of effort into optimising Array, so that these sorts of patterns can run significantly faster than they have any right to.

Iterators are comparatively new, and haven't had as much optimisation effort put into them yet.


I'm the author of the blog post. As for speed, you are probably right: I was mainly talking about wasting memory on temporary arrays, not about speed, and it's unlikely that iterators are faster. But I'm curious: how large were the arrays you tested with? For example, would there be a memory difference for 10M-element arrays?


> I was mainly talking about wasting memory for temporary arrays

Right, but the runtime is perfectly capable of optimizing those temporary arrays out, which it appears to do.

> I'm curious, how large arrays did you test with? For example, will there be a memory difference for 10M size arrays

10M-element arrays are exactly what I tested with.


My speculation is also that with iterators the final array size is less predictable, because the engine can't know in advance when the iterator will finish, for example after a .filter().map() chain. So there is no way to precisely preallocate the memory for the result.
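
To illustrate with a naive sketch (collect here is a made-up stand-in for what something like .toArray() has to do):

    // Naive stand-in for .toArray() (hypothetical helper, for illustration).
    // After a .filter() step nothing knows how many elements will survive,
    // so the result can't be preallocated; the backing store grows as we go.
    function collect(iter) {
      const out = [];
      for (const el of iter) out.push(el); // resized dynamically as needed
      return out;
    }

    collect([1, 2, 3, 4].values().filter(el => el % 2 === 0).map(el => el * 10));
    // => [20, 40]; the final length of 2 was only known once the iterator finished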


Interesting. I did some testing: I just opened the task manager and ran this JS code in the browser, without opening dev tools, to see how the browser behaves when I don't prevent any optimizations.

Then I commented out withArrayTransform, uncommented withIteratorTransform, and ran it again in a fresh tab to prevent the browser from reusing the old process.

    const arr = Array.from({ length: 100_000_000 }, (_, i) => i % 10);

    // Eager version: every .filter()/.map() step allocates a full temporary array.
    function withArrayTransform(arr) {
      return arr.slice(10, arr.length - 10)
        .filter(el => el < 8).map(el => el + 5).map(el => el * 2).map(el => el - 7);
    }

    // Lazy version: elements stream through one at a time; only the final
    // .toArray() allocates.
    function withIteratorTransform(arr) {
      return arr.values().drop(10).take(arr.length - 20)
        .filter(el => el < 8).map(el => el + 5).map(el => el * 2).map(el => el - 7)
        .toArray();
    }

    console.log(withArrayTransform(arr));
    // console.log(withIteratorTransform(arr));

The peak memory usage with withArrayTransform was around 1.6GB; with withIteratorTransform it was around 0.8GB. Results vary somewhat between runs, and it's honestly hard to pin down, but the iterator version is consistently more memory-efficient. As for speed, the iterator version was about 1.5x slower.

So the GC probably cleaned up some of the temporary arrays quickly when it saw excessive memory usage while withArrayTransform(arr) was running.

But imagine you use flatMap, which unrolls each returned iterable: the temporary array it creates can be even bigger than both the original and the final one. So iterables still have the advantage of protecting against excessive memory usage that could crash the browser tab or the whole Node server. I think it's still a nice thing to have.
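
A toy illustration of that shape (the sizes here are arbitrary):

    // Toy illustration, arbitrary sizes: eager flatMap materialises the whole
    // expanded intermediate array (10M elements here) before .filter() runs.
    const base = Array.from({ length: 1_000_000 }, (_, i) => i);

    // Eager: the 10M-element temporary exists in memory all at once.
    const eager = base.flatMap(el => Array(10).fill(el))
      .filter(el => el % 3 === 0);

    // Lazy: each 10-element expansion is consumed and discarded as it streams.
    const lazy = base.values().flatMap(el => Array(10).fill(el))
      .filter(el => el % 3 === 0).toArray();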



