Have you seen that Flanagan's JavaScript: The Definitive Guide, the famously big book with the rhinoceros on its cover, recently got a new edition? I wonder how deep its JavaScript goes :-)
Unfortunately, the language has changed so much in recent years that I wouldn’t necessarily recommend that book anymore. A large number of fundamental features have been introduced since 2008, including async/await, arrow functions, class and extends syntax, proxies, and a ton of built-in methods like map() and forEach() on arrays.
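For a sense of the gap, most of this wouldn't even parse in a 2008-era engine (a throwaway sketch, nothing from the book):

```js
// Classes, arrow functions, proxies, and async/await are all post-2008 additions.
class Animal {
  constructor(name) { this.name = name; }
}
class Rhino extends Animal {}

const shouted = ['rex', 'spot'].map(name => name.toUpperCase()); // arrow fn + Array.prototype.map

async function load(url) {
  const response = await fetch(url); // async/await landed in ES2017
  return response.json();
}

const audited = new Proxy({}, {
  get(target, prop) { console.log('read:', prop); return target[prop]; },
});
```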
Though I recommend that my team avoid mutation if at all possible, there are times when I have found immer absolutely invaluable. Try spreading out an object 15 levels deep to replace a single property value and you'll catch my drift.

My biggest challenge with immer has been that my aesthetic taste in code leans functional, but immer makes code look imperative even when it is ultimately functional. It always looks a little wrong to my eye!
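For anyone who hasn't felt that pain, here's a minimal sketch of the trade-off (the nesting is made up, but produce() is immer's actual entry point):

```js
import { produce } from 'immer';

const state = { a: { b: { c: { value: 0 } } } };

// By hand: every level on the path to the change has to be copied.
const next1 = {
  ...state,
  a: { ...state.a, b: { ...state.a.b, c: { ...state.a.b.c, value: 42 } } },
};

// With immer: reads like a mutation, but produce() returns a new object,
// leaves `state` untouched, and structurally shares the unchanged parts.
const next2 = produce(state, (draft) => {
  draft.a.b.c.value = 42;
});
```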
Writing old-style pure code to make changes in a deeply nested structure is cumbersome, both to write and to read. Eventually every language will find nicer ways to let programmers express these updates.

JavaScript doesn't provide a nice way to extend the language syntax, unlike some other programming languages. So most DSLs end up looking like imperative code while doing pure updates underneath. I understand that this is confusing for the reader: unless you know it's a DSL, you can't tell whether the mutations are pure or not.

Compare that with Haskell, whose support for custom operators allows for some quite succinct code for expressing deep updates (cf. the lens library).
This feature feels pretty new, even though it was released in ES5 and got some more fixes in ES6. It's useful for having immutable data, which was all the rage when React and Redux were introduced.
This looks like it's worth a place on an electronic bookshelf. Among other things, following a link in the contents led me to this little gem: https://mathiasbynens.be/notes/globalthis
Some languages are typically taught in introductory programming courses - it's a very deliberate process aimed at explaining concepts from the ground up. JS is the opposite, people mostly learn it top-down, doing web development and being forced to use it.
And just as eager college graduates need to learn that it's sometimes a waste of time to optimize an algorithm, DIY JS devs can greatly benefit from a bit of theory and technical depth. Resources like this are perfect for that.
I can also wholeheartedly recommend Kyle Simpson's You Don't Know JS.
One of the things I love about JavaScript is that you can write a hash table that's almost 10x faster than the built-in one if you take care of cache misses and GC pressure: https://github.com/ronomon/hash-table#motivation
In fact, all the usual low-level optimization techniques like reducing branch mispredictions and expensive memory accesses apply, and make a huge difference, even though you're writing in a high-level language.
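The core trick behind tables like that is easy to sketch (illustrative only, the real library is far more involved): keep every entry in one flat typed array, so the GC traces a single allocation instead of millions of little objects, and probing walks contiguous memory.

```js
const KEY = 4, VAL = 4, SLOT = KEY + VAL; // bytes per field / per slot
const SLOTS = 1 << 20;                    // must be a power of two
const view = new DataView(new ArrayBuffer(SLOTS * SLOT));

// Knuth multiplicative hash; Math.imul keeps the multiply in 32 bits.
const hash = (key) => Math.imul(key, 2654435761) >>> 0;

function set(key, value) {                // key: nonzero uint32 (0 marks empty)
  let slot = hash(key) & (SLOTS - 1);
  for (let i = 0; i < SLOTS; i++) {
    const offset = slot * SLOT;
    const found = view.getUint32(offset);
    if (found === 0 || found === key) {
      view.setUint32(offset, key);
      view.setUint32(offset + KEY, value);
      return;
    }
    slot = (slot + 1) & (SLOTS - 1);      // linear probe: the next slot over
  }
  throw new Error('table full');
}

function get(key) {
  let slot = hash(key) & (SLOTS - 1);
  for (let i = 0; i < SLOTS; i++) {
    const offset = slot * SLOT;
    const found = view.getUint32(offset);
    if (found === key) return view.getUint32(offset + KEY);
    if (found === 0) return undefined;    // hit an empty slot: not present
    slot = (slot + 1) & (SLOTS - 1);
  }
  return undefined;
}
```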
*for a very specific use-case where you have an ungodly amount of data to insert
Still, it's interesting that there is something to be gained under these circumstances. I'm typically skeptical of this sort of thing because the standard library is written in C++. One time I thought I was very clever and hand-wrote a more appropriate sorting algorithm for a specific use-case only to discover that, no, Array.sort() was still faster by sheer brute force.
We actually wrote the original version in C with SIMD extensions as a Node.js binding, but the JavaScript version was still twice as fast. I kid you not. You will find the reason in the README; it's the last bullet point under "Fast": https://github.com/ronomon/hash-table#fast
I am not sure, but I wouldn't think so, unless you have to serialize/deserialize or otherwise transform or inspect function call arguments in some way or other, as you need to do when binding JavaScript to C.
It's why managed languages have such a bad rap: people think that this stuff ceases to matter.
I remember the C# DirectX billboard sample; it was something like 10x slower than the same C++ sample in the same version of the SDK. Why? C# didn't have generics yet, and a value type was being stored in a non-generic list. Something like 4MB of memory was being copied around due to boxing and unboxing.
I do appreciate the effort the author put into that library and obviously performance is important.
However, JS doesn't primarily live in the server-side world; it mostly lives in the browser. And in the browser, you rarely do heavy processing. If you are, you're doing it wrong - that logic needs to live on the server.

You're also doing it wrong if you are relying on server-side Node.js for tasks that involve heavy-duty computations.

Still, interesting to know. I wonder if the author could turn this repo into a proposal to the TC39 committee.
I consider this wrong on all counts and I don't see where you're coming from to make these claims. You obviously benefit from performance gains on the server and client in standard use-cases.
A Node.js server's main thread dies by a thousand little cuts. A faster hash implementation can reduce event-loop delay by a nontrivial amount. This isn't heavy processing.

Same on the client. Any sort of game could have a good reason to be doing hash-table lookups in a hot loop on the client. This isn't heavy processing in some exotic use-case; it's rather elementary. And perf wins especially help slow clients. And freeing up the main thread lets you fit in more of the cycles you can't eliminate.
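To make that concrete, here's the shape of it (hypothetical names and numbers, just to show the hot path):

```js
// Hypothetical per-frame hot loop: one hash lookup per entity, 60 times a
// second. At 10,000 entities that's 600k lookups per second on the main
// thread, so the hash table's constant factors matter directly.
function tick(entities, grid) {                  // grid: Map from cell key to entity list
  for (const e of entities) {
    const key = ((e.x >> 6) << 16) | (e.y >> 6); // 64px grid cell -> integer key
    const cell = grid.get(key);                  // the lookup in question
    if (cell) e.candidates = cell;               // e.g. collision candidates
  }
}
```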
Also, moving work to the server because your client implementation is too slow, and then generalizing that to "always do work on the server", is circular. Where you do work is fundamentally a business-logic / product-design concern; it only becomes a performance concern in the suboptimal case where the work doesn't fit on the server or the client. So faster implementations move what were performance concerns back into the realm of higher-level product design decisions.
These aren't symptoms of "doing it wrong". This is just plain jane software engineering.
> And in the browsers, you rarely do heavy processing. If you are, you're doing it wrong - that logic needs to live on the server.
This is very incorrect. At my last company we had a React interface (a specialized IDE, really) that needed to juggle (sort, filter, process) and work with sometimes hundreds of thousands of entities at once, all in-memory. JavaScript did this just fine, and the user experience (and the API design) really benefitted from not having a bunch of extra round-trips to the server.
Even in this extreme use-case, the bottleneck was always the few-hundred items that were actually rendered in the DOM at a time. We virtually never had performance problems from sheer volume of underlying data.
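That's the usual list-virtualization trick; with fixed row heights the core of it is just (a sketch):

```js
// Of 300,000 items in memory, only the ~40 rows intersecting the viewport
// are ever mounted in the DOM; everything else stays a plain JS object.
function visibleSlice(items, scrollTop, viewportHeight, rowHeight) {
  const start = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight) + 1;
  return items.slice(start, start + count);
}
```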
> At my last company we had a React interface (a specialized IDE, really) that needed to juggle (sort, filter, process) and work with sometimes hundreds of thousands of entities at once, all in-memory.
The author's example inserts 4M elements.
> and the user experience (and the API design) really benefitted from not having a bunch of extra round-trips to the server.
Possibly, but unless you benchmarked both scenarios, this might not be the case.
Performance over networked devices often involves trade-offs, and sometimes those trade-offs are directly contradictory, e.g. sorting/filtering/mapping data on the client side versus on the server side via an API call. Sometimes one is better than the other, but you won't know unless you benchmark.

I get the impression that your company did not go through that exercise, considering you guys are building an IDE with a scripting language.[1]
The only reason the example inserts 4M elements is that Set and Object start to become prohibitively slow at some point and crash the process from too many allocations, not to mention the stress on the GC, which now has to follow so many pointers.
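You can watch it happen with a one-minute experiment (a rough sketch; absolute numbers vary by machine and engine):

```js
// Insert 4M integer keys and time it. Bump the count and watch the time
// per insert climb (and the process eventually fall over) as heap and GC
// pressure grow.
console.time('Set: 4M inserts');
const s = new Set();
for (let i = 0; i < 4_000_000; i++) s.add(i);
console.timeEnd('Set: 4M inserts');

console.time('Object: 4M inserts');
const o = {};
for (let i = 0; i < 4_000_000; i++) o[i] = i;
console.timeEnd('Object: 4M inserts');
```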
HashTable performance is a fundamental component of any language.
> HashTable performance is a fundamental component of any language.
Agreed, but why use Node if performance was so critical to your use case? Could this 4M-insert logic exist in a separate service that lives outside your Node code base?
It's one thing to write a web service in a different language for performance; it's a much bigger question to move logic from your client to your server for the sake of performance. Both from a user experience standpoint and from an architectural complexity standpoint.
One of the selling points of tensorflow.js is that you do your processing on the client, for privacy reasons. This is heavy, heavy processing, and you're doing it right.
The thing is, the result above isn't even specific to JavaScript (in PHP, for example, it also returns `true`), and it makes a lot of sense if you know what the bitwise operators do.
Now I'm curious what result the OP was expecting... that's how bitwise NOT should work on (IEEE 754) floating-point numbers regardless of the language, isn't it?
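The original expression isn't quoted in this subthread, but the rule is easy to demonstrate: bitwise operators first truncate their operand to a 32-bit integer (the spec's ToInt32), and only then operate on the bits.

```js
~2.5   // -3: ToInt32(2.5) === 2, and ~2 === -3
~-2.5  //  1: ToInt32(-2.5) === -2, and ~-2 === 1
~~2.5  //  2: double NOT as a (cryptic) truncation idiom
~5e9   // -705032705: values outside 32 bits wrap modulo 2^32 first
```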
(https://www.amazon.co.uk/JavaScript-Definitive-Guide-David-F...)