You might never have to do a bitshift when writing analytics software in Python, sure, but wanting to understand how a computer works is necessary to be a good developer.
Curiosity is the point here. And it pays off in the long term.
If you don't look and explore outside of your comfort zone, you'll end up missing things, producing poor designs, or reinventing the wheel quite a lot.
I did plenty of the usual bitwise stuff in C in my undergrad years and recently dove back into K&R C to see if I could re-learn it. No lack of curiosity here.
In general I agree with you; I just don't think demanding that my developers know bitwise stuff is the particular hill upon which I'd like to die. There are other fundamentals I'd value far more highly and would consider to be hard requirements such as understanding data structures and some decent idea of big-O, and a few other things as well.
> hard requirements such as understanding data structures and some decent idea of big-O,
That's the problem with current software. Web devs and others who have gone through some sort of CS curriculum know only the high-level stuff and know big-O, but have no idea how a computer works. Big-O (mostly) only kicks in when large numbers of items are involved; in the meantime what matters is how slow each pass of the iteration is (and even when big-O kicks in, it just multiplies the base unit, so if the base unit is slow, that cost scales right along with it, as they like to say :-) ).
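As a minimal, hypothetical C sketch (names are mine, not from the thread): both traversals below are O(n), but the per-pass cost differs a lot, because one walks a contiguous array and the other chases pointers through memory, and big-O hides exactly that constant.

```c
#include <stddef.h>

/* Both traversals are O(n); what differs is the cost of each pass. */

long sum_array(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];               /* contiguous reads: cache- and prefetch-friendly */
    return s;
}

struct node { int value; struct node *next; };

long sum_list(const struct node *p) {
    long s = 0;
    for (; p != NULL; p = p->next)
        s += p->value;           /* each pass is a dependent pointer load, often a cache miss */
    return s;
}
```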
If you only know and trust what the high level gives you, you have no idea how slow each base unit (operation) is, or how much data movement and memory each may require. You can believe that copying a whole memory area (a data structure) is the same as copying a native integer, since it is the same operation in the high-level language, except one can be a simple register-to-register transfer, while the other means performing a loop with many memory accesses, both for reading and writing. You can think that exponentiation is the same as an addition, since both are provided as primitives by the high-level language, except the addition is a native single-cycle CPU operation, while the exponentiation means looping over a few instructions because the CPU does not have hardware exponentiation. You can think that floating-point calculations are as fast as integer operations, especially when some of the favourite languages of that demographic found it clever to provide only a single number type and implement it as FP; yet they are generally still slower (division being an exception).
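To make that concrete, here is a small C sketch (purely illustrative, names are mine): the two assignments look identical at the source level, and the exponentiation call looks as atomic as the addition, yet they compile to very different amounts of work.

```c
#include <stdint.h>

struct big { uint8_t bytes[4096]; };   /* a large value type */

/* Common CPUs have no integer-exponentiation instruction, so it
 * ends up as a loop of multiplications behind a single call. */
static uint64_t ipow(uint64_t base, unsigned exp) {
    uint64_t result = 1;
    while (exp--)
        result *= base;
    return result;
}

void demo(void) {
    int x = 1, y;
    struct big a = {{0}}, b;

    y = x;       /* one move, register-to-register or register-to-memory      */
    b = a;       /* same '=' in the source, but the compiler emits a 4 KiB
                    copy: effectively a memcpy loop over memory               */
    (void)y; (void)b;

    uint64_t m = 2, n = 30;
    uint64_t sum = m + n;                  /* a single ALU addition           */
    uint64_t pw  = ipow(m, (unsigned)n);   /* many multiply iterations        */
    (void)sum; (void)pw;
}
```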
That's pretty much the theme of the original post. Instead of doing an actually simple operation, it does a lot of things without any idea of how much pressure they put on the hardware or how many resources they use: allocations, several loops. After all, it looks like a chain of simple operations in the high-level language; the author doesn't know which ones translate to basic operations, which turn into a complex, loop-intensive and/or resource-heavy stream of instructions, and which are just there to please the compiler and are not translated into anything at all.
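A classic generic illustration of that pattern (a sketch of mine, not the code from the original post): each line in the loop reads as one simple append, yet every strcat call rescans the whole destination buffer from the start, so the "chain of simple operations" quietly becomes quadratic work.

```c
#include <stdio.h>
#include <string.h>

/* Assumes 'buf' is large enough; this is a sketch, not hardened code. */
void build_report(const char *items[], size_t count) {
    static char buf[1 << 20];
    buf[0] = '\0';
    for (size_t i = 0; i < count; i++) {
        strcat(buf, items[i]);   /* rescans the whole buffer to find its end... */
        strcat(buf, ", ");       /* ...then rescans it all over again           */
    }
    puts(buf);
}
```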
The cure for some of this is, however, quick and simple: opening a CPU's PDF manual once in one's life and looking at the instruction set chapter should suffice to ground the reader in reality somewhat. It doesn't bite.
I am actually very curious, but I curate what to learn in the other direction - databases, multi-core programming, systems architecture, organisational architecture, consumer and business psychology, sales and marketing, game theory, communication, understanding business processes, that sort of thing.