
No shame in that.

You're going to see lots of people scoffing at a failure to understand such "foundational" aspects of programming but I would not agree with them.

In a "traditional" (let's say, up through the 90's? perhaps longer? maybe it's still that way today?) computer science education with lots of C and assembly, you do a lot of twiddling of individual bits. Either because you're trying to pack multiple values into a single integer (bits 0-3 are the player's score, bit 4 is the number of lives remaining, bits 5-8 are the time remaining, etc) or for certain kinds of low level math. So, folks who went through that sort of education will tend to view this as foundational knowledge.

There will always be a need for such bit-twiddling, of course. Folks writing low level code, binary network protocols, etc.

However, as you may already be thinking to yourself, most software engineering work today decidedly does not involve that sort of thing.

Essentially, IMHO you just need to know that this stuff exists - to know that most languages give you ways to twiddle bits in case you actually need to twiddle bits someday. Bitwise stuff is generally pretty simple; you can just get familiar with it if you ever actually need it.



I disagree. Would you fly with a pilot who never learned the basics of lift and drag? Or hire a builder who didn't understand loads and support? But we call people professionals who build software with no understanding of computing fundamentals?

Even getting an associate's degree in networking required me to understand how binary operations work and do variable-length subnetting by hand, even though subnet calculators are readily available.
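
That subnetting is just shifts and masks underneath; here's a rough C sketch of what the calculators do for you (the /26 prefix and the address are arbitrary examples):

    #include <stdint.h>
    #include <stdio.h>

    /* Build a netmask from a prefix length and apply it to an address.
       Example values (/26, 192.168.1.200) are arbitrary. */
    int main(void) {
        unsigned prefix = 26;
        uint32_t mask = prefix ? 0xFFFFFFFFu << (32 - prefix) : 0;
        uint32_t addr = (192u << 24) | (168u << 16) | (1u << 8) | 200u;
        uint32_t network = addr & mask;

        printf("mask    = %u.%u.%u.%u\n",
               mask >> 24, (mask >> 16) & 0xFF, (mask >> 8) & 0xFF, mask & 0xFF);
        printf("network = %u.%u.%u.%u\n",
               network >> 24, (network >> 16) & 0xFF,
               (network >> 8) & 0xFF, network & 0xFF);
        printf("hosts per subnet = %u\n", (1u << (32 - prefix)) - 2);
        return 0;
    }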

Maybe I'm off base here in considering it a computing fundamental, but it's difficult for me to see things like this and not associate them with my super-computer phone routinely pausing for several seconds while I'm typing.


This comes up once in a while, both when I talk to people who've been in the field much longer than me and when I interview new folks. We had a couple of people try to add bit shifting and memory management to the interview process, and we're a shop that does APIs in Go & Java, apps on Ruby/Rails, frontend in React/HTML/CSS, mobile apps in Kotlin and Swift, and data on Postgres, Redis, S3, DynamoDB, etc. Most people in this kind of shop simply haven't heard of binary operations or malloc/free, and none of our codebases have them. Lots of people had a hard time letting go of the notion that these skills were "foundational" or coming to grips with candidates not knowing this but still being effective developers.


Again, while I can see where you're coming from, I really do find it difficult to believe they can be all that great. I look around at all the painfully slow software I have to use, deployed by billion-dollar companies and worked on by armies of developers who, presumably, were hired with that very mindset.


I really disagree with this. I've spent time optimizing web apps (both client- and server-side), and I've never used bit-twiddling to do it. An application can be very fast with none of that.

The performance problems you run into on the server side are generally related to bad data access patterns with an external database or caching layer. They are solved by writing better queries, or organizing the data more efficiently. As an end user of those systems, you do not interact with the raw bytes yourself. Of course if you're actually writing a database engine or a cache, then you would.

On the front-end, it's a bit more complex. Often what makes apps slow is actually the way their assets are loaded. For example, using web fonts can incur a major performance hit if you're not very careful. Many shops have moved away from web fonts for this reason. Similarly, loading JavaScript at the wrong time can make things slow, because the browser has to wait for the JS to load. Beyond that, slowness is often about inefficiently rendering data, or again loading more data than you need. To make a fast app, you mostly need to be aware of how browsers load and render JS, CSS and HTML. Bit twiddling is not really relevant.

But really what this comes down to is that no one wants to pay for performance. An app that's kind of slow still pays the bills, and engineers are not in the driver's seat to prioritize performance. Instead, the focus is on finding ways to get people to pay you more money. It's much easier to sell features than performance, so that's where the focus is.


    looking around at all the painfully slow software I have to use
Unsound fundamentals (not knowing when to use the right data structures, etc.) surely do contribute to software bloat, but I don't think they're anywhere near the largest contributor to that problem, not by a long shot.

The overwhelming issue IMO is a lack of time allocated for developers to actually focus on perf. I've been doing this for years and I almost always need to fight+claw for that time, or just do it guerrilla-style when I'm really supposed to be doing something else.

It's certainly true that a lot of performance issues could be avoided in the first place if developers made better choices, picking the correct data structures and so on, and stronger fundamentals (though not bit twiddling in particular) would definitely help with that. But in most projects I've worked on, requirements change so rapidly that it's tough to bake perf in from the start even if you're willing and capable.


Is there any evidence that engineers who do know all of the fundamentals you expect actually produce faster software? I would bet that a lot of the sluggish apps you have in mind were written by people with great fundamental comp-sci knowledge.

Google probably hires for fundamentals as much as any company in the world, and yet some of their main apps (e.g. Gmail) have abysmal performance. There are just so many factors that can result in poorly performing software that it's weird to me you're so focused on something a great many engineers have never needed to use in practice.


> Most people in this kind of shop simply haven't heard of binary operations or malloc/free

And this is how the industry reinvents the wheel every 10 years.


Or how it moves forward? I also don't know about vacuum tubes, tape drives, GOTO, soldering, capacitors, IC schematics, or assembly. So Steve Woz is a role model, but today's Woz will write Swift or Go, or, god forbid, JavaScript.


You are making a strawman.

If people forget lessons from the past and repeat the same mistakes over and over, that's the opposite of technology moving forward. It hampers progress. This is why it's called "reinventing the wheel".

Yet, if you want an extreme example, here's RISC-V *currently* adding support for vector instructions.

Vector instructions were introduced on supercomputers like the CDC STAR-100 and the Cray-1 back in the 1970s and then forgotten for a good while.

https://riscv.org/wp-content/uploads/2015/06/riscv-vector-wo...


> vector instructions ... forgotten for a good while.

Not quite "forgotten" all of that time, though. Didn't x86 add them around the turn of the century? And probably other CISC architectures too, now and then over the decades. It's not like they went away totally, and only popped up again right now.


Apparently you do "know about" all those things, at least on some level -- otherwise, how would you know to mention them?


How do these developers fix implementation bugs? If Go, Ruby, etc. segfault continuously with some code, does all development just stop? Understanding these things doesn't make you an ineffective developer; it just allows you to go deeper. Also, learning low-level programming gives you the knowledge to write performant solutions in high-level languages.


No, you are right, it’s fundamental. It’s like not knowing what memory is, let alone how it’s managed. This isn’t an attempt at gatekeeping and I do understand that it’s possible to write useful software without understanding the fundamentals of von Neumann machines. But how is it possible to not wonder about how these magical boxes actually represent the software we write in memory? It’s not like you need a comprehensive understanding of solid state physics here.
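
Even a minimal, toy C sketch of manual memory management and pointer arithmetic (the array size here is arbitrary) shows the kind of mental model I mean:

    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal manual memory management + pointer arithmetic.
       The array size is arbitrary. */
    int main(void) {
        size_t n = 8;
        int *buf = malloc(n * sizeof *buf);   /* ask the allocator for space */
        if (buf == NULL)
            return 1;

        for (size_t i = 0; i < n; i++)
            *(buf + i) = (int)(i * i);        /* pointer arithmetic: buf + i */

        printf("buf[3] = %d\n", *(buf + 3));
        free(buf);                            /* nothing does this for you   */
        return 0;
    }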


I did plenty of the usual undergrad bitwise stuff in school and recently dove back into K&R C to refresh myself, so in a lot of ways I agree.

    But how is it possible to not wonder about how these 
    magical boxes actually represent the software we 
    write in memory? 
I mean, we all draw the line somewhere. Somewhere, there's a chip designer wondering how you have spent so long in this industry without bothering to get familiar with chip lithography techniques and such.


I get where you're coming from with the metaphor, but I see these types of comparisons a lot when talking about computing "fundamentals" and they've never really felt right to me.

Even though pilots may know the basics of lift and drag, they have abstractions over those things (tools, processes) to manage them. That really isn't any different from saying "I get what bit manipulation is and why it exists, but I've never needed it".

Also - you _can_ learn fundamentals on the fly in software dev. Sure, not everyone has the drive to, but you can't reasonably google "how does lift/drag work" as a pilot who is flying the plane :)


    I disagree. Would you fly with a pilot who 
    never learned the basics of lift and drag?
Lift and drag are applicable to every flight. Bit twiddling is not relevant to every software development effort.

    But we call people professionals who build 
    software with no understanding of computing 
    fundamentals?
Broadly speaking, I'd actually agree with you here! Solid fundamentals are good and vital and a lot of developers don't have them.

I just don't think bit-twiddling is one of the particular software development fundamentals I'd deem necessary.


> Would you fly with a pilot who never learned the basics of lift and drag? Or hire a builder who didn't understand loads and support? But we call people professionals who build software with no understanding of computing fundamentals?

Try going to a construction site and asking the people working there about the properties of the materials they use; you're in for a surprise.


How many of those people are going to describe themselves as engineers though?


Probably no one, because that's not the culture. In software everyone is an engineer because that's what we call people. In France we sometimes call cleaners "technicien de surface", which roughly translates to "surface technician". That's obviously not the kind of technician we usually think of, but in that context it's clear.


In Sweden, we went from "städare" (cleaner) to "lokalvårdare" ("premises caretaker"), and thence -- sarcastically! -- to "sanitetstekniker" ("sanitation technician")... in the 1980s, IIRC.


Ah, I see why you're getting so much disagreement in this thread. Software job titles have no precise definitions in my experience. It would never occur to me to be astonished that a Software Engineer doesn't know something, yet unfazed when a Software Developer with the same knowledge and experience doesn't know it either. ICs most often don't even pick their own titles; the company comes up with whatever they feel like.


"I don't need bitshift" is not enough.

You might never have to do a bitshift when writing analytics software in Python, sure, but wanting to understand how a computer works is necessary to be a good developer.

Curiosity is the point here. And it pays off in the long term.

If you don't look and explore outside of your comfort zone you'll end up missing something and doing poor design or reinventing the wheel quite a lot.


I did plenty of the usual bitwise stuff in C in my undergrad years and recently dove back into K&R C to see if I could re-learn it. No lack of curiosity here.

In general I agree with you; I just don't think demanding that my developers know bitwise stuff is the particular hill upon which I'd like to die. There are other fundamentals I'd value far more highly and would consider to be hard requirements, such as understanding data structures and some decent idea of big-O, and a few other things as well.


> hard requirements such as understanding data structures and some decent idea of big-O,

That's the problem with current software. Web devs and others who have gone through some sort of CS curriculum know only high-level stuff and big-O, but have no idea how a computer works. Big-O (mostly) only kicks in when large numbers of items are involved; meanwhile what matters is how slow each pass of the loop is (and even when big-O kicks in, it just multiplies the base unit, so if the base unit is slow, that cost still counts, it will "scale" as they like to say :-) ).

If you only know and trust what the high level gives you, you have no idea how slow each base unit (operation) is, or how much data movement and memory each may require. You can believe that copying a whole memory area (a data structure) is the same as copying a native integer, since it is the same operation in the high-level language, except one can be a simple register-to-register transfer while the other means performing a loop with many memory accesses, both for reading and writing. You can think that exponentiation is the same as an addition, since both are provided as primitives by the high-level language, except one is a native single-cycle CPU operation while the other means looping over a couple of instructions because the CPU doesn't have hardware exponentiation. You can think that floating-point calculations are as fast as integer operations, especially when some of the favourite languages of that demographic found it clever to provide only a single type of number and implement it as FP; yet they are still slower (except division).
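
To make that concrete, here's a small C sketch (the struct size is arbitrary) of two assignments that look identical at the source level but cost very different amounts:

    #include <string.h>
    #include <stdio.h>

    /* Two "assignments" that look the same at a high level but are not:
       copying an int is one register/memory move; copying this struct
       means copying all 4096 bytes (typically a memcpy-style loop). */
    struct big { char payload[4096]; };

    int main(void) {
        int a = 42, b;
        struct big x, y;
        memset(&x, 0, sizeof x);

        b = a;   /* a single move                                  */
        y = x;   /* copies 4 KB, e.g. compiled to a call to memcpy */

        printf("%d %d\n", b, (int)y.payload[0]);
        return 0;
    }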

That's somewhat the theme of the original post. Instead of doing an actually simple operation, it does a lot of things without any idea of how much pressure they put on the hardware or how many resources they use: allocations, several loops. After all, it looks like a chain of simple operations in the high-level language; the author doesn't know which ones translate to basic operations, which translate into a complex, loop-intensive and/or resource-heavy stream of instructions, and which are just there to please the compiler and aren't translated into anything.

The cure for some of this is, however, quick and simple: opening the PDF manual of a CPU once in one's life and having a look at the instruction set chapter should suffice to somewhat ground the reader in reality. It doesn't bite.


I am actually very curious, but I curate what to learn in the other direction - databases, multi-core programming, systems architecture, organisational architecture, consumer and business psychology, sales and marketing, game theory, communication, understanding business processes, that sort of thing.


...and that's why a good interview process requires many interviews on many topics.

If you can prove that you have in-depth knowledge on a lot of topics, a single data point about whether you know bit shifting will not be an issue.


Exactly. The other thing I get dirty looks for from some people is when I tell them I've never done memory management. Every language I've worked in professionally has garbage collection, and I've also never done pointer arithmetic.



