
> In many of the newer systems, all those physical dials and switches are just inputs to the computer system which ultimately decides to do what the user is requesting.

Even so, a program for processing a switch or dial can be really short and simple. You can print it out on a sheet and check and double-check every line of code to make sure it's correct and all possibilities are accounted for.

A program handling a touchscreen will be complicated. Millions of lines of code. Maybe even billions. The best you can hope for is empirically verifying it's mostly correct most of the time.



You've done a lot of programming for hardware switches and such then?

I do some. And the last device we built, we still fight with a simple rotary switch. You have to do things like debounce inputs that seem like obvious binary switches. Getting the debounce windowing right can be just as "guessy". And guess what the highest point of failure on said device is. That selector switch. Had similar experience with buttons. I think the software part is just two forms of the Law of Conservation of Ugly.

I do like tactile better, but more for affordance/discoverability (e.g. ergonomic) issues than what you're driving at above.


Yeah processing switch and button and encoder data manually is terrible. Once you install a library to wrap this hardware device in a sane process, how different is that than getting an x/y pixel coordinate from a touchscreen? Touch technology is incredibly reliable. I touch my phone probably 5k times a day or more and I don’t have touch failures, and I carry it around with me and get debris on it and drop it off of tables and all that too. I would argue that a touch interface is one of the most reliable from a hardware standpoint despite not being tactile.


If anyone else is wondering what debouncing is:

https://my.eng.utah.edu/~cs5780/debouncing.pdf
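For a feel of what this looks like in practice, here's a minimal sketch of one common approach (a counter that requires the raw input to be stable for N consecutive samples). The tick count and struct layout are illustrative choices, not anything prescribed by that PDF:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Counter-based debouncer: the raw reading must hold steady for
 * STABLE_TICKS consecutive samples (e.g. sampled every 1 ms) before
 * the debounced state is allowed to change. The threshold of 10 is an
 * illustrative guess - picking it properly is the whole point of the
 * linked paper. */
#define STABLE_TICKS 10

typedef struct {
    bool stable;     /* last accepted (debounced) state */
    bool last_raw;   /* raw reading from the previous sample */
    uint8_t count;   /* how many samples the raw reading has held */
} debouncer_t;

/* Call once per sample tick; returns the debounced state. */
bool debounce(debouncer_t *d, bool raw) {
    if (raw != d->last_raw) {
        d->count = 0;            /* input changed: restart the window */
        d->last_raw = raw;
    } else if (d->count < STABLE_TICKS) {
        d->count++;
        if (d->count == STABLE_TICKS)
            d->stable = raw;     /* held long enough: accept it */
    }
    return d->stable;
}
```

A bouncy burst of alternating samples keeps resetting the counter, so it never registers; only a steady press does.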


Haha, this is actually a remarkably funny read with some grizzled, hard-won bits of wisdom sprinkled in.

"It’s surprising how many of those derelicts hanging out at the waterfront bars pick an almost random time constant. “The boys ‘n me, we jest figger sumpin like 5 msec”. Shortchanging a real analysis starts even a clean-cut engineer down the slippery slope to the wastrel vagabond’s life."


Pressure / proximity info is just as noisy and requires its own version of debouncing, plus x/y jitter handling on top. (Is it a click, a drag, or just hovering?) My partner can't even stop accidentally registering right-clicks on her laptop touchpad, which should be a really polished experience these days.

I'm not sure I buy touchscreens ever being simpler to handle (or even in a similar range; they're strictly harder).
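To make the "jitter handling on top" concrete, here's a toy sketch of the extra layer a touch input needs even after the contact signal itself is clean: deciding whether finger movement is jitter (still a tap) or a real drag. The 8-pixel slop radius is an illustrative guess, not any platform's actual value:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Tap-vs-drag discrimination with a "slop" threshold. Movement within
 * TAP_SLOP_PX of the touch-down point is treated as jitter; crossing
 * it commits the gesture to a drag. */
#define TAP_SLOP_PX 8

typedef struct { int down_x, down_y; bool dragging; } touch_t;

void touch_down(touch_t *t, int x, int y) {
    t->down_x = x; t->down_y = y; t->dragging = false;
}

/* Each move event: once the finger strays past the slop radius,
 * commit to a drag; until then, the movement is ignored as jitter. */
void touch_move(touch_t *t, int x, int y) {
    if (abs(x - t->down_x) > TAP_SLOP_PX ||
        abs(y - t->down_y) > TAP_SLOP_PX)
        t->dragging = true;
}

/* On release: it was a tap only if we never crossed the threshold. */
bool touch_up_was_tap(const touch_t *t) { return !t->dragging; }
```

And this is before pressure, palm rejection, or multi-touch enter the picture.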


Even modern AAA computer games sometimes miss mouse clicks, because they foolishly poll for transitions of the button up/down state in the main loop, for each frame they render, instead of properly tracking the OS event queue.

It's a very common (and lazy) way of programming games (and other more mission-critical apps): naively polling the input device state in the main simulation or rendering loop, instead of actually responding to each and every queued operating system event like mouse clicks.

It's entirely possible to get multiple mouse down/move/up/click events per render frame, if the system has frozen or stalled for any reason (which happens all the time in the real world). But polling just can't deal with that, so it sometimes ignores legitimate user input (often at a critical time, when other things are happening).

So it's still unfortunately quite common for many apps to sometimes miss quick mouse clicks or screen touches, just because the system freezes up for an instant or lags behind (like when the CPU overheats and the fan turns on madly and SpeedStep clocks the CPU waaaay down, or even the web browser opens up another tab, or anything else blocks the user interface thread), and it just doesn't notice the quick down/up mouse button transition that it would have known about if it were actually tracking operating system events instead of polling.
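The failure mode described above can be shown with a toy model (the event codes and queue here are hypothetical stand-ins, not any real OS API): if a quick press-and-release lands entirely between two frames, per-frame state polling never sees a transition, while draining a queue of events does:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: a full click that starts and ends inside one (stalled)
 * frame. Polling compares button state at frame boundaries; the
 * event queue records every transition in between. */
enum { EV_DOWN, EV_UP };

typedef struct { int events[16]; int head, tail; } queue_t;

void push(queue_t *q, int ev) { q->events[q->tail++ % 16] = ev; }

/* Polling: a click is only noticed if the state differs between two
 * consecutive frames. A down-and-up within one frame leaves the
 * state unchanged at both boundaries, so it is silently dropped. */
int clicks_by_polling(bool prev_state, bool cur_state) {
    return (prev_state && !cur_state) ? 1 : 0;
}

/* Event-driven: drain everything queued since the last frame, so no
 * transition is lost no matter how long the frame took. */
int clicks_by_queue(queue_t *q) {
    int clicks = 0;
    while (q->head != q->tail)
        if (q->events[q->head++ % 16] == EV_UP)
            clicks++;
    return clicks;
}
```

Real event APIs (Win32 message queues, SDL's event queue, etc.) exist precisely so that a slow frame can't eat input.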


I had a microwave that used a digital knob that was completely screwed up, from a debouncing perspective. You would turn the knob to try and add 30 seconds to the time and it would stutter between 5-10 seconds for a bit and then shoot up to 3 minutes and then you’d try to drop it down to 30 seconds and end up stuck between 1-2 minutes. It was infuriating! The old mechanical microwave dials were way more reliable than that piece of junk!


Are you debouncing on tactile or is the hardware doing it for you?


In an ancient textbook I was reading, they explained how to debounce with transistors, capacitors, and resistors - so at one point in history, debouncing was done in hardware.


In my own brief stint in (non-critical) hardware development, all debouncing was done manually in software.


It all depends on the application... and perspective.

Hardware debouncing works well for most applications but may not be financially viable at scale. With time and effort, software debouncing can yield results as good as, and sometimes better than, hardware.

Remember the saying, "When all you have is a hammer, everything starts to look like a nail..."


Mechanical switches and rotary controls require debouncing which is no picnic.

I can 100% tell from your comment that you've never had to work with one.

It's less science than black magic to avoid double presses or missed presses.


I've done a bunch of debouncing of switch inputs, keypads, keyboards, rotary switches. And wrote some test code for a capacitive touch display (a long time ago).

It's the kind of problem that will tend to bite you in the butt if you aren't aware of all the gotchas. The difficulty is that they're application-specific. But I wouldn't describe the code as particularly complicated.

Most of this stuff a crusty old neckbeard embedded programmer can do half drunk on Friday afternoon.
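For the rotary case specifically, the bread-and-butter trick is the quadrature lookup table: the two-bit A/B signal follows a Gray-code sequence (00→01→11→10), so a table indexed by (old state, new state) yields -1/0/+1 and silently drops illegal transitions, i.e. bounce. A sketch, with the state encoding as the only assumption:

```c
#include <assert.h>
#include <stdint.h>

/* Quadrature decoder for a rotary encoder. Index = (old<<2) | new.
 * Valid Gray-code transitions map to +1 or -1; same-state and illegal
 * double-bit transitions (bounce/noise) map to 0. */
static const int8_t steps[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0
};

typedef struct { uint8_t prev; int32_t pos; } encoder_t;

/* Call on every sample; ab is the 2-bit A/B reading. */
void encoder_sample(encoder_t *e, uint8_t ab) {
    e->pos += steps[(e->prev << 2) | (ab & 3)];
    e->prev = ab & 3;
}
```

Not complicated, as you say, but the "gotchas" live in details like sample rate vs. maximum knob speed, which the table alone doesn't fix.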


So because an expert can do it easily means it's easy? In that case literally anything is easy.

OP was saying that mechanical switches could be deterministic, which is something that I haven't experienced.

I do agree that there is less to go wrong than a complicated touchscreen interface however.


Billions? That sounds like a vast overestimate, no?

Are there any programs that approach a billion lines of code?


I'd think a few npm dependencies should do the trick ;)

A quick search brought up https://www.freecodecamp.org/news/the-biggest-codebases-in-h... which reports that Google's codebase is around 2 billion LOC. MS Office comes in at close to 50 million, for example.

Not sure how accurate these are, but seem to give some rough comparisons, and yeah, not too many things are billions of LOC.


I wouldn't be surprised, the amount of crap that gets downloaded for a simple react app is incredible.


> A program handling a touchscreen will be complicated. Millions of lines of code. Maybe even billions.

I think you're off by a few orders of magnitude.

https://www.visualcapitalist.com/millions-lines-of-code/


The phone switch for Nortel's Meridian PBX system circa 1994, which supported SONET and IP, had about 16 million lines of code. The complexity of a touchscreen is less than 1%, maybe less than 0.1%, of that. Lines of code, however absurd a metric, does say something in this case. I'm just not sure exactly what.


Isn't code just vastly different... more abstracted than '94-era code?


Interesting link. Can anyone explain why a car needs so much code?


Instead of having a single computer that stores all the code, automobiles have lots of embedded systems with their own code and hardware, and lots of systems designed for validating safety critical functionality. When I say lots, I mean it's usually several dozen and can be over 100. Since many of these need to meet special regulations and oftentimes require hard real-time characteristics, this tends to add to the complexity significantly.


Apparently a lot of it is just generated templates coming from commercial SDKs. On top of that, a car is a set of distinct embedded systems interacting in something like a ladder network topology, rather than a vertebrate analog like a multi-core PC, so a lot of code in a car is redundant or has few footprint restrictions.


I mean ... we all know a car does not need that much code


Really, why do you know that?

I would expect a car to have tons of code.

Think of all the functions...

Engine management, Engine monitoring, Powertrain control, Emissions, Diagnostics, Infotainment, Satnav, Climate Control, Traction Control, ABS, Anti-collision radar, Cruise control, Lane keeping, Backup camera, Parking sensors...

Now keep in mind that these hundreds of components exist in many many possible configurations so the system needs to handle having certain hardware available or not, and also handle a multitude of failure modes gracefully.


Perhaps because cars ran just fine (albeit with fewer features) for a long time with zero lines of code.


So did horses without any gasoline


Yeah, and so did banks, and so did airplanes.


So the more important question is: did adding software improve things (enough to be worth the “cost”)?

With cars, there are certainly many things where it did improve things: satnav, reverse camera, traction control etc, but also some where it made a perfectly working system worse (ie the “fixed” something that wasn’t broken): touchscreen dashboards.


Because my car has exactly zero lines of code and it runs just fine.


I guess in cars they use fewer (public) libraries because of safety, so their own libs are included in the LOC count. If you look at a modern Microsoft license, they list tons of open-source libs used in their products. I guess if you included all the libraries, the LOC numbers would be much higher.



