
Makes me wonder why people think I want to read what they send me if they haven’t even bothered to write it.

Innovation in trad engineering disciplines didn't stop when the industry gained professional guilds and legal status. We have far better buildings, extremely reliable jet planes, high speed rail, the internet...

I disagree. I believe we’ve had far less progress in real world engineering in the past 50 years than the 50+ prior.

I don't think mechanical engineering is more professional now than it was 50 years ago. If anything it's less so, with execs calling on engineers to wing it and getting sign-off on shoddy designs.

Airline accidents and deaths have steadily declined for 50 years: https://ourworldindata.org/grapher/fatal-airliner-accidents-... , https://en.wikipedia.org/wiki/Aviation_accidents_and_inciden...

Car crash death rates in the US have been declining for 50 years in both total number and per-capita rates (except for an uptick that started in 2020, presumably some knock-on effects from covid): https://injuryfacts.nsc.org/motor-vehicle/historical-fatalit... https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in... -- 40,990 deaths in 2023 vs. 54,052 deaths in 1973; 12.06 vs 25.51 per 100k population

Car fuel efficiency has ~doubled since 1975: https://www.energy.gov/eere/vehicles/articles/fotw-1237-may-...

Airline fuel efficiency has quadrupled since 1975: https://en.wikipedia.org/wiki/Fuel_economy_in_aircraft#/medi...

Corn yields per acre have doubled since 1975: https://www.agry.purdue.edu/ext/corn/news/timeless/YieldTren...

Lithium-ion batteries didn't practically exist in 1975.

Modern blue LEDs (and thus white LEDs) didn't exist until 1990.

The entire personal computing industry. The Internet. Search engines. LLMs. Cellular networks. Neodymium magnets. Reusable orbital launch vehicles. Genetically modified crops. mRNA vaccines. CRISPR.


It's quite the feat that capital holders have pulled off with software: being able to run amok in the public sphere, wrecking anything they want, with little oversight or regulation.

... or, *cough*, "innovation"


Doesn’t seem like competition when the standard is “do what Google says.”

Some of that funding has been used for DRM, tracking, etc.

Some things have turned out well, though.

Seems like it will be a tough time for browsers to find alternate funding sources.


A TigerBeetle client for Haskell.

The smallest (in terms of system calls and code) event sourcing database I can make (rough sketch below).

Being more present.
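
On that event-sourcing project: here's a rough, hypothetical sketch in C of what "smallest in terms of system calls" could mean. This is my illustration, not the actual project: each event is a length-prefixed record appended to a single log file, so persisting an event costs one write(2) on an O_APPEND descriptor.

    /* Hypothetical sketch: append-only event log, one write(2) per event. */
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    /* With O_APPEND, each write(2) lands at the current end of the file.
       Open the log with: open("events.log", O_WRONLY|O_CREAT|O_APPEND, 0644) */
    int append_event(int log_fd, const void *payload, uint32_t len) {
        unsigned char rec[4 + 4096];
        if (len > sizeof rec - 4) return -1;   /* refuse oversized events */
        memcpy(rec, &len, 4);                  /* length prefix           */
        memcpy(rec + 4, payload, len);         /* event body              */
        ssize_t n = write(log_fd, rec, 4 + (size_t)len);
        return n == (ssize_t)(4 + len) ? 0 : -1;
    }

Reading the log back is a single sequential scan, which is essentially the whole "database".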


Most engineers you worked with probably cared about getting it right and improving their skills.

LLMs seem to care about getting things right and improve much faster than engineers. They've gone from non-verbal to reasonable coders in ~5 years; it takes humans a good 15 to do the same.

LLMs have not improved at all.

The people training the LLMs redid the training, fine-tuned the networks, and put out new LLMs, even if marketing misleadingly uses human-related terms to make you believe they evolve.

An LLM from 5 years ago is as bad today as it was 5 years ago.

Conceivably an LLM that could retrain itself locally on the input you give it could indeed improve somewhat, but even if you could afford the hardware, do you see anyone giving you that option?


Cars have improved even though the Model T is as bad as it ever was. No one's expecting the exact same weights and hardware to produce better results.

Are you sure this is the general understanding? There's a lot of anthropomorphic language thrown around when talking about LLMs. It wouldn't surprise me if people believe ChatGPT 5.5 is ChatGPT 1.0 that has "evolved".

You cannot really compare the two. An engineer will continue to learn and adapt their output to the teams and organizations they interact with, seamlessly picking up the core principles, architectural nuances, and verbiage of the specific environment. You need to explicitly pass all of that to an LLM, and every approach today falls short. Most importantly, an engineer will keep accumulating knowledge and skills while you interact with them. An LLM won't.

With ChatGPT explicitly storing "memory" about the user and having access to the history of all chats, that can also change. It's not hard to imagine an AI-powered IDE like Cursor inferring, when you rerun a prompt or paste in an error message, that its original result was wrong in some way and that it needs to "learn" to improve its outputs.

Human memory is new neural paths.

LMM "memory" is a larger context with unchanged neural paths.


Maybe. I'd wager the next couple of generations of inference architecture will still have issues with context under that strategy. Trying to work with state-of-the-art models at their context boundaries quickly descends into gray-goop-like behavior, and I don't see anything on the horizon that changes that right now.

I don’t think the argument from such a simple example does much for the author’s point.

The bigger risk is skill atrophy.

Proponents say it doesn’t matter. We shouldn’t have to care about memory allocation or dependencies. The AI system will eventually have all of the information it needs. We just have to tell it what we want.

However, knowing what you want requires knowledge about the subject. If you’re not a security engineer you might not know what weird machines are. If someone finds an exploit using them you’ll have no idea what to ask for.

AI may be useful for some but at the end of the day, knowledge is useful.


This. This times a thousand.

Every problem I overcome is a lesson learned that enriches my life and makes me more valuable.

The results are nice but it's the journey and understanding that matter to me.

There's more value in having a team of people who understand the problem domain so deeply that they can create a computer system to automate the solution. When things go wrong, and they will, you have to have the understanding in order to confidently resolve the issue.

"But AI can do it!"

Sure.

But can you?


Sometimes the problem is just dealing with someone else's janky spreadsheet and obstinacy...

There is no inherent honor in toil.


I just like to know things and learn them.

If I’m encountering a new framework I want to spend time learning it.

Every problem I overcome on my own improves my skills. And I like that.

GenAI takes that away. Makes me a passive observer. Tempts me to accept convenience with a mask of improved productivity. When, in the long term, it doesn’t do anything for me except rob me of my skills.

The real productivity gains for me would come from better programming languages.


> GenAI takes that away.

Not for me. Put me at a company with a codebase in technology Z and I can learn it MUCH faster than starting from the docs. I will still read the docs, but everything goes far, far faster if you start me out in an existing codebase.

You can use GenAI the same way. Get a codebase that's doing a thing you're interested in immediately and dive right in. You do not HAVE to be tempted into being a passive observer, you can use it as a kickstart instead.


We've collectively spent decades trading almost anything in favor of convenience. LLMs will be the same, and AI if we get there.

I'm of the opinion that we'd be a lot better off if convenience was a lot further down our priority list.


No amount of chest-thumping about how good a programmer you are and telling everyone else to "get good" has had any effect on the rate of CVEs caused by memory-safety bugs that are trivial to introduce in a C program.

There are good reasons to use C. It's best to approach it with a clear mind and a practical understanding of its limitations. Be prepared to mitigate those shortcomings. It's no small task!
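
To make "trivial to introduce" concrete, here's a minimal sketch of my own (not from the comment above): this compiles and will often appear to run correctly, yet it is undefined behavior of exactly the class behind many memory-safety CVEs.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *name = malloc(32);
        if (!name) return 1;
        strcpy(name, "alice");
        free(name);
        /* Use-after-free: reads freed heap memory. Undefined behavior,
           but it frequently "works" in testing and slips into releases. */
        printf("hello, %s\n", name);
        return 0;
    }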


I am not sure the number of CVEs measures anything meaningful. The price for zero-days for important targets goes into the millions.

While I am sure there can never be enough security, I am not at all sure the extreme focus on memory safety is worth it, and I am also not sure the added complexity of Rust is really worth it. I would prefer to simplify the stack and make C safer.


If that's your preference, you're going about it all wrong. Rust's safety is about culture, and you're looking at the technology; it's not that Rust doesn't have technology, but the technology isn't where you start.

This was the only notable failing of Sean's (abandoned) "Safe C++" - it delivers all the technology a safe C++ culture would have needed, but there is no safe C++ culture so it was waved away as unimportant.

The guy whose mine loses fifty miners in a roof collapse doesn't need better mining technology; inadequate technology isn't why those miners died, culture is. His mine didn't have a safety culture, probably because he didn't give a shit about safety, and his workers either shared this dismissal or had no choice in the matter.

Also "extreme focus" is a misinterpretation. It's not an extreme focus, it's just mandatory, it's like if you said humans have an "extreme focus" on breathing air, they really don't - they barely even think about breathing air - it was just mandatory so if you don't do it then I guess that stands out.


Let's turn it around: do you think the mining guy who does not care about safety will start caring about a safety culture because there is a new safety tool? And if it is mandated by the government, will it be implemented in a meaningful way, or just on paper?


So there's a funny thing about mouthing the words: the way the human mind works, the easiest way to explain to ourselves why we're mouthing the words is that we agree with them. And so, in that sense, what seems like a useless paper exercise can be effective.

Also, relevantly here, nobody actually wants these terrible bugs. This is not A or B, Red or Blue; this is very much Cake or Death, and there just aren't any people queueing up for Death. There are people who don't particularly want Cake, but that's not the same thing at all.


It will certainly be implemented in a meaningful way if the consequences for the mining guy are harsh enough that there won't be a second failure by the same person.

Hence why I am so into cybersecurity laws. If this is the only way to make the C and C++ communities embrace a safety culture, instead of downplaying it as straitjacket programming like in the C vs Pascal/Modula-2 Usenet discussion days, then so be it.


At some point, in order to make C safer, you're going to have to introduce some way of writing a more formal specification of the stack, heap and the lifetime of references into the language.

Maybe that could be through a type system. Maybe that could be through a more capable run-time system. We've tried these avenues through other languages, through experimental compilers, etc.

Without introducing anything new to the language we have a plethora of tools at our disposal:

- Coq + Iris, or some other proof automation framework with separation logic.

- TLA+, Alloy, or some form of model checking where proofs are too burdensome/unnecessary

- AFL, Valgrind, and other fuzzing, testing, and program analysis tools

- CompCert, a formally verified compiler

- MISRA and other coding guidelines

... and all of this to be used in tandem in order to really say that for the parts specified and tested, we're confident there are no use-after-free memory errors or leaks. That is a lot of effort in order to make that statement. The vast, vast majority of software out there won't even use most of these tools. Most software developers argue that they'll never use formal methods in industry because it's just too hard. Maybe they'll use Valgrind if you're lucky.
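
As a concrete taste of the cheapest rung on that ladder: a use-after-free like the one sketched earlier in the thread typically runs "fine" natively, but Valgrind flags it at runtime (file name hypothetical):

    $ cc -g uaf.c -o uaf
    $ valgrind --leak-check=full ./uaf
    # Valgrind reports an "Invalid read" at the printf, with a stack trace,
    # and --leak-check=full adds details for any blocks leaked at exit.

The tool is free and has existed for decades; the cost is the discipline of running it on every build and acting on what it says.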

Or -- you could add something to the language in order to prevent at least some of the errors by definition.

I'm not a big Rust user. Maybe it's not great and is too difficult to use, I don't know. And I do like C. I just think people need to be aware that writing safe C is really expensive and time consuming, difficult and nothing is guaranteed. It might be worth the effort to learn Rust or use another language and at least get some guarantees; it's probably not as hard as writing safe C.

(Maybe not as safe as using Rust + formal methods, but at least you'll be forced to think about your specification up front before your code goes into production... and where you do have unsafe code, hopefully it will be small and not too hard to verify for correctness)



The problem is not that the tools don't exist; lint was created in 1979 at Bell Labs, after all.

It is the lack of a culture of using them unless there is a government mandate to impose them, as in highly critical computing.


I agree.


Definitely, but the idea is that its unique feature set is worth it.


Yeah, there are still good reasons to use it.


A family member of mine did this as an engineer for Chrysler. He passed a copy of his “dictionary” on to me and I’ve kept adding to it. I enjoy a good malapropism/eggcorn. He’s not around anymore, but the legacy continues.

Update: we kept our practice a secret, though; it wouldn’t have been nice to point these things out to people.


My grandfather was well known at work for, uh, creative sayings. Malapropisms, misheard cliches, or just wild-ass new phrases. His coworkers took to secretly writing them down over the years, and they read them off during his retirement party to universal delight.

A copy of the list ended up with us, the family, and has come up during my grandfather's wake and a few times since.

Absolutely agree that it might not be nice, but depending on the context it absolutely can be, as well as a really touching legacy.


Had a boss who was terrible in other ways (he got fired for sexually harassing one of my coworkers), but he would constantly mess up common sayings. The one I remember most is "bumpin the bumper traffic" instead of "bumper to bumper".


> he got fired over ... bumpin the bumper



