I think it all boils down to which is higher risk: using AI too much, or using AI too little?
Right now I see the former as hugely risky. Hallucinated bugs, being coaxed into dead-end architectures, security concerns, not being familiar with the code when a bug shows up in production, less sense of ownership, less hands-on learning, etc. This is true both at the personal level and at the business level. (And it's astounding that CEOs haven't made that connection yet.)
With the latter, you may be less productive than optimal, but might the hands-on training and fundamental understanding of the codebase make up for it in the long run?
Additionally, I personally find my best ideas often happen when knee deep in some codebase, hitting some weird edge case that doesn't fit, that would probably never come up if I was just reviewing an already-completed PR.
It's very interesting to me how many people presume that if you don't learn how to vibecode now you'll never ever be able to catch up. If the models are constantly getting better, won't these tools be easier to use a year from now? Will model improvements not obviate all the byzantine prompting strategies we have to use today?
In the early days of synthesizers, the interfaces were so complex and technical that only engineers could use them.
Some of these early musicians were truly amazing individuals; real renaissance people. They understood the theory and had true artistic vision. They knew how to ride the tiger, and could develop great music fairly efficiently.
A lot of others, not so much. They twiddled knobs at random and spent a lot of effort panning for gold dust. Sometimes they would have a hit, but they wasted a lot of energy on dead ends.
Once the UI improved (like the release of the Korg M1 workstation), real artists could enter the fray, and that’s when the hockey stick bent.
Not exactly sure what AI’s Korg M1 will be, but I don’t think we’re there, yet.
I have been a lead engineer for a few decades now, responsible for training teams and architecting projects. And I've been working heavily with AI.
I know how to get Claude multi-agent mode to write 2,500 lines of deeply gnarly code in 40 minutes, and I know how to get that code solid. But doing this absolutely pulls on decades of engineering skill. I read all the core code. I design key architectural constraints. I invest heavily in getting Claude to build extensive automated verification.
If I left Claude to its own devices, it would still build stuff! But with me actively in the loop, I can diagnose bad trends. I can force strategic investments in the right places at the right times. I can update policy for the agents.
If we're going to have "software factories", let's at least remember all the lessons from Toyota about continual process improvement, about quality, about andon cords and poka-yoke devices, and all the rest.
Could I build faster if I stopped reading code? Probably, for a while. But I would lose the ability to fight entropy, and entropy is the death of software. And Claude doesn't fight entropy especially well yet, not all by itself.
What I've found out is that a lot of people don't actually care. They see it work and that's that. It's impossible to convince them otherwise. The code can be absolutely awful but it doesn't matter because it works today.
I have been able to write some pretty damn ambitious code, quickly, with the help of LLMs, but I am still really only using it for developing functions, as opposed to architectures.
But just this morning, I had it break up an obese class into components. It did really well. I still need to finish testing everything, but it looks like it nailed it.
I like the analogy but I think you are underestimating how much random knob twiddling there is in all art.
Francis Bacon and The Brutality of Fact is a wonderful documentary that goes over this. Bacon's process was that he painted every day for a long time, kept the stuff he liked and destroyed the crap. You are just not seeing the bad random knob twiddling he did.
Picasso is even better. Picasso produced some 100,000 works. If you look at a book that really digs into the more obscure stuff, so much of Picasso is half-finished random-knob-twiddling garbage; stuff that would be hard to guess is even by Picasso. There is this myth of the genius artist whose great works are all a translation of a fully formed vision to the medium.
In contrast, even the best music from musical programming languages is not that great. The actually good stuff is so very thin because there is just so much effort involved in the creation.
I would take the analogy further: vibe coding in the long run probably develops into the modern DAW, while writing C by hand is like playing Paganini on the violin. Seeing someone play Paganini in person makes it laughable that the DAW could replace a human playing the violin at a high level. The problem, though, is the DAW over time changes music itself, and people's relation to music, to the point it makes playing Paganini in person on the violin a very niche art form with almost no audience.
I read the argument on here ad nauseam about how playing the violin won't be replaced and that argument is not wrong. It is just completely missing the forest for the trees.
I think this is very well stated. I’m gonna say something that’s far more trite, but what I’ve noticed is that in an effort to get a better result from AI-assisted coding, I have to throw away any concept I previously held about good code hygiene and style. I want it to be incredibly verbose. I want to have everything explicitly asserted. I want to have tests and hooks for every single thing. I want it to be incredibly hard for the human to work directly on it …
I think we are. I'm helping somebody who has a non-technical background and taught himself how to vibe code and built a thing. The code is split into two GitHub repos when it should have been one, and one of the repos is named hetzner-something because that's what he's using and he "doesn't really understand tech shit".
Exactly. The fact that an LLM isn't very good at helping you fix basic organizational issues like this is emblematic. Quoting the article: "We have automated coding, but not software engineering."
If you can use an imperfect tool, perfectly, you’ll beat people using them imperfectly. As long as the tool is imperfect, you won’t have much competition.
That’s where we are, right now. Good engineers are learning how to use clunky LLMs. They will beat out the Dunning-Kruger crew.
Once the tool becomes perfect, then that allows less-technical users into the tent, which means a much larger pool of creativity.
I think so, that's why I think that the risk of pretty much ignoring the space is close to zero. If I happen to be catastrophically wrong about everything then any AI skills I would've learned today will be completely useless 5 years from now anyway, just like skills from early days of ChatGPT are completely useless today.
That's another dumb thing that unfortunately some people can be led to believe. There have been parents who genuinely thought that screen time would make their kids digitally savvy and prepared for the future.
It has worked out quite well for some of them, but there's a lot of devil in the details of how that screen time was implemented, which led to e.g. Mark Zuckerberg vs. Markiplier.
I do think there's value in trying out fully vibe coding some toy projects today (probably nothing real or security sensitive haha).
The AI will get better at compensating, but I think some of its weaknesses are fundamental, and are going to be showing up in some form or another for a while yet.
E.g., the AI doesn't know what you don't tell it. There's a LOT of context we take for granted while programming (especially in a corporate environment). Recognizing what sort of context is useful to give the AI without distracting it (and under what conditions it should load/forget context) is, I think, going to be a very valuable skill over the next few years. That's a skill you can start building now.
I do think that there's some meta-skills involved here that are useful, in the same way that some people have good "Google-fu". Some of it is portable, some of it isn't.
I think if you orient your experimentation right you can come up with some good tactics that are helpful even when you're not using AI assistance. "Making this easier for the robot" can often align with "making this easier for the humans" as well. It's a decent forcing function.
Though I agree with the sentiment. People who have been doing this for less than a year are convinced that they have some permanent lead over everyone.
I think a lot about my years being self taught programming. Years spent spinning my wheels. I know people who after 3 months of a coding bootcamp were much further than me after like ... 6 years of me struggling through material.
> in the same way that some people have good "Google-fu"
Or, perhaps, in the same way that Google-fu over time became devalued as a skill as Google grew less useful for power users in order to cater to the needs of the unskilled, it will not really be a portable skill at all, because it is, in the end, a transitory or perhaps easily attainable skill once the technology is evenly distributed.
I think the AI-coding skill that is likely to remain useful is the ability (and discipline) to review and genuinely understand the code produced by the AI before committing it.
I don't have that skill; I find that if I'm using AI, I'm strongly drawn toward the lazy approach. At the moment, the only way for me to actually understand the code I'm producing is to write it all myself. (That puts my brain into an active coding/puzzle solving state, rather than a passive energy-saving state.)
If I could have the best of both worlds, that would be a genuine win, and I don't think it's impossible. It won't save as much time as pure vibe coding promises to, of course.
> I think the AI-coding skill that is likely to remain useful is the ability (and discipline) to review and genuinely understand the code produced by the AI before committing it.
> I don't have that skill; I find that if I'm using AI, I'm strongly drawn toward the lazy approach. At the moment, the only way for me to actually understand the code I'm producing is to write it all myself. (That puts my brain into an active coding/puzzle solving state, rather than a passive energy-saving state.)
When I review code, I try to genuinely understand it, but it's a huge mental drain. It's just a slog, and I'm tired at the end. Very little flow state.
Writing code can get me into a flow state.
That's why I pretty much only use LLMs to vibecode one-off scripts and do code reviews (after my own manual review, to see if it can catch something I missed). Anything more would be too exhausting.
I've had reasonable results from using AI to analyse code ("convert this code into a method call graph in graphml format" or similar). Apart from hallucinating one of the edges, this worked reasonably well to throw into yEd and give me a view on the code.
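For Python code, a deterministic version of that extraction is also cheap to script, which avoids the hallucinated edges entirely. A minimal sketch using only the standard library (the function names here are illustrative, and it only catches direct `name(...)` calls, not method calls); the resulting file opens in yEd:

```python
import ast

def call_edges(source: str) -> list[tuple[str, str]]:
    """Extract (caller, callee) pairs for direct `name(...)` calls.

    Note: ast.walk also descends into nested functions, so their calls
    get attributed to the enclosing function -- fine for a rough map.
    """
    tree = ast.parse(source)
    edges = []
    for fn in ast.walk(tree):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                    edges.append((fn.name, node.func.id))
    return edges

def to_graphml(edges: list[tuple[str, str]]) -> str:
    """Serialize the edge list as minimal GraphML that yEd can open."""
    nodes = sorted({name for edge in edges for name in edge})
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<graphml xmlns="http://graphml.graphdrawing.org/xmlns">',
             '  <graph edgedefault="directed">']
    lines += [f'    <node id="{n}"/>' for n in nodes]
    lines += [f'    <edge source="{s}" target="{t}"/>' for s, t in edges]
    lines += ['  </graph>', '</graphml>']
    return "\n".join(lines)
```

Feeding it `def a(): b()` plus an empty `def b(): pass` yields a single `a -> b` edge, with no chance of an invented one.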
An alternative that occurred to me the other day is, could a PR be broken down into separate changes? As in, break it into a) a commit renaming a variable b) another commit making the functional change c) ...
Feel like there are PR analysis tools out there already for this :)
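Plain git already gets most of the way there; a minimal sketch with invented file names, identifiers, and commit messages, assuming a repo where a rename and a behavior change would otherwise land in one mixed diff:

```shell
# Hypothetical sketch: split one logical change into two reviewable commits.
git switch -c split-refactor

# 1) Mechanical rename only: a large but trivially reviewable diff.
git grep -l 'old_name' -- '*.py' | xargs sed -i 's/old_name/new_name/g'
git commit -am "refactor: rename old_name to new_name (no behavior change)"

# 2) The functional change on its own, now a small focused diff.
#    (For an already-mixed working tree, `git add -p` lets you stage
#    one set of hunks separately before committing.)
echo 'print("new behavior")' >> app.py
git commit -am "feat: add new behavior"
```

Reviewers can then skim the rename commit and spend their attention on the second one.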
Don't you think automated evaluation and testing of code is likely to improve at an equally breakneck pace? It doesn't seem very far-fetched to soon have a simulated human that understands software from a user perspective.
Yup, this is why even though I like ai coding a lot, and am pretty enthusiastic about it, and have fun tinkering with it, and think it will stick around and become part of everyday proper software development practice (with guardrails in place), I at least don't go telling people they need to learn it now or they'll be obsolete or whatever. Sitting back and seeing how this all works out — nobody really knows imo, I could be wrong too! — is a valid choice and if ai does stick around you can just hop in when the landscape is clearer!
Using nano banana does not require arcane prompt engineering.
People who have not learnt image prompt engineering probably didn't miss anything.
The irony of prompt engineering is that models are good at generating prompts.
Future tools will almost certainly simply “improve” your naive prompt before passing it to the model.
Claude already does this for code. I'd be amazed if Nano Banana doesn't.
People who invested in learning prompt engineering probably picked up useful skills for building ai tools but not for using next gen ai tools other people make.
It's not wasted effort; it's just increasingly irrelevant to people doing day-to-day BAU work.
If the API prevents you from passing a raw prompt to the model, prompt engineering at that level isn't just unnecessary; it's irrelevant. Your prompt will be transformed into an unknown internal prompt before hitting the model.
> Claude already does this for code. I'd be amazed if Nano Banana doesn't.
Nano Banana is actually a reasoning model, so yeah, it kind of does, but not in the way one might assume. If you use the API you can dump the text part, and it's usually huge (and therefore expensive, which is one drawback). It can even have an "imagery thinking" process...!
If you’ve never driven a Model T, how would you ever drive a Corolla? If you never did Angular 1, how would you ever learn React? If you never used UNIX 4, you’ll be behind in Linux today. /s
>if you don't learn how to vibecode now you'll never ever be able to catch up
There's a dissonance I see where people talk about using AI tools leading to an atrophy of their abilities to work with code, but then expecting that they need no mastery to be able to use the AI tooling.
Will the AI tooling become so much better that you need little to no mastery to use it? Maybe. Will those who have a lot of fundamentals developed over years of using the tooling still be better with that tooling than the "newbs"? Maybe.
That's my take. I know LLMs aren't going away even if the bubble pops. I refuse to become a KPI in some PM's promotion to justify pushing this tech even further, so for now I don't use it (unless work mandates it).
Until then, I keep up and add my voice to the growing number who oppose this clear threat on worker rights. And when the bubble pops or when work mandates it, I can catch up in a week or two easy peasy. This shit is not hard, it is literally designed to be easy. In fact, everything I learn the old way between now and then will only add to the things I can leverage when I find myself using these things in the future.
There is a huge amount of superstition around prompting. I've copied and pasted elaborate, paragraph-long prompts and then gotten the same or better results with only a few words.
People write long prompts primarily to convince themselves that they're casting some advanced spell. As long as the system prompt is good you should start very simply and only expand if results are unsatisfactory.
I think there's something to this, but I also think there's something to the notion that it'll get easier and easier to do mass-market work with these tools, while at the same time they become greater and greater force multipliers for more and more nuanced power users.
It is strange because the tech now moves much faster than the development of human expertise. Nobody on earth achieved Sonnet 3.5 mastery, in the 10k hours sense, because the model didn't exist long enough.
Prior intuitions about skill development, and indeed prior scientifically based best practices, do not cleanly apply.
Wait around five years and then prompt: "Vibe me Windows" and then install your smart new double glazed floor. There is definitely something useful happening in LLM land but it is not and will never be AGI.
Oooh, let me dive in with an analogy:
Screwdriver.
Metal screws needed inventing first - they augment or replace dowels, nails, glue, "joints" (think tenon/dovetail etc), nuts and bolts and many more fixings. Early screws were simply slotted. PH (Phillips cross head) and PZ (Pozidriv) came rather later.
All of these require quite a lot of wrist effort. If you have ever driven a few hundred screws in a session then you know it is quite an effort.
Drill driver.
I'm not talking about one of those electric screw driver thingies but say a De W or Maq or whatever jobbies. They will have a Li-ion battery and have a chuck capable of holding something like a 10mm shank, round or hex. It'll have around 15 torque settings, two or three speed settings, drill and hammer drill settings. Usually you have two - one to drill and one to drive. I have one that will seriously wrench your wrist if you allow it to. You need to know how to use your legs or whatever to block the handle from spinning when the torque gets a bit much.
...
You can use a modern drill driver to deploy a small screw (PZ1, 2.5mm) to a PZ3 20+cm effort. It can also drill with a long auger bit or hammer drill up to around 20mm and 400mm deep. All jolly exciting.
I still use an "old school" screwdriver or twenty. There are times when you need to feel the screw (without deploying an inadvertent double entendre).
I do find the new search engines very useful. I will always put up with some mild hallucinations to avoid social.microsoft and nerd.linux.bollocks and the like.
> I think it all boils down to, which is higher risk, using AI too much, or using AI too little?
This framing is exactly how lots of people in the industry are thinking about AI right now, but I think it's wrong.
The way to adopt new science, new technology, new anything really, has always been that you validate it for small use cases, then expand usage from there. Test on mice, test in clinical trials, then go to market. There's no need to speculate about "too much" or "too little" usage. The right amount of usage is knowable - it's the amount which you've validated will actually work for your use case, in your industry, for your product and business.
The fact that AI discourse has devolved into a Pascal's Wager is saddening to see. And when people frame it this way in earnest, 100% of the time they're trying to sell me something.
Those of us working from the bottom, looking up, do tend to take the clinical progressive approach. Our focus is on the next ticket.
My theory is that executives must be so focused on the future that they develop a (hopefully) rational FOMO. After all, missing some industry shaking phenomenon could mean death. If that FOMO is justified then they've saved the company. If it's not, then maybe the budget suffers but the company survives. Unless of course they bet too hard on a fad, and the company may go down in flames or be eclipsed by competitors.
Ideally there is a healthy tension between future looking bets and on-the-ground performance of new tools, techniques, etc.
They're focused on the short-term future, not the long-term future. So if everyone else adopts AI but you don't, and the stock price suffers because of that (merely because of the "perception" that your company has fallen behind affecting market value), then that is an issue. There's no true long-term planning at play, otherwise you wouldn't have obvious copycat behavior amongst CEOs such as the pandemic overhiring.
Every company should have hired over the pandemic because doing so had a higher expected value than not hiring. It's like being offered the chance to pay $1,000 for a 50% shot at $8,000 (expected value: 0.5 × $8,000 − $1,000 = $3,000), where the coin lands the same way for everyone who takes the offer. If you are maximizing for the long term, everyone should take the offer, even if it results in a reality where everyone loses $1,000.
To be fair, that's what I have done. I try to use AI every now and then for small, easy things. It isn't yet reliable for those things, and always makes mistakes I have to clean up. Therefore I'm not going to trust it with anything more complicated yet.
We should separate doing science from adopting science.
Testing medical drugs is doing science. They test on mice because it's dangerous to test on humans, not to restrict scope to small increments. In doing science, you don't always want to be extremely cautious and incremental.
Trying to build a browser with 100 parallel agents is, in my view, doing science, more than adopting science. If they figure out that it can be done, then people will adopt it.
Trying to become a more productive engineer is adopting science, and your advice seems pretty solid here.
> The right amount of usage is knowable - it's the amount which you've validated will actually work for your use case, in your industry, for your product and business.
This is fair. And it's what I've been doing. I still mostly code the way I've always coded. The AI stuff is mostly for fun. I haven't seen it transformatively speed me up or improve things.
So I make that assessment, cool. But then my CEO lightly insists every engineer should be doing AI coding because it's the future and manual coding is a dead end towards obsolescence. Uh oh now I gotta AI-signal for the big guy up top!
> Test on mice, test in clinical trials, then go to market.
You're neglecting the cost of testing and validation. This is the part that's quite famous for being extremely expensive and a major barrier to developing new therapies.
> my best ideas often happen when knee deep in some codebase
I notice that I get into this automatically during AI-assisted coding sessions if I don't lower my standards for the code. Eventually, I need to interact very closely with both the AI and the code, which feels similar to what you describe when coding manually.
I also notice I'm fresher because I'm not using many brain cycles on legwork, so maybe I'm actually getting into more situations where I'm having good ideas, because I'm tackling hard problems.
So maybe the key to using AI and staying sharp is to refuse to sacrifice your good taste.
Yeah, I get this too. Still, I think sometimes being forced to grind on something will spur the "oh wait" moment that leads to new ways of thinking about things. Whereas when the LLM is doing the grinding, you don't see it. You just get a final PR with only the answer to the problem at hand, and you miss the bigger opportunity.
That said, maybe it's not a big deal. Kind of like way back when I moved from C++ to GC code, I remember I missed memory leaks, because having it all automatically taken care of for free felt like giving up control and encouraging of lazy practices and loose ends. Turns out it wasn't really a big deal at all.
Or just wait for things to settle. As fast as the field is moving, staying ahead of the game is probably high investment with little return, as the things you spend a ton of time honing today may be obsolete tomorrow, or simply built into existing products with much lower learning cost.
Note, if staying on the bleeding edge is what excites you, by all means do. I'm just saying for people who don't feel that urge, there's probably no harm just waiting for stuff to standardize and slow down. Either approach is fine so long as you're pragmatic about it.
Settle. Not necessarily slow down. We'll see people gravitate towards a few things, and those will become the standards. It's already started, with claude and codex, compared to the wild west situation a year ago.
The closest parallel I can think of is javascript frameworks. The 2010s had a new framework out every week. Lots of people (somewhat including myself) wasted a ton of time trying to keep up with the churn, imagining that constantly being on the bleeding edge was somehow important. The smart ones just picked something reasonably mature and stuck with it. Eventually things coalesced around React. All that time trying to keep up with the churn added essentially no value.
What makes you think they won’t? And even if they won’t, not wasting energy going through the churn is a winning strategy if eventually AI reads your mind to know what you want to do.
Yeah, it's frustrating that it seems most AI conversations devolve into straw men of either zero AI or one shot apps. There's a huge middle ground where I, and it seems like many others, have found AI very useful. We're still at the stage where it's somewhat unique for each person where AI can work for them (or not).
Coaxed into dead-end architecture is the exact issue I have had when trying agentic coding. I find that I have the greatest success when I plan everything out and document the implementation plan as precisely as possible before handing it off to the agent. At which point, the hard part is already done. Generating the code was not really the bottleneck.
Using LLMs to generate documentation for the code that I write, explaining data sheets to me, and writing boilerplate code does save me a lot of time, though.
Interesting analogy, but I'd say it's kind of the opposite. In the two you mentioned, the cost of inaction is extremely high, so they reach one conclusion, whereas here the argument is that the cost of inaction is pretty low, and reaches the opposite conclusion.
Indeed, another key difference with the climate change wager is that both the action and the consequences are global, whereas the OG wager and the AI wager are both about personal choice.
Very reasonable take. The fact that this is being downvoted really shows how poor HN's collective critical thinking has become. Silicon Valley is cannibalizing itself and it's pretty funny to watch from the outside with a clear head.
Impossible to say right now... consider just the idea of reactive agentic workflows: test fails, agent is instantly triggered and response is passed off for review, or whatever, something along those lines.
That's staying power: suddenly that lease isn't a lease, it's an ongoing cost for as long as that system exists. It's gas.
It definitely comes up if you're just reviewing an already-"completed" PR. Even if you're not going to ship AI-generated code to prod (and I think that's a reasonable choice), it's often informative to give a high-level description of what you want to accomplish to a coding agent and see what it does in your codebase. You might find that the AI covered a particular edge case that you would have missed. You might find that even if the PR as a whole is slop.
> I think it all boils down to, which is higher risk, using AI too much, or using AI too little?
It's both. It's using the AI too much to code, and too little to write detailed plans of what you're going to code. The planning stage is by far the easiest to fix if the AI goes off track (it's just writing some notes in plain English) so there is a slot-machine-like intermittent reinforcement to it ("will it get everything right with one shot?") but it's quite benign by comparison with trying to audit and fix slop code.
Even if you believe that many are too far on one side now, you have to account for the fact that AI will get better rapidly. If you're not using it now you may end up lacking preparation when it becomes more valuable
But as it gets better, it'll also get easier, be built into existing products you already use, etc. So I wouldn't worry too much about that aspect. If you enjoy tinkering, or really want to dive deep into fundamentals, that's one thing, but I wouldn't worry too much about "learning to use some tool", as fast as things are changing.
I don't think so. That's a good point but the capability has been outpacing people's ability to use it for a while and that will continue.
Put another way, the ability to use AI became an important factor in overall software engineering ability this year, and as the year goes on the gap between the best and worst users of AI will widen faster, because the models will outpace the harnesses.
That’s the comical understanding being pushed by management in software companies, yes. The people who never actually use the tools themselves, only the concept of them. It’s the same AGI nonsense, but dumbed down to something they think they can control.
Why does every AI skeptic assume that everyone is lying to them? There are millions of developers using AI to be more productive, and you just keep plugging your ears and screaming, claiming it's only dumb managers. Meanwhile, Linus Torvalds is vibe coding stuff.
Who said anything about that? The argument was "if you're not using AI RIGHT NOW, you will fall behind forever"
This is the nonsense management and CTOs are pushing. Use it now if you want, I do. Wait for things to cool down if you want. You'll be fine either way. The comical view that a "winner takes all" subset of developers will somehow have figured out secret AI techniques that make them 10,000x more productive, while every other developer is SOL, is laughable.
> Put another way, the ability to use AI became an important factor in overall software engineering ability this year, and as the year goes on the gap between the best and worst users of AI will widen faster because the models will outpace the harnesses
Is it, lol? Know any case where those “the best users of AI” get salary bumps or promotions? Outside of switching to the dedicated AI role that is? So far I see clowns doing triple the work for the same salary.
I mean, right now "bleeding edge" is an autonomous agents system that spends a million dollars making an unbelievably bad browser prototype in a week. Very high effort, and the results are gibberish. By the time these sorts of things are actually reliable, they'll be productized single-click installer apps on your network server, with a simple web interface to manage them.
If you just mean, "hey you should learn to use the latest version of Claude Code", sure.
I mean that you should stay up to date and practiced on how to get the most out of models. Using harnesses like Claude code sure, but also knowing their strengths and weaknesses so you can learn when and how to delegate and take on more scope
Okay yeah that's a good middle ground, and I'd even say I agree. It's not about being on the bleeding edge or being a first adopter or anything, but the fact that if you commit to a tool, it's almost always worth spending some time learning how to use it most effectively.
I mean if the capacity has outpaced people's ability to use it, to me that's a good sign that a lot of the future improvements will be making it easier to use.
The baseline, out-of-the-box basic tool level will lift, but so will the more obscure esoteric high-level tools that the better programmers will learn to control, further separating themselves in ability from the people who wait for the lowest common denominator to do their job for them.
Maybe. But so far ime most of the esoteric tools in the AI space are esoteric because they're not very good. When something gets good, it's quickly commoditized.
Until coding systems are truly at human-replacement level, I think I'd always prefer to hire an engineer with strong manual coding skills than one who specializes in vibe coding. It's far easier to teach AI tools to a good coder than to teach coding discipline to a vibe coder.
I wonder if psychology plays a role here. An engineer with strong manual coding skills might be hesitant to admit that a tool has become good enough to warrant less involvement.
How so? Why would a couple of months break in employment (worst case, if I truly become unemployable for some reason until I learn the tools) harm or destroy my career?
It won’t; the state of the art is changing so quickly it is near impossible to stay on top of. Right now Claude Code is doing stuff for our team that was impossible with AI coding six months ago. Probably a year from now it will be something else. I do think, though, that if you are not staying on top of things, you will discover that you should have been on the day you get fired.
I have noticed a troubling skill atrophy in some people who heavily use LLMs (this is particularly concerning because it renders them incompetent to review ‘their own’ PRs prior to submission). I’m… not keen to sign up for that for no reason, tbh.
Even if the models stopped getting better today, we'd still see many years of improvements from better harnesses and a better understanding of how to use them. Most people just talk to their agent, and don't e.g. use sub-agents to make the agent iterate and cross-check outcomes. Most people who use AI would see a drastic improvement in outcomes just by experimenting with the "/agents" command in Claude Code (and its equivalent elsewhere). Much more so with a well-thought-out agent framework.
A simple plan -> task breakdown + test plan -> execute -> review -> revise (w/optional loops) pipeline of agents will drastically cut down on the amount of manual intervention needed, but most people jump straight to the execute step, and do that step manually, task by task while babysitting their agent.
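That pipeline shape can be sketched in a few lines. This is a hypothetical illustration, not any framework's real API: each "agent" is modeled as a plain callable, where in practice it would wrap an LLM call (e.g. a Claude Code sub-agent), and all the names are mine.

```python
from typing import Callable, List

# An "agent" here is just a function from prompt text to output text.
Agent = Callable[[str], str]

def pipeline(task: str,
             plan: Agent,
             breakdown: Callable[[str], List[str]],
             execute: Agent,
             review: Callable[[str], bool],
             revise: Agent,
             max_revisions: int = 3) -> List[str]:
    """Plan -> task breakdown -> execute -> review -> revise (with loops)."""
    outputs = []
    overall_plan = plan(task)
    for subtask in breakdown(overall_plan):
        result = execute(subtask)
        # Review loop: keep revising until the reviewer accepts
        # or we hit the revision budget.
        for _ in range(max_revisions):
            if review(result):
                break
            result = revise(result)
        outputs.append(result)
    return outputs
```

The point of the structure is that the human only intervenes when the review/revise loop exhausts its budget, rather than babysitting every execute step by hand.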
Nothing gets worse in computers. Name me one thing. And if the current output quality of LLMs stays the same but speed goes up 1000x, the quality of the generated code can still be higher.
Hot keys. Used to be, you could drive a program from the keyboard with hotkeys and macros. No mouse. The function keys did functions. You could drive the interface blindfolded, once you learned it. Speed is another one. Why does VSCode take so long to open? And use so much memory and CPU? It's got a lot of features for a text editor, but it's worse than vim/emacs in a lot of ways.
Boot time.
Understandability. A Z80 processor was a lot more understandable than today's modern CPUs. That's worse.
Complexity. It's great that I can run Python on a microcontroller and all, but boring old C was a lot easier to reason about.
Wtf is a typescript. CSS is the fucking worst. Native GUI libraries are so much better but we decided those aren't cool anymore.
Touchscreens. I want physical buttons that my muscle memory can take over and get ingrained in and on. Like an old stick shift car that you have mechanical empathy with. Smartphones are convenient as all hell, but I can't drive mine after a decade like you can a car you know and feel, that has physical levers and knobs and buttons.
Jabber/Pidgin/XMPP. There was a brief moment around 2010? when you didn't have to care what platform someone else was using, you could just text with them on one app. Now I've got a dozen different apps I need to use to talk to all of my friends. Beeper gets it, but they're hamstrung. This is a thing that got worse with computers!
Computers are stupid fast these days! Why does it take so long to do everything on my laptop? My Mac's Spotlight index is broken, so it takes roughly 4 seconds to query the SQLite database or whatever just so I can open Preview.app. I can open a terminal and launch it myself in that time!
And yes, these are personal problems, but I have these problems. How did the software get into such a state that it's possible for me to have this problem?
Native GUI libs look like shit out of the box, and are terrible to work with when you want to make something that doesn't look like out of the box tkinter/swing/qt/winforms Windows 95 looking crap.
Software has gotten considerably worse with time. Windows and MacOS are basically in senescence from my point of view. Haven't added a feature I've wanted in years, but manages to make my experience worse year to year anyways.
CPU vulnerability mitigations make my computer slower than when I bought it.
Computers and laptops are increasingly not repairable. So much e-waste is forced on us for profit.
The internet is a corporate controlled prison now. Political actors create fake online accounts to astroturf, manipulate, and influence us.
The increasing cost of memory and GPU make computers no longer affordable.