As another Seattle SWE, I'll go against the grain and say that I think AI is going to change the nature of the labor market for SWEs, and my guess is for the worse. People need to remember that the ability of AI at code generation today is the worst it will ever be; it's only going to improve from here. If you judged by the sentiment on HN alone, you would think no coder worth their salt was using this in the real world, but my experience on a few teams over the last two years has been exactly the opposite: people are often embarrassed to admit it, but they are using it all the time. There are many engineers at Meta who "no longer code" by hand and do literally all of their problem solving with AI.
I remember last year, or even earlier this year, feeling like the models had plateaued, and I was of the mindset that these tools would probably forever just augment SWEs without fully replacing them. But with Opus 4.5, Gemini 3, et al., these models are incredibly powerful, and more and more SWEs are leaning on them, a trend that may slow down or speed up but is never going to backslide. I think people who don't see this are fooling themselves.
Sure, there are problem areas: it misses stuff, there are subtle bugs, it's not good for every codebase, every language, every scenario. There is some sloppiness that is hard to catch. But this is true of humans too. Just remember, the ability of the models today is the worst it will ever be; it's only going to get better. And it doesn't need to be perfect to rapidly change the job market for SWEs; it's good enough to do enough of the tasks of enough mid-level SWEs at enough companies to reshape the market.
I'm sure I'll get downvoted to hell for this comment, but I think SWEs (and everyone else, for that matter) would do well to practice some fiscal austerity, because I'd imagine the chance of many of us being on the losing side of this within the next decade is non-trivial. I mean, essentially all of the progress up to now has been made in the last 5 years, and the models are already incredibly capable.
This has been exactly my mindset as well (another Seattle SWE/DS). The baseline capability has been improving and compounding, not getting worse. It'd actually be quite convenient if AI's capabilities stayed exactly where they are now; the real problems come if AI does work.
I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line. But I'm not sure what's left if white collar work and creative work are automated en masse for "efficiency's" sake. Most folks like feeling that they're contributing to something, even if some people would rather do nothing.
To me it is clear that this is going to have negative effects on SWE and DS labor, and I'm unsure if I'll have a career in 5 years despite being a senior with a great track record. So, agreed. Save what you can.
Exactly. For example, what happens to open source projects where developers don't have access to the latest proprietary dev tools? Or, what happens to projects like Spring if AI tools can generate framework code from scratch? I've seen maven builds on Java projects that pull in hundreds or even thousands of libraries. 99% of that code is never even used.
The real changes to jobs will be driven by considerations like these. Not saying this will happen but you can't rule it out either.
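On the Maven point, a rough way to see how much of a declared dependency tree a project actually touches is the dependency plugin's analyze goal, which reports "Unused declared dependencies" alongside "Used undeclared dependencies" (it's imperfect, since it only looks at bytecode references and misses reflection-only usage, but it's telling):

    mvn dependency:analyze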
> I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line.
Most people do not dream of working most white collar jobs. Many people dream of meaningful physical labor. And many people who worked in mines did not dream of being told to learn to code.
The important piece here is that many people want to contribute to something intellectually, and a huge pathway for that is at risk of being significantly eroded. Permanently.
Your point stands that many people like physical labor, whether they want to craft something artisanally or simply prefer being outside doing physical or even menial work to sitting in an office. True, but that doesn't solve the issue above, just as it didn't in reverse. Telling miners to learn to code was... not great. And from my perspective, neither is outsourcing our thinking en masse to AI.
I keep getting blown away by AI (specifically Claude Code with the latest models). What it does is literally science fiction. If you had told someone 5 years ago that AI could find and fix a bug in complex code with almost zero human intervention, nobody would have believed you, but this is the reality today. It can find bugs, it can fix bugs, it can refactor code, it can write code. No, it's not perfect, but with a well-organized codebase and careful prompting, it rivals humans in many tasks (and certainly outperforms them in some aspects).
As you're also saying, this is the worst it will ever be. There is only one direction; the only question is the velocity.
Where I'm not sure I agree is with the perception that this automatically means we're all going to be out of a job. It's possible there will be more software engineering jobs. It's not clear. Someone still has to catch the bad approaches, the big mistakes, etc. There is going to be a lot more software produced with these tools than ever.
> Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better.
This is the ultimate hypester’s motte to retreat to whenever the bailey of a technology’s claimed utility falls. It’s trivially true of literally any technology, but also completely meaningless on its own.
I think whether you are right or wrong, it makes sense to hedge your bets. I suspect many people here are feeling some sense of fear (career, future implications, etc.); I certainly do on some of these points, and I think that's a rational response to the risk of an unknown future.
In general I think: if I were not personally invested in this situation (i.e., just another man on the street), what would my immediate reaction to this be? Would I still become a software engineer, as an example? Even if it doesn't come to pass, given what I know now, would I take that bet with my life/career?
I think if people were honest with themselves sadly the answer for many would probably be "no". Most other professions wouldn't do this to themselves either; SWE is quite unique in this regard.
> code generation today is the worst that it ever will be, and it's only going to improve from here.
I'm also of the mindset that even if this is not true, that is, even if the current state of LLMs is the best it will ever be, AI would still be helpful. It is already great at writing self-contained scripts, and efficiency with large codebases has already improved.
> I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial.
Yes, this is worrisome. Though it's ironic that almost every serious software engineer, at some point early in their childhood or career when programming was more for fun than work, thought about how cool it would be for a computer program to write a computer program. And now that we have the capability, right in front of our eyes, we're afraid of it.
But one thing humans are really good at is adaptability. We adapt to circumstances and situations, good or bad. Even if the worst happens and people lose jobs, it will be negatively impactful for their families in the short term; over time, however, humans will adapt to the situation, adapt to coexist with AI, and find the next endeavour to conquer.
Rejecting AI is not the solution. Using it like any other tool is. A tool that, if used correctly, by the right person, can indeed produce faster results.
I mean, some are good at adaptability, while others get completely left in the dust. Look at the Rust Belt: the jobs have left, and everyone there is desperate for a handout. Trump is busy trying to engineer a recession in the US, and when recessions happen, companies at the margin go belly-up and the fat is trimmed from the workforce. With the inroads that AI is making into the workforce, this could be the first restructuring where we see massive job losses.
> I mean, they've made all of the progress up to now in essentially the last 5 years
I have to challenge this one: research on natural language generation and machine learning dates back to the 50s; it just only recently came together at scale in a way that became useful. Tons of the hardest progress was made over many decades, and relatively little fundamental innovation has happened in the last 5 years. The innovation has mostly been bigger scale, better data, minor architectural tweaks, and reinforcement learning from human feedback and other such fine-tuning.
We're definitely in the territory of splitting hairs, but I think most of what people call modern AI is the result of the transformer paper. Of course that was built off the back of decades of research.
> People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here.
I sure hope so. But until the hallucination problem is solved, there's still going to be a lot of toxic waste generated. We have got to get AI systems which know when they don't know something and don't try to fake it.
The "hallucination problem" can't be solved, it's intrinsic to how stochastic text and image generators work. It's not a bug to be fixed, it's not some leak in the pipe somewhere, it is the whole pipe.
> there's still going to be a lot of toxic waste generated.
And how are LLMs going to get better as the quality of the training data nosedives because of this? Model collapse is a thing. You can easily see a scenario in which they're never better than they are now.
> People need to remember that the ability of AI in code generation today is the worst that it ever will be
I've been reading this since 2023 and yet it hasn't really improved all that much. The same things are still problems that were problems back then. And if anything the improvement is slowing down, not speeding up.
I suspect unless we have real AGI we won't have human-level coding from AIs.
I feel like the idea here is cute, but does it realistically work at scale? Of course, a messaging app like this, if it's going to work anywhere, is going to work in Gaza, one of the (at least formerly) most densely populated areas in the world. But Bluetooth was not designed for this type of communication whatsoever; phones can establish Bluetooth connections between devices at the very most 100 ft apart under the most ideal conditions, and the practical range is probably much lower than that.
Even if people are living in open-air conditions, I can imagine messages getting stuck or being delivered very late, especially at night when there may not be a lot of human movement. How well does this actually work in practice?
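As a very rough sanity check on the density argument (back-of-the-envelope only, with assumed numbers: roughly 2 million people over roughly 365 km², an optimistic ~30 m usable outdoor Bluetooth hop, and made-up adoption rates), the average spacing between randomly scattered devices suggests the mesh only plausibly hangs together at Gaza-like density with meaningful adoption:

    import math

    # Back-of-the-envelope sketch with assumed, illustrative numbers only:
    # can a Bluetooth mesh usually find a next hop at Gaza-like density?
    population = 2_000_000   # rough pre-war population of the Gaza Strip
    area_km2 = 365           # rough area of the Gaza Strip
    ble_hop_m = 30.0         # optimistic usable outdoor BLE hop, in meters

    people_per_km2 = population / area_km2

    for adoption in (0.01, 0.05, 0.10):   # made-up adoption rates
        devices_per_m2 = people_per_km2 * adoption / 1e6
        # Mean nearest-neighbor distance for randomly scattered points: 1 / (2 * sqrt(density))
        mean_nn_m = 1.0 / (2.0 * math.sqrt(devices_per_m2))
        reach = "within" if mean_nn_m <= ble_hop_m else "beyond"
        print(f"adoption {adoption:.0%}: mean nearest device ~{mean_nn_m:.0f} m "
              f"({reach} a ~{ble_hop_m:.0f} m hop)")

Which roughly matches the intuition: at a few percent adoption in one of the densest places on earth it's borderline (~68 m at 1%, ~30 m at 5%, ~21 m at 10%), and anywhere less dense, or at night with phones indoors and nobody moving, it gets much worse.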
The point the OP is making is that LLMs are not reliably able to provide safe and effective emotional support as has been outlined by recent cases. We're in uncharted territory and before LLMs become emotional companions for people, we should better understand what the risks and tradeoffs are.
I wonder if, statistically (hand waving here, I’m so not an expert in this field), the SOTA models do as much or as little harm as their human counterparts in terms of providing safe and effective emotional support. Totally agree we should better understand the risks and trade-offs, but I wouldn’t be super surprised if they are statistically no worse than us meat bags at this kind of stuff.
One difference is that if it were found that a psychiatrist or other professional had encouraged a patient's delusions or suicidal tendencies, then that person would likely lose his/her license and potentially face criminal penalties.
We know that humans should be able to consider the consequences of their actions and thus we hold them accountable (generally).
I'd be surprised if comparisons in the self-driving space have not been made: if waymo is better than the average driver, but still gets into an accident, who should be held accountable?
Though we also know that with big corporations, even clear negligence that leads to mass casualties does not often result in criminal penalties (e.g., Boeing).
> that person would likely lose his/her license and potentially face criminal penalties.
What if it were an unlicensed human encouraging someone else's delusions? I would think that's the real basis of comparison, because these LLMs are clearly not licensed therapists, and we can see from the real world how entire flat earth communities have formed from reinforcing each others' delusions.
Automation makes things easier and more efficient, and that includes making it easier and more efficient for people to dig their own rabbit holes. I don't see why LLM providers are to blame for someone's lack of epistemological hygiene.
Also, there are a lot of people who are lonely and for whatever reasons cannot get their social or emotional needs met in this modern age. Paying for an expensive psychiatrist isn't going to give them the friendship sensations they're craving. If AI is better at meeting human needs than actual humans are, why let perfect be the enemy of good?
> if waymo is better than the average driver, but still gets into an accident, who should be held accountable?
Waymo of course -- but Waymo also shouldn't be financially punished any harder than humans would be for equivalent honest mistakes. If Waymo truly is much safer than the average driver (which it certainly appears to be), then the amortized costs of its at-fault payouts should be way lower than the auto insurance costs of hiring out an equivalent number of human Uber drivers.
> I would think that's the real basis of comparison
It's not because that's not the typical case. LLMs encourage people's delusions by default, it's just a question of how receptive you are to them. Anyone who's used ChatGPT has experienced it even if they didn't realize it. It starts with "that's a really thoughtful question that not many people think to ask", and "you're absolutely right [...]".
> If AI is better at meeting human needs than actual humans are, why let perfect be the enemy of good?
There is no good that comes from having all of your perspective distortions validated as facts. They turn into outright delusions without external grounding.
Talk to ChatGPT and try to put yourself into the shoes of a hurtful person (e.g. what people would call "narcissistic") who's complaining about other people. Keep in mind that they almost always suffer from a distorted perception so they genuinely believe that they're great people.
They can misunderstand some innocent action as a personal slight, react aggressively, and ChatGPT would tell them they were absolutely right to get angry. They could do the most abusive things and as long as they genuinely believe that they're good people (as they almost always do), ChatGPT will reassure them that other people are the problem, not them.
> LLMs encourage people's delusions by default, it's just a question of how receptive you are to them
There are absolutely plenty of people who encourage others' flat earth delusions by default, it's just a question of how receptive you are to them.
> There is no good that comes from having all of your perspective distortions validated as facts. They turn into outright delusions without external grounding.
Again, that sounds like a people problem. Dictators infamously fall into this trap too.
Why are we holding LLMs to a higher standard than humans? If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike. If others are okay with having their egos stroked and their delusions encouraged and validated, that's their prerogative.
> If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.
It's not a matter of liking or disliking something. It's a question of whether that thing is going to heal or destroy your psyche over time.
You're talking about personal responsibility while we're talking about public policy. If people are using LLMs as a substitute for their closest friends and therapist, will that help or hurt them? We need to know whether we should be strongly discouraging it before it becomes another public health disaster.
> We need to know whether we should be strongly discouraging it before it becomes another public health disaster.
That's fair! However, I think PSAs on the dangers of AI usage are very different in reach and scope from legally making LLM providers responsible for the AI usage of their users, which is what I understood jsrozner to be saying.
>Why are we holding LLMs to a higher standard than humans? If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.
We're not holding LLMs to a higher standard than humans, we're holding them to a different standard than humans because - and it's getting exhausting having to keep pointing this out - LLMs are not humans. They're software.
And we don't have a choice not to interact with LLMs because apparently we decided that these things are going to be integrated into every aspect of our lives whether we like it or not.
And yes, in that inevitable future the fact that every piece of technology is a sociopathic P-zombie designed to hack people's brain stems and manipulate their emotions and reasoning in the most primal way possible is a problem. We tend not to accept that kind of behavior in other people, because we understand the very real negative consequences of mass delusion and sociopathy. Why should we accept it from software?
Sure, but the specific context of this conversation are the human roles (taxi driver, friend, etc.) that this software is replacing. Ergo, when judging software as a human replacement, it should be compared to how well humans fill those traditionally human roles.
> And we don't have a choice not to interact with LLMs because apparently we decided that these things are going to be integrated into every aspect of our lives whether we like it or not.
Fair point.
> And yes, in that inevitable future the fact that every piece of technology is a sociopathic P-zombie designed to hack people's brain stems and manipulate their emotions and reasoning in the most primal way possible is a problem.
Fair point again. Thanks for helping me gain a wider perspective.
However, I don't see it as inevitable that this becomes a serious large-scale problem. In my experience, current GPT 5.1 has already become a lot less cloyingly sycophantic than Claude is. If enough people hate sycophancy, it's quite possible that LLM providers are incentivized to continue improving on this front.
> We tend not to accept that kind of behavior in other people
Do we really? Maybe not third party bystanders reacting negatively to cult leaders, but the cult followers themselves certainly don't feel that way. If a person freely chooses to seek out and associate with another person, is anyone else supposed to be responsible for their adult decisions?
It's hilariously depressing to imagine how impossible it would be to build something like that in the US. It's not only that it's an engineering feat; it's also that it was built in such a human-centric way. The cafe at the top, the light show with the water. These things are all superfluous, but they make these projects exciting and add the novelty that makes these areas just fun places to be. The U.S., in its current form, could never build an infrastructure project in such a human-centric way, because, well, we apparently have an inability to build anything at all.
Seriously, when's the last time we built something like this? The only initiative I can even think of is California high-speed rail, and that project just so happens to be a testament to the exact inability I'm describing.
The two longest floating bridges in the world are in Seattle, Washington. The longest, Evergreen Point Floating Bridge, has a mixed use lane for cycling and walking and supposedly took 5 years to construct after construction started (ignoring that it replaced a bridge that existed there and also planning took longer, I'm not sure how to compare that). Seattle also has the world's only floating bridge that has rail on it, Homer M. Hadley Memorial Bridge, which is also the world's 5th longest floating bridge. While not the same exact sort of feat of engineering, it's pretty cool.
And the best demonstration of Seattle's hapless "can't do" attitude is that they left the watertight doors open one day in 1990 and the bridge pontoons filled with water and the bridge sank. Back then, by some miracle the bridge was fixed in a few weeks, but today it would take 10 years.
When the light rail line was installed on the I-90 bridge, after the whole thing was done it was discovered that the rail ties were built incorrectly. This was in April 2023. Thousands of concrete ties had to be demolished and construction had to start over. Of course this took years. God forbid that someone should check the work along the way.
If Seattle was a Simpsons character, it would be Ralph Wiggum when he's grown up and has one foot permanently stuck in a bucket.
Nobody in the Seattle area uses those names. They are always just the I-90 bridge or 520 bridge among the people I talk to, although both roads actually use multiple bridges to span Lake Washington.
Yeah, I had to look up the bridge names. The "I-90 bridge" is actually both the Homer M. Hadley and Lacey V. Murrow Memorial bridges, and that's excluding the two bridges that are east of Mercer Island. I wanted to be more exact, and at that point I also added the 520 bridge's real name.
I'm not saying America can't build at all, I'm just saying it can't build in the modern era. Apparently the I-90 bridge you are referring to was built in 1940.
I realize that Seattle has the only floating bridge with rail on it. Actually, my mom is the lead photographer for Sound Transit, the agency in charge of its development. Sound Transit, to say the least, is a huge embarrassment for the region. They're way over budget and way behind schedule on all of their light rail initiatives. Sure, there are engineering marvels here and there on the project, but it's not a testament to the U.S.'s ability to deliver infrastructure at a reasonable speed or budget.
Again, the Wikipedia article that you linked points to a preposterous insufficiency in our ability to maintain infrastructure:
"[...] On February 26, 2014, in the wake of another suicide from the bridge, independent Rep. Joe Brooks of Winterport proposed emergency legislation to the Maine Legislature to require the installation of a suicide barrier on the bridge.[22] This proposal was rejected due to cost, as a barrier was estimated to cost between $500,000 and $1 million, plus additional costs for regular inspections. As an alternative, two solar-powered phones were installed on each end of the bridge in May 2015 which connect users to a suicide hotline. The phones cost $30,000. State officials were aware of instances the phones were not functional, and increased inspections of them to weekly from the previous monthly. They could not determine if the phones were functional when a March 5, 2017 suicide, the first since the phones were installed, occurred. The phones were found to be out of order on June 23, 2017, when an abandoned car on the bridge resulted in a search of the Penobscot River by authorities looking for its driver.[7] The emergency phones on the Penobscot Narrows Bridge were reported out of order following another suicide in 2021.[9][23] They were subsequently replaced.[24] In May 2022, the Maine legislature was reportedly planning to "pull together a study group on suicides by bridge."[25] Funding was subsequently approved for a barrier, but the installation slated for 2024 was delayed for further testing..."
Why are our systems like this? There is no culture of accountability, for one. There is also no desire to dream big, anymore, it seems.
If you get a chance, you should read (or listen to) Robert Caro's biography of Robert Moses[1], who built much if not most of the parks in New York. He built many of them at a time when Roosevelt was using public money to provide jobs for people thrown out of work during the Great Depression. These public works projects are often scorned by today's electorate.
But perhaps more importantly, the book gives insight into how the government building things gives the people who can grant contracts tremendous political leverage and power. It is remarkable to see that what he built was definitely good for New Yorkers (although, as the book points out, really for rich and white New Yorkers), and yet the distortions this caused in the political machine brought some people serious grief, from loss of property to loss of their entire livelihood.
Authoritarian systems can operate like that but it comes at a tremendous cost.
And it’s amazing that this got built in only 3 years! I can’t imagine anything this substantial being built that fast in the US. I can’t think of any examples either but I’d be happy to see some that anyone knows of.
Visiting China will shake your world view as a westerner. It did mine.
I’m still grateful at a personal level to live in a democracy, but I’m not as certain as I used to be that it’s the only way to run a country that benefits the people.
The infrastructure is incredible yes, but the complete lack of fear on the streets, and the positive consequences of that, are something to behold. Women are not afraid to walk home alone at 2am. People young and old dance together in the street. You never feel on your guard, at all.
They haven’t completely eradicated poverty, but they seem to be giving it a real go. In one very rural area I saw an elderly couple living in a rundown shack, but they had a bunch of modern medical equipment, provided to them for free by the state.
Going to need a business case that translates to value, sorry. Common sentiment, apparently, is that our postal service must generate profit. Clown show.
"The super stem cells prevent age-related bone loss while rejuvenating over 50% of the 61 tissues analyzed." (including the brain).
What do people die of when they die of 'old age'? There are the three pillars: cancer, cardiovascular, neurodegenerative. These are often (but not always) metabolic diseases; e.g., cardiovascular death often arises from kidney insufficiency. If you can regenerate the liver, kidney, etc. indefinitely, a large vector of metabolic disease is probably diminished or disappears.
In the paper, monkeys restored brain volume. They reduced the levels of senescent cells to youthful levels. They increased bone mass. This reduces or eliminates many of the threats that inflict casualties among the centenarian population.
Sure, something else could come up that the monkeys start dying from instead. But given the way humans and monkeys die of old age, if you reduce or eliminate all of the known threats, it's hard to see how this wouldn't extend lifespan.
This paper doesn't prove that it extends lifespan. So to speculate on that extraordinary claim without extraordinary evidence to back it up is useless. It would be far easier to prove this out on a species with a much shorter lifespan, like mice, not to mention cheaper, but so far we're unable to make a mouse live longer than 5 years.
Yes, I'm certainly speculating. It certainly seems that this could be a path to extending lifespan. I think the claim is less than "extraordinary" though. Many teams are working to figure out how to extend lifespan in many species—it seems likely that there will be meaningful progress in the coming years or decades.
Harvard’s health blog sort of always gets it wrong, which diminishes their brand IMO. This is also the publication that said something to the effect of “[…] other than reducing stress, there are no health benefits of using the sauna.” Nobody is claiming that standing desks will help you lose a massive amount of weight. People are using them to maintain proper posture and to be more generally active during the day. If you use a standing desk, you’re more likely to go from standing to sitting and vice versa, and you’re more likely to move around a bit as you complete your work, as opposed to sitting in a chair, which makes it easy to be sedentary for hours.
Exactly. Econ 101: why do consumption taxes work at all? By increasing the amount of pain associated with purchasing a particular indulgent product, you decrease consumption of that product on the margin. When you increase the price of cigarettes by 20%, cigarette smoking in a society decreases. But for the most addicted, probably no consumption tax will act as a deterrent.
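(Back-of-the-envelope, with an assumed number: price elasticities of demand for cigarettes are often quoted somewhere around -0.4, so a 20% price hike works out to roughly an 8% drop in consumption, concentrated among the marginal smokers rather than the heavily addicted.)

    # Illustrative only: -0.4 is an assumed, commonly quoted ballpark elasticity,
    # not a measured figure for any particular market.
    elasticity = -0.4
    price_increase = 0.20                  # e.g. a 20% consumption tax
    change = elasticity * price_increase   # ~ -0.08
    print(f"~{abs(change):.0%} drop in consumption")   # ~8%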
Some individuals will find a way to distribute and consume child pornography no matter the cost. But other addicted individuals, the ones consuming or distributing on the margin, will stop if doing so becomes laborious enough. I.e., imagine the individual who doesn't want to be consuming it, who knows they shouldn't; this type of deterrent may be the breaking point that gets them to stop altogether. And if you reduce the amount of consumption or production by any measure, you decrease a hell of a lot of suffering.
But anyway, the goal of this legislation is not to drive the level of distribution to 0. The goal of policymakers could be seen charitably as an attempt to curtail consumption, because any reduction in consumption is a good thing.
Let's say you're actually texting in a group. Even if you use perfect operational security, odds are terrible that all members of your group will perfectly uphold the same level of security every time they share their content.
One is going to slip up. He's going to get arrested. And he's going to turn the whole group in to reduce his sentence. Everyone else meanwhile has their operational security become proof of intent, proof of deliberation, proof of trying to evade authorities. They thought they were clever with the encrypted ZIP files, but the judge and jury are going to be merciless. I don't think most authorities have a problem with that.