No, I'm paid much more to do much more than what I did in this simple task. Claude didn't even test the changes (in this case, it does not have the hardware required to do that), or decide that the feature needed to be implemented in the first place. But my comparison wasn't "how do I compare to Claude Code", it was "how does Aider compare to Claude Code". My boss does not use Aider or Claude Code, and would not be happy with the results of replacing me with it (yet).
I said that the AI literally does not have the hardware required to do the necessary testing. But ignoring that, automated testing is not sufficient for shipping software. Imagine shipping a website that has full test coverage but never once opening it in a browser. This isn't a fundamentally impossible problem for AI, but no amount of "good prompting" is going to get you there today.
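To make that concrete, here's a hypothetical sketch (all file and function names are invented): a unit test can fully cover the render logic and pass, while the page is broken for every user who actually loads it.

    // button.ts -- render logic, fully covered by the test below
    export function renderCheckoutButton(): string {
      return '<button class="checkout">Buy now</button>';
    }

    // button.test.ts -- passes, so the suite reports full coverage
    import assert from "node:assert";
    import { renderCheckoutButton } from "./button";

    assert.ok(renderCheckoutButton().includes("Buy now"));

    /* styles.css -- never exercised by any test:
         .checkout { display: none; }
       The button is invisible in a real browser; only someone (or something)
       actually opening the page would ever notice. */

The point isn't that browser testing is impossible to automate; it's that "passing tests" and "working software" are different claims.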
I think I pretty directly addressed that point. Yes, it would be more expensive to hire me to do what Claude Code / Aider did, but nobody would be satisfied with my work if I stopped where Claude Code / Aider did.
They aren't necessarily saying it can replace you. They're saying that even though it's expensive, it's cheaper than your time (which can be better spent on other tasks, as you point out).
The first half is correct, but the conclusion shouldn’t be ‘we’re replacing our software engineers with Claude today’; it should be ‘our experienced engineers just 10x’d their productivity, so we’ll never need to hire an intern’.
Productivity gains decrease exponentially after a few weeks as your engineering skills become rusty very fast (yes, they do, in 100% of cases).
That's the biggest part everyone misses. It's all sunshine and rainbows until, a month in, you realize you've started asking the LLM to think for you, and at that point the code turns to shit and degrades fast.
Like with everything else, it's "use it or lose it".
If you don't code yourself, you will lose the ability to do it properly very fast, and you won't realize it until it's too late.
If you're using the LLM poorly. Many team leads spend very little time programming, and spend a lot of time reviewing code, which is basically what working with LLMs is. If the LLM writes code that you couldn't have written yourself, you aren't qualified to approve it.
I'm pondering where this "AI-automated programming" trend is heading.
For example: thirty years ago, FX trading was executed by a bunch of human traders. Then computers arrived on the scene, which made all of them practically obsolete. Nowadays FX trading is executed by a collection of automated algorithms, monitored by a few quants.
My question is: is software development in 2025 basically where foreign exchange was in the 2000s?
With industrialisation, blacksmiths were replaced by assembly lines.
I'm sure that blacksmiths were more flexible and capable in almost any important dimension, but the economics of factories just made more sense.
I expect that when the dust settles (assuming the dust settles), most software will be an industrial product. The humans involved in its creation will be engineers, not craftsmen. Today we have machinists and industrial engineers, not blacksmiths.
Quality and quality assurance processes will become more important; I also expect optimised production processes.
I think a lot of the software ecosystem is a baroque set of over-engineered (or over-crafted) steps and processes, and this will probably be refactored.
I expect code quality metrics to be super refined.
Craftsmen don't usually produce artifacts to the tolerances that our machines do now; code will be the same.
I expect automated correctness proofs, specification languages, enhanced type systems to have a renaissance.
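As one small illustration of what an enhanced type system buys you (a hypothetical sketch in TypeScript; the names are invented, and this is just one lightweight technique among many): a branded type turns an informal specification like "this string is never empty" into something the compiler enforces, so no reviewer, human or machine, has to re-check it downstream.

    import assert from "node:assert";

    // A "branded" type: the only way to obtain a NonEmpty value is through
    // the validating constructor below, so the invariant travels with the type.
    type NonEmpty = string & { readonly __brand: "NonEmpty" };

    function nonEmpty(s: string): NonEmpty {
      // The specification is checked exactly once, at the boundary.
      assert.ok(s.length > 0, "specification violated: empty string");
      return s as NonEmpty;
    }

    function greet(name: NonEmpty): string {
      return `Hello, ${name}!`;
    }

    greet(nonEmpty("Ada")); // ok
    // greet("");           // compile-time error: a plain string is not NonEmpty

Proof assistants and specification languages push the same idea much further: the machine checks the spec, not the reviewer.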