I'll set aside my objection to calling software development "coding" (10% of the job at best) ...
I guess AI-assisted coding (sounds like assisted living for the elderly) is replacing Stack Overflow "cut n paste" coding, where there was a similar lack of attention to what people were copying. In the case of AI, what is being generated/pasted comes from the training data, so perhaps "statistical paste coding"? "Hive mind coding"?
"Hive mind coding" almost sounds like a good thing (leaning on the collective experience of the global software community - what could be better ?!), until you realize (e.g. as Fred Brooks describes in "Mythical Man Month") that any software project large enough to need multiple developers (or AI equivalent) needs a clear vision and an architect, lead, etc. 1M uncoordinated coding monkeys isn't going to cut it, even if they are all guided by a "hive mind" cookbook of what to write.
Maybe this is the real disconnect, or a large part of it: on the one hand, benchmarks indicating super-human LeetCode and competitive-coding AI capability, and Google reporting that 30% of its software is now AI-developed (= test cases?); on the other, not hearing of much utility outside of one-man projects or narrow "write me a test function" type tasks.
Yeah - being open to surprises (code has tons of broken corner cases, security vulnerabilities, missing error handling, etc, etc) isn't exactly what comes to mind when writing real software.
I'm not even sure in what use cases, even toy ones, vibe coding works. It's been a few months since I tried using Claude for even simple things like generating a React-based prototype for part of a web page (the sort of use case I would expect it to do well on), and even there it wasn't a "haha isn't this cool" hands-off "vibe coding" experience - I had to intervene, do my own Google research to find out why it was failing, and then tell it explicitly what to do.
I also have to wonder how many of these AI researchers fully realize the massive gap in complexity between what they are writing (a typical ML model or prototype) and what real software development looks like. Let's see Karpathy "vibe code" a 100K-1M LOC system that needs to interact with a dozen undocumented legacy systems via proprietary interfaces, then come back and tell us how that went.
The original intent of this architecture was modelling large spiking neural networks in real time, although the hardware is really not that specialized - basically a bunch of ARM chips with a high-speed interconnect for message passing.
It's interesting that the article doesn't say that's what it's actually going to be used for - just event-driven (message passing) simulations, with applications to defense.
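For anyone unfamiliar with the term, here's a minimal toy sketch of what an event-driven (message passing) simulation amounts to, in plain Python - all the names are mine, and nothing here is SpiNNaker-specific:

    import heapq

    events = []  # min-heap of (delivery_time, seq, dest_id, weight)
    seq = 0      # unique tie-breaker so the heap never compares payloads

    def send(time, dest, weight):
        global seq
        heapq.heappush(events, (time, seq, dest, weight))
        seq += 1

    class Neuron:
        def __init__(self, targets, threshold=1):
            self.targets, self.threshold, self.potential = targets, threshold, 0

        def on_message(self, time, weight):
            self.potential += weight
            if self.potential >= self.threshold:  # fire: spike all targets
                self.potential = 0
                for t in self.targets:
                    send(time + 1, t, 1)          # 1 time-unit link delay

    nodes = {0: Neuron(targets=[1]), 1: Neuron(targets=[0])}
    send(0, 0, 1)                                  # initial stimulus
    while events:
        time, _, dest, weight = heapq.heappop(events)
        if time > 20:                              # stop condition
            break
        nodes[dest].on_message(time, weight)
        print(f"t={time}: neuron {dest} received spike")

Conceptually the whole machine is just this - an event queue plus per-node message handlers - with the hardware distributing the queue and handlers across many cores and the interconnect delivering the messages.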
In a recent talk by the author (I just posted a link), he says a best practice for large requests (e.g. implementing an entire project/solution) is to ask Claude Code to think about it and present you with alternative approaches/designs, which you can then review. You can provide feedback and iterate if you want.
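Something like this, presumably (my wording, not a quote from the talk):

    Before writing any code, propose 2-3 alternative designs for
    this feature, with the trade-offs of each, and wait for me to
    pick one before implementing anything.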
Interesting that about 80% of developers at Anthropic are now using it.
There's a question at the end of the presentation about why Claude Code is a command line tool rather than an IDE... The basic answer was that the command line is ubiquitous, so it fits into everyone's workflow regardless of tool choice, but the second part was more interesting: internally, Anthropic is seeing how fast Claude itself is improving, and is projecting that using IDEs to develop software may shortly no longer make sense!
I almost got banned from high school chemistry for making this (UK, 1970s). Made it during break in the middle of a chemistry lab while the teacher was out, and sadly was rather sloppy - got it all over the floor, threw damp filter paper covered in it into the waste paper basket (which later self-detonated), etc.
There's a popular saying, e.g. used by NVIDIA CEO Jensen Huang, that "AI won't replace you - a human using AI will replace you". That may be temporarily true while AI isn't very capable, but the AI CEOs are claiming AGI will be here in 2 years, and explicitly saying that it will be a "drop-in replacement remote worker". Obviously one of these claims is wrong - AI is either just a tool to be learnt and used, or it is in fact a drop-in replacement for a human.
One can argue about the timeline and technology (maybe not LLM-based), but it does seem that human-level AGI will be here relatively soon - the next 10 or 20 years, perhaps, if not 2. When this does happen, history is unlikely to be a good predictor of what to expect... AGI may create new jobs as well as destroy old ones, but what's different is that AGI will also be doing those new jobs! AGI isn't automating one industry, or creating a technology like computers that can help automate any industry - AGI is a technology that will replace the need for human workers in any capacity, starting with all jobs that can be done without a physical presence.
The trouble with looking at past examples of new tech and automation is that those were all verticals - the displaced worker could move to a different, maybe newly created, work area left intact by the change.
Where AI will be different (when we get there - LLMs are not AGI) is that it is a general human-replacement technology, meaning there will be no place to run... It may change the job landscape, but the new jobs (e.g. supervising AIs) will ALSO be done by AI.
I don't buy this "AGI by 2027" timeline though - LLMs and LLM-based agents are just missing too many basic capabilities compared to a human (e.g. the ability to learn continually and incrementally). It seems that RL, test-time compute (cf. tree search) and agentic applications have given a temporary second wind to LLMs, which were otherwise topping out in terms of capability, but IMO we are already seeing the limits of this too. Superhuman math and coding ability (on smaller-scope tasks) does not translate into GENERAL intelligence, since it is not based on a general mechanism - it is based on vertical pre-training in these areas (atypical of the general use case) where there is a clean reward signal for RL to work well.
It seems that this crazy "we're responsibly warning you that we're going to destroy the job market!" spiel is perhaps because these CEOs realize there is a limited window of opportunity here to get widespread AI adoption (and/or more investment) before the limitations become more obvious. Maybe they are just looking for an exit, or perhaps they are hoping that AI adoption will be sticky even if it proves to be a lot less capable than what they are promising.