In the preview video, I appreciated Katy Shi's comment: "I think this is a reflection of where engineering work has moved over the past where a lot of my time now is spent reviewing code rather than writing it."
As I think about what "AI-native" or just the future of building software looks like, it's interesting to me that - right now - developers are still just reading code and tests rather than looking at simulations.
While a new(ish) concept for software development, simulations could provide a wider range of outcomes and, especially for the front end, are far easier to evaluate than code/tests alone. I'm biased because this is something I've been exploring, but it really hit me over the head looking at the Codex launch materials.
> a lot of my time now is spent reviewing code rather than writing it.
Reviewing has never been a panacea. It's a best effort at catching obvious mistakes, like a second opinion. Only with highly rigorous tests can reviewing give me the same confidence as trusting another engineer, or myself. Generally, the cadence of code output has never been a bottleneck for me; rather the opposite (if I had more time, I'd write you a shorter letter).
Most importantly, writing code that is testable on meaningful boundaries is an extremely difficult and delicate art form, which IME is something you really want to get right if possible. I'm not saying an AI can or can't do that, only that it's the hardest part. An army of automated junior engineers still can't win over the complexity beast that yolo programming causes. At some point code mutations will cause more problems as side effects than what they fix.
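To make the "meaningful boundaries" point concrete, here's a minimal TypeScript sketch (the names and the discount rule are hypothetical, not from any real codebase): the business logic lives in a pure function you can test directly, and I/O hides behind a narrow interface you can stub.

```typescript
// The only external dependency sits behind a narrow, stubbable interface.
interface PriceSource {
  basePriceFor(sku: string): Promise<number>;
}

// Pure core: the actual business rule, testable with plain numbers and no mocks.
export function applyDiscount(base: number, loyaltyYears: number): number {
  const rate = Math.min(loyaltyYears * 0.01, 0.15); // 1% per year, capped at 15%
  return Math.round(base * (1 - rate) * 100) / 100; // round to cents
}

// Thin shell: the only part that touches I/O, and the only part needing a fake in tests.
export async function quote(
  source: PriceSource,
  sku: string,
  loyaltyYears: number
): Promise<number> {
  const base = await source.basePriceFor(sku);
  return applyDiscount(base, loyaltyYears);
}
```

A test can then exercise `applyDiscount` with plain values, while `quote` only ever needs a trivial fake `PriceSource`; getting that seam in the right place is the delicate part.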
> An army of automated junior engineers still can’t win over the complexity beast that yolo programming causes. At some point code mutations will cause more problems as side effects than what they fix.
++ Kind of my whole thesis with Graphite. As more code gets AI-generated, the weight shifts to review, testing, and integration. Even as someone helping build AI code reviewers, we'll _need_ humans stamping forever - for many reasons, but fundamentally for accountability. A computer can never be held accountable.
I think the issue is not about humans being entirely replaced. Instead, the issue is that if AI replaces a large enough number of knowledge workers while there's no new or expanded market to absorb the workforce, the new balance of supply and demand will mean that many of us will have suppressed pay or, worse, lose our jobs forever.
That is true regardless of whether there is or isn't a "new or expanded market to absorb the workforce".
It's a crucial insight that's usually missed or elided in discussions about automation and the workforce - unless you're literally at the beginning of your career, losing your career to automation screws you over big time, forever. At best, you'll have to downsize your entire lifestyle, and that of your family, to be commensurate with your now entry-level pay. If you're halfway through the career that suddenly ended, you won't recover.
All the new jobs and markets are for the kids. Mind you, not your kids - your kids are going to be disadvantaged by their household being suddenly thrown into financial insecurity or downright poverty, and may not even get a chance to start a good career path with their peers.
That, not "anti-technology sentiment", is why the Luddites smashed the looms. Those were people who got rug-pulled by business decisions and thrown into poverty, along with their families and communities.
I feel like I've been thinking along similar lines recently (I'm due to re-read this, though!), but instead of "computer" I'm replacing it with "AI" or "agents" these days. The same point holds true.
> I think this is a reflection of where engineering work has moved over the past where a lot of my time now is spent reviewing code rather than writing it.
This was always true. Front-end code is not really code. Most back-end code is just converting and moving data around. For most functionality where you need "real code", like crypto, compression, math, etc., you use a library used by another 100k developers.
I used Cline to build a tiny testing helper app and this is exactly what it did!
It made changes in TS/Next.js given just the boilerplate from create-next-app, ran `yarn dev`, then opened its mini LLM browser and navigated to localhost to verify everything looked correct.
It found one mistake and fixed the issue, then ran `yarn dev` again, opened a new browser, navigated to localhost (pointing at the original server it had brought up, not the new one on another port), and confirmed the change was correct.
I was very impressed but still laughed at how it somehow backed its way into a flow that worked, but only because Next has hot-reloading.
Recently both llama.cpp and Ollama got better support for them too, which makes this kind of integration with local/self-hosted models more attainable and less expensive.
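As a rough illustration of how low the barrier is, here's a sketch of calling a locally running Ollama server from TypeScript over its HTTP API (11434 is Ollama's default port; the model name is just an example, use whatever you've pulled, e.g. via `ollama pull llama3`):

```typescript
// Minimal sketch: send one prompt to a local Ollama server and read the reply.
// Assumes Ollama is running on its default port and "llama3" has been pulled.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
      stream: false, // one JSON response instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.message.content; // non-streaming responses carry the full reply here
}

askLocalModel("Does this page title look correct: 'My Test App'?").then(console.log);
```

Point an agent-style tool at that endpoint instead of a hosted API and the verify-in-browser loop above runs entirely on your own machine.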
Preview video from OpenAI: https://www.youtube.com/watch?v=hhdpnbfH6NU&t=878s