
I have some 20-year-old codebases, so I do know what massive problems are. In such codebases, AI assistants have finally made it possible to solve problems nobody has had time to solve for decades, and Claude Code seems to be the best fit for that work so far.



Couldn't have put it better myself. It's great for the problems nobody has ever had time to solve: the ones that fall just below the ROI threshold and pile up until you have hundreds or thousands of them, each with icky, weird dependencies that are _exactly_ the reason nobody has time to work on them.

Now you can chip through a dozen of these complex issues a day, and there's finally hope of getting through the backlog. That's a life-changing difference for anyone with a legacy codebase.


Again, grand claims with zero substance. What are these supposed "massive problems"? I think your definition of "massive" and mine could be entirely different things.


Hey, I'm not trying to convert you. You might not be the target audience of the post, but I recognized our situation in it. We have some old, intertwined, outdated, messy, complex distributed systems to take care of. Big Balls of Mud. Big Massive Balls of Mud. Big Expensive Balls of Mud. Big Unique Balls of Mud. Big Balls of Critical Mud. So in very general terms, the massive problem is the entropy we have to reduce. Undocumented APIs are actually an example of that entropy, so beware. AI has helped us with those, just saying. And yes, we've seen generative AI itself become a source of entropy, though less so recently.


When the undocumented APIs change, is the AI going to know? Is it going to test hundreds of APIs continuously, and then do what? If an API change isn't something I need to worry about, maybe the AI will still flag it, and then I have to babysit the AI when it cries about something I don't have to worry about. I have tests for the undocumented APIs; that's really all I need. AI has been more of a miss than a hit for me; it generates far more noise than signal in my experience.
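A characterization test like that can be tiny. Here's a rough sketch in Python with requests and pytest; the endpoint URL and field names are hypothetical, not taken from any real system. The point is just to pin down the response shape you depend on so a silent upstream change fails loudly.

    # Minimal characterization test for an undocumented endpoint.
    # Hypothetical URL and fields; adapt to whatever API you actually depend on.
    import requests

    def test_orders_endpoint_shape():
        resp = requests.get("https://internal.example.com/api/orders/42", timeout=10)
        assert resp.status_code == 200
        body = resp.json()
        # Assert on exactly the fields this codebase relies on, nothing more.
        assert {"id", "status", "total"} <= body.keys()
        assert isinstance(body["total"], (int, float))

Run on a schedule, that's all the "continuous API monitoring" the situation usually needs.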

If it works for you, that's great, but I wouldn't trust it with anything important. It would still require a lot of my time to vet the fixes it produces, and from what I've seen (mostly from Copilot) it almost never produces an acceptable result - but then I'm probably outside the typical use case for coding AI.

My company pays for AI, but from what I've seen of it, I'd never pay for it for my personal projects.



