
This is a very typical reply: we see someone excited about AI, and then a luddite needs to come along and tell them why they shouldn't be so excited and helpful.

I mostly jest, but your comment comes off as quite unhelpful and negative. The person you replied to wasn’t blaming the parent commenter, just offering helpful tips. I agree that today’s AI tools aren’t perfect, but I also think it’s important for developers to invest time refining their toolset. It’s no different from customizing IDE shortcuts. These tools will improve, but if devs aren’t willing to tinker with what they use, what’s the point?




I think your reply is a bit unfair. The first problem, the AI using outdated libraries, can certainly be fixed if you are a competent developer who keeps up with industry standards. But if you need someone like that to guide an AI, then "agentic" coding loses a lot of its "agentic" ability, which is what the article is talking about. If you need enthusiasts and experts to steer the AIs, they are a bit like "full self driving" that still asks you to touch the steering wheel if it thinks you're not paying attention to the road.


I don't see how it's unfair at all. This is novel technology with rough edges. Sharing novel techniques for working around those rough edges is normal. Eventually we can expect them to be built into the tools themselves (see MCP for how you might inject this kind of information automatically). A lot of development is iterative, and LLMs are no exception.
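
For a concrete sketch of how that might look, here's a minimal MCP server using the Python SDK's FastMCP helper that lets an agent look up current library versions instead of trusting stale training data (the server name, the tool, and the PyPI lookup are illustrative assumptions on my part, not any standard setup):

    # Hypothetical MCP server: exposes a "latest_version" tool an agent
    # can call before it picks dependencies.
    import requests  # queries PyPI's public JSON API
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("library-versions")

    @mcp.tool()
    def latest_version(package: str) -> str:
        """Return the latest released version of a package on PyPI."""
        resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
        resp.raise_for_status()
        return resp.json()["info"]["version"]

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default, for MCP-capable clients

An agent wired up to a server like this can ask for latest_version("django") at plan time rather than defaulting to whatever was current when it was trained.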


Sure, it is normal, but the reply was also a normal reply because, as you said, it's a product with rough edges being advertised as if it's completely functional. And the comment I replied to, criticizing the other reply, was not fair, because he was only giving a normal reply, imo.


I don't believe it is a normal reply on HN, or at least it shouldn't be.

We all know that this technology isn't magic. In tech spaces there are more people telling you it isn't magic than telling you it is. The reminder does nothing. The contextual advice on how to tackle those issues does. Why even bother with that conversation? You can just take the advice or ignore it until the technology improves, since you've already made up your mind about how far you or others should be willing to go.

If it doesn't meet the standard of what you believe is advertised, then say that. Don't claim the "workarounds" are problematic because they obfuscate how someone should feel about how the product is advertised. Maybe you are an advertising purist and it bothers you, but why invalidate the person providing context on how to utilize those tools better in their current state?


> We all know that this technology isn't magic.

I didn't say it's magic. I said that's what it is advertised as.

> The reminder does nothing. The contextual advice on how to tackle those issues does.

No, the contextual advice doesn't help, because it doesn't tackle the issue. The issue is "It doesn't work as advertised". We are in the thread of an article whose main thesis is "We’re still far away from AI writing code autonomously for non-trivial tasks." Giving advice that doesn't achieve autonomous code writing for non-trivial tasks doesn't help achieve that goal.

And if you want to talk about replies that do nothing: calling the guy a Luddite for saying that the tip doesn't help him use the agent as an autonomous coder is a huge nothing.

> since you've already made up your mind about how far you or others should be willing to go.

Please read the article and understand what the conversation is about. We are talking about the limits that the article outlined, and the poster is saying how he also hit those limits.

> If it doesn't meet the standard of what you believe is advertised, then say that.

The article says this. The commenter must have assumed people here read the articles.

> why invalidate the person providing context on how to utilize those tools better in their current state?

Because that context is a deflection from the main point of the comment and the conversation. It's like a thread of mechanics talking about how an automatic tire balancer doesn't work well, and someone coming in saying "Well, you could balance the tires manually!" How helpful is that?


I disagree. It’s unfair to put words in other people’s mouths, which is what the person I was replying to did.


It is definitely not unfair. #2 is a great strategy, I'm gonna try it in my agentic tool. We obviously need experts to steer the AIs in this current phase, who said we don't?


> We obviously need experts to steer the AIs in this current phase, who said we don't?

I don't know if it's explicitly said, but if you call it agentic, it sounds like it can do stuff independently (like an agent). If I still need to hand-feed it everything, I wouldn't really call it agentic.


There are two different roles here: the dev that creates the agent and the user that uses the agent. The user should not need to adapt or edit prompts, but the dev definitely should, for the agent to evolve. Agents aren't AGI, after all.


> We obviously need experts to steer the AIs in this current phase, who said we don't?

Much of the marketing around agents is about not needing them. Zuck said Meta's internal agent can produce the code of an average junior engineer in the company. An average junior engineer doesn't need this level of steering to know not to include a 4-year-old outdated library in a web project.


Where are the goalposts then? If the answer is always "you are prompting it wrong" then there are no goalposts for agentic development.

Where is the consensus position that is demonstrably more effective than traditional development?

My observation is that it's somewhere close to "start this project and set an overall structure"; after that, nobody seems to agree.



