Hacker News

While you give your process feedback, here is my emotions-related one. When I develop with an LLM, I don't face my own limits in terms of reasoning and architecture; I face the limits of the model's ability to interpret prompts. Instead of trying to be a better engineer, I'm frustratingly prompting an unintelligent human-like interface.

I'm not FUDding LLMs; I use them every day, all the time. But they won't make me a better engineer. And I deeply believe that becoming a good engineer helped me become a better human, because of how the job makes you face your own limits and trains you to be humble and constantly learning.

Vibe coding won't lead to that same and sane mindset.




I feel like this is the big question now: how do you find the right balance that lets you preserve and improve your skills? I haven't yet found the answer, but I feel like it lies in being very deliberate about what you let the agent do and what kind of work you do the “artisanal” way, meaning not even AI-enabled auto-complete.

But to be able to find the right balance, one does need to learn fully what the agent can do, and to have a lot of experience with that way of coding. Otherwise the mental model is wrong.


Agreed. Auto-complete on steroids isn't what I'd call using an LLM for coding. It's just a convenience.

What I do is write pure functions with the LLM. Once I've designed the software and I have the API, I can tell the model to write a function that does a specific job where I know the inputs and outputs but am too lazy to write the code itself.
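As a sketch of that workflow (a hypothetical example, not from the comment): you pin down the signature, inputs, outputs, and error cases yourself, and delegate only the body of a pure function, which is then easy to verify with a few known input/output pairs.

```python
import re

def parse_duration(spec: str) -> int:
    """Convert a duration like '1h30m', '45m', or '2h' to total minutes.

    A pure function: the full contract (inputs, outputs, error cases)
    is fixed up front, so the body is easy to delegate and to verify.
    """
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?", spec)
    if not spec or not match:
        raise ValueError(f"invalid duration: {spec!r}")
    hours, minutes = (int(g) if g else 0 for g in match.groups())
    return hours * 60 + minutes
```

Because the function has no side effects, checking a handful of pairs (e.g. `parse_duration("1h30m") == 90`) is enough to accept or reject whatever the model produced.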


it will make you a worse thinker



