My advice: don't jump around between LLMs for a given project. The AI space is progressing too rapidly right now. Save your sanity.
A man with one compass knows where he's going; a man with two compasses is never sure.

Isn't that an argument to jump around, since performance improves so rapidly between models?

I think the idea is that you might end up spending your time shaving a yak. Finish your project, then try the new SOTA on your next task.

But there's also churn: I think the point is more that you'll be more productive with a setup whose quirks you've learned than with the newest one, whose quirks you haven't.

Each model has its own strengths and weaknesses though. You really shouldn't be using one model for everything. Like, Claude is great at coding but expensive, so you wouldn't use it for everything from debugging to writing test benches. The OpenAI models suck at architecture but are cheap, so they're ideal for test benches, for example.
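
A minimal sketch of that kind of task-to-model split, in Python; the model names, task categories, and pick_model helper are hypothetical, not anything from the thread:

    # Hypothetical sketch: route each kind of task to a different model,
    # roughly along the cost/strength split described above.
    # Model names and task categories are illustrative, not a real API.

    TASK_TO_MODEL = {
        "architecture": "claude-strong",   # stronger but pricier model
        "coding":       "claude-strong",
        "test_benches": "gpt-cheap",       # cheaper model for high-volume work
        "debugging":    "gpt-cheap",
    }

    def pick_model(task: str) -> str:
        """Return the model configured for a task, defaulting to the cheap one."""
        return TASK_TO_MODEL.get(task, "gpt-cheap")

    if __name__ == "__main__":
        for task in ("architecture", "test_benches", "unknown"):
            print(f"{task} -> {pick_model(task)}")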

You did not read what I said:

> don't jump around between LLMs for a given project

I didn't say anything about sticking to a single model for every project.


You should have at least two, to sanity-check difficult programming solutions.