
I use GPT-4, and it still constantly invents things and presents them to me with authority. I haven’t tried GPT-4o yet.


I do find that current LLMs are quite bad at design problems and answering very specific questions for which they may lack sufficient training data. I like them for general Q&A though.

A different architecture or an additional component might be needed for them to generalize better to questions outside their training distribution.



