I think the key is how you define “good”: LLMs can certainly turn a small amount of text into a larger one effortlessly, but if in doing so the meaningful information is diluted, or even damaged by hallucinations, irrelevant filler, and the like, then that’s clearly not “good” or effective.