
That's exactly what I've understood, and it becomes even more important as the codebase grows.

Ultimately, LLMs (like humans) can keep only a limited context in their "brains". To use them effectively, we have to provide the right context.
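The idea above can be sketched in a few lines: rank candidate snippets from a codebase by a crude relevance score, then pack the best ones into a fixed context budget before prompting the model. Everything here (the `score` and `build_context` helpers, the sample snippets, the character budget) is hypothetical illustration, not any particular tool's API.

```python
# Hypothetical sketch: select the most relevant snippets for a query
# so the prompt stays within a limited context budget.

def score(query: str, text: str) -> int:
    # Naive relevance: count how many query words appear in the text.
    words = set(query.lower().split())
    return sum(1 for w in words if w in text.lower())

def build_context(query: str, snippets: dict[str, str], budget_chars: int = 200) -> str:
    # Rank snippets by relevance, then pack them until the budget runs out.
    ranked = sorted(snippets.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    parts, used = [], 0
    for name, text in ranked:
        if used + len(text) > budget_chars:
            break
        parts.append(f"# {name}\n{text}")
        used += len(text)
    return "\n".join(parts)

snippets = {
    "auth.py": "def login(user, password): ...",
    "billing.py": "def charge(card, amount): ...",
    "readme.md": "Project overview and setup notes.",
}
print(build_context("fix the login password check", snippets))
```

Real tools use embeddings or repo structure instead of word overlap, but the shape is the same: a small, curated slice of the codebase beats dumping everything into the prompt.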



