
The code would be unreviewable.



It would also be harder for the LLM to work with. Much like with humans, the model's ability to understand and create code is deeply intertwined with, and inseparable from, its general NLP ability.


Why couldn't you use an LLM to generate source code from a prompt, compile it, then train a new LLM on the same prompt using the compiled output?

It seems no different in kind to me than image or audio generation.
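A minimal sketch of what that pipeline might look like, assuming C source and gcc as the toolchain; `llm_generate_source` is a hypothetical stand-in for whatever code-generating model is actually used:

  import os
  import subprocess
  import tempfile

  def llm_generate_source(prompt: str) -> str:
      """Hypothetical call to an existing code-generating LLM."""
      raise NotImplementedError

  def build_training_pair(prompt: str) -> tuple[str, bytes] | None:
      """Turn a prompt into a (prompt, compiled binary) training pair."""
      source = llm_generate_source(prompt)
      with tempfile.TemporaryDirectory() as tmp:
          src_path = os.path.join(tmp, "prog.c")
          bin_path = os.path.join(tmp, "prog")
          with open(src_path, "w") as f:
              f.write(source)
          # Only keep samples that actually compile.
          result = subprocess.run(["gcc", src_path, "-o", bin_path],
                                  capture_output=True)
          if result.returncode != 0:
              return None
          with open(bin_path, "rb") as f:
              return prompt, f.read()

The new model would then be trained on the resulting (prompt, binary) pairs rather than on source text.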


...by a human :)


Hence it's very important in the transitional phase we're currently in, where LLMs can't do everything yet.



