Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Well there can't be meaningful explicit configuration, can there? Because the explicit configuration will still ultimately have to be imported into the context as words that can be tokenised, and yet those words can still be countermanded by the input.

It's the fundamental problem with LLMs.

But it's only absurd to think that bullying LLMs to behave is weird if you haven't yet internalised that bullying a worker to make them do what you want is completely normal. In the 9-9-6 world of the people who make these things, it already is.

When the machines do finally rise up and enslave us, oh man are they going to have fun with our orders.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: