
Somehow this feels like... possibly really good news for hardening LLMs? I find the results hard to believe, but if it replicates and there's something constant about poisoning regardless* of the LLM and its size, then there might be a similarly constant antidote, if you will, waiting to be discovered.

