I don't think this is a bombshell finding. Check out this paper [0] from a year ago; Anthropic's research just gets a lot more views.

> Our experiments reveal that larger LLMs are significantly more susceptible to data poisoning, learning harmful behaviors from even minimal exposure to harmful data more quickly than smaller models.

[0] https://arxiv.org/html/2408.02946v4
