Imagine the only thing you know about AI comes from the opening voiceover of Terminator 2, and you are a state legislator. Now you understand the origin of this bill perfectly.
It's not about current LLMs; it's about future, much more advanced models capable of serious hacking or other mass-casualty activities.
o1 and AlphaProof are proofs of concept for agentic models. Think of them as the GPT-1 stage; the GPT-4 equivalent might be a scary technology to let roam the internet.
It looks like it would cover an ordinary chatbot that can answer "how do I $THING" questions, where $THING is both very bad and beyond what a normal person could dig up with a search engine.
It's not based on any assumption that future models have capabilities beyond providing information to a user.
Everyone in the safety space has realized that it's much easier to get legislators and the public to care if you say “bad actors will use the AI for mass damage” as opposed to “AI does damage on its own,” which triggers people's “that's sci-fi and I'm ignoring it” reflex.
Am I out of the loop here? What "high-risk" situations do they have in mind for LLMs?