Hacker News
AGI Is Not a Milestone (aisnakeoil.com)
35 points by steebo 2 days ago | hide | past | favorite | 10 comments





LLMs are calculators for language.

Just as the calculator cleaved computation from mathematical understanding, LLMs have cleaved language use from linguistic reasoning. We used to treat expression and comprehension as tightly entangled. Now they're demonstrably separable. We’ve built a machine that can "speak" without understanding, just as calculators can "solve" without knowing.


I really like this way of putting it. Thanks!

And yeah, I think the industry leans too heavily on LLMs right now. They don't understand, but they're really good at making it seem like they do. Users get impressed by that and want more and more stuff that actually requires understanding, and the industry builds it anyway because $$$. That's why it's so hit-and-miss.


Maybe we could do with a new term. I mean "general intelligence" is pretty vague and could apply to all sorts of stuff.

Re "momentous milestone, ... obvious when it has been built" — personally I think a major milestone would be when AIs could keep the world running without us, including building energy plants, chip factories, and so on. AI independence, maybe?

I think they are wrong that "AGI won't be a shock to the economy because diffusion takes decades" — ChatGPT reached 100M users in two months. These things can happen quickly.


It deserves more discussion that LLM progress has had almost nothing to do with capabilities like building energy plants and the like:

  Whereas early AGI proponents believed that machines would soon take on all human activities, researchers have learned the hard way that creating AI systems that can beat you at chess or answer your search queries is a lot easier than building a robot to fold your laundry or fix your plumbing. The definition of AGI was adjusted accordingly to include only so-called “cognitive tasks.” DeepMind cofounder Demis Hassabis defines AGI as a system that “should be able to do pretty much any cognitive task that humans can do,” and OpenAI describes it as “highly autonomous systems that outperform humans at most economically valuable work,” where “most” leaves out tasks requiring the physical intelligence that will likely elude robots for some time.
(via this excellent Melanie Mitchell essay https://www.science.org/doi/10.1126/science.ado7069)

Transformer-powered robots still seem exceptionally stupid compared to frogs, bees, etc.

Re the shock comment — it is difficult to argue that ChatGPT has actually changed the economy much, despite widespread adoption. Overheated tech stocks, dev tooling, better scammers, and better academic cheaters are not exactly an industrial revolution. And frankly, for many users it is more of a toy than a useful tool — see the Studio Ghibli fad.


Great piece—I appreciate how you frame AGI as a continuous set of capabilities rather than a singular endpoint. At RunLLM, we've observed precisely this: generalized intelligence as just the starting line, with specialization critical to delivering reliable, practical value. Curious about your views on specialization as a way to address common LLM issues, like hallucinations?

With the release of OpenAI’s latest model o3, there is renewed debate about whether Artificial General Intelligence has already been achieved. The standard skeptic’s response to this is that there is no consensus on the definition of AGI. That is true, but misses the point — if AGI is such a momentous milestone, shouldn’t it be obvious when it has been built?

In this essay, we argue that AGI is not a milestone. It does not represent a discontinuity in the properties or impacts of AI systems. If a company declares that it has built AGI, based on whatever definition, it is not an actionable event. It will have no implications for businesses, developers, policymakers, or safety.


Why do you repeat the first two paragraphs of the article here?

I put it in the text field when I made the submission. I assumed it would go in a summary block beneath the link (and it's not made clear what the function of the text field is, apart from being "optional.")

As you can see, I don't do this often.


Putting a > before each paragraph works best. On the web it just shows as-is, and most mobile clients render it as a quote block.

In other words, it's a marketing term.


