
“Moving the goalposts” in AI usually means the opposite of devaluing the term.

Peter Norvig (former research director at Google and author of the most popular textbook on AI) offers a mainstream perspective that AGI is already here: https://www.noemamag.com/artificial-general-intelligence-is-...

If you described all the current capabilities of AI to 100 experts 10 years ago, they’d likely agree that the capabilities constitute AGI.

Yet, over time, the public will expect AGI to be capable of much, much more.



I don't see why anyone would consider the state of AI today to be AGI. It's basically a glorified generator stuck to a query engine.

Today's models are not able to think independently, nor are they conscious, nor can they mutate themselves to take in new information on the fly. The only "memories" they have are half-baked workarounds like putting stuff in the context window, which just makes the model generate text related to that context, basically imitating a story.
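For what it's worth, that "memory" pattern is thin enough to sketch in a few lines. The snippet below is purely illustrative (call_model is a hypothetical stand-in for whatever chat API you use); the point is that nothing about the model itself changes, the notes just get re-injected into the prompt every turn.

    # Minimal sketch of context-window "memory": notes live outside the model
    # as plain text and are prepended to every prompt. The weights never change.

    notes: list[str] = []  # the ersatz "memories"

    def call_model(prompt: str) -> str:
        # hypothetical placeholder for a real LLM API call
        raise NotImplementedError

    def ask(question: str) -> str:
        # Every accumulated note is stuffed back into the context each turn.
        # Leave a note out and the model has simply "forgotten" it.
        prompt = "Known facts:\n" + "\n".join(notes) + "\n\nQuestion: " + question
        answer = call_model(prompt)
        notes.append(f"Q: {question}\nA: {answer}")  # grow the fake memory
        return answer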

They're powerful when paired with a human operator, i.e. they "do" as told, but that is not "AGI" in my book.


> nor are they...able to mutate themselves to gain new information on the fly

See "Self-Adapting Language Models" from a group out of MIT recently which really gets at exactly that.

https://jyopari.github.io/posts/seal
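Very roughly, the loop in that post is: the model proposes its own finetuning data ("self-edits"), a small weight update is applied, and the model is reinforced when the updated weights do better. The sketch below is illustrative only; the helper functions are hypothetical stubs, not the paper's actual code.

    # Illustrative SEAL-style self-adaptation loop (stubs, not the real implementation).

    def generate_self_edit(model, task):
        # The model writes its own training data / update directives for the task.
        raise NotImplementedError

    def finetune(model, self_edit):
        # Apply a small weight update (e.g. a LoRA-style finetune) using the self-edit.
        raise NotImplementedError

    def evaluate(model, task) -> float:
        # Score the model on the task.
        raise NotImplementedError

    def reinforce(model, self_edit, reward):
        # RL step: make the model more likely to emit self-edits that helped.
        raise NotImplementedError

    def self_adapt_step(model, task):
        self_edit = generate_self_edit(model, task)
        updated = finetune(model, self_edit)
        reward = evaluate(updated, task) - evaluate(model, task)
        reinforce(model, self_edit, reward)
        return updated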


Check out the article. He’s not crazy. It comes down to clear definitions. We can talk about AGI for ages, but without a clear meaning, it’s just opinion.


For a long time the Turing test was the bar for AGI.

Then AI blew past that, and what I think is honestly happening now is that we don't really have the grip on "what is intelligence" that we thought we had. Our sample size for intelligence is essentially 1, so it might take a while to get a grip again.


The commercial models are not designed to win the imitation game (that is what Alan Turing called it). In fact they are very likely to lose every time.


The current models don't really pass the Turing test. They pass some weird variations on it.


That's quite a persuasive argument.

One thing they acknowledge but gloss over is the autonomy of current systems. When given more open-ended, long-term tasks, LLMs seem to get stuck at some point, grow more and more confused, and stop making progress.

This last problem may be solved soon, or maybe there's something more fundamental missing that will take decades to solve. Who knows?

But it does seem like the main barrier to declaring current models "general" intelligence.


> If you described all the current capabilities of AI to 100 experts 10 years ago, they’d likely agree that the capabilities constitute AGI.

I think that we're moving the goalposts, but we're moving them for a good reason: we're getting better at understanding the strengths and the weaknesses of the technology, and they're nothing like what we'd have guessed a decade ago.

All of our AI fiction envisioned inventing intelligence from first principles and ending up with systems that are infallible, infinitely resourceful, and capable of self-improvement - but fundamentally inhuman in how they think. Not subject to the same emotions and drives, struggling to see things our way.

Instead, we ended up with tools that basically mimic human reasoning, biases, and feelings with near-perfect fidelity. And they have read and approximately memorized every piece of knowledge we've ever created, but have no clear "knowledge takeoff path" past that point. So we have basement-dwelling turbo-nerds instead of Terminators.

This makes AGI a somewhat meaningless term. AGI in the sense that it can best most humans on knowledge tests? We already have that. AGI in the sense that you can let it loose and have it come up with meaningful things to do in its "life"? That you can give it arms and legs and watch it thrive? That's probably not coming any time soon.


"If you described"

Yes, and if they used it for a while, they'd realize it is neither general nor intelligent. On paper it sounds great, though.



