
They haven't. Why are you stating lies as facts?



Artificial: man-made.

General: able to solve problem instances drawn from arbitrary domains.

Intelligence: definitions vary, but the application of existing knowledge to the solution of posed problems works here.

Artificial. General. Intelligence. AGI.

In contrast to narrow intelligences like AlphaGo, Deep Blue, or air-traffic-control expert systems, ChatGPT is a general intelligence. It is an AGI.

What you are talking about is, I assume, a superintelligence (ASI). Bostrom is careful to distinguish these in his writing. Bostrom, Yudkowsky, et al. made some implicit assumptions that led them to believe any AGI would very quickly lead to ASI. This is why, for example, Yudkowsky had a very public meltdown two years ago, declaring that the sky is falling:

https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-annou...

(Ignore the date. This was released on April 1st to give plausible deniability. It has since become clear this really represents his view.)

The sky is not falling. ChatGPT is artificial general intelligence, but it is not superintelligence. The theoretical model used by Bostrom et al to model AGI behavior does not match reality.

Your assumptions about AGI and superintelligence are almost certainly downstream from Bostrom and Yudkowsky. The model upon which those predictions were made has been falsified. I would recommend reconsidering your views and adjusting your expectations accordingly.


I appreciate these definitions and distinctions. Thanks for sharing. You've helped me understand that I need a better, more precise vocabulary for this topic. On an abstract level I would think of AGI as "the brain that's capable of understanding", but then I have no way to truly define "understanding" in the context of something artificial. Maybe ChatGPT "understands" well enough, if the output is the same.


It does understand to a certain degree, for sure. Sometimes it understands impressively well; sometimes it fails in surprisingly basic ways. Ultimately its understanding is different from a human's.

The issue with the “once OpenAI achieves AGI [sic], everything changes” narrative is that it is based on models with infinite integrals in them. If you assume infinite compute capability, anything becomes easy. In reality, as we've seen, applying GPT-like intelligence to achieve superhuman capabilities, where it is possible at all, is quite difficult, field-specific, and time-intensive.
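
To make the “infinite integrals” point concrete, here is a minimal sketch of the kind of takeoff model I mean (my own notation and parameter names, not anything taken from Bostrom's or Yudkowsky's actual writing): treat capability growth as dI/dt = k * I^alpha. For alpha > 1, the integral of dI / I^alpha converges, so I(t) diverges at a finite time, a "singularity"; for alpha <= 1 you only get exponential or slower growth and no explosion.

    # Hypothetical illustration only, not anyone's published model.
    # dI/dt = k * I**alpha: recursive self-improvement with
    # superlinear returns (alpha > 1) blows up in finite time.

    def blowup_time(I0: float, k: float, alpha: float) -> float | None:
        """Analytic blowup time for dI/dt = k * I**alpha.

        Separation of variables gives, for alpha > 1:
            t* = I0**(1 - alpha) / (k * (alpha - 1))
        Returns None when alpha <= 1 (no finite-time singularity).
        """
        if alpha <= 1:
            return None
        return I0 ** (1 - alpha) / (k * (alpha - 1))

    print(blowup_time(I0=1.0, k=0.1, alpha=2.0))  # 10.0 -- finite-time blowup
    print(blowup_time(I0=1.0, k=0.1, alpha=1.0))  # None -- plain exponential

Whether alpha really exceeds 1 is exactly the assumption that, as far as I can tell, has not held up in practice.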



