I think the point is that you are putting human thought on a pinnacle, which is not actually proven or even commonly accepted. And if you have proof, then you should write a paper and put to rest centuries of questions.
The parents example goes like this:
Some other 'alien' species could find and examine humans and make the same leap, "these things appear to be forming rudimentary structures like the ants, and forming basic social structures. Hmmmm, my dear Watson, I wonder if they have any metacognition. Do you think they have qualitative thoughts?"
-> "But we, humans, qualitatively think in a way that we absolutely know for sure GPT does not."
Everyone is using GPT as the goalpost. But it is not.
DeepMind just released a paper where an AI used conceptual reasoning to solve geometry problems.
DeepMind's AlphaGo 'imagined' new moves that no human ever would.
It is hubris to think that someone won't put AI on a feedback loop, not unlike our own default mode network, and allow continual learning and adapting.
And at that point, any distinction between carbon and silicon, for what is happening internally, will be on shaky ground.
EDIT: Maybe not you. Could have been a different thread that used animals. So not you. But it's part of the point, so I don't want to edit it out.
No, I did say that animal intelligence has some characteristics. I think I made it clear that metacognition is more optional but I may have written less clearly.
However, I was responding to assertions made specifically about GPT, not some other or future AI.
Other AI systems may be different (I am enthusiastic about Steve Grand’s approach for example) but GPT is not “thinking” by any useful stretch of the word.
I am not placing human intelligence at the pinnacle. I am rebutting the idea that GPT is usefully thinking. It alarms me how many people are willing to strongly suggest it has magical, unknown emergent abilities, when all it can really do is surface embedded logic in the corpus of the written word.
ah.
Ok, if it was specific to GPT. Got it. I was extrapolating the arguments to all AI.
I think the reason people are freaking out about GPT/LLMs is that they deal with 'natural language' specifically.
However it is doing it, and setting aside all the arguments about consciousness, it is touching on something that most humans believe is specific to humans.
There is something about having a computer 'understand' and 'speak back' in natural language that triggers humans on some level. It's something that wasn't supposed to happen, because it is 'innately' human. People said "it will take 100 years", and now it is happening.
You included all animals.