I've been an atheist since I was old enough to form any thoughts about existence. I don't believe in man's uniqueness, or the concept of a soul. But it irks me when people talk about what we currently call AI as something that thinks or has an intellect.
LLMs do not think. Our poor human brains are just fooled by the accuracy with which they predict words. Maybe one day we'll invent an AI that does think, but LLMs are not it.
I understand that you're responding within the thread, but to take this back to the original point, which is about human dignity, justice and labor:
LLMs do not need to “think” for the point to be valid. Chess engines do not “think” and do not have any conception of what they're really doing, but they still win at chess every time. The worry is that AI will put an end to human dignity, not that it “thinks too much”.
Is this because our concept of human dignity/moral worth is predicated on what we “do”? Perhaps we can move away from the idea that our identity is tied to our works, and just have moral worth rooted in our being. Maybe having a human experience is enough justification for dignity?
Then it doesn’t matter if LLMs are better than us, at least not unless they can be shown to have equivalent depth of experience.
I completely agree with you — we should definitely decouple our conception of worth from what we “do”. And if the only issue with AI is that it gets good at doing what we want it to do, then I also agree with you.
But if AIs become superintelligent and plot an overthrow of humanity, it won't matter what our conception of identity or dignity is; the AI will still kill us. The AI doesn't ascribe dignity or worth to humans. It only pursues goals.
No, and I'm not arguing with the point that they pose a risk to human dignity, because I wholeheartedly agree with that. I'm taking issue with the idea that LLMs are an intellect, or intelligent. Your example of chess engines is absolutely on point: LLMs don't think any more than chess engines do, but their chess game is language.
Words can have multiple meanings. "Feet" can be the things at the ends of legs, or a unit of measurement.
LLMs process data in a more intelligent manner than previous systems. The solution, whatever it is, is presented in a thoughtful manner to the system operator. "Thinking" seems like a pretty convenient term for the process. It doesn't have to imply sentience or sapience.
It's the same with "Learning" or "Training": these models aren't actually learning, and they certainly aren't being trained. But these words are convenient shorthand that conveys a similar process.
The battle for language prescriptivists isn't just to ban a certain use of a word; it's to find a convenient alternative. I don't see any here.
Although true, consciousness itself doesn't seem to be unique to humans. The other mammals certainly appear to experience qualia just as much as we do, and at least some birds act suspiciously like they are conscious too.
Why should it be unique to proteins? Bags of water and proteins famously behave in extremely statistically predictable ways at the cellular level ... and are conscious. Animals and humans are nothing but a lot of those bags. OK ... a very large number. I also have an MSc in statistics, which tells me that a combination of a large number of statistically predictable variables is itself a statistically predictable variable.
So why couldn't a large collection of statistical variables be conscious? Mathematically, it's the same thing.
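To put that in symbols: the result I'm leaning on is essentially the central limit theorem. This is a minimal sketch assuming the simplest i.i.d. case; cells and neurons are of course neither independent nor identically distributed, so take it as the flavor of the argument rather than a model of biology:

```latex
% Central limit theorem, i.i.d. case: the normalized sum of many
% individually predictable variables is itself a predictable variable.
% Let X_1, ..., X_n be i.i.d. with mean \mu and variance \sigma^2 < \infty.
\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left( X_i - \mu \right)
  \;\xrightarrow{d}\; \mathcal{N}\!\left( 0, \sigma^2 \right)
  \quad \text{as } n \to \infty
```

The aggregate doesn't become mysterious as n grows; its fluctuations are fully characterized. Nothing in the math privileges proteins over silicon.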
Again, just to be clear: I'm not saying that AI can never or will never be conscious, or think. I'm saying that LLMs (which are currently being mislabeled as AI) are not conscious intellects.
They are, at their core, predictive text engines.
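To show what I mean in the most stripped-down form possible, here's a toy sketch. Everything in it is my own illustrative choice (the corpus, the bigram counts, the greedy decoding); a real LLM uses a learned neural network over subword tokens, but the outer loop has the same shape:

```python
# A toy "predictive text engine": count which word follows which in a
# corpus, then repeatedly emit the most probable next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Bigram counts: next_counts[prev][nxt] = how often nxt followed prev.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, length=5):
    """Greedily append the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        counts = next_counts.get(out[-1])
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat"
```

There is no understanding anywhere in that loop, just frequencies. Scale the lookup table up to billions of learned weights and you get something that fools our poor human brains far more effectively.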
And this is just my opinion; yours may of course differ. Maybe you believe humans are just predictive text engines.
Absolutely. The ease and indeed hunger with which businesses went for this sort of language is evidence of how far removed they have become from reality.