> And AI is a machine – is not going to come alive any more than your toaster will.
There have been claims that AIs are conscious. For example, Ilya Sutskever has suggested that LLMs may be slightly conscious.
It is possible that machine consciousness could be quite different from human consciousness. This idea aligns with the philosophy of Nonduality, which proposes that pure consciousness is the fundamental substratum of the universe. Our minds are able to reflect this pure consciousness, albeit in a limited way. If our human minds can reflect consciousness, perhaps artificial neural networks can as well, but in their own manner.
It also ignores the fact that there’s no need for it to become “alive” or “conscious” to be a threat in the way he describes. It just needs to be an agent with a mis-specified, poorly specified, or maliciously specified goal. And there are already numerous examples of those. The only real debate is around capability, and here he makes multiple references to “infinitely” capable. So the whole argument seems like a wildly disingenuous strawman, consistent with his attempt to classify all those raising concerns as naive (or corrupt) cultists - not exactly the vibe from the likes of Geoff Hinton / Stuart Russell / Max Tegmark, all of whom generally act with far more integrity (it seems) than Marc Andreessen shows here.
Ironically, I think the whole article is motivated by the very thing he claims to condemn - namely, he’s a bootlegger with a vested interest in unrestricted AI development.
Part 2 is much more interesting. Part 1 was very very weak.
It should be noted that Ilya Sutskever, judged purely as a philosopher of mind and neurologist, is a great machine learning engineer. I don't feel the need to pay any attention to what he says about LLMs being conscious, nor is there any particular reason to assume they are (even if we posit that LLMs are "intelligent", which I think is a category error, why should intelligence be related to consciousness?).
'alive' and 'conscious' are very far apart semantically though, and it's quite possible that something can be 'not alive' by our normal definition of alive and yet that it can attain consciousness.
But switching it off isn't the same as murdering it; you can switch it back on again just as easily, and you can do all of the CRUD operations on it, as well as copying, versioning and checkpointing.
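To make that point concrete, here's a minimal sketch in Python, using a plain dict as a hypothetical stand-in for a system's full state (weights, memory, etc.) - "switching off" is just serialization, and "switching back on" restores the state exactly:

```python
import copy
import pickle

# Hypothetical stand-in for an AI system's complete runtime state.
state = {"weights": [0.1, 0.2, 0.3], "step": 42}

# "Switching off": serialize the state to a checkpoint.
checkpoint = pickle.dumps(state)

# "Switching back on": deserialize - the restored state is identical.
restored = pickle.loads(checkpoint)
assert restored == state

# Copying and versioning are equally trivial: fork the state and
# mutate the copy without touching the original.
v2 = copy.deepcopy(restored)
v2["step"] += 1
assert state["step"] == 42  # original is untouched
```

Nothing here is specific to any real framework; the point is only that a paused, copied, or rolled-back process has no analogue in biological death.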