> LLMs just complete your prompt in a way that matches their training data. They do not have a plan; they do not have thoughts of their own.
It's quite reasonable to think that LLMs might plan and have thoughts of their own. No one understands consciousness or the emergent behavior of these models well enough to say otherwise with much certainty.
It is the "Chinese room" fallacy to assume it's not possible; there's a lot of philosophical debate about this going back 40 years. If you want to show that humans can think while LLMs do not, then the argument you make to show that LLMs do not think must not apply equally well to the neuron activations in human brains. To me, that seems difficult to accomplish.
LLMs are the Chinese Room. They would generate identical output for the same input text every time were it not for artificially introduced randomness (the sampling "temperature").
Of course, some would argue the Chinese Room is conscious.
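That determinism is easy to see in how decoding works. Here is a minimal sketch of greedy versus temperature sampling, with a made-up logits vector standing in for a real model's output at one step (nothing below is taken from an actual LLM API):

```python
import numpy as np

def sample_next_token(logits, temperature=0.0, rng=None):
    """Pick the next token index from a vector of logits.

    At temperature 0 this is pure argmax: the same input always yields
    the same token, so decoding is fully deterministic. The apparent
    randomness of LLM output is injected here, by rescaling the logits
    and sampling from the resulting distribution.
    """
    if temperature == 0.0:
        return int(np.argmax(logits))  # deterministic path
    scaled = (logits - logits.max()) / temperature  # max-shift for stability
    probs = np.exp(scaled)
    probs /= probs.sum()  # softmax over the rescaled logits
    rng = rng or np.random.default_rng()
    return int(rng.choice(len(logits), p=probs))

# Toy logits for a single decoding step.
logits = np.array([1.0, 3.5, 0.2, 2.9])

print(sample_next_token(logits))                   # always token 1
print(sample_next_token(logits, temperature=0.8))  # varies run to run
```

(Real serving stacks can add incidental nondeterminism, e.g. from floating-point reduction order on GPUs, but the sampler is the only place randomness is introduced on purpose.)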
If you somehow managed to perfectly simulate a human being, they would also act deterministically in response to identical initial conditions (modulo quantum effects, which are insignificant at the neural scale and also apply just as well to transistors).
It's not entirely implausible that neurons could harness quantum effects: not across networks of neurons as a whole, but via some sort of microstructures or chemical processes [0]. It seems likely that birds harness quantum effects to measure magnetic fields [1].
If "identical initial conditions" means precisely, mathematically identical to infinite precision, then yes.
Meanwhile, in the real world we live in, it's essentially physically impossible to stage two separate systems to be identical to such a degree, and it's an important result that some systems, including some very simple ones, will have quite different outcomes without that impossibly precise degree of identical initial conditions.
See: Lorenz's Butterfly and Smale's Horseshoe Map.
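To make the sensitivity concrete, here is a toy integration of the Lorenz system (forward Euler with the classic parameters; a sketch, not a careful numerical treatment). Two starting points one part in a billion apart end up on entirely different trajectories within a few tens of simulated seconds:

```python
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one forward-Euler step."""
    x, y, z = state
    return state + dt * np.array([
        sigma * (y - x),
        x * (rho - z) - y,
        x * y - beta * z,
    ])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # one part in a billion apart

for step in range(40001):
    if step % 10000 == 0:
        print(f"t = {step / 1000:4.1f}   separation = {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The separation grows exponentially until it saturates at the size of the attractor itself; past that point the two runs share no predictive relationship at all.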
Of course. But that's not relevant to the point I was responding to, which suggested that LLMs may lack consciousness because they're deterministic. Chaos wasn't the argument (though it would be a much more interesting one; cf. the "edge of chaos" literature).
Sure. All the same, it's always worth emphasizing that neither the tumbling of a tennis racket under gravity nor the axis-flipping of a spinning wingnut in an orbiting satellite is deterministic in any predictive sense, no matter how well the initial conditions are measured; they are only "deterministic" given out-of-band, god-like powers.
Clearly you're aware of this; however, I find that most people who casually invoke determinism are not.
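The instability behind both of those examples is the intermediate-axis ("tennis racket") theorem, and it's easy to reproduce numerically. A sketch with made-up moments of inertia, integrating the torque-free Euler equations with fourth-order Runge-Kutta: two bodies spun about the unstable middle axis, with initial perturbations one part in a trillion apart, eventually flip out of phase with each other:

```python
import numpy as np

# Principal moments of inertia with I1 < I2 < I3 (arbitrary toy values);
# rotation about the middle axis (I2) is the unstable one.
I = np.array([1.0, 2.0, 3.0])

def deriv(w):
    """Torque-free Euler equations for the body-frame angular velocity."""
    return np.array([
        (I[1] - I[2]) * w[1] * w[2] / I[0],
        (I[2] - I[0]) * w[2] * w[0] / I[1],
        (I[0] - I[1]) * w[0] * w[1] / I[2],
    ])

def rk4_step(w, dt=0.001):
    """Classic fourth-order Runge-Kutta step."""
    k1 = deriv(w)
    k2 = deriv(w + 0.5 * dt * k1)
    k3 = deriv(w + 0.5 * dt * k2)
    k4 = deriv(w + dt * k3)
    return w + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Spin almost exactly about the intermediate axis; the two runs differ
# by one part in a trillion in the initial off-axis perturbation.
a = np.array([1e-6, 1.0, 0.0])
b = np.array([1e-6 + 1e-12, 1.0, 0.0])

for step in range(90001):
    if step % 15000 == 0:
        print(f"t = {step / 1000:4.1f}   w2(a) = {a[1]:+.3f}   w2(b) = {b[1]:+.3f}")
    a, b = rk4_step(a), rk4_step(b)
```

The sign of w2 tells you which way the body is spinning about its middle axis; once the accumulated difference reaches order one, the two runs flip at different times, so no finite measurement precision would have predicted the flip schedule.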
I am arguing (or rather, presenting without argument) that the Chinese room may be conscious, hence my calling it a fallacy above. Not that it _is_ conscious, to be clear, but that the Chinese room argument has done nothing to show that it is not. Hofstadter makes this argument well in GEB and elsewhere.