
To detect a presence, a real brain takes in sensory input, compares it to expectations, stays calm or registers surprise, and from time to time issues predictions to guide the organism.

To detect an absence, the brain cannot rely on sensory input, by definition. To be surprised that sensory evidence is _not_ there requires a model of the world strong enough to register surprise when an expected input fails to arrive, without any sensory prompt.

It seems to me detecting an absence is a strictly higher-order neurological task than processing sensory input.

If LLMs can't do this strictly higher-order neurological task, is that not a capability currently unique to living things?

Thinking is still currently unique to living things, so you don't need to resort to what you describe to find the human brain's uniqueness.

As for what you describe, it has to do with memory. Memory is storing and playing back sensory input in the absence of that sensory input. So your brain plays back some past sensory input and checks it against current sensory input.

E.g. you left the pen on the table. When you come back, the pen isn't there. Your brain compares the stored memory of seeing the pen on the table with what you see now.
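The comparison described above can be sketched as a diff between a stored expectation and the current input. This is only a toy illustration of the idea; the function and data names are invented, not drawn from any real system:

```python
def detect_absences(remembered, observed):
    """Return items that memory says should be present but aren't.

    Surprise here needs no sensory prompt: it comes entirely from
    the stored model (`remembered`) disagreeing with the input.
    """
    return remembered - observed

# Stored past sensory input vs. what the senses report now.
memory = {"pen on table", "lamp on desk"}
current = {"lamp on desk"}

surprises = detect_absences(memory, current)
print(surprises)  # → {'pen on table'}
```

The point of the sketch is that the absent pen only registers because a memory of it exists to compare against; with an empty `remembered` set, nothing could ever be missed.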


LLMs might not be very consistent overall in their learned architecture. Some paths may lead to memorized info, some paths may lead to advanced pattern matching.

> from time to time

I know less-than-zero about the subject, but I'd imagine the temporal aspect alone is a problem. Aren't these agents reasoning from a fixed, frozen version of "reality" rather than adjusting in real time?



