The interpolation isn't really based on statistically occurring text. It's based on statistically occurring concepts. A single token can have many meanings depending on context and many tokens can represent a concept depending on context. A (good) LLM is capturing that.
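A minimal sketch of the "one token, many meanings" point, assuming PyTorch and the Hugging Face transformers package (gpt2 and the two sentences are just illustrative choices): the contextual hidden state for the same token "bank" diverges depending on the sentence around it.

    # Hypothetical sketch: requires torch and transformers; gpt2 is an
    # arbitrary example model. The same surface token gets different
    # hidden states in different contexts.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    def hidden_for(sentence, word):
        ids = tok(sentence, return_tensors="pt")["input_ids"]
        word_id = tok.encode(" " + word)[0]      # " bank" is one GPT-2 token
        pos = (ids[0] == word_id).nonzero()[0]   # position of the target token
        return model(input_ids=ids).last_hidden_state[0, pos]

    a = hidden_for("I deposited the cash at the bank", "bank")
    b = hidden_for("We fished from the river bank", "bank")
    print(torch.cosine_similarity(a, b).item())  # well below 1.0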



Neither just text nor just concepts, but text-concepts: LLMs can only manipulate concepts insofar as they can be conveyed via text. But I think wordlessly, in pure concepts and sense-images, and serialize my thoughts to text. That I have thoughts I am incapable of verbalizing is what makes me different from an LLM and, I would argue, actually capable of conceptual synthesis. I have been told some people think “in words”, though.


Nope, you could shove in an embedding that didn't represent an existing token. It would work just fine.

(If not obvious: you'd shove it in right after the embedding layer, bypassing the token lookup entirely.)
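A minimal sketch of that, assuming PyTorch and Hugging Face transformers (the inputs_embeds argument is the escape hatch; the model name and the made-up midpoint vector are purely illustrative):

    # Hypothetical sketch: feed the transformer a vector that matches no row
    # of the embedding table, via the inputs_embeds argument.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of", return_tensors="pt")["input_ids"]
    embeds = model.get_input_embeddings()(ids)            # (1, seq, 768)

    # Append an embedding no token maps to: here, the midpoint of two real
    # token embeddings, but any 768-dim vector would be accepted.
    table = model.get_input_embeddings().weight
    fake = (table[tok.encode(" France")[0]] + table[tok.encode(" Germany")[0]]) / 2
    embeds = torch.cat([embeds, fake[None, None, :]], dim=1)

    out = model(inputs_embeds=embeds)                     # runs just fine
    print(tok.decode(out.logits[0, -1].argmax().item()))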



