I don't really mind analogies about LLMs "assuming" things or being "confused." I think there really is _some_ value to such analogies.
However, I gotta take issue with reaching for those analogies when "it's trained for text completion and the punchline to this riddle surely appears in its training data a lot" is a perfectly good explanation. I'd also add that the answer is well-aligned with RLHF values. I wouldn't go for an explanation that requires squishy analogies when what we actually know about these systems seems completely adequate.