To generalise this idea: if we look at a thousand points that more or less fill a triangle, we'll instantly recognize the shape. IMO, this simple example reveals what intelligence is really about. We spot the triangle because so much complexity - a thousand points - fits into a simple, low-entropy geometric shape. What we call IQ is the ceiling of complexity of patterns that we can notice. For example, the thousand dots may in fact represent corners of a 10-dimensional cube, rotated slightly - an easy pattern to see for a 10-d mind.
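
To make that concrete, both point clouds are easy to generate. A minimal sketch, assuming NumPy; the triangle vertices, the rotation angle, and the function names are illustrative choices, not anything from the article:

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(0)

    def triangle_points(n=1000):
        # Uniform samples inside a triangle via barycentric coordinates.
        a, b, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.3, 0.9])
        u, v = rng.random(n), rng.random(n)
        outside = u + v > 1            # reflect samples that land outside
        u[outside], v[outside] = 1 - u[outside], 1 - v[outside]
        return a + np.outer(u, b - a) + np.outer(v, c - a)

    def rotated_hypercube_corners(dim=10, angle=0.1):
        # All 2^dim corners of a unit hypercube (2^10 = 1024, roughly a
        # thousand), rotated slightly in the plane of the first two axes.
        corners = np.array(list(product([0, 1], repeat=dim)), dtype=float)
        rot = np.eye(dim)
        rot[0, 0] = rot[1, 1] = np.cos(angle)
        rot[0, 1], rot[1, 0] = -np.sin(angle), np.sin(angle)
        return corners @ rot.T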

Cool. Since ChatGPT 4o is actually really good at this particular shape identification task, what, if anything, do you conclude about its intelligence?

Recognizing triangles isn't that impressive. The real question is the ceiling of complexity of the patterns it can identify in data. Give it a list of randomly generated xyz coords that fall on a geometric shape, or a list of points sampling the Earth's trajectory around the Sun. Will it tell you that it's an ellipse? Will it derive Newton's second law? Will it notice the deviation from the ellipse and find the rule explaining it?
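
That kind of test is straightforward to set up: sample noisy points from an ellipse, hand over the raw coordinates, and compare the model's answer against a direct conic fit. A rough sketch, assuming NumPy; the eccentricity and noise level are illustrative, not real ephemeris data:

    import numpy as np

    rng = np.random.default_rng(1)

    # Noisy samples from an ellipse with Earth-like eccentricity (~0.0167).
    ecc, semi_major = 0.0167, 1.0
    semi_minor = semi_major * np.sqrt(1 - ecc**2)
    theta = rng.uniform(0, 2 * np.pi, 200)
    x = semi_major * np.cos(theta) + rng.normal(0, 1e-4, theta.size)
    y = semi_minor * np.sin(theta) + rng.normal(0, 1e-4, theta.size)

    # Fit a general conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0: the null
    # vector of the design matrix gives the coefficients, and an ellipse
    # shows up as a negative discriminant B^2 - 4AC.
    M = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    A, B, C, D, E, F = np.linalg.svd(M)[2][-1]
    print("B^2 - 4AC =", B*B - 4*A*C)   # negative => the points lie on an ellipse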

The entire point here is that LLMs and image recognition software are not managing this task, so, not really good at this particular shape identification task.

No, the posted article is not about the sort of shape identification task GP describes, or indeed any image recognition task: it's a paper about removed context in language.

Fwiw, I did test GP's task on ChatGPT 4o directly before writing my comment. It is as good at it as any human.
