
Maybe they used an LLM to explain it. Gemini in particular is obsessed with these utterly useless analogies for everything, when I would prefer something closer to Wikipedia with more context. (Needless to say, I currently don't find LLMs useful for learning about things. That's a shame because that use case feels promising.)


I saw this ChatGPT-created analogy on a JS subreddit the other day:

> Imagine you have a robot in a room, and this robot can perform actions like turning on a light, opening a door, or picking up objects. Now, if you want to tell the robot to do something, you usually say something like, "Robot, pick up the pen!" or "Robot, open the door."

> In JavaScript, ‘this’ is like the "robot" in the room

Terrible.
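
The analogy also hides the one thing that actually matters: `this` isn't a fixed "robot" in the room, it's bound at the call site. A minimal sketch (the object and property names here are made up for illustration):

    const robot = {
      id: "R2",
      report() {
        // `this` is determined by how the function is *called*,
        // not by where it was defined: here, the object before the dot.
        return this.id;
      },
    };

    robot.report();          // "R2" (called on robot, so this === robot)

    const detached = robot.report;
    detached();              // undefined (or a TypeError in strict mode):
                             // called bare, `this` is no longer robot

Nothing about a robot in a room prepares you for the detached call, which is exactly where beginners get burned.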


LLMs are like an unlimited, poorly written encyclopedia. Often inaccurate or not entirely helpful, but they'll give you enough of an idea to find better sources. Sort of closing the "I don't know what I don't know" gap.


In this regard, they have been extraordinarily fruitful for my research and studies.



