
But you can expect to learn in both cases, just like you often learn from your own failures. Learning doesn't require that you're given the right answer, only that it's possible for you to obtain the right answer.


Hopefully you're not mixing chemicals, diagnosing a personal health issue, or resolving a legal dispute when you do that learning!


We’ve been down this road before. Wikipedia was going to be the knowledge apocalypse: how could you trust what you read, when anyone could edit it, if you didn’t already know the truth?

And we learned the limits. Broadly verifiable, non-controversial articles are reasonably reliable (or at least no worse than classic encyclopedias). Highly technical or controversial articles may contain useful information, but you should definitely follow up with the source material. And you probably shouldn’t substitute Wikipedia for seeing a doctor either.

We’ll learn the same boundaries with AI. It will be fine to use for learning in some contexts and awful for learning in others. Maybe we should spend some energy on teaching people how to identify those contexts instead of trying to put the genie back in the bottle.


If you can't discern the difference between a LAMP stack returning UGC and an RNG-seeded matmul across the same UGC fine-tuned by sycophants, then I think we're just going to end up disagreeing.
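The distinction can be made concrete with a toy sketch. Everything here is illustrative: the `ugc_store` dictionary stands in for a database-backed site, and `sample_answer` is a crude stand-in for an LLM's sampling step, not a real model.

```python
import random

# A "LAMP stack returning UGC": a deterministic lookup.
# The same query always returns the same stored user content, verbatim.
ugc_store = {"best editor": "vim, obviously"}

def lookup(query):
    return ugc_store.get(query, "no results")

# An "RNG-seeded matmul" caricature: a seeded draw over token
# probabilities. The same prompt can yield different text under
# different seeds; nothing is retrieved verbatim.
vocab = ["vim", "emacs", "nano", "obviously"]
probs = [0.4, 0.3, 0.15, 0.15]

def sample_answer(prompt, seed, length=3):
    rng = random.Random(seed)  # prompt is ignored in this toy
    return " ".join(rng.choices(vocab, weights=probs, k=length))

print(lookup("best editor"))            # always the same string
print(sample_answer("best editor", 1))  # depends on the seed
print(sample_answer("best editor", 2))  # may differ from seed 1
```

The point of the caricature: one system's output is a stored artifact you can trace to an author; the other's is a fresh sample whose provenance is the training distribution itself.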



