
What about the engagement-maximizing algorithms of the past decade-plus, which have seemingly helped fracture mature democracies by increasing extremism and polarization? Seems like we already have examples of companies using AI (or, more specifically, machine learning) to maximize some arbitrary goal without consideration for the real human harm created as a byproduct.
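To make that failure mode concrete, here's a toy sketch, with entirely made-up names and numbers (not any real ranking system): an optimizer doing pure gradient ascent on a single engagement metric, where the thing driving engagement also drives a harm term the objective never sees.

    # Toy model, not any real system: a feed parameter is tuned by
    # gradient ascent on engagement alone. Outrage-y content boosts
    # engagement, but also a harm term the objective never measures.

    def engagement(w: float) -> float:
        # Assumed toy relationship: more outrage -> more clicks,
        # with diminishing returns.
        return 2.0 * w - 0.1 * w ** 2

    def harm(w: float) -> float:
        # Externality: polarization grows superlinearly with outrage,
        # but it never appears in the objective, so it's never penalized.
        return w ** 2

    w, lr, eps = 0.0, 0.05, 1e-4
    for _ in range(500):
        # Finite-difference gradient of the ONLY metric being optimized.
        grad = (engagement(w + eps) - engagement(w - eps)) / (2 * eps)
        w += lr * grad

    print(f"learned outrage weight: {w:.2f}")   # converges near 10
    print(f"engagement: {engagement(w):.2f}")   # ~10, maximized
    print(f"unmeasured harm: {harm(w):.2f}")    # ~100, ignored

Nothing in that loop is malicious or even "dumb"; the harm is invisible to it by construction, which is exactly the proxy-metric problem.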



OK, that's a more interesting example to me, because unlike "make as many paperclips as possible", those are algorithms optimizing for actual revenue and profit impact. But it shares the "in the long run, this has a lot of externalities" aspect.

You could turn this into a "this is why superintelligence will be good" thought experiment, though! Maybe "the superintelligence realizes that optimizing for these short-term metrics will harm the company's position 30 years from now in a way that isn't worth it" - the superintelligence is smart enough to be longtermist ;).

I realize that the greater point is supposed to be more like "this agent will be so different that we can't anticipate what it will be weighing or not, and whether its long-term view would align with ours", but the paperclip maximizer example just requires it to be dumb in a way that I don't find consistent with the concern. And I find myself similarly unconvinced at many other points along the chain of reasoning that leads to the conclusion that this should be a huge, immediate worry or priority for us, instead of focusing on human incentives/systems/goals.



