
If you accept the implied premise that there are irresponsible deployments of AI out there, the alternative explanation is that they did consider the ramifications and simply don't care. That's even worse. Calling them ignorant is actually giving them the benefit of the doubt.



Or the researchers don't think existential threats are realistic, and consider paperclip-maximizing thought experiments silly. Maybe they're wrong, but maybe not. It's easy to imagine AI takeover scenarios by granting the AI unlimited powers; it's hard to show the actual path to such abilities.

It's also hard to understand why an AI smart enough to paperclip the world wouldn't also be smart enough to realize the futility in doing so. So while alignment remains an issue, the existential alignment threats are too ill-specified. AGIs would understand we don't want to paperclip the world.

Fun game though.


I agree completely with your first paragraph, and disagree completely with your second.

"Futility" is subjective, and the whole purpose of the thought experiment is to point out that our predication of "futility" or really any other purely mental construct does not become automatically inherited by a mind we create. These imaginary arbitrarily powerful AIs would definitely be able to model a human being describing something as futile. Whether or not it persues that objective has nothing to do with it understanding what we do or don't want.


> It's also hard to understand why an AI smart enough to paperclip the world wouldn't also be smart enough to realize the futility in doing so.

Terminal goals can't be futile, since they do not serve to achieve other (instrumental) goals. Compare: Humans like to have protected sex, watch movies, eat ice cream, even though these activities might be called "futile" or "useless" (by someone who doesn't have those goals) as they don't serve any further purpose. But criticizing terminal goals for not being instrumentally useful is a category error. For a paperclipper, us having sex would seem just as futile as creating paperclips seems to us. Increased intelligence won't let you abandon any of your terminal goals, since they do not depend on your intelligence, unlike instrumental goals.


It's not like you want to eat ice cream constantly, to the point of turning everything into ice cream.

Of course, the premise is that the AI has been instructed to make paperclips. They should have hired a better prompt engineer, one capable of actually specifying the goals more clearly. If an AI ever becomes the end of humans, I don't think it will have such simplistic goals. Cybermen, though, are inevitable.


Yes, they should just write prompts without bugs. Can't be that much harder than writing software without bugs.


> AGIs would understand we don't want to paperclip the world.

Even if they did, what if they aren't smart enough to resist eloquent humans convincing them it's for the greater good? True AGIs will need a moral code to match their intelligence, and someone will have to decide what's good and bad to build that moral code.


Then they won't be smart enough to paperclip the world. No human organization can do that.


I've seen people calculate, for fun, how much human blood would be needed to make an iron sword. AGIs won't need the capability to transmute all matter into iron, just enough capability to become significantly dangerous.
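
For a sense of scale, here's a back-of-the-envelope version of that calculation. The blood volume and iron figures below are rough assumptions chosen for illustration, not precise values:

    # Rough estimate: how many people's worth of blood iron to forge one sword?
    # All constants are approximate assumptions, for illustration only.
    blood_volume_l = 5.0      # typical adult blood volume, litres (assumed)
    iron_per_litre_g = 0.5    # iron carried in haemoglobin, grams per litre (assumed)
    sword_iron_g = 1000.0     # iron in a modest sword blade, grams (assumed)

    iron_per_person_g = blood_volume_l * iron_per_litre_g  # roughly 2.5 g per person
    people_needed = sword_iron_g / iron_per_person_g       # roughly 400 people

    print(f"~{iron_per_person_g:.1f} g of iron per person, ~{people_needed:.0f} people per sword")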


That would be not accepting the premise that deployments are irresponsible. I guess there could be a situation where every researcher thinks everyone else's deployment is irresponsible and theirs is fine, but I don't think that's what you're saying.


Another explanation is that there are those who considered and thoughtfully weighed the ramifications, but came to a different conclusion. It is unfair to assume a decision process was agnostic to harm or plain ignorant.

For example, perhaps the lesser-evil argument played a role in the decision process: would a world where deep fakes are ubiquitous and well known by the public be better than a world where deep fakes have a potent impact because they are generated only rarely and strategically by a handful of (nefarious) state sponsors?


there's also the issue that most of the AI catastrophizing is a pretty clear slippery-slope argument:

if we build ai AND THEN we give it a stupid goal to optimize AND THEN we give it unlimited control over its environment, something bad will happen.

the conclusion is always "building AI is wrong" and not "giving AI unrestricted control of critical systems is wrong"


The massive flaw in your argument is your failure to define "we".

Replace the word "we" with "a psychotic group of terrorists" in your post and see how it reads.


If you’re talking about some group of evildoers that deploy AI in a critical system to do evil… the issue is why they have control of the critical system in the first place. Surely they could jump straight to their evil plot without the AI at all.


Your question is equivalent to "if you have access to the chessboard anyway, why use Stockfish, just play the moves yourself."


Or "board of directors beholden to share-holders".


I completely agree that's a valid argument. I just think it is rational for someone to come to a different conclusion, given identical priors.


If it wasn’t clear, I agree with your parent comment


My main takeaway from Bostrom's Superintelligence is that a super intelligent AI cannot be contained. So, the slippery slope argument, often derided as a bad form of logic, kind of holds up here.


See also social media platforms. They are very well informed of the results of their algorithmic changes.

See also Big Tobacco. They knew exactly what their additives to the product did.

See also 3M and PFAS. See also Big Oil. See also, see also...

Why would I expect anything different from any other branch of business, given the precedents laid before us?


I think they do know. Corporations are filled with people who 'know' but can't risk leaving, so they comply with such decisions, and even promote them. It's a form of groupthink, with the added risk of being fired or passed over for promotion.

Eichmann.



