I think you’re missing some big pieces of the idea here.
The first is that these constraints aren’t easy. “Make paperclips in a way that doesn’t hurt anyone.” Ok, so it’s going to make sure every single part is ethically sourced from a company that never causes any harm to come to anyone, ever, and doesn’t give any money to people or companies that do? That company doesn’t exist. So you put in a few caveats, and those aren’t exactly easy to get right either.
The second part is an “any versus all” problem. Even if you get this right in any one case, that’s not enough. We have to get it right in all cases. So even if you can come up with a way to make one ethical superintelligence, do you have a way to make all superintelligences act ethically?
I actually believe the general premise of this question is the biggest threat to humans. I don’t think it’s a doomsday bot that gets us. It’s going to be someone trying to hit a KPI, and they’ll make a superintelligence that demolishes us the way a construction crew demolishes an anthill.