Good luck trying to tax AGI companies 95% of profits. Perhaps you could strike but you'll be unemployed.
I guess we'll just have to cross our fingers that Sam Altman and friends will pity us enough to give us some pocket money to survive. That's assuming the AGI created is aligned with our values.
I mean, we can keep hoping that AGI will be a good thing with zero evidence or we can take action to slow progress now so we can proceed with caution.
In the history of political movements,
have there been any that have been successful at slowing progress? The Luddites failed to stop the industrial revolution, despite all the looms they broke. The cat's been released from BagGPT, and even if all the world's countries ban AGI research (somehow), that's just going to drive the work underground, and the first ones to get usefully further along the curve will be able to outcompete any who haven't.
It's a real-life prisoner's dilemma, with 8 billion prisoners, and the first one to defect gets to decide what the fate of the rest of humanity looks like.
Yeah, that's kinda the problem, same as with tax evasion; at worst the company would create a subsidiary in the country and book it as a pure cost center on the balance sheet, with all profits flowing to the tax-haven parent.
> In the history of political movements, have there been any that have been successful with slowing progress?
Not an argument, unless you believe "we failed in the past so why bother trying now" is sound reasoning.
> The cat's been released from BagGPT
Do you say the same thing about nuclear weapons? I mean, given that the nuclear weapon cat is now out of the bag we might as well see just how big a bang we can make, right? Anyone who wants to stop progress of nuclear weapon development is obviously a Luddite.
And while we're at it, is the fossil fuel cat out of the bag too? Is any effort to try to limit the release of CO2 as a global community pointless?
--------
Please keep an open mind and let me reason with you.
I assume you believe AGI poses some level of risk to humanity. The exact nature of that risk isn't too important – it could be economic risk, political risk, existential risk, or all three. Basically, I am assuming you believe there are enough things that could go wrong in creating superhuman AGI that a sane species would seek to limit the progress of capability research until it can proceed safely. If we disagree on this, please explain why you do not believe AGI poses any risk to humanity. Note: hopium that things will be okay is not an argument.
Okay, so since we agree that a sane species like us humans would limit AI capability research and proceed cautiously, we now need to solve the prisoner's dilemma which you correctly identified.
My solution to this would be as follows:
First, we need to take this seriously. We need to be frank about the risks we face from AGI and try to educate the public about what may be coming. Currently the general public is so clueless about AI that they're either unaware of recent advances, or they believe silly things like it being possible to unplug an AGI or to program it to be good.
Secondly, we need to establish an independent international organisation to oversee state-of-the-art AI research. Any country which does not agree to this will be sanctioned, and as a global community we must agree to do everything we can to pressure those countries to cooperate. In my opinion this includes war, but only because I believe AGI poses a large enough existential risk to humanity that such measures are necessary. The appropriate actions in practice would obviously need to be debated and agreed upon as a global community.
Thirdly, we need a way to increase global trust in, and the transparency of, AI research. To do this I would propose the creation of a global AI whistleblowing fund. All countries party to the international agreement to oversee state-of-the-art AI research would be required to contribute to this fund annually. The fund would allow citizens from any country in the world to come forward with evidence against corporations, governments or individuals in violation of the agreement. These citizens would then receive a reward for their information, plus protection from a signatory country of their choosing. By incentivising whistleblowing, it would be hard (although admittedly not impossible) for any large research project to take place in secret.
Fourthly, fund and research ways to identify and limit unauthorised AI projects via technology and audits of things like GPU orders. Simply limiting the distribution and capabilities of GPUs at a global level would be one of the easiest ways to ensure AI capability research can't advance too quickly. Of course we would need countries like China to play ball, but so long as we do this transparently and they understand the risks to humanity should they not cooperate, they would have no reason not to. This isn't too different from limiting the development of nuclear weapons as a global community, which we have been quite successful at doing.
Finally, all approved AI research projects should be conducted at an international level for the benefit of all of humanity, and research teams must detail how they are approaching safety and publish all safety research. Additionally, there should be government grants for things like alignment research to better prepare us for superhuman AGI.
What I'm proposing obviously isn't perfect. In the same way we can't guarantee nuclear weapons won't be created in the future, we also can't eliminate the risk of AGI entirely. Instead, a more reasonable goal should be pursued: slowing capability research while increasing safety research as much as possible. There is a possible future where AGI is great for humanity, and our goal should be to maximise the chance of that outcome. The goal is not to "limit progress". Even if we disagree about the exact actions to take, there is no alternative world in which it is reasonable to allow a handful of billionaires to continue AI capability research unregulated while we cross our fingers that the AIs they create will be aligned with our values, and that those billionaires pity us enough to give us food and shelter once we are all unemployed and fully dependent on their creations.
> I guess we'll just have to cross our fingers that Sam Altman and friends will pity us enough to give us some pocket money to survive. That's assuming the AGI created is aligned with our values.
My guess is that Altman and co. already have some kind of exit strategy (like fleeing to New Zealand after they've captured a huge chunk of the developed world's wealth).