
These people are too full of themselves. The physicists who invented The Bomb didn't have any special insight into the philosophical and societal implications. It takes a special kind of person to invent something so big it can change the world, but there's about a 0% chance that those same people can control how the technology then gets used.

I wish they'd focus more on the technical advances and less on trying to "save the world".



Not that people, especially tech oligarchs, aren't full of themselves, but many of the physicists who were inventing the bomb, most famously Oppenheimer, also laid much of the philosophical groundwork for thinking about the implications of nuclear weapons.

The situation may be a bit different now. The megacorporate world is the biggest threat from these tech developments, and the inventors are getting deeply embedded in it. It's somewhat like a situation in which the implications of nuclear weapons were handled only by the military.


They laid them out, yet they didn't have the ability to prevent the bomb's use, didn't predict the Cold War, and if you had asked any of them, the chances the world would still exist ten years after proliferation were basically zero. They were all depressed because of how sure they were that they had helped destroy humanity; it was just a matter of time. And here we are. We're talking about some of the most intelligent humans ever, and even they got it wrong.

I'm not saying they can prevent themselves from thinking about the implications; anyone would. But this grandstanding, as if nobody else will be able to figure it out or only they understand the dangers, is what's a bit weird.

My main point isn't "don't listen to the inventor"; it's more like "listen to the inventor, but don't think they know the future just because they invented a gadget". These are people with investment documents saying they don't know what role money will play in a post-AGI world. It has the vibes of a cult mixed with role play.


I don't think that's a fair characterization of their predictions.

Szilard predicted the development of the bomb would end major war, and he was mostly right for the right reasons, though he envisioned a UN-type organization to control the bombs. He was one of the first to understand the potential for a fission chain reaction once nuclear physics got underway, and he was involved in its development. I think Ilya would be happy to be compared to him.

Bohr, too, had pretty good predictions about the implications of the bomb.

Oppenheimer seemed to understand some of the implications but was happy to leave the policy stuff to the government, and not to try to influence anything the way Bohr and Szilard did.

Teller just wanted to keep pushing the tech bigger and bigger.

So the inventors had all sorts of different predictions and values, same as here. Some better than others.


They had very significant effects on nuclear policy. Oppenheimer's big idea (he was, of course, the figurehead of a larger movement) was to prevent a nuclear arms race, which obviously didn't fully materialize, but it was the basis for, e.g., test bans, disarmament, and limits on proliferation.

I agree that you shouldn't ask the inventor just because they invented the gadget. But at least in the Manhattan Project, the scientists were in a very strong position: if they refused to cooperate, the bomb just wouldn't happen (soon enough). And for that, you'd want inventors versed in the wider implications.

One difference from that era is probably that interest in wider philosophy and politics was then encouraged among academics. E.g., the "giants of modern physics" (Oppenheimer, Einstein, Bohr, etc.) took great interest in, and did serious scholarship on, philosophy and societal issues, whereas nowadays, as you said, there's practically zero chance the inventor of the gadget understands the issues. They should "shut up and calculate" and leave the philosophy to philosophers and the societal impact to economists.

A problem is that philosophers and economists don't really understand the technology and its ramifications, and are heavily influenced by the hype. And philosophers have very little power to influence anyway. There are valid reasons for such specialization, but it has drawbacks.


How can you be sure that it wasn’t their efforts that allowed us to continue existing? Maybe if they hadn’t made the destructive implication of their work clear no one else would have recognized it. None of us can say.

It’s obvious in hindsight, but can we really say with certainty that things would have been exactly the same if the inventors did or said nothing? I’m not willing to take that bet.

To me, effort such as this is always worth it, even if it DOES have no effect, because the chance it can change things for the better is worth it.


Because they were not in charge; if anything, the more aggressive voices among them had bigger roles in the Cold War. You can never rule out some impact, but an awful lot of effort went into handling Cold War dynamics in which those scientists had no role.


In such complicated things there's nobody "in charge". Some policy documents may be signed by some person in some capacity, but that doesn't mean the person dictated the thing in a vacuum.

There were many different voices from many different walks of life of course.


There were huge research efforts and, of course, some people were in charge of those.


>They were all depressed because of how sure they were that they helped destroy humanity, it was just a matter of time. And here we are.

There's still plenty of Anthropocene left for them to be right. At least I hope there is.


They all got together and wrote a letter to FDR, with Einstein as signatory, because the implication of not building it was the Axis powers getting and using a nuclear weapon first.


You are talking about the people not only doing the work and creating the wealth, but making the innovations and creating new things. Then you say these are the people who are "too full of themselves", that they should give up any say in how the commodity they create is used (and by your definition it is a commodity), that they should focus on the needed innovations and that's it.

Well if the workers doing the work don't have a hand in making decisions, who does? We know the answer, from our current era of heirs, limited partners and such, the scions one can see on Rich Kids of Instagram, although they leave the work to their private wealth advisors.

It's the most parasitic idea possible. "Just do all the work and figure out the innovations slave, the aristocracy will take it from there".


I'm saying a bunch of millionaire Silicon Valley broskis are too full of themselves because they keep going on and on about how what they're working on will radically transform all of society, and how only they can save us from it.

I can assure you their random dev #56 isn't making any decisions already; this is all PR and grandstanding from the already-millionaire leadership, with their power plays about who knows best how to save all of us from AGI.


That's an overly broad generalization, Leo Szilard certainly had insight into the implications:

> "With an enduring passion for the preservation of human life and political freedom, Szilard hoped that the US government would not use nuclear weapons, but that the mere threat of such weapons would force Germany and Japan to surrender. He also worried about the long-term implications of nuclear weapons, predicting that their use by the United States would start a nuclear arms race with the USSR. He drafted the Szilárd petition advocating that the atomic bomb be demonstrated to the enemy, and used only if the enemy did not then surrender. The Interim Committee instead chose to use atomic bombs against cities over the protests of Szilard and other scientists. Afterwards, he lobbied for amendments to the Atomic Energy Act of 1946 that placed nuclear energy under civilian control."

https://en.wikipedia.org/wiki/Leo_Szilard


Their insight is just as valid as anybody else's. The difference is, as the people who can actually take that step forward, they have a unilateral ability to take it, not take it, or decide how to take it.

I think they are exactly the right amount full of themselves. Their insight may not be special, but what makes some bureaucrat's insight more valuable than theirs?


Whatever steps they choose not to take, someone else will.

And LLMs aren't taking over the world any time in the foreseeable future, they're glorified parrots.


The former president of the US is a glorified parrot, sometimes not even a good one, and still he was able to become president, so I wouldn't rest a defense on just that.


what does that have to do with anything?


That many (most?) humans, in power but even more so outside it, are worthless parrots as well, and hallucinate whatever they don't know or are unsure of. Even presidents. While using their native language terribly. It's related to the LLMs-taking-over-the-world point the GGP commented on above: ChatGPT is smarter than many humans I encounter daily, so why couldn't something just a tad better take over? If Trump could, why not a stochastic parrot?


Just because something seems plausible doesn't make it more likely. People are not LLMs, writing software is nothing like building a house, being good at math doesn't mean you're good at other things, and bad analogies are just meaningless words.


I didn't say humans are LLMs; I am saying many humans are hallucinating parrots dumber than LLMs. And when one of those is a president, LLMs can take power, even if that's not necessarily related to human intelligence.


Please, it's dead obvious to anyone with a working brain that on average ChatGPT makes a lot more true statements than Trump, and that didn't stop him in the slightest from becoming president. So judging those capabilities to estimate the potential damage it could do to society is just plain wrong.


I mean if your only qualification for president is that they speak well, I’m not sure you understand what the president does.


Pretty sure that if Trump can do it, there is nothing remarkable about it; the only subject in which he had any remarkable insight was golf, nothing else. Unless you hold the belief that he held the position without meeting the qualifications, but that takes things into too subjective a territory.

Anyway, this has nothing to do with what I believe the president does; it's about the intellectual level the general public demands when choosing one. For ChatGPT, the ability to be persuasive is far easier to reach than AGI, and the same is true of humans.


The irony is that they are opposing forces. AI as it is now is still tame; any technical improvement makes it more dangerous, not just in terms of the economic fear of jobs being taken over, but even the science-fiction fears become viable if an AI becomes self-aware and consistent in intent.

If you focus on the technical advances, then you are focusing on NOT saving the world. Good that at least this guy isn't so wholly focused on the technical side, even though "saving the world" is such a blurry concept.


They are focusing on technical advances to enable people to save themselves from future AGI. For that, they need more time, since AI safety research lags quite a bit behind capability research.

https://openai.com/blog/introducing-superalignment


Respectfully, I totally disagree.

I am become death, destroyer of worlds.

The Nuremberg Trials.

And AGI is 10x the impact of nuclear.

Builders are not cogs, and should not try to be them.

We need MORE ethics and good intentions, not less. It is the psychopathic, corrupt business rot we should be afraid of, not people actively trying to do good.


you talk about AGI like it is an actual thing and not just some delusional pipe dream that these nerds use to tell themselves they are different from the Valley, they’re not just there for the obscene amount of money, and their shit doesn’t stink.


Even if ChatGPT and LLMs aren't really AGI, if they continue to improve their answers they can have a serious impact, especially if it becomes very difficult to distinguish them from a human.


Sakharov had some insight into the implications.


There is an even more important lesson in this: the people who worked on the bomb thought the world would be over in short order, and they were totally wrong.

Feynman talked about how there was an idea of "normal" people walking around not knowing they were basically doomed and going to die in a certain nuclear holocaust in a few years.

Von Neumann thought the U.S. should launch a nuclear first strike at Moscow. Obviously, if war is inevitable then you don't have to be the father of game theory to figure out you should strike first.

It was just a year ago that literally everyone was predicting we would be in a recession right now. We can't predict that, but we can predict how AGI plays out, even when we haven't bothered to define a measure of what AGI even is? Even people who grew up "knowing" we would all have sentient robots by 1997. I can't think of a single prediction I've heard in my lifetime that has turned out to be true, other than government debt going up.


Not only that, but go back a decade and look at what the singularity people (the crowd the AI alignment movement comes out of) were saying. When employment was struggling to recover after the Great Recession, the singularity folks said it wasn't going to recover, because tech was replacing humans, and that this was just the beginning of mass unemployment (one of the reasons UBI got so popular).

CGP Grey's popular "Humans Need Not Apply" video is a good example of this kind of thinking:

https://www.youtube.com/watch?v=7Pq-S557XQU&t=199s

The video claims that self-driving cars are already here and already better than human drivers, and that the only question is how quickly they replace humans. He argued that Baxter, the general-purpose robot, could already copy the tasks of a human worker and do the work for much cheaper. Baxter was discontinued in 2018 because of low interest.

These people have a horrible track record when it comes to technology predictions, and it's unnerving that, instead of reflecting on how wrong they've been, they're doubling down and trying to slow technological advancement.




