I honestly don't think anything in that post 'demolishes' the criticism, or even advances much of an argument.

It's just a huge wall of text full of weird analogies, which is quite typical of these 'rationalist' community posts.

People like Bostrom and Yudkowsky have one thing in common: they are not engineers, and they stand to gain financially (in fact, it is what pays their bills) from conjuring up non-scientific, pie-in-the-sky scenarios about artificial intelligence.

In Bostrom's case this goes much further; he has given this treatment to almost anything, including nuclear energy and related fields. Andrew Ng put it quite succinctly: worrying about this stuff is like worrying about overpopulation on Mars, and there's maybe a need for one or two people in the world to work on it.

I really wish we could stop giving so much room to this, because it makes engineers as a community look like a bunch of cultists.


> Worrying about this stuff is like worrying about overpopulation on Mars, and there's maybe a need for one or two people in the world to work on it.

When something is shown to be doable in principle, it's often not clear how difficult it will be in practice.

In 1933, Ernest Rutherford dismissed nuclear energy as a viable power source, let alone a weapon, famously calling the idea 'moonshine'. That same year, Leo Szilard conceived of the neutron-induced nuclear chain reaction, filing a patent on the concept in 1934. At that point, nuclear fission had not yet been discovered, and actually making nuclear energy viable was a pipe dream.

As we all know, the first nuclear weapons were used in 1945. Right up until that day, German physicists believed nuclear weapons would not figure in the war: while possible in principle, building a working device would require a herculean effort that no nation could mount in time.

The German physicists weren't too far off in estimating how difficult nuclear weapons were. They just failed to predict that the US would throw 130,000 people, including many of its top minds, at the problem for years.

Now, we have no idea how difficult superintelligence will be. But the possibility that we're a couple of breakthroughs and a Manhattan Project away from superintelligence is real, and I want a hell of a lot more than one philosopher and an eccentric fanfic writer working on this.

EDIT: No offense to Yudkowsky. I thought the fanfic was fairly good and, more importantly, achieved its purpose.


So you agree there is room for them to work on this, yet you feel they are making engineers generally look like cultists?

Maybe you’re just being oversensitive. The hype wave on AI danger is completely over, and there’s nothing wrong with people studying the question if that’s their interest.


You know we've been here before, right? I mean: the Lighthill report; Ray Kurzweil, a serial offender for over thirty years; the 'singularity is just around the corner' thing; outrageous claims for fMRI; self-driving cars; the overhyped IBM Watson, which health professionals are now linking to misdiagnosis problems.

Sure. We have Google image matching, better colorisation, some improvements in language processing, and good cancer detection on X-rays. These are huge. But hype is, alas, making engineering increments look like a cult.


Ray Kurzweil was never part of the AI danger hype.


No. That was my random anti-AI bias coming out. Ranter's gotta rant.


You're sure about that?

> [after discussing AlphaGo]

> Consider, for example, an old doctor; suppose they’ve seen twenty patients a day for 250 workdays over the course of twenty years. That works out to 100,000 patient visits, which seems to be roughly the number of people that interact with the UK’s NHS in 3.6 hours. If we train a machine learning doctor system on a year’s worth of NHS data, that would be the equivalent of fifty thousand years of medical experience, all gained over the course of a single year.
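
The quoted arithmetic checks out, for what it's worth. A quick sanity check (assuming the commonly cited NHS figure of roughly one million patient interactions every 36 hours):

    # Back-of-the-envelope check of the quoted figures.
    doctor_visits = 20 * 250 * 20       # patients/day * workdays/year * years
    print(doctor_visits)                # 100000, as quoted

    # ~1,000,000 NHS interactions per 36 hours => ~100,000 per 3.6 hours.
    print(round(1_000_000 / 36 * 3.6))  # 100000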

Doctoring is just like playing Go, right? Just increase the CPU cycles on it and pack more data in there; it's more or less the same.

You can make that assumption, but I don't think it'd be based in fact or reality, because Go has far fewer inputs and states than treating humans does. And you don't get to test every hypothesis and then rewind, either.
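
For a rough sense of scale (an illustrative upper bound, not a model of either domain):

    # Each of Go's 361 points is empty, black, or white, so the raw state
    # space is at most 3**361 (~1.7e172); the count of *legal* positions
    # is smaller (~2.1e170, per Tromp's exact computation) but still finite
    # and fully observable -- unlike a patient's physiology and history.
    print(len(str(3 ** 361)))   # 173 digits

Go is enormous, but it's enumerable, perfectly observed, and resettable; medicine is none of those.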

Most answers there are like this, turning unfounded assumptions into supposedly insightful rebuttals.

Let's try another.

> If we then take into account the fact that whenever one Einstein has an insight or learns a new skill, that can be rapidly transmitted to all other nodes, the fact that these Einsteins can spin up fully-trained forks whenever they acquire new computing power, and the fact that the Einsteins can use all of humanity’s accumulated knowledge as a starting point, the server farm begins to sound rather formidable.

Or maybe they tell each other fake news so quickly they can't tell what's right and what's wrong, like we do whenever we find a more efficient way to communicate?

Anyone can make unfounded assumptions about anything; I just did it. It's up to you to decide if you care whether these assumptions are based in reality or not. But if you consider yourself "rational", I think it'd be in your best interest to care.


Well, the assumption is that an AI will be able to learn a similar amount of knowledge from the same amount of data as a human would. That is of course totally wrong for almost all problems with today's algorithms, but that might change in the future. AlphaGo, for example, improved a lot by playing against itself without outside input; a toy sketch of that self-play idea is below.
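
A minimal illustration in Python (toy single-pile Nim with Monte Carlo-style value updates; the constants are arbitrary, and this is not AlphaGo's actual algorithm, just the self-play idea):

    import random
    from collections import defaultdict

    # Single-pile Nim: take 1-3 stones; whoever takes the last stone wins.
    # One shared value table improves purely by playing against itself.
    Q = defaultdict(float)            # Q[(stones_left, move)] -> value
    ALPHA, EPSILON, PILE = 0.1, 0.1, 21

    def choose(stones):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < EPSILON:
            return random.choice(moves)                   # explore
        return max(moves, key=lambda m: Q[(stones, m)])   # exploit

    def play_one_game():
        stones, history = PILE, []
        while stones > 0:
            move = choose(stones)
            history.append((stones, move))
            stones -= move
        reward = 1.0                  # the player who moved last won
        for state, move in reversed(history):
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward          # alternate winner/loser backwards

    for _ in range(50_000):
        play_one_game()

    # Optimal play leaves a multiple of 4 stones; from 21 the learned
    # greedy move should usually be to take 1.
    print(max((1, 2, 3), key=lambda m: Q[(PILE, m)]))

No human games, no labels: the table converges toward optimal Nim play from self-play alone.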


I wouldn't say "demolishes". More like it challenges his arguments with many, many words.



