"I think the fears about “evil killer robots” are overblown. There’s a big difference between intelligence and sentience. Our software is becoming more intelligent, but that does not imply it is about to become sentient."
It is the period between sentience and "advanced" artificial intelligence that should be worrying, for two reasons: first, it is close at hand; second, and more importantly, the unintended consequences of tech are never well thought out in the initial stages of adoption.
I'm reading Eric Schlosser's book on the early days of nuclear weapons and the many, many near misses the US experienced as it adopted nuclear arms without much thought to risk management. I see parallels in the race to develop and deploy pre-sentient A.I. Link below to Schlosser's book.
This appears to be an unpopular idea on HN, but: we are nowhere close to developing sentient machines. We have made no progress toward it, and we're not getting closer. Statistical optimization and sentience are parallel lines. No matter how far you travel down one, you don't get any closer to the other.
If "evil killer robots" are created, it won't be because of advances in big data or deep learning or any other buzzword. It'll happen independently, potentially aided by current state of the art techniques but not because of them. It could happen at any time, some guy in his basement could be on the verge of a breakthrough as we speak. But google's ability to detect cat faces or Watson's winning on jeopardy are not signs of a coming apocalypse. It's highly unlikely that we can cluster or gradient descent our way to human level of intelligence, no matter how many cores or hidden layers are used.
This whole "famous people are worried about AI" thing is understandably media-friendly, but it's a bit of a distraction. Fortunately the majority of active researchers aren't taking it seriously at all.
>Statistical optimization and sentience are parallel lines. No matter how far you travel down one, you don't get any closer to the other.
Source? How can you possibly know how AI will be developed? Machine learning is likely to be a huge component in any AI. Deep learning has been shown to beat all sorts of tasks previously thought to require strong AI. It can learn complicated patterns and heuristics from raw data, which is the hardest part of AI.
It's quite possible we are 99% of the way there, and we just need someone to figure out how to use it for general intelligence.
Regardless, this is all irrelevant. The people concerned about AI are mostly worried for the long term. No one is claiming we will have AI in ten years for certain. I'm not sure why people keep equating these two totally different predictions.
That's why it isn't very reassuring when ML researchers say that the stuff they work on isn't dangerous. No one is claiming that it is.
To be honest, I'm far more worried by cellular level simulations like OpenWorm[0] because there seems to be a clearer path to human level AI: more processing power and more advanced knowledge of brain biology. Given enough time and effort, it seems inevitable to me. Luckily or unluckily - depending on your perspective - we're still far from simulating a human brain.
If nature clustered or gradient'ed its way to human intelligence, why can't we? We have algorithms that can perform as well as fruit flies in simulations; why can't we continue to improve the quality of our simulations and algorithmic approaches?
> It is the period between sentience and "advanced" artificial intelligence that should be worrying, for two reasons: first, it is close at hand; second, and more importantly, the unintended consequences of tech are never well thought out in the initial stages of adoption.
Pretty much every academic that I'm aware of who works in the field of AI has similar views to Ng, i.e., suggestions of an impending AI cataclysm are wildly overblown and don't fit the facts. The people predicting doom are those who for the most part aren't involved with current research and as such don't have a good perspective on the limitations of current techniques.
There is no magic in modern AI advances and there is no reason to expect that they'll yield anything like generalized intelligence. By the nature of the methods they work well in the specific domains they're trained on and poorly everywhere else.
The difference between a ball of uranium and a nuclear bomb is... a slightly bigger ball. Dangerous autonomous software is very easy to imagine. What if someone hid code in flight control software that activated at a given time and sent all planes flying into the nearest building? Software is only safe because we program it to be safe. The more capable and widespread software becomes, the more dangerous it becomes.
>The difference between a ball of uranium and a nuclear bomb is... a slightly bigger ball.
And a very precise detonation device strapped around said ball of uranium.
Software is not by itself dangerous. It only exists in the most abstract sense. What is dangerous is creating physical devices that have the ability to cause physical damage if manipulated in a certain way. Allowing those devices to be controlled autonomously through a software-hardware interface is where the danger from "AI" comes in.
No, they'll just roll back selected transactions arbitrarily like they did in past flash crashes.
We're much closer to runaway biological threats than we are to AI-inspired ones. Or maybe that's my knowledge gap in the two domains speaking and we're equally far away in each.
Yes, but no physicist will tell you it's safe to leave uranium lying around. Virtually everyone working in AI, including the biggest names in the field, will tell you that the methods are very domain-specific (they generalize, but you have to train them specifically for a given domain) and bear no resemblance to sentience as it is typically understood, other than that they give rise to seemingly intelligent behavior. Which actually isn't so much intelligence as discovering a function mapping inputs to outputs.
And no one is arguing that software can't be dangerous. It clearly can, just like poorly constructed bridges or cars can be dangerous. But the idea of machines thinking for themselves is science fiction. There is no indication that we're anywhere close to that happening.
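To make "discovering a function mapping inputs to outputs" concrete, here is a toy scikit-learn sketch (everything in it is invented for illustration): a small network fits the region it was trained on and falls apart outside it, which is exactly the domain specificity researchers keep pointing at.

    # Toy sketch of domain specificity: a small net nails the region it
    # was trained on and degrades badly outside it. All data is invented.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X_in = rng.uniform(-1, 1, size=(2000, 1))          # training domain
    model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                         random_state=0)
    model.fit(X_in, np.sin(3 * X_in).ravel())          # learn y = sin(3x) on [-1, 1]

    X_out = rng.uniform(2, 3, size=(200, 1))           # a region it never saw
    print(model.score(X_in, np.sin(3 * X_in).ravel()))   # R^2 near 1.0
    print(model.score(X_out, np.sin(3 * X_out).ravel())) # far worse, often negative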
I've been thinking about what I visualize as "plant" intelligence, in response to Elon Musk's statements about killer AI. It seems like Andrew Ng is saying the same thing.
The metaphor is that you could have "sedentary" superintelligences that solve a relatively narrow problem. They take in huge amounts of data and spit out ingenious answers to well-defined problems: things like traffic routing, energy allocation, or photo/video recognition. They are tools for humans.
By what mechanism do these superintelligences evolve into organisms motivated to extinguish humans? Reproduction, which leads to independent evolution. That process is inherently unpredictable.
But right now we're firmly in the stage of humans creating AI. Our AI is nowhere near capable enough to create more AI. And I would argue that there is no possibility of this happening by "accident"; you would have to specifically engineer AI to be able to create more AI.
I agree with Ng in that it's more of a distraction now than anything else. The jobs issue is a lot more pertinent.
It reminds me of this book I got at Google 8 or 9 years ago. This guy asked robotics professors how they would avoid being killed by a robot. And one professor said: "Climb up one step" (on a staircase).
But I will say that there is a difference between feedback and unconstrained evolution. Machine learning mechanisms are feedback loops. But that doesn't mean they will break out of their own paradigm without human intervention.
Malware is actually a good example.
The Morris worm is a static piece of program text, invented by a human. It took advantage of a homogeneous and static environment to spread widely (lots of computers running the same program with the same vulnerability.) There's a huge difference between spreading in this environment and adapting / reproducing so as to spread in novel environments.
Think about Stuxnet. AFAICT, this was a completely novel solution to a problem that had never been seen before: jumping an air gap and destroying a piece of hardware via software.
What do you think is a more fruitful approach to designing something like Stuxnet?
1) Assemble the best and brightest minds from multiple domains, painstakingly build simulation environments based on the most plausible intelligence, test custom malware laboriously in those environments, etc.
2) Start with some existing malware (not unlike the Morris worm), and design a meta-algorithm to evolve it into something capable of destroying a nuclear plant from afar?
#2 is science fiction, as far as I can tell. The level of ingenuity required for a task like this is just qualitatively outside the domain of computers.
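To be fair to #2, the meta-algorithm half is genuinely easy to write down; the science fiction lives entirely in the fitness function. A toy evolutionary loop in Python, with a harmless bit-matching target standing in for anything malware-like:

    # The "evolve it" meta-algorithm, minus anything malware-like: the loop
    # is a few lines; the entire difficulty hides in fitness(), which here
    # is a toy bit-matching score.
    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.1):
        return [1 - g if random.random() < rate else g for g in genome]

    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
    for gen in range(100):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]
    pop.sort(key=fitness, reverse=True)
    print(gen, pop[0])

A fitness function that could score "destroys a nuclear plant from afar" is the part nobody knows how to write, which is exactly the ingenuity gap described above.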
This is closer to what would be required for us to lose control: for AI to be designing AI, rather than humans designing AI. And keep in mind that AI designing AI is harder than AI designing malware.
I have seen the demos from DeepMind where it learns to play video games without any knowledge of the rules. I don't claim to fully understand these techniques. (I have however worked in a research lab on robotics, and seen the vast gulf between what the media portrays and reality.)
But I would be interested if anyone knowledgeable in those techniques sees a realistic path from deep learning to something like #2.
Couldn't agree more. Every time there is some advance in so-called "AI", the killer-robot stories and the like strike back.
These are just advances in doing things humans can already do, e.g. recognizing things in pictures or driving a car. Deep learning has been known for a long time; the computations just took too long until recently.
Fundamentally nothing has changed; we're no closer to an "intelligent machine".
What I wish I saw more of is fear of AI even without general intelligence. When you combine autonomy with AI, even stupid AI, you can still get bad results and significant collateral damage. See Knight Capital's trading bug, Roombas that can't distinguish between dust and dog poo, and more. Yes, Google's self-driving cars may end up being more reliable and safer than average human drivers. But that assumes the engineering team has a good handle on all situations and bugs. I'm not saying hold up progress on this front. I'm only saying that people seem to be happily buying into even trivial AI advances without both eyes open.
In Yemen or rural Afghanistan or Pakistan, the inhabitants already have to deal with killer flying robots that are currently people-operated but facing a manpower shortage; they will probably eventually be 100% automated. Just like any military action, whether the robots/algorithms, or the people who carry out these actions, are perceived as evil depends on which country one lives in, what one's political beliefs are, and whether one knows, and how one feels about, the people killed by said actions. In the future I expect it will be exactly the same. The future is already here, it's just not evenly distributed.
An autonomous drone that has been tasked to kill a human and a machine that decides on its own to kill a human are very much not the same thing. Autonomous killing is nothing new or novel - we've had sentry guns for many years now.
> One of the things we did with the Baidu speech system was not use the concept of phonemes. It’s the same as the way a baby would learn: we show [the computer] audio, we show it text, and we let it figure out its own mapping, without this artificial construct called a phoneme.
Eh? A human baby normally doesn't even start to learn text until well after it shows a complete mastery of every phoneme in its native language along with some basic vocabulary and a decent understanding of syntax.
I find it baffling that such a distinguished AI expert would assert that phonemes are unnecessary invented constructs. Are there many who share the same belief?
Complete mastery? I don't think so. Parents read books with children before the child can speak.
As Ng noted, phonemes are artificial constructs. If they are necessary for a particular language then they should be learned in the same way other constructs are, not imposed on the learning algorithm from the outside.
Are phonemes imposed on human babies? Maybe if you were raised by Hooked on Phonics[1], but I'd wager that most humans did not have the idea imposed on them.
Well, think of it this way. Characters are artificial constructs, and even much more artificial (with only thousands of years of history): there are still languages in this world without a writing system, and they're doing fine. (Well, maybe not fine, but that's more to do with the influence of more powerful languages than lack of writing.)
However, no serious NLP researcher would suggest building a text-to-speech system by getting rid of the middle OCR layer and just hooking the raw image directly to the expected sound. (Well, at least I think so, but I'd be glad to know if I'm wrong.)
Phonemes are absolutely imposed on human babies. Sure, they wouldn't know the word "phoneme", but that doesn't make the concept any less real. Imagine how a typical education in reading/writing would start: "This is G. It is used for words like grass, game, and girl. Actually, it can also be used for gem or genie!"
Can you imagine an English-speaking child listening to this and not immediately understanding that there are two distinct sounds involved, that the first sounds of "grass", "game", and "girl" are somehow the same (even though the waveforms are different), and that this sound is somehow different from that of "gem" or "genie"? If a child doesn't understand that, then the child probably needs speech therapy.
Andrew is simply saying (if I read him correctly) that if you use generative models you might classify sounds into something other than phonemes at one layer, and that classification may actually be better than phonemes for recognising sounds. Phonemes are a construct for us to express a concept with language, but they might not be the best way to represent a sound in a machine. So why try to write the routines to map sounds to phonemes when you can train the system to learn whatever categories help it recognise the sounds better?
> However, no serious NLP researcher would suggest building a text-to-speech system by getting rid of the middle OCR layer and just hooking the raw image directly to the expected sound. (Well, at least I think so, but I'd be glad to know if I'm wrong.)
Perhaps you're thinking mainly of Western writing systems?
Presumably since Andrew now works for Baidu, he is particularly interested in reading Chinese. The interesting thing about Chinese is that, as I understand it, the same symbols can have different pronunciations in different dialects, but all pronunciations mean the same thing.
I.e. the symbol has a unique meaning, but multiple possible sounds. Which implies the sound isn't really what's important here.
But no one explicitly tells the baby what each phoneme is. It's not imposed on them, they have to learn phonemes by themselves.
In speech recognition, people were taking audio segments of different phonemes, manually labeling them, and having the algorithm predict them, rather than just giving the algorithm the raw audio and the desired output (roughly the end-to-end setup sketched below).
Ng's point is that babies don't need to be explicitly taught the concept of phonemes. They learn them from the intrinsic structure of the language (same with syntax). Training with text is roughly analogous to pointing at things and saying what they are called.
While the concept of phonemes is certainly valid, the specific mappings created by linguists could be considered ad hoc. E.g. English has a huge number of pure vowels. Maybe there is some order to English vowels that is hard to capture in the representations linguists use, but easy in these models.
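For anyone curious what "just giving it the raw audio and desired output" might look like in code, here is a rough sketch of the end-to-end setup with CTC loss. This is an illustration of the general idea only, not Baidu's actual system; the model, shapes, and sizes are all made up:

    # Rough sketch of the end-to-end idea: audio frames in, characters out,
    # with CTC loss handling the alignment. No phoneme labels anywhere.
    import torch
    import torch.nn as nn

    T, N, n_feats, n_chars = 200, 8, 40, 29   # frames, batch, features, chars (incl. blank=0)

    class Speller(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRU(n_feats, 128)          # reads the audio frames
            self.out = nn.Linear(128, n_chars)       # scores characters per frame
        def forward(self, x):                        # x: (T, N, n_feats)
            h, _ = self.rnn(x)
            return self.out(h).log_softmax(-1)       # (T, N, n_chars)

    model = Speller()
    ctc = nn.CTCLoss(blank=0)                        # aligns frames to characters for us

    x = torch.randn(T, N, n_feats)                   # stand-in for spectrogram frames
    targets = torch.randint(1, n_chars, (N, 15))     # character indices of transcripts
    loss = ctc(model(x), targets,
               torch.full((N,), T), torch.full((N,), 15))
    loss.backward()                                  # learn the mapping end to end

Whatever intermediate categories the network learns are its own; whether they resemble phonemes is left to the optimizer.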
> Ng insists that Baidu is “only interested in tech that can influence 100 million users.”
Is this an effective way to start projects? Sure, it's the end goal, but using this as the starting point might be unnecessarily limiting. It might limit ideas to things that seem safe at the beginning.
Most novel ideas seem very niche or minor at the seeding stage and only eventually explode in popularity; that's when their importance starts to become apparent (see Twitter).
I'd rather invest in a large group of people broken up into small teams trying varied approaches, from bold DARPA-style ideas to more practical, localized problems, similar to Valve or YC.
There's a difference between the kind of things you are referring to as "projects" and the kind of things Ng is referring to as "tech".
A "project" is usually an application of existing technology done in an innovative way. It may well end up being the next Facebook, but it isn't something that usually requires breakthroughs in scientific fields.
Ng's (and Baidu's) "tech" is a much more basic level of research. Baidu isn't interested in funding research into translating English into Latin because - while it is novel - it is unlikely to impact 100 million users.
OTOH: Chinese/English translation? Yes. World leading voice recognition? Yes[1]. Understanding images? Of course![2]
Anything happening with cultured neural networks? If we can't approximate neurons or neural networks well enough in software or hardware, why not just grow a large biological neural network and interface with that?
http://www.amazon.com/Command-Control-Damascus-Accident-Illu...