Part 1 - Musk and others said we'll have self-driving cars all over the place by 2020. The author tweeted that we won't and discusses that for roughly half the article.
This part finishes with a quote by Urmson that he expects self-driving cars to take "up to 30-50 years" before they are really common.
Part 2 - the one quote above agrees with the author, so he then looks at an actual forecast from a 2018 “Human Level AI” conference where some/most think AGI will occur soonish.
For some reason, he decides Urmson is the only relevant expert, and says these "more large hats than cattle, but [...] people with paying corporate or academic jobs" from the conference must be wrong because Urmson's prediction doesn't match them.
He then does the same for Kurzweil and FHI - they don't match Urmson's prediction either.
Thus AGI has been 'delayed'.
_______
The argument is in my opinion so badly made that I am wondering whether the 17 people who upvoted actually read it, upvoted just because they are anti-AI in general, or upvoted because they like the author for other reasons.
Edit: This article is also from May, so its appearance on the front page given the quality puzzles me further.
I don't think it's very productive to divide the debate about AI and AGI into
"pro-AI" and "anti-AI". Brooks is certainly hard to see as an "anti". He has a
long career in AI research, he's the founder of at least three robotics
companies (that I know of) including the one that created the Roomba, perhaps
the only example of a robot that people actually have in their homes today,
and in general he has made significant contributions to AI. He's best known
for the "subsumption architecture", a robotics architecture that at the very
least shook up things in its time (I personally think it's so much old
cobblers but that's not the point).
The situation with the current debate about AI and AGI is that there is an
awful lot of hype and people saying things that make little sense, like the
Nick Cave song goes. That is causing all sorts of problems with public
understanding of the subject and people interested in the subject are
understandably disturbed. Some, like Brooks, try to redress the balance and
show up overblown fantasies of self-driving cars in 5 years or AGI in 10, for
what they are.
This is actually a stance that comes from at the very least, a genuine
interest in the subject, if not a bit of passion for it. It's exactly
backwards to see it as an attitude hostile to AI. Brooks is clearly motivated
by a wish to reduce the amount of noise and clear the discussion of bullshit.
(And of bragging about his past accomplishments, too, undoubtedly). That (the
bit outside parentheses) can only benefit AI research.
>I don't think it's very productive to divide the debate about AI and AGI into "pro-AI" and "anti-AI".
I agree, but his article very much plays with the different-camps trope. If he had made an argument like yours, I would've upvoted it, but the way he makes that argument comes down to 'this one expert's prediction disagrees with a bunch of other experts' predictions, thus those other experts are wrong', which is silly.
I'm not sure I see the value in treating academics as individuals and not a single entity here. Academic consensus matters more than any individual academic, so why not treat it as a single gestalt entity?
You can treat it as a single entity. The questionable thing is then throwing out 'the consensus' because a single other expert gives a different prediction.
What are you talking about?
There is no scientific consensus that fully autonomous cars will be available in the 2020s.
It necessarily requires AGI-level cognition for some tasks, and researchers have zero clue and zero formal roadmap for achieving AGI.
Thus any prediction is baseless, absurd wishful thinking.
As what I say is true, unless a sound refutation is provided, this should be the scientific consensus.
We are talking about the linked article. In it there is an estimate by a bunch of scientists which has been rejected in favour of one person's estimate. None of it is about 2020 by that point in the article or this discussion.
Perhaps you are confused because you thought we were talking about a worldwide scientific consensus?
>As what I say is true, unless a sound refutation is provided, this should be the scientific consensus.
This is not what consensus means (and again, nobody is claiming today that there is a consensus that self-driving cars will arrive in 2020).
Yes, an "AI" that is simply a collection of situational task-specific algorithms will encounter situations it can't handle.
But humans encounter situations they can't handle too. A fully autonomous but non-AGI car could simply do what humans do: pull over, turn around, or call for help.
> A fully autonomous but non-AGI car could simply do what humans do: pull over, turn around, or call for help.
Right or wrong, there is going to be (as you can already see with Tesla) a disproportionately harsh response to the AI failing in a car, doubly so if it results in an accident. Humans mess up, and if someone is found to be at fault, they are punished. We also feel compassion for other human beings. E.g. killing another human, doubly so when by accident (i.e. not premeditated), is likely to haunt anyone to the end of their own life.
A machine is afforded no such sympathy, and if anything, the machine is looked at with vastly increased suspicion and scrutiny because it lacks empathy.
We have to face reality here. If the current implementations do a hack job and just handwave problems away ("meh, humans make mistakes too"), they run a very strong risk of a big backlash: people in general will write self-driving cars off as fundamentally unsafe and set progress back 50 years.
A technology like this has to adapt to humans and our biases, the other way around is not happening.
How do you make the system decide that a situation is uncertain? It might have high conviction in an outcome only because it lacks general understanding. You cannot just handwave the problem away saying "oh just pull over when the system doesn't know". You are ignoring the most difficult class of errors, high conviction but incorrect predictions.
This is not a question of reducing complexity, as even very simple statistical tools will sometimes give you high conviction (and yet incorrect) predictions because they are simplified models.
The assumption that error conditions are easier to recognize and control than normal operating conditions is insidious.
For example, an AI could, with very high conviction, misidentify a patch of ice as a lane marking. A human could too, but we are talking about an AI that has to recognize when it's unsafe and pull over (as you wrote).
I'm sure you've seen the case where an NN taught to recognize wolves did so by picking up the presence of snow as the sole feature in a photo. Only because that happened to be the most distinguishing feature of images with a wolf on it! That AI would only recognize a wolf as long as the photo had some snow. It'd also of course misidentify mere snowy landscapes as a wolf. The mistake is silly, but issues like this are very difficult to diagnose once the inputs become more complex.
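To make that failure mode concrete, here is a minimal sketch (one invented feature, an arbitrary threshold, nothing to do with any real perception stack) of how even a simple classifier can be confidently wrong on an input from outside its training distribution, so a "pull over when unsure" rule never fires:

    # Illustrative only: an out-of-distribution input gets a confident, wrong label.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy training data, one feature (say, surface brightness):
    # class 0 = asphalt, class 1 = lane marking.
    asphalt = rng.normal(0.2, 0.05, 500)
    marking = rng.normal(0.8, 0.05, 500)
    X = np.concatenate([asphalt, marking]).reshape(-1, 1)
    y = np.concatenate([np.zeros(500), np.ones(500)])

    clf = LogisticRegression().fit(X, y)

    # A bright icy patch was never in the training data.
    icy_patch = np.array([[0.85]])
    p_marking = clf.predict_proba(icy_patch)[0, 1]
    print(f"P(lane marking) = {p_marking:.3f}")  # ~1.0: high conviction, wrong

    # A naive "pull over if uncertain" rule never triggers here,
    # because the model is not uncertain, it is just wrong.
    print("pull over" if p_marking < 0.9 else "keep driving")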
I'll give you a trivial example: to my knowledge, current AI cannot accurately drive when there is rain or snow.
The thing is, there are many more situations that a human can handle than the best pattern matcher in the world can.
Some situations just require reasoning beyond pattern matching, and current AI is all about correlations but has zero causal reasoning ability.
> The thing is, there are many more situations that a human can handle than the best pattern matcher in the world can. Some situations just require reasoning beyond pattern matching, and current AI is all about correlations but has zero causal reasoning ability.
Sure, but are there many situations where reasoning is required to not have an accident? I mean, humans aren't great at sudden reasoning either, it's something that takes time to engage in. I don't really care if my car just can't handle tricky situations, as long as it doesn't hit anyone or stops in the middle of the highway.
AGI hasn't been delayed. It's been overpromised. And the proponents will continue to overpromise right up until it arrives. We need breakthroughs in AI research on at least two fronts before we can even tackle the AGI problem.
One is that we need to understand the mechanisms of neural networks at least an order of magnitude better than we currently do. The best article on the subject that concisely describes NNs is The NN Zoo[0]. I have no idea what needs to happen for us to collectively understand the mechanisms better. We need an Einstein moment, I think, where some researcher has a happy thought and then a decade (at minimum) where the rest of us catch up.
The other is we need to understand the problem we are solving. At its core it seems very easy to define intelligence, but you very soon realize that it's an endless Matryoshka with no discernible root. It's very easy to define intelligence in terms of solving elementary cogitation tasks, and this is what we are doing writ large. This leads to a culture of solving Weak AI tasks, hoping that the aggregation of the solutions will lead to AGI. Well, it's only going to do so if we are engineering solutions that will solve the larger problem, and if we don't understand it well enough it's going to be a really random occurrence finding it.
> we need to understand the mechanisms of neural networks at least an order of magnitude better than we currently do
The problem seems more fundamental than this. We may understand intelligence, but we do not understand human-like intelligence. Human-like intelligence originates from more than a neural network; there are hormones and other things involved. Possibly even quantum mechanics, in some of the more esoteric explanations.
> The other is we need to understand the problem we are solving.
Indeed, we are asserting that we will soon have a solution (AGI) without first knowing what the problem is (human-like intelligence).
> it's going to be a really random occurrence finding it.
I'm placing my bets on this. Human-like intelligence (and by unavoidable association, experience) is, by definition, subjective. The tool that we use to observe and explain the universe around us is science. We have no such tool for the universe within us.
Even dog-like intelligence would be extremely useful.
It's interesting when training a dog. Sometimes you can see the gears turning and know their thought process and conclusion before they do, since we are just much smarter. And other times, they come up with something that makes no sense at all based on their "dog logic". I guess their brain just works differently.
Anyway, focusing on simply humans may be short sighted. I often understand systems better by comparing them to similar systems and working out the differences. If anything bears fruit I think it'll be the researchers that are starting small and trying to replicate a worm brain. Then build from there, faster than evolution, because we can.
> we need to understand the mechanisms of neural networks at least an order of magnitude better than we currently do
Many, many orders of magnitude more. I can't provide a link, but I read a while ago about this fascinating experiment: you put a few cardiac cells in a petri dish. They start crawling around. When they find each other they stick to each other, and after a small group forms, they start beating together.
We think of biological neurons as little more than their ANN cousins: take a few inputs from a few dendrites, apply a ReLU or some other non-linear function to them, send the result through the axon. I think a better way to think of them is that they are living things themselves, humongously complex. It's very unlikely they are as simple as we make them out to be.
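For contrast, this is roughly the whole textbook abstraction being criticised, with made-up numbers purely for illustration:

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def artificial_neuron(inputs, weights, bias):
        # The entire "neuron": weighted sum of dendritic inputs, a bias,
        # and one fixed non-linearity on the "axon" output.
        return relu(np.dot(weights, inputs) + bias)

    # Arbitrary example values.
    print(artificial_neuron(np.array([0.5, 1.0, -0.3]),
                            np.array([0.2, -0.4, 0.7]),
                            bias=0.1))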
Has anyone tried aggregating top ML algorithms in each perceptual function (vision, audio, etc) and cognitive function into distinct cortices which are then governed by an AGI meta-algorithm?
If that method could yield any success, it wouldn't be any time soon. I imagine we'd need more advanced algorithms for each cortex first (garbage in garbage out) and modern ML is still a young field. However, I'm still curious if that could work. Modeling each neuron seems like a more arduous and perhaps unnecessary task.
IMO the one thing we really need is some reliable way to do online learning. If we can get NNs to the point where you can just say, somewhere in code, "make this decision NNly, based on this set of features" and get reasonably good decision-making whenever the decision is in some sense approximable as a function of the features, I think GOFAI suddenly becomes feasible again. We would then enter an area of risk where we don't really understand NNs and we don't really understand the algorithms we can build with them either, but these algorithms could still have a lot of power.
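A purely hypothetical sketch of what that might look like: a tiny online logistic-regression learner hidden behind two plain function calls, so ordinary GOFAI-style code can ask for a fuzzy decision and feed the outcome back later. Every name and constant below is invented for illustration; this is nothing like a production online learner.

    import math
    from collections import defaultdict

    _weights = defaultdict(float)
    _LEARNING_RATE = 0.1

    def _score(features):
        z = sum(_weights[k] * v for k, v in features.items())
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid

    def decide(features):
        # Approximate yes/no decision as a function of the features.
        return _score(features) > 0.5

    def feedback(features, outcome):
        # Online update (one SGD step) once the true outcome is known.
        err = float(outcome) - _score(features)
        for k, v in features.items():
            _weights[k] += _LEARNING_RATE * err * v

    # Usage: decide({"speed": 0.4, "gap": 1.2}); later feedback({...}, outcome=True)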
The aphorism used to be "hard things are easy and easy things are hard". Ie. arithmetic with large numbers is trivial, but opening a door or recognizing a label is the most difficult problem in the world. We're starting to close in on reliable, possibly prepackaged solutions for the latter; on the other hand, understanding the logic of intelligence turns out to have some teeth after all. Reality would do us a great favor if building genuine intelligence required actual understanding and knowledge of how to build reliable cognition; unfortunately our precedent suggests it's mostly done by kludging a half-baked barely-debugged motivation system onto a big heap of instinct-driven neural networks.
I'm worried some spark will get the bright idea to combine some approximating variant of AIXI and a neural network for theory selection...
To your first point I think that's actually where we are. Poking in the dark with no real understanding of the tools at our disposal. And the algorithms DO have power. We don't know WHY, except that we are emulating solutions given to us by biological evolution.
To your second point, that's basically what evolution did. Poked in the dark for several billions of years and ended up with solution machines (us).
Yes, I'm aware, that's in large part why it worries me. And I think you undersell the importance of online learning. NNs are far from "plug in and forget."
>> Now a self driving car does not need to have general human level intelligence, but a self driving car is certainly a lower bound on human level intelligence.
This is exactly the misconception that allows people to believe in fully autonomous cars.
Following a lane is easy (in good weather). Stopping when the car ahead does is easy. Route planning when you have detailed maps and GPS is reasonable.
Understanding what unusual things might be lying in the road and what to do about them is hard. Navigating a construction zone with a flag man directing you on the opposite shoulder requires a brain, not pattern matching. Opening the window and following verbal directions...
Coping with the range of scenarios we encounter when driving requires AGI.
I used to disagree, then one day I was showering and thought I heard something I couldn't possibly have heard (a train crossing bell). False positive: my audio neural net heard some sound and, with all the distortion, got it wrong. My other faculties quickly ruled it out and concluded that it was unlikely that a train was in my apartment, but also that there is no train crossing there either.
This problem (pattern matching, cross referencing experiences, deducing truth) mirrors the SDV problem pretty well. Can they tell that the plastic bag on the road is safe to drive over? Can they infer that if the truck in front of them is overflowing with poorly-secured tools and equipment/soon-to-be debris, to maybe keep more distance? If a human waves to indicate they should go around, or the road ahead is closed, or they need help, will the car understand?
The way to solve self-driving cars in a few years is to start building roads and infrastructure for such automation. Maybe even have a human driving multiple cars, computer assisted, simultaneously from a remote office.
I don't know if this effect has a name, but the reason you hallucinated a train crossing bell was because shower sounds a lot like white noise, and white noise contains all sounds. Your brain 'found' and selectively filtered a bell in it. It's somewhat similar to the Ganzfeld effect.
If you ever experience a new auditory environment like a factory floor or hospital, you will keep hearing sounds from that environment for a few days if exposed to white noise.
That sounds interesting; when I'm in the shower I hear a lot of things over time (all of them friendly, nothing aggressive strangely enough, so far anyway): music (quite complete songs, as if there is music on in the house), dogs trying to get in at the door, people calling me, people on the phone; when I turn the shower off, there is silence (the dogs are already inside as well). I have tinnitus and that might make it worse?
> If you ever experience a new auditory environment like a factory floor or hospital, you will keep hearing sounds from that environment for a few days if exposed to white noise.
I've also experienced this when a song got stuck in my head (I didn't know the title, so I kept looking for it) and I occasionally heard bits of it in white noise / high-pitched sounds.
Also, fun fact - this is roughly how vocoders (the Kraftwerk / Daft Punk voice effect) work! A carrier signal, e.g. white noise, is selectively filtered to match the frequencies in a modulator signal, e.g. a human voice.
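A rough sketch of that signal flow, assuming numpy/scipy (band count and smoothing are arbitrary; this is meant to show the idea, not to sound good):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def vocode(modulator, carrier, sr, n_bands=16):
        # Split both signals into the same log-spaced bands, measure the
        # modulator's envelope per band, and impose it on the carrier band.
        edges = np.geomspace(100, min(8000, sr // 2 - 1), n_bands + 1)
        env_sos = butter(2, 30, btype="lowpass", fs=sr, output="sos")
        out = np.zeros_like(carrier, dtype=float)
        for f1, f2 in zip(edges[:-1], edges[1:]):
            sos = butter(4, [f1, f2], btype="bandpass", fs=sr, output="sos")
            mod_band = sosfilt(sos, modulator)
            car_band = sosfilt(sos, carrier)
            envelope = sosfilt(env_sos, np.abs(mod_band))  # rectify + smooth
            out += car_band * np.clip(envelope, 0.0, None)
        return out / (np.max(np.abs(out)) + 1e-9)

    # e.g. vocode(voice, np.random.randn(len(voice)), sr=16000) for a noise carrier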
And we tend to drop these hallucinations at a higher level of cognition. Self-driving cars can also have a weirdness censor. No particular intelligence is required to ignore "faulty" sensor inputs or to not act on weird sensor combinations.
> The way to solve self-driving cars in a few years is to start building roads and infrastructure for such automation. Maybe even have a human driving multiple cars, computer assisted, simultaneously from a remote office.
My proposed law of online self driving car discussions: Every self-driving car discussion ends in someone inventing "trains" :-)
Self-driving trains are already widely used. London's DLR has been in operation since the 1980s.
While there are similarities I don't think they go that far. In particular, driverless train operation is only made possible by the fact that network operators and governments are fanatical about fighting obstructions and unauthorised crossings on railway track—to say nothing of unauthorised users!—regardless of whether the trains are automated or not. (In fact railways seem (I am not an expert) to be very vulnerable to disruption if these efforts fail, viz. the notoriety of train delays in the UK due to leaves on the line.) That level of expense and centralised control is certainly not coming to local, urban or back roads, though something like it might be possible on motorway.
>> This problem (pattern matching, cross referencing experiences, deducing truth) mirrors the SDV problem pretty well.
May I be a pedant?
"Pattern matching" is how regular expressions are normally used to match a string in a text. "Pattern recognition" is what, e.g., a machine vision algorithm does when it identifies a dog in an image from the features of the image. In a discussion about self-driving cars' AI capabilities, it is "pattern recognition" rather than "pattern mathing" that is relevant.
"Pattern matching" was also used by the OP in this thread:
>> Navigating a construction zone with a flag man directing you on the opposite shoulder requires a brain, not pattern matching.
> The way to solve self-driving cars in a few years is to start building roads and infrastructure for such automation. Maybe even have a human driving multiple cars, computer assisted, simultaneously from a remote office.
Even if the system took one driver-hour per vehicle-hour, with every vehicle controlled by one human driver at all times and each human driver controlling only one vehicle at a time, there would be significant benefits in terms of cost and availability from having all the taxi drivers working from a single ops centre, instantly assignable to any vehicle anywhere on demand. Any driver-hour-per-vehicle-hour reductions would be gravy on top of that. (However the system would certainly need some level of autonomous driving to at least halt the vehicle safely in the event of a communications failure or the like.)
(It's probably a coincidence that Musk happens to be building something which might be extremely valuable as a component of such a remote-driving system https://www.starlink.com/ , but perhaps it is not. 35ms of latency in each direction https://arstechnica.com/information-technology/2018/02/space... is probably not a showstopper in this application http://www.brake.org.uk/component/tags/tag/thinking-time , especially if it can be compensated for by a professional and tightly-monitored driver pool, reduced speed limits and/or failsafe locally-controlled AI braking. The need to rely on autonomous driving and/or a supplementary ground-based network in tunnels and under bridges would likely be a bigger drawback.)
Why are you trying to make a perfect car rather than one that is just enough better than a typical human?
More often than not, people wouldn't spot unsecured cargo until it falls onto the road.
The big problem in legislation is that we just don't know how bad humans typically are as drivers. Legislation is loose for us, but somehow machines are supposed to be perfect.
The issue is that we only have an informal definition of a "good driver", one that relies on fuzzy things like intuition, empathy, demeanor etc. when humans evaluate each other for fitness (or judge each other in court), and self-driving cars are forcing us to come up with an actual specification, which we've never had to do before. One of the more impactful consequences comes from the way we tied the exercise of civil liberties to the ability to drive oneself (at least in the US), and a fully defined specification for driving behavior will necessarily exclude many people that currently drive today and rely on driving for their agency. In fact we've kept the definition fuzzy intentionally in order to allow this. The existence of self-driving cars is forcing our hand because you can't empathize with an algorithm.
Courts do not empathize and that's where self driving cars will be ultimately judged.
The essential driving tests are fuzzy and weak not due to empathy. I've seen a lot of hate and rage directed at unskilled, mistaken or law breaking drivers. Or even just because. Road rage is a thing.
The driving exams are weak because transportation is torn between safety and liberty or availability. Good enough self driving car would shift that balance.
Additionally, companies want cheap drivers and workers to reach them on their own; they do not care about accidents beyond a certain base rate, even less so if a given driver breaks the law or annoys other users of the road.
Including companies in the transportation business.
A substantial majority of drivers - up to 80% - would rate themselves above average [1] so 80% of your target market thinks they're better than such a car.
And if such a product takes off you know eventually it's going to kill someone who is legitimately a top 0.1% driver, who's been teaching advanced driving to secret service limo drivers for 30 years and never been in an accident.
And if you ever end up in court, and jurors notice you've made many billions of dollars from this technology, arguments that the dead guy probably wasn't a very good driver anyway and statistically your system is above average might not win over jurors who are thinking "well I certainly wouldn't have crashed into that clearly marked concrete barrier in broad daylight"
So if you make a car better than what an average driver thinks of themselves, you win?
That is achievable by either making a really good self driving car or by fixing those bad perceptions of car drivers.
Not every country even has a jury based judicial system.
And a good lawyer would ask the jurors to imagine how they would react in a given failure scenario while ignoring the fact that it is a self-driving car, much like they're supposed to ignore the fact that a driver is of a specific group (not even a protected class) as not pertinent. The prosecution would have to show that the failure was caused specifically because it is a self-driving car. Burden of proof.
The objection is aimed at the justice system, not the jury.
Negligence suit against a company is a separate matter than a car crash. And much harder for jurors.
To calibrate juror expectations you might try to show footage from some related human crashes as part of initial presentation.
It depends on the judge whether that would be allowed of course. Definitely should be allowed against an expert to establish their expertise in judging car accidents.
First such case against a level 5 car would be very high profile.
This of course means that driverless cars are allowed on the road, ultimately meaning there would be some sort of car certification.
Indeed, the average driver is pretty bad, that's not the level to aim for with additional vehicular traffic. It is entirely possible we would be better off with limiting the most dangerous elements' participation, but people actually need to go places while computers don't.
If safety is your concern, how do autonomous cars stack up against actually enforcing traffic laws, speed limits, safe following distances, distracted driving? We do not know.
IMO we need fewer cars, not more. Better urban planning, remote working, improved public transport etc. are all different faces of the transportation problem; focusing on the current state of human drivers is just cherrypicking. We have autonomous underground trains today, we've had them for years. Telecommuting will always be safer and quicker than autonomous cars (but less sexy and less profitable than AI hype).
No, you won't get away with making a dangerous drone or bicycle, motorbike or scooter.
Cars are obviously somewhat more dangerous due to mass and size but a small vehicle at a high speed... is almost as dangerous if not more so. Check motorbike statistics.
See, cars are only a seductive topic because they're so common, and thus any risk reduction there should have high impact.
Should any other mode of transportation be nearly as ubiquitous and fast, we'd be talking about them.
For example, fast electric scooters are starting to get there.
> The way to solve self-driving cars in a few years is to start building roads and infrastructure for such automation. Maybe even have a human driving multiple cars, computer assisted, simultaneously from a remote office.
Most of the unexpected doesn't - human drivers handle it on reflex - but there is enough that does.
The manually directed traffic example would require some sort of workaround, for instance - a map update or a protocol to tell the car. Human drivers use informal protocols in such situations, including blinking signals and voice communication, as well as formal ones like a policeman's signals.
None of which actually requires AGI to handle, just educated guesswork to figure out. Think of it as a human driver in another country - they have to learn the behavior from the observed subset to not stick out, but it's not actually too hard or involved. It's not AGI, but it is sparse dictionary learning (a bounded set of allowed behaviors), which is hard anyway (see the sketch at the end of this comment). And a bunch of high-grade logical inputs as context.
Figuring out an outdated map, such as not driving into a lake or onto a damaged or unfinished road, can be done reasonably well without AGI, as can predicting potential accidents and dangers and pre-planning responses.
Fun part would be the car asking for directions with voice synthesis then trying to execute them. I want to see that working in practice, still not AGI really. It'd be really freaky the first few times it happens, especially if it's driverless.
Mind you, human drivers tend to do stupid and unlawful things because they're intelligent too.
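As a loose illustration of that "sparse dictionary learning over a bounded set of behaviours" framing (the data below is random and the mapping to driving is entirely hypothetical), assuming scikit-learn:

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    # 200 fake manoeuvre descriptors, each a mix of 2 of 6 underlying prototypes.
    prototypes = rng.normal(size=(6, 20))
    X = np.array([prototypes[rng.choice(6, 2, replace=False)].sum(axis=0)
                  for _ in range(200)])

    learner = DictionaryLearning(n_components=6, transform_algorithm="omp",
                                 transform_n_nonzero_coefs=2, random_state=0)
    codes = learner.fit_transform(X)

    # Each observation is reconstructed from at most 2 learned "behaviour atoms":
    # a bounded vocabulary rather than open-ended reasoning.
    print((np.abs(codes) > 1e-8).sum(axis=1).max())  # <= 2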
I'm not sure we can write off human subconscious behaviour as "not general intelligence". It might not be the conscious, self-analytical reasoning that we tend to think of as 'intelligence' but there's a whole bunch of things that humans 'just know how to do' after practice without being able to explain how we do it.
> Figuring out an outdated map, such as not driving into a lake or onto a damaged or unfinished road, can be done reasonably well without AGI
Yeah, I think a lot of that could be solved by having a hierarchy of different classifiers (maybe they do this already?), so instead of trying to identify every individual object in the world, you can just detect "something large in front of me" and hit the brakes. You don't have to care particularly what it is, you just have to not hit the thing.
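A toy sketch of that layered fallback, with invented labels, thresholds and a stubbed-out detector (nothing here reflects any real system):

    def fine_grained_classifier(frame):
        # Stand-in for a real perception stack; returns a label or None.
        return frame.get("label")

    def braking_distance(speed_mps, decel_mps2=4.0):
        # Stopping distance v^2 / (2a) plus a small margin (illustrative numbers).
        return speed_mps ** 2 / (2 * decel_mps2) + 2.0

    def plan_action(frame, obstacle_distance_m, speed_mps):
        label = fine_grained_classifier(frame)
        if label in ("car", "pedestrian", "cyclist"):
            return f"use the specific policy for {label}"
        # Fallback layer: unknown object, but "large and close" is reason
        # enough to brake; we never needed to identify it to avoid hitting it.
        if obstacle_distance_m < braking_distance(speed_mps):
            return "BRAKE"
        return "PROCEED_SLOWLY"

    print(plan_action({"label": None}, obstacle_distance_m=10.0, speed_mps=13.0))  # BRAKE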
Reflexes are indeed trained, but they're not what is defined as general intelligence.
They trigger in specific cases, both the low-level reflexes and the high-level ones.
Now if you required the car to gain new reflexes on its own then it's an entirely different thing and yes, that is probably a big chunk of AGI.
Essentially that means the car would have to write its own control logic in an essentially unlimited way, yet do so safely. That is not exactly necessary for safe travel.
I can think of two scenarios that played out in my area fairly recently that AGI has no chance of dealing with (in the next decades at least):
1. There was a burst water main on the country road near my house, and when I got there the water board had not long arrived. There was a man stopping traffic and trying to get people to turn around on a tiny road. No GPS updates, no notifications prior to this, just a bunch of pissed-off drivers having to do a U-turn at the behest of this puny human telling them to go back.
2. Roadworks with a detour that didn't work properly: they had put one of the yellow "Detour" signs in the wrong place, but since I knew the area, I went a different way. Will AGI even be able to read a detour sign and know what it means in reference to the journey I am currently trying to make?
I love the concept of AI, but it's an absolute joke these days with everything claiming to be "AI"... case in point: I have been looking at headphones lately and found a pair (Sony I think!) that claimed to have AI to help noise cancelling! WTF? Even the reviews said that it only worked with constant noise like engines... so they invert the sound waves to cancel each other out? Where's the AI?
My main takeaway from your story is that fully autonomous driving requires a universal deployment of a system which allows police and emergency services to deploy updates to the road network map in real time. (Sure, it would be nicer if the car was intelligent enough to understand a man waving at it, but that's the lower bound that I take from these reports.)
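For what it's worth, one such update could be as dumb as an authenticated closure notice that the car treats as a temporary map overlay. Every field and name below is invented for illustration, not a description of any real system; note the expiry, so closures that someone forgets to lift clear themselves.

    import time
    from dataclasses import dataclass

    @dataclass
    class RoadClosure:
        segment_id: str    # identifier of the affected road segment
        issued_by: str     # e.g. police, utility, roadworks contractor
        reason: str
        starts_at: float   # Unix timestamps
        expires_at: float  # closures expire rather than linger forever

    def is_closed(segment_id, closures, now=None):
        now = time.time() if now is None else now
        return any(c.segment_id == segment_id and c.starts_at <= now < c.expires_at
                   for c in closures)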
More than once in Texas I've seen situations like the above-described in which the waving man was a civilian, no police involvement at all.
I think that's the point of self-driving's limitations. There are many, many ways in which human drivers fail, to the tune of 35k dead per year. But there are also many, many, many ways in which human drivers handle situations safely that would result in self-driving cars either plowing straight ahead or stopping confused, either situation being worse than what happens today. I think these latter situations affect far more than 35k people per year. They might affect more than 35k people per day.
Death vs inconvenience is an obvious trade, self-driving isn't all or nothing. Safety features like auto-stopping and lane-keeping probably reduce that 35k number dramatically if they're more widespread, without taking human hands off the wheel for the other situations.
The builders blocking a road for a delivery of bricks are not emergency services. The Water company is not an emergency service. The actual emergency services do not have time to spend their time micro managing every ad hoc road blockage that will come about (and for damned sure someone will forget to turn the blockage off again).
You are not going to get perfect centralised road status information for driving cars.
100-150 years ago, the world bent hard to integrate cars into the existing traffic infrastructure. I don't see why we won't be able to adjust our infrastructure again to support new developments.
Yep, and the problem I have with that is that you can be sure it will be used for everything else as well, including tracking my movements in real time and using them against me at some point, or for insurance companies, or some other org I don't currently know about.
Those are rare circumstances. Self-driving cars will be deployed when they are safely able to handle 99.9% of driving and they have ways to call home for instructions from remote monitors when they encounter a rare situation, who can then click on an interface to specify where to go. That's how Waymo does it currently.
It's not even perfectly clear that human level intelligence is enough for driving. We accept a lot to get it over the line: 1.25 million deaths and 30-50 million injuries. We broadly accept the benefits outweigh the negatives (lives extended, saved, flexibility of social organisation etc..).
But still no-one thinks the average driver is good at it.
It very much depends on what exactly we mean by 'self-driving car'. The Wikipedia description of the levels of autonomy doesn't help much. Level 5, steering wheel optional, gives the example of a self-driving taxi, but we pretty much already have those in highly controlled environments. Assumptions about the situations in which such a car would be expected to be autonomous are critical to clear communication and analysis of this issue.
An autonomous car expert saying we are 30 to 50 years from fully autonomous vehicles by itself doesn't mean much when we actually have fully autonomous vehicles on some roads, for some tasks, already[0]. Equally, Musk wasn't saying we would have Level 5 driving capability in all situations by 2020, he was just talking about highway cruising. The exact parameters of the driving situations we are talking about make all the difference. Both Musk and the autonomous vehicles expert can be right, depending on the context.
> Equally, Musk wasn't saying we would have Level 5 driving capability in all situations by 2020, he was just talking about highway cruising.
Actually, he specifically said that by this time the car would be able to do a cross-U.S. trip autonomously, which I hope you understand consists of more than just highway driving.
As far as the T-Pod driverless truck thing you linked:
> An operator, sitting miles away, can supervise and control up to 10 vehicles at once. The T-Pod has permission to make short trips - between a warehouse and a terminal - on a public road in an industrial area in Jonkoping, central Sweden, at up to 5 km/hr, documents from the transport authority show.
A remotely supervised truck permitted to travel at 3 mph in a highly controlled and scripted path - warehouse to terminal - is hardly what anyone thinks of or means when we talk about autonomous vehicles.
I think largely it is defined as "all situations that can be dealt with by humans". I don't think this needs to be said, because obviously no one expects an autonomous vehicle to be able to do anything about being hit with say, a thermonuclear warhead.
A while back a friend of mine rushed through the city, always a bit above the speed limit and pushing the limits of what is and isn't legal driving behavior (I sat next to him). He drives very safely and we made it on time for our meeting even though the city streets were quite busy.
Afterwards I thought: a self-driving car would never have done that, because its tolerance level and its non-existent "judgement" of the situation would not have allowed for it.
And that would actually be one of those cases where a self driving car would be an improvement over human drivers who feel they are able to 'bend the rules' to satisfy their personal needs instead of simply leaving on time.
I don't know, depending on your definition of improvement, yes. But would you want to live in a world where humans are bound to rules 100%? Where "bending the rules" to the smallest extent is punished?
It's OK for a human to take over in those cases for level 4. Maybe you have some fold-out control pad to drive at low speeds through the complex situation or a remote driver controlling many cars as needed.
Great point, but for this to work you still need the car to know when it doesn't know enough to continue. This is not trivial at all with neural networks. There's going to be a trade-off between giving control back to the driver too much or too little, and that's going to be highly safety critical.
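One very simplified way to picture that trade-off (the entropy threshold below is exactly the safety-critical knob in question, and all numbers are made up):

    import numpy as np

    def softmax(logits):
        z = np.exp(logits - np.max(logits))
        return z / z.sum()

    def entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return -np.sum(p * np.log(p))

    def should_hand_over(logits, threshold=0.6):
        # Hand control back when the output distribution is too uncertain.
        return entropy(softmax(np.asarray(logits, dtype=float))) > threshold

    print(should_hand_over([4.0, 0.1, 0.2]))  # confident -> False, keep driving
    print(should_hand_over([1.0, 0.9, 1.1]))  # uncertain -> True, hand over

Lower the threshold and the car nags the driver constantly; raise it and it keeps driving while confidently wrong, which is the failure mode that matters most.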
Why do you want a car with an inscrutable decision logic?
The first thing that has to happen is a black box that has to explain the why of any disturbance or crash. It is notoriously hard to actually extract the reasons for any action from an ANN...
I'd argue you are massively underestimating the complexity of AGI (or overestimating the complexity of self driving).
> Navigating a construction zone with a flag man directing you on the opposite shoulder requires a brain, not pattern matching. Opening the window and following verbal directions...
Both of those things seem very much in the realm of what's possible with today's AI tech.
I'd bet a lot of money that self-driving cars will predate AGI by quite a bit.
> I'd bet a lot of money that self-driving cars will predate AGI by quite a bit.
I think that may be true, but I don't think they will get there merely by on-board computers/radars/lidars/machine vision. I don't think Level 5 autonomous driving is possible with just the aforementioned technologies. The laymen have a vastly inflated sense of what today's "AI tech" is and its capabilities, which boil down to curve fitting. At best, we may get geo-fenced cars that operate under highly controlled conditions.
Whilst I think self-driving is a great problem to solve and is probably helping push the envelope of AI, I think using AI to solve this problem is fundamentally the wrong approach. I think we may need to take a leaf from aviation if we really want autonomous vehicles to become a reality. Airplanes don't use machine vision to land for instance, they use radionavigation. I think cars having transponders of some type to detect other cars, and defining where drivable paths are based on some radio signal supported by the infrastructure is the way to go for reliable and robust autonomous driving.
I'm pretty sure that the guy who was watching a Harry Potter movie in his car and was decapitated for his troubles[0] would have been happy to send a baby alone in the same car.
Not everybody does risk-assessment the same way, and humans are really, really bad at risk assessment.
What if initially the AI could just hand over control to a remote human agent temporarily when it finds itself in an unsure state?
Jesús, take the wheel!
Self-driving cars are not realistic in the form the general audience understands. Most people think that an autonomous car is a regular car with some AI, but that's not the case; it is not a solution to the general problem. Regular cars are not just an advanced form of the horse-drawn carriage; they are far more different and complex than people imagine, because the “car” is part of an extremely complex infrastructure that completely replaced the old one ~70 years ago: gas stations, light poles, traffic rules, changes in laws, education, etc.
Autonomous cars must introduce a new global infrastructure that will be (inevitably) incompatible with the current one. I think that man-driven vehicles won't be allowed on AI-roads. An AI-driven car doesn't require optical sensors to detect road signs, traffic lights etc.; these things will be replaced by multiple invisible RF-modules. And so on. AGI is just not needed here; look at ants/bees, they are “dumb” but build and maintain complex things.
Also, we don't have the same kind of tolerance for deadly accidents as before. Look at airplanes: it was rather dangerous to fly in the 1950s (5.2 deaths per 100,000 hours), but there was still an industry around flying.
The fundamental problem with autonomous vehicles is that I haven't seen (and can't think of) a gradual rollout plan. As it stands, the idea seems to be, as soon as the AI is good enough, autonomous cars will flood the streets. But in the mean time, zero.
Well, that's not going to work, and not just for technical reasons.
That reminds me of IPv6. There is no technical challenge to implementing it, but we're still running out of IPv4 addresses, simply because the designers had no plan for gradual adoption. (Dual stacks was not such a plan, in fact dual stacks was an effective way of delaying adoption compared to any other design choice.)
People (users, insurers, legislators, regulators, lenders, investors ...) won't be confident with autonomous cars if they have no practical experience to look at to trust them. And maybe they will, but at the first significant / publicized accidents, they will be demonized, banned or penalized.
Maybe people working in ML separate AGI from consciousness / self awareness / sentience but I don't think anyone else does. (If so, what is general non-human intelligence?)
Saying AGI is around the corner is like saying "cloud backups for consciousness" is just around the corner. We don't yet:
1. Understand the location of consciousness (if that even makes sense). We think it's in the brain area as that seems to be our CPU. Our knowledge stops there.
2. Understand the physics of consciousness. Is it some sort of emergent-y brainwave pattern thing? Again, pure speculation.
So, we're assuming:
1. Machine learning is in any way analogous to what processes our consciousness arises from. Which is just horse shit, repeated because someone decided to call college level math a "neural network".
2. If we do enough ML simultaneously we'll cross some threshold into sentient-computer land.
3. Or, there exist "magical algorithms" that will do the same.
A self-driving car is as self-aware as a calculator. It's not going to start learning, in earnest, or solving problems at the level humans can. Ever.
Our level of problem solving requires conscious awareness (theory of mind, imagination, et al). We won't be able to recreate that until we understand our own. There's a distinct possibility that is, by definition, unknowable.
But, hey, let's keep bullshitting greedy investors and scaring technophobes. The computers are taking over!!
The idea the human brain must be reverse engineered in order to achieve AGI is speculation. It might be true but we don't know that. To quote "Artificial Intelligence - A Modern Approach":
> The quest for “artificial flight” succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics.
Given the current state of the art in IT, anything close to an AGI system would most probably need capacity close to a couple of datacenters (maybe with a nuclear reactor next to them to provide power).
Before stating "AGI was delayed" I would double-check what's running in any massive DC complex built in the last 5 years and, more interestingly, how much power they are using compared to standard datacenters.
If AGI is actually "discovered" at some point, it will be spotted via leaked information about power consumption (any money dumped into that kind of technology would probably be seriously disguised).
I think you are being 'unfairly' downvoted because AGI is very commonly used on here, or at least it comes up a lot.
But yes, AGI is the holy grail of AI research. Currently all AI successes have come from training computers to be really good at one thing: chess, Go, driving, etc. And in many of these domains AI has outperformed humans by a large margin.
But take those AI systems and ask them to do something else outside of their specialty and their knowledge doesn't transfer. In essence they are really just optimized functions for a very specific set of inputs.
AGI would be an AI that increases in intelligence across multiple domains and can respond to novel problems (like a human)
note: you could argue the real definition of AI is actually AGI but that is a different discussion :)
Right, this is why Turing's "imitation game" test matters. The humans won't accept that a machine is intelligent, whatever the machine does it's just another clever trick that isn't "real" intelligence. So you have to humiliate the humans. You have to rub their faces in the dirt or they'll disregard all sense rather than admit that they weren't all that. It isn't enough to beat them at chess, at Go, at a million other problems, you have to pretend to be a human so well the other humans don't know you're a machine and only then are they forced to confront the reality that they're just a half-arsed compute substrate made of warm biological soup and not after all anything special.
The animal intelligence researchers face a similar problem.
The idea that human intelligence is defined by skill in board games or terminal chat is a hilariously inept failure of human intelligence.
Turing's test is useless for real AGI, because human-level intelligence is embodied. That means a human-equivalent AGI has to be able to improvise solutions with physical objects, parse body language, generate body language, understand social and cultural expectations in different situations, and do all of this while recognising and generating everyday speech in at least one human language.
These are all defining basic skills for humans. Literally every school age and over human can do them at a basic level. And gifted or exceptional humans can take these "simple" challenges to very advanced levels.
A chatbot doesn't come close to approaching them. Nor does a chess-bot or a go-bot. Nor does an equation solver.
AGI will fail until AI research stops trying to build a better AI research nerd, and starts trying to understand what applied human intelligence looks like in the wild.
> Literally every school age and over human can do them at a basic level
This is the other side of the same coin. No. These "defining basic skills" aren't defining anything except if you're going to be a fascist who eradicates people that defy their beliefs about embodiment.
Understanding social expectations is tricky and lots of humans can't do this at all. They require lifetime care as a result, but they are still definitely human and there's no reason to believe they lack general intelligence although it may be stunted if their problems make it hard to undertake activities that let intelligence thrive.
Although human infants do try to bootstrap language, the bootstrap process will not spontaneously produce a working human language from zero in the absence of exposure to existing human language. Humans brought up without language (typically through extreme circumstances because doing this experimentally would be unethical) create a proto-language but not a full-blown human language. We _think_ that a few generations of humans otherwise unexposed to external culture would turn this into a full-blown human language through some mechanism akin to creolisation but we can't check because - as mentioned - it would be grossly unethical. So, there definitely are humans (though not many) who don't recognise and generate "everyday speech" even taking that very broadly to include sign and the procedure used to talk to deaf-blind people.
Elderly people also often cease to be able to generate speech, or give no sign they continue to understand it, while continuing to apparently be intelligent.
> AGI will fail until AI research stops trying to build a better AI research nerd, and starts trying to understand what applied human intelligence looks like in the wild.
I fully agree with your comment, just wanted to point out that imho the quoted paragraph also says a lot about how most of today's researchers totally disregard "human studies" (the "human intelligence in the wild" that you mention), as long as it's not quantifiable/turned into mathematics it has almost no value for them. Which, of course, it's a total bastardization of what even science-oriented people like Popper were writing about.
Not sure that's what you meant, but for me, yes: the moment a "machine" is able to make genuine jokes, or becomes self-aware of its "humiliating" us, then I'll agree that we have reached true AGI.
To reiterate, I can't stress enough how important it is for an AI system to possess "the ability to make jokes/to understand double entendres"; right now that's what makes companies like FB or Google perfect targets for the media and public opinion (that and their total disregard of privacy issues, of course). A true AGI would have to understand the meaning of a green frog or of a cartoony bear by context alone.
Of course we aren't close to AGI. Maybe I should be clearer, I expect that such an AI would pass the Turing test more or less inadvertently in the course of other activities. We might even briefly be unaware of it because of this. From its point of view "humiliate humans by being smarter than them" is not a thing, just as I'm sure sparrows don't aim to "humiliate humans by being able to fly unaided".
But the moving goal posts are a symptom of the problem that inspired Turing here. The humans will claim they're still the only ones with real intelligence even after they fail that test, but they'll know what they're doing. As we see in this thread, today plenty of people are convinced that they aren't moving the goal posts, even though they are. My suspicion is that the humiliation of the "imitation game" stops that, even as the outward pantomime continues. Humans will keep _saying_ they know they're special but they won't really believe it any more. Good.
When a Pope insists they are certain God exists, I know they're lying, whereas when some little old lady in a sleepy rural village insists the same perhaps she really believes. The Pope has seen enough to know he isn't sure, lie to me as he might.