In other news, statistics from 2014[1] suggest that since the autonomous car crash was reported yesterday (11:29pm EST, Mar 24), 9 people were killed by human drivers under the influence of an intoxicant of some kind.
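Back-of-envelope, for anyone who wants to sanity-check that figure (a rough sketch only; the ~10,000 deaths/year input is my own approximation of the annual alcohol-impaired-driving toll behind [1], and the ten-hour window comes from the discussion below):

    # Expected impaired-driving deaths over a short window, assuming a uniform rate.
    # The ~10,000/year figure is an approximation, not the exact number from [1].
    DEATHS_PER_YEAR = 10_000
    HOURS_PER_YEAR = 365 * 24

    def expected_deaths(elapsed_hours):
        return DEATHS_PER_YEAR / HOURS_PER_YEAR * elapsed_hours

    print(round(expected_deaths(10), 1))  # ~11: the same order as the 9 cited above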
Autonomous vehicles will have challenges that human drivers do not -- like software vulnerabilities -- but those are problems worth overcoming if it means that poor decision making by humans can be reduced.
Seeing as nobody was seriously injured and it wasn't the fault of the autonomous vehicle (as reported by other news outlets), I think it's fair to bring it up. Zero deaths in the past six months vs. nine in the past ten hours. And that isn't even all vehicle deaths caused by humans in that timeframe.
There are ~260 million automobiles on the road in America. There are a statistically insignificant number of self-driving automobiles. I _suspect_ that self-driving vehicles are going to reduce traffic accidents when deployed en masse. However, at the moment it's comparing apples and oranges.
Toyota thinks they'll need 8.5 billion miles of autonomous driving before they'll have statistically useful data about the safety of autonomous vehicles. Of course, by the time anyone gets to the 8 billionth mile of driving, the technology will have matured considerably, rendering the early data moot. So it will be a long time before we can start making data-based proclamations about how safe autonomous vehicles really are.
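For a sense of where a number like 8.5 billion comes from, here's a rough Poisson-style sketch. This is my own illustration, not Toyota's methodology: the ~1.2 fatalities per 100M miles baseline is roughly the US human-driver rate, and the 20% precision target is an arbitrary choice.

    # Miles needed to pin down a fatality rate to a given relative precision at ~95%
    # confidence. The baseline rate and precision target are illustrative assumptions.
    FATALITIES_PER_MILE = 1.2 / 1e8

    def miles_needed(relative_error, z=1.96):
        # For a Poisson count N, the relative standard error is ~1/sqrt(N),
        # so we need roughly (z / relative_error)^2 observed fatalities.
        events_needed = (z / relative_error) ** 2
        return events_needed / FATALITIES_PER_MILE

    print(f"{miles_needed(0.20):.1e}")  # ~8e9 miles, the same ballpark as Toyota's figure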
Escalator accidents kill 17 Americans a year. So I wonder: if the average American traveled as many miles per year on escalators as they do in cars, would the death/injury rates be higher or lower?
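If someone actually wanted to run that comparison, it comes down to exposure. Here's a parameterized sketch where everything except the 17 escalator deaths per year is a placeholder you'd need to replace with real data:

    # Deaths per 100 million miles of travel. All exposure inputs below are
    # placeholders, not real statistics -- only the 17 escalator deaths/year
    # comes from the comment above.
    def deaths_per_100m_miles(deaths_per_year, riders, miles_per_rider_per_year):
        return deaths_per_year / (riders * miles_per_rider_per_year) * 1e8

    print(deaths_per_100m_miles(35_000, 220e6, 10_000))  # cars, with rough US-scale guesses
    print(deaths_per_100m_miles(17, 200e6, 2))           # escalators, with made-up exposure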
How do you define "the fault of the autonomous vehicle"? If you take the time to read through Google's disclosure reports, you'll find that they had virtually zero accidents, if any, that were the fault of the tech. In fact, I believe every accident that could conceivably be blamed on Google was attributed to the human Google driver. If you read a little closer, though, you'll see that these human-operator errors all arose from the Google driver taking control of the car because they thought the autonomous mode would lead to an accident.
In short, accidents of all varieties are hard to classify. In this case, though, you seem to vastly underestimate the numbers involved. According to the latest reports [0], the Uber program is only driving about 20K miles a week. Furthermore, the rate of human interventions is about once every MILE.
Normal human driving is bad, of course. But how many times in the last 1,000 miles you've driven have you needed your passenger to intervene to prevent an accident?
When reading Google's disclosure reports, note that the human drivers are instructed to preemptively take manual control in situations where an error could have consequences, so the accident-rate statistics do not reflect the system's behavior in the situations that would really test its capabilities.
Also, Uber outright refused to comply with California law and register their self-driving car program, and it's looking more and more like the reason they did that is so they didn't have to publicly disclose those disengagement numbers.
I suspect you're right. However, a higher number of disengagements could just point to a more cautious approach, more difficult environment, etc. Hard to pin it exactly to maturity.
For example, I believe Waymo cars drive mostly on a fixed number of preplanned routes. You would expect limited disengagements over time, as it's got tons of data on those specific routes.
I can't find a complete list, but as I recall, the majority of the accidents consist of Google's vehicles being rear-ended while slowing down to yield at intersections.
>In short, accidents of all varieties are hard to classify
It's probably fair to say that, in the course of a day, many of us are in driving situations where we avoid potentially dangerous situations whether or not it might technically have been someone else's fault were there an accident. Most of us (hopefully) learn to drive defensively to minimize the likelihood of an accident, not to take whatever space on a road we have a "right" to.
Who cares about other people's safety? I value my own safety and time over being legally correct on the road. If I need to break traffic laws to avoid hitting another vehicle, or being struck, that's easy. Being in a collision is a huge time-suck, even if I'm in the right. (That said, I'm not going to intentionally cause other vehicles to collide among themselves because it enhances my safety or reduces my time spent)
I was originally going to write "safety of themselves and others", but that implied a level of nuance I didn't want to get into explaining.
> If I need to break traffic laws to avoid hitting another vehicle, or being struck, that's easy.
IMO, based on anecdotes and observation, not enough people do that, because of the "it's not my fault" attitude. The first consideration is fault, the second is safety.
You'd also have to factor in the times the human operator has to take over for the autopilot to prevent accidents.
Many articles and press releases make it look like self-driving is "already here" when it's actually still very much a work in progress and not at all a solved problem.
I understand statistics, but that doesn't make the distinction trivial. Nobody is suggesting (I think) that we should ignore any problems these vehicles represent. That we even need to "argue" this is kind of crazy: lots of people die, get into minor/moderate/severe car accidents that cause injuries, etc., and there is no computer at fault. Getting into a car accident that isn't your fault is basically random.
So why pretend it's not important to cite how many people die today? We write laws and rules for how to operate vehicles, and those rules can be observed and followed by a computer. It's very likely that at a certain point those computers will exceed our aggregate ability to prevent deaths.
There are likely also wins we could get from vehicle design by not having to accommodate a human driver. Take, for example, the windows: a person can be ejected through them, and debris can easily penetrate them.
It's an apples-to-oranges comparison at the outset. Autonomous cars are overseen by humans, drive only in good weather, and do almost all their driving at low speed. At the very least, you should be comparing human data at low speed and in good weather.
I agree, but it's more misleading to have headlines about a self-driving car that crashed without mentioning it's not the self-driving car's fault (and without mentioning nobody was even injured).
It's too early to know who is to blame, or even if blame can be assigned so cleanly. Meanwhile, we have Uber's own reports that say that human drivers have to physically intervene during autonomous operation at a rate of one intervention per mile: http://www.recode.net/2017/3/16/14938116/uber-travis-kalanic...
Uber is pretty middle of the pack as far as the state of their progress developing this technology is concerned. One intervention per mile on average isn't any kind of scathing indictment of Uber's autonomous driving program. The sensationalism of Uber's leaked intervention report mostly just illustrates a disconnect between public perception about where the technology is at, and where it's really at.
There are 23 other companies with permits to test autonomous vehicles in CA and they're all doing pretty much the same thing. Waymo is way out in front, Cruise/GM is doing well, but there are maybe a half dozen large, deeply invested players (including Tesla) not nearly as advanced as Uber.
It's possible, of course, that Uber was the victim of nasty coincidence. But bad accidents like this are rare. Most people go many hundreds of thousands of miles between rollover collisions or other equally severe accidents. Uber's self-driving car program in Arizona is only a couple of months old. It's not a bad presumption that this is not purely fate.
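How suspicious this looks depends heavily on the inputs, and a quick Poisson sketch shows the sensitivity. Both numbers below are assumptions: the miles-between-severe-accidents figure is loosely based on the "many hundreds of thousands of miles" above, and the fleet mileage is a guess extrapolated from the ~20K miles/week figure cited upthread.

    import math

    # Probability of seeing at least one severe (rollover-class) accident by chance,
    # modeling severe accidents as a Poisson process. Both inputs are assumptions.
    def p_at_least_one(fleet_miles, miles_per_severe_accident):
        return 1.0 - math.exp(-fleet_miles / miles_per_severe_accident)

    print(p_at_least_one(fleet_miles=160_000, miles_per_severe_accident=500_000))  # ~0.27

Plug in different assumptions and the answer swings between "freak coincidence" and "about what you'd expect", which is part of why the mileage and disengagement disclosures discussed above matter.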
Yes. It's entirely possible that the other driver was 100% to blame, but a rollover is a very serious accident even if there were apparently no serious injuries in this case. This wasn't some happens-all-the-time fender bender.
If the denominator doesn't matter, then there have been 0 accidents of any kind in the last femtosecond due to humans. Clearly that means autonomous vehicles are a menace, right? Or maybe it's rate of accidents not the absolute number that matters?
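To make the denominator point concrete, here's a small sketch. All the mileage figures are illustrative assumptions, not measured data:

    # Compare event *rates*, not raw counts. All figures below are illustrative.
    def rate_per_100m_miles(events, miles):
        return events / miles * 1e8

    print(rate_per_100m_miles(events=9, miles=3.6e9))  # human drivers, ~10 hours of US-wide driving
    print(rate_per_100m_miles(events=0, miles=2e4))    # the tiny autonomous sample: 0, but over so few miles it tells us almost nothing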
What are you even trying to prove? Yeah, the denominator matters man! I agree! Statistics are great and I don't think I was trying to say that "0 is less than 9." I was simply pointing out that this article is stupid and doesn't actually prove anything, which is why the parent brought this up as "in other news."
But then again, you're defending a clickbait headline about a robot getting into an accident that it may or may not be responsible for that didn't even result in someone being injured.
I think the point here is not that the self-driving software is already better than humans, but rather that the upside of cars driving themselves is so big that it is worth pursuing until we get it right.
I agree completely with that statement. My fear is that irrational reporting and FUD on behalf of those with a vested interest in maintaining the status quo as long as possible will set back automation by a significant number of years.
Some hype is useful, to be sure, but it is a delicate balance with avoiding a situation where the pace of innovation has been oversold.
One thing I just thought about: Musk said more miles train a better AI. Supposing this yields truly valuable results, all SDVs may become near-perfect drivers, while humans will remain spread along the spectrum from dangerous to safe driver.
ps: thanks for reminding me that statistics are hard to get right.
>Autonomous vehicles will have challenges that human drivers do not
Wait a minute: I don't think the argument is whether machines can drive better than humans. It's whether these companies that are desperate to make money (Uber, Tesla) are pushing potentially dangerous technology on to society before it's ready.
> The argument should be whether autonomous cars from Uber can drive better than humans.
How much better? 1% over median? That means it's still worse than 49% of the drivers on the road. Statistically, the automated driver would be "better" for society at large, but it's still going to cause a lot of accidents and kill a lot of people.
I also wonder if there might be even more accidents with an automated system which is still in the fat portion of the bell curve, since there will be no "good" drivers to reduce conditions ripe for an accident in the first place.
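One way to poke at both of these points is a toy simulation. The lognormal spread of per-driver risk and the "just above the median" automated driver are purely assumptions for illustration:

    import numpy as np

    # Toy model: per-driver accident risk varies a lot. What happens if every car
    # drove like a system that's only slightly better than the median driver?
    # The lognormal spread and the 49th-percentile risk level are assumptions, not data.
    rng = np.random.default_rng(0)
    human_risk = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # relative accident rate per driver

    automated_risk = np.percentile(human_risk, 49)  # "better than 51% of drivers, worse than 49%"

    print(human_risk.mean())  # current fleet-wide average, dragged up by the riskiest drivers
    print(automated_risk)     # fleet-wide rate if every car drove like the automated system

Under this (assumed) heavy-tailed distribution, a system only marginally better than the median still cuts the fleet-wide rate substantially, because the worst drivers cause a disproportionate share of accidents; with a flatter distribution the gain would be much smaller, which is really what both of the questions above hinge on.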
The much stronger defense in this case is to just cut to the chase and point out that, at least according to the police, the self-driving car was not the one at fault here - it was the driver of the other car blowing a yield sign.
Wait, drunk driving is a willfully criminal act; how is that comparable or relevant here?
We already have things like taxis, Uber and Lyft and people still get drunk and drive under the influence of alcohol and kill people.
Why would ride services with autonomous vehicles change that?
If someone is going to get drunk and drive a car rather than calling an Uber with a human driver, they are no more likely to get drunk and call an Uber just because Uber has driverless vehicles.
The target for self driving cars is ride services not individuals.
>We already have things like taxis, Uber and Lyft and people still get drunk and drive under the influence of alcohol and kill people.
There are many reasons for that, from being cheap (I already own a car, I won't pay some random guy money!) to mere convenience. But such incidents are bound to become fewer once self-driving cars make up the majority of cars on the road and can take over from drunk drivers.
>The target for self driving cars is ride services not individuals.
In the short term for Uber, maybe, but not in the long term for everybody else. I doubt that traditional car manufacturers like BMW, Audi, Honda, Jaguar, or Mercedes-Benz are pursuing self-driving cars merely to offer ride-sharing services; after all, these companies are in the business of selling cars to individuals.
You seem to be assuming two things:

1) That we will see the demise of self-ownership of cars, when you state:
>"once self-driving cars make up the majority of cars on the road"
2) That car makers will continue to market car ownership to individuals, when you state:
>"I doubt that traditional car manufacturers, like BMW, Audi, Honda, Jaguar or Mercedes-Benz are pursuing self driving cars merely to offer ride sharing services, after all these companies are in the business of selling cars to individuals"
The problem with inebriation is that people don't exercise good judgement; that's exactly why they get behind the wheel and drive: because they "think" they are OK.
Someone who has a car with a "self-driving mode" can just as easily get behind the wheel and operate the car in manual mode, because in their impaired state they also "think" they are OK to drive.
Self-driving cars don't solve the problem of impaired judgement.
>That we will see the demise of self-ownership of cars when you state
I stated no such thing; a car having self-driving capabilities has literally nothing to do with the ownership/proprietor status of said car.
>Someone who has a car that has a "self-driving mode" can just as easily get behind the wheel and operate the car in manual mode because in their impaired state they also "think" they are ok to drive.
You are still thinking way too much in the present. Try to imagine a future where pretty much every vehicle on the street is autonomously self-driving and the legal framework has changed so that no human safety driver is required anymore. In such a setting it would become the norm to do something other than driving while traveling in a vehicle, like being productive or having some leisure time. Sure enough, there will still be people who want the "thrill" of controlling their vehicles themselves, but manual driving will probably be reserved for special lanes/circuits, due to how rare it will become and how negatively it impacts the performance of the fully autonomous traffic.
We won't get there overnight; we will only get there through many small baby steps until we are actually there.
The real issue here - yet to be resolved at this time - is whether Uber is going about its development of autonomous cars responsibly, as Waymo, for example, appears to be doing. Generalities offer no answer to that question.
----
[1] https://www.cdc.gov/mmwr/preview/mmwrhtml/mm6430a2.htm