From a strategy standpoint, Uber is in a desperate position. After the Google lawsuit, their self-driving story is not working. Their existing business is losing money in most places with no end in sight. Top-level executives are leaving, which is a sign that they can't imagine a positive course in the medium term.
The company is under extreme pressure and has proven over and over again that it is willing to cut corners when it comes to legal compliance. It should be frightening to anyone on the streets that Uber managers are making decisions to put self-driving cars on the road. They have neither the culture, the patience, nor the expertise to make decisions affecting life and death.
I'm surprised we don't hear more about Mercedes. I don't know how much of a story they have when it comes to self-driving cars, but they already build cars, and they already have a well-established rent-by-the-minute business: car2go.com.
If and when they figure out self-driving cars, all they have to do is slowly replace the human-driven car2go fleet with self-driving ones and they'll become a leader in that market without any fuss!
Mercedes comes from a tradition of 'keep your mouth shut and then deliver a premium product.' Self-driving cars are not a market where 'move fast and break things' is an acceptable consumer philosophy and Uber demonstrates how tricky it is to reposition your brand from naughty to trustworthy. Mercedes, by contrast, can just run an ad in a year or two announcing that the new Z-Class or whatever it's called can drive you around in any weather and be believed by the vast majority of consumers.
Absolutely agreed, and the kind of credibility Mercedes has in the market takes decades of excellent execution to accrue and only a few years to fritter away.
Given that the gap between Mercedes and the run-of-the-mill saloon has been closing rapidly in real terms (when I was a kid cars were serviced every 5-12,000 miles; these days it's not unusual for service intervals to be more than twice that, with 7-year warranties and generally higher build quality/reliability across the board), their brand of understated excellence is a key part of why they can still charge so much.
Every single carmaker will offer self-driving as an option at first, then as a standard feature.
They'll also come with software that allows you to flip a switch and see your car take off to earn some money whenever you tell it (or it is reasonably sure from experience) that you won't need it for a while.
The manufacturer will take care of billing and insure you against vandalism for a 30% cut.
Meanwhile, Uber is but the faint memory of a bro-themed Ponzi scheme. They have nothing that would allow them to remain relevant: no reputation, no technology, and minus $50 billion in capital. All they got to scale was the one part that's irrelevant: managing drivers.
I think it's far more likely you'll just be able to lease a timeshare on a car than this complicated scenario people bring up where a company sells you a car and then rents it back from you. The tax implications of this scheme alone are pretty obnoxious for someone who isn't doing it as a primary or even secondary income.
But this way it's the car buyer who assumes the risk if the timesharing model turns out to be less lucrative. The manufacturer doesn't want to hold any more inventory than it already has to.
Probably other companies will step in to manage the inventory and match cars to users, just like existing rental car companies.
Although in the short term, I can see the appeal of selling cars with a built-in ride-share mode. Many current car owners are likely to continue to want to own their own cars. If manufacturers can provide this feature out-of-the-box, it may help drive car sales to people who want to own cars and get "free money".
At that point, why would I own a car? Insuring against vandalism is all very well financially, but if my car needs to be repaired after a random person takes a ride in it I'm still going to be without my car for a few days.
At that point, why wouldn't I just pay the car manufacturer for "car use on demand"? Suddenly I don't have to care about having a garage at home, repairs, upgrades, etc. etc.
I can see the argument for still owning your own car, but owning it and renting it out seems like a halfway measure that will satisfy no one.
Assume you want to get to work at 9am, and so do all your neighbours. Is it okay for you to wait an hour until a car is free to drive you to work? Or imagine that in previous times you bought a car when you got a bonus payment, whereas now you pay the car service each month. If you get sick and can no longer afford the monthly car service, you're stuck; if you owned the car you wouldn't have to worry about that, as it would have been your asset with no fixed monthly payment.
Renting might be convenient, but in the long term owning something is cheaper and more convenient.
Why would the rental company not just scale up to match morning demand? Even if a car is only used for an hour a day the cost of having it parked up somewhere for the rest of the day would be very low.
And you're talking about a car as if buying it is a one-off expense that'll never be added to. That's not true at all: insurance, maintenance and repairs are all ongoing costs that you'd be similarly unable to afford if you lost your job.
Re: morning demand: The same could be said for public transport or taxis. And do they scale up to match the morning demand? Often not; you have to squeeze in to catch public transport and wait for a taxi.
Insurance, maintenance and repairs are not monthly fixed costs. You can pay insurance e.g. once a year. And maintenance/repair really depends.
Why would car insurance be a once a year cost but car service would not? I don't see any reason why a company couldn't charge once per year. That said, many consumers are unable to pay for a year of anything upfront and choose to pay monthly. Like car insurance.
> And maintenance/repair really depends.
Which is a huge problem! Again, if you don't have a ton of money a surprise repair bill can absolutely wreck your finances.
Also, pretty much every public transit system I know scales up service during peak times and winds it down at other times of day. This isn't a new concept.
Not really. Most systems run extra buses/trains/etc during the morning and evening rush hour. This is true even in smallish US cities that have lackluster service.
It's certainly possible that public transit is still crowded despite this, but I've never heard of anywhere that doesn't even try to scale to match demand.
It doesn't increase the number of vehicles needed, but it may increase the number of vehicles wanted. If a family of four has two kids that attend different schools, and two jobs in different locations, and everything starts at 8am (because, life), each person can delay leaving the house the longest if there is one vehicle per person.
There seems to be a big jump from what could happen to what definitely will happen.
I would say it's highly likely that, of whatever car companies are left standing, one or more will have fairly good margins.
How the ownership ends up working out remains to be seen. Maybe cities will cut deals with a specific company and all non-commercial vehicles will be controlled by that municipality. Maybe Google Maps will turn into a transport marketplace and operators will bid based on destination.
Uber is likely going to have a very tough time. They can probably cut a lot of the excess and operating expenses and be profitable for a while, but at the cost of growth and new markets.
Uber paved the way for a lot of this. There is no way Google, Apple, Tesla, let alone an established car company, would have been able to plow their way into these local markets, bypassing and ignoring regulations to the point that cities became dependent on Uber. Now it is done, and it isn't particularly a moat for Uber. But they fucking did it; they are the ones who deserve the recognition for it.
>> Uber paved the way for a lot of this. There is no way Google, Apple, Tesla, let alone an established car company, would have been able to plow their way into these local markets, bypassing and ignoring regulations to the point that cities became dependent on Uber. Now it is done, and it isn't particularly a moat for Uber. But they fucking did it; they are the ones who deserve the recognition for it.
I don't understand or like this attitude. A company creating a moat by breaking the law to gain an advantage over people following the law is not something to be admired or celebrated.
(If I've misunderstood your tone I apologise but your final sentence gives me this impression).
You’re misunderstanding the argument. He is not claiming that Uber has any kind of moat. He’s instead arguing (as far as I can tell) that Uber’s disregard for local regulations expanded the supply and convenience of taxis to the point that more people could forgo car ownership or develop a habit of taxi travel not easily switched to other modes of transportation, paving the way for other companies to roll out fleets of autonomous vehicles.
Not sure that argument really holds any water (the desirability of autonomous driving in an Uber-less world is counterfactual, so we can only speculate), but there you go.
> A company creating a moat by breaking the law to gain an advantage over people following the law is not something to be admired or celebrated
There are two wrong assumptions in this statement.
First, the OP said that Uber didn't create a moat, they just paved the way. These days, after Uber moves into an area, other ride sharing services are able to enter the market without Uber being able to abuse their market position to push them out. That's definitely a positive.
Second, Uber did not break the law in most situations. They re-interpreted a formerly gray area of the law, asserting that their services shouldn't be regulated by taxi laws as they were not a taxi company. This was a legal battle that none of the entrenched players were willing to fight for various reasons, and the resulting precedent opened the way for other ride sharing services to operate as well.
The point is they didn't create much of a moat but they did bust the regulations. So competitors are free to pop up and consumers win at the expense of taxi drivers and taxi medallion owners (usually separate people).
Mercedes released a video [1] back in 2013 showing pretty decent driving. I imagine they didn't stop with progress and have even more capable systems today.
Mercedes-Benz has successfully completed the first autonomous long-distance drive ever, involving both town and cross-country traffic, using near-production-standard sensor systems. The Mercedes-Benz S 500 INTELLIGENT DRIVE research vehicle covered 100 kilometers from Mannheim to Pforzheim, Germany, under real traffic conditions and complex situations including traffic lights, roundabouts, pedestrians, cyclists and trams.
My husband takes casual carpool most mornings to SF from Berkeley, and last week got a ride in one of the Mercedes Intelligent Drive cars (he mentioned the model, but I forgot). He said the driver didn't have to handle the car at all once he was picked up. The car drove itself onto the freeway, all the way over on the Bay Bridge, navigated the exit into SF and drove several blocks in the city. He said the driver took over only at the very end, when he pulled to the side of the road to drop my husband and the other passenger off. Pretty incredible!
I read an article a couple months ago in a print car magazine where they test-drove a Mercedes prototype. They had the impression that it drove almost perfectly, even dealing with complicated highway construction sites without proper lane markings.
Mercedes showcased a self-driving van in the 1980s, with stereo black-and-white cameras and a big computer in the back.
But can you buy a Mercedes-Maybach S-Class with self-driving comparable to the Tesla Model S Autopilot 2? No. That's why you see more and more Tesla Model S cars these days; plus, they're all-electric too.
Both Mercedes and BMW (DriveNow, a co-venture with Sixt) run car-sharing businesses. In terms of cities served, Uber beats them hands down. Mercedes has 14,000 and BMW 4,000 cars on the streets (Wikipedia). That is a fraction of what Uber can mobilize. To take on Uber they would need to scale up by an order of magnitude or even two. At that scale it would affect their existing business, and their now-increasing engagement is an indication of how seriously these companies are taking this shift.
Transportation as a service is clearly a real market. Car ownership is not always the best economic solution for an individual. The sharing demand exists, but no one has established a large-scale sustainable enterprise around it.
What is needed?
- Capital, first and foremost, as cars are not cheap. Two models so far:
-- Uber leveraging car-owner capital. That area is really opaque, but one can assume individuals' capital costs are higher than manufacturers', especially as Uber drivers likely don't have the best credit ratings. The volume Uber can leverage here: assuming 500,000 drivers with cars worth $20,000 each, this is $10 billion.
-- Car manufacturer and car rental companies (in the past often linked to manufacturers). There is some synergy in the vertical integration e.g. through manufacturing capacity management, tailored car features and marketing.
- Fleet servicing. Uber has managed to shift that to drivers.
- Drivers. Uber has individuals driving. There may be autonomous vehicles. Or there is the person renting, assuming they hold a driving licence.
- Critical mass. There is a focus on serving cities. I suspect autonomous vehicles may be vital to serve the wider area.
- Insurance. Liability needs to be clarified and forms of insurance found if autonomous cars are allowed.
Exactly. Today Uber relies on the owner/driver for the capital behind the vehicle, the vehicle acquisition, servicing, insurance, etc...
Current car-sharing businesses (car2go, Zipcar, etc.) already had to figure out all those parts; the only piece missing for them, as for everybody else so far, is the self-driving technology.
> That is a fraction of what Uber can mobilize
I don't understand your point. You're saying Uber will have random people buy self-driving cars for them, service them, etc., and let Uber use them? They'd effectively become a pico-bank making a very risky investment... Or am I missing something here?
Very few Uber drivers are likely to be carrying insurance that would cover any claims. The reason is that standard personal auto insurance excludes use of the vehicle for commercial purposes, and insurance that does cover commercial use is significantly more expensive.
> The sharing demand exists, but no one has established a large-scale sustainable enterprise around it.
Zipcar et al?
I guess my question is the following: if you don't want exclusive ownership and use of your vehicle, why would you want the hassles of ownership at all? I do get the "some money on the side" argument, but I really wonder if it makes sense with cars, given that a lot of the depreciation/costs are driven by mileage. It's a bit different from the Airbnb situation.
As a rule, Mercedes tends to keep mum on tech they're not ready to roll out (at least to the S-Class), and they like it to be pretty near perfect at that point.
I worked with a guy who had previously interned at Daimler (Mercedes) in Germany working on self driving cars, although it wasn't called that - I don't remember the phrase he used. This was in 2005.
>It should be frightening to anyone on the streets that Uber managers are making decisions to put self-driving cars on the road.
Something that probably should have been in the article's title is that this accident occurred because the human driver in the other car failed to yield. While Uber is certainly a very flawed company, neither its technology nor its people were at fault here.
We don't know the specifics of the accident, but I just want to say that simply not being "at fault" is not the same thing as driving safely. A large number of accidents can be avoided by the non-"at fault" party alone exercising caution, something autonomous vehicles should be far better at than humans.
Say you're approaching an intersection with a green light while another car is traveling perpendicular at a moderately high rate of speed – perhaps just 5mph faster than their maximum safe stopping speed. You're completely allowed to just keep driving through the intersection, and when they hit you, they will be at fault. However, you could have avoided the collision by recognizing the situation and slowing down – and this is a type of analysis that autonomous vehicles are specially suited for compared to human drivers.
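To make that concrete, here's a minimal sketch of the kind of time-to-conflict check such a vehicle could run (all numbers and names invented for illustration; real systems fuse LIDAR/radar tracks and are far more sophisticated):

    # Hypothetical check: do we and the cross-traffic car occupy the
    # intersection at roughly the same time?
    def paths_conflict(own_speed, own_dist, other_speed, other_dist, window=1.0):
        t_own = own_dist / own_speed        # seconds until we enter the box
        t_other = other_dist / other_speed  # seconds until they enter it
        return abs(t_own - t_other) < window

    # Green light for us, but the other car can no longer stop in time.
    # Yielding despite right-of-way avoids the crash.
    if paths_conflict(own_speed=15.0, own_dist=30.0,      # m/s, metres
                      other_speed=20.0, other_dist=40.0):
        slow_down = True  # let the cross traffic clear the intersection first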
Interesting thought. Anecdotally, I do probably make frequent subconscious decisions to avoid dangerous situations when I am able to predict the other driver is about to do something stupid, even if I have the right-of-way.
It's like looking both ways before crossing a one-way street: nobody should ever be coming at me from the wrong way, but if someone were... I want to be prepared to deal with that.
I live in an area popular with walkers and bicyclists due to the winding, undulating roads. They often travel against traffic on the one-ways in the heart of town (roundabouts and one-way streets) and on the shoulderless two-lanes.
Anecdote related to TFA: a couple months back an SUV stopped at a marked crosswalk with pedestrian traffic. I stopped in the right lane next to it. When the pedestrians cleared, it began rolling before I did, and once I commenced it braked hard. I braked, looking for unseen pedestrians, and noticed its nav gear and small Uber decal. No additional pedestrians were in the crosswalk, but there were dozens of people on either sidewalk for an art fair. I wonder whether human intervention caused the hard braking, or the large number of data points on the periphery.
This is one of the many reasons why self-driving cars are such a hard problem, though. They don't just have to drive better than humans based on knowing the road and the conditions; they also have to deal with other drivers (who are perhaps also automated) behaving in unexpected ways. Self-driving cars have to be robust even when (especially when) normal assumptions about the road or other drivers aren't valid.
This isn't tangential to the engineering problems surrounding self-driving cars, rather it's at the heart of those problems and why it's an order of magnitude (or more) harder than what so many people assume. It's not insurmountable, but whenever I hear people making comments like "self-driving cars are a solved problem, and are an inevitability in the near future", I cringe so hard. With a lot of care and skepticism, we can all make self-driving cars a thing in our lifetimes. Or, with a lot of overenthusiastic credulousness, we can make self-driving cars briefly a thing in the very near future, which get regulated out of existence for the remainder of our lives by a relatively small number of high-profile accidents like this.
I agree with you about the hardness of the self driving car problem.
But I think your other point, about the inevitability of regulators killing self-driving cars if they fall short of perfect, is just as cringe-worthy, and at least as harmful.
Self-driving cars should be allowed to roll if they are better than humans, which might be the case already.
It is hard, but not impossible, for government to do the right thing.
In theory one of the benefits of an autonomous car is its many-times-faster-than-human reaction and the fact that it doesn't ever get "distracted".
Meanwhile, multiple brands of existing, human-driven cars already have the ability to detect an impending collision and avoid it, either through automatic emergency braking or other mechanisms. If Uber isn't even caught up to the stuff that's in cars for the general public, then what the hell are they doing?
While I agree in general, I don't think we have enough details here to say that this was preventable by the Uber car, or any existing car. It's possible that the officially-at-fault car was sufficiently outside the envelope of expected behavior that no system could have prevented the crash. Stuff like predicting whether someone else will yield is basically mind reading, a Hard Problem in the general case.
> Meanwhile, multiple brands of existing, human-driven cars already have the ability to detect an impending collision and avoid it, either through automatic emergency braking or other mechanisms.
Without more details we can't say whether this accident would have been avoidable with those systems.
Existing active collision avoidance systems only work in the front aspect. Passive warning systems will give warnings about crossing traffic in some limited cases but it's still on the driver to react. That's about it for human-driven cars right now.
When I learned to drive I was taught to always drive defensively - basically to avoid situations that could result in the 'chance of a risk of an accident'. Sure, 16-year old me and my friends joked about the terminology but the lesson stuck with me.
Other driver going faster than they should? Switch lanes and avoid the situation.
Turning into traffic and there is a car in your intended lane signalling that they're going to move out of the lane? Wait for them to actually move and make space for you - don't trust that their lights are the whole truth.
Being a safe driver means not causing accidents but it also is on you to avoid being in situations with greater risk - "identify and reduce".
Let's put ourselves in the shoes of someone who reads this headline and gets a quick summary. This headline makes it sound like Uber isn't confident of its technology and they think their technology might be the cause and hence they are suspending it. In this case, it seems pretty clear, as confirmed by the police officer, that Uber's car wasn't at fault.
It's normal for human drivers to fail to yield when they're legally required to. A human driver would respond by letting the other car go. If Uber tech can't handle this completely standard situation, and a minor error by another driver results in a high-speed collision, then Uber tech is not ready for prime time.
And this is the 'hard bit' about self-driving cars. If they are too passive and cautious, human drivers will exploit them; if they assume the other cars will follow the rules, they will hit them. No amount of LIDAR magic and deep learning can fix that problem yet.
The worst part of Uber's situation is that they have priced themselves out of any reasonable path to further fund raising. Even a successful IPO would be a huge down round.
I've never understood how valuations computed from white powder are strategically in the best interest of... anyone? Maybe someone more clueful about the nuances of VC can enlighten me. Even if Uber can make the Google suit and the harassment culture go away, I still don't see how they can ever justify that valuation. Every car company is about to ship (or has) some form of auto drive and there are no strong barriers to competing with Uber on ride sharing. Transportation is a commodity.
Abusing legal gray areas is a fundamental part of their business model. They were able to position this in a positive 'move fast and break things' light. Now people are noticing things are broken and are fixing them.
For example, they were abusing an exception for businesses grossing <30K a year to not pay taxes in Canada. That law is now being reviewed.
As far as I can tell, Uber's corporate culture is self-interested, self-righteous, and entirely inconsiderate of the people and markets they affect. You can only piss in the pool so long before it catches up with you, and they don't have any other runway left.
We have some of the greatest minds in the world collaborating, and this is one of their primary ambassadors.
> Abusing legal gray areas is a fundamental part of their business model.
Uber was/is quite willing to break the law (and encourage their drivers to do so, with lies and misdirection), even in areas where it's very clear that they're in the wrong.
Leveraging a ton of market anger about the existing taxi services helped them get started, it's unclear if it'll be enough to sustain them.
What abuses are you talking about? Their assertion that they're not a taxi company holds some water, and their 30k a year exemption in Canada is for independent subcontractors, which Uber argues that its drivers are.
Every company looks at laws and sees just how much money they have to pay. There is no abuse in this case.
If Uber were a publicly traded company I'd be buying as much stock as possible right now. All I see is total hysteria over completely surmountable setbacks. People want your statements to feel true because they have beef with Uber's culture, business practices, etc., so they're bending their perception of every event to fit their model. This is a discussion thread about a self-driving car that was, according to the evidence so far, hit by another driver who failed to yield. And basically every comment is talking about how reckless and irresponsible the company is, and intimating that it was probably Uber's fault. Everyone is just having the conversation they want to have, regardless of how it contacts reality. "Their existing business is losing money in most places with no end in sight." According to whom, you? Why should I believe that this company, with massive market penetration and a product people want, has no wherewithal to turn a profit?
This whole thread is pure platitude. "Kalanick and his bros are sociopaths". Sure. Keep operating in those terms and see where it gets you. I'm going to stay here on planet Earth, thank you.
This is the most absurd comment of this thread, hand-waving away every major problem or scandal Uber has had even in just the last year. Uber's self-driving cars blew 6 red lights in their 1-week San Francisco autonomous testing run, and Uber lied to everyone saying it was not the technology's fault - it was, as documents sent to the NYT revealed. They are on track to lose 3 billion this year. 3 billion. Their self-driving golden parachute/bail-out plan has been a disaster so far, in terms of the clearly not-ready technology and Google's lawsuit. Even ignoring the deep flaws of their executives - bragging about threatening journalists, instituting a culture of sexism and abuse, berating their own Uber drivers - your response is still "Everyone is wrong, everything is amazing at Uber, and I won't even offer any details why." Thanks for this wonderful contribution to HN.
Your whole comment is a perfect demonstration of what I'm saying. My point is not that Uber has no problems, but rather that people are blowing things out of proportion because they've taken an emotional dislike for the company. So people see what they want to see.
For instance, you see 3 billion in losses. In 2015 Q4, Uber generated $560M in net revenue. In 2016 Q1, $960M. Q2: $1100M. Q3: $1320M, and Q4: $1584M. I see massive growth, and massive expenditures into R&D which could be halted with relative ease if solvency becomes a problem.
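(A quick back-of-envelope on those figures, as a minimal Python sketch with the numbers as quoted:

    # Net revenue as quoted above, in $M, 2015 Q4 through 2016 Q4.
    revenues = [560, 960, 1100, 1320, 1584]
    for prev, cur in zip(revenues, revenues[1:]):
        print(f"{(cur / prev - 1) * 100:.0f}% QoQ")
    # -> 71%, 15%, 20%, 20%; 2016 Q4 is ~2.8x 2015 Q4

Sustained ~20% quarter-over-quarter growth is hard to dismiss, whatever you make of the loss figures.)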
You see the self-driving program as a boondoggle, and you see 6 blown red lights in 1 week as just a taste of nonfunctional technology that would have only caused more and more damage the more time it was allowed to be on the road. I see people trying to solve a difficult technical problem, and having some issues in an initial run-up test.
You see a company whose executives routinely berate their own Uber drivers. I see one dashcam video where the CEO has a human debate with one of his drivers, which has been extrapolated into a portrait of "the type of person this Kalanick guy really is".
I'm _not_ saying that they don't have their problems, or that it's impossible Uber is going to run into financial or regulatory trouble. But this sky-is-falling talk is not for me.
Do you have a source for "massive R&D spending which could be halted"? I was under the impression their losses come from massive operating/marketing expenses, which cannot be cut without harming the business.
Net revenue is what Uber makes after paying the drivers. IOW, any losses Uber has accrued in that quarter is based upon costs outside of the rides themselves.
Regardless of what you think of Kalanick, there are legitimate questions about the business model. I see many people claim things like "they lose money because they spend on R&D". I have not been able to see the most recent year, but leaked numbers from prior years show that even if you cut R&D to zero they still lost money. For the most recent year I have not seen the R&D breakout, but the losses scale with revenues by quarter, which is not what you would expect to see if they had a fixed R&D budget cutting into profits.
I also see many wildly overoptimistic estimates of how much driverless cars would increase their margins. For example, you see absurd statements like "driverless cars can produce revenue 24 hrs a day". Demand for rides is not constant 24 hrs a day. To meet peak demand (which is critical to their competitiveness with alternatives) you will have many cars idle off-peak. That means parking (which costs money) or driving empty (which wastes money). Many others have pointed out the many costs currently borne by drivers that somebody still has to pay. You can debate the exact cost of all those things, but it is definitely not zero.
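To put rough numbers on the idle-fleet point, a toy utilization model (all figures invented for illustration):

    # A fleet sized for peak demand sits mostly idle off-peak.
    peak_rides_per_hr, peak_hours = 1000, 4
    offpeak_rides_per_hr, offpeak_hours = 200, 20
    rides_per_car_per_hr = 2  # invented service rate

    fleet = peak_rides_per_hr / rides_per_car_per_hr     # 500 cars to cover peak
    hours_available = fleet * (peak_hours + offpeak_hours)
    hours_used = (peak_rides_per_hr * peak_hours +
                  offpeak_rides_per_hr * offpeak_hours) / rides_per_car_per_hr
    print(hours_used / hours_available)  # ~0.33, a third of "24 hrs a day"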
I am not saying it is impossible for them to make changes and make money, but until I see detailed numbers and a plan to increase margins I would not invest my money.
Around the issue of this crash? Yes. Around the issue of how much Uber needs self-driving to pan out, now? Not so much. Around the reality that Kalanick is bad news, and at the same time utterly in control? Not at all.
I see what you're talking about, but overall I see (although not necessarily in this thread) people talking about Uber operating a VC subsidized per-ride loss, and just how much they need self-driving to pan out in a hurry.
Fair points. If I were a VC, I wouldn't be thrilled with Kalanick at the helm, even if I wasn't about to sell sell sell.
I get why people might be concerned about the operating losses, at least to first order. But I just don't see the structure of how people think this disintegration is going to happen. To me, it looks like a very solid business model that makes tons of cash and spends tons of cash (like Amazon). I don't see some kind of tower of Babel built entirely on the perception of value that could topple at any moment (Twitter). People love using ride-sharing apps, and there's no way that's going to change in the near future.
Even if their self-driving-car play isn't working out, I don't see why it has to pan out _now_. They can always just fall back on their massively successful core business.
Transport is a tough industry with a lot of knowledgeable competitors, and is a significant 'big data' consumer. It has thin margins. Uber can do what it does because it's burning through capital, but there's a reason why financially sustainable people-moving companies don't drive all-shiny-new vehicles and offer bottled water to their passengers.
If you want to talk about reality, then talk about sustainable financials in the transport industry.
"However, police said there was a passenger in the self-driving car. The person was behind the wheel but it's unclear whether they were controlling the SUV or not."
Here's the real question: do you trust Uber to do this (or really anything) the right way, or do you think they'll take whatever shortcuts they can? Suppose someone finds a huge bug in the code and says, "Well, we need to completely rewrite this part or it's never going to work. The rewrite is going to take about 10 months." And someone else says, "We can hack in a patch that'll work most of the time in a week."
Which road is Uber going to take in this case? I know what their past performance has shown, they'll take the shortcut. Have they ever done things the hard way?
Imagine a situation in which Uber cars are better than human drivers, but not by a lot. Then imagine some hastily thrown-together patch being pushed fleet-wide that ends up causing crashes in some unanticipated situations. You could imagine quite a lot of carnage happening out there until they're all yanked off the roads and rolled back.
Statistics doesn't tell the whole story because it's backwards-looking, not forward-looking, and the Uber crashes aren't going to be independent events.
Statistics of accidents with human drivers also include a lot of humans who are banned from the road, and sometimes even imprisoned, for not driving safely enough; the average human could and should be driving a lot better than those statistics suggest. And in many jurisdictions, there's an expectation that a commercial driving service is much better than the average person permitted to use a car. If the service aims to reduce transport costs to the extent that vastly more journeys are taken than before - which Uber certainly does aspire to - then its autonomous vehicles can significantly increase the numbers dying on the roads even if they actually do have a persistent and statistically significant per-mile safety advantage over human-driven cars.
And if I'm a regulator tasked with reducing road deaths, the question I'd be asking wouldn't be "is this car's autonomous mode marginally better than the average driver so Uber can cease paying drivers and hit profitability?", it's "is the human behind the wheel of a car operating in [semi]-autonomous mode no longer preventing a statistically significant number of crashes?" (or "is telemetry suggesting their interventions are actually responsible for more accidents than they prevent?"). Just because a car is a lot safer with a computer behind the wheel than a car with the average person and no computer behind the wheel doesn't mean that permitting autonomous vehicle operators to rely on technology alone is a safety improvement.
I trust statistics but I don't trust bad statistics. There isn't a self driving car operator out there that is currently testing their cars in statistically representative conditions. And unless they are, there isn't a statistical methodology out there that can say they are safer.
Google, for example, avoids expressways and won't go on freeways, hasn't tested their cars in extreme weather, barely ever sees even bad weather, and is electronically limited to 25mph. I'd be willing to bet the majority of drivers have nearly perfect driving records in those conditions too.
> ... and is electronically limited to 25mph. I'd be willing to bet the majority of drivers have nearly perfect driving records in those conditions too.
Not me: I'd fall asleep if I were to be limited to 25mph all the time ;)
Curious about the 25mph assertion. I've driven next to Google self-driving cars in Austin in my neighborhood numerous times and it was definitely traveling faster than 25mph.
Statistics says that it will take many, many millions of miles driven by autonomous vehicles autonomously before we can know how safe they are compared to humans [1].
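As a rough illustration of why (a back-of-envelope using the statistical "rule of three" and the oft-quoted US figure of about 1 fatality per 100 million vehicle miles; exact rates vary by year):

    # With zero observed events in n trials, the 95% upper confidence bound
    # on the event rate is roughly 3/n (the "rule of three"). To show a
    # fatality rate no worse than ~1 per 100M miles with zero fatalities:
    human_fatality_rate = 1 / 100e6          # per mile, approximate
    miles_needed = 3 / human_fatality_rate
    print(f"{miles_needed:,.0f}")            # 300,000,000 fatality-free miles

And that's only rough parity; demonstrating a statistically significant improvement takes billions of miles, which is where figures like Toyota's 8.5 billion (quoted elsewhere in this thread) come from.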
The notion that if self-driving technology is statistically safer in bulk than humans, then "it wins", is, I think, highly fallacious (even though it is made here again and again).
We as humans do not judge whether to do one thing vs another on that basis. E.g. we would ban widespread gun ownership in countries that allow it at present (very large numbers of people would not die as a result); we would mandate a basic set of vaccinations in countries that at present do not do that (same outcome); we would fit the passenger seats facing backwards in commercial aircraft (same outcome). The list is endless but we do none of these things because people prefer to judge the risks in some activity or behavior in the context of their own near term pleasure or convenience.
Heck we could mandate very simple low-tech driving safety measures for human-driven cars such as top speed limits and a block on driving the wrong way down the freeway.
A common misperception is that we could ban objects, or mandate behaviours. We can only impose penalties on the possession of objects, or mandate punishments on certain behaviours. Simply passing laws does not save lives in the short run.
The one problem I have with self-driving cars is that the software is a single point of failure, so one bad patch (one "mistake") could cause millions of crashes.
Is there any proof that this is standard for car companies?
Tech companies are bound to have good procedures for tech... that's what they do.
Car companies? They should be good at car stuff... but... tech stuff?
Look at all the angst surrounding the IoT: companies that otherwise make good things (washing machines, refrigerators, etc.) shipping buggy, hackable stuff. Look at all the stories about how hackable everything involving cars is...
Personally? I don't have a lot of faith in "Old" companies getting tech right.
> They should be good at car stuff... but... tech stuff?
Cars are a type of technology too. I don't think there's any company that's good at "technology". Maybe a university would count? Or one of those big appliance manufacturing corporations.
If you're talking specifically about computers, software, and electronics, I think IT seems more accurate.
You build those statistics by putting them both on the road and comparing the results. Personally, I'm not willing to put a machine on the road until it's murdered enough people to properly tally up.
It seems to me the best way to build a case for autonomous driving would be for Uber, Lyft, etc to give incentives to drivers who have safety features like automatic emergency braking, then quietly build up a database of accident rate statistics. These services could be a great way to collect this sort of data and see which approaches improve safety.
I am skeptical of unmanned taxis for reasons having nothing to do with autonomous driving. What do you do if a passenger gets sick, passes out, starts acting violently, or has a heart attack - all things regular taxis encounter? Basically, you need a person to monitor the vehicle, passenger and contents even if the car is doing the driving much of the time. Plus, pleasant conversation if desired.
> I am skeptical of unmanned taxis for reasons having nothing to do with autonomous driving. What do you do if a passenger gets sick, passes out, starts acting violently, or has a heart attack - all things regular taxis encounter?
These are risks anywhere. I'm not monitored when I ride an elevator, how is an automated taxi any different?
> pleasant conversation if desired.
This is a good point, I'm sure many people will prefer human-operated taxis for the social aspect and possibly for local recommendations.
If the wrong way is safer than the current way I'll take the wrong way any day of the week and celebrate when someone comes along and competes with them by doing it the right way.
If you didn't click on the article, you wouldn't have learned that Uber hasn't finished its own investigation, and considers the accident serious enough to suspend its own trials in both Arizona and Pittsburgh.
More details would be helpful. If this goes to court, the video from the self-driving vehicle would be interesting.
An important question is whether their system is smart enough to take evasive action.
It's becoming clear that there are two ways to approach self-driving. The first stems from the DARPA Grand Challenge, which was about off-road driving. For that, the vehicles had to profile the terrain, plotting a path around obstacles, potholes, and cliff edges. The GPS route was just general guidance on where to go. That's the approach Google took, as can be seen from their SXSW videos. Google also identifies moving objects and tries to classify them. With all that capability, it's possible to take evasive action if some other road user is a threat. The control system has situational awareness and knows where there's clear space for escape.
The other approach is to start with lane following and automatic cruise control, and try to build them up into self-driving. This can be done entirely with vision systems. That's the Cruise Automation and Tesla approach. This puts the car on a track defined by lines on the pavement, with lane changes and intersections handled as a special case. There usually isn't a full terrain profile; that requires LIDAR. So there isn't enough info to plan an emergency maneuver for collision avoidance.
This distinction is not well understood, and it should be.
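To make the distinction concrete, a deliberately oversimplified sketch (invented structures, not any vendor's actual stack):

    # Approach 1: full terrain/obstacle model (LIDAR-style). The planner can
    # search candidate maneuvers for free space, so evasion is possible.
    def plan_evasion(occupied_cells, candidate_paths):
        for path in candidate_paths:   # e.g. brake, swerve left, swerve right
            if not any(cell in occupied_cells for cell in path):
                return path            # first maneuver with clear space
        return None                    # no escape found: emergency brake

    # Approach 2: lane following (vision-style). Steering is a function of
    # lane-line offset only; there is no free-space model to escape into.
    def lane_following_steering(lane_center_offset_m, gain=0.5):
        return -gain * lane_center_offset_m  # steer back toward lane center

The first can answer "where can I go right now?"; the second can only answer "how do I stay in my lane?".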
In other news, statistics from 2014 [1] suggest that since the autonomous car crash was reported yesterday (11:29pm EST, Mar 24), 9 people were killed by human drivers under the influence of an intoxicant of some kind.
Autonomous vehicles will have challenges that human drivers do not -- like software vulnerabilities -- but those are problems worth overcoming if it means that poor decision making by humans can be reduced.
Seeing as nobody was seriously injured and that it wasn't the fault of the autonomous vehicle (reported by other news outlets) I think it's fair to bring it up. 0 deaths in the past six months vs 9 in the past 10 hours. That isn't even all vehicle deaths by humans in that timeframe either.
There are ~260 million automobiles on the road in America. There are a statistically insignificant number of self-driving automobiles. I _suspect_ that self-driving vehicles are going to reduce traffic accidents when deployed en masse. However, at the moment it's comparing apples and oranges.
Toyota thinks they'll need 8.5 billion miles of autonomous driving before they'll have statistically useful data about the safety of autonomous vehicles. Of course, by the time anyone gets to the 8 billionth mile of driving, the technology will have matured considerably, rendering the early data moot. So it will be a long time before we can start making data based proclamations about how safe autonomous vehicles really are.
Escalator accidents kill 17 Americans a year. So I wonder, if the average American travelled as many miles per year on escalators as they do in cars, whether the death/injury rates would be higher or lower.
How do you define "the fault of the autonomous vehicle"? If you take the time to read through Google's disclosure reports, you'd find that they had virtually zero accidents, if any, that were the fault of the tech. In fact, I believe all the accidents that could conceivably be blamed on Google were blamed on the human Google driver. If you read a little closer, though, you'll see that these human-operator errors all arose from the Google driver taking control of the car because they thought the autonomous mode would lead to an accident.
In short, accidents of all varieties are hard to classify. In this case, though, you seem to vastly underestimate the numbers involved. According to the latest reports [0], the Uber program is only driving about 20K miles a week. Furthermore, the rate of human interventions is about once every MILE.
Normal human driving is bad, of course. But how many times in the last 1,000 miles you've driven have you needed your passenger to intervene to prevent an accident?
When reading Google's disclosure reports, note that the human drivers are instructed to preemptively take manual control in situations where an error could have consequences, so the accident-rate statistics do not reflect the system's behavior in the situations that would really test its capabilities.
Also, Uber outright refused to comply with California law and register their self-driving car program, and it's looking more and more like the reason they did that is so they didn't have to publicly disclose those disengagement numbers.
I suspect you're right. However, a higher number of disengagements could just point to a more cautious approach, more difficult environment, etc. Hard to pin it exactly to maturity.
For example, I believe Waymo cars drive mostly on a fixed number of preplanned routes. You would expect limited disengagements over time, as it's got tons of data on those specific routes.
I can't find a complete list, but as I recall, the majority of the accidents consist of Google's vehicles being rear-ended while slowing down to yield at intersections.
>In short, accidents of all varieties are hard to classify
It's probably fair to say that, in the course of a day, many of us are in driving situations where we avoid potentially dangerous situations whether or not it might technically have been someone else's fault were there an accident. Most of us (hopefully) learn to drive defensively to minimize the likelihood of an accident, not to take whatever space on a road we have a "right" to.
Who cares about other people's safety? I value my own safety and time over being legally correct on the road. If I need to break traffic laws to avoid hitting another vehicle, or being struck, that's easy. Being in a collision is a huge time-suck, even if I'm in the right. (That said, I'm not going to intentionally cause other vehicles to collide among themselves because it enhances my safety or reduces my time spent)
I was originally going to write "safety of themselves and others", but that implied a level of nuance I didn't want to get into explaining.
> If I need to break traffic laws to avoid hitting another vehicle, or being struck, that's easy.
IMO based on anecdotes and observation, not enough people do that because of the "it's not my fault" attitude. The first consideration is fault, the second is safety.
You'd also have to factor in the times the human operator has to take over for the autopilot to prevent accidents.
Many articles and press releases make it look like self-driving is "already here" when it's actually still very much a work in progress and not at all a solved problem.
I understand statistics, but that doesn't make the distinction trivial. Nobody is suggesting (I think) that we should ignore any problems these vehicles present. That we even need to "argue" this is kind of crazy: lots of people die or get into minor/moderate/severe car accidents that cause injuries, and there is no computer at fault. Getting into a car accident that isn't your fault is basically random.
So why pretend it's not important to cite how many people die today? We write laws and rules for operating vehicles that a computer can observe and follow. It's very likely that at a certain point those computers will exceed our aggregate ability to prevent deaths.
There are likely also wins we could get from vehicle design by not having to accommodate a human driver. Take, for example, the window through which a person can be ejected from the vehicle or debris can easily penetrate.
It's an apples-to-oranges comparison at the outset. Autonomous cars are overseen by humans, drive only in good weather, and do almost all their driving at low speed. At the very least, you should be comparing human data at low speed and in good weather.
I agree, but it's more misleading to have headlines about a self-driving car that crashed without mentioning it's not the self-driving car's fault (and without mentioning nobody was even injured).
It's too early to know who is to blame, or even if blame can be assigned so cleanly. Meanwhile, we have Uber's own reports that say human drivers have to physically intervene during autonomous operation at a rate of one intervention per mile: http://www.recode.net/2017/3/16/14938116/uber-travis-kalanic...
Uber is pretty middle of the pack as far as the state of their progress developing this technology is concerned. One intervention per mile on average isn't any kind of scathing indictment of Uber's autonomous driving program. The sensationalism of Uber's leaked intervention report mostly just illustrates a disconnect between public perception about where the technology is at, and where it's really at.
There are 23 other companies with permits to test autonomous vehicles in CA and they're all doing pretty much the same thing. Waymo is way out in front, Cruise/GM is doing well, but there are maybe a half dozen large, deeply invested players (including Tesla) not nearly as advanced as Uber.
It's possible, of course, that Uber was the victim of nasty coincidence. But bad accidents like this are rare. Most people go many hundreds of thousands of miles between rollover collisions or other equally severe accidents. Uber's self driving car program in Arizona is only a couple of months old. It's not a bad presumption that this is not purely fate.
Yes. It's entirely possible that the other driver was 100% to blame, but a rollover is actually a very serious accident, even if there were apparently no serious injuries in this case. This wasn't some happens-all-the-time fender bender.
If the denominator doesn't matter, then there have been 0 accidents of any kind in the last femtosecond due to humans. Clearly that means autonomous vehicles are a menace, right? Or maybe it's rate of accidents not the absolute number that matters?
What are you even trying to prove? Yeah, the denominator matters, man! I agree! Statistics are great, and I don't think I was trying to say that "0 is less than 9." I was simply pointing out that this article is stupid and doesn't actually prove anything, which is why the parent brought this up as "in other news."
But then again, you're defending a clickbait headline about a robot getting into an accident that it may or may not be responsible for that didn't even result in someone being injured.
I think the point here is not that the self-driving software is already better than humans, but rather that the upside of cars driving themselves is so big that it is worth pursuing until we get it right.
I agree completely with that statement. My fear is that irrational reporting and FUD on behalf of those with a vested interest in maintaining the status quo as long as possible will set back automation by a significant number of years.
Some hype is useful, to be sure, but it is a delicate balance with avoiding a situation where the pace of innovation has been oversold.
One thing I just thought about: Musk said more miles train a better AI. Supposing this holds true and yields valuable results, all self-driving vehicles may become near-perfect drivers, while humans will remain spread along the spectrum from dangerous to safe driver.
ps: thanks for reminding me that statistics are hard to get right.
>Autonomous vehicles will have challenges that human drivers do not
Wait a minute: I don't think the argument is whether machines can drive better than humans. It's whether these companies that are desperate to make money (Uber, Tesla) are pushing potentially dangerous technology on to society before it's ready.
> The argument should be whether autonomous cars from Uber can drive better than humans.
How much better? 1% over median? That means it's still worse than 49% of the drivers on the road. Statistically, the automated driver would be "better" for society at large, but it's still going to cause a lot of accidents and kill a lot of people.
I also wonder if there might be even more accidents with an automated system which is still in the fat portion of the bell curve, since there will be no "good" drivers to reduce conditions ripe for an accident in the first place.
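A toy simulation of that point (the skill distribution and accident model are both made up, purely illustrative):

    import random
    random.seed(0)

    # Invented: driver skill is normally distributed, and accident rate
    # falls off with skill.
    skills = [random.gauss(0, 1) for _ in range(100_000)]
    def accident_rate(skill):
        return 0.05 * 2 ** (-skill)

    human_avg = sum(map(accident_rate, skills)) / len(skills)
    p51 = sorted(skills)[int(0.51 * len(skills))]  # "1% over median" system
    print(human_avg, accident_rate(p51))
    # ~0.064 vs ~0.049: the fleet beats the human average (which is dragged
    # down by the worst drivers) but is still nowhere near accident-free.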
The much stronger defense in this case is to just cut to the chase and point out that, at least according to the police, the self-driving car was not the one at fault here - it was the driver of the other car blowing a yield sign.
Wait, drunk driving is a willfully criminal act; how is that comparable or relevant here?
We already have things like taxis, Uber and Lyft and people still get drunk and drive under the influence of alcohol and kill people.
Why would ride services with autonomous vehicles change that?
If someone is going to get drunk and drive a car rather than calling an Uber with a human driving they are no more likely to get drunk and call Uber because Uber has driverless vehicles.
The target for self driving cars is ride services not individuals.
>We already have things like taxis, Uber and Lyft and people still get drunk and drive under the influence of alcohol and kill people.
Many reasons for that, from being cheap (I already own a car, I won't pay some random guy money!) to mere convenience. But such incidents are bound to become fewer once self-driving cars make up the majority of cars on the road and can take over from drunk drivers.
>The target for self driving cars is ride services not individuals.
In the short-term for Uber maybe, but not in the long-term for everybody else. I doubt that traditional car manufacturers, like BMW, Audi, Honda, Jaguar or Mercedes-Benz are pursuing self driving cars merely to offer ride sharing services, after all these companies are in the business of selling cars to individuals.
1) That we will see the demise of self-ownership of cars when you state:
>"once self-driving cars make up the majority of cars on the road"
2) Car makers will continue to market car ownership to individuals when you state:
>"I doubt that traditional car manufacturers, like BMW, Audi, Honda, Jaguar or Mercedes-Benz are pursuing self driving cars merely to offer ride sharing services, after all these companies are in the business of selling cars to individuals"
The problem with inebriation is that people don't exercise good judgement, which is why they get behind the wheel and drive: they "think" they are OK.
Someone who has a car that has a "self-driving mode" can just as easily get behind the wheel and operate the car in manual mode because in their impaired state they also "think" they are ok to drive.
Self-driving cars don't solve the problem of impaired judgement.
>That we will see the demise of self-ownership of cars when you state
I stated no such thing, a car having self-driving capabilities has literally nothing to do with the ownership/proprietor status of said car.
>Someone who has a car that has a "self-driving mode" can just as easily get behind the wheel and operate the car in manual mode because in their impaired state they also "think" they are ok to drive.
You are still thinking way too much in the present. Try to imagine a future where pretty much every vehicle on the street is autonomously self-driving and legalities have changed so that no human "safety driver" is required anymore. In such a setting it would become the norm to do something other than driving while traveling in a vehicle, like being productive or having some leisure time. Sure enough, there will still be people who want the "thrill" of controlling their vehicles themselves, but manual driving will probably be reserved for special lanes/circuits, given how rare it will become and how negatively it impacts the performance of fully autonomous traffic.
We won't get there overnight; we will only get there through many small baby steps.
The real issue here - yet to be resolved at this time - is whether Uber is going about its development of autonomous cars responsibly, as Waymo, for example, appears to be doing. Generalities offer no answer to that question.
People say Uber needs self-driving cars in order to survive. But how realistic is it that we have self-driving cars without a backup human operator anytime soon? If someone needs to be behind the wheel anyway I don't see how it's cutting any costs.
Who knows. But it seems damn sure that the first successful use case isn't going to be a self-driving taxi, if they ever do exist. It's arguably the most difficult of the common use cases. Uber's strategy seems literally insane.
Really? They seem like pretty decent things to build first.
Owned by a company so the failure of one isn't a huge deal (difference between at worst having to switch cabs compared to losing your own car for a few days).
Costs are already high.
Costs can be spread out.
Costs are very important to the customer.
People expect to be sharing the same space as others have used before them.
Limited expected range and run time.
Customers are already not in control, and taxis are not well known for careful driving in many places (fairly or unfairly).
Speed is not a huge issue (can't run trucks limited to 25mph).
Can pick locations city by city, so you could run an effective business if you can do London but not rural Wales.
Nah. It's long-haul trucking. There's a much easier on-ramp to profitability because you don't have to solve the whole problem at once.
You start by just automating the highway driving, which is a much easier problem than city driving. You might still need the human operator to deal with the off-highway driving at the beginning and end of each journey, but in the middle they can just hang out in the cab and work on their PhD dissertation or whatever. At that point your business has become quite a bit more efficient, because you've gotten rid of the need for your trucks to be stationary for 10 out of every 24 hours.
So that's already a big win for tackling a relatively small piece of the problem. You can keep chipping away from there, and each increment will net even more returns.
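To put rough numbers on that - a back-of-the-envelope sketch where the speed and downtime figures are assumptions, not industry data:

    # Illustrative truck-utilization math; all figures are assumptions.
    AVG_SPEED_MPH = 55        # assumed average highway speed

    human_hours = 14          # truck stationary 10 of every 24 hours, per above
    automated_hours = 22      # assume ~2 h/day lost to fueling and inspections

    human_miles = AVG_SPEED_MPH * human_hours          # 770 mi/day
    automated_miles = AVG_SPEED_MPH * automated_hours  # 1,210 mi/day

    print(f"daily mileage gain: {automated_miles / human_miles:.2f}x")  # ~1.57x

Even under these conservative assumptions, the same truck covers roughly half again as many miles per day.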
The way I see it, companies just need a place outside the city where drivers can enter the truck and do the last part (or leave the truck and get the next one).
This could work similarly to ports, where a pilot comes aboard the ship to navigate the port area.
Or, even better, they park and the trailers are hooked up to human-driven electric tugs that aren't realistic for cross-country highway travel (battery range/energy density) but are fine for sub-50-mile trips into the city.
>railroads, which have been eating the lunch of trucking for a while now
Right. Which means that a lot of trucking isn't especially long-haul already, which in turn means that taking people out of only the highway portion isn't necessarily the big win a lot of people assume it could be. If you still need truckers around the endpoints, it doesn't necessarily make sense to take them out of the middle (given other advantages to having a human operator). Of course, good assistive systems can still be used to improve safety.
That's why I worry about this area of technology, especially given the history of AI hype cycles. The hype factor is extreme, but there's very little domain expertise at play.
I have a relative who is a trucking regulation consultant. It's not some trivial business... there are a lot of moving parts.
> You start by just automating the highway driving, which is a much easier problem than city driving. You might still need the human operator to deal with the off-highway driving at the beginning and end of each journey, but in the middle they can just hang out in the cab and work on their PhD dissertation or whatever. At that point your business has become quite a bit more efficient, because you've gotten rid of the need for your trucks to be stationary for 10 out of every 24 hours.
Can't this already be handled in a pony-express fashion? Hand off the trucks between people? Honest question - I assumed this kind of thing might already be set up.
Your solution also requires that a person sit in a single box for 24 hours; I can see that taking a toll. And the start and end of the journey must be timed so the person can sleep at the right time and wake up ready to drive as the truck comes off the highway.
I guess my key thought is that this doesn't actually lower the human cost (the driver is still employed for the full trip), and it adds a few extra issues dealing with people. What you gain is faster overall transport.
However, I can see the added automation cost being a smaller part of the overall vehicle bill, and there are enough large trucking companies that the costs can be spread across the business.
The issue is that reliably driving in London without any sort of human backup is a really hard problem--and is unlikely to be achieved in anything like a VC funding time horizon. Sure, cabs in a city are a business whether the driver is organic or digital. But you have to be able to actually do it (relatively) safely and reliably.
>But how realistic is it that we have self-driving cars without a backup human operator anytime soon?
The whole, "in case of emergency, grab the wheel" is just a legal formality. If you are not actively driving you won't be in a state of mind to be able to do that.
I think eventually there will be enough data to show that a human behind the wheel does absolutely nothing to prevent a crash.
>The whole, "in case of emergency, grab the wheel" is just a legal formality.
It's looking to be just the opposite, with several of the autonomous projects mumbling about how they don't think it is a workable level of automation, exactly because people don't stay engaged.
They are going to try to jump from assistive systems like automatic braking to vehicles intended to handle unexpected driving conditions without immediate human intervention (maybe the system can't proceed, but it will safely stop).
Yes, I'm a pretty big Tesla fan, but this has been my main criticism of Tesla's cars. The most rabid fanbois don't want to believe it, and just buy Tesla's argument that it's the driver's fault when an accident happens while Autopilot is running, because he "didn't take the wheel in time."
I think Ford's study proved that humans can't react that quickly if they aren't already driving the vehicle, and I was able to figure that out myself simply through intuition, well before Ford discovered it. I also think that internally Tesla believes the same thing, which is why it has pulled its "self-driving" ads around the world.
It just publicly continues to say that if there is an accident then it's the driver's fault "because there are plenty of warnings" (like 200 milliseconds before a crash), or because it asks you to put your hands on the wheel every 5 minutes, as if that does anything. An accident could easily happen in between those 5 minutes. I guess that feature is there because Tesla also learned that people keep dozing off when the car drives itself. But I think people who start dozing off because they aren't driving the car themselves will be able to ignore the sounds relatively easily.
The bottom line is that such systems put humans in an almost impossible situation that they're not equipped to deal with as humans, and Tesla and others act like they're legally covered and that's all that matters.
I think there should only be two types of systems: autonomous features that only activate in emergency situations to save people's lives (like auto-braking when there is an accident about to happen), and complete self-driving that doesn't need a human's attention at all. There shouldn't be anything in between.
The change (too many hands-off warnings and the system is disabled until you restart) is a step forwards, but the problem is that hands on the wheel is not a reliable measure of whether the driver is paying attention and ready to intervene.
It may perhaps become possible to allow full autonomy on certain roads under certain conditions. e.g. you can drive on these stretches of highway in daylight in good weather. How something like that gets specified and enforced in any formal sense, I have no idea.
Currently, yes, but as these get better, there's going to be a really dangerous middle ground, where the car is great in 99.99% of cases but can't handle the rare exceptions.
The human driver dozing off or browsing Facebook on their phone simply isn't going to be paying enough attention to realize they're barrelling towards an active train crossing or whatever the unexpected edge case is.
To be clear, this is not an argument that the 'hand on wheel' rule should be dropped for existing systems. It is an argument that the 'hand on wheel' rule should not be used as a justification for allowing the general public to operate semi-autonomous vehicles.
That's just part of the pump and dump scheme that is Uber. Handing out $20 bills to entice people to accept a ride isn't a sustainable business model... robot or no.
Give it time. Soon they will have a foothold all over the planet. By the time they do, the auto industry will be able to produce a reliable self-driving car and regulations will be catching up.
Then someone who owns a self-driving (preferably electric) car will be "renting" his/her car to Uber for the hours when he/she does not need it.
Uber will THRIVE using someone else's vehicle, kicking the responsibility for maintenance etc. to THEM, while collecting a nice 5-10-20% cut JUST for the service provision (booking, ACH, etc.).
Uber's got a few advantages, but I think they mostly boil down to being the first ones to have an app, and having a whole bunch of VC cash to pour into a business that hasn't proven to be any more profitable for them in the past few years than it has been for everyone else over the past couple centuries.
They've got one big disadvantage, which is that the fleet maintenance costs for their service model are extraordinarily high. There are no in-house mechanics, and there's no fleet standardization. They've been able to get by without that through a combination of that VC money, and being able to shift the fleet maintenance costs onto the ledgers of their drivers, who presumably aren't generally factoring those costs in when forming a business plan.
I don't know when self-driving cars will be fully autonomous. But it seems a pretty safe bet that, by the time they are, their advantages in the app and the VC cash department will have eroded away.
By that point we're back to the same old commoditized, race-to-the-bottom industry that, minor departures aside, has characterized taxi service since the horse and buggy days. At which point there's no room for individual owners anymore. In that kind of industry environment, the people who have standardized fleets and in-house mechanics will always be able to outcompete the people who need to pay Volvo dealer prices for vehicle maintenance.
I also don't know when self-driving cars will be cheap enough that typical individual owners will have either the need or the inclination to massively jump their depreciation rate, or to subject them to wear and tear by random strangers. But it seems a safe bet that's going to be a long way along the technology's maturity arc. I, personally, wouldn't be willing to lay down cash to bet whether the long term looks like a bunch of individuals renting out cars they own anyway, or people mostly not owning cars because it's not worth the trouble to own one anymore. For my part, though, I'm more keenly interested in the latter - right now, for me, the main value proposition behind owning a car is that it's kind of a hassle to haul myself to a car rental shop when I want one for a weekend trip. That hassle disappears when they can just send the car to me instead.
tl;dr - Uber's "sharing economy" model, attractive as it sounds, never really made a whole lot of economic sense in the long view. Their more likely futures are to either become one of the traditional taxi companies, or be squashed by them.
>I also don't know when self-driving cars will be cheap enough that typical individual owners will have either the need or the inclination to massively jump their depreciation rate, or to subject them to wear and tear by random strangers.
So many "car sharing" discussions seem to be predicated on the idea that auto costs/depreciation are time-based. In fact, they're probably a lot more mileage-based, especially in areas that don't get snow. So that extra 25K miles that get put on a "shared" car are not remotely free just because you already own the car anyway.
I actually don't expect they'd be successful using arbitrary people's cars.
These marketplace companies don't tend to do very well when the supply side of the market isn't properly prepared for the nature of operating in the marketplace - i.e. most Uber drivers are exclusively Uber drivers - it isn't really car sharing any more.
The time and effort would shift away from dealing with drivers who know where they stand, to providing constant support to upset car owners. Quickly, it would switch to the supply primarily being provided by large businesses who own 1k+ cars because the economics stack up well in large quantities.
At that point, Uber might as well just buy the cars directly and cut out the middleman.
In that scenario the one thing they bring to the equation is a famous brand. That doesn't give much of an edge when it is easy to try a different service.
I think safety- and brand-conscious Volvo is going to deeply regret this partnership with Uber. (Where Uber provides the self-driving logic and Volvo provides the vehicle itself.)
I can only imagine the internal struggles in Volvo right now. At the very least, if I were running Volvo Cars I'd ask Uber to replace the Volvo logos on those cars with Uber logos...
(Volvo has their own, quite advanced self-driving program, but they seem to be doing it the correct/cautious way.)
I can't find any details of the accident. Every headline is the same - Uber self-driving vehicle involved in Arizona crash. You read the article, and there are no details whatsoever. So why bother reporting it? The news isn't that the accident happened, but why it happened.
That's "news" for the majority of publishers in 2017: Clickbait headline designed to generate traffic to sell ads. Actual journalism and content becomes secondary.
Let's put ourselves in the shoes of someone who reads this headline and gets a quick summary. This headline makes it sound like Uber isn't confident of its technology and they think their technology might be the cause and hence they are suspending it. In this case, it seems pretty clear, as confirmed by the police officer, that Uber's car wasn't at fault.
More importantly, those tweets suggest that the other car (driven by a human) failed to yield at the intersection and hit Uber's car. If this proves true, the only question is whether a human driver in the Uber car's position would have noticed and braked to avoid entering the intersection.
There's mention of a third car ... we'll see what the traffic cameras say!
There is a link in the second sentence. The rest of the article after that is basically just repeating recent issues around Uber. Guess they need to fill the space.
So the thing I don't get about this autonomous-car future is: is everyone time-shifting to share these cars? Does this work in a factory town where everyone has to be in at shift change?
And if I lease my car out when I'm not using it, what happens when I suddenly need it, say to go home sick?
I wonder if that is actually going to happen. Ever since school I've been aware that autonomous systems that "take over responsibility" pose an unsolved ethical problem. Delegating this away to the manufacturer, or even to the 3rd party who delivered the code, doesn't make it better. Adding many layers of responsibility usually s*cks. It's also why I don't like big corporations: it's not only that you have less control over your work - you don't even know who pushes the buttons. As it turns out, in many cases it's nobody concrete; some kind of weird group dynamic is often in control. If autonomous cars become common, analysing this kind of complexity will be an awesome kind of busywork.
I think that a car accident is the wrong example to reason about autonomous cars and their accidents. A better example would be a toaster that starts a fire, in that case the important questions are, was the manufacturer negligent and was the owner negligent. If neither, then the insurance company will just regulate the damages and nobody goes to jail.
That is also a model that works well for autonomous cars, probably with some kind of licensing of the car and mandatory insurance. The thing is, we do not need to deter an autonomous vehicle from speeding or driving under the influence; we can directly program it to respect speed limits and not drink.
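As a toy illustration of that difference (everything here is hypothetical, not any vendor's actual control code), a speed limit can be enforced as a hard constraint rather than deterred after the fact:

    # Minimal sketch: the planner can never command a speed above the limit.
    def clamp_speed(requested_mps: float, limit_mps: float) -> float:
        return min(requested_mps, limit_mps)

    # e.g. the planner requests 40 m/s in a 27.8 m/s (~100 km/h) zone
    assert clamp_speed(40.0, 27.8) == 27.8

There is no analogue of "deterrence" here; the constraint simply cannot be violated.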
>was the manufacturer negligent and was the owner negligent
Yes, but in the case of the toaster, it's pretty much the case of some sort of human error on the part of the owner (the cord was damaged, they jammed something in and it got stuck) or it's product liability on the part of the manufacturer.
It's an interesting aspect to these sorts of autonomous systems that you can expect to have accidents, even fatal ones, at some rate and that may have to be considered acceptable because "stuff happens" sometimes. There aren't a lot of examples where this is the norm in consumer-facing products. Pharmaceuticals probably come closest.
The main thing is the "risk", which in insurance speak means something like probability times costs of bad things happening. For a toaster this is less than 1 cent per year. For a human-driven car that's 1000 $ a year.
To be more concrete: the probability that a car owner gets involved in an accident at some point is actually really high. I've already been involved in 2 car accidents, not driving myself though, one of them was very serious. On the other hand I've never heard of someone having a toaster catching fire. And even if it would catch fire, you are probably watching the toaster and can put it out easily - maybe even by throwing the toaster out of the window.
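A minimal sketch of that "probability times cost" framing, with made-up numbers chosen only to land in the same orders of magnitude as above:

    # Expected annual loss = probability of a bad event * its average cost.
    # Both inputs below are invented for illustration, not actuarial data.
    def expected_annual_loss(p_incident, avg_cost):
        return p_incident * avg_cost

    toaster = expected_annual_loss(1e-6, 5_000)   # rare house fire
    car = expected_annual_loss(0.05, 20_000)      # a crash roughly every 20 years

    print(f"toaster: ~${toaster:.3f}/yr")  # ~$0.005, i.e. under a cent
    print(f"car: ~${car:,.0f}/yr")         # ~$1,000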
Kind of a silly question, but in the case of a "normal" car accident, doesn't it just come down to compensation costs - for repairs and damages, for the individuals involved, or for medical bills? Couldn't the likes of Uber simply underwrite their own insurance and fight their own cases in court where appropriate? If self-driven Ubers are as safe as we expect all self-driving cars to be, wouldn't this be a "cheap" option anyway?
I don't know where you would start with more serious accidents, if Uber is at fault.
If everybody stuck more or less to the rules, and there are no severe injuries or very costly damages, then it really is just compensation costs.
I don't know the exact situation in the US (and I'm no lawyer), but in Germany, when some kind of negligence is involved, people can lose their driver's licence temporarily or forever, may have to pay fines (on top of compensation), or may even have to go to jail. (Edit: a costly medical-psychological assessment may also be needed afterwards to be allowed to drive again.)
Coming back to the costs, it's also worth noting that insurance policies usually cover sums of 1 to 20 million. That's how much damage a single person can easily do with a car.
It's said that it's a plus that self-driving cars are somehow interconnected and learning from each other. You may argue they are thus one entity and may have to be insured like that. Ok, but that's maybe a bit too esoteric now... ;)
Their rate is only lower than humans' because humans take over whenever the software faces a problem it can't deal with, indicates a problem, or behaves erratically.
If human pilots had a 15% error rate, and autonomous planes had an error rate of 5% - which would you fly? It's basic math - you pick the one that is most likely to get you there safely.
Most people have no issues getting into cars - despite the fact that they're one of the largest killers. So we've already established that for most people, the convenience (or their love affair) of cars is enough to overcome the current risk of death/injury - if we could decrease that, is that not surely a net win?
If human pilots have an error rate with a large variance, and you can choose your pilot (or fly your own plane) the decision is less clear cut. Especially in a world where human accident statistics include a significant proportion of people exhibiting reckless behaviours that many jurisdictions will take their licenses away or even jail them for.
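A quick simulation of that point, with entirely invented error rates: the population average can favor the machine even while a careful driver individually beats it.

    # Invented numbers: a mixed population of careful and reckless drivers.
    import random
    random.seed(0)

    AUTONOMOUS_ERROR = 0.05                      # hypothetical machine rate

    def sample_human_error():
        # 80% careful drivers, 20% reckless ones
        return random.choice([0.02] * 8 + [0.30] * 2)

    samples = [sample_human_error() for _ in range(10_000)]
    mean_error = sum(samples) / len(samples)

    print(f"population mean: {mean_error:.3f} vs machine {AUTONOMOUS_ERROR}")
    # ~0.076 > 0.05: the machine beats the average human...
    print("careful driver: 0.020")
    # ...but not the careful driver you'd actually choose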
I'd want autonomous cars to be a lot safer than the average human driver before they were allowed on regular roads without a human behind the wheel. In fact, given that humans and machines are likely to make systematically different errors such that the human can solve most of the driving issues that the machine has and the machine can solve most of the driving issues a human has, the case for getting rid of the legally-responsible human driver altogether (humans ought to have the right to be drunk or sending emails whilst in their own vehicle; taxis ought to be cheaper?) isn't particularly strong.
You're making a subjective judgement that humans are fundamentally less safe than autonomous drivers.
Like most HN commenters you are making this assertion without any facts, as there isn't sufficient data to make the assertion. Other than drunk driving diversion, the case isn't as clear cut as you may think.
If you want to reduce fatalities, reducing SUVs and drunk driving is the end goal, as rollovers and alcohol related crashes are the most preventable fatality sources.
Every system has a quality of performance. In production (let's say steel) we can produce to a degree of quality; for statistical models we have a degree of confidence; etc. Every machine we currently build will fail to a certain calculable degree. The less you maintain the machine, the higher the failure rate will be.
Slinging psychiatric labels to score points in an internet argument breaks the HN guidelines against name-calling and reliably leads to (and is itself) low-quality discussion, so please don't do that here.
Edit: since we asked you not to post like this before and you not only ignored us but have been doing almost nothing else, I've banned this account.
Why is this downvoted? A sociopath is someone who holds a complete and utter disregard for everyone but themselves. Does that not describe any and all of Uber's actions, or is this getting downvoted because you're one, or want to be his bro?
I think it's wrong on a few levels to use terms like sociopath in general, and especially about someone you've never met.
When you use that word you're talking about another person's mental state, which is opaque to you. No one knows what anyone else thinks and feels.
What you should really be concerned with is that person's actions, not what they feel about their actions. For example, you can be a murderer and not be a sociopath, as long as you feel bad about it. But it's really the murdering itself that's the problem. Does it really matter how the murderer feels about it? Some people think so, but the problem has to be dealt with either way.
I'm not saying you shouldn't criticize his actions, but I'd try to avoid criticizing his mental state or making assumptions about how he perceives reality.
Actually a lot of what makes our judgements about behaviors salient is precisely what we think about their internal state. We're more likely to forgive someone who seems truly remorseful over someone who takes unrepentant pleasure in killing.
Just taking the action in isolation is not enough, we need to make judgements about how likely the person is to do the same or worse actions again. Uber's management has, through their actions, revealed something about their mental states--you'd be wrong to disregard this information.
"I think it's wrong on a few levels to use terms like sociopath in general, and especially about someone you've never met.'
I don't :)
"When you use that word you're talking about another person's mental state, which is opaque to you"
Sorry, but that is not accurate in this case.
A number of these conditions are diagnosable and defined based on their externally presentable actions and behaviors.
In particular, the DSM-IV and DSM-V definitions amount to:
"Antisocial personality disorder is characterized by a lack of regard for the moral or legal standards in the local culture. There is a marked inability to get along with others or abide by societal rules. Individuals with this disorder are sometimes called psychopaths or sociopaths."
Note that it is described in terms of behavior and not internal mental state.
It's true that some disorders are defined by opaque mental state.
This is not one of them :)
So your argument is essentially "he may not be a sociopath just because his actions seem like those a sociopath would take".
But that is not correct.
"I'm not saying you shouldn't criticize his actions, but I'd try to avoid criticizing his mental state or making assumptions about how he perceives reality."
Again, given that sociopathy is defined in terms of behavior, I think this is not correct.
I think it's 100% completely and totally fair to say this about disorders characterized by internal mental state and not external actions.
But in this case, in large part, the DSM-IV defined sociopathy in terms of external behavior.
DSM-V is a little different, in that it covers self-functioning as well, but the self-functioning it covers is externally observable, e.g. "absence of prosocial internal standards associated with failure to conform to lawful or culturally normative ethical behavior." Note that the internal standard is viewed solely in terms of failure of externally observable traits.
TL;DR if you stick with the straight definitions of what these disorders are, it's entirely possible to accurately believe someone has one based only on externally visible behavior.
I understand you may not like the societal connotations, and you can reasonably argue about that, but trying to cast this into an argument about whether it's possible to judge someone a sociopath or not based on externally visible actions seems a losing argument to me:
It is possible, and you can validly do it.
I think these definitions of being a sociopath are demonstrably very silly. You can easily think of people who "disregard legal and moral standards" whom no one would ever describe as a sociopath.
When you call someone a sociopath you are absolutely making a judgement about their internal mental state. I don't see how anyone can reasonably argue otherwise and I don't find any of your arguments convincing.
> Sure that's the only reason why people can disagree with your point?
Well, there's also the 6700 Uber employees with a vested interest in the financial success of the company. It wouldn't take a very large fraction of them to downvote things here.