Tesla FSD Beta almost causes a head-on collision [video] (youtube.com)
179 points by ra7 on May 16, 2022 | 274 comments


> "That was my first near-accident on beta"

As a tech and machine learning maximalist I'm frustrated with how haphazardly Tesla is approaching self driving. This is not a technology that should be QA'd consumer-first. On top of that, by ditching LIDAR and other sensors they're shooting themselves in the foot.

Tesla is in danger of putting the entire AV industry back by ~5 years just through loss of reputation. Regulators and AV pessimists will jump on the opportunity to use Tesla as an example of why these vehicles should be completely banned from public roads. Cruise, Waymo, Zoox, et al. are testing and rolling out responsibly while Tesla is using their customer base as crash test dummies.


There is no denying it. Even the system admitted its fault and said it was sorry.
At this point, it is impossible to defend this contraption. This thing needs to be investigated rather than turning people into crash dummies for beta testing safety-critical software that doesn't even work.

Might as well call it 'Fools Self Driving', since it doesn't work as advertised and still requires the driver to have eyes on the road at all times; paying attention and intervening is what just saved this driver's life.


They are going to start with Insane and Ludicrous self-driving modes, before graduating to 'Fools Self Driving.'


I’m a fan of “Face Sure Destruction” but fools also seems appropriate.


> On top of that their approach of ditching LIDAR

Tesla never had LIDAR. Their older cars have radar.


I believe the above commenter was referring to Elon’s insistence that LIDAR not be used, while many in the rest of the industry are using it.


That’s not what ditching means.


Rejecting the need for the technology at the design stage is ditching it. It was just ditched at design, not ditched at implementation.


i disagree. it means to leave or end association with.

you can't ditch something you were never with


Not just their customer base, but everybody else on or near the road.


So far regulators seem to be smart enough to distinguish between different vendors.


The important difference is that the other companies can back up their safety claims with compelling data. I'm optimistic that the outcome will be a regulatory regime that basically codifies the approach taken by Cruise and Waymo.


Will be fixed in the next 6 months: https://youtu.be/VfyrQVhfGZc?t=3107

Edit: Adding \s \s \s


Yeah we've never heard that before: https://www.youtube.com/watch?v=o7oZ-AQszEI

Musk has been claiming full self-driving to be just around the corner for 9 years straight. Every year.

But I'm sure this time it's going to happen. It's bound to eventually right? (wrong)


The trick is it's "Fully Self-Driving"™, which doesn't mean what the rest of the industry means by that.


Heh yeah, they'll just rename autopilot to "Fully Self-Driving", then act all confused when people think it's fully self driving.


As in you Fully Self-Drive...Yourself.


Perhaps FSD is actually a recursive GNU style acronym - "FSD Self Driving" - an easter egg for those in the know.


>Will be fixed in the next 6 months: https://youtu.be/VfyrQVhfGZc?t=3107

(2017)


Absolutely. It is my personal tragedy that many thought I posted it as a factual statement :-))


Nailed it!


Well, the technology is ready, but the regulators aren't going to let us slam into other cars yet because they are humorless troglodytes.


By the number of downvotes it seems many needed the sarcasm tag... I take offense that you thought I would take Elon Musk seriously :-))


[flagged]


Then hurry up with that rep, because one can't downvote old posts.


This was bad. Notice how little time the driver has to react. At 8:05, everything is going well. Straight road, no obstacles ahead. At 8:06, the car has veered into opposing traffic. So the driver had about 1 second to react to Tesla's mistakes.

Now look at 2:51 .. 2:54, where the driver is distracted from the road for 3 seconds while operating the touchscreen interface. If the operator had been using the touchscreen in the same way at 8:06, there would have been a crash.

This demonstrates a few things:

* One fixed forward facing camera is not enough in rainy conditions at night. Humans will move their head a little in that situation to disambiguate raindrops and windshield dirt from distant lights.

* The system seems to identify oncoming vehicles at night by recognizing two headlights. If reflections or rain on the windshield or additional lights confuse that pattern, the oncoming vehicle is not recognized.

* Failure to recognize an obstacle seems to be treated as a no-obstacle condition.


> One forward facing camera is not enough in rainy conditions at night.

Indeed, I wonder if that single camera is even binocular, or if they're using something like motion parallax to compute depth? The obvious problem with a single camera is that if it's degraded by dirt, debris, water, etc. you have no other input to fall back on.

> The system seems to identify oncoming vehicles at night by recognizing two headlights.

This seems untenable. I often see drivers with a headlight out, or even more commonly, driving at dusk or even night with their headlights completely off. Also, what about motorcycles?


There are three forward-facing cameras, each with a different depth of field, which I would _hope_ it can use as binocular vision, but this still doesn't solve for this exact problem when looking sideways! The single B-pillar-mounted cam, in my mind, cannot POSSIBLY be capable of, say, looking around a bush at an intersection of a side street connecting to a large, fast arterial road. Again, humans move their heads around the bush, they'll creep forward and then _lean_ forward to get a view around it, without having to extend the nose of the vehicle into the perpendicular traffic. The car simply does not have the sensors to be able to replicate this functionality. It has a single camera in the B-pillar for perpendicular vision. That's it. In a position fixed BEHIND THE DRIVER'S HEAD. I cannot fathom how that will ever work for these cases.


>At 8:05, everything is going well. Straight road, no obstacles ahead. At 8:06, the car has veered into opposing traffic.

I am not sure the oncoming car was even detected. Looking at the dashboard screen, it appears only after the driver took control.


> * Failure to recognize an obstacle seems to be treated as a no-obstacle condition.

I'm not sure how else this would work. Do you mean the system may see that there's an obstacle, but doesn't know what it is, so it ignores it?

If that's the case, then fixing it is nearly an impossibility. Certainly you wouldn't want to slam the brakes for a plastic bag floating in the wind, but if every obstacle is treated the same, that's what you'll end up doing. But if you try to build a system that can make a decision on every kind of obstacle, the list of obstacle types is essentially endless.


We saw a similar behavior in the self driving car that hit the person crossing the road at night in Arizona. It first recognized the obstacle far enough away to safely avoid hitting the person but it effectively couldn't decide what was in front of it or indeed if something was in front of it at all.

Presumably because it spent a large portion of the time in what we would, in a human being, describe as a state of mild confusion, it didn't slow down or react, and it struck the person it had recognized from a substantial distance at 40 mph and killed them. Same as this car would have killed its driver.

Regulators shouldn't allow a car on public roads that might cross the center line and murder a family who didn't get a say in the other driver's choice of vehicle, even if said car on average kills fewer people, because if we demand better we'll get it. There is too much money to be made for the avenue to be ignored. It seems to me that superior instrumentation could tell the difference between a plastic bag in the wind and a real obstacle. It should be required until they can prove they can do it like a human being does, on vision alone.


> even if said car on average kills fewer people

That is only based on butchered up statistics. You would get very good statistics with a goddamn robot vacuum on a motorway, because it is a relatively uneventful driving environment. Guess where people will take over the wheel? In dangerous situations, basically self-filtering any negatives from the statistics. Very convenient.
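To make that selection effect concrete, here is a toy calculation (every number below is invented purely for illustration, it is not real crash data): if drivers take over in the riskiest miles, those miles and their crashes get booked against humans, and the "system" rate is computed only over the easy miles.

    # Toy illustration of the take-over selection bias; all numbers invented.
    easy_miles, hard_miles = 95_000, 5_000          # most miles are uneventful
    crash_rate_easy, crash_rate_hard = 1e-7, 1e-4   # assumed per-mile crash risk

    # Humans drive everything, easy and hard miles alike.
    human_rate = (easy_miles * crash_rate_easy +
                  hard_miles * crash_rate_hard) / (easy_miles + hard_miles)

    # The system logs only easy miles because drivers take over when it gets hairy,
    # so the hard miles (and their crashes) land in the "human" column.
    system_rate = (easy_miles * crash_rate_easy) / easy_miles

    print(f"blended human rate:   {human_rate:.2e} crashes/mile")
    print(f"reported system rate: {system_rate:.2e} crashes/mile")
    # The "system" looks ~50x safer even if it is no better than a human
    # on the miles it actually drives.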


For maximum clarity: even if a future revision kills fewer people, we shouldn't accept it occasionally doing crazy crap that could be fixed by superior instrumentation or engineering.

The goal should be as few casualties as possible NOT better than the incompetent idiots who currently drive cars.


You think as soon as it's safer than the average person they're going to stop improving it?


> Certainly you wouldn't want to slam the brakes for a plastic bag floating in the wind.

Maybe not _slam_, but if I am not confident what the thing on the road ahead of me is, I slow down. At night, when oncoming traffic can blind me, I slow down to give the driver (me) time to react. Otherwise, if hitting obstacles is acceptable, anyone can create a self driving vehicle.


If you are driving at night, and aren't sure if there might be a dangerous obstacle hidden behind some glare, damn right you should make a gentle brake check (to warn people behind you) and start shoulder checking for exits in case it gets weird. Defensive driving means you have a positive plan to navigate safely. You can't blindly assume your path forward is default-safe. Or you can yolo and risk being part of the 40k dead and countless maimed every year.


> You can't blindly assume your path forward is default-safe. Or you can yolo and risk being part of the 40k dead and countless maimed every year.

Or worse, be the one responsible for killing or maiming others.


He means that the system doesn't know whether or not there is an obstacle there at all. "I can't see anything" is treated as "go for it!".

The alternative is to require the car to be able to see everything all the time and treat gaps in its view as "might be something there", but doing that in the dark is really only possible with LIDAR at the moment.
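Roughly, the difference between the two policies looks like this (a hypothetical sketch, not Tesla's or anyone else's actual planner; the confidence thresholds and the 4 m/s² comfortable-deceleration figure are made-up assumptions):

    # Hypothetical sketch of the two policies; not any vendor's real code.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        distance_m: float    # estimated distance to whatever was seen
        confidence: float    # classifier confidence, 0..1

    def stoppable_speed(distance_m, decel=4.0):
        # Fastest speed from which you could stop within distance_m
        # at an assumed comfortable deceleration (v^2 = 2*a*d).
        return (2 * decel * distance_m) ** 0.5

    def plan_optimistic(detections, cruise_mps):
        """'Can't recognize anything' is treated as 'nothing is there'."""
        confirmed = [d for d in detections if d.confidence > 0.8]
        if not confirmed:
            return cruise_mps
        return min(cruise_mps, stoppable_speed(min(d.distance_m for d in confirmed)))

    def plan_cautious(detections, cruise_mps):
        """Anything ambiguous caps speed until it is resolved."""
        possible = [d for d in detections if d.confidence > 0.2]
        if not possible:
            return cruise_mps
        return min(cruise_mps, stoppable_speed(min(d.distance_m for d in possible)))

    glare_blob = [Detection(distance_m=60, confidence=0.35)]  # degraded detection
    print(plan_optimistic(glare_blob, 25))  # 25.0 m/s: the blob is ignored
    print(plan_cautious(glare_blob, 25))    # ~21.9 m/s: shed speed until resolved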


A human in my country is supposed to drive in such a way as to be able to stop within the distance they can see to be clear. So it's not that crazy to ask of Tesla that the car slow down if it can't judge the road to be free; that's exactly what we ask of human drivers.

Not all humans manage to do that, and they're liable if they crash into some other car. They lose their insurance and cannot drive anymore if they do it too often.


>but doing that in the dark is really only possible with LIDAR at the moment.

Which seems a good enough reason that we need to add regulation that these vehicles must have multiple redundant sensors that can operate at any permutation of sunlight/night/rain/snow/fog.

Radar can also see at night, but has poor resolution. Sonar is good at looking through fog but may be unreliable at highway speeds. Lidar really seems like the only viable option here. Maybe even lidar + radar.

I think it's interesting that Elon Musk always says we don't need lidar because it's a visual world. Lidar is a visual sensor. Just an active one.


Because Elon is a scammer who wrongly opted for vision-based technology and can’t back out of it without losing money.

We manage to get away with two 2D sensors that can actively move around, and we have billions of neurons with extensive knowledge of the physical world. We don't get back only a 3D approximation from the angle difference between our eyes - we also semantically analyze the scene, knowing what a car, a tree, the sky, etc. are, with their usual dimensions even in directions where we don’t see them.


I wonder if cost was involved in the decision not to have radar or LIDAR installed on those cars.


Of course there's cost involved, which is why it needs to be regulated.


> Certainly you wouldn't want to slam the brakes for a plastic bag floating in the wind

There's a reason you have radar even on AVs with LiDAR; the plastic bag doesn't return much radar energy


> Notice how little time the driver has to react. At 8:05, everything is going well. Straight road, no obstacles ahead. At 8:06, the car has veered into opposing traffic. So the driver had about 1 second to react to Tesla's mistakes.

Also note the feedback to react. The car did something unexpected that pulled the wheel from his hands. Would the driver have reacted as quickly to the car continuing in the same path when suddenly the motion needed to change?


They claim to have three forward facing cameras.


> They claim to have three forward facing cameras.

The forward-facing cameras are all in one module at the rear view mirror location. They just have different fields of view.[1] So they don't get any multiple point of view or stereo benefits.

[1] https://themotordigest.com/how-many-cameras-does-a-tesla-hav...


Seems like stereo from cameras in the upper corners of the windshield would provide valuable depth perception. But what do I know...


It would work great for maybe 50 ft distance. In other words, useless for driving.
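For what it's worth, the usual back-of-the-envelope for stereo is that depth error grows with the square of range: ΔZ ≈ Z²·Δd / (f·B). Whether that makes a windshield-width baseline useless depends entirely on the assumed numbers; the baseline, focal length in pixels, and disparity noise below are illustrative guesses, not Tesla specs:

    # Back-of-the-envelope stereo depth uncertainty; all parameters are assumptions.
    def depth_error_m(range_m, baseline_m=1.2, focal_px=1400, disparity_noise_px=0.5):
        # delta_Z ~= Z^2 * delta_d / (f * B)
        return range_m ** 2 * disparity_noise_px / (focal_px * baseline_m)

    for z in (15, 50, 100, 150):
        print(f"range {z:3d} m -> depth uncertainty ~ {depth_error_m(z):4.1f} m")

With perfect calibration that doesn't look hopeless even at highway ranges; in practice vibration, rolling shutter, and calibration drift eat into it, which is presumably where the disagreement lies.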


I’ve stopped using FSD except on the highway when it uses the old model. Every single time I used FSD I experienced at least one moment of mortal terror as the car made an obviously wrong decision like lurching into oncoming traffic or off the road. I’m shocked frankly there aren’t more stories. The only thing I can imagine is people did like me and stopped using it.


Same. I've also stopped letting it make decisions about lane changes on highways, so it's basically just an advanced cruise control for me.

Once the novelty wears off (both in the case of highway and more recently surface street beta), I ultimately want to use self-driving to remove cognitive load and provide a safer experience. The vehicle does a good job at keeping lanes at a relatively steady speed (note not great, phantom braking is still a thing and it brakes/accelerates too fast still), but too often makes poor decisions which ends up increasing the amount of attention I have to pay, thus defeating the purpose. On some highways I frequent I can't even turn on auto-navigate at all, even w/ auto-lane change turned off, b/c for some bizarre reason it thinks it needs to get to the far left of a 16 lane highway to follow the route so it keeps bugging me even after I reject it (so again, just more of an annoyance). I pretty much only turn it on on surface streets when showing someone new in the car w/ me or driving on a long straight 2 lane backroad (which it does probably best of all).

Personally I'm not as up in arms about Tesla being reckless as everyone else, I think they make it pretty clear you need to supervise the car thoroughly. But for me it's just not really even close to the level it would need to be to provide the convenience L4/L5 would.


I paid $8K for FSD two years ago and still not in the beta, but I stopped caring a while back. I still use Autosteer all the time.

LIDAR FUD is what everyone talks about the most by far, but I think these are the real, actual issues:

- Side pillar cameras simply do not give enough sight coverage of intersections, no matter what computer models say.

- Camera resolution needs to be a lot higher than everyone thinks. Think of how much harder it feels for a human to drive with even just a little bit of fog or rain.

- Project assumptions based on circa 2015 peak ML/AI hype. Classic development hell where rewrites/improvements are put off far too long.


I really don't understand how anyone out there is saying FSD is doing amazing, like all those youtubers and Elon fanboys keep claiming. It does horribly. Sometimes it does well enough to fool you into thinking it's better... but then it just goes and takes the wrong turn or gets into the wrong lane.

I've requested twice now to be removed from the FSD beta and so far no one has removed me.


What made me mostly stop using it is that on a winding country road with no shoulder that I drive daily, it often STOPS for oncoming traffic in the opposite lane. These are standard width lanes, with plenty of room for 2-way traffic.

It also (on the same sort of road) applies the brakes gently very often, making me a bit sea-sick from the constant brake tapping.


+1 Bought a used X with FSD and learned the hard way to only use it on freeways and highways in simple driving conditions or traffic. Even then I’m on constant alert for phantom braking. I can’t understand the $12,000 price justification.


FSD understands roads _so much better_ than the old stack. I like using it without any "navigate" features, aka "drive in a straight line" mode, when on surface streets.


The greater your ability to be like "that was a crap move, why did it make that move?" the less likely you are to be using FSD in a situation preceding one of those crap moves to begin with.


It scares me so much that there are people out there trusting this garbage in a vehicle on public roads.


It should be made illegal ASAP and those who bought it be refunded


This has been my experience as well.


But you paid $10K for it, right?


I made a $10k donation to self driving research, at least that’s how I’ve viewed it.


To a private entity owned by a scammer.

But don’t get me wrong, I can empathize, I've had plenty of bad purchases as well.


it's 199/month


Either way, you don't own it.


that has nothing to do with the OP


It absolutely does! Tesla's practices are incredibly anti-consumer, and FSD is vapor-ware.

$10K plus $200 monthly? Talk about getting taken for a ride.


it's 10K or 200/month, not both.

who are you kidding - you pay 10 a month for spotify and don't own the songs, you rent movies on amazon and probably storage on dropbox and s3, computing power on google or ec2.

it's no different, except that you hate tesla and think it's different.


Spotify actually plays music as it claims. Amazon, etc that you listed actually provide services.

FSD is a feature that doesn't exist.


Tesla has never claimed their FSD is level 4 and you're not forced to buy it.


[flagged]


Calling it Fully Self Driving is a lie in itself.


> you pay 10 a month for spotify

Actually, I self host nearly all media I consume. And I own the bits much more than anyone who is paying for streaming services, buying lossy MP3's, and letting control over their life be licensed away by corporations.


Ok fine, you are in the absolute minority of humans on planet earth, but i will concede that if you actually do that, my comment does not apply to you.


Fully Self Ridden™.


I keep saying this, but i feel so ripped off with FSD. Why?

1. If i sell, it does not transfer to the new owner. Edit: people disputed this - but i think this only happens if you also hand over the account. Either way, given this write up, it seems rather complex and imo, it shouldn't be if Tesla is trying to be genuinely honest (https://www.findmyelectric.com/blog/does-full-self-driving-f...)

2. If i sell, I cannot keep it for my next car

3. The subscription option came out a month after we bought our car. It's about 10 years of FSD subscription payments

4. Saving the best for last - it doesn't work / exist

If i could get a refund I would. I cannot wait for the class action against Tesla for this, i'll be the first to sign up.


Purchased FSD does transfer to a new owner, unless you sell to Tesla.

If that class action happens, the most you can hope for is a settlement that allows you to either transfer it to your next car or get a comically tiny payout.


I'm not sure why you assume a class action would result in a comically tiny payout. The VW emissions cheating scandal had a class action payout of $5,000 - $10,000 per person plus VW buying back their car at pre-scandal values.

I can easily imagine a Tesla FSD class-action being a token amount plus their FSD fee back. I mean, yes, it would be an interest free loan to TSLA, but your money back is a real thing.


*only I the US.

I'm surprised the FSD is even legal in the EU. Maybe not for long...


I don't know what "only I the US" was supposed to mean.


Re the transfer, i thought it did go away unless you also hand over the account along with the car.

If i could transfer it for life, i'd gladly take that... i mean, i just don't want my money to disappear. I don't really want some small payout, i essentially either want to keep the right to get it (hopefully, one day) or get refunded.


You might be thinking of unlimited supercharging - that was initially tied to the car, then they started removing it if the car was moved to another account.


That’s not true. FSD, the purchase, does transfer with the vehicle when sold. (During beta, sure, the new owner will have to enroll, but the FSD purchase is part of the vehicle's resale value now)


So my understanding is that the FSD is linked to the car + user account. If either changes, the FSD goes. I guess you could hand over the user account, but i believe both need to go together? Is there a clear answer from Tesla somewhere on this?


FSD gets unenrolled. But the new car owner can re-enroll for no cost, just clicking through some warning screens telling you to pay attention.


if you upgrade your car, you lose fsd beta also, even if you pay for FSD again.


You won't have to pay for FSD again. Stop talking nonsense. And what do you mean, 'upgrade your car'? Specifically? I am not aware of any actions that remove a previously paid FSD entitlement from a Tesla.


upgrade = if you buy a new tesla.


> 3. The subscription option came out a month after we bought our car. Its about 10 years of FSD subscription payments

Is this true? I thought it was $200/mo or $10k

10,000/200 = 50 months or 4 years ish


maybe i got the pricing wrong, i thought it was closer to $100 and that FSD was more expensive than 10k. I would need to check the numbers. I will say though a) if FSD wasn't something i already had, i'd have canceled the subscription by now, and b) I tend to keep my cars about 5-7 years, so it's certainly not a massive payback buying it upfront when it doesn't work for the first few years.


It's advancing the human civilization to the next level. Your contribution was essential to that cause.


Haha yeah…so just advertise it as that so i can choose if i want to be in that donation bucket. It’s sold as a product yet the product doesn’t exist.


I got curious. There is a tiny bit of rain in the video but according to tesla that should be supported:

"Many factors can impact the performance of Autopilot, causing the system to be unable to function as intended. These include, but are not limited to: poor visibility (due to heavy rain, snow, fog, etc.)"

But then it goes on to say: "bright light (due to oncoming headlights, direct sunlight, etc.), mud, ice, snow, interference or obstruction by objects mounted onto the vehicle (such as a bike rack), obstruction caused by applying excessive paint or adhesive products (such as wraps, stickers, rubber coating, etc.) onto the vehicle; narrow, high curvature or winding roads, a damaged or misaligned bumper and extremely hot or cold temperatures."

What? This is an amazing set of limitations that are normal occurrences on the roads I drive every day. These do not seem like edge cases.

I hate the marketing from Tesla on this feature. When it is mentioned in the press it makes it sound like the self-driving/autopilot is nearly done, when in fact there is a ton of work that needs to be done here and a lot of people question whether their current approach can even work. I am by no means a self-driving expert, but this terrifies me.

From the results in the video it is dangerous to have this on the road. I do not want my family anywhere near a car that is operating in autopilot mode.


It's the thing that nobody really talks about and everyone conveniently ignores, because acknowledging it means Tesla (and other AV companies) will go out of business.

From my experience - if you don't have well maintained roads you're going to have problems with FSD.

Raining? Water running over the camera/sensors will warp the image. Snowing? Can't see the road lines.

Tesla can't even guarantee that their cars won't barrel directly into a fire truck that's stopped on a freeway trying to address an existing accident.

In Canada, there's a federal mandate that forbids the use of oil-based roadmarkers - provinces instead use water-based roadpaint... you can literally see each year how that works out. Worn down/invisible road lines make it difficult for even experienced drivers to work out how many lanes a road has - good luck getting a computer to work out what lane it's in without any exterior help.

Without some kind of guidance inside the road itself - RFID/wireless tracking "beacons" of sorts to help keep the vehicles in their lanes - we're not going to see true FSD any time soon globally - probably just in nice-climate countries/states like California, Texas, etc. where the weather is "clear and sunny" 99% of the time.

There's just too many variables for FSD to account for - especially with Tesla choosing to not use other tech like LiDAR, like others have mentioned.

FSD isn't going to be around in any major capacity globally for at least another 15-20 years. Too many factors are involved in making sure your multi-ton autonomous spontaneous death machine picks the right option when it sees the equivalent of the trolley problem with potentially only a few seconds to make the decision.


> Tesla can't even guarantee that their cars won't barrel directly into a fire truck that's stopped on a freeway trying to address an existing accident.

Isn’t this a design limitation of radar ACC systems, not FSD?

Not that I’m saying FSD will reliably avoid stationary objects (I have no idea), but afaik all such collisions so far were under radar Autopilot, which by design cannot see stationary objects.


Yeah, I might not be remembering the story correctly. It might have been collision avoidance rather than the FSD portion of their firmware.

I still don't think that FSD is really ready for prime time in any capacity though. It seems to make far _far_ too many mistakes that users need to correct. Does that sound like Full Self Driving to anybody? :P


Maybe Tesla's radar can't see any stationary objects, but others use radar for collision avoidance / auto braking and seem to do so also for stationary objects. It's harder to do from a signal-processing point of view, but definitely not impossible.


None of those limitations should have mattered in this case, since they all relate to the sensors. We can see the Tesla's view of the world on screen. It knows there are double yellow lines. It knows there is a car in the oncoming lane. It just decides to drive diagonally into oncoming traffic anyway for some reason. There's no way you can blame this on sensor problems. It's just a plain old software bug.


I think this is also a perception failure. If you closely watch the video, the detected car appears on the visualization way too late after the Tesla has already swerved to the oncoming lane.

A good enough sensor stack (read Lidar) would have detected a solid obstacle at X distance (real measurement) way earlier than a camera making sense of the pixels, classifying it as an object and then providing an ML estimated distance (not real measurement) to the object. All you need in this scenario is to be able to tell there is an obstacle without having to even know it's a car, which Lidar is very good at doing.


Hmm, the view of the screen isn't clear enough to tell when exactly the car appears. An eerily familiar problem, lol. But that's kind of beside the point. Here's a screengrab of slightly sooner, at 8:05.

https://i.imgur.com/Hj6glgu.png

The car made a plan to cut across the yellows and then continue straight down the opposing lane. It shouldn't do that whether or not it sees traffic up ahead.


https://imgur.com/a/y1MDoUn

Here's the screen grab of when it decides to cut across the yellows (see the path plotted). The car is clearly visible in the oncoming lane with headlights on. There's also another car closer in the farther left lane. Neither of them is detected in the visualization. This is where a system with more robust sensors would detect objects way earlier and would never decide to cross lanes, whether it's double yellow or not.

https://imgur.com/a/o7hV45F

Here's after it has crossed the yellow lines. The car is again clearly visible, but no detection yet. Clear perception failure.


I agree there were perception failures, but that's beside the point. The car made a plan to drive on the wrong side of the road. It shouldn't ever do that, no matter what it perceives. It's illegal whether or not there's oncoming traffic. You can't blame sensors for those kinds of logic errors.

On the plus side, the computer did say "I'm sorry" after almost killing the driver. So I guess that's something.


Interesting how the oncoming car appears to have one brighter headlight than the other. I wonder if that's what confused it...


Following that thought... confused it into thinking what? (Allowing for anthropomorphic use of 'thought'.)

Really, the key here is that its on-screen model shows the yellow lines. It must have become terminally confused to plot that course. Looks like it threw away the model and the rules and just decided to end it all.

Headline: "Tesla computer, in fit of depression, commits murder/suicide". (With a number of !s appropriate to the publishing venue.)

What's the name for the slash marks sometimes found on suicide victims, where they tried to cut their own throat or wrists several times before they succeed? Hesitation marks? Maybe that's what we're seeing here. /facetious


And right after that:

"The list above does not represent an exhaustive list of situations that may interfere with proper operation of Autopilot components. Never depend on these components to keep you safe."

Also, what are "extremely hot or cold temperatures"?


>I hate the marketing from Tesla on this feature.

There's a pattern in software marketing you eventually learn to recognize. Almost all of what Tesla does is just repeating this pattern.


It basically only works on California's wide, straight and sunny streets. Everything else is a death wish.


"winding roads" is a peculiar limitation. That would exclude a good chunk of the United States. Older roads that aren't high in traffic today often just follow whatever random path the wagons used to take.


If that's really Tesla's position, FSD is absolutely unusable in Germany. Roads are deliberately built not straight in order to counter fatigue in drivers.


You are being reasonable based on the data you have, but just like teenage girls on instagram your reality is being distorted by the data you are fed. It is well established that teenage girl mental health is being hurt by that information being shown to them over social media, specifically information that makes them feel less attractive or less self confident.

Similarly, if you watch a subset of car videos, specifically "Tesla FSD Failure" videos, you will believe that FSD is a huge problem. If instead you had watched videos of people leaving bars and manually driving cars after 8:00PM and a subset of FSD videos driving perfectly, you would come to the conclusion that you want your family nowhere near bars instead of FSD.

The NTSB is our best hope at actually making driving safer and not just knee-jerk reacting to a few dozen videos.


you shouldn't be allowed to operate a vehicle while intoxicated regardless of the technology involved.


I don't think that was the point they were making.

If you watch videos of drunk non-Tesla drivers you get the impression manual driving is unsafe.

But on a side note, the long-term aim is hopefully that even drunk / sleeping / 10-year-old people can use self driving cars.


The original post is "fine", although a bit antagonistic. The point still stands though.

I do not think that this feature should be on the road, and it's not "that the car is drunk", it's that the Autopilot feature is fundamentally mis-designed. Take a look at any thread related to Tesla Autopilot and you have experts calling out that the lack of Radar/Lidar is absolutely reckless. This is a clear reason why: the cameras' ability to discern objects is limited and can cause erratic behavior under normal circumstances.

The cost/part-cutting option of removing these sensors (or not using them) has time and time again been shown to produce erratic behavior. I don't want drunk drivers on the road and I don't want this Autopilot on the road. Both are bad options, but the way that this is marketed to the every-day person makes it seem like it is ready to ship and you can "be drunk and Autopilot will take the wheel". We are quite far from that. FSD needs to exceed the capabilities of a human or prove that it has dramatically fewer deaths/accidents per mile than human drivers.

I work in software, I work with ML algorithms. I don't trust either with my or my family's life right now. I know that there are life-critical software deployments, but who is regulating Tesla Autopilot right now, and are they doing enough? The NTSB is trying... we will see how it works out.


But if the likelihood of FSD causing an accident is N%, but the likelihood of a human error (alcohol or otherwise) is M%, I would always choose the better bet.

I realize the error situations may differ drastically but if the final numbers are in favor of the computer, it's worth betting on. IMO.


Sure, at the moment that is by no means clear cut yet. The statistics on self-driving are if anything: "computer with human watching over it in near perfect conditions" has lower probability of accident than humans in all conditions. Anyone who cites the statistics in any other way is deliberately misleading.

This brings up a second question: if a human driver did something like in this video and caused an accident, there would very likely be criminal charges. So if a self-driving car does it, should there be criminal charges against the engineers, the CEO?


Bear with me here. You find random Hacker News comments convincing. But what if you look at the evidence instead of commenters?

Let me save you some time by linking you to an engineering talk by the head of AI at Tesla. I'll link you directly to a time in the video where he directly contradicts the testimony of your expert hacker comments by showing the empirical results of sensor fusion side by side with the empirical results of vision only.

https://youtu.be/a510m7s_SVI?t=1400


I don't understand this argument. So because people drive drunk we should accept self-driving that utterly fails? Because FSD does better than a drunk driver it's safe? Why can't both be true, that drunk driving and FSD are dangerous? Moreover, why are self driving cars the only solution to drunk driving? It's not like the police don't know where the bars are; they could easily pull all these people out in a few sting operations, make penalties extremely painful, and things stop very quickly.


There are 30,000+ car accident related deaths in the US alone every year. There are 365 days in a year. That works out to an opportunity for roughly ninety videos of fatal car accidents every day. Watch the ninety videos of people dying before watching the one video in which no one dies and maybe you will be able to understand the argument better. The actual raw data - the safety statistics - are very clear. Human + AI is safer than human alone and Tesla's cars have the lowest probability of injury given an accident of any of the cars on our roads. The statistics are so opposite your thesis. The reason you have your thesis is because you have been misinformed by the selection bias.


Relevant footage at 8:00 in the video.

https://www.youtube.com/watch?v=zDEWi2nC-Wg&t=480s

See this screenshot, where for a split second the self-driving visualization shows with a blue line, the car deciding to drive straight across incoming traffic lanes, and apparently failing to see the incoming vehicle:

https://hypertele.fi/temp/tesla.png


A split second after that, the road markings briefly disappeared on the in-car display. Seems like the oncoming headlights blinded the cameras.


I'm curious at how much persistence the detected road markings have.

Surely the car doesn't assume the previously observed and mapped road markings just fuckin' disappear the instant there is a camera or data ingestion error?

The jitter evident in the road markings throughout the video doesn't massively inspire confidence.


It's been a common issue in these systems that they seem to lack object permanence


Tesla's Karpathy made a point on the last AI Day that they are now doing 4D training (adding time) and integrating all the camera data. I don't own a Tesla, but that talk got my hopes up that stuff like this shouldn't happen.


So they are finally working to add object permanence, to a feature they've been selling for like 6 years. It should have been there from day one.


How on earth did these shitty vehicles get on the road without it to begin with?


I imagine this is actually a pretty tough problem to solve... road markings change dramatically within a few feet sometimes... most of the residential streets in the historic area I live in have no markings at all, but it's also an area where markings, composition of road materials etc. are subject to change QUICKLY from street to street.


> I imagine this is actually a pretty tough problem to solve... road markings change dramatically within a few feet sometimes... most of the residential streets in the historic area I live in have no markings at all, but it's also an area where markings, composition of road materials etc. are subject to change QUICKLY from street to street.

I wonder if they could do something like a "Kalman filter" to address this. If you're driving forward on a road, the driving program shouldn't re-derive the scene from moment to moment; it should update its existing picture with new data. If all of a sudden its long-range sensors get jammed, it should be able to use its last picture + dead reckoning for at least a little bit (though only for trying to stop in a safe(r) position or avoiding entering a more dangerous area).

It certainly shouldn't move into oncoming traffic, like this Tesla did.

That's exactly what I would do as a human driver (say if all of a sudden I'm in a white-out). I know where I am on the road and where the nearby cars were, so I have maybe 10 seconds to try to move off to the side and stop, even if I'm blinded.
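A minimal sketch of that persistence idea, with a crude complementary filter standing in for a proper Kalman filter (all of this is hypothetical, not how any shipping stack actually works; the blend weight and decay factor are arbitrary):

    # Hypothetical sketch of lane-marking persistence; not any vendor's real code.
    # Keep a smoothed estimate of the lane-centre offset; when a frame yields no
    # detection, coast on the prediction and decay confidence instead of
    # forgetting the lane entirely.
    class LaneTracker:
        def __init__(self, alpha=0.3):
            self.offset_m = None     # lateral offset to lane centre, metres
            self.confidence = 0.0
            self.alpha = alpha       # blend weight for new measurements

        def update(self, measured_offset_m, lateral_velocity_mps, dt):
            # Predict: propagate the old estimate by ego motion (dead reckoning).
            if self.offset_m is not None:
                self.offset_m -= lateral_velocity_mps * dt
            # Correct: blend in the measurement if the camera produced one.
            if measured_offset_m is not None:
                if self.offset_m is None:
                    self.offset_m = measured_offset_m
                else:
                    self.offset_m += self.alpha * (measured_offset_m - self.offset_m)
                self.confidence = min(1.0, self.confidence + 0.2)
            else:
                self.confidence *= 0.8   # no detection: decay, don't drop to zero
            return self.offset_m, self.confidence

    trk = LaneTracker()
    # None = frames where glare blinds the camera; the estimate should survive them.
    for z in [0.10, 0.12, None, None, 0.11]:
        print(trk.update(z, lateral_velocity_mps=0.0, dt=0.05))

A planner reading that confidence value could slow down as it drops, rather than treating a blank frame as an empty road.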


In addition to LiDAR, Tesla also bucks self-driving wisdom by refusing to use mapping data. Everyone else’s L3/L4 plans include a reference mapping of what they expect to see, with the car signaling a disengagement if it deviates too far from the reference.


Yeah but they don't change within a few seconds. You constantly see the lane lines wiggling on these displays and that suggests to me that they have a problem with integrating information over time about objects.


That happens whenever FSD beta is disengaged forcefully. It has nothing to do with what the car is actually perceiving.


In general current digital camera sensors still don't have as much dynamic range as the human eye, so when there's a bright light in one part of a scene they lose details in the shadows. It will probably be necessary to use multiple redundant cameras with varying apertures on the same axis and then overlay the images in software to simulate a wider dynamic range.
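A rough sketch of what that software overlay could look like, i.e. simple exposure fusion of bracketed frames with a "well-exposedness" weight (illustrative only; the Gaussian weight and the toy 1x3 "images" are made up, and numpy is assumed to be available):

    # Illustrative exposure fusion: trust each pixel most where it is well exposed.
    import numpy as np

    def fuse_exposures(frames):
        """frames: list of HxW arrays scaled to [0, 1], same scene, different exposures."""
        fused = np.zeros_like(frames[0], dtype=float)
        weight_sum = np.zeros_like(frames[0], dtype=float)
        for img in frames:
            # Gaussian "well-exposedness" weight: mid-tones trusted, clipped
            # highlights and crushed shadows mostly ignored.
            w = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6
            fused += w * img
            weight_sum += w
        return fused / weight_sum

    # Toy 1x3 scene: short exposure keeps the headlight, long exposure keeps the shadows.
    short_exposure = np.array([[0.02, 0.03, 0.85]])
    long_exposure = np.array([[0.30, 0.40, 1.00]])
    print(fuse_exposures([short_exposure, long_exposure]))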


If your theory is correct, even a low grade laser jammer would be catastrophic out on the highways. Similar to what is used to jam military guidance tech, but not nearly as complex.


I have personally witnessed a low-angle red sun produce artifacts on a Velodyne Lidar. To make it worse, it was not repeatable at another sunset.


As someone who kinda-somewhat understands this tech (vision/ML person by training but not automotive) I find watching the self-driving progress to be cool tech, but really a solution in search of a problem.

For me, the problem with cars is that we have too damn many of them.

And making cars better is not gonna make there be fewer cars, at least not at scale. Yes, there might be some secondary effects from self-driving cars being available on-demand in a fleet, and then you don't need your own car anymore, so you're overall less likely to take a car except when you really need to. The same argument also applies pretty much the same way to Uber though, just potentially with a slightly higher price tag because you have to pay the driver. And do we see decreased car ownership through the market entry of Uber?

In the end, cars are a problem for many reasons. Pollution (at usage time for IC, at production for EV), noise, accidents, the unspeakable hell-scape that is car-centric suburbia, loss of public space for humans, you name it. Having to actually drive them is not one of the problems besides safety. There should be fewer cars in total. Which would also largely cover the safety issues.

So that's why the self-driving future for me is... kinda irrelevant really. Yes, the cars we will still have in the future might be cooler if they're self-driving, and I hope they are. But I don't get what all the fuss is about, really.

This is not to offend anyone here of course; I get it. It is cool tech.


The reason why there are so many cars is because people find them convenient and there are many objective benefits to owning a car compared to whatever you propose (sharing cars or public transport).

You personally might not find these benefits useful, or they might not outweigh the disadvantages (which BTW are subjective). But every person is different.


But the reason we find them convenient is that, at least in most of the US, cities are built for cars. Before cars became common, Los Angeles was zoned for higher density and had trolleys that extended all the way into the city's NE reaches. Same goes for São Paulo, the largest city in the Americas. In the case of SP—which is still way more serviceable via public transit than LA—an urban planner/mayor built a series of radial avenues & encouraged people to move out of the city center to its new peripheries. That, plus other "improvements," cut the population density of the center by at least 50%, and exacerbated patterns of residential segregation that began after abolition.

This is all to say that cars aren't solely a matter of personal choice. Car culture has been imposed on us by urban planners with tremendous power. For more on this, even in NYC, see the introduction to Robert Caro's Pulitzer Prize-winning book on Robert Moses.


> But the reason we find them convenient is that, at least in most of the US, cities are built for cars.

A subtly better way to frame this is to say that American cities were destroyed for cars. Streetcar tracks were ripped up. Roads were widened at the expense of sidewalk. Neighborhoods were demolished and replaced with parking lots, freeways, and car dealerships. Many municipalities stopped maintaining their core inner cities and bet the house on big-box stores on the outskirts. It cost hundreds of billions of dollars collectively to subsidize the car in the U.S., which had the side effect of cannibalizing passenger rail and metropolitan transit, as well as building places that are prohibitively dangerous to cycle or walk in.

One illuminating exercise is to find photos of Houston from the 1920s and compare them to the 1970s. You would think that someone had carpet-bombed the city.


I live in a city that is awful for cars (Buenos Aires) and has kind-of serviceable public transport (it's the method of transport for most people). Yet people that can afford to own a car will do so. And they'll use it whenever the situation allows it (mostly parking related).

Remember that owning a car doesn't ban you from using the subway or whatever if it's a better choice for that trip.

And public transport will never have the privacy that a car has. Comfortable heated seats. Setting the AC to your liking. Blasting your music. Having a quiet conversation with your partner. Not worrying about other people's COVID or whatever. People selling you stuff or acting crazy in general. Leaving from your driveway and arriving at the destination's doorway. Which is a big deal if it's raining, if you're with your elderly parent, if you're carrying something heavy, etc. And if you're a nocturnal beast like me, cars are so much better in general.

BTW I much preferred living in LA when I lived there, to the few stays I had in NYC Manhattan, and to Buenos Aires.


To solve a wide variety of problems we face, we really should suck it up and bear some minor inconveniences. Especially regarding climate change.


Living in a walkable city is extremely convenient. I would much prefer to do so than drive. For short trips parking is a significant time sink that's not necessary with other modes of transportation.

However, that ship has largely sailed in North America, so you're not wrong. Given the pattern of city planning we're stuck with, cars are not going to be replaced any time soon.


Yes and no. They are very convenient in most of the built environment we have in the US (I’d go further and say they’re _necessary_). But we can also build environments where they’re not needed, or even a nuisance for the owner.

There’s a reason most New Yorkers don’t drive.


I think ideal progression would be:

1st step: Everybody has their own car and drives it manually (current situation, too many cars)

2nd step: Minority of the private individuals now own self driving cars (next step)

3rd step: Majority of the private individuals now own self driving cars. (almost all cars are self driving now!)

4th step: Commercial entities enter the self driving car sharing market as an option to privately owning a car, you pay for a subscription and have access to a fleet of cars.

5th step: People stop buying cars and the only cars on the road are owned by commercial entities, which can be rented at will.

5th step would be ideal, as this also eliminates the need of parking spaces within the city! Whenever their services are not needed, the autonomous cars can just navigate to a parking facility/hangar in the outskirts of the city. The amount of cars on the streets can be increased and decreased at will.


>For me, the problem with cars is that we have too damn many of them.

have you sold (to the wreckers for scrap value obviously, otherwise it would still contribute to the problem ) yours yet?


have never owned one. yes, I'm a city dweller. I get it, that's easy here. In a very walkable european city on top of that.

yes, I know what it's like in the countryside. i come from a village in germany. very difficult there without a car. the point is that it could be much, much better.


Well, the reason you have so many of them is because the cars are dumb and need to be linked to a specific "smart" operator. Thus everyone who needs to go places also needs to have a car, a 1:1 mapping. If you move the smarts to the car, then you can change the ratio.

But isn't this public transit? Yes and no. Public transit solves this ratio at the cost of transfers, fixed lines, and lack of comfort. Only autonomous EVs can have the rider-to-driver ratio of public transit but the point-to-point convenience and seating comfort of cars. Autonomous EVs can work with cities as they are, not as a minority would wish them to be.


The problem isn’t so much the total number of cars overall (although that’s a problem) but the number of cars _in use_ at any one time. It’s not really solvable to have individual cars once you reach a certain density, you simply don’t have room for everyone. Density has lots of advantages, so it emerges nonetheless, leaving you in a bind.


Well, public transit solves this by having the vast majority of riders stand and pack very tightly during these times. No reason you can't be cramped into autonomous EVs, other than people don't want to do that.

This is the issue with comparing transit to cars - the amazing capacity numbers touted by eg subways are based on great rider discomfort.

The total number of cars is still a problem - they take up street parking, they require large parking structures to store during the day, etc, etc. You should think of autonomous EVs as red blood cells - constantly circulating and carrying useful loads everywhere, not sitting idle 90 percent of the time.


Public transit will always be able to fit more people per space taken (in use and otherwise), so any problem with being packed like sardines will be multiplied tenfold in autonomous EVs.

It’s 100% possible to run service often enough that people aren’t packed, even during rush hour. Some of that is by automating subways so they run every minute and closer together all day long. That’s a solved problem though, thankfully, so it’s simply a political problem not an engineering/space constraint one like autonomous individual cars.


Automated subways are just autonomous cars that work in a highly constrained environment. I.e., this is what Tesla and Boring Co are attempting to build. The difference between tunnels and streets is the complexity of the environment.

Let's step back and deconstruct the train. A typical LRT car has 5 segments, 50 seats and a total capacity of 250. At rush hour, this train will come every 5 minutes. What if instead, each segment came once per minute? What if instead every 6 seconds a five person vehicle rolled by? The throughput would be the same. With 5 people per vehicle instead of 250, it is highly unlikely that it needs to stop at every stop, so as long as you have a separate loading/unloading zone you can get transit capacity, seating for everyone, less waiting etc. THAT is your autonomous EVs based transit system.
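Writing out the throughput arithmetic from the numbers above (capacities and headways as stated; people per minute is just capacity divided by headway):

    # People per minute for the three service patterns described above.
    print(250 / 5)        # 5-segment train, 250 people, every 5 minutes  -> 50.0
    print(50 / 1)         # single 50-person segment every minute         -> 50.0
    print(5 * (60 / 6))   # 5-person vehicle every 6 seconds              -> 50.0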

The reason you can't do this with drivers is because the costs of running 50 vehicles instead of one train are astronomically higher, because you need all those extra drivers. Saying it's "political" is correct, but a misdirection. Nobody is gonna pay for such a setup.


> Automated subways are just autonomous cars that work in a highly constrained environment. Ie, this is that Tesla and Boring Co are attempting to build. The difference between tunnels and streets is the complexity of the environment.

No, they’re really not, just like trains aren’t just cars that operate in constrained environments. They have a number of advantages, including feasibility, max speed, efficiency.

The main reason I know this is a poor comparison is that they are already in use in many countries (and they look nothing like cramped tunnels with traffic jams).

> A typical LRT car has 5 segments, 50 seats and a total capacity of 250.

The Paris metro has two automated lines, one built that way and one retrofitted. The retrofitted line, line 1, uses cars with capacity for 722 people https://en.wikipedia.org/wiki/MP_05 (by comparison, an NYC train can carry around 2,000 people, they are longer).

During rush hour, they have headway of 3 minutes (I’ve seen less in other subway systems, as little as 1m30, indeed it looks like line 14 has an 85 second minimum headway for safety).

> What if instead every 6 seconds a five person vehicle rolled by?

You’re going to struggle to embark in 6 seconds. If we solve that problem (with magical thinking or otherwise), subways are even more advantageous, since that’s where they waste tons of time!

> it is highly unlikely that it needs to stop at every stop, so as long as you have a separate loading/unloading zone you can get transit capacity, seating for everyone, less waiting etc. THAT is your autonomous EVs based transit system.

Turns out tons of people go to the same small set of stations, so you’re going to have a large problem embarking and disembarking in the given space. Same with time, everyone wants to move around at the same time.

Again, this is a solved engineering problem, there’s no need for train-like-but-not-as-good transit autonomous individual EVs.
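Running your line 1 numbers through the same people-per-hour arithmetic as the 5-person-vehicle scheme upthread (capacities and headways as quoted in this thread; the division is the only thing added):

    # Hourly throughput per direction, using the figures quoted above.
    print(722 * 3600 / 180)   # MP 05 train, 722 people, 3-minute headway -> 14,440/hour
    print(722 * 3600 / 85)    # same train at the 85-second minimum       -> ~30,600/hour
    print(5 * 3600 / 6)       # 5-person vehicle every 6 seconds          -> 3,000/hour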


Have you looked at the density of 5-person cars vs a 50-person bus? The bus is certainly not 10 times as long; cars don't scale (let's not even talk about the fact that people feel much less comfortable sitting in a car with 4 strangers than sitting on a bus with 50). Funny you mention Musk's tunnel venture: there are videos of traffic jams in the Las Vegas tunnel, even though they shouldn't occur.


Hopefully this upcoming documentary will help expose how reckless Elon/Tesla's approach has been to a wider audience. https://www.nytimes.com/2022/05/16/NYT-Presents/elon-musk-te...

It's embarrassing to see articles that treat Waymo's Driver and Tesla's software as if they were anywhere in the same league.


tesla's 'self driving' features are something otherworldly to behold sometimes. ive seen a model X approach an intersection with a green light and a clear path of travel only to lock up the brakes and skid just a few feet short of the light.

on a motorcycle ive merged in front of a tesla with plenty of headspace only to see it surge forward momentarily before applying every gram of brake-force to drag the car to a screeching halt. A few streets later this same tesla resumed travel next to me from the light, only to slowly and methodically merge into the shoulder and onto the grass before its driver took over.


Here's a video I shot when we test drove my friend's FSD beta back in November of 2021 when it was on 10.4: https://youtu.be/qKvhvwmynZc

It's downright terrifying. At one point, it swerved us into a suicide lane WHERE ANOTHER TRUCK WAS STOPPED AND FACING US, and he had to take over to avoid it just, well, killing us. In clear, broad daylight with no weather.

My biggest critique that seems completely unsolvable is what I (the passenger) mention at the beginning of this video and throughout: the B-pillar and STATIC positioning of the ONLY side-facing camera makes things VERY difficult on the car as it attempts to turn out from a small feeder road onto a large, fast, multi-lane arterial road. There are limitless occlusions that get in the way, that humans compensate for by moving their head, binocular vision, and gently easing the car forward a bit to see around something, but not so much that you put your nose out into traffic. (Though, also definitely that, people do that all the time.)

The car cannot do anything but ease itself forward. It can't "look around" a telephone or utility box or a bush that hasn't been trimmed down in a bit. It frankly can't even do the human mental gymnastics of being able to see motion THROUGH the branches of a fairly dense bush and interpreting that as a likely vehicle. The number of times we encounter these situations and compensate for them on a daily basis is astounding. The Tesla is simply not equipped with the sensors needed to address this, in any stretch of the imagination. Fixed frame. Fixed focal length. Fixed location. No pan or tilt. No way to see "around" something. A single side-facing camera located BEHIND the driver's head.

Tell me how a Tesla ever manages to turn out on to a road, from a neighborhood 25 mph road to a "nominally-45-mph-but-really-65-mph" road like the one outside my house, when I have to crane my neck around the bush that blocks us, and I know I need to start looking well ahead of the bush to understand the traffic dynamics as I approach?


It's crazy that they didn't put at least an extra camera in the place where, for example, BMW has their "side view camera" option for pulling out of parking spaces. That's right at the sides of the nose. It would have cost them less than $100 per car, while FSD costs thousands...


I'm sure that Tesla will eventually have to bring radar back into the stack. The problem with a vision-only system is that it relies on visual cues that can be easily blinded by the conditions of the road. For instance a sun glare, high beams from traffic, water on the windshield, or simply a small depression in the road terrain.

These are the same things that can cause a traffic accident for a human with human eyes, so there's really no logical reason to assume you can create a better-performing, safer computer driver by emulating human vision alone.

Radar solves all these problems by enhancing the perception of the car and creating redundancy so the car can cross-check its assumptions about the road, the terrain, and the obstacles with two different and independent perception systems.
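
To make the cross-check idea concrete, here is a minimal sketch (Python, with invented numbers; this is not any real AV stack) of fusing two independent range estimates when they agree and flagging them when they don't:

    import math

    def fuse_or_flag(vision_m, radar_m, vision_sigma=2.0, radar_sigma=5.0, gate=3.0):
        """Cross-check two independent range estimates (metres).

        If they agree within `gate` combined standard deviations, return an
        inverse-variance weighted fusion; otherwise flag the conflict so a
        higher-level planner can fall back or hand control to the driver.
        """
        diff = abs(vision_m - radar_m)
        combined_sigma = math.sqrt(vision_sigma ** 2 + radar_sigma ** 2)
        if diff > gate * combined_sigma:
            return None, "sensor_disagreement"      # cross-check failed
        w_v = 1.0 / vision_sigma ** 2               # weight each sensor by confidence
        w_r = 1.0 / radar_sigma ** 2
        fused = (w_v * vision_m + w_r * radar_m) / (w_v + w_r)
        return fused, "ok"

    # A wildly inconsistent radar return (e.g. an overpass) trips the flag:
    print(fuse_or_flag(vision_m=120.0, radar_m=35.0))   # (None, 'sensor_disagreement')
    print(fuse_or_flag(vision_m=120.0, radar_m=118.0))  # (~119.7, 'ok')

The point is the disagreement branch: with a single sensor there is nothing to disagree with, so the planner never learns that its one estimate might be wrong.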

I can't understand why they removed radar. You don't need to be an expert to understand why vision alone won't work.


Back in 2005-6 I was working on autonomous off-road vehicles for the US Army, part of the far outer fringes of the [Future Combat Systems](https://en.wikipedia.org/wiki/Future_Combat_Systems). In our tests we found that MMWR (millimeter-wave radar) had major speed limitations, even with a leader-follower system (where the manned leader built terrain models as it was manually driven and shipped them to the followers, which then tried to terrain-match the leader's MMWR images). Based on sensor range versus safe-stopping deceleration rates, we figured it was limited to about 30 kph.

We also had tremendous problems with dust and the like up in the air causing the radar to think it was a wall and panic-stop. And further problems: since we were off-road, the radar LOVED puddles and any sort of standing water - it looked perfectly flat, so the vehicle wanted to go there every time. Various attempts were made to compensate (some people tried sensor fusion with visual and IR cameras, which is hugely hard to do in real time; our team tried to do it through human-robot collaboration, but we never found a good way to adjudicate differences between human and robot, and if the human is always having to intervene anyway, the robot's not giving you much help). None of them were working when FCS was cancelled and our project focus shifted.


>since we were off-road, the radar LOVED puddles and any sort of standing water- it looked perfectly flat and so wanted to go there every time.

I'm sorry, but the mental image of an AI-driven Humvee that gleefully enjoys playing in the mud made me laugh out loud.


Worse- it was a 15 ton Stryker ICV versus a mere 2.5 ton Humvee.


Thank you for sharing this real-world experience!

It is precisely these sorts of really-very-hard challenges that have led me to conclude that "FSD" absent general AI is not plausible, unless you narrow the definition of FSD to mean something that can only be used in the 80% case.

The thing that perplexes me is why the industry isn't chasing the 80% case, e.g. follow-the-leader cross-country traffic on interstates... I always think the interstate system is tailor-made in so many dimensions (regulatory and physical) for rolling out limited but robust automation, and the highway miles there probably represent the vast majority of total miles driven...

In shipping, it seems you could predict and share local conditions in real time to allow for a graceful exit from automation when required, and avoid the situations it doesn't handle reasonably well.

There's always the outliers, though... tumbleweeds, smoke, fog, bad actors...


Yeah, back then I also thought that a leader-follower convoying system would be implemented first (with local drivers for the last mile problem), because that system made sense and was straightforward- with good logistics software (to make sure that the leader is driving the farthest distance or connecting multiple leaders together) you could easily save 2/3 or more of your driver costs.

Then a few years ago I talked to an actual software consultant to shipping companies and asked him why none of that had happened in over a decade. His feeling was that the companies were too fragmented for it. Most shipping companies are small operations where, basically, the owner's wife keeps a list of all of their loads and destinations. Maybe Walmart has the access to capital to acquire that technology and the scale to profit from it, but he felt that no one else could. Which was why he left that field and went into industrial robotics, where the business case can actually close.


But you were using radar for terrain mapping. Self driving road cars use cameras to follow lanes and radar to detect obstacles and other vehicles.

Normal radar-based adaptive cruise control, which you can buy in the majority of car brands today, will do 180 kph (110 mph) and pick up other cars from over 200 m away.

So I think that setup is very different from the terrain mapping you were trying to do?


Radar has poor resolution, which means it can't always differentiate between an object near the car's path and an object in the car's path. The radar can give false positives when there's a truck on the side of the road or a metal sign on an overpass. Then you have to use the vision cameras to figure out what's really going on. Radar also loses its "lock" sometimes, which can be an issue with cars braking quickly. The radar will say "no obstacles detected" when the car in front of you is slamming on the brakes. Again, you have to use the vision cameras to figure out what's actually going on.

The people at Tesla decided that the radar was creating more noise than it was helping with, so they removed radar last year. Andrej Karpathy explains it in far more detail in this talk: https://www.youtube.com/watch?v=g6bOwQdCJrc&t=7m


These are Tesla-only problems because they were using Continental radars from 2012. Everyone else has developed advanced radar, like Waymo with its high-resolution imaging radar that can track both moving and stationary objects [1] and see through fog [2].

This is another way of saying Tesla wanted to cut costs and not invest in radar development.

[1] https://blog.waymo.com/2020/03/introducing-5th-generation-wa...

[2] https://blog.waymo.com/2021/11/a-fog-blog.html


If these were Tesla-only problems, why did other companies feel the need to advance the state of the art in their radar hardware? They did so because poor radar equipment wasn't a Tesla-only problem; it was a common problem that many identified.

The world then advanced to one in which Waymo had more advanced radars while Tesla stopped using bad radars. Both companies abandoned the radars they previously had; both identified getting rid of their former radars as the correct response to hardware that didn't meet their needs. So it wasn't a Tesla-only problem, and the two companies didn't have terribly different ideas about how to solve it. Both improved their cars by relying on technology to make up for shortcomings in their radar systems; the difference was in how. Tesla devoted more effort to the vision stack, whereas Waymo devoted more effort to radar hardware improvements.

In the video you dismiss, there are segments showing Tesla's vision-only system successfully estimating distances despite fog, and identifying both stationary and moving objects. So the capabilities you cite in support of your thesis that Tesla failed aren't things Waymo does that Tesla doesn't; they are things both systems do, which makes them worthless for a claim of superiority.

Having failed to establish superiority, you move on to claiming that Tesla's motivation was cutting costs, cast in a bad light as if cutting costs harms safety. In reality, training data is precious and collecting it improves performance. Lower unit cost allows greater production volume, which means more sales, which means more data, which means a better training set and better performance. Nor is it correct that cutting costs results in less spending on the critical problems: more sales means more revenue and more profit, which means more capital allocated to solving self-driving problems, not less. Over time, cutting costs increases the amount spent on researching the problem.

The big difference isn't that Tesla is approaching the problem badly and Waymo well. It is that Waymo is funded by businesses other than itself, so it can afford cost-ineffective decisions when it thinks they improve the probability of eventual success. Historically this spending model has been both overwhelming and underwhelming in what it produced: overwhelming because it does things like get us to the moon, underwhelming because it does so in such a cost-ineffective way that we don't go back.

This is shaping up to be the difference between Waymo and Tesla in practice. Waymo handles driving in geofenced locations well; that is the overwhelming, awesome thing it does. At the same time, it has extremely low production volume and supports only a few cities. That is like getting to the moon and then not being able to keep going there. Meanwhile, Tesla has millions upon millions more self-driving miles.

In practice, though, this works out to Waymo harming public health relative to Tesla. Why? Self-driving plus a human is already better than a human driving alone. Waymo has technology that could save some fraction of the roughly ninety people who die in US traffic accidents every day, but it isn't deploying that technology widely enough to save them. Tesla is deploying a similar technology, and as it scales and captures more of the market, it prevents more of those deaths. In terms of net death reduction, Waymo loses by far, and every moment it doesn't scale while Tesla does, it loses more. Its only hope of catching up is to reach greater scale than Tesla, which runs into its cost-effectiveness problem, or for Tesla to mess up so massively that regulators are forced to act; neither regulators nor Tesla want that at present, because the current trend is lives being saved from human error.


Radar and lidar may be needed, but this is not a good example of it. I am watching this video on YouTube, recorded with a camera. I can clearly see the lanes and I can clearly see the oncoming traffic. The car should not have crossed over, and it should have been able to detect the large light approaching in the other lane. This is a complete failure of the AI, one that radar and lidar would not solve.


The motivations I have seen were:

- If radar/lidar fails, vision needs to work. If vision works, radar/lidar are not needed.

- Sensor fusion is hard: outliers in radar data ended up hurting more than the non-outliers helped.

The second one sounds to me like the real reason: Sensor fusion is not trivial and if it doesn't work correctly, you'll get the worst of all worlds.


If radar/lidar fails, it's perfectly fine for vision to work just barely enough to slow down for a few seconds while telling you that you're going to have to self-drive yourself.


Aside from the visual cues you mentioned, I like to use the example of a screen door or vertical blinds. Most people have had the experience of inadvertently focusing on these things a bit wrong and "feeling" that the objects are much closer or farther than they really are. Computer vision can easily make the same mistakes, and in fact can often perform much worse than human vision. Backlit scenes are difficult, frontlit scenes are more difficult, and scenes with a lot of repetition and few large features are almost impossible. Examples of almost impossible scenes are a backlit beach on a sunny day, a cloudy day while skiing, and all kinds of driving conditions, such as a frontlit or sidelit wet road.


> These are the same things that can cause a traffic accident for a human with human eyes, so there's really no logical reason to assume that you can create a more performing and safer computer driver by emulating human vision only.

Okay. This is a concrete claim that I think is falsifiable. I'll give not just one, but multiple arguments.

Argument 1: Connect Four, Chess, Checkers, Go, Limit Hold'em, No Limit Hold'em, Poker, DOTA, and Starcraft II are all examples of games in which AI outperforms even the best humans despite having equal sensory access. Driving can be reduced to a problem comparable to playing an imperfect information game and we can play imperfect information games at superhuman levels using computers. Therefore, it is reasonable to expect that even human-level sensory access can enable safer than human performance.

Argument 2: Currently we have statistics on driver-assistance programs that demonstrate that human + technology assistance is superior to human alone. This holds true even in cases in which those assistance technologies are only using vision. So we have a contradiction. If it wasn't possible to do better than human with only human senses, we wouldn't be able to point to many different situations in which driving with an AI clearly outperforms humans, but we can. For at least those subsets of AI driving, we can say that an AI driver will outperform humans.

> I can't understand why they removed radar.

Check out https://www.youtube.com/watch?v=g6bOwQdCJrc for a very detailed breakdown of why they did it.


Radar would not have helped in this situation. Even if the oncoming car wasn't there, the Tesla still made an illegal maneuver - in no state I can think of is it legal to cross two double yellow lines.

Either

1.) The camera's fidelity isn't good enough to make out the double yellow lines

2.) A software bug caused the Tesla to drive too aggressively.


> I can't understand why they removed radar.

The cost per unit times 500k+ units/year is significant. Also, I think the radar they were using started getting deprecated by the manufacturer.

Now, do I think they should have included RADAR -or- stopped selling FSD? Yes. I don't see how they can deliver FSD without RADAR.


I was under the impression that they removed it because they were having a hard time sourcing the components during the current chip shortage.


A false impression, and a common one. We don't need to guess here. There was an engineering talk at the Computer Vision and Pattern Recognition (CVPR) conference in 2021, in the Workshop on Autonomous Driving, by Andrej Karpathy, senior director of AI at Tesla. In the latter portions of the talk he dives deep into why they chose to go all in on video, with compelling empirical results showing vision-only to be far more accurate than vision + radar.

https://news.ycombinator.com/item?id=31414374

They do end up doing sensor fusion, but not where you would think. They still use radar at training time, because at training time you can do things to correct for radar's errors; for example, you can use hindsight to reason about what happened. At inference time, the uncertainty of a hallucinating radar is a real risk to safety. It says things aren't there when they are. It says things are there when they aren't. That makes it a lot harder for the car to plan its next actions and promotes very dangerous behavior - like slamming on the brakes - which can surprise other drivers and cause accidents.


Only if the visual system came with more than goldfish memory, plus some common sense. AV systems seem very focused on the data in front of them right now but lack a mental model and decision making. They should really develop for environments with no markings or lanes first.


> I can't understand why they removed radar.

Elon Musk's ego.


I realize this is an optional (and beta) feature but it gives me pause about buying a Tesla. If the company can be so careless releasing a feature like this, I worry about other safety related decisions there.


It is worth repeating that other road users didn't opt into this beta, but are still put in harm's way by it.

For example, if this incident had actually ended in a head-on collision, the other driver would have taken no consolation in it being "only in beta" or "you opt into it" (they did not).


That is exactly why I have no interest in buying a Tesla. The ethical way to test the feature is to build larger and more elaborate closed test tracks.


Which is exactly what Google did.[0][1] They built an entire fake city to test in: "Castle", California.

[0]: https://www.theatlantic.com/technology/archive/2017/08/insid...

[1]: https://www.wired.com/story/google-waymo-self-driving-car-ca...


I'd love to see Musk put his money where his mouth is and have Tesla accept liability for accidents that occur with FSD active, similar to what Mercedes is doing[1]. Mercedes's system has many limitations (see article), but it's better than Tesla's approach of constantly over-promising. Maybe having some money on the line would force Tesla marketing to narrow down the use cases where they're actually confident in FSD vs. ones where it should not be used.

[1] https://www.motor1.com/news/575167/mercedes-accepts-liabilit...


I'm shocked that Tesla has the fanbase it has.

The slipshod nature of their manufacturing and their self driving is putting humans at risk - and not just the drivers.

I own a Tesla, and it's been a moderately OK ownership experience, but I've been dealing with issues with the car since day one, and I'd never consider buying another one once other manufacturers get their electric vehicles dialed in.

Tesla is rapidly becoming synonymous with low quality in my mind, which is the opposite of what I think they want their brand to be.


I've seen enough fails from Tesla's Autopilot and FSD Beta (typing those names out feels laughable given how far they are from describing the things they're meant to, "Beta" aside) that I actively avoid any Tesla I see on the road. I never know if the car is being driven by someone or if one of these tools is engaged. I almost feel there should be regulation requiring something that communicates to other drivers that a human is not controlling the vehicle.

It really bothers me that we have been opted in, without any choice, to Tesla's marketing and hype bubble and its live, on-the-streets research and development. These are people's lives. The road isn't a place to move fast and break things.

If one gets hit by a Tesla car under one of these "automated" systems, what prevents a person from holding Tesla responsible?


>I almost feel there should be regulation that communicates to other drivers that a human is not controlling the vehicle

That's an awesome idea which would really expose how trustworthy it actually is/isn't.

Learner drivers here in Australia are required to show "L" plates and to have a fully licensed driver ready to help; a Tesla is no different, so it should be subject to the same conditions.


A few years ago, I went to Las Vegas for the first time and had a chance to ride in a self driving BMW 5 series. There was an engineer in the passenger seat, and a full time driver to take over. I forget the company, but it wasn't Tesla/Waymo/Uber.

It was a pretty short ride, near the end an Escalade cut us off pretty egregiously, and the car dove out of the way while braking. I was still so glad we had a backup driver and engineer. I remember thinking how hard those situations will be for a computer to detect, predict, and act better than a human could.

It's going to be a bumpy road getting to transportation utopia.


Sounds like Motional.


"FSD might be degraded due to weather"

and it still allows it to be engaged? Surely as a life critical system, allowing it to carry on in a known degraded state isn't a great thing?


Company culture flows from the top - just as Musk is reckless (plenty of evidence for that around [1] [2]), so is Tesla.

The only way it will stop is if regulators tell Tesla it must stop this beta from being used on public roads.

[1] https://www.reuters.com/business/autos-transportation/court-...

[2] https://www.theguardian.com/technology/2018/jul/15/elon-musk...


What troubles me every time I watch a video like this: why isn't there any indicator of how degraded a car's FSD mode currently is? I mean, if there are troubling conditions due to bad weather or missing road markings, the system should be able to tell how well it can make decisions, right?


Incident of interest shortly after the 8 minute mark.

Description: after the right turn around 8 min, a few seconds of travel later it crosses left across lanes in preparation for a left turn. Instead of just changing lanes, then entering the well-marked left-turn lane, it prematurely crosses (double yellow) into what I think is dead space between the alternating turn lanes (shared with oncoming traffic); there is no indication it's going to straighten out.


I don't understand why the displays in Teslas are always so jumpy/glitchy.

For example the road: sometimes parts of the road turn on and off, or cars and pedestrians jump all over the place.

Doesn't Tesla do some kind of estimation of where objects will be next?


I don't know what's scarier - Tesla FSD, or people with reaction times like the driver in the video... even after the car is clearly and completely inappropriately angled into oncoming traffic, he doesn't immediately take control of the vehicle.


I am sure he will get his FSD privileges suspended.


based on what? negative-value comment.


I meant it as a joke (hopefully), but a technical reply to your question would be "violating the non-disclosure agreement": https://www.theverge.com/2021/9/28/22696463/tesla-fsd-beta-n...


Why, at 7 min 14 sec, does the car accelerate to 50 when the posted speed limit is 40? https://youtu.be/zDEWi2nC-Wg?t=435


The video's author addresses this in a comment:

> My offset is set at +10 so it goes 10 (km/h) over the posted speed limit

Which is an... interesting feature. On the one hand, human drivers frequently exceed posted speed limits. But allowing self-driving tech to ignore speed limits feels wrong somehow. Though it's better than your car snitching on you if you speed.

Edit: From an implementation perspective, this should really be a percentage. 10 km/h is only 10% over the speed limit at highway speeds (typically 100 km/h in Canada), but quite dangerous in a school zone (30 km/h) or on a 40 km/h residential street.
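
A rough sketch of what that could look like (Python, purely illustrative; this is not Tesla's implementation, and the names are made up):

    def target_speed(posted_kmh: float, offset_pct: float = 10.0) -> float:
        """Scale the allowed excess with the posted limit instead of adding a flat amount."""
        return posted_kmh * (1.0 + offset_pct / 100.0)

    for limit in (30, 40, 100):
        print(f"posted {limit} km/h: flat +10 -> {limit + 10} km/h, "
              f"10% -> {target_speed(limit):.0f} km/h")

A flat +10 km/h is a 33% excess in a 30 km/h school zone but only 10% at highway speed; a percentage keeps the relative excess constant.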


One glance at the screen and you immediately notice that whatever they use (ML, a shitty hardcoded algorithm, some hybrid in between) has zero persistence, no object permanence. Objects vanish and reappear randomly, float in space. Would you trust a driver with a 0.016-second short-term memory?

At 8:05, right before he grabs the wheel, you can see the projected trajectory (blue line) jump for one video frame from the right side of the yellow lines to the oncoming lane and then jump back. It's like being driven by a hamster on crack.
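
For what it's worth, even a toy tracker can fake some object permanence on a display by keeping a detection alive for a handful of missed frames instead of dropping it instantly. A sketch (Python, purely illustrative; it assumes detection-to-ID association is already solved upstream, which is the genuinely hard part):

    from dataclasses import dataclass

    @dataclass
    class Track:
        obj_id: int
        position: tuple   # (x, y) in the vehicle frame
        missed: int = 0   # consecutive frames without a detection

    class SimpleTracker:
        """Keep objects alive for a few missed frames instead of deleting
        them the instant a detection drops out for one frame."""

        def __init__(self, max_missed: int = 15):   # roughly 0.25 s at 60 fps
            self.max_missed = max_missed
            self.tracks = {}

        def update(self, detections):
            # detections: {obj_id: (x, y)} for the current frame
            for obj_id, pos in detections.items():
                self.tracks[obj_id] = Track(obj_id, pos)
            for obj_id in list(self.tracks):
                if obj_id not in detections:
                    self.tracks[obj_id].missed += 1
                    if self.tracks[obj_id].missed > self.max_missed:
                        del self.tracks[obj_id]   # only now does it vanish
            return self.tracks

A real stack would also predict motion during the gap (a Kalman filter or learned dynamics) rather than freezing the last position, but even this would stop objects blinking in and out every frame.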


Does anyone know if auto insurance companies charge a premium if you've a Tesla with FSD, since effectively it's an additional, and rather immature, driver?


FSD is a feature you can subscribe to by the month, which would be hard to adjust for, so they're likely to lump all Teslas into the same actuarial category.


I genuinely don't understand why they ditched the LIDAR sensor from their solution. LIDAR + image recognition looks like a winning choice for this problem.


I thought these cars had radar and/or lidar? The radar should "see" things 160 metres away so regardless of whether the Tesla thought these were cars, shouldn't it still have seen them as obstructions and avoided them?

Is it possibly an overloaded CPU that can't keep up with the volume of data being fed to it by all of the various inputs?


Elon claims that since humans can drive with only two eyes, his FSD can do it as well.

Has anyone seen the recent video of the guy thinking there is a giant sinkhole in a tunnel (and it really looks like it) until he drives closer and it turns out to be a puddle [1]? If humans can't always get it right, how is FSD going to get it right better than humans? It needs to be better, otherwise what's the point?

[1] https://youtu.be/zSEABlXjB5o


Humans have only two eyes, but we're producing a much more complex, spatially variable map as we look around, along with highly dynamic exposure. We're also very good at pattern recognition, filtering out noise, and anticipating that certain objects are more or less trustworthy: telephone poles, very trustworthy, they don't move; pedestrians, very untrustworthy, they move sideways and backwards.

I really don't think it's extreme to insist that a thing intended to replace humans, is at least as competent as humans. And a thing that can't even degrade itself, i.e. an "oh fuck I'm confused, abandon the turn in which I can't see, and keep going straight on the road I can see" is just a 100% fail. I'd go so far as to say anytime humans are intervening, that's a 100% fail, as in 0% trust should be extended. Such a system is acting capriciously. It really is all or nothing, just like the rules we apply to teenagers when they're learning to drive and get licensed.


But humans can also move/shift/rotate the head to see through raindrops on the windshield and have wider dynamic range.


That looked kinda scary. The other cars seem reluctant too.

I wonder how you would "patch it out" in FSD software and then have cars not run into holes.


They ditched LIDAR a long time ago.

They recently ditched radar. They're vision-only now.


to a naive kid, this sounds like a great idea - humans are pretty much vision-only, so cars should be able to do the same! makes me feel like there are a lot of naive kids over there. I know they have some geniuses, I'm just saying how the decisions look from the outside.


Humans crash cars pretty regularly, but I bet if we had lidar/radar vision we could do a bit better too.


> I bet if we had lidar/radar vision we could do a bit better too

IMO we do - try driving with the windows on both sides down a crack. Your brain synthesizes traffic noise into your mental model. You can be aware of traffic near you without being able to see it.


Radar isn't hearing. It has different properties than your eyes and your ears.

You know how your eyes can sometimes get things wrong? For example, you can see a mirage of an oasis when you are in the desert. Well, radar has failures like that. In normal everyday driving conditions (when you go over a pothole, someone passes through the lane in front of you, or you go under a bridge), radar tends to hallucinate just like your eyes do. It tells you things that aren't true about the observed reality.

The most famous story about radar that I know is the time that ignoring it prevented the death of humanity. There was an illusion of a nuclear launch by the United States on Russia. The officer didn't believe the radar and so didn't launch nuclear weapons in retaliation.

If you were combining radar with your eyes while driving, rather than combining it with hearing, you would have times like that - times where you recognized that the sensor was wrong and you were forced to ignore it. Or you would die. And others might very well die with you.


Humans crash cars because humans make stupid human mistakes.

A computer doesn't "forget" to look before merging. A computer doesn't drive drunk, fall asleep, get road rage, or get distracted by a phone. A computer can see in all directions at once with enough sensors.

I 100% believe that a vision-only self-driving system CAN work...but that radar/lidar providing extra signal would make it a lot easier to implement, especially at night when oncoming headlights can be blinding.


This would be true if we could rely on radar being an idealized sensor whose error follows a normal distribution, and on vision's error also following a normal distribution. In that world we could just use sensor fusion techniques to combine the two readings. We don't have that sort of normally distributed error, though. In practice there are scenarios where radar gets things very wrong, like with bridges. This is a systematic error, not a normally distributed error.

If you use the average rate of error and combine it with the vision reading, the sensor fusion is much worse, not much better; in practice it historically resulted in brake checks at underpasses, followed by drivers taking over to avoid getting rear-ended.

In theory you could absolutely do sensor fusion, but it isn't trivial. You need a network that can (1) understand the scene and (2) understand the error distributions as they relate to each particular scene. But notice this - vision is usually more reliable than radar, by a hell of a lot, so how exactly are you going to determine the edge cases where you need to assume a different error distribution for radar? See how tricky this gets? Combining with radar relies on vision being reliable enough to tell you how radar is going to fail. Ugly circular dependence right there.
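
A tiny numerical illustration of why the Gaussian assumption bites (Python; the numbers are invented, not from any real system). A textbook scalar Kalman update happily averages in a systematic radar error, because its variance parameter only describes typical noise:

    def kalman_update(x, p, z, r):
        """One scalar Kalman measurement update: prior mean x with variance p,
        measurement z with variance r. Assumes zero-mean Gaussian noise."""
        k = p / (p + r)                  # Kalman gain
        return x + k * (z - x), (1.0 - k) * p

    # Vision's range estimate to the nearest obstacle ahead: 150 m, fairly confident.
    x, p = 150.0, 4.0
    # Radar "sees" an overpass as a stopped object at 40 m. Its nominal variance
    # (say 25 m^2) models average noise, not this systematic error, so the fused
    # estimate gets dragged hard toward the phantom obstacle.
    x, p = kalman_update(x, p, z=40.0, r=25.0)
    print(round(x, 1), round(p, 1))      # ~134.8 m, and *more* confident than before

Fixing that means recognizing the scene (a bridge) and inflating or gating radar's variance there, which is exactly the circular dependence above: you need vision to know when not to trust radar.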

Let's say you can compensate for radar's potential to err dramatically. In theory this would give you a better reading in some edge cases, but in practice those edge cases don't matter much. What are the actual edge cases where it helps?

1. Driving in the dark or while blinded: radar isn't sufficient to drive safely. It can't see lane markings, for example. Therefore this isn't a solution. The correct thing to do if you can't drive on vision alone is to not drive. Or - and I'm dead serious here (fully automated logistics would enable such ridiculous wealth that even very unrealistic things are worth pursuing to attain it) - modify every road in the world so as to trivialize driving with sensors other than vision.

2. Seeing obstacles that are hidden from sight. This is an edge case that matters and is where you would get the big win, but it isn't just a win; it is also a huge complication. Uncertainty matters, and you have just made it very hard to project a cone of occlusion-induced uncertainty, because now you are claiming knowledge of occluded areas with an error-prone sensor. You need feedback mechanisms here or you are going to underestimate or overestimate your ignorance in uncertain situations, potentially resulting in bad driving decisions.

3. The more complicated you make this, the greater the potential latency to make the decision. Pursue the theoretical best and latency could increase enough so that you delay your decision. However, this is a real time system. It can't afford pure theoretical best, because latency matters a lot. That makes the sensor fusion more useful as something you use in the backend systems that aren't real time - training time and inference time have very different properties.

I actually basically agree with your larger point that in theory these other sensors should be able to benefit. However, I disagree that getting them to the point where they do benefit is easy. I think it is actually pretty hard.


If humans always paid attention and followed traffic laws almost all crashes would be prevented.


“I didn’t see them coming!”

-- 50% of humans after a crash


In my other reply I tried to say you were wrong in a way that was rude, which I regret, so I want to try again in a more respectful way.

Initially, most people who thought about the problem assumed more sensors would be best. This was what Tesla thought, not just what you thought. It was what I thought. I've read Artificial Intelligence: A Modern Approach. I've read about Kalman filters and sensor fusion. I've implemented them. So intuitively, it makes a lot of sense to me that more sensors should be more effective. It was therefore surprising when Tesla decided to drop radar, and even more surprising when they shared that dropping radar led to measurable improvements in vehicle safety and improved their accuracy in determining the car's position relative to other objects.

In the past I've made that claim and people have been surprised by it. It doesn't seem to be common knowledge. If you didn't know it or you doubt it, I'd recommend checking out an engineering talk by Andrej Karpathy. This isn't a Tesla marketing piece; it was a workshop talk at the Computer Vision and Pattern Recognition (CVPR) conference in 2021. Andrej Karpathy is the senior director of AI at Tesla. He was involved in the decision to switch from radar to vision only, and in the talk he outlines the engineering reasons that motivated the switch.

[1]: https://www.youtube.com/watch?t=28257&v=eOL_rCK59ZI&feature=...

If you still doubt that the switch was a good idea, I can contribute my own anecdotal experience. I have a Tesla. I drive in it. When I do, I use autopilot very frequently. The majority of my driving is done by autopilot. As such I've gotten experience which has informed me a bit about how autopilot tended to fail. One of the ways it could fail was by making me take over when going under an underpass, or by braking suddenly when doing so made little sense. This is a category of error that I have stopped experiencing since the switch away from radar.

So now we're in an interesting position, because, if you recall, you claimed that people who believe in vision only are naive. Naivety usually means being wrong because of idealized assumptions you don't realize are wrong, out of ignorance. Yet the way things played out historically is that people assumed sensor fusion was the best approach, empirical results suggested it wasn't, and in consequence people changed their minds. In terms of the progression pattern, this is exactly the opposite of being naive, because the beliefs are contingent on experience rather than a consequence of its lack.

Sometimes we have simple models of reality and they suggest one thing. Then we get experience in the real world, which is much more complex than our simple model, and that experience tells us something else. Tesla, like you, thought radar plus vision was better. They tried a model that dropped radar, and the result was empirically measured as safer.

Should we go back to the thing that we know empirically to be less safe? Well, if we do so without solving the reasons it was bad, then we are going to have the car making bad decisions. Those bad decisions put lives at risk. So I don't think we should go back. If we can address the root causes of why the sensor fusion approach introduces dangerous error, then we can do that, but just adding back radar? That would kill people. Probably the people directly behind a Tesla that brakes because of a hallucinated object detected by radar.

I wish I could delete my other reply, but I think the example in it is extremely important, because when people reason with toy problems that fit in their head, they are choosing to exclude that very real, non-theoretical situation. That is exactly what the world with radar was generating as an inevitability, and we need to be crystal clear on that when we reason out exactly how to avoid that type of error. Blindly saying that sensor fusion is better is really dangerous, because people can die if we make the wrong decisions -- and since we can measure the results of different models empirically, it is blindness to say it, because it contradicts the evidence.


Radar gives false positives. Pretend you are a computer for a second. You get told by radar that right in front of you there is an obstacle. You are five feet from crashing. The only option to avoid it would be to slam on your brakes. Meanwhile the vision system tells you that you are on a freeway, there is a car directly behind you, and there is a bridge ahead of you.

Do you a) slam on the brakes or b) drive under the underpass that was producing the false sensor reading?

You've already declared your answer. You choose option a, because you think radar is amazing and you think people who think otherwise are naive morons. What happens next is that you slam on the brakes, surprising the person behind you. They slam into your vehicle. Their child wasn't wearing his seat belt. He flies forward, slamming through the window. His brains splatter the pavement. His body rolls without his brains into another lane. A horrified person to your left swerves to avoid hitting the kids body as they drive by, plowing into another car.

Congratulations. You are a genius. Everyone else is naive. Thank you for playing murder innocent people.

Of course, this isn't what actually happens. What actually happens is that your decision results in sudden braking, but the person at the wheel recognizes this is a mistake and presses down on the pedal. Your decision-making ability is taken away, because you are a moron. They do so because they trusted their vision system more than they trusted your stupid radar-based decision. So the error-correction mechanism that stopped your murder attempt? Vision.

Thank God for that, but the person in the vehicle is annoyed. They report the issue, not liking that bridges consistently produce that behavior. Tesla investigates. They realize that the radar sensor is producing false positives. After empirically validating that removing radar produces better driving in this edge case, they roll out the improvement. Tesla removes your ability to decide to kill people because you are too obsessed with radar.

Later, even as the empirical results show that self-driving is now safer than human driving - partly, of course, because it is human + computer driving, not computer alone - a person named knodi123 goes online and calls people naive for thinking that vision is preferable to radar.

He gets asked this question. If you were the driver of that car, which would you rather trust? The car's decision made on the basis of radar? Or your own decision made on the basis of vision?


Tesla does not use LIDAR.


I might be wrong on this but I thought they removed Radar in recent models due to the parts shortage and just implemented it in software?


Alas they've ditched Lidar and are trying to replace radar with vision only.

It's not the approach I would have taken, but I know why it's being done.


You are extremely misinformed. I'll go through the points one by one.

> I thought these cars had lidar?

Scale of rollout is a strategic imperative brought on by the advantage of massive datasets. Lidar was very expensive relative to cameras. It also didn't have the existence proof of human-level driving which vision has. Tesla opted not to use it.

> I thought these cars had radar?

Radar they did opt to use, since the sensors for it were cheap, but counterintuitively it ended up hurting performance. It has edge cases in which it is very wrong; for example, when you go under an underpass, radar gets confused and misreports the distance. This leads to very uncomfortable false positives, like slamming on the brakes while on a freeway, which could lead to a brutal and horrifying death for the person behind you. Solving that effectively meant sensor fusion, but in practice the vision stack has a neural architecture which makes its reliability much greater than radar's. If you have two unreliable sensors, you can combine them to get a better sensor. If you have an unreliable sensor and a reliable sensor, it makes more sense to just use the good one. They empirically validated that removing radar improved the system's performance, then did so. Anecdotally, I haven't had a false-positive brake check while going under an underpass since then.

> Is it possible overloading of the CPU that can't keep up with the volume of data being given to it by all of the various inputs?

Teslas have dedicated chips which handle the self-driving workload. I believe these are more akin to GPUs than CPUs. The system doesn't have a problem handling the workload.

> The radar should "see" things 160 metres away so regardless of whether the Tesla thought these were cars, shouldn't it still have seen them as obstructions and avoided them?

Go to the actual incident and watch the computer's planned trajectory: it is very clear that the planned route has the Tesla entering the center left-turn lane directly ahead of it. There wasn't going to be a head-on collision, and if there had been, it wouldn't have been the Tesla's fault. If another vehicle hits you while you are in a central left-turn lane preparing for a left turn, that is on them, not you. Also, in addition to being misleading, the submitted title is against the Hacker News guidelines; that wasn't the title of the video. It was a wrong but shocking description of the video which draws attention away from the actual issues:

- There is a timestamp where it plans a bad route briefly. That is something to investigate and fix even if it didn't ultimately keep that route.

- It also enters a lane with markings indicating that it shouldn't be entered. That is another serious issue that needs to be addressed.


>> Go to the actual incident and you watch the computer planned trajectory it is very clear that the planned route has the Tesla entering the center left turn lane directly ahead of it. There wasn't going to be a head on collision and if there was it wouldn't have been the Tesla's fault.

Someone else posted an image in a different thread (https://i.imgur.com/Hj6glgu.png), and it's pretty clear the car's planned route was to move into the opposing traffic's lane, which makes very little sense but would have 100% definitely made the accident the Tesla's fault.


Read my post again. I clearly articulated that there is a timestamp where it plans a bad route briefly. That is something to investigate and fix even if it didn't ultimately keep that route. It is a very serious issue and no one should diminish it.

I'm sorry that you only get your information from second-hand sources and so can't understand how I can both know what you said and also know that you are under-informed. I realize that a headline and a single screenshot may seem extremely informative to you, especially if you skim comments rather than reading them in their entirety. However, I didn't look at a screenshot. I watched the video multiple times, including in slow motion, with special attention to the moment where the incident occurred.

I strongly feel that if you are going to try to correct someone, you should really finish reading their post before you try to correct them.

I share your concern for that timestamp and think this error needs attention from Tesla.


The Tesla clearly crosses double yellow lines into the center no-driving buffer zone at 8:05, continues through that into the opposite lane of travel at 8:06, and nearly collides with a car going the other direction at 8:07. It ends up back in the center turn lane at 8:09 only because the human driver intervened.

It doesn't matter whether the Tesla ultimately planned to enter the center left-turn lane; what matters is that it decided to cross into a no-driving lane and an opposing lane of traffic en route to its purported destination, and the unrefuted visual evidence is that the planned trajectory violated multiple traffic laws and nearly caused the death of at least 2 people, at least 1 of whom did not consent to being Elon Musk's guinea pig.


From my original post:

- There is a timestamp where it plans a bad route briefly. That is something to investigate and fix even if it didn't ultimately keep that route.

- It also enters a lane with markings indicating that it shouldn't be entered. That is another serious issue that needs to be addressed.

> It doesn't matter whether Tesla ultimately planned to enter the center left turn lane;

It absolutely matters; figuring out the root cause of the issue is the essential thing, not scaring people. Self-driving cars are already safer to use than regular vehicles. Scaring people out of them rather than solving the root cause of issues with them will kill people.

What I'm doing, which others aren't, which makes me seem so counter-cultural, is that I'm putting this video in context rather than accepting the headline at face value.

This is a video of a self-driving car + human not getting into an accident. People who hate Tesla want to make that seem like a horrifying thing. In contrast, there are roughly 30,000 deaths a year due to driving accidents in the US alone. If you treated self-driving the way you treat human driving, we would watch videos of deaths every single day on Hacker News. Every single day, without fail, there would be a new video we could post where someone died. We wouldn't get, once a month, a video where someone didn't even get into an accident. People would die in the video, because humans are worse drivers than humans + AI.

I know that context. I know that regular cars are killing people every day. I'm not forgetting that when I watch a video where an accident doesn't happen. I'm not accepting the empirically falsified perspective that self-driving is less safe, because the stats don't back up that assertion. I'm instead noticing that the title was editorialized, and noticing that this goes against the Hacker News guidelines. It disgusts me, because convincing people that self-driving cars + humans are less safe than they actually are has the net effect of killing people by reducing adoption. I don't consider that a good thing.

People who are freaking out over Tesla have a very distorted sense of how safe these cars are and of how safe human drivers are. We're definitely not at the point where the vehicles can drive on their own, but we are definitely past the point where it is better to let the human drive alone than to pair them with an AI. We get one video a month where someone doesn't get into an accident in a Tesla, and I see as many cases where the Tesla prevents an accident. Yet if I engaged in the same behavior as OP for regular cars, I could post literally hundreds of videos of actual deaths. I wouldn't need to editorialize.


> They empirically validated that removing radar improved the systems performance. Then did so. Anecdotally, I haven't had a false-positive break check while going under an underpass since then

If the failure mode is well known and the system is advanced enough to interpret what it sees, why wouldn't it factor in that it is about to pass under an underpass and adjust the reliability weighting of its sensors? What sense does it make to completely remove the other sensor to avoid a predictable edge case?


If you rely on radar when you shouldn't rely on radar, people can die. So the answer to half your question is that even a minuscule probability of misidentification has intolerable consequences. If you rely on radar, because you can't rely on vision, people can also die. You are driving while blind to things like stop lights. Unless vision conditions improve it isn't safe to drive. Also there are going to be the same edge cases where radar failed, but now vision won't be able to compensate for them. Since radar can also fail by failing to detect something there, not just detecting something that isn't there, this is a fatality waiting to happen. In every other case, since the error bars are lower on vision than they are on radar, you ought to be relying on vision. Therefore, since in every conditional case in which driving is reasonable, vision is demanded, engineering effort is better directed toward vision than radar.


Am I missing the sarcasm here?


Which of my claims do you disagree with?

- That the vehicle planned a route that it shouldn't, but corrected that route plan?

- That the vehicle entered an area where it shouldn't enter?

- That Tesla doesn't use radar?

- That Tesla doesn't use lidar?

- That the calculations were done on GPU rather than CPU and so it doesn't make sense to think that the reason this happened was an overloaded CPU?

- That the Hacker News title is editorialized, in contradiction to Hacker News guidelines?

I think the last one is probably the most contentious and the one you are most likely to consider sarcasm, so I'll point this out and hopefully you'll see where I'm coming from. If you renamed the title to "Tesla FSD Beta does not cause a head-on collision", it would be just as accurate for the situation in question, and it would also be more accurate for the majority of the video. As such, it is a more accurate headline. Meanwhile, the submitted title differs from the name on YouTube. Clearly an editorial choice was made.

Was it a good one?

To really get to the heart of the matter: no one died in this video, but if we had a similar bias against traditional vehicles we would see 90 videos of deaths every single day. A similar level of antagonism would demand that more than the entire front page of Hacker News be taken up with actual deaths. Every day. The actual statistics at the heart of driving safety are very clear - human + AI is currently much, much safer than human alone. That is the actual reality. This headline contradicts the underlying reality, and it does so because the author knew it would attract more attention. It was optimized for engagement, not accuracy. I feel this sort of approach to headlines strongly contradicts the guidelines of Hacker News.


Oh sorry, I didn't think someone could be such a devoted Tesla apologist. I see I was wrong.


Skip to the 8 minute mark to see the incident mentioned in the title


Is there any chance that Elon Musk scrutinizes the 'active' vs. 'non-active' users of FSD, just like he is doing with Twitter? In the interest of transparency, I think it would be very revealing how many people have been shocked and dismayed enough by FSD's performance to drop out of the beta entirely - but do so via the entirely passive route of just not using it anymore.


I am in the beta and treat it just like normal AP. I tried it almost 100% of the time the first week while building a "relationship" with it. Now (months later) I only use it when I know it has high probability of success (known-good routes).

I am sure there are a lot of people who stopped using it. I'd hope (but don't know for sure) that they could easily opt back out of the beta.


.. and you paid 10 grand for that?


You can use it without having purchased it - it's available on a monthly subscription basis.


Criminally negligent trash I have zero interest sharing roads with.


Has anyone impartial (not on Tesla's payroll) evaluated FSD?


It's a public beta that normal drivers can sign up for. So I'd say there's quite a lot of them out there


What do you mean evaluated? There are thousands of us using the FSD beta (including I assume the video from OP).


I understand why Tesla would let users beta test for them. It might even be ethical, taking a big-picture view, but I don't understand what kind of person would volunteer to be a beta tester. Is it about laziness, thrill seeking, ignorance, entertainment, or something else?

It's little different from letting an 8 year old child take the wheel. Sure, it'd be fine most of the time, but it would regularly, certainly, and randomly fail catastrophically.

Just why?


I'm also not sure if it is ethical. A good way to frame the question is, "Is it ethical, to take a single alcoholic drink, and then drive?"

In both cases, AI driving and single-drink driving, the car operates in a less than optimal way. Occupants of the car are at risk. Other motorists and passers-by are at risk. In both cases, the data collection of AI-coincident accidents and drink-coincident accidents is spotty and anecdotal.

Unlike AI driving, there are lots of tests that show how human performance degrades at each level of blood-alcohol level, and that information is searchable for your average internet-inclined driver.

I would feel better if the government could be an umpire on these technologies and call out the accidents and fault with greater rigor... but I'm not sure that the lack of government involvement, or the degree of lethal AI accidents so far, is enough to make the current work by unpaid crash-test dummies unethical.


> A good way to frame the question is, "Is it ethical, to take a single alcoholic drink, and then drive?"

I don't see why. I would trust a road full of "I had a single drink" drivers over self-driving cars. Assuming the self-driving cars are all like the current models of trying to use vision, radar, lidar because it's assumed to be a mix of all vehicles. 100% self-driving cars with 5G and a mesh network is a different question.


My guess is it's about trying the latest and greatest new technology.

It's exciting in the same way setting up new software is.


The excitement of setting up a brand-new PS5 six months before anyone else is not the same as operating, with very little training, a motor vehicle that can end in death when things go wrong.

At the end of the day, a human is required to be licensed to operate a vehicle. Why should we not expect FSD itself to be licensed, or a new license category for the humans operating one?


Not advocating for anyone to beta test. Just pointing out the draw.


Sure, novelty makes sense. That would explain why a reasonable person might try it out for a few minutes in a dry, daytime, low speed, low traffic area.


I know lots of people who let 8 year old children take the wheel, so perhaps that answers your question?


Yeah, but they choose a time and place where split-second reactions aren't critical and don't trust the child at all.


Sure, stupidity must account for some large percentage. I guess I'm curious about the other reasons.


I'd agree that the vast majority of people owning a Tesla would not be good beta testers. However, I do think some would be worthwhile, but because you have no valid way of weeding out the bad ones, the candidate pool should not be used.

The alternative would cost Tesla a great deal: hiring professional drivers with a full understanding of what Tesla's FSD does and does not do, in enough cars and in varied enough environmental conditions to gather enough data in a fast time frame. That is NOT an excuse for turning it loose on the public.


Tesla only lets people with a safety score[1] of 98 or higher into the FSD beta. They've been slowly lowering the threshold since the initial 2020 rollout. I think around 60,000 Teslas have FSD beta right now, or about 3% of the fleet.

1. https://www.tesla.com/support/safety-score


Why did people ride in the first cars? They were slow, prone to breakdowns, expensive to operate.


Horses were slower, more prone to breakdowns, and more expensive to operate.

They also smelled like horseshit.


Not really. Even early on cars were quite fast and priced around the same as horses and a carriage, and without the hassle of dealing with horses. They did tend to break down a lot but were quite usable.

Tesla FSD is much, much worse than just driving oneself when taking into account the risk.


How often did horses break down? Do you see why someone from 1890 might think it crazy to drive a car that frequently breaks down vs. a horse?

Similarly, 130 years from now people will ask, "Why would you want to drive yourself when there were self-driving cars available?" And someone else will have to explain that FSD required close supervision and needed interventions every 1 to 100 miles, depending on the driving situation.


Have you ever ridden a horse? They "break down" quite often. As soon as mass production of cars started, they were practically superior to horses in almost every way.


If you read the thread you would see that obviously we are not talking about mass production cars. Obviously those are superior to horses.


So you're comparing the very first prototype cars to the technology present in millions of Teslas? Seems like a very contorted analogy and makes no real sense.


Can you just read the thread?

We are specifically talking about people who signed up to be a beta tester.


Yes, beta testers for hardware and software that is in mass production and many years into development. By this stage in the development of cars they were an extremely rational choice over the alternatives.

FSD is not a rational choice over the alternative and its adoption in no way resembles the adoption of cars. The analogy is bunk. But I won't keep beating a dead horse here, I was just trying to explain why it doesn't IMO make sense.


Some people I have talked to about the issue seem to think computers are fast and are better drivers because of that, e.g. that they can calculate braking distances better.

They did not seem aware of problems with edge cases, etc., or of image recognition actually taking some time.


So ignorance. That does seem like Tesla's responsibility to address. Tesla should probably show the worst-behaving FSD videos on YouTube (like this one) before the feature can be enabled.


Very unlikely to be ethical, no matter what unenforceable limits you put on the testers. Their concern is legal liability and nothing more.


I don't own a Tesla but I share the road with them, which makes me an involuntary part of their beta test. It's hard to give a shit about liability if one runs head-on into you and kills you.

None of this is ethical. At all.


It may be ethical if it really does lead to saving many lives by advancing the state of the art more quickly than it otherwise would have.

And given how many people drive while intoxicated or while using smartphones, it seems like a relatively negligible increase in danger.

The math might be something like: police departments could increase enforcement of DUI and reckless driving laws by 1% and compensate for the total increase of danger caused by Tesla's FSD testing.
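
A back-of-envelope sketch of that math, with loosely rounded numbers and a purely assumed enforcement effect:

    dui_deaths_per_year = 10_000     # US drunk-driving deaths, very roughly (NHTSA order of magnitude)
    enforcement_increase = 0.01      # the hypothetical 1% bump in enforcement
    deaths_prevented = dui_deaths_per_year * enforcement_increase  # ~100 per year, assuming a 1:1 effect
    # The trade-off only pencils out if the expected deaths added by FSD beta testing stay well below this.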


> It may be ethical if it really does lead to saving many lives by advancing the state of the art more quickly than it otherwise would have.

And if we kill a few hundred or thousand innocent passengers and/or pedestrians along the way, that's an acceptable sacrifice to perfect Musk's software?

Holy fucking shit.


> And if we kill a few hundred or thousand innocent passengers and/or pedestrians along the way...

So you're just going to make up numbers to be outraged at?

As far as I know, no innocent passengers or pedestrians have been killed as a result of FSD. I imagine it would be big news, as the (relatively few) Autopilot accidents have been. It should be common sense to assume that FSD would be halted by Tesla themselves, or at least by a government agency, if there were huge numbers of people dying.

We only have safe and cheap air travel and safe cars today because many thousands of people died beta testing them. This is basic history that everyone should know. When it comes to high-speed transportation, there's simply no way to avoid significant risk while making significant progress. The best we can do is make intelligent trade-offs, which is my entire point.


Excuse me but Tesla makes drivers pinky swear that they will pay attention 100% of the time and be ready to take over driving at the drop of a hat. Obviously you're just a luddite.

/s


It's very clearly Beta. No one claimed it's perfect/done/finished. Why is everyone so shocked that beta software has bugs?

We can certainly debate whether it's ethical to put beta software out on the public roads. Perhaps the Minimum Viable Product method isn't the best approach for transportation hardware.

But... surely we can all expect that beta software has bugs. So yes -- I expect it to be driving like a drunk 10yr old. Ideally that'll slowly improve as the software matures. That it made a mistake and has a bug isn't really news though.


>I expect it to be driving like a drunk 10yr old.

Then it shouldn't be on public roads. Tesla could spend 3 billion dollars and build huge road complexes to test their stuff instead of putting the rest of us at risk.


Looks scary for sure. But the driver had time to say "woa woa woa" before even starting to intervene; he knew the system was exhibiting erratic behavior. Had he acted appropriately this would never have been an issue.

The system makes mistakes, plenty of mistakes. Absolutely nobody is claiming that it is safe yet. It's supposed to go with a human supervisor ready to intervene at all times. And so far that has worked: there hasn't been a single confirmed accident aside from a YouTuber driving into a bollard, resulting in some paint damage (again, letting the system go longer than necessary in order to see what would happen), and this despite 60k+, approaching 100k, drivers using it. It sure looks scary, but I expect HN to be smarter than that and actually look at the statistics.


> Absolutely nobody is claiming that it is safe yet.

If you just go looking for a few minutes you can find all kinds of claims that Tesla's FSD system is safe or will be safe. It would be very easy to end up with the impression that it would never veer into oncoming traffic. For example:

> "I would be shocked if we do not achieve Full Self-Driving safer than a human this year. I would be shocked," Musk told analysts. [1]

I know that this is actually a statistical claim, not a claim that it will do uniformly better in all situations (like not veering into oncoming traffic). Regardless, you can find lots of "it's safer than people" claims, and a lot of victim blaming when it suddenly isn't.

It was like ten years ago that Google's self driving effort concluded it wasn't safe to trust the driver to supervise the system. Saying it's safe if the driver catches mistakes is just shifting the blame. You can't say you have full self driving and say mistakes don't count if the driver's not paying attention!

[1]: https://www.drive.com.au/news/tesla-full-self-driving-safer-...


I think the claim that "it's safe" is actually a claim that the combination of FSD and a human driver is safe. So far that seems to be true, although there is the concern that as the system gets better and humans grow complacent, we'll see more accidents due to bored/complacent drivers. Some people (possibly biased, of course) report that the combination feels safer, because they can use more of their cognitive capacity for assessing threats rather than for the act of driving. Many people also report less mental fatigue when driving long stretches with both Autopilot and FSD, fatigue being another big contributor to accidents.

I think in the hands of a responsible driver it may actually be safer. In the hands of an irresponsible driver, though, who'd use it to space out or browse their phone, it's clearly extremely unsafe. So far the Safety Score filter may have ruled out most of the irresponsible drivers, but I think it'll become a problem soon. Also, as the system gets better, even responsible drivers may stop paying attention.


> Looks scary for sure. But the driver had the time to say "woa woa woa" before even starting to intervene, he knew the system was exhibiting erratic behavior. Had he acted appropriately this would never have been an issue.

From the video, there appear to be about 3-4 seconds between the car behaving fairly normally and it turning into oncoming traffic. To me that says the driver supervising the car has to be extremely attentive, which is a pretty spicy place to be.

I think for me the underlying point is that a system whose backup is "a random human ready to take over at any point with <5 seconds' notice" just doesn't seem very safe or ready for general consumption.
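
To put rough numbers on that (the speed and reaction time below are generic assumptions, not measurements from the video):

    speed_mph = 40
    speed_mps = speed_mph * 0.447            # ~17.9 m/s
    window_s = 3.5                           # the ~3-4 s between normal behavior and the bad turn
    reaction_s = 1.5                         # common ballpark for driver perception-reaction time
    distance_covered = speed_mps * window_s  # ~63 m travelled during the whole window
    time_left_to_act = window_s - reaction_s # ~2 s left to actually steer or brake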



