
The original post is "fine", if a bit antagonistic, but its point still stands.

I do not think this feature should be on the road, and it's not that "the car is drunk"; it's that the Autopilot feature is fundamentally mis-designed. Look at any thread related to Tesla Autopilot and you'll find experts calling out the lack of radar/lidar as absolutely reckless. This video is a clear example of why: the cameras' ability to discern objects is limited and can cause erratic behavior under normal circumstances.
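To make the redundancy argument concrete, here's a minimal sketch (nothing to do with Tesla's actual stack; all numbers are made up) of inverse-variance sensor fusion: two independent, noisy range estimates combine into one that is more certain than either alone, and each sensor sanity-checks the other. Remove the radar and you lose both benefits.

  # Minimal sketch, hypothetical numbers: fusing two independent, noisy
  # range estimates of the same object with inverse-variance weighting.
  camera_range, camera_var = 48.0, 9.0  # camera depth estimate (m): cheap, noisy
  radar_range, radar_var = 51.0, 1.0    # radar return (m): direct, precise

  w_cam, w_rad = 1.0 / camera_var, 1.0 / radar_var
  fused_range = (w_cam * camera_range + w_rad * radar_range) / (w_cam + w_rad)
  fused_var = 1.0 / (w_cam + w_rad)

  print(f"fused: {fused_range:.1f} m, variance {fused_var:.2f}")
  # fused variance 0.90 < min(9.0, 1.0): the combined estimate beats
  # either sensor alone. Drop the radar and you are back to 9.0, with
  # no independent cross-check when the camera misreads a scene.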

The cost/part-cutting decision to remove these sensors (or not use them) has shown time and time again that the system can act erratically. I don't want drunk drivers on the road, and I don't want this Autopilot on the road either. Both are bad options, but the way this is marketed to the everyday person makes it seem ready to ship, as if you can "be drunk and Autopilot will take the wheel". We are quite far from that. FSD needs to exceed the capabilities of a human driver, or prove that it causes dramatically fewer deaths/accidents per mile than human drivers do.
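As an entirely hypothetical version of that bar, "dramatically fewer deaths per mile" could be phrased as an acceptance test with an explicit safety margin (every figure below is invented; only the shape of the test matters):

  # All figures invented for illustration.
  human_rate = 1.3          # hypothetical human fatalities per 100M miles
  fsd_fatalities = 4        # hypothetical FSD fatality count
  fsd_miles = 200e6         # hypothetical FSD miles driven

  fsd_rate = fsd_fatalities / (fsd_miles / 100e6)  # per 100M miles
  margin = 2.0              # "dramatically fewer": say, at least 2x safer

  print(f"human: {human_rate}/100M mi, FSD: {fsd_rate}/100M mi")
  print("ready to ship:", fsd_rate * margin <= human_rate)  # False here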

I work in software, and I work with ML algorithms. I don't trust either with my life or my family's lives right now. I know there are life-critical software deployments, but who is regulating Tesla Autopilot right now, and are they doing enough? The NTSB is trying... we will see how that works out.



But if the likelihood of FSD causing an accident is N%, and the likelihood of human error (alcohol or otherwise) causing one is M%, I would always choose the better bet.

I realize the error situations may differ drastically, but if the final numbers are in favor of the computer, it's worth betting on. IMO.
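One way to make that bet precise, given that the error situations differ: weight each probability by expected severity rather than comparing raw accident rates. A sketch with invented numbers:

  # Hypothetical N and M, severity-weighted; flip the weights and the
  # conclusion flips too -- the "final numbers" have to include severity.
  p_fsd, severity_fsd = 0.002, 1.5      # rarer accidents, stranger failure modes
  p_human, severity_human = 0.004, 1.0  # more frequent, "ordinary" crashes

  risk_fsd = p_fsd * severity_fsd        # 0.003 expected harm per trip
  risk_human = p_human * severity_human  # 0.004

  print("better bet:", "FSD" if risk_fsd < risk_human else "human")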


Sure, but at the moment that is by no means clear-cut. The statistics on self-driving show, if anything, that "a computer with a human watching over it, in near-perfect conditions" has a lower probability of an accident than humans in all conditions. Anyone who cites the statistics any other way is being deliberately misleading.
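A toy example (numbers entirely invented) of how that framing misleads: aggregate rates can favor the computer purely because its miles are concentrated in easy conditions, even if it is worse in every individual condition -- a Simpson's-paradox effect.

  # Invented data: millions of miles and accident counts, split by conditions.
  miles = {"easy": {"ap": 10.0, "human": 10.0},  # autopilot logs mostly easy miles
           "hard": {"ap": 1.0,  "human": 5.0}}
  accidents = {"easy": {"ap": 25, "human": 20},
               "hard": {"ap": 9,  "human": 40}}

  for who in ("ap", "human"):
      overall = (sum(accidents[c][who] for c in accidents)
                 / sum(miles[c][who] for c in miles))
      print(f"{who} overall: {overall:.2f} accidents per M miles")
  # ap 3.09 vs human 4.00 -- the computer "wins" overall...

  for c in ("easy", "hard"):
      print(c, "ap:", accidents[c]["ap"] / miles[c]["ap"],
            "human:", accidents[c]["human"] / miles[c]["human"])
  # ...yet it is worse in both strata (easy: 2.5 vs 2.0, hard: 9.0 vs 8.0),
  # because its exposure is skewed toward the easy miles.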

This brings up a second question: if a human driver did something like in this video and caused an accident, there would very likely be criminal charges. So if a self-driving car does it, should there be criminal charges against the engineers, or the CEO?


Bear with me here. You find random Hacker News comments convincing. But what if you looked at the evidence instead of at the commenters?

Let me save you some time by linking an engineering talk by the head of AI at Tesla. I'll link you directly to the timestamp where he contradicts the testimony of your expert HN commenters by showing the empirical results of sensor fusion side by side with those of vision only.

https://youtu.be/a510m7s_SVI?t=1400



