
> They aren't real problems because nobody smart is trying to directly hook up a road sign classifier to a steering wheel. They are trying to build complex systems where this is just one signal.

What are the guarantees that those higher-level "complex systems" aren't going to have some weird behaviors as well?

The problem isn't just that ANNs misclassify adversarial examples. The problem is that this behavior is counter-intuitive to an average observer and that no one clearly knows why those examples generalize so well across models. It's a remarkable property that a lot of AI "enthusiasts" try to downplay.

It's one thing to wire up unreliable but simple and fully understood components into a more reliable system. It's an entirely different level of challenge if the "unreliable components" are complex and poorly understood.

> Humans also act much worse than you imply. Look at the number of people who accelerate into crashes instead of braking, etc.

Except we know very well how, and how often, people make mistakes. For most of those mistakes we have a pretty good idea why they happen (at a high level; I'm not talking about neuroscience). Our roads, our cars, and our laws are designed to handle these failures. Also, we have a pretty good model of how other people behave on the road, so we can react accordingly.

All of this goes out of the window with self-driving cars.

Hell, most people naively assume that if a single self-driving car gets into 50% fewer accidents than an average driver, then replacing all drivers with self-driving cars will reduce the global accident rate by at least 50%. This assumption ignores the fact that many accidents are caused by complex interactions between several vehicles and the environment. So introducing many self-driving cars can lead to nasty emergent behaviors and mass accidents that just aren't possible with human drivers.
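A toy Monte Carlo makes the arithmetic concrete (all numbers here are made up for illustration): give each AV half the human per-car failure rate in isolation, but add a small chance of a shared environmental trigger that fools every AV at once, since they all run the same software. Correlated failures eat into the naive 50% reduction:

```python
import random

random.seed(0)

N_DAYS = 100_000
N_CARS = 100

P_HUMAN = 0.01     # hypothetical per-car, per-day accident probability (human)
P_AV = 0.005       # AVs in isolation: half the human rate
P_TRIGGER = 0.001  # chance of a shared condition (odd signage, weird light)
                   # that causes EVERY AV to fail the same day -- correlated failure

def human_accidents():
    # Humans fail independently of one another.
    return sum(random.random() < P_HUMAN for _ in range(N_CARS))

def av_accidents():
    if random.random() < P_TRIGGER:  # shared trigger: mass failure
        return N_CARS
    return sum(random.random() < P_AV for _ in range(N_CARS))

human_total = sum(human_accidents() for _ in range(N_DAYS))
av_total = sum(av_accidents() for _ in range(N_DAYS))

print(f"human accidents: {human_total}")
print(f"AV accidents:    {av_total}")
```

With these (invented) numbers the expected AV total is roughly 0.999 × 50,000 independent failures plus 0.001 × 100,000 × 100 = 10,000 correlated ones, i.e. about a 40% reduction rather than 50%, even though each car is individually twice as safe.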


