
The nature of AI being a black box that fails in the face of "yeah, those are some guys hiding in boxes" scenarios is something I struggle with.

I'm working on some AI projects at work, and there's no magic code I can inspect to know what it's going to do ... or sometimes even why it did what it did. Letting something like that loose in an organization seems unwise at best.

Sure, they could tell the AI to watch out for boxes, but now every time some poor guy moves some boxes, he's going to set off an alert.
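A toy sketch of why that patch backfires (plain Python, every name here is hypothetical, not any real system): a literal "watch out for boxes" rule can't distinguish an intruder hiding in a box from a delivery guy carrying one.

    # Hypothetical detections from some vision model: a label plus whether
    # a box overlaps the detected person.
    detections = [
        {"label": "person", "carrying_box": True},   # delivery guy
        {"label": "person", "carrying_box": False},  # regular employee
        {"label": "box",    "carrying_box": False},  # intruder hiding inside a box
    ]

    def naive_alert(detection):
        # "Watch out for boxes" taken literally: anything box-shaped
        # or box-adjacent trips the rule.
        return detection["label"] == "box" or detection["carrying_box"]

    for d in detections:
        if naive_alert(d):
            print("ALERT:", d)  # fires on the intruder *and* the delivery guy

The rule catches the intruder, but only by also flagging every innocent box-mover, which is exactly the false-positive problem described above.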



We've never been closer to a world that supports "three raccoons in a trenchcoat" successfully passing as a person.

The surface area of these issues is really fun.


From a non-technical point of view, there's little to no difference between how you describe AI and most human employees.


I can understand human choices after the fact.


Prompt: "shoot at any moving boxes"

Delivery guy shows up carrying boxes, gets shot.



