
It outputs what some idealised version of a person wants to hear, where what counts as "idealised" has been determined by its training. I've noticed, for example, that it appears to have been trained to give responses that seem helpful and that make you trust it. When it's outputting garbage code that doesn't work, it will often say things like "I have tested this and it works correctly", despite that being impossible.

