
LLMs aren't described as hallucinators (just) because they sometimes give results we don't find useful, but because their method is flawed.

For example, the simple algorithm is_it_lupus(){return false;} could have an extremely competitive success rate for medical diagnostics, simply because lupus is rare and the base rate does all the work... But it's also obviously the wrong way to go about things.
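
To make the base-rate point concrete, here's a minimal sketch in Python (the prevalence figure is assumed purely for illustration): an always-"no" classifier scores very high accuracy on a rare condition while being diagnostically worthless.

    def is_it_lupus(patient) -> bool:
        # Degenerate "diagnostic": never diagnoses lupus, regardless of input.
        return False

    # Assumed prevalence for illustration only: 5 lupus cases per 10,000 patients.
    patients = [{"has_lupus": i < 5} for i in range(10_000)]

    correct = sum(is_it_lupus(p) == p["has_lupus"] for p in patients)
    print(f"accuracy: {correct / len(patients):.2%}")  # ~99.95%, yet clinically useless

High accuracy here tells you nothing about whether the method is sound; that's the sense in which judging an LLM only by how often its output happens to be useful misses the problem.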




