
> having no way to assess if what it conjures up from its weights is factual or not.

This comment makes no sense in the context of what an LLM is. To even say such a thing demonstrates a lack of understanding of the domain. What we are doing here is TEXT COMPLETION; no one EVER said anything about being accurate or "true". We are building models that can complete text. What did you think an LLM was, a "truth machine"?
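For what it's worth, "text completion" here is literal: the model maps a prefix to a probability distribution over next tokens and samples from it. Here is a minimal sketch of that step in Python, assuming the Hugging Face transformers library and GPT-2 weights (illustrative choices, not anything claimed in the thread):

    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The first person to walk on the Moon was"
    inputs = tokenizer(prompt, return_tensors="pt")

    # The forward pass yields logits: a score for every vocabulary token
    # as a candidate continuation. No factuality check happens anywhere.
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)

    # Sampling picks a plausible next token, not a verified-true one.
    next_id = torch.multinomial(probs, num_samples=1)
    print(prompt + tokenizer.decode(next_id))

Nothing in that computation consults the world: a fluent completion and a factual one are the same operation, which is the point being made above.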



I mean, of course you're right, but then I question what the usefulness is.



