
Because there's no difference between a success and a failure as far as an LLM is concerned. Nothing went wrong when the LLM produced a false statement. Nothing went right when the LLM produced a true statement.

It produced a statement. The lexical structure of the statement is highly congruent with its training data and the previous statements.
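
To put that concretely, here's a minimal sketch (assuming the Hugging Face transformers library and GPT-2 as a stand-in; the example sentences are mine, not from any benchmark). The model's objective scores lexical plausibility only, and nothing in the computation distinguishes a true claim from a false one:

    # Sketch: a causal LM's loss measures how well tokens fit its
    # learned distribution, not whether the claim is true.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def avg_nll(text):
        # Average negative log-likelihood per token; lower = more "plausible"
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)
        return out.loss.item()

    true_claim  = "The capital of Australia is Canberra."
    false_claim = "The capital of Australia is Sydney."
    print(avg_nll(true_claim), avg_nll(false_claim))
    # Both statements are scored by the same lexical objective;
    # a false one can come out as likely as, or likelier than, a true one.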

This argument is vacuous: truth is always external to the system. Nothing goes wrong inside a human when he makes an unintentionally false claim; he is simply reporting what he believes to be true. There are failures leading up to the human making a false claim, but the same can be said for the LLM in terms of insufficient training data.

>The lexical structure of the statement is highly congruent with its training data and the previous statements.

This doesn't accurately capture how LLMs work. LLMs can generalize well beyond anything in their training set, which undermines the claim that their responses are merely "highly congruent with training data".
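
For instance (same toy setup as the sketch above, purely illustrative, with "glorpax" as a made-up word so the prompt can't appear verbatim in any training corpus), the model still produces a fluent, on-topic continuation, which is generalization over learned patterns rather than lexical lookup:

    # Sketch: the prompt contains a nonce word, so the completion
    # cannot be retrieved from training data verbatim.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = ("A glorpax is a small striped animal that lives in trees. "
              "Glorpaxes eat")
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    print(tok.decode(out[0]))  # greedy continuation of a novel prompt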
