A model can't be right or wrong, because it doesn't actually make any logical decisions.
"Right" and "wrong" are categorizations that we make after the fact. If the model could do that categorization work itself, it could actively choose correct over incorrect.
Models could potentially make logical decisions too, if we connected them to something like a classical computer or a rules engine. I don't see any fundamental barrier to making models, and computers in general, understand and reason in ways similar to how humans do.
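A minimal sketch of what that hybrid might look like (every name here is hypothetical, not a real API): the model only proposes candidates, while a deterministic rules engine does the categorization work, so the combined system can actually pick correct over incorrect.

```python
from typing import Callable

# Hypothetical stand-in for a model: it proposes candidate answers
# but has no notion of which one is correct.
def model_propose(question: str) -> list[str]:
    return ["4", "5", "22"]  # unranked candidates for "2 + 2"

# A trivial rules engine: a deterministic check that categorizes
# each candidate as correct or incorrect after the fact.
def rules_engine(question: str, candidate: str) -> bool:
    if question == "2 + 2":
        return candidate == str(2 + 2)
    return False

def answer(
    question: str,
    propose: Callable[[str], list[str]],
    verify: Callable[[str, str], bool],
) -> str | None:
    # The combined system "chooses correct over incorrect":
    # the model proposes, the rules engine decides.
    for candidate in propose(question):
        if verify(question, candidate):
            return candidate
    return None

print(answer("2 + 2", model_propose, rules_engine))  # -> 4
```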