
I don't think that follows, necessarily. Chess has an unfathomable number of states. While the LLM might be able to play chess competently, I would not say it has learned chess unless it is able to judge the relative strength of various moves. From my understanding, an LLM will not evaluate future states of a chess game when responding to such a prompt. Without that ability, it's no different from someone receiving anal bead communications from Magnus Carlsen.
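For what "judging future states" means concretely: it's game-tree search, as in minimax. Here's a minimal sketch over a toy game (one-pile Nim: take 1-3 stones, taking the last stone wins) rather than full chess, since chess only changes the move generator and evaluation, not the idea. Everything here (the `minimax` and `best_move` names, the Nim rules) is illustrative, not from any particular engine.

```python
def minimax(stones: int, maximizing: bool) -> int:
    """Return +1 if the maximizing player wins with perfect play, else -1.

    Toy game: one pile of stones, each turn take 1-3, taking the last
    stone wins. This is the lookahead an LLM doesn't do when it emits
    a move as next-token prediction.
    """
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    results = [minimax(stones - k, not maximizing)
               for k in range(1, min(3, stones) + 1)]
    return max(results) if maximizing else min(results)

def best_move(stones: int) -> int:
    """Judge each legal move by searching the future states it leads to."""
    return max(range(1, min(3, stones) + 1),
               key=lambda k: minimax(stones - k, False))

# Positions where stones % 4 == 0 are losses for the player to move:
print(minimax(4, True))   # -1: every move hands the opponent a win
print(best_move(5))       # 1: take one stone, leave the opponent at 4
```

A chess engine swaps in legal-move generation and a heuristic evaluation at a depth cutoff, but the core loop (enumerate moves, recurse into resulting positions, back up the values) is the same.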


An LLM could theoretically build a model with which to understand chess and predict the next move; you just need to adjust the training data and train the model until that behavior appears.

The expressiveness of language lets this be true of almost everything.



