>But if you add a feedback loop where it can use tools, investigate external files or processes, and then autocomplete on the results, you get to see something that is (close to) thinking.
It's still just information retrieval. You're just dividing it into internal information (the compressed representation of the training data) and external information (web search, API calls to other systems, etc.). There is a lot of hidden knowledge embedded in language, and LLMs do a good job of teasing it out in a way that resembles reasoning/thinking but really isn't.
No, it's more than information retrieval. The LLM is deciding what information needs to be retrieved to make progress, and how to retrieve it. It is making a plan and executing on it: Plan, Do, Check, Act. No human in the loop, provided it has the required tools and permissions.
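To make the Plan, Do, Check, Act claim concrete, here's a minimal sketch of that kind of agent loop. Everything in it is a hypothetical stand-in, not any particular vendor's API: `llm()` is a canned fake that "plans" one search and then "concludes", and `web_search` is a placeholder tool. The shape is the point: the model chooses the next action, the result is fed back in, and it iterates until it decides it's done.

```python
import json

def web_search(query: str) -> str:
    # Placeholder tool; a real agent would call an actual search API here.
    return f"(pretend search results for {query!r})"

TOOLS = {"web_search": web_search}

def llm(messages):
    # Stand-in for the model call. This fake always plans one search,
    # then concludes; a real LLM decides both dynamically, which is
    # exactly the behavior under debate.
    if any(m["role"] == "tool" for m in messages):
        return {"answer": "conclusion drawn from the retrieved results"}
    return {"action": "web_search", "input": messages[0]["content"]}

def agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):                # bounded Plan-Do-Check-Act loop
        decision = llm(messages)              # Plan: model picks the next step
        if "answer" in decision:              # Check: model says it's done
            return decision["answer"]
        result = TOOLS[decision["action"]](decision["input"])  # Do: run the tool
        messages.append({"role": "tool",      # Act: feed the result back in
                         "content": json.dumps({"result": result})})
    return "step budget exhausted"

print(agent("what is the population of Reykjavik?"))
```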
I can't speak to academic rigor, but it's very clear and specific, at least as I understand it. Reasoning, simply put, is the ability to come to a conclusion after analyzing information using a logic-derived deterministic algorithm.
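As a toy illustration of what I mean (my own example, nothing rigorous): forward chaining over if-then rules is "logic-derived and deterministic" in exactly this sense. Same facts in, same conclusion out, every run.

```python
# Hypothetical rule base: each rule is (premises, conclusion).
RULES = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "freezing"}, "ice"),
]

def forward_chain(facts):
    # Deterministic forward chaining: fire every applicable rule
    # until no new conclusions appear (a fixed point).
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)  # rule fires, conclusion is forced
                changed = True
    return derived

print(forward_chain({"rain", "freezing"}))
# derives "wet_ground", then "ice", deterministically
```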
* Humans who make mistakes are still considered to be reasoning.
* Deterministic algorithms have limitations, like Gödel incompleteness, which humans seem able to overcome; presumably, we expect reasoning to be able to overcome such challenges too.
1) I didn't say we were, but when someone is called reasonable, or is said to act with reason, that implies deterministic/algorithmic thinking. When we're not deterministic, we're not reasonable.
2) Yes, and to reason doesn't imply being infallible: the deterministic algorithms we follow are usually flawed.
3) I can't speak much to that, but I speculate that if "AI" can do reasoning, it would be a much more complex construct, one that uses LLMs (among other things) as tools and variables, like we do.