I think most things humans do are reflexive, type-one "thinking" that AIs do just as well as humans.
I think our type-two reasoning is roughly comparable to LLM reasoning when the task falls within the LLM's reinforcement-learning distribution.
I think some humans are smarter than LLMs out of distribution, but only when we think carefully, and even then LLMs outperform many humans.