I’ve made many attempts to use ChatGPT to develop or double-check my own logical reasoning on technical topics that aren’t widely discussed (or perhaps not discussed at all) in its training data. It didn’t work well. The conversation always devolved into guesswork and fabrication, if not outright false reasoning. Correcting ChatGPT would get it to agree with individual objections, but it never showed a true and consistent understanding of the topic under discussion, nor any apparent understanding of why I was having issues with its responses, beyond the usual “I apologize, you are correct, <rephrasing of your objection>”.
One problem likely is that it doesn’t have an internal dialogue, so you have to spoon-feed each step of reasoning as part of the explicit dialogue. But even then, it never feels like ChatGPT has an overall understanding of the discussion. To repeat, this is when the conversation involves lines of reasoning about specific points for which googling turns up no good results.
> One problem likely is that it doesn’t have an internal dialogue, so you have to spoon-feed each step of reasoning as part of the explicit dialogue.
I think if we were to put ChatGPT on the map of the human mind, it would correspond specifically to the inner voice. It doesn't have internal dialogue, because it's the part that creates internal dialogue.