Well, here instead of asking you a clarifying question about your response, I might instead ask ChatGPT. Something would be lost, in my mind. Just musing out loud.
I think leaning on ChatGPT to understand others is as problematic as relying only on oneself, except now you've also added a layer of your own interpretation that depends on how you engage with ChatGPT.
Put another way, if you’re going to sources other than the individual speaking to clarify what they’re saying, the underlying issue is probably not ChatGPT or whatever the next tool is that comes around.
Another form of what you describe is leaning on one's friends or acquaintances. Plenty of people do this, often with poor results; Reddit's various relationship forums are a great example. I translate what I thought I heard and ask a third party who wasn't there what they hear. But in doing so, I strip away even more context and make it even less likely that I'll arrive at a useful answer.
I’m sure people will use LLMs for this, but the root issue is deeper, not caused by these tools.
I think that with time, we’ll get better at determining which types of conversations are worthwhile and which aren’t.
If I’m trying to understand a complex, multi-faceted technical issue, it’s amazing to be able to drill deeper and deeper into the knowledge contained within the LLM.
If I’m trying to understand the internal states of other people, I have no reason to believe I’ll find good answers in a model that wasn’t trained on that person’s thoughts.