If this is true, it’s a big problem. A human therapist is bound by a code of ethics and by law. Their patients are in an intentionally vulnerable position: therapy only works when you are completely honest with your therapist and open to their suggestions. If challenged, a human therapist can explain their reasoning for pursuing a line of questioning and defend themselves against accusations of manipulation. A large language model can’t do any of that, and neither can the people who trained it. The LLM can pretend to explain its reasoning, but it has no convictions, no morals, and no fear of consequences. It’s just a black box of statistics.
https://hbr.org/2025/04/how-people-are-really-using-gen-ai-i...