This is not surprising at all. Having gone through therapy a few years back, I would have chatted with an LLM if I were in a poor mental health situation. There is no other system that is available at scale, 24x7, on my phone.
A chat like this is not a solution though; it is an indicator that our societies have issues in large parts of our population that we are unable to deal with. We are not helping enough people. Topics like mental health are still difficult to discuss in many places. Getting help is much harder.
I do not know what OpenAI and other companies will do about it, and I do not expect them to jump in to solve such a complex social issue. But perhaps this inspires other founders who may want to build a company to tackle this at scale. Focusing on help, not profits. This is not easy, but some folks will take on such challenges. I choose to believe that.
Someone elsewhere in the thread pointed out that it's truly hard to open up to another human, especially face to face. Even if you know they're a professional, it's awkward, it can be embarrassing, and there's stigma about a lot of things people ideally go to therapy for.
I mean, hell, there are people out there with absolutely terrible dental health who avoid going to the dentist because they're ashamed of it, even though logically, dentists have absolutely seen worse, and they're not there to judge; they're just there to help fix the problem.
There's no point bothering these poor volunteers/underpaid workers with my issues because they're inherently unfixable. Truth is, I should either suck it up or kill myself. Meanwhile, with an LLM I will never feel like I'm wasting its time, because it never gets tired of my blabbering about the same shit over and over again.
I choose to believe that too. I think more people are interested than we’d initially believe. Money restrains many of our true wants.
Sidebar: I do sympathize with the problem being thrust upon them, but it is now theirs to either solve or refuse.
A chat like this is all you’ve said, and dangerous, because it plays a middle ground: presenting a machine that can evaluate your personal situation and reason about it, when in actuality you’re getting third-party therapy about someone else’s situation in /r/relationshipadvice.
We are not ourselves when we are down. It is difficult to parse what is reasonable advice and what is not. I think it can help most people, but it can equally lead to disaster… It is difficult to weigh.
It's worse than parroting advice that's not applicable. It tells you what you told it to tell you. It's very easy to get it to reinforce your negative feelings. That's how the psychosis stuff happens: it amplifies what you put into it.
This makes no sense at all to me. You can choose to gather evidence and evaluate that evidence, you can choose to think about it, and based on that process a belief will follow quite naturally. If you then choose to believe something different, it's just self-deception.
You are right, and it gives us a chance to do something about it. We have always had data about people who are struggling, but now we see how many are trying to reach out for advice or help.
> A chat like this is not a solution though, it is an indicator that our societies have issues
Correct, many of which are directly, a skeptic might even argue deliberately, exacerbated by companies like OpenAI.
And yet your proposal is
> a company to tackle this at scale.
What gives you the confidence that any such company will focus consistently, if at all,
> on help, not profits
Given that it exists in the same incentive matrix as any other startup? A matrix which is far less likely to throw fistfuls of cash at a nice-sounding idea now than it was in recent times. This company will need to resist its investors' pressure to find returns. How exactly will it do this? Do you choose to believe someone else has thought this through, or will do so? At what point does your belief become convenient for people who don't share your admirably prosocial convictions?
Is OpenAI taking steps to reduce access to mental healthcare in an attempt to force more people to use their tools for such services? Or do you mean in a more general sense that any companies that support the Republican Party are complicit in exacerbating the situation? At least that one has a clear paper trail.