I think that most smart people underestimate the complexity of fields they aren’t in. ChatGPT may be able to replace a psychology listicle, but it has no affect, and no ability to read, respond, and intervene or redirect the way a human can.
It definitely can. Here's an example from when I tried it:
Me: I'm struggling with some things today and feel like eating a bunch of food to make myself feel better. can we talk about it?
Deepseek: Absolutely, I’m here to listen. It’s completely understandable to turn to food for comfort when you’re struggling—many people do. But if you’re open to it, we can explore what’s going on beneath the urge to eat emotionally.
Would you like to share what’s weighing on you today? Sometimes just naming the feelings can take some of their power away. And if you’d rather not go into details, that’s okay too—we could brainstorm other ways to cope that might leave you feeling more nourished emotionally (and physically).
Some gentle questions to consider (no pressure to answer all or any):
- What’s the emotion you’re trying to soothe? (Loneliness? Stress? Sadness?)
- If eating wasn’t an option, what’s one tiny thing that might bring you a moment of relief? (A hug? A walk? A favorite song?)
You’re not alone in this.
That's a really solid opening response, and it kept doing well from there. I haven't tried a complex psychological issue, but for the "I'm struggling and need someone to talk to" case it does surprisingly well.
ChatGPT can’t be in the room with you. Can’t evoke psychological mirroring. Is AI going to cure loneliness? Will the AI be empathetic? Will it be capable of showing love? Can it give you the acceptance of another human being? Do these things matter if they’re simulated? Do these things matter at all?
I don’t deny the helpfulness of LLMs on many aspects of therapy and mental health. I’ve used them myself. But relational problems (which encompass almost all of mental health) require relational healing.
I think we’re going to need to learn a lot more about what being a human means, or else, in the name of business and financial efficiency, we’ll continue to lose something very important. I’m an optimist, but we have a lot of challenges ahead.
45% of the US[0] have a degree, about 40% of the EU[1] graduate, and 54% of China[2] get at least a university diploma.
The best AI behave like someone fresh out of university without much real world experience.
Personally, I use this as a way to stay humble: when the AI is teaching me fundamentals about some subject, my opinion about it can't possibly be very useful.
> I think that most smart people underestimate the complexity of fields they aren’t in.
And people deep in new technologies overestimate the potential effect of $new_tech_du_jour. You cannot solve a problem without understanding it and its causes. And LLMs are not able to understand anything.
What I think we have here is a problem with definitions and scope.
For example, does your definition of problem demand that a problem only exists if a human consciously thinks of it?
If not, then almost every single 'problem' that's ever existed was solved by the random walk of genetics. Nature has 'solved' some of the most difficult energy-conservation optimizations in living systems with nothing but time and randomness. Think of how many problems had to be solved over time to go from an animal that swims to one that flies.
Underestimating the complexity of other fields is not mutually exclusive with overestimating the intelligence of others. The real issue is that society is very stratified so smart people are less likely to interact with regular people, especially in circumstances where the intelligence of the regular person could become obvious.
I don’t see there being an insurmountable barrier that would prevent LLMs from doing the things you suggest they cannot. So even assuming you are correct for now, I would suggest that LLMs will improve.
My estimations don’t come from my assumption that other people’s jobs are easy, they come from doing applied research in behavioral analytics on mountains of data in rather large data centers.
Most human intelligence is within a fairly narrow band. Most people I’ve ever met have their own unique intelligences. Perhaps it might be good to meet more people without holding the self-looping dichotomy of “smart people” vs “normal people”. In my experience it tends to lead to huge cognitive errors.
As mentioned I’m in the rather unique position to have analyzed the complete browser history for a substantial number of people - I have learned far more than I wished to.
The behaviors of very high-IQ people are rather distinct from those of regular-IQ people, due to IQ being largely genetic and the genes involved having other comorbidities. Most obvious are depression, anxiety, and bipolar disorder. This is so obvious that even regular researchers have uncovered it.
I think what happens to many people is that they confuse their desired reality with actual reality by looking at everything through a tinted lens. Being a data scientist in pursuit of actual reality, I’ve had my desired reality challenged repeatedly, far more than a person outside this industry would. My desired reality was that intelligence was more common, and I believed that until the data showed me I was wrong.
What gets me the most about this is how certain you are that, because you have some data about people’s browser history, you can draw these conclusions. I hope age and experience disabuse you of these notions - there is no master database that will tell you the nature of people. All you have is the data you have - I’m sure there are many interesting insights and confounding results, but it’s foolish to think you can know anyone from a database, or from social media.
Genetic comorbidities with IQ? You at least have a lot to learn about mental health as well if you’re using these as examples.
I think it is you who is blinded by your prior biases. It's not that I have 'some data', it was a mountain of data, and it was my full time job to analyze it specifically for behavioral information with a number of other researchers also doing the same, and we'd compare notes with other research orgs in other companies. I have a feeling that I've thought about this topic far more than you have.
At this point we'll just have to agree to disagree.
If you are suggesting that people shouldn't underestimate the difficulty of the jobs of others - my answer is a strong yes. People should strive for accuracy in all cases. But I did suggest that, even if true, it does not negate my assertion, so I am failing to see the relevance. Perhaps I have misunderstood your point.
Sorry, I was rather obscure - you said "My estimations don’t come from my assumption that other people’s jobs are easy, they come from doing applied research in behavioral analytics on mountains of data in rather large data centers."
And so I considered the preceding discussion in light of your last sentence. Which makes it sound like you are saying "I've observed the behavior of people and they're often flawed and foolish, regardless of the high ideals they claim to be striving for and the education they think they have. Therefore, they will do better with ChatGPT as a companion than with a real human being". But that's quite a few words that you may not have intended, for which I apologize!
It wasn't that I observed them being foolish, but many behaviors are subtly linked to intelligence and can be combined to create a proxy IQ. It also helps when people search for their SAT scores. I noted that the people I typically interact with are much higher IQ than I had expected, which incorrectly skewed my belief about the average upward. I noticed that other high-IQ individuals were making the same assumptions. I had very much underestimated how little I interact with regular people.
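(To make "combined to create a proxy IQ" concrete, here's a minimal sketch of the general idea - a weighted sum of standardized signals. Every feature name, statistic, and weight below is invented for illustration; it's not the actual model.)

```python
# Hypothetical sketch: a "proxy" score as a weighted sum of z-scored
# behavioral signals. All names and numbers are made up for illustration.

def proxy_score(features, means, stds, weights):
    """Weighted sum of z-scored features."""
    return sum(
        weights[k] * (features[k] - means[k]) / stds[k]
        for k in features
    )

# Made-up example values: per-feature population mean/std and a weight.
features = {"reading_level": 11.2, "query_complexity": 0.8, "vocab_breadth": 0.65}
means    = {"reading_level": 9.0,  "query_complexity": 0.5, "vocab_breadth": 0.50}
stds     = {"reading_level": 1.5,  "query_complexity": 0.2, "vocab_breadth": 0.10}
weights  = {"reading_level": 0.4,  "query_complexity": 0.3, "vocab_breadth": 0.3}

print(round(proxy_score(features, means, stds, weights), 2))  # 1.49
```

In practice the weights would be fit against some ground truth (e.g., self-reported SAT scores), but the combination step really is that simple.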
I think we're already finding out that people are doing better with ChatGPT than with their peers: not all peers are created equal, and people can ask ChatGPT things that they cannot ask their peers. I think this trend will continue to the point that most people will prefer discussing things with ChatGPT rather than with their peers. Given what I know, I predict this is a choice many people will make. I'm not passing judgment on that; it's a choice I've also made, and I'm fortunate enough to have better peers than most.
> So even assuming you are correct for now I would suggest that LLMs will improve
Yes, and when we can all wear smart glasses the ways we use them will become increasingly influential in our daily lives: a conversational voice assistant that is visually monitoring our surroundings, helping with decision making (including micro decisions), coaching, carrying out our instructions, etc.
You’re comparing ChatGPT to an idealized example of a good human therapist when many actual therapists are either useless or even actively harmful to the mental health of their clients.
These therapists exist. There also exist therapists from whom certain people gain a ton of help but from whom others would go running for the hills. Not all therapists are meant to treat all clients, not just in terms of methodology or experience with given diagnoses, but also on a generational and cultural basis.
This idea that there is some “best” based on a synthesis of all content is inherently wrong - therapy, more than most other things, is personal and personalized. Human connection is not just a series of levers and response tokens.
That is a messy one here in the US. Almost every time we attempt to increase regulations around medical stuff, we end up increasing costs and consolidation, making care even more unavailable.
It’s easy to just say that the regulations should be improved. It’s very different to actually improve them. Therapy isn’t a mass-produced, engineered product like a car; if therapy were a mass-produced, engineered product, it would be an AI anyway. Materials science and structural engineering are far more mature and well-understood sciences than psychology, and you can’t just throw a crash-test dummy into a therapist’s office and measure how damaged it gets.
It’s also not really clear how such regulations could even work. The regulations we have now are basically the obvious ones around licensure that require people to go to the right schools and whatnot. And then you can lose your license if it turns out you’ve broken the big ethical rules. But at the end of the day, that only regulates who can call themselves a “therapist” and get listed in Psychology Today. Actually “doing” “therapy” is, ultimately, built on talking to someone about your problems in some way that is supposed to help you solve them. You don’t need a “therapist” to do that. You can do it with your friend or parent or pastor or bartender or guru or “life coach” and, as long as we live in a free country, nobody’s going to stop you. Sure, the people who are allowed to call themselves therapists have certain techniques and rules that make them different, but even if that was a guarantee of quality there’s no way to stop people from talking to someone other than a licensed therapist, and it would be kind of absurd and dystopian to even try.
So let’s dispense with the notion that we are some sort of omniscient god-emperor who can just magically fix things with vague “regulations” and talk about the world as it actually exists. For a lot of people, I think that’s a world where talking about their personal issues with an LLM is arguably no worse than whatever other options they have. Maybe it’s not the equivalent of whatever amazing therapist you know or have or are or can imagine, but that’s not the therapist that everyone is going to get.
That’s why therapists have to be licensed. Licensing boards decide what a “bad” therapist is and delicense those who’ve caused significant harm to their clients. It’s a difficult process, but it should be. Once you get into it, you find that people have already thought of these solutions and have actually put things in place to work toward a better system.
Except healthcare payments. That shit was designed to make money, not make people healthy.
There are a lot of awful therapists out there. I wager that Claude Sonnet 3.7, given a suitable, straightforward system prompt, would handily outperform non-doctoral-degree therapists in a clinical trial, even if the humans had the advantage of in-person sessions.
I would want to bet against it but then we’d have to agree on what “performance” means. Also agree there are horrible therapists. The funny part about horrible therapists is that sometimes they can actually be extremely good for one population of people. Will AI be able to connect with all sorts of people? Will we have AI politicians (do we now?)?
It’s sorta like saying AI will be a better friend. We’ll see about that - I don’t consider profit-seeking enterprises to be my friends.