For many people ChatGPT is already the smartest relationship they have in their lives; I’m not sure how long we have until it’s the most fulfilling. On the upside it is plausible that ChatGPT can get to a state where it can act as a good therapist and help the helpless who otherwise would not get help.
I am more regularly finding myself in discussions where the other person believes they’re right because they have ChatGPT in their corner.
I think most smart people overestimate the intelligence of others for a variety of reasons so they overestimate what it would take for a LLM to beat the output of an average person.
>I think most smart people overestimate the intelligence of others for a variety of reasons so they overestimate what it would take for a LLM to beat the output of an average person.
I think most people also _vastly_ overestimate how much positive attention the average person gets in their lives
It wouldn't surprise me if, for most people, ChatGPT offers them more empathy and understanding than _anyone_ else _ever has_, at least on a consistent basis. That kind of indefatigable emotional labor is just not feasible for most, even on a very short term basis, even for those with large support networks
We can argue over whether or not it's "real" empathy, but I don't believe we can argue with the emotions of our attention starved brothers and sisters
>We can argue over whether or not it's "real" empathy
There's nothing to argue about, it's unambiguously not real empathy. Empathy from a human exists in a much broader context of past and future interactions. One reason human empathy is nice is because it is often followed up with actions. Friends who care about you will help you out in material ways when you need it.
Even strangers will. Someone who sees a person stranded on the side of a road might feel for them and stop to lend a hand. ChatGPT will never do that, and not just because interaction mediums are so limited, but also because that's not the purpose of the tool. The purpose of ChatGPT is to make immense amounts of money and power for its owners, and a nice sounding chat bot currently happens to be an effective way of getting there. Sam Altman doesn't have empathy for random ChatGPT users he's never met and neither do the computer algorithms his company develops.
>There's nothing to argue about, it's unambiguously not real empathy
I think if a person can't tell the difference between empathy from a human vs empathy from a chatbot, it's a distinction without a difference
If it activates the same neural pathways, and has the same results, then I think the mind doesn't care
>One reason human empathy is nice is because it is often followed up with actions. Friends who care about you will help you out in material ways when you need it.
This is what I think people vastly overestimate
I don't think most people have such ready access to a friend who is both willing and able to perform such emotional labor, on demand, at no cost to themselves.
I think the sad truth is that empathy is a much scarcer resource than we believe, not through any moral fault of our own, but because it's just the nature of things.
The economics of emotions.
We'll see what the future has in store for the tech anyway, but if it turns out that the average person gets more empathy from a chatbot than a human, it wouldn't surprise me
Empathy does not lie in its perception on receipt but in its inception as a feeling. It is fundamentally a manifestation of the modalities enabled by shared experience. As such, it is impossible to the extent that our experiences are not compatible with those of an intelligence that does not put emphasis on lived context, trying to substitute for it with offline batch learning. Understanding is possible in this relationship, but should not be confused with empathy or compassion.
I happen to agree with what you said. (Paraphrasing: A machine cannot have "real empathy" because a machine cannot "feel" in general.) But I think you're arguing a different point from the grandparent's. rurp said:
> Someone who sees a person stranded on the side of a road might feel for them and stop to lend a hand. ChatGPT will never do that [...]
Now, on the one hand that's because ChatGPT cannot "see a person" nor "stop [the car]"; it communicates only by text-in, text-out. (Although it's easy to input text describing that situation and see what text ChatGPT outputs!) GP says it's also because "the purpose of ChatGPT is to make immense amounts of money and power for its owners [, not to help others]." I took that to mean that GP was saying that even if a LLM was controlling a car and was able to see a person in trouble (or a tortoise on its back baking in the sun, or whatever), then it still would not stop to help. (Why? Because it wouldn't empathize. Why? Because it wasn't created to empathize.)
I take GP to be arguing that the LLM would not help; whereas I take you to be arguing that even if the LLM helped, it would by definition not be doing so out of empathy. Rather, it would be "helping"[1] because the numbers forced it to. I happen to agree with that position, but I think it's significantly different from GP's.
Btw, I highly recommend Geoffrey Jefferson's essay "The Mind of Mechanical Man" (1949) as a very clear exposition of the conservative position here.
[1] — One could certainly argue that the notions of "help" and "harm" likewise don't apply to non-intentional mechanistic forces. But here I'm just using the word "helping" as a kind of shorthand for "executing actions that caused better-than-previously-predicted outcomes for the stranded person," regardless of intentionality. That shorthand requires only that the reader is willing to believe in cause-and-effect for the purposes of this thread. :)
Yes, I am not in fact expanding on GP’s argument but etymologically attacking the premise. Pathos is not learnt. When I clutch my legs at the sight of someone getting kicked in the balls, that’s empathy. When, as now, I write about it, it’s not, even in my case where I have lived experience of it. More sophisticated kinds of empathy build on the foundations of these gut-driven ones. Thank you for the reading recommendation, I will look for it.
> As such impossible to the extent that our experiences are not compatible with those of an intelligence that does not put emphasis on lived context trying to substitute for it with offline batch learning.
Conversely that means empathy is possible to the extent that our experiences are compatible with those of an AI. That is precisely what's under consideration here and you have not shown that it is zero.
> an intelligence that does not put emphasis on lived context trying to substitute for it with offline batch learning.
Will you change your tune when online learning comes along?
Lived context is to me more than online learning. I admit I am not so versed in the space as to be able to anticipate the nature of context in the case of online learning, so, yes, indeed I may change my tune if it somehow makes learning more of an experience rather than an education. My understanding is it won’t. I have not proven, but argued, that experience compatibility is zero, to the extent an LLM does not experience anything. Happy to accept alternative viewpoints and, accordingly, that someone may perceive something as a sign of empathy whether it is or not.
>If it activates the same neural pathways, and has the same results, then I think the mind doesn't care
Boiling it down to neural signals is a risky approach, imo. There are innumerable differences between these interactions. This isn't me saying interactions are inherently dangerous if artificial empathy is baked in, but equating them to real empathy is.
Understanding those differences is critical, especially in a world of both deliberately bad actors and those who will destroy lives in the pursuit of profit by normalizing replacements for human connections.
There's a book that I encourage everyone to read called Motivational Interviewing. I've read the 3rd edition and I'm currently working my way through the 4th edition to see what's changed, because it's a textbook that they basically rewrite completely with each new edition.
Motivational Interviewing is an evidence-based clinical technique for helping people move through ambivalence during the contemplation, preparation, and action stages of change under the Transtheoretical Model.
In Chapter 2 of the 3rd Edition, they define Acceptance as one of the ingredients for change, part of the "affect" of Motivational Interviewing. Ironically, people do not tend to change when they perceive themselves as unacceptable as they are. It is when they feel accepted as they are that they are able to look at themselves without feeling defensive and see ways in which they can change and grow.
Nearly all that they describe in chapter 2 is affective—it is neither sufficient nor even necessary in the clinical context that the clinician feel a deep acceptance for the client within themselves, but the client should feel deeply accepted so that they are given an environment in which they can grow. The four components of the affect of acceptance are autonomy support, absolute worth (what Carl Rogers termed "Unconditional Positive Regard"), accurate empathy, and affirmation of strengths and efforts.
Chapters 5 and 6 of the third edition define the skills of providing the affect of acceptance defined in Chapter 2—again, not as a feeling, but as a skill. It is something that can be taught, practiced, and learned. It is a common misconception to believe that unusually accepting people become therapists, but what is actually the case is that practicing the skill of accurate empathy trains the practitioner to be unusually accepting.
The chief skill of accurate empathy is that of "reflective listening", which essentially consists of interpreting what the other person has said and saying your interpretation back to them as a statement. For an unskilled listener, this might be a literal rewording of what was said, but more skilled listeners can, when appropriate, offer reflections that read between the lines. Very skilled listeners (as measured by scales like the Therapist Empathy Scale) will occasionally offer reflections that the person being listened to did not think, but will recognize within themselves once they have heard it.
In that sense, in the way that we measure empathy in settings where it is clinically relevant, I've found that with some prompting AIs are very capable of displaying the affect of accurate empathy.
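For what it's worth, here's roughly the kind of prompting I mean, as a minimal sketch assuming the `openai` Python package; the model name and the exact prompt wording are placeholders I made up, not a clinical tool.

    # Minimal sketch: asking a model for reflective listening rather than advice.
    # Assumes the `openai` Python package; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "Practice accurate empathy through reflective listening. Do not give "
        "advice or solutions. Restate what the person seems to be saying and "
        "feeling, as a statement in your own words, occasionally reading "
        "between the lines. Keep replies short and non-judgmental."
    )

    def reflect(user_message: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content

    print(reflect("I keep starting projects and abandoning them, and I hate myself for it."))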
A lot of human empathy isn't real either. Defaulting to the most extreme example, narcissists use love bombing to build attachment. Sales people use "relationship building" to make money. AI actually seems better than these -- it isn't building up to a rug pull (at least, not one that we know of yet).
And it's getting worse year after year, as our society gets more isolated. Look at trends in pig butchering, for instance: a lot of these are people so incredibly lonely and unhappy that they fall into the world's most obvious scam. AI is one of the few things that actually looks like it could work, so I think realistically it doesn't matter that it's not real empathy. At the same time, Sam Altman looks like the kind of guy who could be equally effective as a startup CEO or running a butchering op in Myanmar, so I hope like hell the market fragments more.
This is a good point, you can't be dependent on a chatbot in the same way you're dependent on someone you share a lease with. If people take up chatbots en masse, maybe it says more about how they perceive the risk of virtual or physical human interactions vs AI. The people I have met in the past make the most sycophantic AIs seem like a drop in the bucket by comparison. When you come back from that in real life, you remark that this is all just a bunch of text in comparison.
I treat AIs dispassionately like a secretary I can give infinite amounts of work to without needing to care about them throwing their hands up. That sort of mindset is non-conducive to developing any feelings. With humans you need empathy to not burden them with excessive demands. If it solely comes down to getting work done (and not building friendships or professional relationships etc.) then that need to restrain your demands is a limitation of human biology that AIs kind of circumvent for specific workloads.
One concern that I do worry about is that if LLMs are able to present a falsely attractive view of the world, the user will become increasingly dependent on the LLM to maintain that view. A cult of 1. Reminds me of the episode 'Safe Space' from South Park, but instead of Butters filtering content it'll be the LLM. People are already divorced enough from reality - but I see no reason why they couldn't be more divorced, at least temporarily.
It begs the question of who decides what “reality” is though. A lot of people have an unrealistically negative view of themselves and their abilities—often based on spending time around pessimistic or small-minded humans.
In that case, if an AI increases someone’s confidence in themselves, you could say it’s giving them a stronger sense of reality by helping them to question distorted and self-limiting beliefs.
We're talking about psychology, therapy, sycophancy, etc. None of this is empirical.
If someone thinks they can, say, create a billion dollar startup, whether they can really do it or not is a subjective determination. The AI might tell the person they can do it. You might tell them they can't, that the AI is sycophantic, and that they should stop talking to it because they're losing touch with reality.
But is the AI a sycophant, or are you an irrational pessimist?
The AI will be saying the same thing to everyone. Rationally, what are the chances every single OpenAI customer will be building a billion dollar startup any time soon?
But it's even more obvious than that. The sycophancy is plain old love bombing, which is a standard cult programming technique.
As for startups - let's wait until the AI has built a few of its own, or at least mentored humans successfully.
That's easy. What makes someone a sycophant, by definition, is that their encouragement and flattery is unconditional and completely disconnected from any sort of realistic consideration of your ideas.
You can't judge whether LLM is acting like a sycophant without reading the conversation, and you can't judge whether a human is being an irrational pessimist without having the full context.
Are they a highly intelligent, technically skilled, and socially competent person (probably not if they discuss their ideas with ChatGPT instead of a friend), or do they have a high school diploma, zero practical skills, and have spent the past 20 years smoking weed all day?
I think it is more complicated than just a matter of being correct or not. Common advice in some creative professions is "don't bother trying to do X for a living, you'll never make it." The point of the advice is not whether or not it is literally correct and the person is better off not bothering - in the general case, it's probably true. The point is that it acts as a filter for those not motivated enough. It's a backhanded sort of test implicitly presented to the aspirant.
Someone who really, really wants to make a billion dollar startup against all odds is going to ignore your advice anyway. In fact, they would ignore any AI's advice on the topic as well. But that kind of arrogance is precisely what's required to be able to pull it off. Someone who quits the moment an AI tells them "don't do it" was not cut out to accomplish such a goal to begin with.
And maybe in the end the startup will only be worth a couple million dollars, but the hubris to believe they could go even further would be what got them that far at all. So "can build a billion dollar startup" ended up being false, but something else was gained in the end.
We get into a bit of a weird space though when they know your opinions about them. I'm sure there are quite a few people who can only build a billion dollar startup if someone emotionally supports them in that endeavor. I'm sure more people could build such a startup if those around them provide knowledge or financial support. In the limit, pretty much anyone can build a billion dollar startup if handed a billion dollars. Are these people capable or not capable of building a billion dollar startup?
EDIT: To be clear, I somehow doubt an LLM would be able to provide the level of support needed in most scenarios. However, you and others around the potential founder might make the difference. Since your assessment of the person likely influences the level of support you provide to them, your assessment can affect the chances of whether or not they successfully build a billion dollar startup.
Hopefully there are better solutions to the fundamental limitations of societal empathy in the future, but for now i just can't see any
Seems to me empathy on a societal scale has been receding as population grows, not increasing to match (or outpace)
Telling people to seek empathy elsewhere to me will be about as useful as telling people at an oasis in the desert to look for water elsewhere, but i hope i'm wrong
For a subset of topics, "AI" is already what I prefer to interact with over humans. At times, it's nicer to start with "AI" and kind of ground my messy thoughts before interacting with people and that works better than directly starting with a person.
I'm also starting to come across people who give the "You should ask these types of questions to AI first" answer. But this is no different than people who preached "lmgtfy" kind of religiously. Even when I prefer to start with humans, some humans prompt me to start by prompting AI.
I'm waiting on LLMs to get good enough that I can use them to help me learn foreign languages - e.g. talk to me about the news in language X. This way I can learn a language in an interesting and interactive way without burdening some poor human with my mistakes. I would build this myself but others will probably beat me to it.
I sometimes prompt the LLM to talk to me as a <language> instructor - to suggest a topic, ask a question, read my response, correct my grammar, and suggest alternate vocabulary where appropriate. This works quite well. Similar to your comment, I am often hesitant to butcher a language in front of a real person :-).
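In case it's useful to anyone, here is a rough sketch of that instructor setup, assuming the `openai` Python package; the language, model name, and prompt wording are just placeholders.

    # Rough sketch of an interactive language-instructor loop.
    # Assumes the `openai` package; the language and model are placeholders.
    from openai import OpenAI

    client = OpenAI()

    INSTRUCTOR_PROMPT = (
        "Act as a friendly Spanish instructor. Suggest a topic and ask me a "
        "question about it in Spanish. After each of my replies, correct my "
        "grammar, suggest alternate vocabulary where appropriate, and then "
        "ask a follow-up question to keep the conversation going."
    )

    messages = [{"role": "system", "content": INSTRUCTOR_PROMPT}]

    while True:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=messages,
        ).choices[0].message.content
        print(reply)
        messages.append({"role": "assistant", "content": reply})
        user = input("> ")
        if not user.strip():
            break
        messages.append({"role": "user", "content": user})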
The first step to really learn a language is to be confident and forgive yourself for any mistakes (you’re starting late anyway , and juggling other things).
These days, I do my best to learn and reflect. But any mistake is just a reminder for more learning (and practice).
I contribute to a language-learning forum as a native English speaker, and we constantly get questions from people who are doing exactly what you're doing. The AI does not understand the language, and it will tell you blatantly incorrect information. Especially with less-common constructs, you'll just get very bad advice.
The problem is, AI doesn't let you, or encourage you to, create your own style. Word choices, structure, flow, argument building and discourse style are all very fixed and "average", since it's a machine that favors what it ingests most.
I use Grammarly for grammar and punctuation, and disable all style recommendations. If I let it loose on my piece of text, it converts it to slop. Same bland, overly optimistic toned text generator output.
So, that machine has no brain, use your own first.
It's not a project. Just open Claude or ChatGPT and tell it what you want to do. You could copy/paste your comment above. If you want to get slightly more fancy, use Google's AIStudio with a simple system prompt.
I would like to do it semi-passively through talking while I’m doing other things like going for a walk, doing chores etc. I don’t have the time at the moment to sit down and focus on this.
I think that most smart people underestimate the complexity of fields they aren’t in. ChatGPT may be able to replace a psychology listicle, but it has no affect or ability to read, respond, and intervene or redirect like a human can.
They definitely can. Here's an example when I tried it:
Me: I'm struggling with some things today and feel like eating a bunch of food to make myself feel better. can we talk about it?
Deepseek: Absolutely, I’m here to listen. It’s completely understandable to turn to food for comfort when you’re struggling—many people do. But if you’re open to it, we can explore what’s going on beneath the urge to eat emotionally.
Would you like to share what’s weighing on you today? Sometimes just naming the feelings can take some of their power away. And if you’d rather not go into details, that’s okay too—we could brainstorm other ways to cope that might leave you feeling more nourished emotionally (and physically).
Some gentle questions to consider (no pressure to answer all or any):
- What’s the emotion you’re trying to soothe? (Loneliness? Stress? Sadness?)
- If eating wasn’t an option, what’s one tiny thing that might bring you a moment of relief? (A hug? A walk? A favorite song?)
You’re not alone in this.
That's a really solid opening response and it kept doing well from there. I haven't tried a complex psychological issue but for the "I'm struggling and need someone to talk to" it does surprisingly well.
ChatGPT can’t be in the room with you. Can’t evoke psychological mirroring. Is AI going to cure loneliness? Will the AI be empathetic? Will it be capable of showing love? Can it give you the acceptance of another human being? Do these things matter if they’re simulated? Do these things matter at all?
I don’t deny the helpfulness of LLMs for many aspects of therapy and mental health. I’ve used them myself. But relational problems (of which almost all mental health issues are a part) require relational healing.
I think we’re going to need to learn a lot more about what being a human means, or else, in the name of business and financial efficiency, we will continue to lose something very important. I’m an optimist, but we have a lot of challenges ahead.
45% of the US[0] have a degree, about 40% of the EU[1] graduate, and 54% of China[2] get at least a diploma from university.
The best AI behave like someone fresh out of university without much real world experience.
Personally, I use this as a way to stay humble: when the AI is teaching me fundamentals about some subject, my opinion about it can't possibly be very useful.
> I think that most smart people underestimate the complexity of fields they aren’t in.
And people deep in new technologies overestimate the potential effect of $new_tech_du_jour. You cannot solve a problem without understanding it and its reasons. And LLMs are not able to understand something.
What I think we have here is a problem with definitions and scope.
For example, does your definition of a problem demand that a problem only exists if a human consciously thinks of it?
If no, then almost every single 'problem' that's ever existed was solved by the random walk of genetics. Nature has 'solved' some of the most difficult energy conservation optimizations in living systems with nothing but time and randomness. Think of how many problems had to be solved over time to go from an animal that swims to one that flies.
Underestimating the complexity of other fields is not mutually exclusive with overestimating the intelligence of others. The real issue is that society is very stratified so smart people are less likely to interact with regular people, especially in circumstances where the intelligence of the regular person could become obvious.
I don’t see there being an insurmountable barrier that would prevent LLMs from doing the things you suggest it cannot. So even assuming you are correct for now I would suggest that LLMs will improve.
My estimations don’t come from my assumption that other people’s jobs are easy, they come from doing applied research in behavioral analytics on mountains of data in rather large data centers.
Most human intelligence is within a fairly narrow band. Most people I’ve ever met have their own unique intelligences. Perhaps it might be good to meet more people without holding the self-looping dichotomy of “smart people” vs “normal people”. In my experience it tends to lead to huge cognitive errors.
As mentioned I’m in the rather unique position to have analyzed the complete browser history for a substantial number of people - I have learned far more than I wished to.
The behaviors of very high IQ people are rather distinct from those of regular IQ people due to IQ being both largely genetic and those genes having other comorbidities. Most obvious are depression, anxiety, and bipolar disorders. This is so obvious that even regular researchers have uncovered it.
I think what happens to many people is they confuse their desired reality with actual reality by looking at everything through a tinted lens. In being a data scientist in pursuit of actual reality I’ve had my desired reality repeatedly challenged far more than a person not in this industry. My desired reality was that intelligence was more common and I believed that until it was shown to me in data that I was wrong.
What gets me the most about this is how certain you are that because you have some data about people’s browser history you can make these conclusions. I hope age and experience disabuse you of these notions - there is no master database that will tell you the nature of people. All you have is the data you have - I’m sure there are many interesting insights and confounding results, but it’s foolish to think you can know anyone from a database, or social media.
Genetic comorbidities with IQ? You at least have a lot to learn about mental health as well if you’re using these as examples.
I think it is you who is blinded by your prior biases. It's not that I have 'some data', it was a mountain of data, and it was my full time job to analyze it specifically for behavioral information with a number of other researchers also doing the same, and we'd compare notes with other research orgs in other companies. I have a feeling that I've thought about this topic far more than you have.
At this point we'll just have to agree to disagree.
If you are suggesting that people shouldn't underestimate the difficulty of the jobs of others - my answer is a strong yes. People should strive for accuracy in all cases. But I did suggest that even if true it does not negate my assertion so I am failing to see the relevance. Perhaps I have misunderstood your point.
Sorry, I was rather obscure - you said "My estimations don’t come from my assumption that other people’s jobs are easy, they come from doing applied research in behavioral analytics on mountains of data in rather large data centers."
And so I considered the preceding discussion in light of your last sentence. Which makes it sound like you are saying "I've observed the behavior of people and they're often flawed and foolish, regardless of the high ideals they claim to be striving for and the education they think they have. Therefore, they will do better with ChatGPT as a companion than with a real human being". But that's quite a few words that you may not have intended, for which I apologize!
It wasn't that I observed them being foolish, but many behaviors are subtly linked to intelligence and can be combined to create a proxy IQ. It also helps when people search their SAT scores. I noted that the people I typically interact with have much higher IQs than I had expected, which incorrectly skewed my belief about the average higher. I noticed that other high IQ individuals were making the same assumptions. I had very much underestimated how little I interact with regular people.
I think we're already finding out that people are doing better with ChatGPT than with their peers, not all peers are created equal, and they can ask ChatGPT things that they cannot ask their peers. I think this trend will continue to the point that most people will prefer discussing things with ChatGPT than with their peers. Given what I know I predict this is a choice many people will make, I'm not passing judgment on that, it's a choice I've also made and I'm fortunate enough to have better peers than most.
> So even assuming you are correct for now I would suggest that LLMs will improve
Yes, and when we can all wear smart glasses the ways we use them will become increasingly influential in our daily lives: a conversational voice assistant that is visually monitoring our surroundings, helping with decision making (including micro decisions), coaching, carrying out our instructions, etc.
You’re comparing ChatGPT to an idealized example of a good human therapist when many actual therapists are either useless or even actively harmful to the mental health of their clients.
These therapists exist. There also exist therapists who certain people gain a ton of help from but who would send others running for the hills. Not all therapists are meant to treat all clients - not just in terms of methodology or experience with given diagnoses, but also on a generational and cultural basis.
This idea that there is some “best” based on a synthesis of all content is inherently wrong - therapy more than most other things is personal and personalized. Human connection is not just a series of levers and response tokens.
That is a messy one here in the US. Almost every time we attempt to increase regulations around medical stuff, we end up increasing costs and consolidation, making care even more unavailable.
It’s easy to just say that the regulations should be improved. Very different to actually improve them. Therapy isn’t a mass produced engineered product like a car; if therapy was a mass produced engineered product, it would be an AI anyway. Materials science and structural engineering are far more mature and well-understood sciences than psychology, and you can’t just throw a crash test dummy into a therapist’s office and measure how damaged it gets.
It’s also not really clear how such regulations could even work. The regulations we have now are basically the obvious ones around licensure that require people to go to the right schools and whatnot. And then you can lose your license if it turns out you’ve broken the big ethical rules. But at the end of the day, that only regulates who can call themselves a “therapist” and get listed in Psychology Today. Actually “doing” “therapy” is, ultimately, built on talking to someone about your problems in some way that is supposed to help you solve them. You don’t need a “therapist” to do that. You can do it with your friend or parent or pastor or bartender or guru or “life coach” and, as long as we live in a free country, nobody’s going to stop you. Sure, the people who are allowed to call themselves therapists have certain techniques and rules that make them different, but even if that was a guarantee of quality there’s no way to stop people from talking to someone other than a licensed therapist, and it would be kind of absurd and dystopian to even try.
So let’s dispense with the notion that we are some sort of omniscient god-emperor who can just magically fix things with vague “regulations” and talk about the world as it actually exists. For a lot of people, I think that’s a world where talking about their personal issues with an LLM is arguably no worse than whatever other options they have. Maybe it’s not the equivalent of whatever amazing therapist you know or have or are or can imagine, but that’s not the therapist that everyone is going to get.
That’s why therapists have to be licensed. They decide what a “bad” therapist is, and delicense those who’ve caused significant harm to their clients - it’s a difficult process though, but it also should be. Once you get into it, you find that people have already thought of these solutions and actually put things in place to work towards a better system.
Except healthcare payments. That shit was designed to make money, not make people healthy.
There are a lot of awful therapists out there. I wager that Claude Sonnet 3.7 given a suitable, straightforward system prompt would handily outperform non-doctoral degree therapists in a clinical trial, even if the humans had the advantage of in-person sessions.
I would want to bet against it but then we’d have to agree on what “performance” means. Also agree there are horrible therapists. The funny part about horrible therapists is that sometimes they can actually be extremely good for one population of people. Will AI be able to connect with all sorts of people? Will we have AI politicians (do we now?)?
It’s sorta like saying AI will be a better friend. We’ll see about that - I don’t consider profit seeking enterprises to be my friend.
I dunno, I feel like most people (probably not the typical HN user though) don't even think about their feelings, wants or anything else introspective on a regular basis. Maybe having something like ChatGPT available could be better than nothing, at least as a way for people to start being a bit introspective, even if it's LLM-assisted. Maybe it gets a bit easier to ask questions that you feel are stigmatized, as you know (think) no other human will see it, just the robot that doesn't have feelings and won't judge you.
I agree that it probably won't replace a proper therapist/psychologist, but maybe it could at least be a small step to open up and start thinking?
> I feel like most people (probably not the typical HN user though) don't even think about their feelings, wants or anything else introspective on a regular basis.
Well, two things.
First, no. People who engage on HN are a specific part of the population, with particular tendencies. But most of the people here are simply normal, so outside of the limits you consider. Most people with real social issues don’t engage in communities, virtual or otherwise. HN people are not special.
Then, you cannot follow this kind of reasoning when thinking about a whole population. Even if people on average tend to behave one way, this leaves millions of people who would behave otherwise. You simply cannot optimise for the average and ignore the worst case in situations like this, because even very unlikely situations are bound to happen a lot.
> Maybe having something like ChatGPT available could be better than nothing, at least for people to start being at least a bit introspective, even if it's LLM-assisted.
It is worse than nothing. A LLM does not understand the situation or what people say to it. It cannot choose to, say, nudge someone in a specific direction, or imagine a way to make things better for someone.
A LLM regresses towards the mean of its training set. For people who are already outside the main mode of the distribution, this is completely unhelpful, and potentially actively harmful. By design, a LLM won’t follow a path that was not beaten in its training data. Most of them are actually biased to make their user happy and validate what we tell them rather than get off that path. It just does not work.
> I agree that it probably won't replace a proper therapist/psychologist, but maybe it could at least be a small step to open up and start thinking?
In my experience, not any more than reading a book would. Future AI models might get there, I don’t think their incompetence is a law of nature. But current LLM are particularly harmful for people who are in a dicey psychological situation already.
> It is worse than nothing. A LLM does not understand the situation or what people say to it. It cannot choose to, say, nudge someone in a specific direction, or imagine a way to make things better for someone.
Right, no matter if this is true or not, if the choice is between "Talk to no one, bottle up your feelings" and "Talk to an LLM that doesn't nudge you in a specific direction", I still feel like the better option would be the latter, not the former, considering that it can be a first step, not a 100% health care solution to a complicated psychological problem.
> In my experience, not any more than reading a book would.
But even getting out in the world to buy a book (literally or figuratively) about something - which acknowledges that you have a problem - can be (or at least feel like) a really big step that many are not ready to take. Contrast that with talking to an LLM that won't remember you or judge you.
Edit:
> Most people with real social issues don’t engage in communities, virtual or otherwise.
Not sure why you're focusing on social issues, there are a bunch of things people deal with on a daily basis that they could feel much better about if they even spent the time to think about how they feel about them, instead of the typical reactionary response most people have. Probably every single human out there struggles with something and is unable to open up about their problems with others. Even people like us who interact with communities online and offline.
I think people are getting hung up on comparisons to a human therapist. A better comparison imo is to journaling. It’s something with low cost and low stakes that you can do on your own to help get your thoughts straight.
The benefit from that perspective is not so much in receiving an “answer” or empathy, but in getting thoughts and feelings out of your own head so that you can reflect on them more objectively. The AI is useful here because it requires a lot less activation energy than actual journaling.
> Right, no matter if this is true or not, if the choice is between "Talk to no one, bottle up your feelings" and "Talk to an LLM that doesn't nudge you in a specific direction", I still feel like the better option would be the latter, not the former, considering that it can be a first step, not a 100% health care solution to a complicated psychological problem.
You’re right, I was not clear enough. What would be needed would be a nudge in the right direction. But the LLM is very likely to nudge in another direction, simply because that’s what most people would need or do and that direction was the norm in its training data. It’s ok on average, but particularly harmful to people who are in a situation to have this kind of discussion with a LLM.
Look at the effect of toxic macho influencers for an example of what happens with harmful nudges. These people need help, or at least a role model, but a bad one does not help.
> But to even get out in the world to buy a book (literally or figuratively) about something that acknowledges that you have a problem, can be (at least feel) a really big step that many are not ready to take.
Indeed. It’s something that should be addressed in mainstream education and culture.
> Not sure why you're focusing on social issues,
It’s the crux. If you don’t have problems talking to people, you are much more likely to run into someone who will help you. Social issues are not necessarily the problem, but they are a hurdle in the path to find a solution, and often a limiting one. Besides, if you have friends to talk to and are able to get advice, then a LLM is even less theoretically useful.
> Probably every single human out there struggle with something, and are unable to open up about their problems with others. Even people like us who interact with communities online and offline.
Definitely. It’s not a problem for most people, who can rationalise their problems themselves, either with time or with some help. It gets worse if they can’t for one reason or another, and it gets worse still if they are misled, intentionally or not. LLMs are no help here.
I think you're unreasonably pessimistic in the short term, and unreasonably optimistic in the long term.
People are getting benefit from these conversations. I know people who have uploaded chat exchanges and asked an LLM for help understanding patterns and subtext to get a better idea of what the other person is really saying - maybe more about what they're really like.
Human relationship problems tend to be quite generic and non-unique, so in fact the averageness of LLMs becomes more of a strength than a weakness. It's really very rare for people to have emotional or relationship issues that no one else has experienced before.
The problem is more that if this became common OpenAI could use the tool for mass behaviour modification and manipulation. ChatGPT could easily be given a subtle bias towards some belief system or ideology, and persuaded to subtly attack competing systems.
This could be too subtle to notice, while still having huge behavioural and psychological effects on entire demographics.
We have the media doing this already. Especially social media.
But LLMs can make it far more personal, which means conversations are far more likely to have an effect.