There is no question that ChatGPT and equivalents are not sentient. Of course they aren’t.
The unfortunate realization is that often, each of us fails to embrace our sentience and think critically; instead we keep repeating the stories we were told, acting indistinguishably from any of these large language models.
There’s a need for a reverse Turing test: prove to yourself that you’re actually not acting as a large language model.
I personally believe that non-biological things can be sentient, but I would argue that Large Language Models are not.
The only working example of sentience we have is ourselves, and we function in a completely different way than LLMs. Input/output similarity between us and LLMs is not enough IMO, as you can see in the Chinese Room thought experiment. For us to consider a machine sentient, it needs to function in a similar way to us, or else our definition of sentience gets way too broad to be true.
> For us to consider a machine sentient, it needs to function in a similar way to us, or else our definition of sentience gets way too broad to be true.
Imagine a more technologically advanced alien civilization visiting us, and they notice that our minds don't function in quite the same way as theirs. (E.g. they have a hive mentality. Or they have a less centralized brain system like an octopus. Or whatever.)
What if they concluded "Oh, these beings don't function like us. They do some cool tricks, but obviously they can't be sentient". I hope you see the problem here.
We're going to need a much more precise criterion here than "function in a similar way".
I mean, if we found a planet that was filled with plant life, that doesn't seem to display any level of thought, speech, or emotion, would we consider that sentient? Do we consider trees to be sentient? On some level, self-similarity is the only metric that we have; other things could be sentient but we have very little way of knowing.
My belief is that we need to see similar functionality to be sure of sentience. This may exclude some things that may theoretically be sentient, but I don't think we have a better metric than functionality that doesn't also include a lot of definitely-not-sentient things.
While similarity to humans is a good indicator of consciousness, it's very much not a necessary condition!
We need to err on the side of sentience if we're going to avoid moral catastrophe.
So yes, I do take the possibility of sentient trees pretty seriously. They're much more alive than we realize--e.g. they communicate with one another and coordinate actions via chemical signals. Do they feel pain and pleasure? Who knows. But I'm definitely not going to start peeling the bark off a tree for shits and giggles.
My view on the Chinese Room thought experiment is this: the person in the room does not know Chinese, but the person+room system knows Chinese. I believe the correct analogy is to compare the AI system to the person+room system, not to just the person.
How do you back up the statement that "for us to consider a machine sentient, it needs to function in a similar way to us"? On what basis do you categorically deny the validity of a sentient being which works differently than a human?
Let me provide you with an example of why functionality seems important. Let's say that you perfectly record the neuron activations of a real-life human brain for one second, including all sensory input. Then, you play back those neuron activations in a computer, just by looking up the series of activations from memory, doing no computation. It simply takes in the sensory inputs and provides outputs based on its recording. Is such a program sentient? It produces all the correct outputs for the inputs provided.
If you believe that is sentient, let's take it one step further. What constitutes playback here? The program just copies the data from memory and displays it without modification. If that constitutes playback and therefore sentience, does copying that data from one place to another create sentient life as well? If the data is stored in DRAM, does it get "played back" whenever the DRAM refreshes the electrons in its memory?
There are a lot of various programs that can produce human output without functionality that we would reasonably consider sentient. Perhaps some of these things are sentient, but some of them most likely shouldn't be considered sentient.
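For concreteness, here is a toy sketch of the distinction being drawn above (purely illustrative; the class names and recorded data are hypothetical, not taken from the thread): a "playback" program that replays a recording and ignores its input, versus a program that actually derives its output from the input it receives.

```python
# Toy illustration of the "playback" argument: a recorded sequence of
# (sensory input -> output) is replayed from memory with no computation,
# versus a stand-in "model" that computes a response from its input.

from typing import Dict, List

# Pretend this was captured from one second of a running brain:
# for each timestep, the sensory input seen and the output produced.
RECORDING: List[Dict[str, str]] = [
    {"input": "photon pattern A", "output": "eyes shift left"},
    {"input": "sound pattern B",  "output": "head turns"},
]

class PlaybackBrain:
    """Replays recorded outputs; performs no computation on its input."""
    def __init__(self, recording: List[Dict[str, str]]) -> None:
        self.recording = recording
        self.t = 0

    def step(self, sensory_input: str) -> str:
        # The input only advances the tape: the output is whatever
        # was recorded at this timestep, regardless of what came in.
        out = self.recording[self.t]["output"]
        self.t += 1
        return out

class ComputingBrain:
    """Derives its output from the input it receives."""
    def step(self, sensory_input: str) -> str:
        # Trivial "computation": the response depends on the input,
        # so a novel input yields a novel output.
        return f"orient toward source of {sensory_input}"

playback = PlaybackBrain(RECORDING)
print(playback.step("photon pattern A"))   # matches the recording
print(playback.step("something novel"))    # still replays the tape; input ignored

computing = ComputingBrain()
print(computing.step("something novel"))   # output actually depends on the input
```

The point of the contrast: the playback program reproduces the correct outputs for the recorded inputs, but nothing in it can respond to anything outside the recording.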
If you simply copy the state of a brain along with the sensory information that yields a given set of state transitions, then of course it's not exhibiting sentience. Sentient intelligence is the ability to react appropriately to novel input, which these models absolutely do.
The intelligence, such as it is, resides on the side that processes and understands the human's input, not in the output text that gets all the press coverage. You cannot get results like those discussed here from the operator of a Chinese room.
Now, if you take your deterministic brain-state model and feed it novel input, it will indeed exhibit sentience. To argue otherwise will require religious justification that's off-topic here. Either that, or you'll have to resort to Penrose's notion of brain-as-quantum-computer.
> IMO, as you can see in the Chinese Room thought experiment.
We've already left the Chinese Room a hundred miles behind. How could something like the link in the (current) top post [1] ever have come out of Searle's model?
> as you can see in the Chinese Room thought experiment
In the Chinese Room thought experiment, Searle argues that a functional definition of intelligence is not enough, and he gives an example that might convince some that the "implementation" is important.
However, the neural networks in LLMs are at least superficially similar in function to our brains. They don't function at all like a "Chinese room", where predefined scripts are given to a simple machine. Even if we accept the Chinese Room argument as an objection to the functional intelligence test, given that LLMs work more similarly to brains than to a Chinese room, I don't think you could use the argument as a rebuttal against LLMs being sentient unless you could show that "similarity between us and LLMs is not enough".
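As a rough illustration of the mechanical difference being claimed here (a toy sketch only; this is not how GPT-class models actually work, and the rulebook, vocabulary, and weights are made up): a Chinese-room-style rulebook maps symbols to symbols by pure lookup, while an LLM-style model maps tokens to continuous vectors and transforms them with learned weights.

```python
# Toy contrast between a "Chinese room" lookup of predefined rules and a
# tiny learned-weights forward pass of the kind LLMs scale up enormously.

import numpy as np

# Chinese-room style: a fixed rulebook mapping input symbols to output symbols.
RULEBOOK = {"你好": "你好吗?", "再见": "再见!"}

def chinese_room(symbol: str) -> str:
    # No understanding required: just look the symbol up in the rulebook.
    return RULEBOOK.get(symbol, "???")   # fails on anything not in the book

# LLM style (vastly simplified): tokens become vectors, learned weights
# transform them, and the output is a distribution over the next token.
VOCAB = ["hello", "how", "are", "you", "goodbye"]
rng = np.random.default_rng(0)
EMBED = rng.normal(size=(len(VOCAB), 8))   # token embeddings (would be learned)
W_OUT = rng.normal(size=(8, len(VOCAB)))   # output projection (would be learned)

def tiny_lm(token: str) -> str:
    x = EMBED[VOCAB.index(token)]          # look up a continuous representation
    logits = x @ W_OUT                     # transform it with learned weights
    probs = np.exp(logits) / np.exp(logits).sum()
    return VOCAB[int(np.argmax(probs))]    # predict the next token

print(chinese_room("你好"))   # symbol-to-symbol rule lookup
print(tiny_lm("hello"))       # output computed from continuous representations
```

The sketch only shows the difference in mechanism: one system consults a fixed script, the other computes over distributed representations, which is the (superficial) resemblance to brains the comment is pointing at.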
I also think calling it simply an LLM is incorrect. There is clearly intelligence in these models that has _emerged_, going far beyond “it’s just doing auto-completion”.
I think in general what’s causing so many people to be thrown for a loop is a lack of understanding or consideration of emergent behavior in systems. Break down the individual components of the human body and brain and someone could easily come to the same conclusion that “it’s just X”. Human intelligence and consciousness are emergent as well. AGI will very likely be emergent as well.
I would say instead that there's a level of complexity in these models that blows past our uncanny-valley threshold heuristics for recognizing conscious intent. We've managed to find the edge of the usefulness of that hardwired sense.
I suspect the most challenging distinction for people is going to be this: even if we do grant that an LLM has developed a kind of mind, our interactions with it are not actual communication with that mind. Human words are labels we use to organize sensory data and our experiences. The tokens an LLM works with have no meaning to it.
> The only working example of sentience we have is ourselves, and we function in a completely different way than LLMs
I'm not sure I'm fully convinced of that. I find that the arguments for LLMs potentially gaining sentience are reminiscent of Hofstadter's arguments from Gödel, Escher, Bach: that intelligence and symbolic reasoning are necessarily intertwined.
Why should it have to be able to describe its qualia at all? A dog can't. By the magic of empathy I can _infer_ what a dog is feeling, but it's only through similarity to myself and other humans.
If we met a pre-linguistic alien species, it's likely we wouldn't be able to infer _anything_ about their internal state.
How are you so confident? It's a neural net with like 200 billion connections. I mean I also really doubt it's sentient, but you hear people who are 100% sure and the confidence is baffling.
Why "of course"? I don't think it is yet but it's clearly gone past the point of "of course".
You can't dismiss it as "just autocomplete" or "just software" like this author does, as if there's some special sauce that we know only animals have that allows us to be sentient.
In all likelihood sentience is an emergent property of sufficiently complex neural networks. It's also clearly a continuum.
Given that, and the fact that we don't even know what sentience is or what creates it, you'd have to be a total idiot to say that ChatGPT definitely is 0% sentient.