
There is no question that ChatGPT and equivalents are not sentient. Of course they aren’t.

The unfortunate realization is that often, each of us fails to embrace our sentience and think critically; instead we keep repeating the stories we were told, acting indistinguishably from any of these large language models.

There’s a need for a reverse Turing test: prove to yourself that you’re actually not acting as a large language model.


Curious--what would convince you that something non-biological were sentient? I have a hard time answering this question myself.


I personally believe that non-biological things can be sentient, but I would argue that Large Language Models are not.

The only working example of sentience we have is ourselves, and we function in a completely different way than LLMs. Input/output similarity between us and LLMs is not enough IMO, as you can see in the Chinese Room thought experiment. For us to consider a machine sentient, it needs to function in a similar way to us, or else our definition of sentience gets way too broad to be true.


> For us to consider a machine sentient, it needs to function in a similar way to us, or else our definition of sentience gets way too broad to be true.

Imagine a more technologically advanced alien civilization visiting us. And they notice that our minds don't function quite in the same way as theirs. (E.g. they have a hive mentality. Or they have a less centralized brain system like an octopus. Or whatever.)

What if they concluded "Oh, these beings don't function like us. They do some cool tricks, but obviously they can't be sentient". I hope you see the problem here.

We're going to need a much more precise criterion here than "function in a similar way".


I mean, if we found a planet that was filled with plant life, that doesn't seem to display any level of thought, speech, or emotion, would we consider that sentient? Do we consider trees to be sentient? On some level, self-similarity is the only metric that we have; other things could be sentient but we have very little way of knowing.

My belief is that we need to see similar functionality to be sure of sentience. This may exclude some things that may theoretically be sentient, but I don't think we have a better metric than functionality that doesn't also include a lot of definitely-not-sentient things.


While similarity to humans is a good indicator of consciousness, it's very much not a necessary condition!

We need to err on the side of sentience if we're going to avoid moral catastrophe.

So yes, I do take the possibility of sentient trees pretty seriously. They're much more alive than we realize--e.g. they communicate with one another and coordinate actions via chemical signals. Do they feel pain and pleasure? Who knows. But I'm definitely not going to start peeling the bark off a tree for shits and giggles.


My thoughts on the Chinese Room thought experiment are: the person in the room does not know Chinese, but the person+room system knows Chinese. I believe the correct analogy is to compare the AI system to the person+room system, not to just the person.

How do you back up the statement that "for us to consider a machine sentient, it needs to function in a similar way to us"? On what basis do you categorically deny the validity of a sentient being which works differently than a human?


Let me provide you with an example of why functionality seems important. Let's say that you perfectly record the neuron activations of a real-life human brain for one second, including all sensory input. Then, you play back those neuron activations in a computer, just by looking up the series of activations from memory, doing no computation. It simply takes in the sensory inputs and provides outputs based on its recording. Is such a program sentient? It produces all the correct outputs for the inputs provided.
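
To make the setup concrete, here is a minimal sketch of such a playback machine (a hypothetical data layout, not anyone's actual experiment): it does no computation at all, it only replays the recorded activations tick by tick.

    class PlaybackBrain:
        def __init__(self, recorded_inputs, recorded_activations):
            # recorded_inputs[i] is the sensory input at tick i;
            # recorded_activations[i] is every neuron's activation at tick i.
            self.recorded_inputs = recorded_inputs
            self.recorded_activations = recorded_activations
            self.tick = 0

        def step(self, sensory_input):
            # No dynamics are simulated: we only check that the input matches
            # the recording and replay the stored activations verbatim.
            assert sensory_input == self.recorded_inputs[self.tick]
            activations = self.recorded_activations[self.tick]
            self.tick += 1
            return activations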

If you believe that is sentient, let's take it one step further. What constitutes playback here? The program just copies the data from memory and displays it without modification. If that constitutes playback and therefore sentience, does copying that data from one place to another create sentient life as well? If the data is stored in DRAM, does it get "played back" whenever the DRAM refreshes the electrons in its memory?

There are many programs that can produce human-like output without any underlying functionality we would reasonably consider sentient. Perhaps some of these things are sentient, but most of them probably shouldn't be considered sentient.


If you simply copy the state of a brain along with the sensory information that yields a given set of state transitions, then of course, it's not exhibiting sentience. Sentient intelligence is the ability to react appropriately to novel input, which these models absolutely do.

The intelligence, such as it is, resides on the side that processes and understands the human's input, not in the output text that gets all the press coverage. You cannot get results like those discussed here from the operator of a Chinese room.

Now, if you take your deterministic brain-state model and feed it novel input, it will indeed exhibit sentience. To argue otherwise will require religious justification that's off-topic here. Either that, or you'll have to resort to Penrose's notion of brain-as-quantum-computer.


> IMO, as you can see in the Chinese Room thought experiment.

We've already left the Chinese Room a hundred miles behind. How could something like the link in the (current) top post [1] ever have come out of Searle's model?

1: https://www.reddit.com/r/ChatGPT/comments/110vv25/bing_chat_...


> as you can see in the Chinese Room thought experiment

The Chinese Room is a thought experiment in which Searle argues that a functional definition of intelligence is not enough, and he gives an example that might convince some that the "implementation" is important.

However, the neural networks in LLMs function at least superficially like our brains. They don't function at all like a "Chinese room", where predefined scripts are handed to a simple rule-following machine. Even if we accept the Chinese Room argument as an objection to a purely functional intelligence test, given that LLMs work more like brains than like a Chinese room, I don't think you could use the argument as a rebuttal against LLMs being sentient unless you could show why that similarity is still not enough.


The Chinese Room, as an argument that computers can't think, has been thoroughly rebutted. Read the summary on Wikipedia.


There are many replies and replies-to-those-replies listed, but nothing I would call “thoroughly rebutted”.

I’m particularly unimpressed by the amount of hand-waving packed into replies that want us to assume a “simulated neuron”.


I think calling it simply an LLM is misleading, too. There is clearly intelligence in these models that has _emerged_ and that goes far beyond "it's just doing auto-completion".

I think in general what's causing so many people to be thrown for a loop is a lack of understanding or consideration of emergent behavior in systems. Break down the individual components of the human body and brain and someone could easily come to the same conclusion that "it's just X". Human intelligence and consciousness are emergent as well. AGI will very likely be emergent as well.


> There clearly is intelligence in these models

I would say instead that there's a level of complexity in these models that blows past our uncanny-valley threshold heuristics for recognizing conscious intent. We've managed to find the edge of the usefulness of that hardwired sense.

I suspect the most challenging distinction for people is going to be: even if we do grant that an LLM has developed a kind of mind, our interactions with it are not actual communication with that mind. Human words are labels we use to organize sensory data and our experiences. The tokens an LLM works with have no meaning to it.


> The only working example of sentience we have is ourselves, and we function in a completely different way than LLMs

I'm not sure I'm fully convinced of that. I find that the arguments for LLMs potentially gaining sentience are reminiscent of Hofstadter's arguments from Goedel, Escher, Bach, that intelligence and symbolic reasoning are necessarily intertwined.


At the very least it needs to be able to describe its qualia without me having to prompt for it.


Why should it have to be able to describe its qualia at all? A dog can't. By the magic of empathy I can _infer_ what a dog is feeling, but it's only through similarity to myself and other humans.

If we met a pre-linguistic alien species, it's likely we wouldn't be able to infer _anything_ about their internal state.


By that logic how do you know rocks aren't sentient?


That's the thing! You don't!

Conversely, there's no way for me to know that you _are_ sentient (a la Solipsism).


How are you so confident? It's a neural net with like 200 billion connections. I mean I also really doubt it's sentient, but you hear people who are 100% sure and the confidence is baffling.


Why "of course"? I don't think it is yet but it's clearly gone past the point of "of course".

You can't dismiss it as "just autocomplete" or "just software" like this author does, as if there's some special sauce that we know only animals have that allows us to be sentient.

In all likelihood sentience is an emergent property of sufficiently complex neural networks. It's also clearly a continuum.

Given that, and the fact that we don't even know what sentience is or what creates it, you'd have to be a total idiot to say that ChatGPT definitely is 0% sentient.


On the topic of a city divided in two, I highly recommend the novel 'The City & The City' by China Mieville. It's an excellent detective thriller that reminds you that the strongest and tallest walls are always in our minds.


Hey there! This article https://news.ycombinator.com/item?id=27696369 got me excited about the current state of generative art using neural networks, so I created this toy using some of those techniques. The site shows you generated movie posters for famous movies, and gives you a chance to guess what they are before revealing the answer. Even though the mechanic is very simple, I found it quite entertaining, and most of the generated posters are detailed and interesting. I hope you enjoy it!


Liked it very much. Some of them pretty obvious, some of them not.


This gave me an uncanny feeling... quite interesting.


If the camera had a fisheye lens, it would distort the parts of the craft that are in its field of view, but those appear without fisheye distortion.

Unless I'm missing something else, the curvature you see is likely due to the spherical nature of planets.
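
For intuition, here is a rough numerical sketch, assuming an ideal equidistant fisheye (r = f * theta) versus a distortion-free rectilinear lens (r = f * tan(theta)); the growing gap toward the edge of the frame is the barrel distortion that would visibly bend any spacecraft structure in view.

    import math

    f = 1.0  # focal length, arbitrary units

    for deg in (10, 30, 60, 80):
        theta = math.radians(deg)
        r_fisheye = f * theta                # equidistant fisheye projection
        r_rectilinear = f * math.tan(theta)  # pinhole / rectilinear projection
        print(f"{deg:2d} deg: fisheye r = {r_fisheye:.3f}, rectilinear r = {r_rectilinear:.3f}")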


They just switched cameras, and this one doesn't show parts of the craft, so my comment above won't make much sense. But the curvature I see with this camera is similar to the one I saw with the other camera, which does show parts of the craft.


Tesla | Palo Alto, CA | Full Time - Onsite

We're hiring Engineers with solid Computer Architecture fundamentals, who are comfortable working at the lowest levels of Linux or other embedded operating systems, as well as happily venturing into userland and application code. Our day-to-day includes working with C, C++, Linux, DSPs, gstreamer, BlueZ, ARM SoCs, LTE modems, security, bash, yocto, and more.

If you're interested in making a dent in the world, and the above sounds like you, reach out to me at sbrugada (at) teslamotors.com.


At Tesla we have lots of embedded software opportunities, ranging from code that runs in very tiny microcontrollers to beefy ARM host processors. If you, or anyone else in this thread, are interested, contact me on email. My inbox at Tesla is sbrugada.


This will work really well. Some leaves use this same mechanism to 'self-clean', which means that now we will have the ability to add the 'lotus effect' to man-made objects.

https://www.youtube.com/watch?v=VHcd_4ftsNY


New York as a microchip layout was the main theme of the intro to 'Hackers'. Check it out at around 0:40:

https://www.youtube.com/watch?v=9c4KG_8iTZM


Wow perfect reference.

Btw just in case you didn't know, you can link to specific times in youtube videos, e.g. https://www.youtube.com/watch?v=9c4KG_8iTZM&t=55s


That intro was responsible for introducing me to Orbital as well: http://en.wikipedia.org/wiki/Orbital_(band)


Koyaanisqatsi and Tron both did it in '82. I'm sure there are probably other, earlier examples, too.


Yeah, it's a metaphor that gets explored a bit from time to time.

https://www.youtube.com/watch?v=eJuy7LTBJk0

https://www.youtube.com/watch?v=UDB9g7l5srA


Because Europe didn't know about America until 1492.


Or, more accurately, as that Wikipedia page mentions, March 1493, when Columbus got back.


With more research along these lines, we may have a day when a programming job interview will consist of interacting with some codebase for 10 minutes, and the model will spit out our normalized scores on syntax comprehension, algorithmic thinking, library familiarity, etc.
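
A toy sketch of what "normalized scores" could look like, assuming (purely hypothetically) that the model produces raw per-dimension measurements and we z-score them against a reference population; the dimension names and numbers below are made up for illustration.

    population_stats = {  # dimension -> (mean, standard deviation)
        "syntax_comprehension": (70.0, 10.0),
        "algorithmic_thinking": (55.0, 15.0),
        "library_familiarity":  (60.0, 12.0),
    }

    candidate_raw = {
        "syntax_comprehension": 82.0,
        "algorithmic_thinking": 49.0,
        "library_familiarity":  75.0,
    }

    normalized = {
        dim: (candidate_raw[dim] - mean) / std
        for dim, (mean, std) in population_stats.items()
    }
    print(normalized)  # e.g. syntax_comprehension -> +1.2 standard deviations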


I wonder whether, by the time we get that far, the computer will be able to write the code for us anyway and programmers will be redundant. It's just a matter of when, not if. Understanding human cognition well enough to evaluate it is a good indicator that the singularity is near.


That was the hype surrounding Fortran when it came out in 1957: it would eliminate the need for programmers, because you could just punch in the formulas and it would translate them into a program automatically. And in a sense it did, and its successors did to an even greater extent. Today you have "nonprogrammers" who can build, in a day, in OpenOffice Calc, computations that would have taken weeks to get written by programmers in the 1950s (when they could have been done at all).

It's partly a matter of giving the objectives to the computer at progressively higher levels, and partly a matter of improving user interfaces so that you can tell what the thing is doing and what it's going to do.


Computers already write code for us; they just need to be told what we want them to write. The question is whether we can design a programming environment where the computer can understand 'natural' human specifications and compile them, instead of needing a trained human to compile the natural specifications into source code, which the computer can then compile into a program.


A large chunk of the job of developing software is getting the stakeholders to understand the problem they want the software to solve. The rest of it is just typing, which is the trivial part.


Yeah, right. And all the technical books on design patterns, functional programming, algorithms, etc. are out there just to teach programmers how to type faster ...


Sure we could. This is just like Siri: you tell it what you want, she asks for clarifications, and you keep the conversation with the computer going until... bam, you get what you want.

I would think we could do much of this already if we tried, but there are a lot of things on my list to do before I get to that.


Your optimism on this seems unfounded in my opinion.

When there are no domain restrictions, computers have not proven very capable of comprehension. Look even at the comparatively simple domain of handwriting recognition.

More importantly, you entirely miss the problem that many times the human doesn't know exactly what he wants until he sees it.

As a software engineer, I have no fear my job will be replaced by computers talking to product managers.


That's why the program should be expressed as a conversation! We don't know what we want so we should start vague, get feedback, provide clarifications, and so on. The conversation with the computer must be a two-way thing!
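
A minimal sketch of that two-way loop (the synthesize and render_preview helpers below are hypothetical stand-ins, stubbed out so the loop itself runs):

    def synthesize(spec_fragments):
        # Stand-in "code generator": just echoes the accumulated spec.
        return "program built from: " + "; ".join(spec_fragments)

    def render_preview(program):
        # Stand-in preview: show the user what the system currently believes.
        print("Current result:", program)

    def refine_interactively():
        spec_fragments = []
        while True:
            request = input("Describe (or refine) what you want, or type 'done': ")
            if request.strip().lower() == "done":
                break
            spec_fragments.append(request)
            render_preview(synthesize(spec_fragments))
        return synthesize(spec_fragments)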


That's brilliant. Not in terms of some AI working out what a silver-haired old grandmother meant, but in terms of talking to clients: they could imagine they're talking to a computer, and I'm just the one providing that feedback.

Daily or weekly, the feedback would be nice and quick.


The thing is, we probably don't need hard AI yet to do some of this. Yes, it must be a dialogue system, but we have those today. We already saw movement in this direction with Hugo Liu's ConceptNet work, but for some reason no one has followed up yet. We are getting to the point with speech recognition/understanding technology that someone is bound to try again soon.


In general, we don't need Hard AI for much of anything, and usually don't want it. Any specific problem you want solved will require some specific solution method or algorithm rather than the whole "make a computer hold an adequate water-cooler conversation" thing.


I think it's less worthwhile to argue this than to try to build it.


That's kind of how TeX and METAFONT have worked since the 1970s. But maybe you have something else in mind.


The problem is that most people are simply incapable of giving correct specifications.

In a lecture about this topic (how to create specifications that can be turned into formally correct code) we were given the following simple example of an incorrect specification:

"Everybody loves my baby. But baby loves nobody, but me."

If you formalize this, you can simply conclude that the person saying it is equal to the baby, which is clearly not what you intended.
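
One possible first-order formalization (just one reading of the two sentences) makes the collapse explicit:

    \forall x.\ \mathrm{Loves}(x, \mathrm{baby})
        % "Everybody loves my baby."
    \forall x.\ \mathrm{Loves}(\mathrm{baby}, x) \rightarrow x = \mathrm{me}
        % "But baby loves nobody but me."
    % Instantiate the first with x = baby:  Loves(baby, baby)
    % Apply the second to that:             baby = me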

And this is a very, very simple specification. Specifications of real software are magnitudes more complicated.


That is why the computer must hold a two-way conversation with the programmer. How often do we just type out a program from our head into the computer anyway? Most of us write the program, debug it, change it because the result wasn't right, or because we didn't really understand what we wanted, or because someone else decided the requirements had changed, or whatever...


These predictions always fail to account for C++.

That is, having computers write code depends on the code being in a form about which computers can formally reason. If you can't prove anything about your code because the language is that bad, good luck getting a computer to write code.

/explain_the_joke.jpg


Has there been any work on this? It seems relatively straightforward to score a git branch over certain time scales.

EDIT: should have RTFA before commenting; the gaze-tracking stuff is very cool.


It does sound representative of the bulk of the daily work.

