Hacker News

> You can ask it how it came up with its answer and it will do its best to give you an explanation.

Will it? Or is it just going to start another chain of words it's trying to complete, without any regard for its previous statements? My guess is that it's doing what I described and isn't doing what you described (because it can't).



Also not that far off from how humans behave sometimes. Reminds me of split-brain studies. IIRC, they got split-brain patients to confidently explain reasons for fictional past behavior.


It might coincidentally be how a human brain behaves, but I made this same point upthread. It's misplaced to think that, just because we aren't sure exactly what happens when a human thinks, thinking must be what ChatGPT does. One has nothing to do with the other.


Totally. I just feel like that similarity in behavior could encourage us to forgive its flaws as much as we forgive humans theirs. In aggregate we clearly still produce value, and GPT or similar probably does as well.

I can’t trust GPT but I can’t trust my uncle or my in-laws or the media either. I know that’s not exactly precise or “correct” but I think that’s where we’re headed with AI, rich experiences where you take what you want and leave what you don’t just like with other beings and other creations.


But there is nothing similar about the behavior. You are jumping to a conclusion based on an absence of evidence.

>I can’t trust GPT but I can’t trust my uncle or my in-laws or the media either.

So? I don't ask my uncle for legal advice, and he isn't owned by a company that offers him up for legal advice.


How is there nothing similar about the behavior? The whole premise of this thread is that there are similarities. If you wanna get off that train now, then peace.

> So? I don't ask my uncle for legal advice, and he isn't owned by a company that offers him up for legal advice.

So? You think there aren't plenty of human lawyers who offer questionable/flawed legal advice? I'm not saying it's not worthy of criticism for specific use-cases or output quality, but that's not really what this thread is about.

When building an email service we can expect godlike perfection. When building an AI, we cannot expect godlike perfection. What's interesting is the AI approaching behavior akin to living beings, whether that's animal, toddler, mentally disabled, or adult level intelligence/behavior. And it seems like we're headed in that direction at a rapid clip. Remember humans also confabulate: confidently fabricate memories and explanations post-hoc.

Also remember that some great minds have entertained the "Language of Thought hypothesis", long before computers, which takes language as the building blocks of thought. So is it really that surprising that people are drawing parallels between human behavior and a machine that uses language as the building blocks of its behavior?


>So? You think there aren't plenty of human lawyers who offer questionable/flawed legal advice? I'm not saying it's not worthy of criticism for specific use-cases or output quality, but that's not really what this thread is about.

They can be disbarred. Your AI can't.

>When building an email service we can expect godlike perfection.

That's certainly not a standard I've been advocating for.

> What's interesting is the AI approaching behavior akin to living beings, whether that's animal, toddler, mentally disabled, or adult level intelligence/behavior.

You are just anthropomorphizing.

> Remember humans also confabulate: confidently fabricate memories and explanations post-hoc.

What does that have to do with ChatGPT at all? It's a post-hoc rationalization of ChatGPT's own lack of explanation. Just because it's not clear how humans think doesn't mean the same thing is happening in ChatGPT, given that we aren't clear on that either.

>Also remember that some great minds have entertained the "Language of Thought hypothesis", long before computers, which takes language as the building blocks of thought. So is it really that surprising that people are drawing parallels between human behavior and a machine that uses language as the building blocks of its behavior?

Yeah, because ChatGPT doesn't exhibit human behaviors at all.


> They can be disbarred. Your AI can't.

Why not? That seems like a failure of imagination. There's all sorts of regulation that could be brought to bear.

> Yeah, because ChatGPT doesn't exhibit human behaviors, at all.

You can think so, but others disagree. Here is where the thread started.

> The more I work with LLMs, the more I think of them as plagiarization engines.

> We're all kind of plagiarization engines.

If you don't see any similarities others see, then further discussion is fruitless.

> ChatGPT doesn't exhibit human behaviors, at all.

Chat is a behavior humans do. It chats, and pretty damn well. Clearly you mean something else than what you are explicitly saying.



