Totally. I think I just feel like that similarity in behavior could encourage us to forgive its flaws as much as we forgive humans their flaws. In aggregate we clearly still produce value and GPT or similar probably does as well.
I can’t trust GPT, but I can’t trust my uncle or my in-laws or the media either. I know that’s not exactly precise or “correct,” but I think that’s where we’re headed with AI: rich experiences where you take what you want and leave what you don’t, just like with other beings and other creations.
How is there nothing similar about the behavior? The whole premise of this thread is that there are similarities. If you wanna get off that train now, then peace.
> So? I don't ask my uncle for legal advice, and he isn't owned by a company that's offering his legal advice as a service.
So? You think there aren't plenty of human lawyers who offer questionable/flawed legal advice? I'm not saying it's not worthy of criticism for specific use-cases or output quality, but that's not really what this thread is about.
When building an email service we can expect godlike perfection. When building an AI, we cannot expect godlike perfection. What's interesting is the AI approaching behavior akin to living beings, whether that's animal, toddler, mentally disabled, or adult level intelligence/behavior. And it seems like we're headed in that direction at a rapid clip. Remember humans also confabulate: confidently fabricate memories and explanations post-hoc.
Also remember that some great minds entertained the "Language of Thought hypothesis" long before computers, which takes language as the building blocks of thought. So is it really surprising that people draw parallels between human behavior and a machine that uses language as its building blocks of behavior?
>So? You think there aren't plenty of human lawyers who offer questionable/flawed legal advice? I'm not saying it's not worthy of criticism for specific use-cases or output quality, but that's not really what this thread is about.
They can be disbarred. Your AI can't.
>When building an email service we can expect godlike perfection.
That's certainly not a standard I've been advocating for.
> What's interesting is the AI approaching behavior akin to living beings, whether that's animal, toddler, mentally disabled, or adult level intelligence/behavior.
You are just anthropomorphizing.
> Remember humans also confabulate: confidently fabricate memories and explanations post-hoc.
What does that have to do with ChatGPT at all? It's a post-hoc rationalization of ChatGPT's own lack of explanation. Just because it's not clear how humans think doesn't mean the same thing is happening in ChatGPT, merely because we aren't clear on that either.
>Also remember that some great minds entertained the "Language of Thought hypothesis" long before computers, which takes language as the building blocks of thought. So is it really surprising that people draw parallels between human behavior and a machine that uses language as its building blocks of behavior?
Yeah, because ChatGPT doesn't exhibit human behaviors, at all.