If a probabilistic process is able to 'pass a Turing test', the only thing that really says is that the Turing test wasn't accomplishing the stated goal, so we need to come up with a better test.
Sidenote, but the idea that it's "mimicking human thought" is wrong in the first place, and an anthropomorphism that comes from all the marketing calling it "ai" instead of "generative language models". I don't know how any humans decide what to say given the array of sensory inputs they have, so the only way we can compare them is "how a person intuits they probably work based on interacting with them on a surface level". That's not really a criterion anybody should care about when we actually have the implementation details for one of them.
The whole point of the Turing test is that if you can’t tell the difference then something interesting has happened. You can say it’s “just X” or “just Y”, but I think you’re being overly dismissive of the accomplishment here. If we’re at a point where we say “sure, it passes the Turing test” and then move the goalposts, that’s pretty exciting!
Yeah, it's exciting that goalposts are being moved, but I think it's fine to call it "just x" when the collective understanding of it outpaces the actual technology.
It's a big accomplishment to move these goalposts, but it's not much closer to "ai" than it was before the goalpost was moved. "Just because" we're better at making an algorithm that can guess the next line of English text given the preceding ones, does that justify companies branding their products as "artificial intelligence" and tricking someone less knowledgeable about the implementation into believing it's more than it is?
> If a probabilistic process is able to ‘pass a Turing test’, the only thing that really says is that the Turing test wasn’t accomplishing the stated goal
I suspect that if no probabilistic process could pass a Turing test, it would not be accomplishing its stated goal either.
People want to separate the “magic” of human cognition from other processes, but fail to consider the likelihood that no such magic exists.
If no such "magic" exists, then why are we content to say that the Turing test is a good test of its existence? This is how I see it:
1. We built a test that we hoped would tell us whether or not an "artificial intelligence" could be told apart from a "real person" by a "real person". Because of technical limitations, we decided that the best way for the person to interact with both, in order to reduce bias, would be purely through text.
2. We built an algorithm that compresses the known body of human text into statistical probabilities so that, given some input text, it can produce output text that sounds like what a person would say (a rough sketch of the idea follows this list).
3. This algorithm, designed to produce likely person-like text given text input, beats the Turing test.
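To make (2) concrete, here's a deliberately tiny sketch of the idea in Python. It's just a bigram model over a toy corpus, so the corpus, the function names, and the sampling scheme are all my own illustrative choices and not how any real language model is implemented, but the shape of the process (turn observed text into statistics, then sample likely continuations) is the same:

    import random
    from collections import defaultdict

    # Toy corpus standing in for "the known body of human text".
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # "Compress" the text into statistical probabilities: count which word
    # tends to follow which (a bigram model, the simplest possible case).
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        """Sample the next word in proportion to how often it followed `prev`."""
        candidates = counts[prev]
        words = list(candidates)
        weights = [candidates[w] for w in words]
        return random.choices(words, weights=weights)[0]

    def generate(prompt, length=8):
        """Given some input text, produce output text that 'sounds like' the corpus."""
        out = prompt.split()
        for _ in range(length):
            out.append(next_word(out[-1]))
        return " ".join(out)

    print(generate("the"))

The output is text that resembles the training data without the program "understanding" anything; scale the corpus and the model up by many orders of magnitude and you get something whose surface behavior is much harder to tell apart from a person's.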
I see an accomplishment here, but I think it's very different from the one some people on HN, and most people outside it, would see.
The accomplishment I see is that we built a text prediction engine that is good enough to seem human.
The accomplishment I think others see is that we've built an "ai" that beats the "tests to prove there's nothing 'special' about human intelligence"
Why would we be content to say that this algorithm beating the Turing test gives us any indication of whether the human "magic" exists?