>I could walk into a club in Vegas, throw down $10,000 cash for a VIP table, and start throwing around $100 bills.
If you can withdraw $10,000 in cash at all to spend as you please (including on this 'trick' game), then, my friend, you are wealthy from the perspective of the vast majority of humans on the planet.
And if you balk at doing this, maybe because you cannot actually withdraw that much, or maybe because it is badly needed for something else, then you are not actually capable of performing the test now, are you?
That's really not true. Lots of people in America with $0 net worth can get a credit card, use it to buy some jewelry, sell the jewelry, and end up with $10k in cash. The fact that the trick only works once proves that it's a trick.
You're not making much sense. Like the other user, you are hinging on non-transferable details of your analogy, which is not the actual reality of the situation.
You've invented a story where the user can pass the test by only doing this once and hinged your point on that, but that's just that: a story.
All of our tests and benchmarks account for repeatability. The machine in question has no problem replicating its results on whatever test, so it's a moot point.
The LLM can replicate the trick of fooling users into thinking it's conscious as long as there is a sufficient supply of money to keep the LLM running and a sufficient number of new users who don't know the trick. If you don't account for either of those resources running out, you're not testing whether its feats are truly repeatable.
>The LLM can replicate the trick of fooling users into thinking it's conscious as long as there is a sufficient supply of money to keep the LLM running and a sufficient number of new users who don't know the trick.
Okay? And you, presumably a human, can replicate the trick of fooling me into thinking you're conscious as long as there is a sufficient supply of food to keep you running. So what's your point? With each comment, you make less sense. Sorry to tell you, but there is no trick.
The difference is that the human can and did find their own food for literally ages. That's already a very, very important difference. And while we cannot really define what's conscious, it's a bit easier (still with some edge cases) to define what is alive. And probably what is alive has some degree of consciousness.
An LLM definitely does not.
One of the "barriers" to me is that (AFAIK) an LLM/agent/whatever doesn't operate without you hitting the equivalent of an on switch.
It does not think idle thoughts while it's not being asked questions. It's not ruminating over its past responses after having replied. It's just off until the next prompt.
Side note: whatever future we get where LLMs get their own food is probably not one I want a part of. I've seen the movies.
You only exist because something external birthed you? Everything has a beginning.
In fact, what is artificial is stopping the generation of an LLM when it reaches a 'stop token'.
A more natural barrier is the context window size, but at 2 million tokens, LLMs can think for a long time without losing any context. And you can extend that with memory tools for longer-horizon tasks.
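To make the "on switch" and stop-token point concrete: generation is just a loop that halts when it samples a designated stop token, and nothing runs afterward until the next prompt. A toy sketch, where `toy_next_token` is a stand-in for a real model's sampler (not any real API):

```python
STOP_TOKEN = "<eos>"       # hypothetical stop token
MAX_CONTEXT = 2_000_000    # e.g. a 2M-token context window

def toy_next_token(context):
    # Pretend model: emits "word" three times, then the stop token.
    return "word" if context.count("word") < 3 else STOP_TOKEN

def generate(next_token, prompt):
    context = list(prompt)
    while len(context) < MAX_CONTEXT:
        tok = next_token(context)
        if tok == STOP_TOKEN:  # the "artificial" halt the comment mentions
            break
        context.append(tok)
    return context  # after this returns, the model is simply off

print(generate(toy_next_token, ["hello"]))
```

The halt condition lives entirely in the loop, not in the model: remove the stop-token check and the only remaining barrier is the context limit, which is the point being made above.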
>All of our tests and benchmarks account for repeatability.
What does repeatability have to do with intelligence? If I ask a 6-year-old "Is 1+1=2?", I don't change my estimation of their intelligence the 400th time they answer correctly.
>The machine in question has no problem replicating its results on whatever test
What machine is that? All the LLMs I have tried produce neat results on very narrow topics but fail on consistency and generality. Which seems like something you would want in a general intelligence.
>What does repeatability have to do with intelligence? If I ask a 6 year old "Is 1+1=2" I don't change my estimation of their intelligence the 400th time they answer correctly.
If your 6-year-old can only answer correctly a few times out of that 400 and you don't change your estimation of their understanding of arithmetic, then I sure hope you are not a teacher.
>What machine is that? All the LLMs I have tried produce neat results on very narrow topics but fail on consistency and generality. Which seems like something you would want in a general intelligence.
No LLM will score 80% on benchmark x today and then 50% on the same benchmark two days later. That doesn't happen, so the convoluted setup OP had is meaningless. LLMs do not 'fail' on consistency or generality.
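The repeatability claim can be sketched: with deterministic (temperature-0) decoding, the same prompts produce the same answers, so the benchmark score is identical across runs. A toy sketch, where `answer` stands in for a real model call (the question set and answer key are made up for illustration):

```python
def answer(question):
    # Deterministic stand-in for a temperature-0 model; note it gets
    # "3+3" wrong on purpose, and wrong the same way every time.
    return {"1+1": "2", "2+2": "4", "3+3": "7"}[question]

def score(questions, key):
    correct = sum(answer(q) == key[q] for q in questions)
    return correct / len(questions)

key = {"1+1": "2", "2+2": "4", "3+3": "6"}
runs = [score(list(key), key) for _ in range(3)]
print(runs)  # the same score on every run
```

The score is stable run to run; whether that stability says anything about intelligence is exactly what the two sides above disagree on.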
>Couldn’t someone else just give him a bunch of cash to blow on the test, to spoil the result?
If you still need a rich person to pass the test, then the test is working as intended. Whether person A is rich or person A is backed by a rich sponsor is not a material difference for the test. You are hinging too much on minute details of the analogy.
In the real world, your riches can be sponsored by someone else, but for whatever intelligence task we envision, if the machine is taking it, then the machine is taking it.
>Couldn’t he give away his last dollar but pretend he’s just going to another casino?
Again, if you have $10,000 that you can withdraw today and give away, last dollar or not, the vast majority of people on this planet would call you wealthy. You have to understand that this is just not something most humans can actually do, even on their deathbed.
>> Again, if you have $10,000 you can just withdraw today and give away, last dollar or not, the vast majority of people on this planet would call you wealthy. You have to understand that this is just not something most humans can actually do, even on their deathbed.
So, most people can't get $1 Trillion to build a machine that fools people into thinking it's intelligent. That's probably also not a trick that will ever be repeated.