
That's really not true. Lots of people in America with $0 net worth can get a credit card, use it to buy some jewelry and then sell it, and end up with $10k in cash. The fact that the trick only works once proves that it's a trick.




You're not making much sense. Like the other user, you are hinging your argument on non-transferable details of your analogy, details that don't reflect the actual situation.

You've invented a story where the user can pass the test by doing this only once and hinged your point on it, but that's just that - a story.

All of our tests and benchmarks account for repeatability. The machine in question has no problem replicating its results on whatever test, so it's a moot point.


The LLM can replicate the trick of fooling users into thinking it's conscious as long as there is a sufficient supply of money to keep the LLM running and a sufficient number of new users who don't know the trick. If you don't account for either of those resources running out, you're not testing whether its feats are truly repeatable.

>The LLM can replicate the trick of fooling users into thinking it's conscious as long as there is a sufficient supply of money to keep the LLM running and a sufficient number of new users who don't know the trick.

Okay? And you, presumably a human, can replicate the trick of fooling me into thinking you're conscious as long as there is a sufficient supply of food to keep you running. So what's your point? With each comment, you make less sense. Sorry to tell you, but there is no trick.


The difference is that humans could and did find their own food for literally ages. That's already a very, very important difference. And while we cannot really define what is conscious, it's a bit easier (still with some edge cases) to define what is alive. And probably what is alive has some degree of consciousness. An LLM definitely does not.

One of the "barriers" to me is that (AFAIK) an LLM/agent/whatever doesn't operate without you hitting the equivalent of an on switch.

It does not think idle thoughts while it's not being asked questions. It's not ruminating over its past responses after having replied. It's just off until the next prompt.

Side note: whatever future we get where LLMs get their own food is probably not one I want a part of. I've seen the movies.


This barrier is trivial to solve even today. It is not hard to put an LLM in an infinite loop of self-prompting.
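
For what it's worth, a minimal sketch of such a loop in Python, assuming a hypothetical complete() callable that wraps whatever model API you're using:

    from typing import Callable

    def self_prompt_loop(complete: Callable[[str], str], seed: str, steps: int = 10) -> list[str]:
        # Feed each reply back in as the next prompt. `complete` is a
        # hypothetical wrapper around any chat/completions API.
        thoughts = []
        prompt = seed
        for _ in range(steps):  # make this `while True` for an open-ended loop
            reply = complete(prompt)
            thoughts.append(reply)
            prompt = "Your previous thought was:\n" + reply + "\n\nContinue from there."
        return thoughts

The only external input is the seed prompt; everything after that is the model prompting itself.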

A self-prompting loop still seems artificial to me. It only exists because you force it to externally.

You, too, only exist because you were forced into being externally? Everything has a beginning.

In fact, what is artificial is stopping the generation of an LLM when it reaches a 'stop token'.

A more natural barrier is the size of the context window, but at 2 million tokens an LLM can think for a long time without losing any context. And you can take over with memory tools for longer-horizon tasks.
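
A rough sketch of that kind of hand-off to memory, again with hypothetical complete() and summarize() wrappers since the exact tooling isn't specified:

    from typing import Callable

    def loop_with_memory(complete: Callable[[str], str],
                         summarize: Callable[[str], str],
                         seed: str,
                         steps: int = 100,
                         max_chars: int = 200_000) -> str:
        # When the accumulated context grows past a rough size limit,
        # compress it into a summary and keep going, so the task horizon
        # is no longer bounded by the context window itself.
        context = seed
        for _ in range(steps):
            reply = complete(context)
            context = context + "\n" + reply
            if len(context) > max_chars:  # crude character-count proxy for the token limit
                context = summarize(context)
        return context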


Good points. :) Thank you.

>All of our tests and benchmarks account for repeatability.

What does repeatability have to do with intelligence? If I ask a 6-year-old "Is 1+1=2?", I don't change my estimation of their intelligence the 400th time they answer correctly.

>The machine in question has no problem replicating its results on whatever test

What machine is that? All the LLMs I have tried produce neat results on very narrow topics but fail on consistency and generality. Which seems like something you would want in a general intelligence.


>What does repeatability have to do with intelligence? If I ask a 6-year-old "Is 1+1=2?", I don't change my estimation of their intelligence the 400th time they answer correctly.

If your 6-year-old can only answer correctly a few times out of those 400 and you don't change your estimation of their understanding of arithmetic, then I sure hope you are not a teacher.

>What machine is that? All the LLMs I have tried produce neat results on very narrow topics but fail on consistency and generality. Which seems like something you would want in a general intelligence.

No LLM will score 80% on benchmark X today and then 50% on the same benchmark two days later. That doesn't happen, so the convoluted setup OP had is meaningless. LLMs do not 'fail' on consistency or generality.



