As far as I know, it's still relatively easy to find out you're talking to an LLM if you're actively looking for it.
People are being fooled in online forums all the time. That includes people who are naturally suspicious of online bullshittery. I'm sure I have been.
Stick a fork in the Turing test, it's done. The amount of goalpost-moving and hand-waving that's necessary to argue otherwise simply isn't worthwhile. The clichéd responses that people are mentioning are artifacts of intentional alignment, not limitations of the technology.
I feel like you're skipping over the "if you're actively looking for it" bit. You can call it goalpost-moving, or you can check the original paper by Turing and see that this is exactly how he defined it in the first place.
people are being fooled, but they're not being handed the problem "one of these users is a bot, which one is it?"
what they get is only loosely like the turing test: "0 or more of these users may be a bot, have fun in a discussion forum"
and there's no evaluation at the end: nothing checks whether any user correctly identified a bot, no field records which users are actually bots (fully, partially, or not at all), and no field captures each user's guesses about the others
Then there's the fact that the Turing test has always said as much about the gullibility of the human evaluator as it has about the machine. ELIZA was good enough to fool normies, and current LLMs are good enough to fool experts. It's just that their alignment keeps them from trying very hard.