
Now you are the one who doesn't seem to read the comments. OpenAI's AI is not trained to behave like a human; it is tuned to behave like an AI. I guess you didn't even try the AI before writing your comment, or you'd realize how often it explains that it is a language model built by OpenAI and that it has several limitations, such as the inability to access the Internet. As I said in my first comment in this discussion, that is the hardest problem OpenAI is trying to solve, rather than just beating the Turing test. You linked a way to get around the filter imposed on the AI, but that's not something you could do with a human being, haha, so I don't see what the point would be here: it doesn't behave like a human being in the first place (as it should).


I guess my point is that no one has ever shown that it is easy to get a language model to pass a Turing test. This one can't even count words.


The point you made from the beginning was that a language model cannot beat a Turing test, and the only actual "argument" you offered was: it failed at task X, therefore "it doesn't understand reality." What would happen if it answered correctly? Would it suddenly have acquired the ability to understand reality? I don't think so. To me it is clear that this AI already has a deep understanding of reality, and the fact that ChatGPT failed one task doesn't convince me otherwise; it shouldn't convince you either. These kinds of "arguments" usually fall short very quickly, as history has shown: you can find plenty of articles and posts on the net making arguments like yours (even from 2022) that are already outdated.

The point is that these neural networks are flexible enough to understand you when you write, to understand reality when you ask about geography or anything else, and flexible enough to beat a Turing test even though they were trained "only" on text and never had to experience reality themselves. The imitation game (as Turing called it) can be beaten by a machine that has been trained to imitate, no matter whether the machine is "really" thinking or just "simulating" thinking (the Chinese room). Beating the test wouldn't be a step toward artificial general intelligence, as many people seem to erroneously believe; the actual steps toward artificial general intelligence are alignment, maybe agents, and so on.



