The Turing test point is actually very interesting, because it tests whether you can tell you're talking to a computer or a person. When GPT-3 came out, we all declared that test utterly destroyed. But now that we've had time to become accustomed to the standard syntax, phraseology, and vocabulary of the GPTs, I've started to be able to detect the AIs again. If humanity becomes accustomed enough to the way AI talks to be able to distinguish it, do we re-enter the failed-Turing-test era? Can the Turing test only be passed for finite intervals, after which we learn to distinguish the machine again? I think it can eventually get there, and that the set of people who can detect the difference will become smaller and smaller. But who's to say what the zeitgeist on AI will be in a decade.
> When GPT-3 came out, we all declared that test utterly destroyed.
No, I did not. I tested it with questions that couldn't be answered by searching the Internet (spatial, logical, cultural, impossible coding tasks), and it failed in non-human-like ways, but it also surprised me by answering some decently.