By Alimat Aliyeva
Two popular chatbots have reached an important milestone: they have passed the Turing test, according to scientists from the University of California San Diego (USA). The models in question are GPT-4.5 and LLaMa-3.1, and their success suggests that artificial intelligence may have reached a level of conversational sophistication comparable to human intelligence, Azernews reports.
The Turing test, proposed by Alan Turing in 1950, evaluates whether a machine can exhibit behavior indistinguishable from that of a human. If interrogators cannot reliably distinguish the responses of a machine from those of a person, the machine is considered to have passed the test.
The team tested four artificial intelligence systems: GPT-4.5, released in February 2025; its predecessor GPT-4o; the LLaMa model; and the ELIZA chat program from the 1960s. The first three are large language models, advanced deep-learning systems that generate and understand text based on vast datasets.
To conduct the test, the researchers recruited 126 students from the University of California San Diego and 158 participants from the Prolific recruitment platform. These participants engaged in online conversations without knowing whether they were talking to a human or an AI. The results showed that GPT-4.5 was mistaken for a human in 73% of cases, more often than the actual human participants were. LLaMa-3.1 was judged to be human in 56% of interactions, while ELIZA and GPT-4o were identified as human in only 23% and 21% of cases, respectively.
The AI models performed best when they were instructed in advance to impersonate a human, though the researchers emphasized that this does not mean they would have failed the test without such prompts. According to the findings, published as a preprint on arXiv, this marks the first instance in which an AI has successfully passed the Turing test.
In another notable development, a neural network has for the first time been enrolled as a university student in Austria. The University of Applied Arts in Vienna accepted an AI model named "Flynn" into its digital art program. Flynn went through the standard application process, and the university's leadership noted that no rules specifically require students to be human. The move raises further questions about the role of AI in higher education and its potential future in academia.