By Alimat Aliyeva
According to a recent study by the University of East Anglia (UK), essays generated by artificial intelligence are still far from matching the quality of work produced by real students.
The study, published in the journal Written Communication, compares the essays of 145 real students with those generated by ChatGPT. Titled "Does ChatGPT Write Like a Student? Markers of Engagement in Argumentative Essays," the paper delves into the differences between human-authored and AI-generated content.
While the AI-generated essays were impressively coherent and grammatically correct, they were notably lacking in one critical aspect: personality.
As AI writing tools like ChatGPT become increasingly sophisticated, the study emphasizes the importance of cultivating critical literacy and ethical awareness in the digital age.
The researchers hope that the findings will assist educators in identifying instances of cheating in schools, colleges, and universities worldwide by distinguishing machine-generated essays from those written by students.
Professor Ken Hyland of the university's School of Education and Lifelong Learning noted that ChatGPT's public release has raised serious concerns among teachers, many of whom fear that students may use AI to complete assignments. "The concern is that ChatGPT and other AI writing tools may contribute to cheating and undermine foundational literacy and critical thinking skills," said Hyland. "This is particularly important because we don't yet have reliable tools to detect AI-generated texts. In response to these concerns, we wanted to explore how closely AI can mimic human essay writing, specifically by examining how writers engage with readers."
The research team analyzed 145 essays written by real students and 145 written by ChatGPT. "We were particularly interested in what we called 'engagement markers,' such as questions and personal comments," Professor Hyland explained. "We found that the essays written by real students featured a rich array of engagement strategies, making them more interactive and compelling. These included rhetorical questions, personal digressions, and direct appeals to the reader—techniques that enhance clarity, communication, and argument strength. On the other hand, while ChatGPT’s essays demonstrated linguistic fluency, they were more impersonal. AI-generated essays adhered to academic writing standards but lacked the personal touch and clear stance that human writers bring to their work."
Despite these shortcomings, the study does not dismiss the role of AI in education. On the contrary, the researchers argue that tools like ChatGPT should be used as teaching aids rather than as shortcuts for students seeking to reduce effort.
"When students enter school, college, or university, we don’t just teach them how to write; we teach them how to think—and this is something no algorithm can replicate," Professor Hyland added.
The study also raises important questions about the future of education in an AI-driven world. As AI continues to evolve, it could become an invaluable tool for assisting students with research, brainstorming, and drafting. At the same time, the findings underscore the need for education systems to adapt and to teach students how to use AI responsibly and ethically. Encouraging students to maintain their own voices in writing, even when using AI tools, may become an essential part of future curricula. Educators will likely need to strike a balance between leveraging AI's benefits and preserving the critical thinking and creativity that form the foundation of meaningful learning.