The rapid rise of ChatGPT, an AI language model, has raised concerns about its impact on academic and professional writing. Medical journals and other scholarly publications now face the challenge of identifying articles produced with ChatGPT without proper attribution or disclosure, making the ability to distinguish human-authored academic text from AI-generated text a new and pressing issue in medical publishing.
While various statistical and deep learning methods have shown promise in separating human from AI-generated text, they may not be adequate for academic writing specifically. Recognizing this gap, researchers recently presented a method for distinguishing ChatGPT-generated text from human-authored academic writing. To develop their approach, they built a curated dataset of Perspectives articles from the journal Science, chosen to reflect the writing style typical of academic scientific literature, and paired the human-written articles with corresponding text generated by ChatGPT.
Their analysis identified four categories of features that proved highly effective at separating human writing from ChatGPT-generated text: paragraph complexity, sentence-level diversity in length, differences in punctuation usage, and differences in the words each group favors.
In terms of structure, ChatGPT tends to produce shorter paragraphs, with fewer sentences per paragraph than human scientists, and its sentences vary less in length. Human scientists, in contrast, use more punctuation marks, proper nouns, acronyms, and numbers in their writing.
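To make these structural signals concrete, here is a minimal Python sketch of how such features might be computed. It is illustrative only: the feature names, the naive sentence splitter, and the punctuation set are assumptions for this example, not the authors' published code.

```python
import re
import statistics

def structural_features(text: str) -> dict:
    """Paragraph- and sentence-level features of a document (illustrative)."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    # Naive sentence splitting on terminal punctuation; good enough for a sketch.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        # ChatGPT tended to pack fewer sentences into each paragraph.
        "sentences_per_paragraph": len(sentences) / max(len(paragraphs), 1),
        # ChatGPT's sentence lengths varied less than human scientists'.
        "sentence_length_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        # Humans used more punctuation, acronyms, and numbers.
        "punctuation_count": sum(text.count(c) for c in ";:()-\"'"),
        "acronym_count": len(re.findall(r"\b[A-Z]{2,}\b", text)),
        "number_count": len(re.findall(r"\b\d+(?:\.\d+)?\b", text)),
    }
```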
Stylistic differences also emerged. Human scientists used more equivocal language, favoring terms such as “however,” “but,” and “although,” and used words like “this” and “because” more frequently. They also showed a greater range in sentence length, writing both long sentences (35 words or more) and short sentences (10 words or fewer) more often than ChatGPT. ChatGPT, for its part, tended toward more generalized statements and made heavier use of single quotes.
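These word-usage findings translate into features in a similar way. The sketch below is again an assumption-laden illustration: the marker-word list and the 35-word/10-word thresholds come from the findings summarized above, not from the authors' code.

```python
import re

# Words the study found human scientists favor (equivocal and connective terms).
HUMAN_MARKERS = {"however", "but", "although", "this", "because"}

def lexical_features(text: str) -> dict:
    """Word-choice and sentence-length-distribution features (illustrative)."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    n_sent = max(len(sentences), 1)
    return {
        # Rate of the human-favored marker words.
        "human_marker_rate": sum(w in HUMAN_MARKERS for w in words) / max(len(words), 1),
        # Humans wrote very long (>=35 words) and very short (<=10 words)
        # sentences more often than ChatGPT.
        "long_sentence_rate": sum(l >= 35 for l in lengths) / n_sent,
        "short_sentence_rate": sum(l <= 10 for l in lengths) / n_sent,
        # ChatGPT favored single quotes; count quotes not embedded in a word.
        "single_quote_count": len(re.findall(r"(?:^|\s)'|'(?:\s|$)", text)),
    }
```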
The researchers’ method proved highly effective, capturing the characteristic patterns of human scientific writing and classifying documents with over 99% reported accuracy. Because this study focused on the academic science writing found in Perspectives articles from Science, further work is needed to assess how well these findings generalize to other kinds of academic writing. Future studies should apply the approach across academic disciplines and explore additional features that may improve the classifier’s accuracy.
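Putting the pieces together, a classifier can then be trained on these features. The sketch below reuses the structural_features and lexical_features helpers from above; scikit-learn's LogisticRegression stands in for whatever off-the-shelf model the authors actually used, and the two-document corpus is a hypothetical placeholder for the real labeled articles.

```python
from sklearn.linear_model import LogisticRegression

def build_feature_matrix(documents):
    """Stack the structural and lexical features sketched above."""
    X = [
        list(structural_features(text).values())
        + list(lexical_features(text).values())
        for text, _ in documents
    ]
    y = [label for _, label in documents]
    return X, y

# Hypothetical toy corpus: 1 = human-written, 0 = ChatGPT-generated.
# In practice this would be the labeled Perspectives articles and their
# ChatGPT-generated counterparts.
documents = [
    ("However, this matters because the effect persists. But caveats remain.", 1),
    ("The study provides valuable insights. It highlights key findings.", 0),
]

X, y = build_feature_matrix(documents)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))  # sanity check on the (tiny) training set itself
```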
Source: DOI:10.1016/j.xcrp.2023.101426