In a recent article in Nature, Elizabeth M. Wolkovich, an associate professor of forest and conservation sciences at the University of British Columbia in Vancouver, detailed how a peer reviewer accused her of using ChatGPT to produce a manuscript. She hadn't, but her paper was rejected nevertheless.

When a research paper is submitted to an academic journal for potential publication, the editor typically forwards it to one or more reviewers for peer review. For the work to be accepted, the reviewers must be satisfied, and the author(s) must satisfactorily respond to the reviewers' questions, concerns, and suggestions.

“Obviously ChatGPT,” one reviewer wrote of Wolkovich’s paper. The handling editor found “the writing style unusual,” only obliquely agreeing with the reviewer. Wolkovich writes her articles in LaTeX, a typesetting language, and her text change history is available on GitHub. But even if she lacked this evidence, how can an editor or reviewer accuse someone of using a chatbot without having seen any proof?

In academia, having a paper rejected by a journal is common; scholars worldwide are well accustomed to it. But this episode may be an example of how AI subtly taints science without even trying. Worries about AI gaining control over humanity, supercharging misinformation, and entrenching subtle forms of bias and inequality are widespread.

Yet, according to Wolkovich, “ChatGPT corrupted the whole process simply by its existential presence in the world.” What astounded her was how indifferent the reviewers and editors seemed to the possibility that AI-generated content had been submitted.

The conflict between technology, transparency, and humanity was succinctly expressed by author Yuval Noah Harari in a 2018 Atlantic article: “Lots of mysterious terms are bandied about excitedly in TED Talks, at government think tanks, and at high-tech conferences – globalization, blockchain, genetic engineering, AI, machine learning – and common people, both men and women, may well suspect that none of these terms is about them.” However, they are.

Deepfakes crisis

Deepfakes, one of AI’s dangerous avatars, are pushing the world into its greatest crisis yet as they continue to blur the lines between “real reality” and the “fake world.” As early as 2018, American law professors Bobby Chesney and Danielle Citron coined the phrase “liar’s dividend.” Simply put, Chesney and Citron stated, “a sceptical public will be primed to doubt the authenticity of real audio and video evidence.”

In a May 2023 New York Times piece, one of the “godfathers of AI,” Geoffrey Hinton, spoke of “bad actors” who would attempt to utilise AI for “bad things.” Yes, that is something we’re worried about. But as Robert Hall noted, “Just as daunting is what if it remains in the hands of us – the good guys. The evidence is convincing that just as we have allowed current and previous generations of technology to erode our relationships, AI will only erode them further.”

Ethics and integrity

A great deal of science relies on trust and faith in the ethics and integrity of researchers. It’s true that data fabrication and other forms of scientific misconduct have been around for perhaps thousands of years. But it would be truly dystopian if we began to distrust everyone’s research findings and writings.

AI-generated images are now so easy to create that we often search for signs of AI’s touch whenever we view a painting or photograph. We know that some novels have already been written with the assistance of generative AI. We might soon stop believing that stories, essays, and poems are the original creations of writers and poets. That would be the greatest casualty caused by AI.

Not all college students did their assignments and projects honestly in the past either. We knew that some would turn to friends, seniors, the internet, and even professionals for help.

However, we never considered it a problem serious enough to destabilise the system. In the ChatGPT era, though, colleges and universities across the world are struggling to reframe their assignments. The worry that students may use LLMs is one strand of the general distrust that ChatGPT has bred across society. Perhaps this distrust represents the most severe harm inflicted by generative AI.

The writer is Professor of Statistics, Indian Statistical Institute, Kolkata
