In a recent study, an AI tool outpaced a human expert in detecting duplicated images in scientific research papers.
Sholto David, who investigates image manipulation in academic papers, noted that many in the scientific community are unaware of the magnitude of this issue.
He used an AI tool to scan hundreds of papers from the journal Toxicology Reports in search of image duplications.
Scientific research is predicated on authenticity and accuracy. When images in a paper are improperly duplicated, it casts doubt over the integrity of the entire study.
Moreover, duplicate images can distort the truth and often fail to give the original creator fair credit. If reused improperly or out of context, they can lead to misleading interpretations or conclusions. Such misrepresentation affects not only the immediate study but can ripple outward, influencing subsequent research built on flawed premises.
Working considerably faster than David could on his own, the tool identified nearly all of the suspicious papers he had already flagged, plus 41 that he had overlooked.
In total, 115 of the 715 papers analyzed (16%) contained inappropriate duplications. The study is currently available as a preprint on bioRxiv.
Academic publishing is already battling issues related to image manipulation.
A 2016 study led by image forensics expert Elisabeth Bik found that roughly 4% of the papers she visually inspected across 40 biomedical journals contained improperly duplicated images. David’s study places the figure much higher than that.
David further highlights that not all image manipulations are carried out with ill intentions. Many instances occur due to accidental adjustments or for aesthetics and clarity.
Nevertheless, there’s a growing consensus about addressing image alterations that breach ethical guidelines.
More about the AI tool
The AI tool David used for his study, named Imagetwin, is already in use at roughly 200 academic organizations, including publishers and universities.
The software compares images in academic papers against a database of more than 25 million images to check for duplication.
Patrick Starke, a developer of Imagetwin, explained that it generates a unique “fingerprint” for every image in a research paper. That fingerprint is then used both for scans within the paper itself and for checks against the external database.
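Imagetwin’s actual algorithm is not public, but the idea of an image “fingerprint” can be sketched with a standard perceptual-hashing technique. The snippet below uses a difference hash (dHash) as an illustrative stand-in: each image is reduced to a compact bit string, and two images whose fingerprints differ in only a few bits are flagged as likely duplicates. The pixel grids and thresholds here are invented for illustration.

```python
# A minimal sketch of perceptual fingerprinting. The difference hash (dHash)
# shown here is an assumption for illustration, not Imagetwin's real method.

def dhash(pixels):
    """Compute a difference-hash fingerprint from a grayscale image.

    `pixels` is a 2D list of brightness values, assumed already resized to a
    small grid (e.g. 8 rows x 9 columns). Each bit records whether a pixel
    is brighter than its right-hand neighbor, capturing the image's
    brightness gradients rather than its exact pixel values.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Two near-identical 8x9 gradient "images"; the second has one pixel nudged,
# simulating compression noise or a light re-save.
img_a = [[col * 10 for col in range(9)] for _ in range(8)]
img_b = [row[:] for row in img_a]
img_b[0][0] = 15  # tiny edit

fp_a, fp_b = dhash(img_a), dhash(img_b)
print(hamming(fp_a, fp_b))  # prints 1: tiny distance -> likely duplicates
```

Because the hash reflects coarse structure rather than exact bytes, a duplicated image survives resizing or recompression with only a small Hamming distance between fingerprints, which is what makes database-scale matching feasible.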
Starke noted that several universities are deploying Imagetwin to screen papers before they are submitted to journals.
AI is profoundly influencing academia. In a recent survey of 1,600 researchers by the science publisher Nature, despite widespread support for AI in academia, some 68% of respondents were worried about misinformation relating to large language models (LLMs) such as ChatGPT.
Where one AI contributes to disinformation, another AI provides hope – at least in this case.