Retraction Reactions
By Ivan Oransky, Adam Marcus
Scientists’ responses to published errors provide case studies of practices to avoid or embrace when engaging with the research community.
Retracted papers remain uncommon in research, but as their numbers have grown, more than 20 Nobel laureates—some 2 percent of the 1,000-plus people and organizations that have won the prize—have had scientific papers retracted. Altogether, those 22 winners have retracted 41 articles.
How these celebrated scientists respond to the loss of their work from the literature reveals a great deal about the ways researchers should—and should not—handle public setbacks. Some embrace their mistakes and attempt to rectify them, others ignore the problem, and a few go on the attack in an attempt to defend their reputations.
As the founders of Retraction Watch, a website devoted to covering problems in scientific publishing, we know errors are an unavoidable part of the research endeavor. The responses of Nobel laureates to their retractions offer useful case studies in how best to respond when research comes under fire.
In broad strokes, retractions are a solution (and definitely not the only solution) to a problem: What to do with unreliable articles in the scientific literature. The oldest example in the Retraction Watch database is a 1756 letter by Benjamin Wilson published in the Philosophical Transactions of the Royal Society. In the letter, Wilson, an English experimental philosopher, retracts a portion of his previously published treatises on electricity that critiqued the work of one Benjamin Franklin.
Despite their long history, however, retractions were extremely uncommon until relatively recently. In 2002, journals retracted about 40 papers, or roughly 0.002 percent of all published articles. A decade ago, that figure was closer to 0.02 percent, and today it sits at about 0.2 percent. Based on our experience at Retraction Watch and on work by others, even that percentage does not reflect the true rate of problems in published papers, which is likely at least 10-fold greater.
Careful analysis by researchers going back more than a decade—including by microbiologists Ferric Fang of the University of Washington and Arturo Casadevall of Johns Hopkins University, as well as reporting by the news team at the journal Science—has found that approximately two-thirds of retractions result from misconduct, including plagiarism, image manipulation, and the fabrication or falsification of data. The remaining third stem from honest errors, publishers’ mistakes, legal issues, and other edge cases unrelated to the behavior of the researchers.
In other words, when journals retract papers, the odds are strong that one or more players have behaved badly. And given those odds, scientists are entirely rational when they worry about the damage a retraction can do to their reputations.
Just as scientists fear tarnishing their reputations by publicly admitting mistakes, journal editors tend to avoid retractions as well. Some may fear that retracting more papers than other publications in their specialty would cause readers to question their editorial judgment, peer review processes, and other elements of quality control. They may face legal pressure to opt for a lengthy correction over a retraction, for example, or they may say they do not have the resources to conduct broad investigations. There is some good news: The growing number of retractions in part reflects moves by some journals to hire research integrity teams. But it also reflects the willingness of publishers to endorse—some might say cynically—a “victims of paper mills” narrative that absolves them of responsibility.
The calculus would be much different if retractions more commonly occurred for cases of honest error. Retractions might then lose a bit of their stigma and instead become part of the normal publishing process. But for the moment, at least, many publishers and scientists view retraction chiefly as a kind of “nuclear option” for sins of commission.
Most of the 22 Nobel laureates who had papers retracted lost only a single article, but one Nobelist has lost 13, a dozen of them since late 2022. That researcher is Gregg Semenza of Johns Hopkins University, who shared the 2019 prize in Physiology or Medicine for “discoveries of how cells sense and adapt to oxygen availability.” The retracted papers were published as far back as 2009, and all were challenged for reasons related to the duplication of data or images—reusing material to bolster a claim that might not otherwise pass muster.
Thirteen retractions is not a record, although the vast majority of researchers have never had one paper retracted, let alone more than a dozen. And the Nobel Assembly at the Karolinska Institute, which bestows the prize, has never revoked an award. In fact, there would not seem to be a mechanism for doing so. Still, Semenza has not commented publicly on the retractions, even when asked to do so by news organizations such as Nature.

Semenza’s lack of engagement and transparency contrasts with the behavior of another Nobel Prize winner with recent retractions. Thomas Südhof, of Stanford University, shared the 2013 prize in Physiology or Medicine with two others “for their discoveries of machinery regulating vesicle traffic, a major transport system in our cells.” He has retracted two papers since early 2024, both for data aberrations related to figures.
Unlike Semenza, Südhof has engaged vigorously with critics and the media, even if not always in ways we at Retraction Watch would describe as constructive. In a frequently updated section of his lab website, Südhof responds directly and specifically to comments about nearly 50 papers critiqued on PubPeer, a site launched in 2012 that allows postings, including anonymous comments, about published research. About one in five retractions begin as concerns raised on PubPeer, an impact that earned the site the prestigious Einstein Award in 2024. (One of us [Oransky] is a volunteer member of the PubPeer Foundation’s board of directors.)
On his site, Südhof acknowledges some errors but seems to deflect blame, saying “more than 30 students and postdocs in my lab over 20 years acknowledged copy-paste mistakes in their papers” and “many accusations against my lab actually identified copy-paste errors in collaborating labs.” Südhof’s responses also question the motivations of his critics and include descriptions of what he calls the “dark side to scientific fraud detection efforts.” He claims on his website that “many PubPeer commenters maintain commercial websites communicating their discoveries and have a conflict of interest,” a statement that is difficult to prove, given that most users of the site post anonymously, and that is misleading at best.
Südhof also claims, “PubPeer posts frequently exhibit a fundamentalist attitude that insists that even an accidental duplication of a control image, undetectable to the naked eye, is a major issue, demanding that science should be absolutely pure.” In his second retraction, Südhof made a point of thanking image sleuth Matthew Schrag, a neurologist at Vanderbilt University who figured prominently in the story of how a major pillar of Alzheimer’s disease research collapsed, but he did not thank those who had commented earlier on PubPeer about the same paper. While proclaiming the virtues of transparency, Südhof nonetheless feels the need to denigrate his critics. Still, compared with Semenza, Südhof’s willingness to address his detractors publicly is refreshing.
If neither of these two heralded men should be considered a paragon of transparency, another Nobelist offers a better model of how to handle mistakes in one’s work: Frances Arnold of the California Institute of Technology. In early 2020, Arnold, who shared the 2018 Nobel Prize in Chemistry “for the directed evolution of enzymes,” announced the retraction of one of her papers even before the notice had been published by Science. “For my first work-related tweet of 2020, I am totally bummed to announce that we have retracted last year’s paper on enzymatic synthesis of beta-lactams,” she tweeted. “The work has not been reproducible.” Arnold, who earned praise for being forthright, told us at the time, “I was in the middle of all the Nobel Prize hoopla and did not pay enough attention to this submission, so it is my fault.”
So, what can we learn from Nobel Prize winners who have retracted papers, both about the role of retractions in the scientific enterprise and about how to respond to critics?
We think the defensive reflex, although understandable, is misguided. The data support our inclination. Scholars of retractions, including Susan Feng Lu of the University of Toronto and her colleagues, have found some evidence that scientists who take back their own work for honest mistakes enjoy something of a citation bump for their future publications. Other studies of the phenomenon—such as one from 2017 by Pierre Azoulay at the Massachusetts Institute of Technology and his colleagues—have shown that authors who retract papers because of fraud see their overall citation rates decline, but those who retract papers because of honest errors do not.
Misconduct might not pay, but doing the right thing appears to. Such behavior is very much in keeping with dozens of cases that we’ve written about over the years on Retraction Watch and that we categorize as “doing the right thing.” These posts tell the stories of researchers with the courage to go public about their mistakes even at professional risk. Perhaps they’ve ordered the wrong mice, or used the wrong reagent—real examples, by the way. Or they’ve just made a mistake. Despite the terror they might feel at “fessing up,” they do it anyway. We should cheer.
What does this all mean for scientists whose work comes under scrutiny, particularly at a time when all of science is under the microscope of powerful forces? Earlier this year, Science published a thoughtful and useful guide for its authors on how to engage with the media should the integrity of their work be questioned. Such coaching is necessary, according to the journal, because in the current “age of growing, intense attacks on science, silence can be detrimental to both public trust and the careers of scientists who are under scrutiny.”
Among the tips are a couple with which we strongly agree, particularly the admonition to “respond on substance” and not “attack the motives or standing of the people who question your work.” We also like the call to avoid blaming junior colleagues. After all, as the guide notes, “all authors, and especially the corresponding authors, are all responsible for the data, interpretations, and conclusions presented in the paper.”
And as journalists, we’re fully behind the suggestion to talk directly with reporters “with humility”—an approach Arnold appears to endorse.
In other words, circling the wagons is the wrong tactic, for both individual scientists and their fields, particularly when scandal is involved. In these cases, aggressive transparency is far better than building walls to keep out prying eyes.
Consider the case of Jonathan Pruitt, formerly of McMaster University in Canada, a “rock star” in the field of behavioral ecology whose prolific—but in retrospect, highly questionable—fieldwork was incorporated into the work of scores of other researchers. As Retraction Watch and other publications reported, rather than suppress or ignore the scandal, many of the scientists scorched by Pruitt’s unreliable data openly and honestly rallied together to purge his research from their own, even as he denied, deflected, and menaced his critics with threats of legal action against them.
And it wasn’t only leaders in the field who had the courage to be fully transparent and retract their work. There were also relative newcomers, such as Kate Laskowski, who had recently landed a tenure-track position at the University of California, Davis, largely on the strength of several papers built around data on spiders Pruitt had provided her. “When I realized that I could no longer trust the data that I had reported in some of my papers, I did what I think is the only correct course of action,” Laskowski wrote on her lab’s blog in a 2020 post that became something of a landmark in scientific transparency. “I retracted them.”
In other words, be more like Frances Arnold, a bit less like Thomas Südhof, and see Gregg Semenza’s behavior as what not to do.
Click "American Scientist" to access home page
American Scientist Comments and Discussion
To discuss our articles or comment on them, please share them and tag American Scientist on social media platforms. Here are links to our profiles on Twitter, Facebook, and LinkedIn.
If we re-share your post, we will moderate comments/discussion following our comments policy.