The Case for Quantity in Science Publishing
By David B. Allison, Brian B. Boutwell
Well-intentioned efforts that encourage researchers to produce fewer, higher-quality papers miss the many benefits of abundance in academic research.
In an influential 2016 editorial in the journal Nature, Daniel Sarewitz at Arizona State University warned that scientific research is being undermined by a glut of over-publishing. “Current trajectories threaten science with drowning in the noise of its own rising productivity,” he wrote, adding that avoiding such an outcome “will, in part, require much more selective publication.” This sentiment has since been repeated so often that it has practically become an accepted truism.
If we were forced to choose between quantity and quality in the production of research, quality seems the obvious choice. We reject the notion that improving the quality of scientific publishing requires limiting the quantity, however. On the contrary, we believe that rules and procedures designed to suppress quantity could end up harming the research community and hindering the emergence of new, creative ideas.
Most researchers already publish at quite a modest rate. One of the few studies to look systematically at the publication rate of individual researchers found no meaningful increase over the past century. The study’s authors—Daniele Fanelli from the Meta-Research Innovation Center at Stanford and Vincent Larivière from the University of Montreal—concluded that “the widespread belief that pressures to publish are causing the scientific literature to be flooded with salami-sliced, trivial, incomplete, duplicated, plagiarized and false results is likely to be incorrect or at least exaggerated.” A relatively small number of hyperproductive publishers create a distorted view of the situation.
Furthermore, the people who worry about overpublishing imagine a zero-sum game where none exists. Quantity does not need to come at the cost of quality, and there are significant upsides to quantity in publishing. More scientific papers and more scientific communication can contribute to the quality of the research enterprise as a whole, for a number of important reasons.

[Illustration by Yuki Murayama]
Our first argument is that increasing the quantity of papers or publications leads to more opportunities for the law of large numbers to take effect, thereby increasing the chances of an important finding emerging. Nobel laureate Linus Pauling said that if one wishes to have a good idea, one must first have many ideas. Most ideas will founder, a reality famously captured by the example of Thomas Edison. When asked about his repeated failed attempts to develop the light bulb, Edison responded, “I have not failed. I have just found 10,000 ways that won’t work.”
By extension, researchers cannot know ahead of time which seemingly inconsequential discoveries will turn out to be important. Reverend Thomas Bayes hit on the powerful concept of conditional probability in the 18th century partly as a way of understanding the shifting odds of winning a lottery. (Incidentally, his discovery of what is now known as Bayes’s theorem was never published during his lifetime.) Studies of Gila monster venom in the 1990s helped spark the development of today’s GLP-1 diabetes and weight-loss drugs.
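The arithmetic behind this argument is simple: if each independent attempt has some small probability of producing an important finding, the chance of at least one success grows rapidly with the number of attempts. A minimal sketch (the per-attempt probability and attempt counts below are purely illustrative assumptions, not estimates from any study):

```python
# Chance of at least one "important finding" among n independent attempts,
# each with a small per-attempt success probability p: 1 - (1 - p)**n.
def chance_of_at_least_one_hit(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    p = 0.01  # illustrative assumption: 1-in-100 odds per attempt
    for n in (10, 100, 500):
        print(f"{n} attempts -> {chance_of_at_least_one_hit(p, n):.3f}")
```

Even with 1-in-100 odds per attempt, a few hundred attempts make at least one hit nearly certain, which is the portfolio logic of Pauling's and Edison's remarks.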
Roberta Sinatra of Northeastern University and colleagues analyzed data on the careers of scientists and the impact of their published work, ultimately concluding that “the highest-impact work in a scientist’s career is randomly distributed within her body of work.” Moments of great insight occurred with the same probability anywhere in the sequence of a scientist’s publications. This random-impact rule held across disciplines, across careers of varied lengths, and over time. It applied whether authorship was solo or with a team, and whether or not credit was assigned uniformly among collaborators.
Rather than enjoining scientists to limit their quantity, expecting that reduced output will somehow provoke a hot streak, we should encourage steady productivity, patience, and perseverance. It’s important, and entirely possible, to promote quantity while maintaining standards for quality and disincentivizing questionable research practices. For instance, we support having scientists make their raw data openly available and pre-registering their plans for research and analysis. The goal should be to make those actions as frictionless as possible so that they don’t harm productivity and output.
Another advantage of quantity is that greater numbers of studies and publications allow greater opportunities to observe both successful and failed replications.
Whether from random variations, subtle differences in methodologies, or statistical manipulations such as p-hacking (exploiting data analysis to produce desired results), what works in one study or lab is not guaranteed to work in other places and at other times. (See “The Statistical Crisis in Science,” November–December 2014, for more on p-hacking.) Concerns about replicability have surged among researchers, the public, and government officials alike. Members of Congress went as far as to request that the National Academies of Sciences, Engineering, and Medicine offer recommendations for improving rigor and transparency in research, which were published in 2019 as a Consensus Study Report titled Reproducibility and Replicability in Science. Brian Nosek at the Center for Open Science has conducted extensive investigations showing how a scarcity of replication can help conceal flimsy bodies of evidence.
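The mechanics of p-hacking can be made concrete with a small Monte Carlo sketch: even when no real effect exists, trying many variants of an analysis and reporting whichever one reaches significance makes a spurious "discovery" likely. All numbers here are illustrative assumptions, and each analysis is modeled simply as an independent 5 percent chance of a false positive under the null:

```python
import random

# Monte Carlo sketch of p-hacking: under the null hypothesis (no true
# effect), each analysis independently yields p < 0.05 about 5% of the
# time. Trying k analyses and keeping any "significant" one inflates
# the overall false-positive rate well beyond 5%.
def false_positive_rate(k_analyses: int, trials: int = 10_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # A "study" commits a false discovery if ANY of its k analyses
        # happens to cross the 0.05 threshold by chance.
        if any(rng.random() < 0.05 for _ in range(k_analyses)):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for k in (1, 5, 20):
        print(f"{k} analyses -> false-positive rate ~ {false_positive_rate(k):.2f}")
```

With 20 flexible analyses, a false discovery becomes more likely than not, which is exactly why independent replications are needed to separate robust results from statistical flukes.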
More publications will naturally lead to more studies geared toward replication, with a positive impact for science as a whole. We argue that the entire collective of scientific research should be thought of as a portfolio to be optimized—that is, the incentives and evaluations that influence quality and quantity should lead to the best outcomes relative to available resources, as determined by new knowledge, novel applications, public interest, and support for the next generation of researchers. Such optimization will likely involve repeated shifts between focusing on novel studies and focusing more on replication attempts in subdomains of research.
Past a point, repeated replication attempts bring diminishing returns. More publishing is better when it advances knowledge and actively clarifies prior results; more is not better when it merely recapitulates robust results ad nauseam. But overall, a greater volume of successful and failed replications not only weeds out individual flawed studies, it also highlights the insights from different areas of inquiry that have resisted refutation time and again.
The 19th-century scientist and philosopher William Whewell described a concept that is relevant here: consilience, a process of intellectual reinforcement that happens when demonstrated findings from one domain of inquiry accord with demonstrated facts or postulates from another. Consilience is akin to puzzle building. A single puzzle piece, like a single finding, could fit many different places, or it might not fit at all. The more pieces you have joined together, the easier it is to see where a new one can or cannot go. In this context, both success and failure become more evident.

The absence of consilience risks producing misleading results, as happened in 2011 when Cornell social psychologist Daryl Bem reported evidence for extrasensory perception (ESP), premised on ostensibly statistically significant results. The broader scientific community was rightly incredulous. The concept of ESP ran counter to our understanding of what physical laws permit within the construction of reality (that is, no consilience). Sure enough, further studies failed to replicate Bem’s results.
Quantity makes it easier for investigators to engage in competition and probe for weaknesses in an idea. In military and security training, one group will sometimes pose as an enemy “red team” to help the other team find and fix its weaknesses. A good red team personifies the notion of tough-minded but constructive criticism. A similar process happens in scientific research. In his book The Knowledge Machine, New York University philosopher Michael Strevens argues that science is at its best when honest but competitive investigators repeatedly search for flaws in their competitors’ arguments in a collegial, constructive manner.
A century ago, Albert Einstein intensely prodded Danish physicist Niels Bohr in this way over his interpretation of the statistical nature of quantum mechanics. Although Einstein’s early work had provided much of the foundation for quantum mechanics, he had difficulties accepting core aspects of the fast-developing field. He repeatedly assailed Bohr’s ideas, most notably at the Solvay Conference of 1927. The two physicists’ attempts at refuting each other were frequently rebutted and, in some cases, found to contain mistakes. Their volleys rooted out flaws and strengthened the intellectual bulwark of quantum theory.
More recently, nutritionists David Ludwig of Harvard University and Kevin Hall of the National Institutes of Health have engaged in a series of vigorous debates over the connections among obesity, diet composition, and energy (caloric) intake. Their pointed critiques have elevated each other’s thinking and prompted new insights into how physiologic and behavioral factors affect body weight. These red-team volleys were facilitated by numerous research papers as well as blog posts and other nontraditional forms of academic science communication.
Quantity in research can improve data collection as well. When individuals (or individual laboratories or teams) produce a great deal of a certain type of research, they become increasingly efficient and proficient over time. For instance, repeated use of expensive equipment and facilities can lead to a positive economy of scale and an increase in quality.
Fostering environments that encourage greater research productivity should lead to more publishing. At the same time, greater productivity implies increased use of the labs, research facilities, software, and databases required to generate the work. A 2017 review by a British team explored the effects of concentrating more work in one facility for biomedical and health research. The authors reported some direct evidence for “positive economies of scale” in universities and research institutes. Our personal experiences affirm such economies of scale, especially for research that depends heavily on repetitive use of complex procedures or instruments that are initially expensive to install or difficult to master.
In 1991, one of us (Allison) joined the New York Nutrition and Obesity Research Center as a postdoc under the mentorship of Steven B. Heymsfield, affectionately known as the “king of human body composition analysis.” He seemingly had access to every new body composition device as it came out. Companies flocked to Heymsfield’s lab to examine and validate the latest tools; health researchers from across the country likewise came to gather more precise measurements for their studies. As our team grew more experienced, the documented quality of our measurements improved and our databases of measurements expanded. Finally, the number of research papers generated by investigators with access to those databases grew as well, and continues to do so.
Multidisciplinary research has long been observed to generate insights by combining different fields of science. Increasing quantity can assist in this regard, because it can bring together individual scientists and their institutions, with benefits for science as a whole. More publishing, particularly of cross-disciplinary collaborations, increases the chance of exposure and creates more opportunities to draw on the knowledge of new collaborators.
When communicating their work to other researchers, both inside and outside their fields, scientists often inspire new collaborators who then bring fresh ideas and insights to the table. A famous example is that of the young Michael Faraday, later one of the greatest experimental physicists, who was drawn into the laboratory of the polymath Sir Humphry Davy after one of his lectures at the Royal Institution of Great Britain. Davy was mostly summarizing earlier research, but by exposing new people to his ideas, he helped to launch Faraday’s career. Faraday subsequently developed the principle of the electric motor and identified the unified relationship between electricity and magnetism.
Both of us have attracted students, collaborators, and funders because our work was available for others to notice. We have also noted many other contemporary examples of research partnerships fostered through science publishing and communication. In the early 2000s, for example, immunologist Marta Catalfamo (then a fellow at the National Cancer Institute) attended a lecture about HIV patients by Clifford Lane, clinical director of the National Institute of Allergy and Infectious Diseases. As described on the George Washington University website, that lecture redirected Catalfamo’s research toward understanding how HIV affects the human immune system, and led to her working alongside Lane to develop novel therapies for neutralizing HIV and suppressing secondary infections.
Public understanding of science is a critical factor in determining what research has a practical impact. At times it is necessary to bring an idea to many different audiences, both to convey its importance and to open minds to evidence supporting novel or controversial findings. Quantity creates openings for science to be applied more effectively, especially when an important idea is either not easily grasped or not easily accepted by others for social or emotional reasons.
We have on more than one occasion published an article, such as a review or perspective, in one journal, only to have another journal contact us afterward saying, “We saw your paper, and we’d like you to write a related piece for us.” These days it is also common for major scientific societies to come out with a position statement that is created collaboratively with other journals and is then simultaneously published in more than one outlet. In the context of this article, we observe that quantity is important not just in academic publishing, but also throughout science communication more broadly.
Louis Pasteur regularly wrote letters to newspapers rebutting attacks against his ideas on germ theory and vaccines. In evolutionary biology, Thomas Henry Huxley earned the appellation “Darwin’s bulldog” in the 19th century for his dogged promotion of Charles Darwin’s ideas. Over the past few decades, scientists such as Stephen Jay Gould, Richard Dawkins, and Sean B. Carroll have continued Huxley’s work, recognizing the importance of repeatedly articulating complex evolutionary ideas using different voices in journals, books, popular magazine articles, and videos. Public outreach is a separate type of quantity that could and should be encouraged through its own set of incentives.
Promoting quantity can help cultivate the future workforce of science, encouraging diversity and equal opportunity. To make an analogy: The goal in professional basketball, or in any professional sport, is arguably to produce the best output for each unit of time and money spent. So, if we define the best version of the sport as the most technically proficient, and if we had just a single evening to spend watching, then it would seem best to have only the top performers or teams play, and only in the interval of their peak performance. Minor leagues, college basketball, children’s leagues, or pick-up games would generally be eliminated so that we could promote quality and not quantity.

But hopefully we have much more time than just a single evening in our lives to enjoy basketball. Over time, it is also important to continuously draw new people into the game. If we value the abilities of a top basketball player, then justice requires that we allow others the opportunity to develop similar skills. By bringing in a diversity of players, we increase the pool to draw from, thereby increasing quality. We may also find that diversity adds complexity and creativity to the game. Thus, it is crucial to support developmental league games, college basketball, children’s basketball games, and so on, if we value the future of basketball.
The same principles hold for scientific research. If we desire a strong scientific workforce, we must promote the careers of those who may not have attended the most elite universities or worked in the highest-profile laboratories. We should encourage people whose lives have given them a slower start by affording them opportunities to present results and receive encouragement, win awards, and obtain jobs. A vital way to do that is to provide more opportunities for junior researchers to publish their work, perhaps by adding journals that focus on student and mentee research. When established researchers publish more, they also have greater opportunities to take on junior researchers as coauthors.
It seems useful to close by considering not just why it is important to promote quantity in science publishing, but how to do it effectively. Scientists often remark that there is never a good time to do research—you either find the time or you don’t. But the system can make it easier for scientists to boost their productivity and increase their rates of publication while maintaining high standards. That is, we can optimize conditions for promoting both quality and quantity.
Journals could further streamline and automate the rules of publishing so that it takes less time to preregister research and analysis plans. Universities and funding agencies could work continually to reduce administrative burdens of tasks such as the submission of grant applications. Institutions of all kinds could provide more infrastructure to help researchers be more creative and more productive. In particular, they could adjust the way money is allocated so that researchers experience periods of stable support, punctuated by periods of demand.
There is a body of literature supporting the idea that researchers work most effectively when they alternate between those conditions. During times of demand, researchers enter a state of high productivity: Get this grant, solve this problem, publish this paper. But researchers also need episodes of relative downtime, when they can think for a while and engage in open creativity. Einstein’s seven years spent working in a Swiss patent office functioned as a (highly effective) creative escape from academic burdens. The Howard Hughes Medical Institute allocates creativity time to its grant winners, an approach that has been shown to foster highly creative work.
A final thought about the current hand-wringing about overpublishing in science: The volume of research publications has been increasing exponentially for at least two centuries—for as long as institutional science has existed. The current growth should be viewed in that larger context, as a sign of the continued expansion of human knowledge. Making quantity a sacrificial lamb in the hopes of boosting quality will only impede the remarkable progress that is possible when science is functioning at its best.