Digitizing the Coin of the Realm
By Francis Louis Macrina
Electronic publication has transformed the culture of scientific communication
DOI: 10.1511/2011.92.378
Imagine it’s 1991. You’ve just completed a series of exciting experiments in the lab and now it’s time to write up the results. You review the instructions for authors of the journal where you will submit your paper and you consult some rudimentary authorship guidelines published by your scientific society. You are even inspired to reread a few sections of Robert Day’s classic book How to Write and Publish a Scientific Paper, then in its third edition. (It’s now in its sixth.) Engaging your coauthors, you work diligently for a few weeks to draft and revise your manuscript. Finally, you package the printed pages, attach the correct postage, and get ready to hoof it over to the campus postal drop. Before you leave your office, you power down your desktop computer. As its cooling fan turns quiet, you never imagine that in 20 years, you'll be able to carry orders of magnitude more computing power in your pocket.
Over the past two decades, computing has transformed scientific publication, a process so central to the research enterprise that it is often called the “coin of the realm.” Sociologist Robert K. Merton is credited with introducing that phrase in the context of science. In his 1968 article “The Matthew Effect in Science,” he used “coin of the realm” to refer to recognition of one’s work by one’s peers. But over time, the phrase has become more broadly connected with the concept of authorship. Nuances aside, publishing one’s research results is a critical step in earning peer recognition.
It is also essential for the progress of science. The value of publishing scientific results has always been indisputable and it will remain so. But virtually every aspect of the process is, or soon will be, affected by the digital revolution. Generally, scientists have accepted these changes, assuming or hoping they were for the better. As the digital landscape continues to evolve, we need to think about and systematically examine the impact technology is having on the coin of the realm. This reflection should lead to engagement and action by the community of science—editors, publishers, scientific societies and scientists themselves—to ensure that the digital revolution has the maximum positive effect on the reporting of research. I hope that this essay will stimulate such thinking.
Twenty years ago, you would have used your personal computer solely as a means to prepare your manuscript. In 1991, when it came to scientific publication, computers weren’t much more than word processors. But the winds of change were already blowing, and some aspects of the electronic preparation of your 1991 manuscript did portend things to come. Computers were making it easier to create complex, high-quality illustrations. You would have used a software program to compile your list of literature cited and to insert citations into your manuscript. Such programs provided relief from the burdensome job of building reference lists, and they were harbingers of the effect digital tools would have on scientific publication over the next two decades.
For historical perspective, consider the following 1991 truisms. Communication by e-mail was growing rapidly, but e-mail attachments were still a few years away. Manuscript submission and review were solidly grounded in paper and the postal service, but facsimile machines were beginning to accelerate the process. The rise of electronic journals and open-access publication was years in the future. Unless you were in the computer sciences, you probably had not heard of the World Wide Web project. Digital photography was an emergent technology, but Adobe had only recently launched version 1.0 of Photoshop. And the now-ubiquitous Adobe Portable Document Format (PDF) did not yet exist. You can expand this list yourself, but I trust I have made my point. These elements have all contributed to the rapid transformation of scientific communication.
Let’s take a look at where things stand today by considering how computers, computing, and the Internet have affected the publication process itself. The availability of detailed, quality information about how to publish our scholarly work has grown dramatically, creating a valuable resource that is just a few mouse clicks away. The spartan instructions for authors (IFAs) of the early 1990s have given way to complex web pages and downloadable electronic files. Along the way, IFAs themselves have changed from brief documents that conveyed preparative and administrative instructions to lengthy, detailed compendia of authorship definitions, responsibilities, expectations and policies. In 1991, the IFA for the journal Nature amounted to a single printed page of 1,300 words. Today, Nature publishes its IFA electronically as the “Guide to Publication Policies of the Nature Journals.” It is an 18-page, 12,000-word PDF.
Such evolution is more likely to be the rule than the exception. I recently reviewed the publication guidelines of five scientific journals for a study that appeared this year in Science and Engineering Ethics. Most of these journals had expanded their IFAs into detailed documents. I also looked at guidelines provided by a few professional societies and noted that they, too, contained considerable detail about authorship and publication practices, much of which agreed with the journals’ IFAs. In another essay in this series (July–August), Michael Zigmond made a compelling case for the role that professional societies can and do play in developing and promoting codes of conduct. Zigmond chaired the Society for Neuroscience committee that wrote guidelines for responsible scientific communication. This document is so comprehensive that it leaves almost nothing to the imagination.
IFAs and society guidelines have expanded for a variety of reasons. They have become more detailed and precise in response to lessons learned from high-profile misconduct cases. And they have grown longer to encompass new policies on topics such as digital image manipulation. Taken together, modern journal IFAs and professional-society guidelines form the basis for ethical standards and best practices in scientific publication. Today, this trove of information is instantly accessible using whatever electronic portal—PC, laptop, tablet or smart phone—suits you. The digital availability of information should be a catalyst for promoting responsible conduct, but its mere existence won’t guarantee the production of ethical researchers. We’ve got to practice what we preach, and teach what we practice. The legendary football quarterback Johnny Unitas summed it up before every game, after the coaches finished their pep talks. Unitas’s speech was always the same: “Talk is cheap. Let’s play!”
Computers have not only increased the availability of ethical guidance; they have also reshaped the workflow of manuscript preparation, submission, peer review, revision and publication. At one end of the spectrum, your favorite journal may have gone digital by mandating that some or all manuscript-related activities be conducted by e-mail. At the other end, the publisher may require the use of a web-based, graphic interface to handle all phases of submission and review, with e-mail communication augmenting the process. But across this spectrum of modern digital workflows, the common denominator is a greatly reduced role for the nonelectronic exchange of materials.
Clearly, digitization makes the manuscript production-to-publication cycle more convenient for all parties, especially authors. You’ll have to accept this as my assertion based on experience and intuition, because data to support the claim are scarce. But if you published papers 20 years ago, and still do so today, you’ll know what I mean. I believe that most scientists do not miss drawing figures (even with early computer programs), photocopying manuscripts and mailing printed papers.
But the notion of convenience should not be confused with speed. To be sure, the time between acceptance and publication has gotten shorter: just a few weeks for online articles, compared to months for print articles. But there’s also the issue of the time from submission to acceptance. If you look at papers published online, you’ll likely find that the time between submission and acceptance can be a few months, sometimes longer. The obvious interpretation is that peer review can take varied, unpredictable and sometimes excessive amounts of time. This may reflect a process that is desirably rigorous. But excessive submission-to-acceptance time can also be a sign of the human foibles of over-commitment or procrastination. Evidently, even the most attractive graphic interface can’t overcome these age-old problems among authors, editors and reviewers.
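How might such intervals be measured? The following minimal Python sketch, using entirely hypothetical dates rather than any journal’s actual records, shows one way to compute the two lags discussed above.

```python
# A minimal sketch with hypothetical dates: compute the median
# submission-to-acceptance and acceptance-to-publication intervals.
from datetime import date
from statistics import median

# Hypothetical records: (submitted, accepted, published online)
papers = [
    (date(2011, 1, 10), date(2011, 4, 2), date(2011, 4, 20)),
    (date(2011, 2, 1), date(2011, 7, 15), date(2011, 8, 1)),
    (date(2011, 3, 5), date(2011, 5, 30), date(2011, 6, 12)),
]

review_lags = [(accepted - submitted).days for submitted, accepted, _ in papers]
production_lags = [(published - accepted).days for _, accepted, published in papers]

print("Median submission to acceptance:", median(review_lags), "days")
print("Median acceptance to publication:", median(production_lags), "days")
```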
The online revolution has changed the way papers are read and evaluated after they’re published. Studies were once critiqued in formal letters and reviews—often published in the same journals as the original papers—and in journal clubs and discussions. The visibility and pace of those critiques have exploded with the advent of blogs and other online venues.
For example, an initiative called Faculty of 1000 was established in the early 2000s as a corporate endeavor to provide post-publication peer review online. The program selects and enlists scientists, termed faculty, to scan the literature and comment each month on the papers they consider most interesting. Their reviews, which highlight good articles and provide constructive criticism, are available by subscription. Subscribers may also log in and comment on any evaluated article, but the site is systematically monitored for inappropriate commentary. Abusive, defamatory or otherwise offensive remarks can be reported and may be deleted by the service provider.
Such consistent controls are not necessarily in place on independent blogs, which have also taken on an increasingly visible role in post-publication peer review. One notable example began to unfold in 2010, when Science magazine published online a research paper about a bacterium isolated from arsenic-rich lake sediments. The authors reported that this organism could incorporate arsenic, instead of the usual phosphorus, into its DNA. The biological implications of the work were huge, and the paper got considerable exposure in the media. It also attracted scrutiny from scientists who used their blogs to offer a variety of critical comments. But these evaluations were met with disdain by the paper’s authors, who said they would only respond to critiques that had been peer reviewed and vetted by Science. Rather than engage with their critics, the authors simply asked scientists to work to reproduce the controversial results. This attitude prompted the journal Nature to publish an editorial asserting that there is indeed a role for blogging in the assessment of published results. Still, the speed and directness of unvetted digital criticism caught some scientists off guard. A subsequent news article in Nature, cleverly titled “Trial by Twitter,” claimed that “blogs and tweets are ripping papers apart within days of publication, leaving researchers unsure how to react.”
That may be so. But fast-forward a few months for a completely different take on the social networking of scientific data. One of the largest outbreaks of potentially lethal Escherichia coli infections began in Germany in May 2011. In about six weeks, there were more than 3,000 cases and 36 deaths. Scientists on multiple continents shared biological samples and used online media such as Twitter, wikis and blogs to compile their data. Within 10 days of the recognition of the outbreak, the entire genomic sequence of one of the isolated E. coli strains was available on the Internet. As I write, data analysis is still underway, but the collaborative research has already yielded new and valuable information about the E. coli strains involved. The speed and real-time availability of this genetic analysis is unprecedented and underscores a powerful use of digital social media in the dissemination of new scientific knowledge.
Earlier in this essay, I discussed online publication of articles as a service provided by publishers to complement the printed versions of their journals. This form of electronic publication still requires users to have a subscription or an institutional site license to access online articles. But a second, slower-growing form of publishing, which also began in the 1990s, lets readers access online articles for free. The business model for such publications depends on the relatively low cost of distributing digitally encoded articles. And the expenses of doing so are borne by the authors themselves or by their academic institutions or funding agencies.
At a meeting in 2003, a group of scientists, librarians and publishers wrote a document now commonly known as the “Bethesda Statement on Open Access Publishing,” which has been widely embraced among open access (OA) publishers. The statement lays out two conditions that define OA publication. First, it grants “to all users a free, irrevocable, worldwide, perpetual right of access” to the published work, as well as a license to “copy, use, distribute, transmit and display the work publicly,” to distribute derivative works based on the original and to make a few copies for personal use. Second, it promises that the work will be deposited in at least one online repository that is supported by an academic institution or other “well-established organization” that supports open access and long-term archiving. In practice, this definition has been widely adopted, with individual variations among publishers.
Although it had a slower start than subscription-based online publishing, the OA model is now viable and growing. A website called The Directory of Open Access Journals reported the existence of 6,671 OA journals in all scholarly fields at the end of June 2011. And in a study published this year in the OA journal PLoS ONE, Mikael Laakso and colleagues reviewed 16 years of OA publishing, from 1993 to 2009. They report that the number of such journals grew 18 percent per year, while the total number of scholarly journals grew only 3.5 percent per year. The numbers of OA journals and articles show impressive growth curves over the time frame. But despite the increase, these articles accounted for only about 8 percent of all scholarly papers published in 2009.
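To get a feel for what those growth rates imply, here is a back-of-the-envelope calculation of my own (an illustration, not Laakso and colleagues’ method) comparing 18 percent and 3.5 percent annual growth compounded over the 16-year window.

```python
# Compound two annual growth rates over the 1993-2009 window (16 years).
oa_rate, overall_rate, years = 0.18, 0.035, 16

oa_factor = (1 + oa_rate) ** years            # roughly a 14-fold increase
overall_factor = (1 + overall_rate) ** years  # roughly a 1.7-fold increase

print(f"OA journals grow by a factor of about {oa_factor:.1f}")
print(f"All scholarly journals grow by a factor of about {overall_factor:.1f}")
```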
Other studies have shown that researchers are well aware of OA journals and increasingly publish their work in them. Perceived advantages include free accessibility to users and the ability to reach a wide readership. The validity of criticisms, such as diminished prestige and lower peer-review quality, remains unresolved. The acceptance of OA publishing and the success of the enterprise may be best expressed by the fact that PLoS ONE published 6,749 papers in 2010, making it the world’s largest journal that year. Today, free access is part of the culture of scientific publication. But it will be interesting to follow the trends as new publishers enter the marketplace and authors develop a better sense of the desirability (or undesirability) of publishing in OA journals.
OA publication is often referred to as an “author pays” model, in contrast to the alternative, in which the user pays a fee to gain access. In truth, authors have long paid to publish their work. In 1991, authors often spent several hundred dollars on page charges and reprint fees for a single paper. But for OA publication, the author pays even more. These journals charge an article-processing fee that can range from about $1,000 to several thousand dollars, depending on the journal. If you publish in a subscription-based journal that distributes articles in print and online, and also offers an OA option, you rack up additional fees. Today, page charges are $50 to $100 per page, which would get you into print and online and make your paper accessible to journal subscribers. Then, you might want to publish supplemental information on the journal’s website, for a surcharge of up to several hundred dollars. Finally, you might decide to make your article freely available from the moment it appears online, a decision that may cost you several hundred to thousands of dollars more. Today, as in the past, most journals waive or reduce page charges or OA fees if the author demonstrates that he or she cannot afford them.
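To make that arithmetic concrete, here is a small, purely hypothetical tally in Python for a 10-page paper in a subscription journal that also offers an OA option; the figures come from the ranges quoted above, not from any particular journal’s fee schedule.

```python
# Hypothetical cost tally for one 10-page paper; every figure below is
# an assumption drawn from the ranges mentioned in the text.
pages = 10
page_charge = 75        # assume $50 to $100 per page; midpoint used here
supplement_fee = 300    # supplemental-material surcharge, up to several hundred dollars
open_access_fee = 2000  # optional fee to make the article freely available at once

total = pages * page_charge + supplement_fee + open_access_fee
print(f"Estimated cost to publish: ${total:,}")  # $3,050 in this example
```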
Although OA journal publishing is the dominant means for putting scientific results into the public domain, other strategies do exist—and all are facilitated by online technology. Journal articles may be deposited into any of several public-domain digital archives such as PubMedCentral. Subscription-based journals may allow authors to pay an extra fee to make their papers freely available online. Finally, researchers may post OA manuscripts on non-commercial servers such as arXiv.org, or on personal or institutional websites.
For more than 20 years, we have witnessed the profound effects of digital technology on scholarly publication. Changes in logistics and culture have been diverse and numerous. But I would argue that today, open access is the central issue in the marriage of publication to the pixel. It may be growing too fast for some and not fast enough for others, but it is growing nonetheless. I believe it is here to stay. It takes multiple forms, from journals that exclusively practice free-to-user availability, to individual investigators who maintain online libraries of their own published work. Will one model dominate over time? Are there more models to come? If the past 20 years are any predictor, the interplay of imagination, market forces and evolving digital technology will continue to change the publication landscape.
In the meantime, the scholarly community has a role to play in the development of the OA movement. That community includes authors, publishers, scientific societies, librarians and computer scientists. OA journal publishing should be subjected to ongoing evaluation to measure its impact, to address problems and to improve the platform for all its users. There should be transparent assessment of performance metrics such as article processing times, citations, peer-review quality and the costs to those involved.
Once such evaluations have been performed, they may help answer a growing host of questions: Is a goal of 100 percent open access reasonable or desirable? Should researchers embrace some forms of OA publication and not others? What about server space, backup and security issues specific to online-only journals? As we move toward a more OA culture, what role do—and should—printed journals have? Should there be more proactive education about OA publication? Do we need to be more forward-thinking about who should pay for publication costs? Many research funding agencies do pay grantees’ publication fees, but with OA publishing, the budget may have to increase. Should our institutions step up to the plate with their checkbooks? To gain maximum effect, the analyses that address these questions should be made by parties devoid of conflict of interest, and—in the spirit of open access—the results should be placed in the public domain.
I thank Andrekia Branch for her help in manuscript preparation and Glen Kellogg for helpful comments.
Click "American Scientist" to access home page