{"id":3643,"date":"2018-01-26T00:00:00","date_gmt":"2018-01-26T00:00:00","guid":{"rendered":"http:\/\/pl-asapbio.local\/digital-age\/"},"modified":"2025-03-28T21:38:13","modified_gmt":"2025-03-28T21:38:13","slug":"digital-age","status":"publish","type":"post","link":"https:\/\/asapbio.org\/digital-age\/","title":{"rendered":"Scientific Publishing in the Digital Age"},"content":{"rendered":"<p><em>By Bodo M. Stern and Erin K. O\u2019Shea<\/em><br \/>\n<em> Howard Hughes Medical Institute<\/em><br \/>\n<em> Chevy Chase, Maryland<\/em><\/p>\n<p><iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/EmJlamBeKiM?rel=0\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<p><strong>Summary<\/strong><\/p>\n<p>Life scientists feel increasing pressure to publish in high-profile journals as they compete for jobs and funding. While academic institutions and funders are often complicit in equating journal placement with impact as they make hiring and funding decisions, we argue that one of the root causes of this practice is the very structure of scientific publishing. In particular, the tight and nontransparent link between peer review and a journal\u2019s decision to publish a given article leaves this decision, and resulting journal-specific metrics like the impact factor, as the predominant indicators of quality and impact for the published scientific work. As a remedy, we propose several steps that would dissociate the appraisal of a paper\u2019s quality and impact from the decision to publish it. First, publish peer reviews, whether anonymously or with attribution, to make the publishing process more transparent. Second, transfer the publishing decision from the editor to the author, removing the notion that publication itself is a quality-defining step. 
And third, attach robust post-publication evaluations to papers to create proxies for quality that are article-specific, that capture long-term impact, and that are more meaningful than current journal-based metrics. These proposed changes would replace publishing practices developed for the print era, when quality control and publication needed to be integrated, with digital-era practices whose goal is transparent, peer-mediated improvement and post-publication appraisal of scientific articles.<span id=\"more-1823\"><\/span><\/p>\n<p><strong>Introduction<\/strong><\/p>\n<p>Scientific publishing in the life sciences is going through a period of experimentation and questioning not seen since the appearance of open access in the early 2000s, when new online-only and open access journals challenged the traditional model of print distribution and subscription fees. More recent experiments in scientific publishing include preprints, open models of peer review, and micro-publications (the publication of smaller units, such as individual observations). Yet most scientific work in the life sciences is still disseminated following a process inaugurated by the Royal Society in the 17th century. This process (Fig. 1A) starts with authors submitting a manuscript to a journal of their choice, at which editorial selection and peer review culminate in an editorial thumbs-up\/thumbs-down decision that determines whether the article is accepted for publication or rejected. If it is rejected, the author starts all over again at a different journal, typically until the paper gets accepted for publication somewhere. 
Subscription fees (in the form of single user licenses and institutional site licenses) and open access fees are the two prevailing payment options to compensate publishers for their services.<\/p>\n<p>It made sense for publishers to charge consumers subscription fees in exchange for hard copies of journals and to establish editors as the gatekeepers of publishing, when printing and distributing scientific articles was expensive and logistically challenging. These limitations no longer apply. We propose here to reconsider hallmarks of this traditional publishing process \u2013 the subscription business model and the roles of journal editors, reviewers and authors \u2013 with the goal of better aligning scientific publishing with a digital environment and with a scientist-driven research workflow.<\/p>\n<p><em>Open access: necessary but not sufficient <\/em><\/p>\n<p>Advocates of open access have long noted problems with the subscription model, including the following:<\/p>\n<ul>\n<li>The academic research community, including research institutions and funders, consider their research output a public good. The subscription paywall constitutes a powerful conflict with their mission of sharing research data and tangible products openly and in a timely manner. Payment for research articles should therefore not come from consumers, since that limits opportunities to reproduce and build upon the research for future discoveries.<\/li>\n<li>The subscription price that publishers charge is inflated because it is not based on the specific value that publishers add. 
By imposing a toll for access to scientific articles that were created and evaluated by scientists for free, publishers hold these scientists\u2019 products \u201cfor ransom,\u201d charging for the whole product instead of for the publisher\u2019s specific contributions to that product.<\/li>\n<\/ul>\n<p>Future changes in scientific publishing should strive to ensure that research output is freely available from the time of publication (see https:\/\/oa2020.org\/). Our arguments here extend beyond open access, however. While we consider open access necessary, it is not sufficient: an author\u2019s payment for publication in the current open access model creates a strong incentive for publishers to accept papers independent of their quality, elevating the risk that publications become paid advertisements. The rise of so-called predatory open access journals, with fake editorial boards and fake peer review, is evidence that this risk is already becoming reality. The best insurance against open access fees compromising quality control at journals would be to make the quality control process itself transparent. Increased transparency in the publishing process is a recurring theme in this perspective and will reappear as a proposed solution to challenges with the current journal-based publishing process, which we describe next.<\/p>\n<p><em>Impact factor and the academic incentive system: the good, the bad, and the ugly <\/em><\/p>\n<p>Scientific articles are a major intellectual output of the research enterprise and an important basis for evaluating the productivity and impact of individual scientists. Expert evaluation is and will remain the gold standard for judging scientists and their output. However, we recognize that additional indicators of research quality and impact are necessary and useful; a shorthand gauge of quality helps scientists and nonscientists alike to identify high-quality scientific work amid the vast sea of published manuscripts. 
At the moment, the journal name is used as such an indicator of quality: the assumption is that articles are of high quality and impact if they are published in journals that are perceived as prestigious. Journal editors set standards for their journals and choose what to publish accordingly. Journals like <em>Cell<\/em>, <em>Science,<\/em> and <em>Nature,<\/em> which are considered the most prestigious journals in the life sciences, aim to publish the most highly citable articles in each field, since the number of citations of an article is a measure of its influence. The journal metric that is most widely used to signal this prestige is the journal impact factor \u2013 the average of citations in a given year garnered by all articles published in the journal over the two previous years. The impact factor has become such a predominant metric because it discriminates well among journals. Journals like <em>Cell<\/em>, <em>Science,<\/em> and <em>Nature <\/em>publish, on average, more highly cited papers and thus have a higher impact factor. But like any metric that relies on the mean, this one is easily skewed by outliers, such as heavily cited papers. The distribution of article citations actually overlaps significantly between journals that have markedly different impact factors (Larivi\u00e8re et al., bioRxiv, 2016; Kravitz and Baker, 2011). A glass-half-full perspective emphasizes that editors of journals with a high impact factor manage to attract \u2013 at least on average \u2013 articles that end up being more highly cited; a glass-half-empty perspective highlights that they certainly don\u2019t do it consistently.<\/p>\n<p>Why do all journals publish articles of varying influence? There are three reasons. First, citation rates differ significantly among fields, with the number of scientists in a field and its translational potential typically increasing citation rates. 
Second, the opinions of the chosen two to four peer reviewers for a given paper may, by chance, not be representative and thus may lead to an erroneous publishing decision. Third, nobody \u2013 not even experts or editors \u2013 has a crystal ball to accurately predict at the time of an article\u2019s publication what its eventual impact will be. In the end, only time, replication, and extension of the research data can truly validate experimental findings and conclusions and determine their long-term impact. These inherent limitations explain why scientific journals will always publish papers that vary in influence, despite efforts by their editors to try to ensure consistency.<\/p>\n<p>The variability in the influence of the articles in a given journal does not cause damage per se. The damage comes from an academic incentive system that equates the journal name, specifically the corollary metric of the journal impact factor, with a given paper\u2019s quality and impact \u2013 in effect devaluing those papers in lower-impact-factor journals that are actually of high impact and overvaluing papers of low quality or impact in journals with a high impact factor. Journals promote their impact factor, and many academic institutions and funders are, unfortunately, complicit in using it for hiring and funding decisions. The combination of long-term growth of the biomedical research enterprise and recent stagnation in federal funding has fueled hyper-competition for research funding, jobs, and publication in high-impact-factor journals and has rendered the impact factor an even more corrosive indicator of research quality and impact. It is particularly alarming that the next generation of scientists perceives a need to publish in <em>Cell<\/em>, <em>Science,<\/em> and <em>Nature<\/em> to be competitive for faculty positions. 
Evaluating scientists based on <em>where<\/em> they publish, rather than <em>what<\/em> they publish, weakens important elements of the biomedical research enterprise, including integrity, collaboration, and acceleration of progress. It shapes the behavior of scientists in undesirable ways, tempting them to exaggerate their work\u2019s impact, to choose research topics that are deemed suitable for top journals, and to refrain from open sharing of data and other research outputs.<\/p>\n<p><em>Integration of peer review with the publishing decision: thumbs-up sums it up<\/em><\/p>\n<p>Why does the academic incentive system rely on journal-based metrics like the impact factor when those metrics are inherently limited in their ability to evaluate the contributions of individual scientists? One major reason is the very structure of the publishing process, in particular the nontransparent integration of peer review with the publishing decision. Most journals keep peer reviews a confidential exchange among editors, reviewers, and authors, which gives editors flexibility to use their own judgment in deciding what to publish. It leaves their decision to publish as the only visible outcome of the evaluation process and hence the journal name and its impact factor as the only evident indicators of quality. In addition to encouraging the widespread use of impact factor in the evaluation of scientists, the tight and nontransparent linkage between peer review and the editorial decision contributes to other serious problems in publishing:<\/p>\n<ul>\n<li>The main purpose of peer review should be to provide feedback to authors in order to improve a manuscript before publication. But, in service of the publishing decision, peer review has morphed into a means of assisting editors in deciding whether a paper is suitable for their journal. Scientists may disagree on technical issues, but at least they can \u201cagree to disagree\u201d and keep the technical discourse constructive. 
Assessing whether a paper is \u201cnovel enough\u201d or \u201cabove the bar\u201d for a journal tends to be the most acrimonious and frustrating aspect of the peer review process because it is more subjective. It is important to identify papers with broad impact, and peer reviewers can contribute to that appraisal by properly describing the scientific context of the work in question. But the work\u2019s suitability for a given journal should not be a concern for peer reviewers.<\/li>\n<li>The intense competition for publication in high-impact-factor journals likely increases how often and to what extent scientific articles are revised before publication. While most papers are significantly improved through revisions suggested by reviewers and editors, there is a sense among scientists that a significant fraction of the time spent on revisions, resubmissions, and re-reviews is not adding sufficient value and needlessly delays the sharing of findings.<\/li>\n<li>Important long-term evaluations of scientific work are not captured if the major quality controls conclude with the publishing decision. Experts know which papers in their field have stood the test of time. But for non-experts, it is more difficult to discern which high-profile papers have been built upon and which turned out to be dead ends. Longer-term evaluation is critical today for other reasons: the shift from data generation to data analysis as the rate-limiting step in research, and the increasingly interdisciplinary nature of research, pose challenges for even the best peer reviewers (Kaelin, <em>Nature<\/em>, 2017). How can we expect peer reviewers to verify the accuracy of all data and conclusions when this job could take a significant fraction of the time it took the authors to produce their analysis in the first place? 
While future technological solutions may considerably speed up the reanalysis of research data (see https:\/\/mybinder.org\/ for an example of such a solution), it is time to acknowledge that peer review before publication is just the initial step in scientific evaluation.<\/li>\n<\/ul>\n<p>In summary, the current journal-based publishing system drives the use of the impact factor in the evaluation of scientists, it renders the peer review process more adversarial than it needs to be, it delays dissemination of research findings, and it fails to capture the long-term impact of scientific articles.<\/p>\n<p><strong>Recommendations<\/strong><\/p>\n<p>We propose three changes to address the shortcomings described in the previous sections. While these changes could be implemented independently, together they promise to significantly increase transparency and efficiency in scientific publishing:<\/p>\n<ol>\n<li>Improve the peer review process<\/li>\n<li>Put dissemination of scientific articles in the hands of authors<\/li>\n<li>Develop a system of post-publication article evaluation<\/li>\n<\/ol>\n<p><em>Improve the peer review process<\/em><\/p>\n<p>Make peer review transparent: Currently, the publishing decision itself is the quality proxy for scientific articles. Publishing the peer review reports on a manuscript, anonymously or with attribution, would change that. Scientists would be able to take into account the peer reviews, not just the editor\u2019s decision, when they evaluate the work of another researcher. As pointed out above, transparent peer review could also address the serious risk that open access publications become paid advertisements.<\/p>\n<p>Ensure higher-quality peer reviews: Publishing peer reviews would likely motivate peer reviewers to more consistently execute their role well. Two other measures would further improve the quality of peer reviews. 
First, consultations among peer reviewers \u2013 a practice pioneered by journals such as the <em>EMBO Journal<\/em> and <em>eLife<\/em> \u2013 could effectively eliminate unreasonable reviewer demands. (It is also an ideal vehicle to introduce early-career scientists to the art of peer reviewing \u2013 what better way to learn than by consulting with a seasoned peer reviewer?) Second, peer reviews should focus on the technical quality and scientific background of the submitted work. The goals are to evaluate whether the conclusions of the article are warranted and to provide context that can serve as a scientific foundation for any subsequent evaluation of a paper\u2019s broader impact, by either an editor or a reader. By sidestepping the suitability of the work for a particular journal, peer review would become more constructive and, in principle, transferrable among journals.<\/p>\n<p>Give recognition for peer review: We recognize that many scientists are concerned that peer reviewers would not be as forthcoming with their critiques if signing reviews becomes compulsory. However, we hope that over time, reviewers will increasingly opt to sign their reviews. Signing of peer reviews aligns better with the notion that peer review is a scholarly activity that deserves credit. Considering that peer review is such a labor-intensive activity and a cornerstone of the scientific enterprise, we need to devise better ways to recognize scientists who contribute outstanding peer review services to the scientific community. Peer reviews should be given their own DOI, as they already are by some journals with open peer review practices and by <a href=\"https:\/\/publons.freshdesk.com\/support\/solutions\/articles\/12000022370-how-do-i-get-a-doi-for-my-review-\" target=\"_blank\" rel=\"noopener\">Publons<\/a> (Lin, Crossref blog, 2017), making it possible to cite peer reviews and include them on the peer reviewer\u2019s CV or ORCID profile. 
Widespread signing of peer reviews would also enable a community-wide analysis of peer review patterns, informing future suggestions for peer review improvements.<\/p>\n<p><em>Put dissemination of scientific articles in the hands of authors<\/em><\/p>\n<p>Funders entrust scientists with the execution of research. This trust in the creativity and independent judgment of individual scientists or groups of scientists is at the heart of the research enterprise. The research article is a major output, and often the culmination, of this research. Curiously, the trust in the researcher breaks down at the point of dissemination, since we have transferred the decision to publish to editors. Why do we trust scientists in the design and the execution of their research yet insist that editors should decide when this research is ready to be published? Or, to look at the flip side, if we feel so strongly that an independent party like an editor should make the publishing decision, why don\u2019t we ask independent parties to oversee experimental design and execution as well? If we agree to trust scientists to do research, then we should also trust them to decide when to publish that research.<\/p>\n<p>Making authors the publishers of their own work has additional benefits. For starters, it solves the problem that we identified at the outset: that the journal\u2019s publishing decision is used as an indicator of quality. Since authors have such a clear self-interest in publishing their own work, nobody would equate the author\u2019s decision to publish with a stamp of quality. This stamp of quality has to come from elsewhere, including the published peer reviews and post-publication evaluations described below. In addition, the peer reviewers would direct their comments to the authors, focusing their peer reviews on improving the manuscript as opposed to advising the editor on suitability for a journal. 
The overall publishing experience for authors would improve significantly, since they would publish when they considered the work to be ready. The time and resource savings would be significant: authors wouldn\u2019t have to perform experiments that they deem unnecessary; consecutive submission and evaluation at several journals would decrease; and the time-to-publish interval, which keeps increasing due to demanding revisions and multiple rounds of review, would decline, since authors would control publication.<\/p>\n<p>A major concern with this model is that an author is not as impartial as an editor. The best insurance against authors\u2019 poor decision-making is the link between their published work and their reputation. Few authors will knowingly want to put out poor-quality work. Sometimes authors may want to rush publication of a competitive story, but preprint servers can now disseminate papers so much faster that there may be less pressure to rush publication after peer review. The peer reviews themselves will be a powerful restraint on the author, since they will be published together with the paper (see above). An author may, for example, prefer to withdraw a paper submitted to a journal if the reviews reveal fundamental flaws that cannot be addressed with revisions. And if an author decides to publish a paper despite serious criticism from reviewers, at least those criticisms will be accessible to readers, who can decide for themselves whether to side with the author\u2019s or the reviewers\u2019 point of view. This is arguably better than the situation today, in which authors can publish any work somewhere (though not necessarily in the journal of their choice), typically without critical reviews that might highlight potential shortcomings.<\/p>\n<p>Author-driven publishing is already practiced at preprint servers and publishing platforms such as F1000Research. 
The difference between a publishing platform and a journal is that the author replaces the editor in all major gatekeeper roles (Fig. 1C): the submitted article is immediately published as a preprint; the author-selected peer reviewers evaluate the work; the attributed peer reviews are published together with the revised manuscript when the author decides to publish; and version control allows the author to update the manuscript (Tracz, F1000Research, 2016). All versions and all peer reviews are available under an open access license. Such articles are also indexed in PubMed, the National Institutes of Health\u2019s searchable database of life science research articles, if at least two reviewers have signed off that the work is technically sound. Several funders, including the Wellcome Trust and the Bill &amp; Melinda Gates Foundation, have established open research platforms based on the F1000Research model. Future experiments with publishing platform models may differ in important ways \u2013 such as how they select articles for peer review (will every submission be reviewed?) and how they select peer reviewers (should the author or an independent entity select reviewers or should peer reviewers self-select?). Publishing platforms, where authors replace editors as gatekeepers, are an exciting model for scientific publishing in the future because they provide an efficient and fully transparent completion of the research workflow and best satisfy requirements for the open sharing of research outputs.<\/p>\n<p>Where does this leave journals and editors? We envision that journals can transition toward a publishing platform without giving up all editorial gatekeeper roles at once. 
For example, journal editors could retain the first editorial gatekeeper function of selecting articles for journal-orchestrated peer review (editorial triage), but relinquish the publishing decision to authors on the condition that the peer reviews and any author responses will be published as well (Fig. 1B). Journals would basically commit to the publication of all peer reviewed articles. In rare cases, editors may need to step in and stop publication of an article when the peer review process reveals that publication would be inappropriate \u2013 for example, in cases of plagiarism, data fabrication, violation of the law, or reliance on nonscientific methods.<\/p>\n<p>The editorial triage step serves the purpose of allocating peer review resources wisely. Rigorous peer review is time-consuming and particularly important for scientific articles that could have a broad impact, because validation from experts allows scientists from other fields to build on the data and conclusions. More specialized research articles may not need the same level of peer review, since they are mostly read by experts who can evaluate the work themselves. Editorial triage at prestigious journals is the traditional method that ensures that reviewer resources are used only for scientific work that is of sufficiently broad interest. Directing peer reviewer resources to broad-interest articles that need them most is currently not addressed at publishing platforms where all papers are reviewed equally. It may be possible over the long term to replace this editorial gatekeeper role if it becomes feasible and culturally acceptable to use community approaches to select works of broad impact for detailed review. At that point, the journal would transition to a full-fledged publishing platform.<\/p>\n<p>We believe that academic publishers like scientific societies are ideally placed to experiment with this transition from journal to publishing platform. 
Typically run by practicing scientists, these journals may have a natural affinity for the concept that authors should bear more responsibility and rights for what and when they publish. But equally important, a transition to author-determined publishing offers these journals a path to financial sustainability in an open access context. At the moment, society journals are between a rock and a hard place. They can\u2019t afford to switch to open access, since the open access fees required to replace their subscription income would be too high for authors. On the other hand, they feel considerable pressure from for-profit publishers who are launching competing journals at breakneck speed. Academic publishers risk becoming obsolete if they don\u2019t adjust. The proposed author-driven publishing model provides such an opportunity. A journal\u2019s commitment to publish all peer reviewed articles immediately increases the number of published papers and opens the door to charge for peer review instead of for publication. The journal would thus receive income on a larger share of the manuscripts it handles, reducing per-article open access fees.<\/p>\n<p>Publishing platforms and journals where authors decide when to publish explicitly forgo the editorial selection that currently occurs after peer review \u2013 and, with it, the (limited) ability to enforce a quality standard. 
Journals and editors can then focus on curating published articles through post-publication evaluation, which we discuss next.<\/p>\n<p><strong>A<\/strong><br \/>\n<a href=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/digitalAge1A-2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1826 size-large\" src=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/digitalAge1A-2-1024x519.png\" alt=\"\" width=\"625\" height=\"317\" data-id=\"1826\" srcset=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/digitalAge1A-2-1024x519.png 1024w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/digitalAge1A-2-300x152.png 300w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/digitalAge1A-2-768x390.png 768w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/digitalAge1A-2.png 624w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\"><\/a><\/p>\n<p><strong>B<\/strong><br \/>\n<a href=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1B-2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-1828\" src=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1B-2-1024x521.png\" alt=\"\" width=\"625\" height=\"318\" data-id=\"1828\" srcset=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1B-2-1024x521.png 1024w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1B-2-300x153.png 300w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1B-2-768x391.png 768w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1B-2.png 624w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1B-2.png 1300w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1B-2.png 650w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\"><\/a><\/p>\n<p><strong>C<\/strong><br \/>\n<a href=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1C-2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large 
wp-image-1825\" src=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1C-2-1024x527.png\" alt=\"\" width=\"625\" height=\"322\" data-id=\"1825\" srcset=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1C-2-1024x527.png 1024w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1C-2-300x154.png 300w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1C-2-768x395.png 768w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge1C-2.png 624w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\"><\/a><\/p>\n<blockquote>\n<p><strong>Fig. 1 Current and proposed publishing models for the life sciences<\/strong><\/p>\n<p><strong>A.<\/strong><em>The traditional publishing process. <\/em>Editors fulfill two critical gatekeeper functions: editorial triage and the decision to publish. The box (\u201cblack box\u201d) signifies that the step between peer review and the publishing decision is typically confidential, making the published article the only visible outcome of the processes in this box. <strong>B. <\/strong><em>Author-driven publishing at journals.<\/em> Editors still conduct editorial triage, but authors decide what and when to publish. The peer reviews are published together with the article. Tags are added by editors and other scientists to facilitate post-publication evaluation. <strong>C. <\/strong><em>Author-driven publishing on publishing platforms.<\/em> The star indicates that peer reviewers can be selected in various ways: by authors, by self-selecting reviewers, or by yet-to-be-determined community approaches. As in B, peer reviews are published and tags evaluate articles post-publication. 
Editors are not listed for C, since they do not fulfill any gatekeeper roles, but they can contribute to post-publication evaluation.<\/p>\n<\/blockquote>\n<p><em>Develop a system of post-publication article evaluation<\/em><\/p>\n<p>The current publishing system is particularly ill-suited to adopt post-publication evaluation for articles, since the major quality control steps conclude at the time of publication. With the life sciences becoming increasingly interdisciplinary and data-rich, it is critical to supplement published peer review reports with post-publication measures of validation. These measures, which we refer to as \u201ctags,\u201d could capture, in shorthand, a particular aspect of a paper \u2013 such as its technical quality, intellectual rigor, or breadth of interest. A precedent for tags exists in the F1000Prime service, in which experts identify articles they consider of most interest to their field \u2013 in effect \u201ctagging\u201d these articles. Similarly, badges to acknowledge open practices have been attached to articles at the journal <em>Psychological Science,<\/em> where they may contribute to an increase in data sharing (Kidwell et al., <em>PLOS Biol<\/em>, 2016).<\/p>\n<p>In post-publication evaluation, we envision that tags could be created by journal editors, perhaps initially to reflect aggregated reviewer scores. Tags have the capacity to extend the validation process well beyond the initial peer reviews and to capture article-specific indicators, both quantitative and qualitative, that reflect an article\u2019s value to the scientific community over longer time periods. Tags can come in many flavors, taking full advantage of internet capabilities through crowdsourcing and analytics, while still preserving the critical input from professional and academic editors as adjudicators of quality. Some tags could be created automatically and standardized across journals. 
For example, tags could capture the long-term impact of a paper through a metric like the relative citation ratio \u2013 an article-specific metric recently developed at the National Institutes of Health (Hutchins et al., <em>PLoS Biol<\/em>, 2016). Other tags could reflect data downloads and reuse. Tags could also be useful in tracking the reproducibility of a published study: scientists who were successful (or unsuccessful) in reproducing or building upon the study could contribute to a crowdsourced reproducibility tag and link to their follow-up study. We should, of course, be mindful that tags, like any proxy, could be manipulated. But at least tags would be superior to journal-based metrics like the impact factor, since they would be article-specific and could change over time to reflect the changing impact of a paper on the scientific community.<\/p>\n<p>Tags take on particular significance in the proposed author-driven dissemination model. If authors decide when to publish, we lose the impartial voice of editors in the publishing decision. This editorial role can be executed through tags. For example, if authors decided to publish an article against reviewers\u2019 recommendations, the editors wouldn\u2019t stop publication but could attach a \u201cred flag\u201d tag that highlights the controversial nature of the paper and encourages readers to take a closer look at the published peer reviews. Today, editorial opinions factor into the publishing decision in important but opaque ways. \u201cEditor\u2019s choice\u201d tags could capture these opinions and might differ from peer reviewer scores. Importantly, we could evaluate the predictive power of all these tags over time, identifying those scientists with a particularly good nose for high-impact work. We feel that the tag system is an exciting new approach that could capture the important role that editors play in curating the scientific literature.
Unlike the primary research that we believe should be disseminated in an open access format, we think some curation services could be subscription-based and conducted by commercial and academic publishers alike.<\/p>\n<p>Tags could stratify published papers within a journal and help readers gauge the quality of articles. In contrast to the offerings of traditional publishers, who have created families of journals that cascade from high to low selectivity, tags could create, in effect, a quality cascade <em>within<\/em> a journal, sorting papers according to particular quality tags. We think that this type of internal cascade could be a much more efficient and effective way to differentiate published papers than the current journal system; a paper would typically have to be reviewed just once and would then rise or sink in importance based on its post-publication tags. A re-review may occasionally be in order to replace unreasonable reviews, but the existence of long-term impact tags could vindicate a paper, even if it erroneously received overly negative tags immediately after publication.<\/p>\n<p>The shift to author-driven publication and the introduction of post-publication tags would change the rules of the game, since the act of publication would no longer be a quality-defining step. If these tags prevail as indicators of scientific quality, scientists will care about them and will no longer see a need to publish papers in extremely selective journals. Tomorrow\u2019s scientists will not associate quality with a particular journal name but with the peer reviews and the tags that are attached to a paper (Fig. 
2).<\/p>\n<p><a href=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge2-2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1827 size-full\" src=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge2-2.png\" alt=\"\" width=\"790\" height=\"424\" data-id=\"1827\" srcset=\"https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge2-2.png 790w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge2-2-300x161.png 300w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge2-2-768x412.png 768w, https:\/\/asapbio.org\/wp-content\/uploads\/2025\/03\/DigitalAge2-2.png 624w\" sizes=\"auto, (max-width: 790px) 100vw, 790px\"><\/a><\/p>\n<p style=\"padding-left: 30px;\"><em><strong>Fig. 2 Summary of current and proposed publishing models<\/strong><\/em><\/p>\n<p><strong>Future outlook<\/strong><\/p>\n<p>The notion that high-impact-factor journals are synonymous with importance is deeply ingrained in the scientific community. Focusing editors on their critical role as curators and restricting their involvement in the publishing decision will therefore be a significant cultural shift. But this shift makes sense: why should the act of publishing still be the main quality-control mechanism in sharing science when publishing itself is cheaper, faster, and easier than ever before? Instead, publishing in the digital age would benefit from more robust and transparent pre- and post-publication evaluation: a peer review process that is recognized and rewarded as a critical scholarly activity and that focuses on improving scientific articles before publication, not on their suitability for a particular journal; and article-specific quality measures that extend beyond the time of publication. These changes would shift publishing and the academic incentive system from journal-based to article-based metrics that better reflect the true contributions of scientists.
The benefits would include a more transparent and efficient publishing system, with robust yet evolvable post-publication evaluation. We believe these changes could put science, and scientists, back at the heart of scientific publishing.<\/p>\n<p><strong>Acknowledgements<\/strong><\/p>\n<p>We thank Kathryn Brown,&nbsp;David Clapham, Gerry Rubin, Sean Carroll, Heidi Henning, Boyana Konforti, Judy Glaven, Janet Shaw, Viknesh Sivanathan (all from HHMI), Mark Patterson (eLife), Robert Kiley (Wellcome), and Jessica Polka and Ron Vale (ASAPbio) for feedback on drafts of this manuscript.<\/p>\n<p><strong>References<\/strong><\/p>\n<p>Ian Hutchins, Xin Yuan, James M. Anderson, George M. Santangelo, \u201cRelative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level,\u201d <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pubmed\/27599104\" target=\"_blank\" rel=\"noopener\"><em>PLoS Biol<\/em><\/a> 2016 Sep 6;14(9):e1002541. doi: 10.1371\/journal.pbio.1002541.<\/p>\n<p>William G. Kaelin Jr., \u201cPublish houses of brick, not mansions of straw,\u201d <em>Nature <\/em>545, 387 (23 May 2017)<em>, <\/em>doi:10.1038\/545387a.<\/p>\n<p><a href=\"http:\/\/www.frontiersin.org\/Community\/WhosWhoDetails.aspx?UID=34923&amp;d=1&amp;sname=DwightKravitz&amp;name=Science\" target=\"_blank\" rel=\"noopener\">Dwight J. Kravitz<\/a> and&nbsp;<a href=\"http:\/\/www.frontiersin.org\/Community\/WhosWhoDetails.aspx?UID=10101&amp;d=1&amp;sname=ChrisBaker&amp;name=Science\" target=\"_blank\" rel=\"noopener\">Chris I. Baker<\/a>, \u201cToward a new model of scientific publishing: discussion and a proposal,\u201d <em>Front Comput Neurosci<\/em>, 2011, <a href=\"https:\/\/doi.org\/10.3389\/fncom.2011.00055\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.3389\/fncom.2011.00055<\/a>.<\/p>\n<p>Mallory C. Kidwell, Ljiljana B. Lazarevi\u0107, Erica Baranski, Tom E.
Hardwicke, Sarah Piechowski, Lina-Sophia Falkenberg, Curtis Kennett, Agnieszka Slowik, Carina Sonnleitner, Chelsey Hess-Holden, Timothy M. Errington, Susann Fiedler, Brian A. Nosek, \u201cBadges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency,\u201d <em>PLoS Biol<\/em> 2016;14(5):e1002456. doi: 10.1371\/journal.pbio.1002456.<\/p>\n<p>Jennifer Lin, \u201cPeer reviews are open for registering at Crossref,\u201d Crossref blog, 2017, <a href=\"https:\/\/www.crossref.org\/blog\/peer-reviews-are-open-for-registering-at-crossref\/\" target=\"_blank\" rel=\"noopener\">https:\/\/www.crossref.org\/blog\/peer-reviews-are-open-for-registering-at-crossref\/<\/a>.<\/p>\n<p>Vincent Larivi\u00e8re, V\u00e9ronique Kiermer, Catriona J. MacCallum, Marcia McNutt, Mark Patterson, Bernd Pulverer, Sowmya Swaminathan, Stuart Taylor, Stephen Curry, \u201cA simple proposal for the publication of journal citation distributions,\u201d <em>bioRxiv<\/em>, Sept 11, 2016, doi: <a href=\"https:\/\/doi.org\/10.1101\/062109\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1101\/062109<\/a>.<\/p>\n<p>Vitek Tracz, Rebecca Lawrence, \u201cTowards an open science publishing platform,\u201d <em>F1000Res<\/em> 2016 Feb 3;5:130. doi: 10.12688\/f1000research.7968.1.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Bodo M. Stern and Erin K.
O\u2019Shea Howard Hughes Medical Institute Chevy Chase, Maryland Summary Life scientists feel increasing pressure to publish in high-profile journals as they compete for [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":2490,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[42,49],"tags":[],"class_list":["post-3643","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-guest-posts","category-peer-review"],"acf":[],"_links":{"self":[{"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/posts\/3643","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/comments?post=3643"}],"version-history":[{"count":1,"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/posts\/3643\/revisions"}],"predecessor-version":[{"id":3644,"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/posts\/3643\/revisions\/3644"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/media\/2490"}],"wp:attachment":[{"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/media?parent=3643"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/categories?post=3643"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/asapbio.org\/wp-json\/wp\/v2\/tags?post=3643"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}