
ASAPbio Response to NIH Request for Information on “Maximizing Research Funds by Limiting Allowable Publishing Costs” (NOT-OD-25-138)

NIH seeks to reduce its publishing expenses, while encouraging high quality research, in order to be good stewards of its limited public funding. Certainly, high costs are one important downside of the current research publishing system. Article processing charges (APCs), in particular, prop up a system that encourages a high volume of article production (for those who can afford to pay), with far less corresponding attention to increasing research quality (Butler et al., 2023). 

Unfortunately, capping APCs is not likely to meaningfully reduce costs (Haustein et al., 2025), and it will not improve research quality. It may even increase costs as currently inexpensive publishers raise their prices to meet the cap. Importantly, a focus on APCs merely entrenches the current system and delays necessary reforms. We argue that rather than a stopgap APC fix, NIH is better positioned to put its weight behind more meaningful research communication reform. Fortunately, solutions already exist that will both save money and accelerate scientific progress: preprints and open evaluation.

Open publishing and open evaluation for a more robust publishing ecosystem

Preprints are research articles published durably and openly on preprint servers like bioRxiv and medRxiv (two of the largest such services in the life sciences). Preprinted articles are free to publish and free to read. They can be made available rapidly, as soon as researchers are ready to share the results of their work. The organizations that maintain the servers have costs to cover to provide their services, but those costs are pennies on the dollar when compared to the cost of APCs (and the cost of traditional publishing more broadly).

Research that is published on preprint servers can also be openly evaluated using a growing variety of services. For instance, PREreview (prereview.org) allows anyone with expertise to provide public (and durable) evaluations of preprints. In another model, overlay journals – run by communities of domain experts – provide open reviews of the latest preprints in their field (see, e.g., Rapid Reviews\Infectious Diseases). Services like these are proving that it is not necessary to rely on gold open access journals to perform important evaluation and vetting functions. 

There are substantial benefits for science that come from unbundling the publication and evaluation functions that journals currently serve. Under the traditional publishing model, research is completed and submitted to a journal, where it sits for several months as it undergoes peer review. Papers are often rejected (frequently for non-scientific reasons, such as page limits), meaning that a paper might undergo several rounds of months-long review processes before ultimately being published. Peer evaluations usually go unseen, because reviews of rejected papers are rarely shared with anyone beyond the submitting authors. The community can only build on and react to these papers many months or years after their completion.

By contrast, in a preprint-centered system, research is shared openly and in a timely way as soon as researchers have a result they are ready to communicate. Evaluation can take place in a variety of forms, including through dedicated evaluation services as well as commenting and other forms of public feedback. Separating the publishing and evaluation functions allows for efficiencies: new evaluation services do not also have to take on publishing functions, and vice versa. Separation also allows for innovation. Improvements to the feedback and evaluation process can be realized when we move beyond the constraints of traditional review-then-publish models, which keep research needlessly out of readers' hands for months or years and then charge a heavy fee for the content to finally be openly available.

Recommendations for NIH

We offer two recommendations for NIH to achieve its goals of reining in expensive publication costs and supporting scientific rigor.

First, we urge NIH to augment its existing policies with additional guidance and encouragement for authors to follow best practices in open research communication. Together with Creative Commons, we recently published a Preprint Policy Framework (ASAPbio, 2025), which spells out recommended policies for funders that want to maximize the return on investment from their grants. Consisting of six components, the framework catalogs funder requirements for grantees to share preprints, retain their copyright, openly license their preprints for maximum reuse potential, time the submission of their preprints prior to any journal submission, acknowledge funders in the preprint, and provide availability statements describing how key research outputs like data and analysis code can be accessed. The biomedical funders Aligning Science Across Parkinson’s (ASAP) and the Gates Foundation endorse it as a model framework, and both require their grantees to adhere to these strong policies. Were NIH to do the same, NIH-funded research would be strongly positioned for a high level of reuse and impact.

Second, we urge NIH to direct resources toward an alternative publishing ecosystem with openly published preprints and open evaluation at its center. Existing services are already well-positioned to deliver on the vision that this alternative ecosystem could provide, and they are growing rapidly. bioRxiv, for instance, currently adds about 4,000 new papers each month, while medRxiv adds about 1,200 per month. Together the two services have published over 365,000 papers since their founding (in 2013 and 2019, respectively). Use of open evaluation services is also increasing; Corker et al. (2024) provide recent information about the growing volume of open evaluation services and platforms. Additional support for these vital community-led initiatives will allow them to scale and reach their full potential to provide a cost-effective and robust alternative to APC-based publishing.

Focusing on improvements to the research communication process via (1) strong policies for NIH grantees and (2) support for preprinting and open evaluation infrastructure would better position NIH to achieve its goals of reducing publishing costs and improving research quality. We urge NIH to consider our recommendations and to reconsider its focus on APC caps, which are unlikely to achieve its stated goals.

About us

ASAPbio (https://asapbio.org/about/) is a non-profit, researcher-led organization that advocates for timely, open, and rigorous research communication. We work to create a life sciences communication ecosystem where all papers and other outputs are shared rapidly and without restrictions on access or reuse, and open and constructive exchanges take place on research products at all stages.

References

ASAPbio. (2025). Preprint policy framework. Retrieved from https://asapbio.org/preprint-policy-framework/

Butler, L. A., Matthias, L., Simard, M. A., Mongeon, P., & Haustein, S. (2023). The oligopoly’s shift to open access: How the big five academic publishers profit from article processing charges. Quantitative Science Studies, 4(4), 778-799. https://doi.org/10.1162/qss_a_00272

Corker, K. S., Waltman, L., & Coates, J. A. (2024). Understanding the Publish-Review-Curate (PRC) model of scholarly communication. MetaArXiv. https://doi.org/10.31222/osf.io/h7swt

Haustein, S., Shares, E., Alperin, J. P., Camargo, F., Matthias, L., Céspedes, L., Poitras, C., & Strecker, D. (2025). NIH explores capping APCs: Let’s look at the evidence. https://doi.org/10.59350/scholcommlab.5645
