Community Call – ASAPbio
https://asapbio.org

ASAPbio September 2025 Community Call: Experiments in publishing with PubPub & Octopus
https://asapbio.org/asapbio-september-2025-community-call-experiments-in-publishing-with-pubpub-octopus/
Tue, 07 Oct 2025 19:36:03 +0000

During the September Community Call, we heard from Alex Freeman from Octopus and Gabe Stein from PubPub.

The meeting started with Alex Freeman, who introduced the Octopus platform. Alex argued that many of academia's problems trace back to how research is published: producing a paper is slow and expensive, publishing carries enormous weight in academic careers, and the emphasis on findings shapes how everything else in academia is done. She believes Octopus addresses all of these issues simultaneously.

The idea behind Octopus is to create a new ‘primary research record.’ This new primary research record would be digital, free to publish and read, and optimized for language translation. It is not modeled on books and monographs; instead, it is a new, flexible structure built from smaller units. Authors can publish, e.g., research questions, rationales, data, and analyses separately. Each is a standalone publication that can be linked to the others.

This new design stems from how research is performed in reality, which differs from how researchers are taught to describe it. The current publication model forces researchers to write in a single linear narrative (most likely not the order in which the work actually happened), but in Octopus, researchers can publish their research the way it was performed, one unit at a time, as their work progresses.

The website also allows users to write a review of the publication or express a concern by raising a red flag, which will be visible to other readers. 

Alex walked through an example of the research process on the Octopus page. She also highlighted the advantages of the system: publishing is instant and free, no complete ‘narrative’ is required, readers can see which author in a group did which work (highlighting each author's specialization), and it includes open post-publication peer review.

Following Alex’s talk, Gabe Stein discussed PubPub, a free and open platform for publishing. He started with a live demonstration of PubPub Legacy and how it is organized. He then discussed where the platform is heading, since the current version is undergoing significant changes based on feedback from users about which features are missing and how the platform's organization could serve them better. Gabe then showed, behind the scenes, the improvements the team has built in response to those needs, walking listeners through several organization systems under development, each suited to a different kind of data and research question. He concluded that the new features should be available to users in a few months.

A Q&A session followed the talks, covering several topics. One question concerned achieving sustainable funding for projects of this kind, which the speakers admitted is not easy; they discussed several opportunities and different approaches for ensuring financial sustainability.

The speakers also discussed how Octopus and PubPub fit within the current publishing system and where things might be headed. Another key question was how research should be assessed. The speakers discussed peer review and how we might move beyond it. Alex argued that we should move away from valuing only results and also value ideas, analyses, and other parts of the research process that do not require as many resources.

For the entire discussion, view the video below.

ASAPbio July Community Call: Replacing Academic Journals with Björn Brembs
https://asapbio.org/asapbio-july-community-call-replacing-academic-journals-with-bjorn-brembs/
Sat, 30 Aug 2025 16:39:14 +0000

During our recent Community Call, Björn Brembs, a professor at the Universität Regensburg in Germany, discussed problems with the current landscape of scientific publishing. Björn believes the issue with journals is not accessibility, since many ways to get around paywalls have been developed in recent years. Instead, he summarized the three most pressing problems as replicability, affordability, and functionality, which he also discusses in a 2023 Perspective.

(Brembs, Replacing academic journals, 2023)

Björn started with journals. The problem, he argued, is that journals overcharge institutions by a factor of 5 to 10 (depending on how you count). This leaves institutions short of funds, with cascading consequences: institutions can't automate things that researchers wish they would automate (costing time), and they can't afford to establish and maintain the processes and technologies that would help researchers scrutinize other people's science, which leads to a replicability problem. Since institutions cannot provide the support researchers need, researchers lack functionality, such as Open Science being the default or tools for scrutinizing others' work. Everyone is locked into this circle of problems, and the consequence is a lack of reproducibility in science. Björn showed data suggesting that, depending on the discipline, 40-80% of public funding for science ends up in irreproducible publications.

What do journals have to do with replicability? Björn summarized a large body of data in one graph: as you climb up the journal rank, the higher the journal's prestige, the less reliable the science. If hiring processes favor researchers who publish in these less reliable journals, those researchers will in turn train their students to get ahead in academia by publishing low-reliability papers in high-prestige journals. He emphasized that it gets worse: from those high-prestige journals, we not only get unreliable science, we also pay more for it.

What do journals do with the money they earn from publishing? If a researcher pays ~$4,000 to publish, roughly $600 covers the actual publication cost, $1,200 is profit (money for shareholders), and the remaining $2,200 goes to salaries and other costs. The money does not buy the functionality researchers wish they had: journal prestige that actually correlates with the reliability of the published science, better peer review without its current flaws, more transparency, better filtering and discoverability, and so on. Instead, publishers spend the money developing or buying tools that cover the whole research cycle. This allows them to monitor what their products' users are working on before anything is published, and then to use their analytics to convince departments that people working on those topics should receive more money. Since publishers make money on the unreliable science they publish, Björn's conclusion is that they should be replaced with better solutions.

The Council of the European Union (the ministers of all EU member states) has recognized the problem with scientific publishing. In 2023, it published conclusions on the hazards of scholarly publishing, recommending the development of interoperable, non-profit publishing infrastructure based on open software and standards to avoid vendor lock-in, a recommendation that received support from major research organizations. Unfortunately, this is the opposite of what we currently have.

(Brembs, Replacing academic journals, 2023)

Björn elaborated on what such an interoperable, non-profit infrastructure could look like: a system in which academic libraries redundantly host green repositories, so that even if part of the globe goes offline, all the knowledge deposited there remains available.

In such a system, the repositories' content, the output layer, would be any data, software, methods, and protocols academics create, packaged into research articles, multimedia, or policy advice (the narrative layer). On top of that would sit the community layer, also decentralized, which could edit, review, publish, and connect.

In the European Union, Open Research Europe is transitioning into such an interoperable, non-profit infrastructure, intended to look very similar to the one Björn described, with many elements already in place. The journal replacement is therefore halfway ready. However, having an alternative does not mean it will be used, since many authors have little choice about where to publish. How, then, to break the monopoly and make this replacement the default?

Some people are trying to convince evaluation committees worldwide not to judge researchers by the journals in which they publish. So far, Björn sees only modest progress here. Instead, he asks how we can convince people who are invested in science and in the proper spending of public funds that the time is ripe to make the change and take action they couldn't take before. He listed two possibilities.

First, we should defund current journals, because public money should be spent via tenders or bidding. Institutions should find the most affordable service to fulfill their publishing needs instead of negotiating with big publishers. The European Commission has already launched such a tender for an open-access publishing platform; the outcome is Open Research Europe.

Second, funders can keep their current funding criteria but add a rule that they will not fund institutions that lack a modern, federated infrastructure. This incentivizes institutions to change: Björn believes it is institutions, not individuals, that should be incentivized, because institutions are much more reluctant to change than individuals. Funders should put that pressure on reluctant institutions.

Following the talk was the Q&A session, where the call participants discussed many topics with Björn. 

When asked about actionable steps an average researcher can take toward the solutions he presented, Björn emphasized that in an ideal world, the average researcher would focus on their research, not on administrative matters; he himself focuses on them only because the current system doesn't work. If researchers want to do something, he suggested, they should join committees, advise university leadership to stop paying corporate publishers, complain about the current situation, and make the problem visible. Instead of wasting money on paying publishers, institutions should use that money to build a system that serves researchers.

During further discussion, Björn added that several hurdles need to be overcome before the vision he presented can be implemented. He believes technology is not the main roadblock; what is required is governance and organization, which could originate from the EU or UNESCO. He also mentioned models already in place in South America, such as SciELO, and how they can serve as an inspiration for Western institutions.

When asked whether there is a place for journals in this system or whether something new should emerge, Björn responded with a thought-provoking question: what function of a journal could not be replicated by the system he presented? If you can't answer that, you have your answer.

We encourage you to view the entire thought-provoking presentation and discussion. 

ASAPbio May Community Call: Tackling industrial scale research fraud: parallels with virus control, with Dorothy Bishop
https://asapbio.org/asapbio-may-community-call-tackling-industrial-scale-research-fraud-parallels-with-virus-control-with-dorothy-bishop/
Fri, 20 Jun 2025 01:20:00 +0000

During the May Community Call, we welcomed Dorothy Bishop, Emeritus Professor of Developmental Neuropsychology. Dorothy themed her talk around fraudulent behaviors in science, likening them to a virus that can infiltrate and be disruptive, a concept she had been thinking about during the COVID lockdown.

Dorothy began by addressing the common misconception that fraud is a relatively minor problem in science: researchers assume that because science is inherently self-correcting, fraud should be very rare. Unfortunately, she noted, scientific fraud has been increasing in recent years.

Though she began by citing examples of prominent figures whose scientific fraud even led to deaths, she spent most of her talk on the current issue: industrial-scale science fraud, which includes paper mills and large numbers of retracted papers.

One would assume that papers produced by paper mills shouldn't be published, because peer review should catch such fraud. However, these papers are published anyway; Dorothy believes some of these manuscripts are convincing enough to pass as authentic papers. This was the case for the papers identified by Jennifer Byrne, a cancer biologist. Jennifer noticed a flurry of scientific papers discussing the gene she studies, which was unusual given that very few people work on this gene. She noticed that these papers used the wrong nucleotide sequences, and that they all had a similar format, as if they had been written from the same template. Working with Cyril Labbé, she developed an automated search for papers containing such wrong sequences. After screening 12,000 papers, they identified 700 with incorrect gene sequences, suggesting a paper mill that used a similar template for its papers, varying only the gene described and the phenotype reported.
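To make the idea of such an automated screen concrete, here is a toy sketch: flag any paper whose reported nucleotide sequence does not occur in a reference gene sequence. This is a deliberate oversimplification of Byrne and Labbé's approach (their real pipeline verifies sequences against genomic databases); the gene sequence, paper IDs, and function name below are all invented for illustration.

```python
# Toy illustration of screening papers for incorrect nucleotide sequences.
# The reference sequence and paper records are invented examples.

REFERENCE_GENE = "ATGGCGTACGTTAGCCTGAAC"  # hypothetical reference sequence

def flag_suspect_sequences(papers):
    """Return IDs of papers whose claimed sequence is absent from the reference."""
    flagged = []
    for paper_id, claimed_seq in papers:
        if claimed_seq.upper() not in REFERENCE_GENE:
            flagged.append(paper_id)
    return flagged

papers = [
    ("paper-001", "GCGTACGTTAGC"),  # occurs in the reference: passes
    ("paper-002", "GCTTACGAATGG"),  # does not occur: flagged
]
print(flag_suspect_sequences(papers))  # -> ['paper-002']
```

In practice such a screen must also handle reverse complements, sequence ambiguity codes, and fuzzy matching, which is what makes real sequence-verification tooling substantially harder than this sketch.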

Using this kind of template format is just one of the techniques paper mills employ. Others include exploiting journals' practice of selecting reviewers recommended by the authors (reviewers who may themselves be fake).

Paper mills might also publish manuscripts in predatory journals, or a legitimate journal can be hijacked by bad actors who make a webpage look exactly like the real journal's. In other cases, an editor is complicit in the fraud and allows the fake papers to be published. Dorothy noted that journal Special Issues, which can be run in good faith, can also serve as a foot in the door for paper mills, as journals may not be careful enough in selecting guest editors for those issues. After giving examples of fraudulent techniques used by paper mills, she discussed why we should be concerned.

Many people believe that fraudulent papers appear only in low-quality journals that serious researchers would never read or pay attention to. However, paper mills also affect those attempting rigorous research. A researcher conducting a systematic review, for example, will encounter these papers in their search and waste time reading them only to conclude that they are of poor quality.

Another significant issue is that some people advance their careers through such fraudulent practices. A system that evaluates researchers by the number of papers they publish creates a strong incentive for abuse, by both researchers and publishers. Predatory publishers in particular often prey on researchers from the Global South and from countries where researchers are required to publish a certain number of articles each year. Worldwide, researchers required by their institutions to publish a set number of articles in order to graduate or be promoted may wrongly believe that fraudulent practices are their only option.

Such fraudulent practices also weaken public trust in science. With a significant amount of pseudoscience now incorporated into the literature, it becomes harder, especially for non-scientists, to judge what is science and what is pseudoscience.

Dorothy then discussed how the spread of research fraud can be stopped. Returning to her comparison of fraud to a virus, the solutions are similar as well: test and trace, inoculate against the virus, and quarantine those affected.

Test and trace:

  • Open Science plays a significant role here, for example in the form of preregistration. Dorothy gave the example of cancer genetics, where studies can take years to complete; if studies were preregistered, paper mills would be deterred, because paper mills are impatient. 
  • If code and data are open, it’s more challenging to fake them. 
  • Open peer review and post-publication peer review can help identify papers from paper mills. 

Inoculate against the virus:

  • The current incentive structure values publication numbers, which can promote fraudulent behavior. Those incentives need to be removed. Instead of the number of publications, researchers should be evaluated based on the integrity of the research. 
  • Publishers should slow down and stop the proliferation of journals, particularly special issues. 

Quarantine:

  • Editors who fail to do their job correctly and allow fraudulent science to be published should be removed from their positions and barred from serving as editors again. 
  • Fraudulent papers should be promptly retracted. 
  • Institutions should not ignore problems when they occur; instead, they should be transparent and address issues of research misconduct promptly. 

During the Q&A session, Dorothy addressed several important issues. One of the issues was determining who should regulate academia and investigate research misconduct. She suggested that it should be an independent body, but not a government body, as this could lead to problems with the government weaponizing science. She elaborated on the pros and cons of several alternatives, including scientific societies and commercial providers. 

She also made the case that if there is a problem with science, scientists should promptly and transparently tackle it. If scientists are not open about this problem, it will further undermine trust in science. 

When asked about preprints, she made the case that preprints can be more trustworthy than journal papers, since people who commit fraud are less likely to post preprints: posting a preprint doesn't benefit them. This is ironic, given that one common critique of preprints is that they are not trustworthy. In light of the research fraud Dorothy discussed, the preprint ecosystem may be less infected with the virus of research fraud than the scientific journal ecosystem.

ASAPbio April Community Call: A discussion on openRxiv with Richard Sever
https://asapbio.org/asapbio-april-community-call-a-discussion-on-openrxiv-with-richard-sever/
Thu, 15 May 2025 17:43:23 +0000

The April Community Call focused on the past, present, and future of bioRxiv, medRxiv, and openRxiv. We were joined by Richard Sever, co-founder of bioRxiv and medRxiv and CSSO at openRxiv. bioRxiv and medRxiv are preprint servers focused on biology and biomedical science that started at Cold Spring Harbor Laboratory.

Richard summarized the past 12 years of those preprint servers in a few impressive numbers:

  • >350K preprints posted
  • ~10 M views per month
  • >80% of papers are subsequently published in journals.

However, the history of bioRxiv and medRxiv is not just the number of preprints posted or read. Richard discussed how the servers' 12 years of existence have changed the life science community. More biologists are posting, reading, and citing preprints. They also no longer need to worry about whether journals will accept a manuscript that has already been posted as a preprint, as most journals now accept such submissions (which was not always the case). Preprints also receive support from funders and institutions that encourage or even mandate them, like the Gates Foundation, which has mandated that its grantees post preprints.

Richard also discussed the original goals behind creating the preprint servers – to accelerate science and to stimulate the evolution of the scientific publication system. Looking back at 12 years of bioRxiv and medRxiv, he concluded that those goals were met.

First, preprints enabled researchers to make their research immediately available rather than waiting out a journal's lengthy peer review process. A notable example of preprints expediting science came during the COVID-19 pandemic, when the rapid sharing of scientific knowledge helped save lives. Richard cited the results of the RECOVERY trial, first posted as a preprint, which showed the benefits of dexamethasone in severely ill COVID-19 patients. Because the data was available weeks before the journal publication, physicians could act on it earlier, saving lives. He also referred to an estimate that if everyone shared research immediately via preprints, scientific discovery could accelerate fivefold.

Second, preprints facilitated experimentation in peer review, commentary, and other forms of feedback. Preprints enabled initiatives such as Peer Community In, Review Commons, PREreview, and many others to experiment with reviewing, commenting, and discussing scientific research. 

He also discussed future goals. First, he sees preprints becoming the primary means of disseminating research. Second, he is encouraging the directed evolution of preprints, which aims to decouple dissemination from evaluation in order to improve peer review and verification.

He discussed two possible scenarios for the evolution of preprinting, the open ecosystem and the publisher silo, an idea also discussed in an op-ed posted on LSE.

In the open ecosystem, everyone posts preprints on a community preprint server, and then different actions can be taken depending on the manuscript: peer review, either traditional or decoupled (e.g., PCI or Review Commons), or other actions, such as commenting or verification of key information by badging services.

In the other scenario, preprints sit in silos controlled by publishers: researchers send a preprint to a commercial publisher, which decides where it fits in the hierarchy of its journals. This preserves journal hegemony and the journal name as a proxy for quality.

Richard prefers the first scenario, which will ultimately lead to what he calls ‘the article of the future’, with many objects linked to it, such as preregistered reports, data sets, and reviews.

Numerous challenges are associated with this evolution of the preprinting system, but he believes openRxiv is the answer to them. openRxiv is not a preprint server but an independent nonprofit organization that oversees bioRxiv and medRxiv to help establish long-term stability.

He believes that bioRxiv and medRxiv were successful experiments. Now it’s time to move from prototype to something that is operational long-term and steer it in the right direction. For this, openRxiv brought respected figures in the scientific community on board, many of whom have been involved in bioRxiv, medRxiv, and other preprint-related activities.

Following the talk, during the Q&A, Richard answered many questions from the participants. He elaborated on how preprint servers can serve authors from the Global South, as they are inclusive platforms and accept submissions worldwide.

Participants also asked about the idea of incremental preprints. Richard believes a balance must be struck between improving a paper through new versions and producing so many versions that readers struggle to follow. He also believes the idea does not appeal to scientists themselves, as they often want to draw a line and declare a project finished at some point.

Richard also discussed Open Access alternatives, specifically increasing the adoption of Green Open Access. He believes funders have a great role to play in improving the adoption of Green Open Access and preprints: funder requirements would increase compliance among scientists in posting and citing preprints.

Watch the full call for more Q&A’s and in-depth discussion!

ASAPbio March Community Call: Institutional Recognition of Preprints
https://asapbio.org/asapbio-march-community-call-institutional-recognition-of-preprints/
Mon, 12 May 2025 17:56:48 +0000

Our Community Call on March 26, 2025, focused on institutional recognition of preprints, one of the key barriers to the broader adoption of preprints.

Our first speaker, Michael Dougherty, Professor and Department Chair of Psychology at the College of Behavioral and Social Sciences, University of Maryland, represented HELIOS (Higher Education Leadership Initiative for Open Scholarship). HELIOS is a United States-based organization working to improve how we share and value scholarship. It promotes practices that enhance research reproducibility, facilitate better sharing of data and research, and make scholarship more accessible for the public good.

Michael’s work focuses on aligning incentives with institutional values. He believes that institutions' mission statements are very value-centered, but what an institution says it values and what it rewards are often misaligned. He discussed this misalignment at the level of individual scientists, whose activities follow academia's reward systems. He is working to transform the academic reward system so that it rewards people for conducting open, transparent, and high-quality scholarship.

Michael discussed initiatives underway at the University of Maryland that aim to build trust in science, correct misalignments between what the university claims to value and what it rewards, and address issues with problematic metrics used in hiring, promotion, tenure, and annual rewards. 

The areas of work include:

  • Improving rigor through transparency, which would lead to higher quality and reproducible science
  • Giving back to the community: rewarding science that would have a use in society
  • Increasing accessibility to all aspects of science: including a bigger swath of the population in the scientific effort
  • Accepting alternative ways to document aspects of the scholarship that we care about, such as an annotated CV that allows researchers to describe the scope of the research effort, or making the code public to increase reproducibility and transparency.

What did he learn?

  • Faculty members want to do the right thing, but they also want to be rewarded for doing so.
  • It’s challenging to get people to envision a new system; it takes time for change to occur.
  • Administrators are open and supportive of changes, but they want the faculty to bring the ideas; they don’t want to force faculty to do things.

Our second speaker was Shubha Tole from the Tata Institute of Fundamental Research. She described the Institute's history and her role as Dean. When she arrived as Dean, Graduate School students were required to have a paper published in a scientific journal in order to graduate; given the lengthy review process, this added approximately a year to a student's timeline.

Under Shubha’s leadership, the Institute changed this rule and now accepts preprints instead of journal-published articles for graduation. There were obstacles to implementing the change, but the example of the Institute's Mathematics Department, where preprints had long been accepted for graduation, helped her persuade the Biology Department faculty to accept preprints as well.

This allows students to graduate more quickly and decide whether they want to move on or stay with their advisor as a postdoctoral researcher to undergo the review process and have a journal publication.

Some of those who had reservations about accepting preprints for graduation were concerned that students would post low-quality work as preprints. In her experience, however, this concern is unfounded: faculty are reluctant to post low-quality work on a preprint server, as they do not want low-quality science publicly associated with them.

During the Q&A session, both speakers agreed that presenting data is the most effective way to convince those who are hesitant about making changes to the system, as researchers tend to respond to data. For example, Shubha presented data on the time it takes from manuscript submission to journal publication. It also helped to have experience on multiple international committees that recognize preprints as published research.

The speakers also discussed their approaches to teaching the undergraduate and graduate students about the publishing system and how to balance striving towards change in academia while still working in the old system of rewards and recognition that values where you publish. 

They advised that one must work within the existing system while still adopting positive practices. Once you have a critical mass of people who are willing to embrace change, others will follow.

ASAPbio February Community Call: Six ways to share preprint reviews
https://asapbio.org/asapbio-february-community-call-six-ways-to-share-preprint-reviews/
Tue, 29 Apr 2025 15:09:18 +0000

A recent ASAPbio Community Call (February 26, 2025) was inspired by a preprint titled ‘Mapping the preprint review metadata transfer workflows.’

The preprint is a working group product borne out of the ‘Supporting interoperability of preprint peer review metadata’ workshop co-organized by Europe PMC and ASAPbio in October 2023. Presenters Maria Levchenko (Europe PMC, EMBL-EBI) and Bianca Kramer (Sesame Open Science) are co-authors of that preprint.

Maria presented first, defining preprint review metadata as “all kinds of information about the review record,” e.g., reviewer name, affiliation, link to preprint being reviewed, review outcome, assigned score, and more. She stressed that proper metadata is essential to making information readable to machines. Properly organized and accessible metadata makes it easy to display information automatically in a user-friendly way. It can also facilitate the discovery of reviews and allow the community to create new tools.
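To illustrate why structured metadata makes review records machine-readable, here is a small, entirely hypothetical sketch of such a record as JSON. The field names and identifiers below are invented for illustration and are not taken from any real deposit schema.

```python
import json

# A hypothetical preprint review metadata record. Field names and DOIs
# are invented; real deposits follow the schema of the registration
# agency (e.g. Crossref or DataCite).
review_metadata = {
    "reviewer": {"name": "Jane Doe", "affiliation": "Example University"},
    "reviewed_preprint_doi": "10.1101/2023.01.01.000000",
    "review_doi": "10.5281/zenodo.0000000",
    "outcome": "endorsed",
    "score": 4,
}

serialized = json.dumps(review_metadata)

# Because the record is structured, a machine can recover the link
# between the review and the preprint without parsing free text.
record = json.loads(serialized)
print(record["reviewed_preprint_doi"])  # -> 10.1101/2023.01.01.000000
```

With records like this, a display service can render reviewer names and link out to the reviewed preprint automatically, and a discovery tool can aggregate reviews across platforms, which is the machine-readability Maria emphasized.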

Maria defined the general workflow and all technical definitions needed to understand the sharing of preprint review metadata. Then, she discussed six approaches to preprint review metadata transfer and their pros and cons.

  • Workflow 1: Preprint server with an integrated review option, e.g., preprints posted on ScienceOpen.
  • Workflow 2: Preprint review group using a commenting platform, e.g., bioRxiv preprints reviewed by Arcadia Science using Hypothes.is.
  • Workflow 3: Preprint review group registering DOIs, e.g., preprint reviews by Peer Community In (PCI).
  • Workflow 4: Preprint review group using a repository, e.g., SciELO preprints reviewed by PREreview and shared via Zenodo.
  • Workflow 5: Open research platform, e.g., F1000Research, which makes manuscripts publicly available after editorial checks, followed by open peer review. If the manuscript receives three positive reviews from experts invited by the platform, it is sent to indexers, such as PubMed.
  • Workflow 6: Publish-Review-Curate (PRC) platform. The manuscript is first made public (Publish step), then it is Reviewed; finally, in the Curate stage, it is compiled into a collection or given a summary, e.g., medRxiv preprints reviewed and published by eLife.

To close, Maria discussed the group’s recommendations for a preprint review group seeking to launch a new service. She recommends using one of the six existing workflows instead of inventing another. To decide which workflow fits best, a new group should consider four key factors:

  1. Technical expertise – Does your group have technical expertise? Can you use existing solutions, or do you need to develop your own?
  2. Cost – Do you have funds to cover the cost of registering DOIs?
  3. Maintenance and sustainability – Can your group maintain the records it creates, or should you outsource that?
  4. Discoverability and linking between preprints and reviews – How important is it that your reviews are discoverable by multiple services and linked to the reviewed preprints?

In the second part of the call, Bianca discussed linking preprints and preprint review metadata. The group analyzed the extent of linking between preprints and preprint reviews across platforms that use Crossref and DataCite to deposit preprint and/or review metadata, limiting the analysis to 2023.

Bianca described how Crossref and DataCite metadata differ in the way they indicate preprints and reviews, the difficulties to consider when extracting specific information, and the limitations of the approach the group applied.

Ultimately, she provided some recommendations and a summary of their findings:

  1. Technical implementation – Some technical implementation is required to discover the links between preprints and reviews. If links to preprints or preprint reviews are not included in the metadata, they cannot be retrieved from it.
  2. Types of identifiers – Whether the metadata allows links to multiple identifier types matters. For example, Crossref only allows links to DOIs, which limits the discoverability of preprint reviews without DOIs.
  3. Versioning – Are preprint reviews linked to all preprint versions or to a specific one? Linking each review to a particular version would be easier if every preprint version had its own DOI.
  4. Completeness of metadata records – Currently, not all metadata fields are filled in. For example, reviews on Zenodo do not link back to the platform that posted them; e.g., if PREreview posted a preprint review, the metadata does not record that PREreview was the poster.
  5. Discoverability and linking between preprints and reviews – If we consider linking preprints and their reviews important, then we, as a community, should discuss how to increase the adoption of better metadata practices and of DOIs, or how to move beyond DOIs.
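The first point above can be sketched in code. The snippet below is a hedged illustration, assuming a record shaped like the JSON returned by the Crossref REST API (its `relation` field); the DOIs are placeholders, not real records:

```python
def extract_relations(record, relation_type):
    """Return (id-type, id) pairs that a work links to via a given relation type.

    `record` is assumed to be shaped like the `message` object returned by
    the Crossref REST API for a work; if the depositor omitted the
    `relation` field, the result is empty -- the link cannot be recovered.
    """
    relations = record.get("relation", {}).get(relation_type, [])
    return [(rel.get("id-type"), rel.get("id")) for rel in relations]


# Placeholder record modeled on Crossref's relation metadata; DOIs are illustrative.
review_record = {
    "DOI": "10.24072/pci.example.100001",
    "type": "peer-review",
    "relation": {
        "is-review-of": [
            {"id-type": "doi", "id": "10.1101/2023.01.01.000001", "asserted-by": "subject"}
        ]
    },
}

print(extract_relations(review_record, "is-review-of"))
print(extract_relations({}, "is-review-of"))  # no metadata, no link
```

The second call illustrates the group's point: when the depositor never included the relation in the metadata, no amount of downstream tooling can reconstruct the preprint-review link.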

The talks were followed by a Q&A session. The discussion began with the usefulness of metadata for identifying data trust signals, and Maria described Europe PMC’s plans in that direction. Participants then discussed the complexity of the PRC model and its evolution.

If you missed the call, you can view it on YouTube!
