Preprint Sprint Proposals – ASAPbio
https://asapbio.org

The Novel Coronavirus Research Compendium
https://asapbio.org/the-novel-coronavirus-research-compendium/

Organized by:

Emily Gurley, PhD and M. Kate Grabowski, PhD; Johns Hopkins University

Website or social media links

https://ncrc.jhsph.edu/

Current stage of development

Ongoing project

Project duration

1–2 years, with longer-term potential

Project aims

Background information on current practices

The 2019 Novel Coronavirus Research Compendium (NCRC) is a centralized, publicly available resource that rapidly curates and reviews the emerging scientific evidence about SARS-CoV-2 and COVID-19. Our goal is to provide accurate, relevant information for global public health action by clinicians, public health practitioners, and policy makers.

Overview of the challenge to overcome

As the pandemic has unfolded, there has been a rapid proliferation of literature on SARS-CoV-2 and COVID-19, much of it first released as preprints. Reliable and rapidly curated evidence is needed to inform the public, programs, policy, and research.

The ideal outcome or output of the project

Expert curation of COVID-19 research, with pre- and post-publication reviews linked directly to preprint servers where applicable.

Description of the intervention

Working alongside informationists at the Johns Hopkins Welch Library, we developed and maintain a list of key search terms for our eight focus areas: ecology and spillover; diagnostics; clinical and prognostic risk factors; epidemiology; non-pharmaceutical interventions; pharmaceutical interventions; disease modeling; and vaccines. Literature searches are automated for PubMed and the preprint servers medRxiv, bioRxiv, and SSRN, and search results are downloaded regularly into a custom NCRC web application and divided among the eight teams for screening. Experts on each of the eight topical teams then review each paper to decide whether it brings new, key information to clinicians, public health practitioners, and policy makers. In addition to selecting key articles from our search results, we watch the press and social media for trending papers and evaluate whether they should be included in the compendium. We prioritize for review original, high-quality research relevant to public health action, as well as papers receiving significant press regardless of their quality.
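
As an illustration of the kind of automated search that can feed such a screening queue, here is a minimal Python sketch that queries PubMed through the NCBI E-utilities; the search term and the hand-off to screening are hypothetical, not the NCRC’s actual pipeline.

# Minimal sketch: fetch recent PubMed records for one focus area.
# The endpoint and parameters are NCBI E-utilities; the search term is illustrative.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(term: str, days_back: int = 7, max_results: int = 200) -> list[str]:
    """Return PMIDs matching `term` from the last `days_back` days."""
    params = {
        "db": "pubmed",
        "term": term,
        "reldate": days_back,   # restrict to recently added records
        "datetype": "edat",
        "retmax": max_results,
        "retmode": "json",
    }
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Illustrative query for the "vaccines" focus area.
    pmids = search_pubmed('("SARS-CoV-2" OR "COVID-19") AND vaccine')
    print(f"{len(pmids)} new records to screen")

Each returned PMID would then be routed to the relevant topical team for screening.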

For each paper selected into the compendium, our teams summarize the setting, population, results, strengths, and limitations of the paper, and the value it adds given what is already known about SARS-CoV-2 and COVID-19. We end with a short take on the paper’s key finding(s), its significance, and its reliability for action or practice. These expert summaries are then reviewed by a scientific editor before being posted to our website, and reviews of papers that appear on a preprint server are also linked directly from those servers. A weekly newsletter provides subscribers with updates on the latest NCRC reviews.

Our endeavors have been covered by Science, Wired, STAT News, CNN, Buzzfeed, and the JHSPH Magazine.

Plan for monitoring project outcome

We monitor unique users and page views on our site. We also monitor our newsletter subscriptions and social media followers.

What’s needed for success

Additional technology development

We need a more robust front-end web application and back-end database for screening and reviewing the literature. The current web application (using R Shiny) and database (using Google Sheets) were not developed for large numbers of users or for the rapidly expanding body of COVID-19 literature.
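
As a rough illustration of what a more robust back end might record, here is a minimal relational schema sketch (SQLite via Python); the table and column names are hypothetical, not the application we are building.

# Minimal sketch of a screening database replacing the Google Sheets back end.
# All names are illustrative.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS papers (
    id          INTEGER PRIMARY KEY,
    source      TEXT NOT NULL,          -- 'pubmed', 'medrxiv', 'biorxiv', 'ssrn'
    external_id TEXT NOT NULL UNIQUE,   -- PMID or preprint DOI
    title       TEXT,
    retrieved   DATE DEFAULT CURRENT_DATE
);
CREATE TABLE IF NOT EXISTS screening (
    paper_id    INTEGER REFERENCES papers(id),
    team        TEXT NOT NULL,          -- one of the eight focus areas
    decision    TEXT CHECK (decision IN ('include', 'exclude', 'pending')),
    reviewer    TEXT,
    decided_at  TIMESTAMP
);
"""

conn = sqlite3.connect("ncrc_screening.db")
conn.executescript(SCHEMA)
conn.commit()

In production this would sit behind a proper web front end with authentication and concurrent access, which is the robustness the current R Shiny/Google Sheets setup lacks.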

Feedback, beta testing, collaboration, endorsement

One of our biggest challenges has been breaking through the COVID-19 noise and reaching our target audience. We are seeking collaborations and endorsements that will drive more users to our site.

Funding

We need continued funding to support student and faculty reviewers as well as our NCRC coordinators. Our operating budget is approximately 10k per month.

Encouraging preprint review: make it easier to create reviews, make it easier to incorporate reviews.
https://asapbio.org/encouraging-preprint-review-make-it-easier-to-create-reviews-make-it-easier-to-incorporate-reviews/

Organizer

Nokome Bentley, Alexander Ketchakmadze & Colette Doughty from Stencila

Website or social media links

https://stenci.la/

https://twitter.com/stencila

Current stage of development

Ongoing project

Project duration

1–3 months for the integrations described here, as part of Stencila’s ongoing development

Update

Our aim for the #PrePrintSprint was to use it as an opportunity to add reviewing functionality to our platform for executable documents. Since the kick-off we’ve been heads down doing a lot of coding.

We are happy to report that we managed to get most of our tickets moved from the “To do” column to the “Done” column! We’ve added the ability to mint DOIs for both preprints and reviews, as well as integrations to extract reviews from Google Docs and from GitHub pull request reviews.
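
As a minimal sketch of what pulling reviews from a GitHub pull request can look like, the snippet below uses the public GitHub REST API; the repository, token handling, and output format are hypothetical, and this is not Stencila’s actual integration code.

# Minimal sketch: list the reviews submitted on a GitHub pull request.
import os
import requests

def fetch_pr_reviews(owner: str, repo: str, pull_number: int) -> list[dict]:
    """Return reviewer, state, and body for each review on a pull request."""
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pull_number}/reviews"
    headers = {"Accept": "application/vnd.github+json"}
    token = os.environ.get("GITHUB_TOKEN")  # optional; raises the rate limit
    if token:
        headers["Authorization"] = f"Bearer {token}"
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    return [
        {"reviewer": r["user"]["login"], "state": r["state"], "body": r["body"]}
        for r in resp.json()
    ]

if __name__ == "__main__":
    for review in fetch_pr_reviews("example-org", "example-preprint", 1):
        print(review["reviewer"], review["state"])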

How has your project changed?

We found that more work than expected was involved in designing and implementing a workflow for requesting reviews and allowing potentially anonymous reviewers to accept or decline those requests.

Have you integrated any feedback received?

We got some really useful ideas and suggestions from the breakout room at the sprint kickoff. We have only just completed our development work to a stage where we can start seeking feedback from users, and we’d love to hear from people after the sprint.

Have you started any collaborations?

Yes, we have started exploring options for collaboration with PREreview. We are also talking to a journal that is interested in our GitHub integration, including the ability to pull reviews from there.

Project aims

Background information on current practices

Preprints are rapidly becoming an integral part of the modern scientific workflow, dramatically increasing the speed at which new research is disseminated. However, although one of the benefits of preprints is their availability for early review, the number of preprints with reviews remains relatively low.

Overview of the challenge to overcome

Part of the reason for the scarcity of preprint reviews is the burden the process places on both authors and reviewers. Reducing this burden, by reducing friction in the workflow for both parties, provides an opportunity to increase both the number and the speed of preprint reviews that are done and acted upon.

The ideal outcome or output of the project

A low-friction workflow in which authors can easily make their preprints available for review, reviewers can easily create reviews, and authors can easily incorporate that feedback and publish new versions, thereby increasing both the number and the speed of preprint reviews that are done and acted upon.

Description of the intervention

This project will aim to dramatically reduce the friction in:

  • Authors being able to make their preprint available for review
  • Reviewers being able to create reviews on a preprint
  • Authors being able to incorporate reviewers’ comments and suggestions and publish new versions of their preprint.

To achieve this, our approach will be to “go to where the users are” and make use of existing platforms that already provide excellent review facilities for other types of content but are often overlooked for reviewing research articles.

Our envisaged workflow is:

  • Authors upload or link their preprint authored in one of several formats (Microsoft Word, Google Docs, Markdown, RMarkdown, Jupyter Notebooks) to a Stencila Hub project and make it available for review on either Google Docs (most suitable for general reviews) or GitHub (most suitable for reviews of research involving a lot of code).
  • Reviewers are invited to make reviews on the platform that the author has selected (Google Docs or GitHub) using the built-in facilities for this (Pull Requests on GitHub; Suggestions and Comments on Google Docs).
  • Authors respond to comments and suggestions from reviewers using the normal mechanisms for each platform (e.g. accepting changes in GitHub PRs; accepting or rejecting suggested changes on Google Docs).
  • Authors create new versions of their preprint (a new “named version” on Google Docs; a “release” on GitHub).
  • A new HTML version of the preprint is automatically published on Stencila Hub (using our webhook integrations with these platforms) with a newly minted DOI, and both authors and reviewers are notified. (A minimal sketch of this webhook step follows.)
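
To make the last step concrete, here is a minimal Python (Flask) sketch of a webhook handler that reacts to a GitHub release and triggers republication and DOI minting; the republish_preprint and mint_doi functions are hypothetical stubs, not Stencila Hub’s real API.

# Minimal sketch of the webhook step: republish and mint a DOI on a GitHub release.
from flask import Flask, request, jsonify

app = Flask(__name__)

def republish_preprint(repo: str, tag: str) -> str:
    """Stub: convert the source to HTML and publish the new version."""
    return f"https://example.org/preprints/{repo}/{tag}"

def mint_doi(landing_url: str) -> str:
    """Stub: register a DOI for the new version with a registration agency."""
    return "10.XXXX/example-doi"

@app.route("/webhooks/github", methods=["POST"])
def on_github_event():
    event = request.headers.get("X-GitHub-Event", "")
    payload = request.get_json(force=True)
    if event == "release" and payload.get("action") == "published":
        repo = payload["repository"]["full_name"]
        tag = payload["release"]["tag_name"]
        url = republish_preprint(repo, tag)
        doi = mint_doi(url)
        # notify authors and reviewers here (e.g. by email)
        return jsonify({"published": url, "doi": doi}), 200
    return jsonify({"ignored": event}), 202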

Plan for monitoring project outcome

We will measure the success of this project by monitoring:

  • The number of reviews created (reviewer comments and suggestions on Google Docs, pull request reviews on GitHub).
  • The number of revisions of preprints created in response to reviews.

What’s needed for success

Additional technology development

Stencila’s open-source components, principally Hub (integration with other platforms) and Encoda (conversion between formats), already provide the technical foundations necessary for this project. Some additional development and testing will be required for the API integrations between Stencila Hub and Crossref, Google Docs, and GitHub. Refinements to format conversions may also be required.

Feedback, beta testing, collaboration, endorsement

User feedback, in particular one-on-one user testing of the workflows for both authors and reviewers, will be important for the success of this project. We will draw on our existing pool of beta testers for this. Endorsements by ASAPbio, funders and journals will help substantially with uptake.

Funding

The initial integrations with Google Docs and GitHub will be developed as part of Stencila’s ongoing development and do not require any additional funding. If these prove successful, funding may be needed for refinements and/or integrations with other platforms.

Towards principled metrics of scientific influence with automatic curation of preprints.
https://asapbio.org/towards-principled-metrics-of-scientific-influence-with-automatic-curation-of-preprints/

Organizer

Thomas Lemberger, EMBO

Website or social media links

https://eeb.embo.org

Current stage of development

EEB is an experimental platform under development, used as a sandbox to test ideas about aggregation and human- or machine-mediated curation of preprints. It is linked to Review Commons, the journal-agnostic preprint review platform from EMBO Press and ASAPbio, and to the SourceData curation platform.

Project duration

3 years

Update

How has your project changed?

In view of the feedback received, we have decided to merge our two proposals “Early Evidence Base…” and “Towards Principled Metrics…” into a single project. We feel that presenting the Early Evidence Base (EEB) as a single resource that combines aggregation of refereed preprints, rendering and summarization of peer reviews, and automatic mining of the scientific content of preprints will provide a more concrete view of our ideas on how to increase the engagement of authors, readers, and reviewers with refereed preprints.

Have you integrated any feedback received?

  • One point of discussion was whether it was premature to build advanced platforms such as EEB while the number of peer-reviewed preprints remains low. In our view, it is key to increase the engagement and trust not only of reviewers but above all of authors and readers. Readers should have an easier time finding preprints they can trust and that interest them, and authors should be convinced that posting preprints and their reviews is an efficient and visible way of sharing findings. In view of this feedback, we will integrate more preprint reviewing services into the EEB platform to further raise awareness of peer-reviewed preprints across a broader range of disciplines.
  • On the idea of finding ‘principled metrics’ related to novelty, depth, and significance, one of the major issues raised during the discussion was the need to motivate such metrics and to be mindful of their potential misuse. To get a better sense of, and some data on, whether ranking metrics can help users filter and prioritize content, we have already included, and will add further, ranking mechanisms based on automated analysis of the knowledge graph that supports Early Evidence Base. These methods are not presented to users as ‘metrics’ (no scores are displayed), to avoid over-interpretation and misuse of the rankings while allowing us to analyze their utility in filtering large amounts of preprints.
  • Following positive feedback on the idea of identifying studies that potentially bridge fields, we have developed methods that automatically identify fields of research in an unsupervised way, based exclusively on the scientific content of preprints. These methods are successful in identifying emerging fields, such as research on COVID-19/SARS-CoV-2, and open the door to finding studies that belong to more than one field of research (a minimal sketch of this kind of unsupervised clustering follows this list).
  • The suggestion was made that different sections of the referee reports might be used to guide readers in selecting preprints and identify studies in specific fields or with a multi- or cross-disciplinary scope. We are therefore starting to integrate powerful automatic summarization methods to expose specific statements from referee reports, for example in order to highlight the expertise of the reviewers as a proxy for the depth of the reviewing and of the fields covered by a study.
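
As referenced above, here is a minimal Python sketch of identifying fields of research in an unsupervised way from textual content alone; the abstracts, the TF-IDF/k-means choice, and the parameters are illustrative assumptions, not EEB’s actual method.

# Minimal sketch: cluster preprints into "fields" from their abstracts only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "Spike protein mutations alter SARS-CoV-2 antibody neutralization ...",
    "Bayesian estimation of the COVID-19 reproduction number over time ...",
    "Single-cell transcriptomics of the developing mouse cortex ...",
    "Lineage tracing of neural progenitors during cortical development ...",
    # ... many more preprint abstracts in practice
]

vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(abstracts)

n_fields = 2  # in practice chosen by a model-selection criterion
fields = KMeans(n_clusters=n_fields, n_init=10, random_state=0).fit_predict(X)

for label in range(n_fields):
    members = [i for i, f in enumerate(fields) if f == label]
    print(f"field {label}: {len(members)} preprints")

Soft or overlapping clustering (e.g. topic models) would be a natural extension for finding studies that belong to more than one field.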

Have you started any collaborations?

  • We are collaborating with Peer Community In (PCI) to integrate PCI into the EEB platform. This will allow us to develop the necessary interface with Crossref, which has just started to support the registration of peer review material linked to preprints.

Project aims

Background information on current practices

Preprints are extraordinarily attractive to authors in part because of the ease and speed with which new findings can be disseminated. This simplicity, however, comes at the expense of supervised aggregation around a scientific scope, expert filtering, and certification, which are traditionally the functions of journals. In the absence of such prioritization tools, navigating the rapidly increasing volume of preprints is becoming difficult.

While a number of tools exist that derive article-level metrics based on citations or social-network activity, a particularly difficult challenge is to define sorting or classification metrics that are more directly and intrinsically linked to the scientific content presented in a preprint.

Progress in the automated comprehension of natural language using artificial intelligence provides the opportunity to analyze the content of preprints exposed in various text and data mining resources, for example those provided by bioRxiv/medRxiv or by other organizations, such as the COVID-19 Open Research Dataset (CORD-19). Several initiatives have derived large ‘knowledge graphs’ from the automated processing of such compendia.

It is therefore timely to attempt to build AI-generated knowledge graphs and other tools to derive principled metrics that evaluate how a piece of work fits into the pre-existing knowledge graph and estimate its (potential) contribution to the current scientific literature. Ideally, these metrics should avoid taking into account authorship, citation patterns, or social-network activity, and should instead be based exclusively on the scientific content (data, evidence, claims) in as transparent a way as possible.

Such automated, principled metrics of potential influence, especially if combined in a complementary way with human expert peer review, could provide an attractive solution that preserves the simplicity and speed of preprints while pairing them with powerful prioritization methods.

Overview of the challenge to overcome

To start experimenting with combining human and machine curation, we have built the experimental platform Early Evidence Base (EEB, https://eeb.embo.org). EEB combines human curation, through peer review, and machine curation, with text mining, to aggregate and filter refereed preprints.

The challenges are considerable, at both the conceptual and technical levels. At the conceptual level, principles need to be found that allow us to define various dimensions of ‘scientific advance’, such as novelty, nature of the advance, depth or completeness of the analysis, reproducibility, etc. Ideally these principles should be articulated in terms of measurable and explainable properties once the content of a preprint can be represented in a structured, machine-readable way (for example, in the form of a knowledge graph).

At the technical level, tools need to be developed that extract and derive the appropriate representation of the content, such that the relevant properties are exposed and quantified in a way that can be benchmarked against suitable reference sets.

As a specific example of such an approach, in EMBO’s SourceData project we are attempting to mine experimentally tested hypotheses from the information provided in figure legends, build a knowledge graph from this representation, and derive metrics that indicate the potential contribution of a result in bridging disparate fields.
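
As one possible illustration of such a content-based “bridging” signal, the Python sketch below links preprints to entities hypothetically mined from their figure legends and ranks preprints by betweenness centrality; the graph, the entities, and the choice of metric are illustrative assumptions, not the SourceData implementation.

# Minimal sketch: rank preprints by how much they bridge otherwise separate entities.
import networkx as nx

G = nx.Graph()
# Edges connect a preprint to the entities mined from its figure legends;
# entities shared across preprints indirectly connect those preprints.
edges = [
    ("preprint_A", "ACE2"), ("preprint_A", "spike protein"),
    ("preprint_B", "ACE2"), ("preprint_B", "lung organoid"),
    ("preprint_C", "lung organoid"), ("preprint_C", "single-cell RNA-seq"),
    ("preprint_D", "single-cell RNA-seq"), ("preprint_D", "mouse cortex"),
]
G.add_edges_from(edges)

centrality = nx.betweenness_centrality(G)
ranked = sorted(
    (n for n in G if n.startswith("preprint_")),
    key=lambda n: centrality[n],
    reverse=True,
)
for preprint in ranked:
    print(preprint, round(centrality[preprint], 3))

Note that no authorship, citation, or social-network information enters this ranking; it is computed from the mined content alone, in line with the transparency goal stated above.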

The ideal outcome or output of the project

Demonstration of the feasibility of automated content-centric metrics and their value when juxtaposed with human-based peer review.

Description of the intervention

  1. Identification of features related to scientific advance that can in principle be measured, given a suitable computable representation of the results, methodologies, and claims reported in a preprint.
  2. Assembly of demonstration and benchmark datasets.
  3. Development of tools extracting the representations necessary for the computation of metrics defined in #1 and illustrated in #2.

Plan for monitoring project outcome

The metrics should be evaluated by researchers for their ability to produce results that make sense and that are motivated by a clear set of principles. This would favour metrics that are ‘explainable’, at least to some extent, over ‘black box’ metrics learned by machine learning from complex combinations of features.

What’s needed for success

Additional technology development

  • Customized AI tools to extract and represent specific aspects of the scientific content.
  • Analytical methods, including graph-based approaches, to derive ranking metrics.

Feedback, beta testing, collaboration, endorsement

The project would foster a tight collaboration between machine learning specialists, data scientists, and professionals in editorial curation.

Funding

Support for an interdisciplinary team composed of:

  • Machine learning specialists and data scientists
  • Scientometrics specialists
  • Editorial curation specialists

Early Evidence Base: aggregating, mining and rendering preprint reviews.
https://asapbio.org/early-evidence-base-aggregating-mining-and-rendering-preprint-reviews/

Organizer

Thomas Lemberger, EMBO

Website or social media links

https://eeb.embo.org

https://reviewcommons.org

Current stage of development

EEB is an experimental platform under development, used as a sandbox to test ideas about the aggregation and mining of refereed preprints.

Project duration

3 years

How has your project changed?

In view of the feedback received, we have decided to merge our two proposals “Early Evidence Base…” and “Towards Principled Metrics…” into a single project. We feel that presenting the Early Evidence Base (EEB) as a single resource that combines aggregation of refereed preprints, rendering and summarization of peer reviews, and automatic mining of the scientific content of preprints will provide a more concrete view of our ideas on how to increase the engagement of authors, readers, and reviewers with refereed preprints.

Have you integrated any feedback received?

  • One point of discussion was whether it was premature to build advanced platforms such as EEB while the number of peer-reviewed preprints remains low. In our view, it is key to increase the engagement and trust not only of reviewers but above all of authors and readers. Readers should have an easier time finding preprints they can trust and that interest them, and authors should be convinced that posting preprints and their reviews is an efficient and visible way of sharing findings. In view of this feedback, we will integrate more preprint reviewing services into the EEB platform to further raise awareness of peer-reviewed preprints across a broader range of disciplines.
  • On the idea of finding ‘principled metrics’ related to novelty, depth, and significance, one of the major issues raised during the discussion was the need to motivate such metrics and to be mindful of their potential misuse. To get a better sense of, and some data on, whether ranking metrics can help users filter and prioritize content, we have already included, and will add further, ranking mechanisms based on automated analysis of the knowledge graph that supports Early Evidence Base. These methods are not presented to users as ‘metrics’ (no scores are displayed), to avoid over-interpretation and misuse of the rankings while allowing us to analyze their utility in filtering large amounts of preprints.
  • Following positive feedback on the idea of identifying studies that potentially bridge fields, we have developed methods that automatically identify fields of research in an unsupervised way, based exclusively on the scientific content of preprints. These methods are successful in identifying emerging fields, such as research on COVID-19/SARS-CoV-2, and open the door to finding studies that belong to more than one field of research.
  • The suggestion was made that different sections of the referee reports might be used to guide readers in selecting preprints and identify studies in specific fields or with a multi- or cross-disciplinary scope. We are therefore starting to integrate powerful automatic summarization methods to expose specific statements from referee reports, for example in order to highlight the expertise of the reviewers as a proxy for the depth of the reviewing and of the fields covered by a study.

Have you started any collaborations?

  • We are collaborating with Peer Community In (PCI) to integrate PCI into the EEB platform. This will allow us to develop the necessary interface with Crossref, which has just started to support the registration of peer review material linked to preprints.

Project aims

Background information on current practices

With the increased popularity of transparent peer review, where reviews are made publicly available alongside a preprint or journal article, the target audience of formal referee reports no longer includes only the authors and journal editors, but also readers. As such, the online presentation of referee reports may have to evolve so that it enriches the experience of expert and non-expert readers. In the context of peer reviews linked to preprints, this aspect is particularly important, as in-depth reviews are an invaluable resource that provides context and expert analysis. The time is therefore right to take the next step and use the reports on refereed preprints to highlight specific preprints and to guide readers through the otherwise unnavigable volume of uncurated scientific information in preprints.

Important initiatives are currently underway to define the technical aspects of how to link reviews to preprints in a general way, how to standardize the machine-portability of transparent reviews, and how to leverage refereed preprints in journal-independent peer review or publish-review-curate workflows. For most users, however, the concept of refereed preprints remains rather new, and little is known about reader engagement with such preprints. How would readers search and browse preprints that have been reviewed by various entities? Which section or aspect of a highly technical, detailed formal review matters most when selecting which preprint to read or to trust? In what form should this information be presented to users? How do expert versus non-expert readers use referee reports linked to preprints?

In addition to provenance and other metadata related to peer reviews, it is of particular interest to delineate features that can be derived from the content of the reviews and that are worth extracting, summarizing, or visualizing for readers. Such features could include, but are not limited to, the expertise of the reviewers; a summary of key points; and highlights of different types of statements (e.g. critical, supportive, literature-supported, linked to requests for additional experiments or textual changes, related to data presentation or to novelty, presence of unsupported negative statements, tone of the review).
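
As a minimal illustration of extracting one such feature, the Python sketch below classifies individual referee-report statements into coarse categories; the categories, training sentences, and model choice are invented for illustration and are not derived from real reports or from EEB’s pipeline.

# Minimal sketch: classify referee-report statements into coarse types.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "The authors should repeat the experiment with an independent cell line.",
    "Please quantify the western blots and add appropriate statistics.",
    "The claim in the abstract overstates what the data show.",
    "The discussion would benefit from citing earlier work on this pathway.",
    "This is a rigorous and well-controlled study.",
    "The imaging data convincingly support the main conclusion.",
]
train_labels = [
    "experiment_request", "experiment_request",
    "textual_change", "textual_change",
    "supportive", "supportive",
]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_sentences, train_labels)

print(clf.predict(["Additional replicates are needed to support Figure 3."]))

In practice such a classifier would be trained on the labelled referee-report sections mentioned under “What’s needed for success” below, and more capable summarization models could replace this simple baseline.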

Making refereed preprints attractive and useful to a wide spectrum of readers will be a major driving force for the community’s adoption of this form of rapid scientific communication. Increasing the utility of refereed preprints will lead to higher visibility, which is an important incentive for authors to engage with preprint peer review platforms.

Overview of the challenge to overcome

To start experimenting with the aggregation of refereed preprints produced by various peer review platforms handling author-driven preprint submissions, including Review Commons and eLife’s Preprint Reviews, and integrating the peer reviews and their summaries next to preprints, we have built the experimental platform Early Evidence Base (EEB, https://eeb.embo.org). EEB explores how human curation, through peer review, could be combined with machine curation, through text mining, to aggregate and filter refereed preprints. We intend to use this platform as a sandbox to experiment with various implementations and learn how to improve readers’ user experience when searching and interacting with refereed preprints.

Reviews are typically semi-structured, with no universally applied format. The challenge is to identify conserved structural and semantic patterns that can be extracted as salient features to help readers find, filter, and understand preprints.

In terms of user interface and user experience, the challenge is to present the content of peer reviews in a compact but informative way. An additional challenge is to cater to both expert and non-expert readers.

The ideal outcome or output of the project

Demonstration of the impact of exposing key features and attributes from referee reports on readers’ engagement with refereed preprints.

Description of the intervention

The scope of this project would a priori be restricted to preprints linked to formal, in-depth reviews, typically organized within an author-driven submission process.

  1. Prioritization of features to extract from peer reviews based on user survey and user testing.
  2. Development of tools to capture or extract some of the features identified in #1.
  3. Implementation of feasible solutions developed in #2.
  4. Testing and evaluating implementations.

Plan for monitoring project outcome

  • Identification of feasible feature extraction strategies.
  • Benchmarking of feature extraction.
  • Beta testing and A/B testing of various implementations, and analysis of the impact on traffic, search, and attention.

What’s needed for success

Additional technology development

  • Additional preprint review services should be integrated into the EEB site to provide a broader diversity of refereed preprints and referee reports.
  • Training and benchmarking sets should be assembled for machine learning by labelling relevant sections and statements in referee reports.
  • AI tools should be developed to parse, summarize and classify features extracted from referee reports.

Feedback, beta testing, collaboration, endorsement

  • Collaboration with review services to enable integration and access to the content of referee reports and survey various audiences.
  • Public UI/UX recommendations based on user surveys and testing.
  • Beta-testing of display and rendering solutions.
  • Open source release of machine learning models and extraction tools.

Funding

  • User surveys and testing: UI/UX specialist
  • Feature extraction: machine learning specialist
  • Display and rendering: web developer.

Peeriodicals
https://asapbio.org/peeriodicals/

Organizer

The PubPeer Foundation, represented by Brandon Stell and Boris Barbour

Website or social media links

 https://peeriodicals.com

@peeriodicals

Current stage of development

Ongoing project

Project duration

Indefinite

Update

How has your project changed?

No changes.

Have you integrated any feedback received?

We received a useful suggestion to provide case studies for specific Peeriodicals applications to reduce any confusion arising from the flexibility of the platform. However, this has not yet been implemented.

Have you started any collaborations?

We made contact with the Novel Coronavirus Research Compendium to see whether Peeriodicals (or the backing PubPeer database) could help disseminate the NCRC reviews; this is under discussion. The BIMS project has made contact about a possible collaboration, and a discussion is being set up.

Project aims

Background information on current practices

There is widespread agreement that fashions imposed by the publishing industry have diverged from well-established criteria of scientific quality. Peeriodicals offers a flexible platform for individual scientists (or groups of scientists) to experiment with different forms of overlay or virtual journals, aiming to supplement, if not replace, the traditional journal functions of discovery and evaluation; in a peeriodical, any publication (preprint or traditional article, new or old) can be showcased and analysed. An excellent example of an individual peeriodical is https://peeriodicals.com/peeriodicals/theoretical-neuroscience

Overview of the challenge to overcome

The technology exists, but uptake is slow. Significant barriers appear to include building up an audience for a peeriodical and the effort required to provide or procure useful analysis. The same barriers are likely to apply to other existing and future initiatives.

The ideal outcome or output of the project

Increased uptake and experimentation with the platform by scientists, both as producers and consumers of analysis.

Description of the intervention

Publicity for the platform. Support from the scientific establishment would encourage participation from the research community.

Plan for monitoring project outcome

Uptake of the platform by scientists, both as producers and consumers of analysis, will be monitored.

What’s needed for success

Additional technology development

None.

Feedback, beta testing, collaboration, endorsement

All of the above would be welcomed.

Funding

None requested.

Open post-publication peer review
https://asapbio.org/open-post-publication-peer-review/

Organizer

Victor Venema, Grassroots Review Journals, University of Bonn, Germany

Website or social media links

Concept homepage

Development server

GIT

Future integrations

Mastodon

Twitter

Reddit

Current stage of development

The main design for the communities (review journals) is ready to be implemented. Multiple communication channels have been set up, apart from a newsletter/distribution list.

Project duration

Years, but for this sprint we can do a small part.

Project aims

Background information on current practices

It takes years to build up a reputable scientific journal, and the reputation of journals is important for scientific careers. This, together with copyright law, gives publishers enormous market power, reflected in sustained profit margins of 30 to 50%, poor service, and little innovation.

Overview of the challenge to overcome

Break the power of the publishers by putting the assessment of research quality back into the hands of the scientific community through an open post-publication peer review system. The more widely this system is accepted, the less important it becomes where a paper is published. To replace the current system, all studies need to be reviewed: articles and preprints, good and bad alike.

The ideal outcome or output of the project

A working prototype of a Grassroots post-publication peer review system based on WordPress Multisite, so that we can start inviting editorial teams to begin reviewing.

Description of the intervention

The design is ready in plain-text form, where every review is simply a blog post; see https://grassroots.is for details on the design. These blog posts should become automatically generated review pages, created by supplying a DOI and the names of the editors in charge of the review. Elements on such a page should not be plain text but machine readable, so that they can be distributed via an API and the ActivityPub protocol (pubfair). Starting a new journal should be automated, and a new journal should be able to reuse (all) reviews of an existing journal, making the review system federated.
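
As a minimal sketch of how a review page could be generated automatically from a DOI and a list of editors, the Python snippet below pulls the paper’s metadata from the public Crossref REST API and emits a machine-readable record from which a WordPress post or an ActivityPub payload could be built; the output field names are illustrative assumptions, not a finished schema.

# Minimal sketch: turn a DOI plus editor names into a machine-readable review-page stub.
import json
import requests

def review_page_stub(doi: str, editors: list[str]) -> dict:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]
    return {
        "doi": doi,
        "title": work.get("title", [""])[0],
        "authors": [
            f"{a.get('given', '')} {a.get('family', '')}".strip()
            for a in work.get("author", [])
        ],
        "editors": editors,
        "reviews": [],  # filled in as post-publication reviews arrive
    }

if __name__ == "__main__":
    page = review_page_stub("10.1000/example", ["Editor One", "Editor Two"])
    print(json.dumps(page, indent=2))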

Plan for monitoring project outcome

During the sprint I plan to work alongside the volunteers in designing and coding the system. Our shared code is on GitHub. It would be great to start a second federated server immediately, but I can also give access to my development server.

What’s needed for success

Additional technology development

The review journals should also be communities that connect researchers in a field. Systems to communicate meetings, projects, datasets, software, etc. are therefore also valuable, as are social media capabilities. Open social media systems (the Fediverse) are starting to make this possible: they already offer good micro-blogging systems (Mastodon, FediScience) and now a new system for events (Mobilizon).

Feedback, beta testing, collaboration, endorsement

A blog post on the Grassroots review system won the OpenUp Blog Competition.

https://www.openuphub.eu/community/blog/item/the-results-of-the-openup-blog-competition-for-young-researchers-students

Funding

We are working on a proposal for this system for the German Science Foundation.
