Transparency and reporting characteristics of COVID-19 randomized controlled trials | BMC Medicine


This study is part of the COVID-NMA initiative (PROSPERO CRD42020182600) [1, 2, 21]. The first two pillars of this initiative are the living mapping and living evidence synthesis of all randomized controlled trials evaluating treatments and preventive interventions for COVID-19. All results are updated weekly and made available on an open-access platform [1, 2, 21].

The third pillar of this initiative, described in this manuscript, is the monitoring of study reports in terms of transparency and reporting (protocol available on Zenodo) [1]. Owing to context and resource limitations, the scope was reduced to studies of pharmacological treatments and to assessing the transparency, completeness, and consistency of reporting. Given the role of preprints in scholarly communication, we also added a comparison between preprints and their related peer-reviewed publications.

Study design

We conducted a systematic review of randomized controlled trials (RCTs) evaluating treatments for COVID-19, published up to May 31, 2021.

Eligibility criteria

We included RCTs evaluating pharmacological treatments such as antivirals, interferons, other antimicrobials, nonsteroidal anti-inflammatory drugs, vitamins, kinase inhibitors, corticosteroids, monoclonal antibodies, immunosuppressants, and antithrombotics, as well as convalescent plasma and advanced therapy medicinal products (ATMPs).

Studies evaluating non-pharmacological interventions (e.g., prone positioning, physical therapy), pharmacological treatments for long COVID, and preventive interventions, including vaccines, were excluded. Studies that did not randomly assign patients to a treatment arm (e.g., quasi-randomized studies, phase 1 studies, single-arm studies) and modeling studies of interventions for COVID-19 were also excluded. We included studies published as research articles (i.e., full reports) and excluded other publication formats (e.g., conference abstracts or commentaries). We only included studies written in English.

Search strategy

The search strategy was developed in collaboration with an information specialist from Cochrane’s Editorial and Methods department as part of a living systematic review.

The search strategy evolved over time and relied on two high-quality secondary sources: the Epistemonikos L·OVE COVID-19 platform [22] and the Cochrane COVID-19 Study Register. We also searched the Retraction Watch Database for retracted studies.

The search strategy and data sources are listed in Supplementary File 1: Table S1. The last search was conducted on May 31, 2021.

Two reviewers independently screened all retrieved titles and abstracts using Rayyan [23]. Discrepancies were resolved by consensus between the two reviewers; when necessary, a third reviewer was consulted to resolve disagreements.


Pairing preprints with related peer-reviewed publications

The search allowed the identification of both preprints and peer-reviewed publications.

For all studies first published as a preprint, we systematically searched weekly for a subsequent publication in a peer-reviewed journal using a preprint tracker (last search October 7, 2021) [24]. We entered the preprint DOI, preprint location, and preprint dates, and the tracker returned a list of candidate publications. The candidate list of preprint–publication pairs is sorted by decreasing probability of a preprint–publication match. A reviewer screened each pair and identified the publication reporting the corresponding study results.

Data extraction

We designed a standardized online data extraction form on the COVID-NMA platform covering general study characteristics, transparency indicators, and completeness and consistency of reporting. For studies published both as a preprint and as a peer-reviewed journal publication, we evaluated the first publicly available report.

To reduce extraction errors and ensure calibration, two assessors were trained on a sample of 20 trials, which they each assessed independently, guided by verbal and written instructions. The reviewers discussed each assessment item and reached consensus for these 20 trials. All included studies were then extracted by a single reviewer. Interrater agreement between the two reviewers was good, with 96.6% agreement and a kappa coefficient of 0.87 (95% CI, 0.83–0.92).
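The kappa coefficient above corrects the raw percentage agreement for agreement expected by chance. A minimal sketch of Cohen's kappa for two raters' yes/no judgments (the ratings below are hypothetical, not the study's data):

```python
# Illustrative sketch: Cohen's kappa for two raters' yes/no judgments,
# as used to quantify interrater agreement during calibration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's marginal frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the raters' marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical calibration ratings: 9/10 raw agreement, kappa ≈ 0.78.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
b = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))
```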

General characteristics of the studies

We extracted study design, number of arms, sample size, setting (inpatient vs. outpatient care), number of centers, blinding, type of publication (preprint or peer-reviewed journal), subsequent publication of preprint studies, and funding sources (i.e., private/industry sources or public sources, which are mainly government funds). We also extracted the type of treatment and the disease severity of the included participants [25].

Transparency indicators

Transparency indicators refer to accessible sources of information, such as the protocol, registration, and statistical analysis plan, which are essential for understanding what was planned and what was done. We considered the following transparency indicators:

  1) Access to study documents: we checked whether we had access to the protocol and the statistical analysis plan and whether they were available in English.

  2) Trial registration: we assessed whether trials were registered, using the registration number provided in the manuscript or associated documents. If none was reported, the study was considered unregistered unless we retrieved the registration number from other sources (e.g., contact with the authors). We assessed whether registration was prospective (i.e., before the start of recruitment) and whether trial results were posted in the registry, when the registry had a designated field for investigators to report trial results; the following primary registries offer this option: ClinicalTrials.gov, EU Clinical Trials Register, ISRCTN Registry, DRKS (German Clinical Trials Register), jRCT (Japan Registry of Clinical Trials), and ANZCTR (Australian New Zealand Clinical Trials Registry). A mere reference to the published report in the registry was not counted as posted results.

  3) Data sharing statement: we searched the report, its appendix, and the online version of the report for a data sharing statement, i.e., a statement by the authors of whether, how, and when they will share individual participant data. Where available, we also retrieved information from the data sharing section of the relevant study registry. We accepted any type of data sharing statement, with no restriction on the sharing mechanism (e.g., upon email request, online repository), and extracted the type of data sharing.


Completeness of reporting

We systematically assessed whether the study report and protocol, when available, conformed to the Consolidated Standards of Reporting Trials (CONSORT) 2010 statement [11, 26]. We focused on 10 CONSORT items considered most important because they are often incompletely reported and are necessary, when conducting a systematic review, to assess the risk of bias and collect outcome data [27]. Completeness of reporting was assessed with the COBPeer tool (Additional file 1: Table S2) [27]. For each item extracted, the COBPeer tool lists the CONSORT item and its associated sub-items and specifies what should be reported, as defined in the CONSORT 2010 Explanation and Elaboration paper [11, 27]. Assessors indicated whether the requested information was reported for each sub-item (yes/no). Each item was then rated as “fully reported” if all sub-items were adequately reported, “partially reported” if at least one sub-item was missing, and “not reported” if all sub-items were missing. For the evaluation of the CONSORT items, we systematically considered the primary outcome of the report. When the primary outcome could not be clearly identified, we considered the outcome reported in the objectives; when none was reported, we assessed the completeness of reporting of all outcomes reported in the publication and recorded the least adequately reported outcome.
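The three-level rating rule above can be sketched as a small classifier. The function name and input format below are illustrative, not part of the COBPeer tool:

```python
# Sketch of the per-item rating described above: each CONSORT item is
# judged from the yes/no assessments of its sub-items.
def classify_item(subitems_reported):
    """subitems_reported: list of booleans, one per CONSORT sub-item
    (True = adequately reported)."""
    if all(subitems_reported):
        return "fully reported"       # every sub-item adequately reported
    if not any(subitems_reported):
        return "not reported"         # no sub-item reported at all
    return "partially reported"       # at least one sub-item missing

print(classify_item([True, True, True]))   # → fully reported
print(classify_item([True, False, True]))  # → partially reported
print(classify_item([False, False]))       # → not reported
```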

Also Read :  The Gustave Roussy Institute Integrates Synapse Platform to Ensure Complex Prescription Safety

In addition to the CONSORT-related items, we assessed whether the authors provided information on funding, conflicts of interest of the principal investigators and study statistician, and ethics approval.

Consistency of reporting (i.e., primary outcome switching)

We assessed the first publicly available report for consistency between what was planned and reported in the registry and what was reported in the publication. In particular, we checked for a switch of the primary outcome between the registry and the report. A primary outcome switch was defined as adding, omitting, or changing a primary outcome (i.e., its variable of interest, time frame, or metric). Studies that did not include timing information in the report or in the study registration were examined only for a change in the variable of interest.
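The switch definition above amounts to comparing the set of registered primary outcomes with the set of reported ones. A minimal sketch, in which the function name and outcome tuples are hypothetical:

```python
# Hypothetical sketch of the outcome-switching rule described above:
# a switch is any addition, removal, or change of a primary outcome's
# variable of interest, time frame, or metric between registry and report.
def outcome_switched(registered, reported):
    """registered/reported: sets of (variable, time_frame, metric) tuples.
    Any difference between the sets counts as a switch."""
    return registered != reported

registry = {("all-cause mortality", "day 28", "proportion")}
report = {("all-cause mortality", "day 14", "proportion")}  # time frame changed
print(outcome_switched(registry, report))  # → True
```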

All available registration platforms were used to evaluate outcome switching. If the study registration was modified after the study start date, we considered the last registry entry before the start of the study, where available. We checked whether outcome switching was disclosed in the report. Explanations and justifications were considered valid as long as the authors mentioned the change of primary outcome in the report (e.g., in the introduction or discussion sections).

Comparison between preprint reports and related peer-reviewed journal publications

For preprints subsequently published in a peer-reviewed journal, we compared the reporting in the first publicly available preprint to that in the peer-reviewed publication. Changes between the preprint and the peer-reviewed journal publication were classified as “added” information (i.e., information missing from the preprint but reported in the publication) or “removed” information (i.e., information provided in the preprint but absent from the publication) [28]. In addition, we assessed whether the primary outcome changed between the preprint and the peer-reviewed journal publication.

Data analysis

The descriptive analysis consisted of frequencies, percentages, and medians with interquartile ranges. We also reported absolute risk differences with 95% confidence intervals (Wald method) to compare reporting between preprints and subsequent peer-reviewed publications.
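The Wald interval for an absolute risk difference between two proportions can be sketched as follows; the counts below are hypothetical (e.g., an item reported in x1/n1 preprints vs. x2/n2 peer-reviewed publications), not results from the study:

```python
# Illustrative sketch of the Wald 95% CI for an absolute risk difference
# between two independent proportions.
import math

def risk_difference_wald(x1, n1, x2, n2, z=1.96):
    """Return (RD, lower, upper) for p1 - p2 with a Wald interval:
    RD ± z * sqrt(p1(1-p1)/n1 + p2(1-p2)/n2)."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# Hypothetical counts: 60/100 preprints vs. 45/100 publications.
rd, lower, upper = risk_difference_wald(60, 100, 45, 100)
print(f"RD = {rd:.2f}, 95% CI ({lower:.2f} to {upper:.2f})")
```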
