(2) Development of the Tool
Owing to its wide accessibility, Microsoft Excel was chosen as the platform for the COR Tool. A basic spreadsheet was created that, through four steps, enables a researcher to determine the comprehensiveness of outcome reporting in a given study.
The above process allows the user to define the reference standard for comprehensive outcome reporting, providing flexibility and allowing the tool to be used in the specific context of the condition and intervention(s) being studied. A trialist looking to select outcomes for a trial can use this tool directly to obtain a full list of relevant outcomes, comprising all outcome areas presented in Table 1, that should be reported in a trial. A systematic reviewer, critical appraiser, or clinician can use this tool to assess each trial for comprehensiveness of outcome inclusion and measurement based on (1) whether the outcomes were reported, (2a) whether they were measured, and (2b) whether the measurement/definition was in keeping with the pre-specified standards. For the purposes of a systematic review, each clinical trial is assessed on a separate programmed Excel sheet; the current iteration of the tool accommodates up to fifty studies. There are four steps to assessing individual trials, using this reference as the standard for comparison.
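To make the structure of this per-trial assessment concrete, a minimal sketch in Python is shown below. The tool itself is implemented entirely in Excel; the class, field, and example names here are illustrative assumptions rather than part of the tool.

```python
from dataclasses import dataclass

@dataclass
class DomainAssessment:
    """One outcome domain (OD) within an outcome area (OA), judged for a single trial.

    Field names are illustrative only; in the tool these judgements are
    entered via the Excel sheet.
    """
    outcome_area: str           # one of the six outcome areas (as in Table 1)
    outcome_domain: str         # specific domain within that area
    applicable: bool = True     # False if the domain is deemed "not applicable"
    reported: bool = False      # (1) was the outcome reported?
    measured: bool = False      # (2a) was it measured?
    per_standard: bool = False  # (2b) did the measurement/definition match the pre-specified standard?

# Hypothetical example entries for a single trial (domain names invented for illustration)
trial_assessment = [
    DomainAssessment("Maternal", "Hypothetical domain A", reported=True, measured=True, per_standard=True),
    DomainAssessment("Neonatal", "Hypothetical domain B", reported=False),
]
```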
Technical Considerations: Once data are entered into the Excel spreadsheet, scores related to outcome reporting and measurement are calculated for each outcome area and translated into a colour gradient. These scores and colours are then transferred to generate a heatmap representing all studies being assessed, which serves as the output analysis of the tool. For each outcome area (OA), the score is calculated as follows and is adjusted to account for any outcome domains (OD) that have been determined to be “not applicable”: \(\left(\frac{\#\ of\ Reported\ OD\ within\ OA}{\#\ of\ Relevant\ OD\ within\ OA}+\frac{\#\ of\ Properly\ Measured\ OD\ within\ OA}{\#\ of\ Relevant\ OD\ within\ OA}\right)\). This formula places each outcome area into quartiles of comprehensiveness, which translate into the colours on the heatmap: yellow highlights outcome areas that are not represented, and a gradient of blue represents outcome areas that are reported, with darker shades of blue indicating more complete reporting within that outcome area. Outcome domains are not shown individually on the heatmap, but rather as part of the outcome area (one of six) to which they belong. Any outcome area deemed not applicable in its entirety is automatically shown as grey. Red and green were deliberately avoided to increase accessibility for users who may find these colours difficult to distinguish.
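A simplified sketch of this scoring and colour-banding logic, continuing the illustrative records above, is given below. It is an approximation of the Excel behaviour under stated assumptions: the quartile cut-points and colour names are assumptions, not the tool's exact values.

```python
def outcome_area_score(domains):
    """Score one outcome area (OA) per the formula above:
    (# reported OD / # relevant OD) + (# properly measured OD / # relevant OD).

    `domains` is a list of DomainAssessment records belonging to that OA.
    Returns None when the entire area is "not applicable".
    """
    relevant = [d for d in domains if d.applicable]
    if not relevant:
        return None  # whole OA not applicable -> shown grey on the heatmap
    reported = sum(d.reported for d in relevant)
    properly_measured = sum(d.measured and d.per_standard for d in relevant)
    return reported / len(relevant) + properly_measured / len(relevant)  # range 0 to 2


def heatmap_colour(score):
    """Map a score to an illustrative colour band mirroring the yellow /
    blue-gradient / grey scheme; the cut-points shown are assumed quartiles."""
    if score is None:
        return "grey"          # outcome area not applicable in its entirety
    if score == 0:
        return "yellow"        # outcome area not represented at all
    if score <= 0.5:
        return "light blue"
    if score <= 1.0:
        return "blue"
    if score <= 1.5:
        return "medium blue"
    return "dark blue"         # most complete reporting and measurement
```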
Finally, the COR tool assesses a few additional items not directly related to comprehensiveness of outcome reporting. These are as follows:
  1. If a core outcome set was developed for the condition, were all outcomes of the core outcome set used in the trial?: A core outcome set (COS) is a standard, minimum set of outcomes, derived through patient and stakeholder input, that must be reported in all trials on the topic.2 However, we have recently shown that, despite adherence to guidelines on conduct and reporting, COS for obstetric conditions do not necessarily represent comprehensive outcome reporting.16 This question assesses adherence to a published core outcome set, which is now considered the bare minimum that should be reported.
  2. Were intermediary or surrogate outcomes reported?: Surrogate outcomes, which are cheaper to measure and can provide robust statistical significance, are sometimes chosen by trialists in place of patient-centric or clinically meaningful endpoints such as death or functional capacity.19, 20 In a systematic review of 109 trials that used surrogates as a primary outcome, only 35% discussed their clinical relevance and the rationale for their inclusion.21 Where a trial appears deficient in the inclusion of core outcome areas, this section provides information on whether the scope of the trial was merely to study associations between interventions and surrogate measures that may be directly or indirectly related to patient-centric outcomes, thereby assisting the researcher in drawing relevant conclusions.
  3. Were the study conclusions supported by the reported outcomes?: Obstetric trials sometimes claim the benefit of one intervention over another based on a narrow set of outcomes, for example only maternal outcomes and no neonatal outcomes. While it must be acknowledged that, owing to funding, resource, and time constraints, not every trial can measure outcomes from all areas, it remains important that the conclusions drawn clearly state these limitations and do not conclude broadly that a certain intervention is the preferred one.
  4. Was the abridged conclusion in the abstract an accurate representation of the scope of the study, based on the outcomes selected?: Although a manuscript may draw appropriate conclusions that consider the study’s scope and limitations, the abstract, constrained by a rigid word count and often the only part of the paper read by busy clinicians, may not accurately represent the study findings.22 Such omissions could influence clinical practice and patient care, and therefore need to be addressed.
Each of these questions is accompanied by a drop-down menu of pre-generated options; each response is associated with a shade of burgundy in the final heatmap, providing additional information regarding the reporting of outcomes. Darker shades of burgundy were chosen to indicate ‘better quality’ outcome reporting.
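Purely as an illustration of this colour coding, such a mapping might resemble the sketch below; the actual drop-down options and shades used in the tool are not reproduced here and the response labels are hypothetical.

```python
# Illustrative only: hypothetical drop-down responses for one additional item,
# ordered from 'better quality' reporting (darker burgundy) to poorer reporting (lighter).
BURGUNDY_SHADES = {
    "Yes": "dark burgundy",
    "Partially": "medium burgundy",
    "No": "light burgundy",
}
```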
After the tool was programmed, each aspect was systematically tested to refine the user interface and troubleshoot any programming glitches in scoring and heatmap generation.