Common evidence gaps in point-of-care diagnostic test evaluation: a review of horizon scan reports

Talk Code: 
EP1C.08
Presenter: 
Jan Verbakel
Co-authors: 
Jan Y Verbakel, Philip J Turner, Matthew J Thompson, Annette Plüddemann, Christopher P Price, Bethany Shinkins, Ann Van den Bruel
Author institutions: 
Nuffield Department of Primary Care Health Sciences, University of Oxford; Primary Care Innovation Lab, Department of Family Medicine, University of Washington; Test Evaluation Group, AUHE, Leeds Institute of Health Sciences, University of Leeds

Problem

Since 2008, the Oxford Diagnostic Horizon Scan Programme has been identifying and summarizing evidence on new and emerging diagnostic technologies relevant to primary care. We used these reports to determine the sequence and timing of evidence for new diagnostic tests, and to identify common evidence gaps in this process.

Approach

Design: systematic overview of diagnostic horizon scan reports

Primary outcome measures: We obtained the primary studies referenced in each horizon scan report (n=40) and extracted details of study size, clinical setting, and design characteristics. In particular, we assessed whether each study evaluated test accuracy, test impact, or cost-effectiveness. The evidence for each test was mapped against the Horvath framework for diagnostic test evaluation.


Findings

Results: We extracted data from 500 primary studies. Most diagnostic technologies (71.2%) underwent assessment of clinical performance (i.e. the ability to detect a clinical condition), with very few progressing to comparative clinical effectiveness (10.0%) or cost-effectiveness evaluation (8.6%), even in the more established and frequently reported research domains, such as cardiovascular disease. The median time to complete an evaluation cycle was 9 years (IQR 5.5–12.5 years). The sequence of evidence generation was typically haphazard, and some diagnostic tests appear to have been implemented in routine care without completing essential evaluation stages such as clinical effectiveness.

Consequences

Conclusions: Evidence generation for new diagnostic tests is slow and tends to focus on accuracy, overlooking other test attributes such as impact, implementation, and cost-effectiveness. Evaluating this dynamic cycle, and feeding data from clinical effectiveness back to refine analytical and clinical performance, are key to improving the efficiency of diagnostic test development and its impact on clinically relevant outcomes. While the ‘roadmap’ of steps needed to generate evidence is reasonably well delineated, we provide evidence on the complexity, length, and variability of the actual process that many diagnostic technologies undergo.


Submitted by: 
Jan Verbakel
Funding acknowledgement: 
JV, PJT, MT, AP, CP, BS, AVdB are supported through the National Institute for Health Research (NIHR) Diagnostic Evidence Co-operative Oxford at Oxford Health Foundation Trust (award number IS_DEC_0812_100).