How to Evaluate the Added Value of a New Risk Factor to an Established Risk Prediction Tool?
Problem
Risk prediction tools are used as part of formal risk assessment for disease and health events in UK primary care. They identify patients at risk of disease, for whom preventative treatment can be offered. To improve the accuracy of risk prediction, new risk factors are being added to established risk prediction tools. However, current methods for evaluating the added value of new risk factors have been shown to be limited: they are insensitive to change, lack interpretability and clinical relevance, and do not incorporate the impact of costs in their assessment. These limitations can be addressed using health economic methodology.
Approach
A cost-effectiveness analysis (CEA) was performed using a decision tree framework. The CEA derived the incremental cost-effectiveness ratio (ICER) and the net monetary benefit, using the Youden Index and Harrell's C-Index as measures of effect. A probabilistic sensitivity analysis was performed (10,000 iterations), and a willingness-to-pay range of £0-£100,000 was used to estimate the probability that the new risk factor is cost-effective.
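To make these quantities concrete, the sketch below shows how the ICER, the net monetary benefit (NMB = willingness to pay × incremental effect − incremental cost) and a probabilistic sensitivity analysis over a £0-£100,000 willingness-to-pay range might be computed, taking the difference in Youden Index between the two risk tools as the incremental effect. This is a minimal sketch: all input values and distributions are hypothetical, chosen only for illustration, and are not results from the analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs -- illustrative only, not values from the analysis.
# Effect is the Youden Index (sensitivity + specificity - 1) of each tool.
youden_established = 0.60    # established risk prediction tool
youden_new_factor  = 0.65    # tool including the new risk factor
cost_established   = 250.0   # mean cost per patient assessed (GBP)
cost_new_factor    = 280.0   # mean cost including the new factor's test (GBP)

delta_effect = youden_new_factor - youden_established   # incremental effect
delta_cost   = cost_new_factor - cost_established       # incremental cost (GBP)

# Incremental cost-effectiveness ratio: extra cost per unit gain in effect.
icer = delta_cost / delta_effect

# Probabilistic sensitivity analysis: resample the uncertain incremental
# effect and cost, then evaluate the net monetary benefit (NMB) at each
# willingness-to-pay (WTP) threshold: NMB = WTP * delta_effect - delta_cost.
n_iter = 10_000
effect_draws = rng.normal(delta_effect, 0.02, n_iter)   # assumed standard error
cost_sd = 5.0                                           # assumed SD of cost (GBP)
cost_draws = rng.gamma(shape=(delta_cost / cost_sd) ** 2,
                       scale=cost_sd ** 2 / delta_cost, size=n_iter)

wtp_grid = np.linspace(0, 100_000, 201)                 # GBP 0 to 100,000
prob_cost_effective = np.array(
    [(wtp * effect_draws - cost_draws > 0).mean() for wtp in wtp_grid]
)

print(f"ICER: GBP {icer:,.0f} per unit gain in Youden Index")
print(f"P(cost-effective) at WTP of GBP 20,000: {prob_cost_effective[40]:.2f}")
```

The proportion of iterations with positive NMB at each threshold traces out a cost-effectiveness acceptability curve, which is how the probability of cost-effectiveness is usually reported.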
Findings
A CEA using a decision tree framework was shown to be an effective way of evaluating the added value of the new risk factor. Combining the change in effects and costs through the ICER provided more informative and interpretable evidence about the new risk factor to aid decision making between risk tools. Further, the probabilistic sensitivity analysis provided a measure of the uncertainty around these estimates. Caution was needed, however, when using measures of effect not directly derived from the decision tree, such as Harrell's C-Index. The method was also shown to account for disparities between calibration and discrimination measures.
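To illustrate why the choice of effect measure matters, the sketch below contrasts the Youden Index, which can be read directly from the sensitivity and specificity branches of a decision tree, with Harrell's C-Index, a rank-based concordance measure estimated on the underlying time-to-event data and therefore not derived from the tree itself. The cohort data are hypothetical and the C-Index implementation is a simplified version (ties in event times are not handled), given purely for illustration.

```python
import numpy as np

def youden_index(y_true, y_flagged):
    """Youden Index = sensitivity + specificity - 1, computed from the same
    2x2 classification (true status vs. high-risk flag) that populates the
    decision tree branches."""
    y_true, y_flagged = np.asarray(y_true, bool), np.asarray(y_flagged, bool)
    sensitivity = (y_flagged & y_true).sum() / y_true.sum()
    specificity = (~y_flagged & ~y_true).sum() / (~y_true).sum()
    return sensitivity + specificity - 1

def harrells_c(risk_score, time, event):
    """Harrell's C-Index: among comparable pairs, the proportion in which the
    patient with the higher risk score experiences the event sooner."""
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        for j in range(len(time)):
            if event[i] and time[i] < time[j]:     # pair is comparable
                comparable += 1
                if risk_score[i] > risk_score[j]:
                    concordant += 1.0
                elif risk_score[i] == risk_score[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical cohort: true disease status, high-risk flags from each tool,
# a continuous risk score, and time-to-event data with censoring.
status   = [1, 1, 1, 0, 0, 0, 0, 1]
flag_old = [1, 0, 1, 0, 1, 0, 0, 1]     # established tool
flag_new = [1, 1, 1, 0, 1, 0, 0, 1]     # tool with the new risk factor
risk_new = [0.9, 0.7, 0.8, 0.2, 0.6, 0.1, 0.3, 0.85]
time     = [2.0, 5.0, 3.5, 10.0, 8.0, 10.0, 9.0, 4.0]   # years of follow-up
event    = [1, 1, 1, 0, 0, 0, 0, 1]                     # 1 = event observed

print("Youden Index, established tool:", youden_index(status, flag_old))
print("Youden Index, new risk factor: ", youden_index(status, flag_new))
print("Harrell's C, new risk factor:  ", round(harrells_c(risk_new, time, event), 3))
```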
Consequences
A CEA is a novel method for comparing two risk prediction tools and evaluating the added value of a new risk factor to an established risk prediction tool. It identifies the added value of a new risk factor by encompassing statistical and clinical improvement alongside cost consequences, providing a better evidence base for the use of new risk factors. As well as informing national guidance and commissioning groups, the methodology can be used as part of reporting guidelines and recommendations for validating risk prediction models; wider use would help standardise results across studies and allow a league table style approach to rank and compare new risk factors based on their added value.