By Laura Cowen
There are too many poorly validated models for predicting cardiovascular disease (CVD) risk in the general population, researchers report in The BMJ.
"Rather than developing new models, researchers should make better use of available evidence by validating, making head-to-head comparisons, and tailoring the promising existing models", Johanna Damen (University Medical Center Utrecht, the Netherlands) and colleagues observe.
The findings arose from a systematic review of 212 articles published up to June 2013, describing the development of 363 models for predicting CVD risk in the general population. The papers also included 473 external validations of such models.
There was considerable methodological variation among the models, say the researchers, particularly in the study populations (notably age, gender, and other patient characteristics) and predictor and outcome definitions.
For example, there were more than 40 different definitions of fatal or nonfatal coronary heart disease. And although this was the most commonly studied outcome (33% of models), 20 other outcomes were identified, including CVD, stroke, myocardial infarction, and atrial fibrillation.
The prediction horizon was typically 10 years (58% of models) but was not specified for 13% of the models. In addition, for 25% of models the information needed to calculate an individual's risk was missing.
The most common predictors included in the models were smoking, age, blood pressure, and cholesterol, but more than 100 different predictors were described in total, most of which appeared in only one or two models.
Only 36% of the models developed were externally validated, with just 19% undergoing independent external validation and 10% being validated more than 10 times.
Furthermore, Damen and team point out that the extended models using the less common predictors were rarely externally validated.
"This suggests that there is more emphasis placed on repeating the process of identifying predictors and developing new models rather than validating, tailoring, and improving existing CVD risk prediction models", they say.
When the researchers looked at the studies describing external validation, they found that model performance was heterogeneous, and that key performance measures, namely discrimination and calibration, were reported in only 65% and 58% of validations, respectively.
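For readers unfamiliar with these two measures: discrimination is how well a model separates people who go on to have an event from those who do not (often summarised as the C-statistic), while calibration is how closely predicted risks match observed event rates. The short Python sketch below illustrates both on entirely made-up data; it is not taken from the review, and real validation studies use larger cohorts and more refined calibration methods.

```python
# Illustrative sketch only: two common external-validation metrics for a
# risk prediction model -- discrimination (C-statistic) and
# calibration-in-the-large (observed/expected event ratio).
# The risks and outcomes below are hypothetical example data.

def c_statistic(risks, outcomes):
    """Probability that a randomly chosen case has a higher predicted
    risk than a randomly chosen non-case (ties count as half)."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    noncases = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for c in cases:
        for n in noncases:
            if c > n:
                concordant += 1.0
            elif c == n:
                concordant += 0.5
    return concordant / (len(cases) * len(noncases))

def observed_expected_ratio(risks, outcomes):
    """Calibration-in-the-large: observed events divided by the sum of
    predicted risks. A well-calibrated model gives a ratio near 1."""
    return sum(outcomes) / sum(risks)

# Hypothetical 10-year CVD risk predictions for eight people,
# and whether each actually had an event (1) or not (0)
risks = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40]
outcomes = [0, 0, 0, 1, 0, 1, 1, 1]

print(f"C-statistic: {c_statistic(risks, outcomes):.2f}")
print(f"O/E ratio:   {observed_expected_ratio(risks, outcomes):.2f}")
```

In this toy example the model discriminates well (C-statistic near 1) but substantially under-predicts risk (O/E well above 1), which is exactly the kind of mismatch that only an external validation reporting both measures can reveal.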
Damen et al conclude that "[m]ost developed models are inadequately reported to allow external validation or implementation in clinical practice".
Editorialist Tim Holt, from the University of Oxford in the UK, agrees. He says: "We need better studies of these models and, most importantly, the translation of CVD risk recognition into tangible and measurable clinical benefit for patients and the general public."
Licensed from medwireNews with permission from Springer Healthcare Ltd. ©Springer Healthcare Ltd. All rights reserved. Neither of these parties endorse or recommend any commercial products, services, or equipment.