An evaluation of sample size requirements for developing risk prediction models with binary outcomes.

Published version
Peer-reviewed


Authors

Pavlou, Menelaos 
Ambler, Gareth 
Qu, Chen 
Seaman, Shaun R 
White, Ian R 

Abstract

BACKGROUND: Risk prediction models are routinely used to assist in clinical decision making. A small sample size for model development can compromise model performance when the model is applied to new patients. For binary outcomes, the calibration slope (CS) and the mean absolute prediction error (MAPE) are two key measures on which sample size calculations for the development of risk models have been based. CS quantifies the degree of model overfitting while MAPE assesses the accuracy of individual predictions. METHODS: Recently, two formulae were proposed to calculate the sample size required, given anticipated features of the development data such as the outcome prevalence and c-statistic, to ensure that the expectation of the CS and MAPE (over repeated samples) in models fitted using maximum likelihood estimation (MLE) will meet prespecified target values. In this article, we use a simulation study to evaluate the performance of these formulae. RESULTS: We found that both formulae work reasonably well when the anticipated model strength is not too high (c-statistic < 0.8), regardless of the outcome prevalence. However, for higher model strengths the CS formula underestimates the sample size substantially. For example, for c-statistic = 0.85 and 0.9, the sample size needed to be increased by at least 50% and 100%, respectively, to meet the target expected CS. On the other hand, the MAPE formula tends to overestimate the sample size for high model strengths. These patterns were more pronounced at higher outcome prevalence than at lower prevalence. Similar results were obtained when the outcome was time to event with censoring. Given these findings, we propose a simulation-based approach, implemented in the new R package 'samplesizedev', to correctly estimate the sample size even for high model strengths. The software can also calculate the variability in CS and MAPE, thus allowing for assessment of model stability.
CONCLUSIONS: The CS and MAPE formulae suggest sample sizes that are generally appropriate for use when the model strength is not too high. However, they tend to be biased for higher model strengths, which are not uncommon in clinical risk prediction studies; in such cases, our proposed adjustments to the sample size calculations will be relevant.
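The simulation-based approach described in the abstract can be sketched as follows. This is an illustrative Python analogue, not the authors' 'samplesizedev' R package, whose exact API is not shown in this record; the data-generating model (standard-normal predictors with equal coefficients), the predictor count, and the target c-statistic and prevalence below are all assumptions chosen for the sketch. Repeatedly simulating development data of size n, fitting a logistic model by maximum likelihood, and measuring the calibration slope (CS) on a large validation sample gives a Monte-Carlo estimate of the expected CS, which can then be compared against a target such as 0.9:

```python
import numpy as np
from statistics import NormalDist


def fit_logistic(X, y, iters=30):
    """Logistic regression by Newton-Raphson (maximum likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        p = np.clip(p, 1e-9, 1 - 1e-9)            # guard against separation
        W = p * (1.0 - p)
        H = (X * W[:, None]).T @ X                # observed information
        H += 1e-6 * np.eye(X.shape[1])            # tiny ridge for stability
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta


def expected_cs(n, c_stat=0.75, prev=0.2, n_pred=5, reps=200,
                n_val=5000, seed=1):
    """Monte-Carlo estimate of the expected calibration slope for a
    logistic model with `n_pred` standard-normal predictors developed
    on samples of size `n` by maximum likelihood."""
    rng = np.random.default_rng(seed)
    # For normal predictors, ||beta|| ~= sqrt(2) * Phi^{-1}(c-statistic)
    # gives approximately the target c-statistic.
    b_norm = np.sqrt(2.0) * NormalDist().inv_cdf(c_stat)
    b = np.full(n_pred, b_norm / np.sqrt(n_pred))
    # Choose the intercept by bisection so that the marginal event
    # probability matches the target prevalence.
    lp = rng.standard_normal((200_000, n_pred)) @ b
    lo, hi = -10.0, 10.0
    for _ in range(60):
        a = 0.5 * (lo + hi)
        if (1.0 / (1.0 + np.exp(-(a + lp)))).mean() > prev:
            hi = a
        else:
            lo = a
    alpha = 0.5 * (lo + hi)

    cs = []
    for _ in range(reps):
        # development sample
        X = rng.standard_normal((n, n_pred))
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-(alpha + X @ b)))
        beta_hat = fit_logistic(np.column_stack([np.ones(n), X]),
                                y.astype(float))
        # large validation sample; CS = slope on the fitted linear
        # predictor in a logistic recalibration model
        Xv = rng.standard_normal((n_val, n_pred))
        yv = rng.random(n_val) < 1.0 / (1.0 + np.exp(-(alpha + Xv @ b)))
        lp_hat = np.column_stack([np.ones(n_val), Xv]) @ beta_hat
        recal = fit_logistic(np.column_stack([np.ones(n_val), lp_hat]),
                             yv.astype(float))
        cs.append(recal[1])
    return float(np.mean(cs))
```

Increasing `n` until `expected_cs(n)` reaches the chosen target (e.g. 0.9) mimics the simulation-based sample size calculation; at small `n` the mean CS falls below 1, reflecting overfitting of the maximum likelihood fit. The per-replicate spread of the slopes could likewise be used to assess model stability, as the abstract describes.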

Description

Acknowledgements: The authors thank Dr Khadijeh Taiyari who contributed to an early version of this work.

Keywords

Calibration, Discrimination, Sample size, Simulation, Humans, Sample Size, Risk Assessment, Models, Statistical, Computer Simulation, Algorithms

Journal Title

BMC Med Res Methodol

Journal ISSN

1471-2288

Volume Title

24

Publisher

Springer Science and Business Media LLC

Sponsorship

MRC (Unknown)
National Institute for Health and Care Research (IS-BRC-1215-20014)
This work was supported by the Medical Research Council grant MR/P015190/1. R.O. and G.A. were supported by the National Institute for Health and Care Research, University College London Hospitals, Biomedical Research Centre. I.R.W. was supported by the Medical Research Council Programmes MC_UU_12023/29 and MC_UU_00004/09. S.R.S. was funded by UKRI (Unit programme numbers MC UU 00002/10) and was supported by the National Institute for Health Research (NIHR) Cambridge Biomedical Research Centre (BRC-1215-20014). The views expressed are those of the authors and not necessarily those of PHE, the NHS, the NIHR or the Department of Health and Social Care.