Well-Calibrated Regression Uncertainty in Medical Imaging with Deep Learning

Authored by
Max Heinrich Laves, Sontje Ihler, Jacob F. Fast, Lüder A. Kahrs, Tobias Ortmaier
Abstract

The consideration of predictive uncertainty in medical imaging with deep learning is of utmost importance. We apply estimation of predictive uncertainty by variational Bayesian inference with Monte Carlo dropout to regression tasks and show why predictive uncertainty is systematically underestimated. We suggest σ scaling with a single scalar value, a simple yet effective calibration method for both aleatoric and epistemic uncertainty. The performance of our approach is evaluated on a variety of common medical regression data sets using different state-of-the-art convolutional network architectures. In all experiments, σ scaling reliably recalibrates predictive uncertainty. It is easy to implement and maintains accuracy. Well-calibrated uncertainty in regression allows robust rejection of unreliable predictions or detection of out-of-distribution samples. Our source code is available at: github.com/mlaves/well-calibrated-regression-uncertainty.

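To illustrate the recalibration step summarized above, the following is a minimal NumPy sketch of single-scalar σ scaling: every predicted standard deviation is multiplied by one scalar s chosen to minimize the Gaussian negative log-likelihood on a held-out calibration set, which for a single scalar admits a closed-form solution. The function name and the synthetic data below are illustrative assumptions only; the authors' actual implementation is available in the linked repository.

```python
import numpy as np

def sigma_scaling_factor(y_true, mu_pred, sigma_pred):
    """Return the scalar s that rescales predicted standard deviations
    (sigma -> s * sigma) by minimising the Gaussian negative
    log-likelihood on a held-out calibration set.

    For a single scalar, setting the derivative of the NLL to zero
    gives the closed form s^2 = mean((y - mu)^2 / sigma^2).
    """
    s_squared = np.mean((y_true - mu_pred) ** 2 / sigma_pred ** 2)
    return np.sqrt(s_squared)

# Synthetic calibration split: predictions with errors of std 0.5,
# but an underestimated predicted uncertainty of 0.1.
rng = np.random.default_rng(0)
y_val = rng.normal(size=1000)
mu_val = y_val + rng.normal(scale=0.5, size=1000)
sigma_val = np.full(1000, 0.1)

s = sigma_scaling_factor(y_val, mu_val, sigma_val)
sigma_calibrated = s * sigma_val  # apply the same factor to test-time predictions
print(f"scaling factor s ~= {s:.2f}")  # roughly 5, correcting the underestimate
```

Because only a single scalar is fitted after training, the ranking of predictions by uncertainty and the network's point predictions are unchanged; only the scale of the reported uncertainty is corrected.
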
Organisational unit(s)
Institut für Mechatronische Systeme
External organisation(s)
Centre for Image Guided Innovation and Therapeutic Intervention (CIGITI)
University of Toronto
Type
Conference article in journal
Journal
Proceedings of Machine Learning Research
Volume
121
Pages
393-412
Number of pages
20
Publication date
2020
Publication status
Published
Peer-reviewed
Yes
ASJC Scopus subject areas
Artificial Intelligence, Software, Control and Systems Engineering, Statistics and Probability
Electronic version(s)
https://proceedings.mlr.press/v121/laves20a.html (Access: Open)