Well-Calibrated Predictive Uncertainty in Medical Imaging with Bayesian Deep Learning

Written by
Max-Heinrich Viktor Laves
Supervised by
Tobias Ortmaier
Abstract

The use of medical imaging has revolutionized modern medicine over the last century. It has helped provide insight into human anatomy and physiology, and many diseases and pathologies can only be diagnosed with the use of imaging techniques. Due to increasing availability and decreasing costs, the number of medical imaging examinations is growing continuously, resulting in a huge amount of data that has to be assessed by medical experts. Computers can be used to assist in and automate the process of medical image analysis, and recent advances in deep learning allow this to be done with reasonable accuracy and on a large scale. The biggest disadvantage of these methods in practice is their black-box nature: although they achieve the highest accuracy, their acceptance in clinical practice may be limited by their lack of interpretability and transparency. These concerns are reinforced by the core problem that this dissertation addresses: the overconfidence of deep models in incorrect predictions. How do we know if we do not know?

This thesis deals with Bayesian methods for estimating predictive uncertainty in medical imaging with deep learning. We show that the uncertainty obtained from variational Bayesian inference is miscalibrated and does not represent the predictive error well. To quantify miscalibration, we propose the uncertainty calibration error (UCE), which alleviates disadvantages of existing calibration metrics. Moreover, we introduce logit scaling for deep Bayesian Monte Carlo methods to calibrate uncertainty after training. Calibrated deep Bayesian models better detect false predictions and out-of-distribution data.

Bayesian uncertainty is further leveraged to reduce the economic burden of labeling the large data sets needed to train deep models. We propose BatchPL, a sample acquisition scheme that selects highly informative samples for pseudo-labeling in self- and unsupervised learning scenarios. The approach achieves state-of-the-art performance on both medical and non-medical classification data sets. Many medical imaging problems go beyond classification. We therefore extend the estimation and calibration of predictive uncertainty to deep regression (sigma scaling) and evaluate it on several medical imaging regression tasks. Finally, to mitigate the problem of hallucinations in deep generative models, we provide a Bayesian approach to the deep image prior (MCDIP), which is not affected by hallucinations, as the model only ever has access to a single image.
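
The UCE mentioned above follows the same binning idea as the expected calibration error, but bins samples by uncertainty instead of confidence. The following is a minimal NumPy sketch of that idea, not the thesis implementation; the function name, the choice of ten bins, and the assumption that errors and uncertainties are already normalized to [0, 1] are illustrative.

```python
import numpy as np

def uncertainty_calibration_error(err, uncert, n_bins=10):
    """Sketch of a UCE-style metric.

    err:    per-sample error in [0, 1] (e.g. 1.0 if misclassified, else 0.0)
    uncert: per-sample predictive uncertainty, normalized to [0, 1]
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(err)
    uce = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        # Group samples whose uncertainty falls into this bin.
        mask = (uncert > lo) & (uncert <= hi)
        if mask.any():
            # Gap between mean error and mean uncertainty in the bin,
            # weighted by the fraction of samples the bin contains.
            uce += (mask.sum() / n) * abs(err[mask].mean() - uncert[mask].mean())
    return uce
```

A perfectly calibrated model would show equal error and uncertainty in every bin, yielding a value of zero.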
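Logit scaling transfers the idea of temperature scaling to Monte Carlo methods such as MC dropout. A minimal PyTorch sketch under stated assumptions: `model` is a hypothetical network containing dropout layers, and the scalar temperature `T` has been fit after training on a held-out validation set (e.g. by minimizing the negative log-likelihood). Each stochastic forward pass is divided by `T` before the softmax outputs are averaged.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_predict(model, x, T=1.0, n_samples=25):
    # Keep dropout layers active at test time for Monte Carlo sampling.
    model.train()
    # Scale the logits of every stochastic forward pass by 1/T before
    # softmax, then average the resulting probabilities.
    probs = torch.stack([F.softmax(model(x) / T, dim=-1)
                         for _ in range(n_samples)])
    p_mean = probs.mean(dim=0)
    # Predictive entropy of the averaged output as an uncertainty measure.
    entropy = -(p_mean * p_mean.clamp_min(1e-12).log()).sum(dim=-1)
    return p_mean, entropy
```

Because `T` is a single scalar applied after training, it changes the spread of the predictive distribution without affecting which class is predicted.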
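For regression, sigma scaling plays the analogous role: a single scalar s rescales the predicted variance (sigma^2 becomes s^2 * sigma^2) and is fit on held-out data by minimizing the Gaussian negative log-likelihood while the network weights stay frozen. A minimal sketch under these assumptions; tensor shapes, step count, and optimizer settings are illustrative.

```python
import torch

def fit_sigma_scale(mu, log_var, y, steps=500, lr=1e-2):
    """Fit a scalar s on a calibration set.

    mu, log_var: frozen network outputs (predicted mean and log-variance)
    y:           ground-truth targets
    """
    s = torch.ones(1, requires_grad=True)
    opt = torch.optim.Adam([s], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        var = s.pow(2) * log_var.exp()  # rescaled variance s^2 * sigma^2
        # Gaussian negative log-likelihood (constant terms dropped).
        nll = 0.5 * (var.log() + (y - mu).pow(2) / var).mean()
        nll.backward()
        opt.step()
    return s.detach()
```

As with logit scaling, the point predictions are untouched; only the reported uncertainty is recalibrated.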

Organizational unit(s)
Institut für Mechatronische Systeme
Type
Dissertation
Number of pages
145
Publication date
2021
Publication status
Published
Sustainable Development Goals
SDG 3 – Good Health and Well-being
Electronic version(s)
https://doi.org/10.15488/11588 (Access: Open)