References
- Shen Y, Heacock L, Elias J, Hentel KD, Reig B, Shih G, et al. ChatGPT and other large language models are double-edged swords. Radiology 2023;307:e230163
- Faghani S, Moassefi M, Rouzrokh P, Khosravi B, Baffour FI, Ringler MD, et al. Quantifying uncertainty in deep learning of radiologic images. Radiology 2023;308:e222217
- Guo C, Pleiss G, Sun Y, Weinberger KQ. On calibration of modern neural networks [accessed on January 1, 2024]. Available at: http://proceedings.mlr.press/v70/guo17a.html
- Angelopoulos AN, Bates S. Conformal prediction: a gentle introduction. Found Trends Mach Learn 2023;16:494-591 https://doi.org/10.1561/2200000101
- Lakshminarayanan B, Pritzel A, Blundell C. Simple and scalable predictive uncertainty estimation using deep ensembles [accessed on January 1, 2024]. Available at: https://proceedings.neurips.cc/paper_files/paper/2017/hash/9ef2ed4b7fd2c810847ffa5fa85bce38-Abstract.html
- Abdar M, Pourpanah F, Hussain S, Rezazadegan D, Liu L, Ghavamzadeh M, et al. A review of uncertainty quantification in deep learning: techniques, applications and challenges. Inf Fusion 2021;76:243-297 https://doi.org/10.1016/j.inffus.2021.05.008
- Gal Y, Ghahramani Z. Dropout as a Bayesian approximation: representing model uncertainty in deep learning [accessed on January 1, 2024]. Available at: https://proceedings.mlr.press/v48/gal16.html
- Sensoy M, Kaplan L, Kandemir M. Evidential deep learning to quantify classification uncertainty [accessed on January 1, 2024]. Available at: https://proceedings.neurips.cc/paper/2018/hash/a981f2b708044d6fb4a71a1463242520-Abstract.html
- Khosravi B, Faghani S, Ashraf-Ganjouei A. Uncertainty quantification in COVID-19 detection using evidential deep learning. medRxiv [Preprint]. 2022 [accessed on January 1, 2024]. Available at: https://doi.org/10.1101/2022.05.29.22275732
- Gamble C, Faghani S, Erickson BJ. Toward clinically trustworthy deep learning: applying conformal prediction to intracranial hemorrhage detection. arXiv [Preprint]. 2024 [accessed on January 1, 2024]. Available at: https://doi.org/10.48550/arXiv.2401.08058
- Alves N, Bosma JS, Venkadesh KV, Jacobs C, Saghir Z, de Rooij M, et al. Prediction variability to identify reduced AI performance in cancer diagnosis at MRI and CT. Radiology 2023;308:e230275
- McCrindle B, Zukotynski K, Doyle TE, Noseworthy MD. A radiology-focused review of predictive uncertainty for AI interpretability in computer-assisted segmentation. Radiol Artif Intell 2021;3:e210031
- Hemmer P, Kuhl N, Schoffer J. DEAL: deep evidential active learning for image classification. In: Wani MA, Raj B, Luo F, Dou D, eds. Deep learning applications, volume 3. Singapore: Springer, 2022:171-192
- Lakara K, Valdenegro-Toro M. Disentangled uncertainty and out of distribution detection in medical generative models. arXiv [Preprint]. 2022 [accessed on January 1, 2024]. Available at: https://doi.org/10.48550/arXiv.2211.06250
- Faghani S, Khosravi B, Zhang K, Moassefi M, Jagtap JM, Nugen F, et al. Mitigating bias in radiology machine learning: 3. Performance metrics. Radiol Artif Intell 2022;4:e220061
- Baier L, Schlor T, Schoffer J, Kuhl N. Detecting concept drift with neural network model uncertainty. arXiv [Preprint]. 2021 [accessed on January 1, 2024]. Available at: https://doi.org/10.48550/arXiv.2107.01873
- Zhang K, Khosravi B, Vahdati S, Erickson BJ. FDA review of radiologic AI algorithms: process and challenges. Radiology 2024;310:e230242
- Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, et al. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR and RSNA. Radiol Artif Intell 2024;6:e230513