Keynotes
"How Explainability Contributes to Trust in AI"
Dr. Andrea Ferrario
ETH Zurich and Mobiliar Lab for Analytics at ETH
Abstract
I provide a philosophical explanation of the relation between artificial intelligence (AI) explainability and trust in AI, making a case for expressions, such as “explainability fosters trust in AI,” that commonly appear in the literature. This explanation relates the justification of the trustworthiness of an AI to the need to monitor it during its use. I discuss the latter by referencing an account of trust, called “trust as anti-monitoring,” that different authors have contributed to developing. I propose that “explainability fosters trust in AI” if and only if it fosters justified and warranted paradigmatic trust in AI, i.e., trust in the presence of the justified belief that the AI is trustworthy, which, in turn, causally contributes to reliance on the AI in the absence of monitoring. Focusing on the use case of medical AI systems, I argue that the proposed approach can capture the complexity of the interactions between physicians and medical AI systems in clinical practice, as it can distinguish between cases where humans hold different beliefs about the trustworthiness of the medical AI and exercise varying degrees of monitoring over it. Finally, applying the account to users’ trust in AI, I argue that explainability does not contribute to trust. By contrast, when considering public trust in AI as used by a human, it is possible for explainability to contribute to trust.
Andrea Ferrario holds a Ph.D. in mathematics from ETH Zurich. He worked in industry as a data scientist for five years before returning to ETH. Since then, he has been a Postdoctoral Researcher at the Chair of Technology Marketing and the Scientific Director of the Mobiliar Lab for Analytics at ETH Zurich. His research interests lie at the intersection of philosophy and technology, with a focus on AI and mixed reality. They comprise the ethics and epistemology of AI, the use of natural language processing and machine learning for digital health interventions, and the use of immersive augmented reality to collaboratively solve problems in the interpretability of machine learning models.
"Deep Learning Medical Image Analysis in Radiology:
myths, realities and how to make it work for you"
Prof. Dr. Leo Joskowicz
CASMIP Lab, The Hebrew University of Jerusalem, Israel
Abstract
Radiology, one of the cornerstones of modern healthcare, is undergoing rapid and profound changes due to the ever-increasing number of imaging examinations, the shortage of certified radiologists, the dynamics of healthcare economics, and the technological developments of artificial intelligence-based image processing. Deep learning has been adopted as the solution of choice for a variety of clinical applications. However, deep learning presents significant challenges and requires consideration of the ecosystem around it. In this talk, we will discuss the myths and realities of deep learning medical image analysis in Radiology. We will focus on three key aspects: 1) how to quantify and incorporate task-specific observer variability and measurement uncertainty when establishing the clinical goal of the analysis; 2) how to accelerate deep learning and make it robust with very few annotated datasets; and 3) what pipeline is required to enhance the performance of deep learning networks. We will illustrate these issues and present our methods with a variety of examples from our recent work on fetal MRI, abdominal CT, and OCT analysis.
Leo Joskowicz has been a Professor at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, Israel, since 1995. He is the founder and director of the Computer-Aided Surgery and Medical Image Processing Laboratory (CASMIP Lab). Prof. Joskowicz is a Fellow of the IEEE, ASME, and MICCAI (Medical Image Computing and Computer Assisted Intervention) Societies. He is a past President of the MICCAI Society and was the Secretary General of the International Society of Computer Aided Orthopaedic Surgery (CAOS) and of the International Society for Computer Assisted Surgery (ISCAS). He is the recipient of the 2010 Maurice E. Müller Award for Excellence in Computer Assisted Surgery from the International Society of Computer Aided Orthopaedic Surgery and the 2007 Kaye Innovation Award. He has published two books and over 270 technical works, including conference and journal papers, book chapters, and editorials, and has 14 issued patents. He serves on the editorial boards of several journals, including Medical Image Analysis, the International Journal of Computer Aided Surgery, and Computer Aided Surgery, and has served on numerous related program committees.