Speakers
Machine Learning Interpretability: Road to Mastery
Interpreting complex machine learning models can be difficult. There is a plethora of existing methods, but their meaningfulness and reliability are still hard to evaluate. Moreover, depending on the purpose (debugging, ...), one technique from the literature is often more appropriate than the others. How do we choose the best approach in the landscape of existing techniques? This talk is organized as a virtual "walk" through different techniques, from building transparent glass-box models to explaining black boxes. On the road to mastery, we will cover both standard approaches and the latest research outcomes.
Dr. Mara Graziani
Mara Graziani is a postdoctoral researcher at IBM Research Zurich and at two applied universities in Switzerland: ZHAW and HES-SO Valais. She obtained a Ph.D. in Computer Science from the University of Geneva in December 2021. Her research aims to use interpretable deep learning methods to support and facilitate knowledge discovery in biomedical research. During her Ph.D., she was a visiting student at the Martinos Center, part of Harvard Medical School in Boston (MA), where she focused on the interaction between clinicians and deep learning systems.
Coming from a background in IT engineering, she was awarded the Engineering Department Award for completing the M.Phil. in Machine Learning, Speech and Language Technology at the University of Cambridge (UK) in 2017.
The Need for Interpretability in Clinical Decision Support
Classical machine learning approaches used in clinical decision support, such as handcrafted features or decision trees, could be interpreted by design. When deep neural networks began to deliver much better results on many tasks, but only as black-box models, it became clear that a machine learning decision without any explanation can hardly be integrated into the way clinical work such as diagnosis or treatment planning is done. Interpretability/explainability has become a major challenge for using any tool in clinical practice. The presentation will start with the basic challenges of systematic medical data analysis and move towards integrating explainable AI into modern solutions for digital medicine.
Prof. Dr. Henning Müller
Henning Müller is titular professor in medical informatics at the University Hospitals of Geneva and professor in business informatics at the HES-SO Valais, where he is responsible for the eHealth unit. He studied medical informatics at the University of Heidelberg, Germany, and then worked at Daimler-Benz research in Portland, OR, USA. He carried out a research stay at Monash University, Melbourne, Australia, in 2001, and in 2015-2016 he was a visiting professor at the Martinos Center in Boston, MA, USA, part of Harvard Medical School and the Massachusetts General Hospital (MGH), working on collaborative projects in medical imaging and system evaluation, among others in the context of the Quantitative Imaging Network of the National Cancer Institute. He has authored over 400 scientific papers, is on the editorial board of several journals, and reviews for many journals and funding agencies around the world.
Explainable AI (XAI) for Medical Applications with MATLAB
In recent years, artificial intelligence (AI) has shown great promise in medicine and medical device applications. However, strict interpretability and explainability regulatory requirements can make it prohibitively difficult to use AI-based algorithms for medical applications. To overcome these limitations, interpretable machine learning and deep learning techniques have been developed to assess whether a model behaves as expected or needs further development and training.
In this talk, methods will be highlighted that help explain the predictions of deep neural networks applied to medical images such as MRI and X-ray. You will learn about the interpretability methods readily available in MATLAB, such as occlusion sensitivity and gradient-weighted class activation mapping (Grad-CAM). We will illustrate how these methods can be applied interactively using MATLAB app-based [1] and command-line workflows. Further, these methods will be put into the context of the complete AI workflow with an example where we develop an image segmentation network for cardiac MRI images and inspect it using Grad-CAM [2] (a brief illustrative sketch follows the references below).
[1] Explore Deep Network Explainability Using an App, GitHub.com.
[2] Cardiac Left Ventricle Segmentation from Cine-MRI Images using U-Net, MATLAB Documentation.
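For readers unfamiliar with Grad-CAM, the sketch below illustrates its core computation: weight a convolutional layer's feature maps by the spatial average of the class-score gradients, sum the weighted maps, and keep the positive part as a heat map. This is a minimal Python/PyTorch illustration only, not the MATLAB workflow presented in the talk; the ResNet-18 model, the chosen layer, and the random input are assumptions made purely for the example.

```python
# Minimal, illustrative Grad-CAM sketch (not the talk's MATLAB workflow).
# Model, layer, and input are placeholders for a trained medical-imaging network.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # stand-in for a trained classifier
target_layer = model.layer4             # last convolutional block

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)         # placeholder for an MRI or X-ray slice
scores = model(x)
scores[0, scores.argmax()].backward()   # gradient of the predicted class score

# Weight each feature map by its average gradient, sum, and keep positive evidence.
weights = gradients["v"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1] heat map
```

Occlusion sensitivity follows the same spirit but perturbs image patches and measures the change in the score instead of using gradients, which makes it slower but model-agnostic.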
Dr. Christine Bolliger
Christine Bolliger is a senior application engineer at MathWorks in Bern (Switzerland), supporting customers across different industries in the areas of software engineering, data science, and cloud computing. She holds master's degrees in Physics and in Computational Science and Engineering and a PhD in Biomedical Sciences. Before joining MathWorks, she worked as a software engineer and led a data science team.
The Day 2 Problem for Medical Imaging AI
Despite growing interest in artificial intelligence (AI) applications in medical imaging, there are still barriers to widespread adoption. One key issue is the lack of tools to monitor model performance over time, which is important because model performance can degrade in various scenarios. This talk will propose a system to address this issue that relies on statistics, deep learning, and multi-modal integration, and will describe how this approach allows for real-time monitoring of AI models in medical imaging without the need for ground truth data.
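As a rough illustration of the statistical side of such monitoring, one common ground-truth-free signal is drift in the distribution of the model's output confidences. The sketch below compares a reference window against recent production data with a two-sample Kolmogorov-Smirnov test; the synthetic confidences, window sizes, and significance threshold are assumptions for illustration and are not taken from the talk.

```python
# Minimal sketch of ground-truth-free monitoring via confidence-distribution drift.
# Assumes only model confidences are available, no labels; thresholds are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.beta(8, 2, size=2000)     # confidences logged around deployment time
production = rng.beta(5, 3, size=500)     # confidences from a recent time window

statistic, p_value = ks_2samp(reference, production)  # two-sample Kolmogorov-Smirnov test
if p_value < 0.01:
    print(f"Possible drift: KS={statistic:.3f}, p={p_value:.4f} -- flag for review")
else:
    print("No significant shift in the confidence distribution")
```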
Dr. Matt Lungren
Matt Lungren is Chief Medical Information Officer at Nuance Communications, a Microsoft Company. As a physician and clinical machine learning researcher, he maintains a part-time interventional radiology practice at UCSF while also serving as adjunct faculty for other leading academic medical centers including Stanford and Duke.
Prior to joining Microsoft, Dr. Lungren was an interventional radiologist and research faculty member at Stanford University Medical School, where he led the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI). More recently, he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare, focusing on business development for clinical machine learning technologies in the public cloud.
Practicing Safe Rx: The Importance of Intelligible Machine Learning in HealthCare
In machine learning, tradeoffs must often be made between accuracy and intelligibility: the most accurate models usually are not very intelligible, and the most intelligible models usually are less accurate. This can limit the accuracy of models that can safely be deployed in mission-critical applications such as healthcare, where being able to understand, validate, edit, and trust models is important. EBMs (Explainable Boosting Machines) are a recent learning method based on generalized additive models (GAMs) that are as accurate as full-complexity models, more intelligible than linear models, and which can be made differentially private with little loss in accuracy. EBMs make it easy to understand what a model has learned and to edit the model when it learns inappropriate things. In the talk I'll present several case studies where EBMs discover surprising patterns in medical data that would have made deploying black-box models risky.
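EBMs are available in the open-source InterpretML library; a minimal sketch of fitting one and inspecting its learned per-feature shape functions might look like the following. The synthetic tabular data and feature names are assumptions made purely for illustration and are not taken from the talk's case studies.

```python
# Minimal sketch: fit an Explainable Boosting Machine (a GAM-based glass-box model)
# and inspect the shape function learned for each feature. Data is purely synthetic.
import numpy as np
import pandas as pd
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(20, 90, size=1000),
    "blood_pressure": rng.normal(120, 15, size=1000),
})
# Toy outcome: risk rises with age or with very high blood pressure.
y = ((X["age"] > 65) | (X["blood_pressure"] > 150)).astype(int)

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# The global explanation plots each feature's contribution curve, which is what
# makes surprising or inappropriate patterns visible (and editable) in practice.
show(ebm.explain_global())
```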
Dr. Rich Caruana
Rich Caruana is a senior principal researcher at Microsoft Research. Before joining Microsoft, Rich was on the faculty in the Computer Science Department at Cornell University, at UCLA’s Medical School, and at CMU’s Center for Learning and Discovery. Rich’s Ph.D. is from Carnegie Mellon University, where he worked with Tom Mitchell and Herb Simon. His thesis on Multi-Task Learning helped create interest in a new subfield of machine learning called Transfer Learning. Rich received an NSF CAREER Award in 2004 (for Meta Clustering), best paper awards in 2005 (with Alex Niculescu-Mizil), 2007 (with Daria Sorokina), and 2014 (with Todd Kulesza, Saleema Amershi, Danyel Fisher, and Denis Charles), co-chaired KDD in 2007 (with Xindong Wu), and serves as area chair for NIPS, ICML, and KDD. His current research focus is on learning for medical decision making, transparent modeling, deep learning, and computational ecology.
Panel discussion
Dr. Mara Graziani, HES-SO Valais-Wallis and Research Scientist at IBM Research, Switzerland
Dr. Rich Caruana, Senior Principal Researcher, Microsoft Research
Dr. Mo Anas, Engineering Group Manager, The MathWorks
Dr. Lisa Koch, Group Leader for "Machine Learning for Medical Diagnostics", Werner Reichardt Centre for Integrative Neuroscience (CIN), Institute for Ophthalmic Research
Moderation: Prof. Dr. Mauricio Reyes, ARTORG Center for Biomedical Engineering Research, University of Bern