You are a medical ethicist at the Inselspital in Bern (Insel Gruppe). What does that mean?
We support health professionals when they run into ethical problems or dilemmas in their work. This does not mean that we tell them what is right or wrong; rather, we help them reach ethically difficult clinical decisions. In addition, I am a professor of medical ethics and teach at the university and the university of applied sciences.
Medicine is a science that goes back thousands of years. Why should the expertise of medical professionals no longer suffice? Do we really need AI in medicine?
I think this critical question could be asked about most innovations. When people started traveling by rail, they wondered why horses were no longer sufficient as a means of transportation. I don't see AI as an absolute necessity, but rather as an opportunity to be seized. A door is opening here, and we want to explore the possibilities in this new space.
We can already see areas of life where sophisticated computer systems help us, for example autopilot systems in aviation. These are essentially positive developments. With AI, we now want to find out whether it might offer similar benefits for medicine.
At CAIM, clinicians and AI researchers want to develop new technologies for even better patient care. But can healthcare professionals and patients trust such technologies?
I think they can. We also trust other diagnostic technologies, such as laboratory analysis, magnetic resonance imaging, and genetic testing. In this context, I understand AI merely as an assistance system that can help us; the idea is not for it to govern us. Such systems may even allow future physicians to spend more time with their patients.
Artificial intelligence is used, among other things, to predict the future, for example a patient's chances of being cured. But aren't the moral implications of this (such as discontinuing therapy) extremely risky?
Any prediction of the future is risky. The Oracle of Delphi in Ancient Greece was already trying to predict the future 2,500 years ago, and that, too, sometimes led to bad decisions. I think the problem here is not AI but the haughty, perhaps even arrogant, idea of predicting the future itself. There is always a myriad of potential influencing factors.
Who ultimately decides how treatment is provided in the digital healthcare system? Physicians? Patients? Computers? And who is responsible?
Over the last 50 years, it has become clear that it is less and less the doctors who make the decisions; instead, patients reach a decision themselves, in the sense of shared decision-making, with the help of the information presented to them. Computer systems are now helping to make the basis for these decisions even more understandable.
I think our healthcare system does not want to move in a direction where responsibility is handed over completely to technology. In general, patients will continue to make decisions about their treatment, except in exceptional situations such as in an emergency or intensive care unit. What we don't want is to create a responsibility gap. A large part of the responsibility must remain with the patients.
Do ethical discussions also contribute to more transparency in the field of AI for medicine? For some of us, there remains a diffuse feeling of losing control, because the "black box" of AI spits out results we cannot comprehend based on our experience.
Yes, it is a task of ethics to contribute to transparency. But ethics should also help to change perspectives, so that we look at things from a different angle.
Of course, there are many things we don't understand: my smartphone, for example, is also something of a black box to me. But what's much more interesting is the question: why do these fears exist about AI in medicine, but not about my car or my smartphone, which are also largely technology-based?
Culturally, AI is still very new, which is why we need to be patient about social acceptance. To reduce fears, dialogue with society should be encouraged. 200 years ago, Mary Shelley used the Frankenstein myth to show that society fears losing control over its innovations. No one wants a science-fiction world in which AI eventually takes the reins from humanity. It is up to us to shape reality.