AI systems are already widely used and researched in medicine, and the possible areas of application for artificial intelligence (AI) appear almost unlimited: AI-supported processing and rapid evaluation of large amounts of data enable robot-assisted surgery, the use of chatbots and screening apps as diagnostic aids, and the continuous monitoring of chronic diseases with medical wearables, such as fitness trackers, which measure, record and interpret a patient's vital signs. In this human-machine interaction, AI acts as a kind of "sparring partner" for physicians in their clinical decision-making (Helmholtz 2022 [1]). Thanks to specially developed algorithms and deep-learning systems, AI thus has the potential to improve medical care substantially in terms of individual prevention, screening, diagnostics, prognosis and therapy.
Although the use of AI in medicine may sound promising at first, ethical analysis points to risks in its current use and design: a lack of transparency, explainability and fairness, as well as insufficient protection of patients' privacy and their sensitive health data, are just a few of the specific challenges in dealing with AI in medicine. For example: On which data set is an AI-assisted diagnosis based? Are the training data representative of the individuals being treated, or do they carry an implicit bias? Was the General Data Protection Regulation (GDPR) complied with when the data were collected? But it is not only technological aspects that play an important role in the assessment of responsible AI. Genuinely philosophical questions, such as those about the good life, good coexistence, or freedom of action, must also be considered in AI research and critically reflected upon in light of our now digital environment.
[1] https://www.helmholtz.de/newsroom/artikel/wie-ki-die-medizin-revolutioniert/