AI in Healthcare: Navigating the Path to Responsible Innovation
Amira Soliman
Content
How can AI-driven decisions, such as diagnoses or treatment plans, be made interpretable and trustworthy, so that clinicians can act on them with informed confidence? Responsible AI in healthcare requires the ethical and transparent use of AI to ensure patient safety, privacy, fairness, and accountability. The poster explores the journey of integrating AI into clinical practice, covering every stage from data collection and modeling to deployment. It emphasizes the importance of recognizing and addressing potential challenges, such as data bias, limitations in model accuracy, and the need for explainable AI (XAI) to foster trust in AI-driven decisions. The overarching goal is to harness AI to improve healthcare outcomes while protecting patient safety and maintaining trust throughout the process. We also describe the co-production development framework used in two ongoing projects within the CAISR Health profile.
Researchers
Farzaneh Etminani(1,4), Amira Soliman(1), Jens Nygren(2), Petra Svedberg(2), Lina Lundgren(3), Monika Nair(2), Jonas Sjöström(1), Omar Hamed(1), Björn Agvall(4), and Markus Lingman(1,4)
(1) School of Information Technology, Halmstad University, Halmstad, Sweden
(2) School of Health and Welfare, Halmstad University, Halmstad, Sweden
(3) School of Business, Innovation and Sustainability, Halmstad University, Halmstad, Sweden
(4) Department of Research and Development, Region Halland, Halmstad, Sweden
Partners
Region Halland, Cambio, and Capio Ramsay Santé
Funding
The Knowledge Foundation
Contact
Amira Soliman