July 9, 2025
By Dr. Antti Rintanen, MD – The Internet Doctor
Artificial intelligence (AI) is no longer a distant vision in healthcare—it’s an operational reality. From diagnostic tools to decision-support systems and population-level modeling, AI is reshaping how we approach patient care. Yet while the potential is enormous, trust remains a central issue. Without trust from clinicians, patients, and regulators, the adoption of useful tools is delayed, and so is progress.
So how do we build trustworthy AI in health tech? It starts with treating transparency, fairness, and accountability as ethical imperatives rather than afterthoughts. Unless we address them directly, we risk introducing new inequities even as we solve old inefficiencies.
One of the most cited barriers to AI adoption in medicine is the “black box” problem: clinicians are expected to rely on recommendations they cannot explain, generated by algorithms they did not help design1. For a field grounded in evidence and professional judgment, that disconnect fuels mistrust.
Transparency begins with clear documentation of data sources and model design decisions. Clinicians don’t need to fully understand machine learning methods, but AI tools should offer interpretable outputs: What data was used? Which variables influenced a prediction? Why was a particular outcome suggested?
Methods like LIME (Local Interpretable Model-agnostic Explanations)2 and SHAP (SHapley Additive exPlanations)3 help demystify AI outputs and make them usable in clinical settings. A systematic survey has highlighted the importance of explainability for trustworthy AI in healthcare applications4.
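To make this concrete, here is a minimal sketch of what a SHAP explanation might look like in practice. The model, feature names, and synthetic data are invented for the example and do not describe any system named in this post; the point is simply that each prediction can be decomposed into per-feature contributions a clinician can inspect.

```python
# A minimal, illustrative sketch: using SHAP to break a single prediction
# from a hypothetical readmission-risk model into per-feature contributions.
# The feature names and synthetic data are invented for this example.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["age", "prior_admissions", "hba1c", "systolic_bp"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
# Synthetic risk score driven mainly by prior admissions and HbA1c.
y = X["prior_admissions"] + 0.5 * X["hba1c"] + rng.normal(scale=0.3, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
patient = X.iloc[[0]]                          # the case a clinician is reviewing
contributions = explainer.shap_values(patient)[0]

# Rank the features by how strongly they pushed this patient's risk up or down.
for name, value in sorted(zip(features, contributions), key=lambda kv: -abs(kv[1])):
    print(f"{name:>18s}: {value:+.3f}")
baseline = float(np.atleast_1d(explainer.expected_value)[0])
print(f"baseline risk (model average): {baseline:+.3f}")
```

LIME pursues the same goal differently, fitting a simple surrogate model around one prediction at a time; in both cases the clinician sees a ranked list of contributing factors rather than an unexplained score.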
Health data reflects human history, with all its inequities. If AI systems rely on biased proxies, such as cost instead of clinical need, they may perpetuate racial disparities. A widely cited 2019 Science study found that a commercial U.S. risk-prediction algorithm significantly underestimated the health needs of Black patients because it used healthcare costs rather than clinical indicators as a proxy for health needs5.
Mitigating such bias requires representative training data, outcome definitions that reflect clinical need rather than convenient proxies such as cost, and routine auditing of model performance across patient subgroups.
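As a rough illustration of that auditing step, the sketch below compares how often a risk model misses truly high-need patients across groups. The column names, threshold, and data file are assumptions made for the example, not a reference to any specific deployed system.

```python
# A hedged sketch of one fairness-audit step: comparing how often a risk
# model misses patients who truly needed care (false negatives), broken
# down by group. Column names, threshold, and file are illustrative.
import pandas as pd

def false_negative_rate_by_group(df: pd.DataFrame,
                                 group_col: str = "self_reported_race",
                                 label_col: str = "needed_care",
                                 score_col: str = "predicted_risk",
                                 threshold: float = 0.5) -> pd.Series:
    """Per group: share of truly high-need patients the model flagged as low risk."""
    flagged = df[score_col] >= threshold
    missed = (~flagged) & (df[label_col] == 1)   # high need, but not flagged
    needed = df[label_col] == 1
    return missed.groupby(df[group_col]).sum() / needed.groupby(df[group_col]).sum()

# Hypothetical usage on a held-out validation cohort:
# df = pd.read_csv("validation_cohort.csv")
# print(false_negative_rate_by_group(df))
```

A large gap between groups is a warning sign of the same proxy problem the 2019 study exposed, and a signal to revisit the outcome definition (cost versus clinical need) before deployment.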
No matter how advanced AI becomes, human oversight remains essential. AI tools are most effective when they enhance—not override—clinical decision-making. While studies of explanation methods show they have limits in fostering a deep understanding of a model’s boundaries, they can still support clinicians in ambiguous cases and help reduce automation bias, particularly among less experienced professionals.⁶
These findings highlight the importance of maintaining active clinician engagement in evaluating AI outputs before action is taken.
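One simple way to encode that engagement in software is to make the review step explicit in the tool’s output. The sketch below is a hypothetical illustration, with invented thresholds and field names, of how a decision-support result can carry a mandatory review flag for uncertain or high-stakes predictions.

```python
# A hypothetical illustration of the "support, don't override" pattern:
# every output carries its confidence and an explicit review flag, so the
# workflow cannot auto-act on uncertain or high-stakes cases. Thresholds
# and field names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class DecisionSupportOutput:
    patient_id: str
    predicted_risk: float           # model probability in [0, 1]
    confidence: float               # calibrated certainty of this prediction
    rationale: str                  # plain-language summary of top contributing factors
    requires_clinician_review: bool

def package_for_clinician(patient_id: str, risk: float, confidence: float,
                          rationale: str,
                          confidence_floor: float = 0.8,
                          high_risk_cutoff: float = 0.7) -> DecisionSupportOutput:
    """Low confidence or high predicted risk always routes to a clinician, never to auto-action."""
    needs_review = confidence < confidence_floor or risk >= high_risk_cutoff
    return DecisionSupportOutput(patient_id, risk, confidence, rationale, needs_review)

# Example: a borderline prediction is flagged for review rather than acted on.
out = package_for_clinician("anon-001", risk=0.72, confidence=0.65,
                            rationale="elevated HbA1c, two prior admissions")
print(out.requires_clinician_review)  # True
```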
In parallel, accountability remains a key concern in AI-assisted healthcare. If an AI tool contributes to a harmful outcome, it’s not always clear who should be held responsible—the developer, the clinician who used the tool, or the institution that deployed it. While shared responsibility is often encouraged, the laws that define accountability in these cases are still being developed.
For instance, the European Commission proposed new rules in 2022 to update how civil liability works when harm is caused by AI systems. These rules aimed to make it easier for people to claim compensation. However, the proposal was withdrawn in early 2025 after EU member states failed to reach a political agreement on how to implement it.⁷⁻⁸
Meanwhile, the World Health Organization (WHO) has offered clearer ethical guidance. Its 2021 report on AI in health care emphasized principles such as accountability, transparency, inclusiveness, and the need for human oversight. These principles remain widely used by healthcare institutions and AI developers around the world.⁹
Trustworthy AI must align with the core principles of medical ethics, which include beneficence, non-maleficence, autonomy, and justice. Regulatory frameworks like the EU’s AI Act10 and the FDA’s guidelines for Software as a Medical Device11 establish ethical baselines, but organizations should strive to exceed them.
While some ethical standards are now reflected in regulation, many remain aspirational and depend on voluntary institutional commitment. Institutions that embed ethics into design, auditing, and deployment send a strong message of commitment to fairness and transparency.12
These frameworks, however, are not without their limitations. The EU AI Act, while pioneering in its risk-based classification and emphasis on human oversight, still leaves ambiguity around how oversight should be implemented in real-world clinical workflows. Similarly, the FDA’s SaMD guidance provides a foundation for ensuring safety and effectiveness but often lacks specificity on interpretability or the human factors necessary for successful integration into healthcare teams. In practice, developers can meet legal requirements while still releasing tools that clinicians find difficult to trust or integrate. Bridging this gap between regulatory compliance and actual trustworthiness requires a deeper commitment: interdisciplinary collaboration, regular post-market auditing, and clear, user-facing explanations that reflect both ethical and operational needs.
Building trust in AI is not a one-time achievement—it’s a continuous process that requires transparent communication, rigorous validation, and sustained engagement with users. One commonly cited method for fostering trust is open-sourcing AI systems. When algorithms are publicly accessible, external experts can inspect, audit, and challenge their design and behavior. This level of openness promotes accountability and can help uncover potential flaws or biases early. However, it’s important to recognize that open source is just one avenue to trustworthiness, not the only one.
In fact, many effective and trusted AI systems are proprietary. For instance, platforms like Laser AI, while not open-source, have earned trust through strong performance, regulatory compliance, and precise documentation of how their systems work. Closed-source tools can still be transparent in meaningful ways, such as publishing validation results, undergoing third-party audits, and providing interpretable outputs to clinicians. Trust doesn’t hinge on code availability alone; it’s about whether the people affected by the tool feel confident in its safety, fairness, and reliability. Ultimately, both open and closed models can earn that trust—if they meet the bar for ethical, human-centered design.
Beyond academic papers and official reports, discussions about building trustworthy AI in healthcare are actively taking place at professional conferences and industry forums. Events like the Regulatory Affairs Professionals Society (RAPS) Convergence and the Drug Information Association (DIA) Global Annual Meeting bring together regulators, clinicians, developers, and policymakers to explore the real-world implications of AI in medicine.13-14 These gatherings provide space to share case studies, debate ethical concerns, and shape emerging standards. In addition, companies working in health AI—such as Laser AI15 and others—often host interdisciplinary panels that help turn high-level principles like transparency and accountability into actionable strategies for product development and implementation.
AI offers transformative potential for healthcare, but realizing that potential requires more than technological progress. The true challenge lies in navigating the tension between innovation and caution, automation and clinical judgment, efficiency and fairness. Without trust, even the most advanced tools will face resistance and ultimately fall short of improving care on the ground.
Trustworthy AI cannot be built solely by engineers; it requires meaningful, ongoing collaboration across multiple disciplines. Clinicians ensure tools align with real-world workflows; ethicists safeguard values like autonomy and justice; legal experts help define accountability. Without this interdisciplinary dialogue, we risk reinforcing existing health inequities or introducing new harms under the guise of innovation.
Ultimately, trust is not a one-time achievement—it’s an ongoing process rooted in transparency, fairness, and accountability. AI must support human decision-making, not replace it. When these principles guide development and deployment, we move closer to a future where health tech not only improves care—but earns its place in it.
Dr. Antti Rintanen is a licensed medical doctor and founder of The Internet Doctor, a platform translating complex health science into practical guidance. His work bridges clinical medicine and ethical innovation in digital health.
1. Xu H, Shuttleworth KMJ. Medical artificial intelligence and the black box problem: a view based on the ethical principle of “do no harm.” Intell Med. 2024;3(1):52-57. https://www.sciencedirect.com/
2. Ribeiro MT, Singh S, Guestrin C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016:1135–1144. https://www.kdd.org/
3. Lundberg SM, Lee SI. A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems. 2017;30. https://proceedings.neurips.cc/
4. Markus AF, Kors JA, Rijnbeek PR. The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey. Artif Intell Med. 2021;113:102038. https://www.researchgate.net/
5. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. https://www.science.org/
6. Wysocki O, Davies JK, Vigo M, et al. Assessing the communication gap between AI models and healthcare professionals: explainability, utility and trust in AI-driven clinical decision-making. Artif Intell Med. 2022;132:102359. https://arxiv.org/
7. European Parliamentary Research Service. Artificial Intelligence Act: Civil liability rules. Published 2024. https://www.europarl.europa.eu/
8. Bird & Bird. Proposed EU AI liability rules withdrawn. Published February 2025. https://www.twobirds.com/
9. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Published June 28, 2021. https://www.who.int/
10. European Parliament. EU AI Act: first regulation on artificial intelligence. Published June 1, 2023. https://www.europarl.europa.eu/
11. US Food and Drug Administration. Software as a Medical Device (SaMD). Digital Health Center of Excellence. Updated February 10, 2022. https://www.fda.gov/
12. Dankwa-Mullan I. Health equity and ethical considerations in using artificial intelligence in public health and medicine. Prev Chronic Dis. 2024;21:E45. https://www.cdc.gov/
13. Regulatory Affairs Professionals Society. RAPS Convergence 2024: Program Highlights. RAPS. Published October 2024. https://www.raps.org/
14. Drug Information Association (DIA). DIA Global Annual Meetings. https://www.diaglobal.org/
15. Laser AI. Homepage. https://www.laser.ai/