AI in Healthcare: Promise and Peril as Safety Concerns Mount
[Image: Data shows improvement in healthcare errors. Will AI reverse the trend?]
[Image: Where is AI being used in healthcare?]
As artificial intelligence transforms healthcare, medical experts and researchers are raising urgent concerns about patient safety and liability risks in this new technological frontier. Recent studies and expert analyses highlight both the tremendous potential and serious dangers of AI implementation in medical settings.
The stakes are particularly high given healthcare's troubling history with medical errors. Since the landmark 1999 "To Err is Human" report revealed that up to 98,000 Americans die annually from preventable medical mistakes, patient safety has been a critical priority. Now, as AI systems are rapidly being deployed in everything from diagnosis to treatment planning, healthcare faces new safety challenges.
"We desperately need this technology in many areas of health care," says Michelle Mello, a professor at Stanford Law School and School of Medicine. "But people are rightly concerned about the safety risks."
A key issue is that AI systems, while potentially highly accurate, can fail in dangerous ways. In one case highlighted by Stanford researchers, an AI algorithm analyzing lab results cleared a young man to go home; the algorithm had missed critical family history data, and he died of cardiac arrest six weeks later.
Adding to these concerns is what experts call the "murkiness of the present": a lack of clear regulatory structures for testing and implementing medical AI. Unlike drugs, which undergo rigorous FDA approval processes, AI tools are often tested only by the companies developing them.
"Everyone is racing to be first in this area," notes Mello. "If we're moving quickly from innovation to dissemination, then this poses risk."
Healthcare organizations face particular challenges in determining liability when AI errors occur. Public sentiment appears skeptical: studies show 60% of Americans are uncomfortable with AI in healthcare. This creates additional pressure on hospitals implementing these systems.
Experts recommend several key steps to improve AI safety in healthcare:
- Rigorous testing and validation before deployment (see the sketch after this list)
- Clear explanation of how AI systems work and are evaluated
- Strategic rollout with patient safety as the primary focus
- Careful monitoring of outcomes
- Investment in safety measures alongside technological innovations
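To make the first recommendation concrete, here is a minimal sketch of what pre-deployment validation might look like for a binary diagnostic model: checking sensitivity and specificity against a labeled holdout set, and reporting confidence intervals rather than bare point estimates. The predictions and labels below are invented for illustration; this is one plausible validation harness, not a prescribed method.

```python
# Minimal sketch of pre-deployment validation for a binary diagnostic model.
# The predictions and labels are invented for illustration; the metrics
# (sensitivity, specificity, Wilson score intervals) are standard.
import math

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion."""
    if total == 0:
        return (0.0, 0.0)
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (center - half, center + half)

def validate(predictions: list[int], labels: list[int]) -> dict:
    """Compute sensitivity and specificity with confidence intervals."""
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
    }

# Toy holdout set: the model misses one positive case and raises one false alarm.
preds = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
truth = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(validate(preds, truth))
```

The wide intervals a small holdout set produces are themselves informative: they signal when a model has not yet been tested on enough cases to trust a headline accuracy figure.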
As healthcare navigates this complex transition, the goal remains clear: harnessing AI's benefits while protecting patient safety. With proper oversight and careful implementation, experts believe AI can help reduce rather than compound medical errors - but getting there requires sustained focus on safety alongside innovation.
Sources
Original Foundational Research:
1. "To Err Is Human" - Institute of Medicine (1999)
- First major report establishing baseline statistics
- Estimated 44,000-98,000 annual deaths from preventable medical errors
- Set initial 5-year goal for 50% reduction in errors
Recent Research and Analysis:
2. "Two Decades Since To Err Is Human: An Assessment Of Progress And Emerging Priorities In Patient Safety" - Health Affairs (2018)
- Authors: David W. Bates and Hardeep Singh
- Comprehensive review of progress since 1999 report
- Analysis of new challenges with health IT implementation
3. "Who's at Fault when AI Fails in Health Care?" - Stanford HAI (2024)
- Authors: Michelle Mello and Neel Guha
- Published in The New England Journal of Medicine
- Focuses on liability issues in AI healthcare failures
4. "Why we should not mistake accuracy of medical AI for efficiency" - NPJ Digital Medicine (2024)
- Authors: Karin Rolanda Jongsma, Martin Sand & Megan Milota
- Analysis of relationship between AI accuracy and efficiency
- Published in partnership with Seoul National University Bundang Hospital
5. "Opinion: AI can enhance and ensure medical patients' safety" - San Diego Union-Tribune (2024)
- Author: Rob El-Kareh
- Perspective from UC San Diego School of Medicine
- Focus on patient safety implications
Key Government/Agency Sources Referenced:
- Agency for Healthcare Research and Quality (AHRQ) national scorecard
- Centers for Medicare and Medicaid Services (CMS) data
- Patient Safety Organizations (PSOs) reports
- Hospital Survey on Patient Safety Culture (AHRQ)
Each of these sources contributes different perspectives on the evolution of healthcare safety and the emerging role of AI in healthcare settings. The combination of academic research, government data, and expert analysis provides a comprehensive view of both progress made and challenges ahead.
Opinion: AI can enhance and ensure medical patients' safety
By Rob El-Kareh, San Diego Union-Tribune

It’s been 25 years since the Institute of Medicine’s groundbreaking report, “To Err Is Human,” shed light on the vital issue of patient safety in health care, highlighting that 44,000 to 98,000 Americans die every year from preventable medical errors. Yet despite this awareness, the data show we have only made small pockets of progress despite substantial national investments in advocacy and education. As we approach an artificial intelligence revolution in health care, it’s crucial to keep patient safety at the forefront of our innovations.
AI is rapidly transforming health care, from diagnostic tools that analyze imaging data to robotic surgeries that promise greater precision. While these advancements offer groundbreaking opportunities to enhance patient care, they also present significant challenges that must not be overlooked. The real question is: How can we integrate AI into health care in a way that truly safeguards and advances patient outcomes?
My interest in patient safety was ignited during my time as a chief resident at an academic medical center on the East Coast. I witnessed firsthand how a health care system’s complexities can lead to tragic oversights. One patient, who came in for a common clinical issue, underwent a scan that revealed an unexpected, significant finding. Unfortunately, this crucial information fell through the cracks, and six months later, the patient returned with metastatic cancer — an outcome that could have been averted with timely follow-up. This is a well-documented problem, and it is here that thoughtful, well-integrated AI could make a profound difference.
As doctors, we strive to provide the best care, but we operate in an environment rife with vulnerabilities. AI has the potential to streamline processes, reduce the overwhelming data burden and bring actionable insights to the forefront. However, as we integrate AI into our health care systems, we must do so with a strong focus on patient safety.
Consider the analogy of autonomous vehicles: When we automate processes with AI, we sometimes encounter unintended consequences. Just as with self-driving cars, the focus should not solely be on achieving perfection, but rather on minimizing the frequency and severity of incidents. This is especially relevant in health care. We should evaluate whether we are reducing errors and improving patient safety. Even as AI introduces new possibilities for misdiagnosis, we need to analyze current errors and assess our progress in enhancing safety within the health care system.
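As a concrete illustration of "assessing our progress," the sketch below compares an error rate before and after an AI rollout using a simple two-proportion z-test. The counts are invented for illustration, and a real analysis would also need to adjust for case mix and secular trends; this only shows the basic arithmetic of asking whether an observed drop is more than noise.

```python
# Hedged sketch: did the error rate actually drop after the AI rollout?
# The counts are invented for illustration; real analyses must also
# adjust for case mix and background trends in error reporting.
import math

def two_proportion_ztest(errors_a: int, n_a: int,
                         errors_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two error proportions."""
    p_a, p_b = errors_a / n_a, errors_b / n_b
    p_pool = (errors_a + errors_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Baseline period: 120 errors in 10,000 encounters; post-AI: 90 in 10,000.
z, p = two_proportion_ztest(120, 10_000, 90, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a drop, but verify it is not chance
```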
While AI can enhance our capabilities, it must be rigorously tested and validated before it can be fully trusted in patient care. One major issue is the lack of transparency in AI algorithms, which can impede trust and accountability. For patients to benefit fully from AI, there must be clear and comprehensible explanations of how these systems work and how they are evaluated.
The rollout of AI in health care must be strategic, with patient safety as a key focus of the process. Lawmakers and funding agencies must prioritize investments in safety measures alongside the technological innovations themselves. Currently, there is a rush to allocate funds for AI, but without a corresponding commitment to safety, we risk exacerbating existing vulnerabilities in our health care systems. Engaging with patient safety expertise will be crucial in creating a robust regulatory environment that balances the drive for innovation with the necessity of safeguarding patient health.
As health care professionals, we have an obligation to advocate for outcomes-based AI solutions. It’s not enough for algorithms to enhance workflow efficiency; they must translate into real benefits for patients. We must remain vigilant, continuously measuring the impact of AI on patient outcomes to ensure it serves its intended purpose.
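One lightweight way to "continuously measure the impact" is statistical process control: flag any month whose adverse-event rate exceeds an upper control limit derived from a historical baseline. The sketch below assumes a hypothetical 1% baseline rate and invented monthly counts; real surveillance would use risk-adjusted measures and validated event definitions.

```python
# Sketch of ongoing outcome surveillance: flag when the monthly adverse-event
# rate drifts above a 3-sigma upper control limit around a baseline rate.
# The baseline rate and monthly counts are illustrative assumptions.
import math

BASELINE_RATE = 0.010  # assumed historical adverse-event rate per encounter

def upper_control_limit(n_encounters: int, sigmas: float = 3.0) -> float:
    """Upper control limit for a proportion under the baseline rate."""
    se = math.sqrt(BASELINE_RATE * (1 - BASELINE_RATE) / n_encounters)
    return BASELINE_RATE + sigmas * se

monthly = [  # (month, adverse events, encounters) - hypothetical data
    ("Jan", 11, 1000), ("Feb", 9, 1000), ("Mar", 22, 1000),
]
for month, events, n in monthly:
    rate = events / n
    flag = "ALERT" if rate > upper_control_limit(n) else "ok"
    print(f"{month}: {rate:.3f} ({flag})")
```

A chart like this does not prove the AI caused a change, but it turns "remain vigilant" into a routine, auditable check rather than an aspiration.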
The landscape of health care AI is filled with both promise and uncertainty. While the potential offers exciting possibilities, we cannot take for granted that AI will always work in favor of patient safety. To safeguard against pitfalls, we need a collaborative approach that includes patients, health care providers, policymakers, patient safety experts and technology developers.
The conversation about balancing innovation with patient safety is more critical than ever, and it’s our responsibility to ensure that progress does not come at the expense of our most fundamental values.
El-Kareh, M.D., is a professor of medicine, Divisions of Biomedical Informatics and Hospital Medicine, and executive director of Continuing Professional Development at UC San Diego School of Medicine, and associate chief medical officer for transformation and learning, UC San Diego Health. He lives in Rancho Peñasquitos.