By Tucker Poling, JD, CPCU
Vice President, Claims, and General Counsel
In a remarkably short time, the use of AI tools in clinical settings has become commonplace. Ambient listening tools that create draft SOAP notes, clinical decision support tools such as OpenEvidence, and AI-based medical devices are already widespread in clinical practice.1 The plaintiffs’ litigation industry will likely seek to profit from this disruption by finding new ways to point fingers at healthcare professionals when patients suffer poor treatment outcomes. But although AI may represent a faster and more dynamic technological change than we’ve seen in the past, the blocking-and-tackling fundamentals of liability risk control likely won’t change.
AI tools impact the legal standard of care.
According to Stanford Medicine professor and former Kaiser Permanente CEO Robert Pearl, MD, AI tools will soon become essential to the practice of medicine, “more important than the stethoscope was in the past.”2 Whether or not this prediction becomes reality, a growing number of physicians are using AI tools to deliver high-quality care more efficiently.
Like any other new clinical tool, AI may become part of the standard of care. This raises the possibility that a failure to use AI could itself become the basis for malpractice allegations. Conversely, overreliance on AI could also form the basis of such allegations. Consistent with this risk, some physicians have raised concerns about “cognitive deskilling” caused by excessive reliance on AI.3
From a legal standard of care perspective, AI presents a new version of an old problem. Technology innovations can, when used wisely, vastly improve patient care. However, they can also “up the ante” in terms of the baseline expectations placed on healthcare professionals.
The plaintiffs’ litigation industry may use AI to identify potential clients and target potential defendants.
AI can process large volumes of publicly available data and place sophisticated pattern-recognition capability in the hands of anyone with internet access. We can expect the plaintiffs’ litigation industry to mine public records, state medical board data, and millions of public social media posts and comments to find new ways to identify both potential clients and potential healthcare professionals and facilities to target.
AI capabilities may transform litigation.
Medical malpractice litigation is dominated by large volumes of information. From medical record sets that routinely number in the tens of thousands of pages to massive amounts of electronic metadata tracking everything a user does within an EMR system, the ability to review, analyze, and recognize patterns across diverse and voluminous data is a superpower in medical malpractice litigation.
For example, an AI tool could comb through huge EMR data sets that include both the information charted about a patient and the metadata (who accessed charts, when they accessed them, what information they accessed, how long they viewed the information, what edits they made to entries, etc.) and could show patterns and connections that were never visible before.
We can also expect to see new forms of evidence arising from AI tools, such as AI-generated simulations of medical outcomes. Expert witnesses could use these to support their opinions about the likely outcome of various treatment options for a particular patient.
Can AI be held accountable for liability?
If an AI tool contributes to a medical error, is the healthcare professional liable for using AI? Is the hospital liable for choosing and implementing the AI tool? Is the company that designed and distributed the AI tool liable for algorithm flaws or bias that may have contributed to the outcome? The answer is a definitive maybe: legal commentators have identified these as unsettled questions.4
Product liability law might allow fault to be apportioned to the AI company rather than the healthcare professional, but courts have been reluctant to apply product liability principles to intangible software.5 Additionally, other legal barriers, such as the “learned intermediary” exception, which generally cuts off a product manufacturer’s liability when the product is intended to be used through a highly trained professional such as a physician, could also prevent healthcare professionals from apportioning fault to the AI company when flaws in an AI tool contribute to a patient’s injury.6
Practical implications for healthcare professionals.
“Expert forecasts are statistically indistinguishable from random guesses.”
- Barry Ritholtz
When rapid change makes predictability, certainty, and control unrealistic, a natural human reaction is to crave exactly those things. Although predictions, plans, and highly specific check-box protocols provide comfort, they may create as many risks as they mitigate given how quickly AI use cases and capabilities are evolving. What’s more, data collected across a variety of fields tells us that experts are remarkably poor at predicting, or game-planning for, the future with any specificity in such dynamic environments.7
Institutions may find that, after they’ve spent months dutifully following the Joint Commission recommendation to establish a “formal governance structure responsible for risk-based and organizationally appropriate oversight of health AI tools involved in direct or indirect patient care, care support services, and care-relevant healthcare operations and administrative services,”8 several iterations of such tools have already entered the clinical environment in some manner. Meanwhile, if an institution instead adopts a novel, rigid, and prescriptive internal policy approach, it may generate pages of internal policy documentation and detailed new “rules” (which attorneys can later scrutinize, with hindsight bias, to find fault with the institution in litigation) without gaining real-world clarity or risk-mitigation benefits.
A flexible, strategic approach may be well suited to this transition period: as much as feasible, adapt and apply existing core risk management principles and policies related to privacy, consent, communication, documentation, security, technology oversight, and quality improvement to the new and evolving tools. In other words, AI large language models “should be thought of as tools – just as any other modality in medicine – with their own set of indications, risks and benefits, ethical considerations, and costs.”9 Similarly, individual healthcare professionals may reduce their risk by integrating AI as a “wayfinding” tool applied within the bounds of their own clinical judgment and skill, rather than relying on it to “solve” a diagnosis.10
Over the past 20 years as a litigator, regulator, and claims professional, I’ve found that clinical skills, communication, documentation, and patient relationships consistently drive malpractice liability risk. No matter your clinical skill level, the more often you genuinely listen to and talk to your patient, communicate with your peers in the patient’s healthcare team, and document what you’ve done and why you did it, the less often you’ll get sued. Although implementing those fundamentals may look different when you incorporate AI tools, the fundamentals of liability risk won’t change. That’s my prediction…
References
[1] It has been reported that at least 40% of physicians already use OpenEvidence, and the FDA has approved more than 1,000 AI-based medical devices.
[2] Khari Johnson, ChatGPT Can Help Doctors—and Hurt Patients, WIRED (Apr. 24, 2023), https://perma.cc/39AV-JSNM.
[3] Khullar, Dhruv, MD. "If A.I. Can Diagnose Patients, What Are Doctors For?" The New Yorker, September 22, 2025.
[4] Spencer, L. (Ed.). (2023). Artificial intelligence in the practice of intellectual property. American Bar Association.
[5] Mello MM, Guha N. Understanding Liability Risk from Using Health Care Artificial Intelligence Tools. N Engl J Med. 2024;390(3):206-215. doi:10.1056/NEJMhle2308901
[6] See Mindy Duffourc & Sara Gerke, Decoding U.S. Tort Liability in Healthcare’s Black-Box AI Era: Lessons from the European Union, 27 Stan. Tech. L. Rev. 1 (2024).
[7] See Philip E. Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton University Press, 2005).
[8] Joint Commission and Coalition for Health AI (CHAI). Guidance on the Responsible Use of AI in Healthcare (RUAIH). September 2024.
[9] Kindler K, Kravchenko O, Bartlett S. How to Write Effective Generative AI Prompts for Family Medicine. Fam Pract Manag. 2025;32(4):15-20.
[10] See generally Khullar, Dhruv, MD. "If A.I. Can Diagnose Patients, What Are Doctors For?" The New Yorker, September 22, 2025.