Dental Technology

AI Finds the Cavity. The Patient Believes It. But Who's Legally Responsible?

Key Takeaways

  • AI caries detection accuracy ranges from 41.5% to 98.6% across platforms—meaning 'high accuracy' can mask catastrophic variance that dentists currently absorb as sole legal liability.
  • Dental AI vendors universally disclaim liability in their contracts, meaning the clinical risk of AI-assisted diagnosis falls entirely on the dentist or DSO, regardless of whether the AI was wrong.
  • The Heartland Dental class-action lawsuit—filed in 2025, dismissed in January 2026—established a litigation template: when patients discover undisclosed AI use in their care, they sue the dental organization, not the software vendor.
  • Informed consent forms built before 2022 don't mention AI systems; practices using annotated imaging without updating consent language are operating in an undocumented liability gap.
  • The ADA, FDA, and state dental boards have not yet issued unified guidance on AI disclosure obligations—meaning proactive practices that act now will set the standard of care, not follow it.

The dental AI market made a confident pitch in 2024: tools from Pearl, Overjet, and VideaHealth can detect caries with pooled sensitivity above 85% and specificity reaching 90%—performance that, in some studies, edges past human clinicians who miss an estimated 43% of cavities visible on X-rays. Practices implementing Overjet report 10–20% increases in case acceptance within months. The clinical and commercial logic is hard to argue with. The legal logic is a different matter entirely.

Here is the problem no vendor slide deck addresses: when an AI system flags a finding, the dentist confirms it, the patient accepts treatment, and the outcome is poor—who is legally responsible? The answer, under the current framework, is the dentist. Always. And that asymmetry is about to collide with a wave of adoption that has outpaced both the regulatory environment and the profession's institutional understanding of what informed consent actually requires when a machine is part of the diagnostic chain.

The 98.6% Problem: When AI Accuracy Outpaces the Legal Framework Around It

The accuracy figures circulating in dental AI marketing are real—but they require a careful read. A 2025 umbrella review with meta-analysis published in PLOS One found pooled sensitivity of 0.85 and specificity of 0.90 for AI caries detection. A separate meta-analysis in Head & Face Medicine confirmed similar ranges. The headline-grabbing 98.6% figure that appears in some studies represents the ceiling of reported accuracy—not the average. The floor, per a 2024 ScienceDirect meta-analysis, is 41.5% accuracy in certain AI platforms.

That variance matters enormously for liability. When a dentist uses an AI tool that performs brilliantly on the training dataset and less brilliantly on their patient population—because of scanner differences, patient demographics, or image quality—the dentist bears the clinical and legal consequences. FDA clearance, which Pearl, Overjet, and VideaHealth all hold, establishes that a device is substantially equivalent to a predicate—it is not a warranty of performance in every clinical context. As the Milbank Quarterly notes, a physician who relies in good faith on an AI/ML recommendation may still face liability if their actions fall below the standard of care—because the duty to apply clinical judgment independently remains with the clinician.
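The gap between headline accuracy and chairside reality is easy to see with Bayes' rule. A minimal sketch, using the pooled sensitivity (0.85) and specificity (0.90) cited above; the 20% lesion prevalence is an illustrative assumption, not a published figure:

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Probability that an AI-flagged finding is a true lesion (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Pooled meta-analysis figures from the text; prevalence is assumed.
ppv = positive_predictive_value(sensitivity=0.85, specificity=0.90, prevalence=0.20)
print(f"PPV at 20% prevalence: {ppv:.0%}")  # → PPV at 20% prevalence: 68%
```

At these numbers, roughly one in three AI-flagged findings would be a false positive—and every one of them lands on the dentist's chart, and the dentist's liability, not the vendor's.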

The dental profession has not processed what it means to adopt a tool that is simultaneously marketed as superior to human judgment and legally subordinate to it.

Treatment Acceptance Is Up—But Is AI Informing Patients or Persuading Them?

Annotated imaging—the core mechanism through which AI drives case acceptance—works because it makes radiographic findings viscerally legible. Color overlays on bitewing X-rays, AI-generated lesion markers, automated severity classifications: these tools transform an abstract gray shadow into something that registers as a problem. Pearl describes the mechanism directly: adding "color" and annotations to X-rays increases patient understanding, reduces skepticism rooted in prior experiences, and presents "objective evidence" that accelerates acceptance.

That framing elides a critical distinction in dental ethics. Informed consent requires that patients understand the nature of a proposed treatment, its risks, its alternatives, and the basis on which a diagnosis was made. If the basis is an AI system that the patient doesn't know exists—one that, by design, makes radiographic findings appear more urgent and concrete—then the consent is informed by an actor the patient has never consented to involve.

The concern is not hypothetical. A 2025 review published in BDJ Open identified overdiagnosis as a core deployment risk for dental AI, noting that the profession's historical inter-examiner inconsistency rate of 15–20% creates a ready surface for AI-amplified overtreatment. When an AI confidently annotates a borderline radiographic finding as a lesion requiring intervention, and the patient trusts the machine's certainty more than the dentist's hesitation, the treatment that follows may not represent genuine clinical judgment. It represents AI-mediated persuasion—and the line between the two is not visible in the patient's chart.

The Liability Vacuum: Where Dentist Responsibility Ends and AI Accountability Doesn't Begin

Every major dental AI vendor structures its contracts to limit its liability to the maximum extent permitted by law. This is standard SaaS practice, but the implications for clinical settings are serious. When AI contributes to a misdiagnosis—a false positive that drives unnecessary restorative work, or a false negative that allows a lesion to progress—the vendor is not in the chain of accountability. The dentist is.

This dynamic was made explicit by the Heartland Dental class-action lawsuit filed in July 2025. The case alleged that Heartland allowed RingCentral's AI to transcribe and analyze patient phone calls without consent, violating the Federal Wiretap Act. Heartland and RingCentral ultimately prevailed—the case was dismissed in January 2026—but not before establishing a legal template that will haunt every DSO compliance officer: when patients discover that an AI system was operating on their data without disclosure, they sue the dental organization, not the software vendor.

The lesson extends directly to diagnostic AI. If a patient later disputes a treatment decision and discovers that a machine generated the finding their dentist confirmed, the absence of disclosure in the consent record becomes actionable. Legal analysis on the NCBI Bookshelf is unambiguous: when AI is used only as decision support, the clinician who makes the final determination bears the liability risk. That was true before this technology existed. The difference now is that patients are increasingly aware AI was involved—and increasingly likely to ask why no one told them.

Rewriting Informed Consent for the Age of Annotated Imaging

The profession's consent infrastructure was built for a world without AI. Most dental practice consent forms describe diagnosis as a human clinical judgment based on examination and radiographs. They do not mention software systems, machine learning models, FDA-cleared devices, or algorithmic findings. A practice running Overjet or VideaHealth on every new patient X-ray and using those findings to build treatment presentations is operating outside the scope of its own consent documentation.

Researchers recognized this gap in 2025. A study published in PMC developed a dentistry-specific informed consent checklist for AI use, identifying recurring consent failures: patients weren't told AI was involved in diagnosis, weren't told what the AI's accuracy limitations were, and weren't informed that AI findings were validated (or not) by independent clinical judgment. The checklist is a framework, not a legal standard—but it signals where the profession is heading.

What should genuine AI-era informed consent look like? At minimum: disclosure that AI software analyzed the radiographs, a plain-language explanation of what that means, a statement that the dentist independently reviewed and confirmed the findings, and documentation of any AI findings the dentist chose to override. This is not administratively burdensome. It is the baseline required to demonstrate that a patient's acceptance of treatment was genuinely informed rather than AI-mediated.
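The four elements above are simple enough to capture as structured chart data rather than free text. A hypothetical sketch—every class and field name here is illustrative, not drawn from any vendor API or regulatory standard:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIConsentRecord:
    """Hypothetical chart-level record of AI-era informed consent elements."""
    ai_disclosed: bool                # patient told AI software analyzed the radiographs
    plain_language_explained: bool    # what that analysis means, explained in lay terms
    dentist_confirmed: bool           # dentist independently reviewed and confirmed findings
    overridden_findings: list[str] = field(default_factory=list)  # AI findings the dentist rejected
    recorded_at: datetime = field(default_factory=datetime.now)

    def is_complete(self) -> bool:
        # A record is defensible only if all three disclosures occurred;
        # overrides may legitimately be empty.
        return self.ai_disclosed and self.plain_language_explained and self.dentist_confirmed
```

Structuring the record this way makes the gap auditable: a practice can query how many charts carry a complete AI consent record instead of hoping the language appears somewhere in a scanned form.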

Clinical Aid or Sales Tool? How Your AI Implementation Answers That Question

The ethical and legal status of dental AI diagnostic tools depends heavily on how they're deployed—and many practices have made that deployment decision primarily on the basis of case acceptance metrics rather than clinical governance. When an AI tool's value is presented to the practice in terms of revenue uplift and treatment acceptance rates, and when annotated imaging is used primarily during case presentation rather than as an internal clinical quality check, the tool functions as a sales aid. That is a different thing from a clinical diagnostic instrument, and the consent implications are different too.

DSO compliance officers should ask a direct question: in your practice, does the AI analysis happen before or after the dentist's independent examination? If the AI annotation is present during the dentist's initial radiograph review, it is influencing clinical judgment in ways that were not disclosed to the patient. If the AI annotation is used only to communicate a finding the dentist independently reached, it is a communication tool with a much cleaner consent profile. The clinical workflow determines the legal exposure—and most practices haven't documented which one they're running.
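The before-or-after question reduces to comparing two timestamps the practice should already be logging. A minimal sketch of that audit check—the function name, labels, and timestamp source are assumptions for illustration, not a feature of any vendor platform:

```python
from datetime import datetime

def classify_workflow(ai_annotation_shown: datetime,
                      dentist_initial_review: datetime) -> str:
    """Return which consent profile the documented workflow actually supports."""
    if dentist_initial_review < ai_annotation_shown:
        # Dentist reached the finding first; AI annotation only communicates it.
        return "communication-tool"
    # Annotation was visible before (or during) the initial review,
    # so the AI influenced clinical judgment.
    return "ai-influenced"

print(classify_workflow(ai_annotation_shown=datetime(2025, 3, 1, 9, 30),
                        dentist_initial_review=datetime(2025, 3, 1, 9, 10)))
# → communication-tool
```

The point is not the code but the record: without timestamps showing which event came first, a practice cannot prove after the fact which workflow it was running.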

The risk-based framework published by Oral Health Group in 2025 makes this concrete: liability depends on where the AI intersects the clinical workflow, not on whether the AI is FDA-cleared. Clearance establishes a device is safe for its intended use. Governance determines whether you're actually using it that way.

What Regulators Haven't Said Yet—and Why Dental Practices Can't Wait for Guidance

The ADA has not issued binding guidance on AI disclosure requirements. The FDA regulates dental AI as a medical device but does not mandate patient-facing disclosure of device use. State dental boards have not updated consent standards to address AI. The EU AI Act, which classifies dental diagnostic AI as high-risk and requires transparency obligations, is in force in Europe but has no U.S. equivalent. The result is a regulatory vacuum that rewards practices willing to move first.

Proactive practices that update their consent language now, document their clinical workflows, and implement disclosure protocols will define what reasonable care looks like when the first significant dental AI malpractice case reaches judgment. That case is coming. Legal commentary in npj Digital Medicine notes that while no legal cases have yet addressed AI-specific consent in dental or surgical settings, courts will draw on existing informed consent precedents—precedents built around the principle that patients have the right to know the basis of the clinical recommendation they're accepting.

The dental AI industry has marketed its products as tools for building patient trust. That trust is fragile if patients later discover that a machine was involved in their diagnosis and nobody thought to mention it. Practices that treat consent as an afterthought will eventually answer for that decision—in court, or before their state dental board, or simply in the patient relationships they can no longer recover.

Frequently Asked Questions

Can a dentist be sued for malpractice based on an AI diagnostic error?

Yes. Dental AI vendors universally limit their contractual liability, which means the dentist absorbs the legal risk of any AI-assisted misdiagnosis. As [NCBI legal analysis confirms](https://www.ncbi.nlm.nih.gov/books/NBK613216/), when AI operates as decision support, the clinician who validates and acts on the finding bears the liability—not the software vendor. FDA clearance for a device does not transfer legal responsibility from the practitioner to the manufacturer.

Do dental practices legally need to tell patients when AI is used in diagnosis?

No federal mandate currently requires it in the U.S., but the legal exposure for non-disclosure is growing. Several states have enacted or are considering healthcare AI transparency requirements, and a [2025 PMC study developing a dentistry-specific AI consent checklist](https://pmc.ncbi.nlm.nih.gov/articles/PMC12576000/) identified undisclosed AI use as a recurring ethical and legal failure point. The Heartland Dental lawsuit—though ultimately dismissed—demonstrated that patients will sue when they discover undisclosed AI use in their care.

How accurate is dental AI for cavity detection, and does accuracy affect liability?

Reported accuracy across peer-reviewed meta-analyses ranges from 41.5% to 98.6% depending on the platform and imaging modality, with [a 2025 umbrella review finding pooled sensitivity of 0.85 and specificity of 0.90](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0329986) on bitewing radiographs. The variance matters for liability: a dentist who relies on a platform performing at the low end of that range, without independent verification, may fall below the standard of care even while believing they're using a validated tool.

What did the Heartland Dental AI lawsuit establish for dental practices?

The 2025 class-action alleged that [Heartland Dental allowed RingCentral's AI to record and analyze patient calls without consent](https://www.beckersdental.com/dso-dpms/heartland-dental-hit-with-class-action-lawsuit-over-ai-use/), violating the Federal Wiretap Act. The case was dismissed in January 2026, but it established a legal template: dental organizations—not AI vendors—face litigation when patients discover undisclosed AI use. The pattern will extend to diagnostic AI as patient awareness increases.

Should AI imaging findings be shown to patients before or after the dentist's independent review?

Showing AI annotations before the dentist's independent review risks blending clinical judgment with algorithmic output in ways that were never disclosed in patient consent. [Oral Health Group's 2025 risk framework](https://www.oralhealthgroup.com/features/a-risk-based-framework-for-dental-ai-adoption-2025-update/) recommends treating clinical workflow as the primary liability variable—when AI influences the finding rather than communicates it, the consent requirements and the liability exposure are both significantly higher.

More from Dental Technology

  • Every Word Your Patient Says Is Being Transcribed. Does Your Consent Form Know That?
  • Ambient AI Is Saving DSO Dentists 45 Minutes a Day. Independent Practices Are Still Paying Someone to Type It.
  • The ADA Just Killed the Annual Bitewing Habit. Here's What That Means for Your Insurance Codes, Liability Exposure, and Informed Consent Forms.