
Legal Liability Arising from the Use of AI by Dentists: What Jane Austen Has to Do with It

By Jake Kathleen Marcus, JD, PGDip

The use of artificial intelligence (AI) in dentistry, as in many other healthcare fields, raises questions about liability for harm or errors that may result from its use. These liability concerns typically center on who is responsible if AI systems provide incorrect or misleading information or if errors result in patient harm. It is critical that dentists understand the potential liability they face when using AI or tools that use AI, and what dentists can and should do to protect themselves and their patients.

Several years ago, as part of the work for my graduate diploma in the law of technology, I took a course at Queen Mary University of London on AI. It was a deep dive into the science (which I generally understand) and the math (which I rarely understand) of algorithms, large language models and generative AI. It also covered the myriad legal issues requiring regulation, such as the use of AI in police profiling (deeply flawed and resulting in civil rights violations the world over), and how the enormous amounts of data required raise copyright claims (even if the “garbage that comes out” means what went in was garbage, someone wrote that garbage, and the writer wants to be paid).

Around the same time, the lead of IT at a major tech company told me why generative AI — the AI that purports to create new ideas — simply didn’t work. Her one-word answer: “hallucinations.”(1) When AI doesn’t know the answer to a question, it simply makes one up, generating a response it thinks you want to hear. Her example: When asked about something Elizabeth Bennet did in “Sense and Sensibility,” some generative AI systems will have an answer. The problem, as any Jane Austen fan can tell you, is that Elizabeth Bennet is a character in “Pride and Prejudice,” not “Sense and Sensibility.” While this particular “hallucination” doesn’t happen anymore, there remains significant concern regarding the risk of AI hallucination in healthcare.(2)

I include this hallucination story here because, if AI can tell you what it thinks you want to hear about Jane Austen, it can also “hallucinate” diagnoses and treatment plans. Both the lawyer and the Jane Austen fan in me conclude that someone is going to get sued.

Professional Liability, or ‘Will AI Get Me Sued?’

Dentists have a duty of care to their patients, and if they choose to use AI in their diagnoses or treatment planning, they remain responsible for the final decisions made. Dentists cannot rely solely on AI-generated data to absolve themselves of liability. If a dentist uses an AI tool that provides incorrect information, but the dentist fails to notice and/or act and a patient is harmed, the dentist could still be held liable for malpractice. Both law and ethics require that dentists exercise their own professional judgment in all cases.

A crucial element in a dental malpractice action is whether the dentist deviated from the “standard of care.” As healthcare providers and lawyers know (or should know), the applicable standard of care can be a moving target, shifting with medical research, education and even the geography of the patient. With the many unresolved questions regarding AI, there is not yet an accepted standard of care for the use of AI in dentistry, which creates greater vulnerability for dentists. Courts will have to consider whether using AI meets or exceeds the standard of care expected from a competent dentist, but, presently, no clear definitions exist.

Shared Liability?

In some cases, liability might be shared between the AI developers, manufacturers and dentists. If, for example, an AI tool misdiagnoses a condition due to flaws in its algorithm, both the creators of the AI system and the dental professional using it could face liability.

However, the dentist’s failure to critically assess the AI’s output before applying it in treatment is legally the fault of the dentist. Where there is damage to the patient and fault on the part of the dentist, the dentist is likely to be found liable for all or part of the resulting damage.

Medical Device Regulation

AI tools used in dentistry may be classified as “medical devices” depending on their function. The Food and Drug Administration (FDA) regulates medical devices, including some AI-powered technologies. Therefore, if AI software is classified as a medical device, the manufacturer is required to ensure its safety, effectiveness and accuracy. If the tool malfunctions or provides misleading information, the manufacturer may be liable for product defects or negligence.

But dentists cannot rely on FDA regulation to avoid liability in the event of a bad patient outcome. Dentists still have an obligation to fully inform themselves regarding the medical devices they use. FDA regulation may require the device to carry warnings or use instructions, and liability shifts to the dentist who does not read and follow those warnings or who varies in any way from the device instructions. If a medical device causes harm because of a defect in manufacture, some liability might be shifted to the manufacturer. However, if a device is harmful because it is not used in accordance with its instructions, the liability lies with the dentist and not the manufacturer. The involvement of AI in the design, manufacture or operation of the device does not change this most basic fact of a dentist’s liability for harm.

Informed Consent

Dentists should inform patients when AI is being used in their care. This transparency ensures that patients are aware of the nature of the technology used, allowing them to consent to or opt out of such treatments. If patients are not informed and something goes wrong, it could lead to legal challenges based on the failure to obtain informed consent.

However, the nature of AI and its inherent unknowns means the dentist may not understand either how the AI works or what mistakes it can make. If the dentist does not understand, the dentist cannot explain to the patient. If the dentist cannot explain to the patient, the patient cannot give informed consent. This cycle seems unbreakable. However, in the informed consent process, a dentist can reveal the limitations of what they know. Informing patients that there are unknowns is something dentists do every day and should not fear.

Data and Privacy Concerns

AI in dentistry often relies on patient data, including scans, radiographs and other health information. Misuse of this data or breaches in patient privacy, especially with AI systems that involve cloud storage, could lead to liability under privacy laws, such as the Health Insurance Portability and Accountability Act (HIPAA).

For dentists using AI, it is critical that HIPAA and the Health Information Technology for Economic and Clinical Health Act (HITECH) are strictly followed. For example, any system that uses or stores patient information, regardless of whether AI is involved, must be HIPAA-compliant. For any transmission or storage of patient data that will involve a third-party company, the dentist must have a business associate agreement with that entity. For all of its potential benefits, AI exposes the dentist to yet another risk of sanction by the U.S. Department of Health and Human Services Office for Civil Rights for violation of HIPAA and/or HITECH.

Insurance Implications

Professional malpractice insurance for dentists might also need to evolve to address the risks of AI usage. Some insurance policies may not currently cover harm resulting from AI tools, which could expose dentists to greater personal financial risk. This is a problem with a simple solution — ask your carrier. If you use AI, be insured for its use.

Conclusion and Best Practices

Liability for the use of AI in dentistry is complex and evolving. It typically involves the interplay between the dentist’s professional duty, the manufacturers’ responsibility for the AI tools and evolving legal standards around AI in healthcare. It is essential that dental professionals understand that they bear the ultimate legal responsibility for harm to their patients arising from the use of AI in treatment and in treatment-plan decision-making. Dentists must be sure they are fully aware of the scope of their malpractice coverage in the event of an AI-involved outcome. While informed consent from patients can mitigate some potential liability, patients cannot give truly informed consent to treatment choices they do not fully understand — particularly if, given the nature of AI, the dentist themself does not fully understand how the AI functions. Best practices to limit potential liability arising from the use of AI in the dental office are (and note these are best practices in dental offices even if “use of AI” is removed as a variable):

• Redraft all HIPAA/HITECH/data privacy documents to address the use of AI in your practice.

• Redraft informed consent documents to include what you do and do not know about the AI you are using, regardless of whether the AI is classified as software or as a medical device.

• Stay informed and keep your staff informed. There is no shortage of continuing education on the use of AI in dentistry. Be sure you and your team are attending.

• Ask your professional liability carrier about your coverage for damage that might be caused by the use of AI or tools that use AI.

• Read the warnings and instructions on all equipment and software that uses AI, and follow them.

Thus far, litigation resulting from the use of AI in healthcare has concerned copyright and antitrust — business concerns that do not put healthcare providers themselves at greater risk of liability. Also — and critically — the use of AI may very well result in fewer clinical mistakes. Fewer clinical mistakes should result in fewer malpractice claims. It is too early to know if this is true, and too few medical liability claims have been filed to see a pattern, but dentists can best protect themselves and their patients by staying informed.

Jake Kathleen Marcus, JD, PGDip, has been a regulatory lawyer primarily in the healthcare space for over 35 years. They were recently awarded a postgraduate diploma in technology, media and telecommunications by Queen Mary University of London School of Law. To comment on this article, email impact@agd.org.

References

1. Shen, Y., et al. “ChatGPT and Other Large Language Models Are Double-edged Swords.” Radiology, vol. 307, no. 2, 2023, doi: 10.1148/radiol.230163.

2. Moulaei, K., et al. “Generative Artificial Intelligence in Healthcare: A Scoping Review on Benefits, Challenges and Applications.” International Journal of Medical Informatics, vol. 188, August 2024, p. 105474, doi: 10.1016/j.ijmedinf.2024.105474.
