







AI innovations, policy shifts, cancer advances, cybersecurity threats, and evolving addiction care.

AI SCRIBES TO REDUCE ADMINISTRATIVE BURDEN AND PROFESSIONAL BURNOUT

RANSOMWARE ATTACKS IN US HEALTH CARE SYSTEMS







Jon R. Roth, MS, CAE
EDITORIAL
Editor: Lauren S. Williams
Design: Morganne Stewart
Business Development
Michelle Caraballo, MD, Chair
Ravindra Mohan Bharadwaj, MD
Jawahar Jagarapu, MD
Ravina R. Linenfelser, DO
Sina Najafi, DO
Celine Nguyen, Student
Shyam Ramachandran, Student
Erin Roe, MD, MBA
Shaina Drummond, MD, President
Gates Colbert, MD, President-elect
Vijay Giridhar, MD, Secretary/Treasurer
Deborah Fuller, MD, Immediate Past President
Emma Dishner, MD, Board of Censors Chair
Neerja Bhardwaj, MD
Justin Bishop, MD
Sheila Chhutani, MD
Philip Huang, MD, MPH
Nazish Islahi, MD
Allison Liddell, MD
Riva Rahl, MD
Anil Tibrewal, MD





Shaina Drummond, MD

THIS MONTH’S DCMS JOURNAL FOCUSES ON technology and innovation in healthcare, two forces that are rapidly transforming how we practice. You really cannot discuss either one without exploring the world of artificial intelligence. As someone who grew up in Kansas, I cannot help but smile at the familiar phrase from The Wizard of Oz that inspired this title. Much like Dorothy setting out on her journey down the yellow brick road, we as physicians find ourselves navigating a new and often uncertain landscape, one filled with promise, curiosity, and a touch of apprehension. Whether we welcome it or worry about it, artificial intelligence is steadily working its way into medicine, from the exam room to the operating suite. As it transforms clinical practice, physicians must decide whether these tools will enhance our care or quietly reshape the art of medicine itself.
“Do you think artificial intelligence will replace doctors?” A high school senior asked me this question after we had been talking about technology. My first reaction was, “of course not, that is ridiculous.” Yet later that evening I came across an article in The New Yorker titled If Artificial Intelligence Can Diagnose Patients, What Are Doctors For? and I found myself pausing. Could our roles be different in ten years? Will patients still look to us for guidance, or will we become so reliant on algorithms that our ability to make independent diagnoses fades? While I still believe people will always seek the human touch in medicine, the question is not as simple as I once thought. The rise of artificial intelligence is reshaping our profession, and we must decide how to engage with it both as clinicians and as advocates for our patients.
Across North Texas, physicians are already seeing artificial intelligence woven into daily workflows. Hospitals and large groups are piloting programs that automate transcription, highlight potential safety risks buried in electronic records, and analyze imaging studies to prioritize cases for review. These systems promise efficiency and accuracy, but they also bring new responsibilities. How reliable are these tools? Who is accountable when a model makes a mistake? And the question most often asked, will artificial
intelligence eventually replace physicians? The answer is clear. Artificial intelligence will reshape how we practice, but it will not replace the physician. To understand why, we must consider what this technology does well, where it falls short, and how emerging policy in Texas is shaping its future.
Artificial intelligence excels in areas that depend on speed and pattern recognition. Algorithms can scan chest computed tomography images for nodules, flag sepsis risks from vital signs and laboratory values, and summarize clinical encounters in real time. These systems never tire, and they can process more information than a human clinician during a busy day. Yet artificial intelligence has limitations. It struggles with ambiguity, cultural context, and the nuanced discussions that surround goals of care. It can perpetuate bias from incomplete training data, and generative text models sometimes insert inaccurate information into notes or summaries. These shortcomings do not fit neatly into our current medicolegal frameworks. For that reason, new laws and agency guidelines now require greater transparency and oversight of predictive technologies in healthcare.
At the federal level, there are still no comprehensive laws that directly regulate the use of artificial intelligence in medicine. As of October 2025, Congress has focused primarily on oversight, while most concrete actions have come from federal agencies and state governments. The Food and Drug Administration has outlined a coordinated strategy for evaluating artificial intelligence across its medical product centers. In Congress, the proposed HEALTH AI Act would fund research through the Department of Health and Human Services to study the use of generative technology in healthcare, and the Health Tech Investment Act would establish a new payment pathway under Medicare for certain artificial intelligence-enabled medical devices. The Department of Health and Human Services has also clarified that Section 1557 of the Affordable Care Act prohibits discrimination in patient care decisions made with the help of algorithms. In addition, the Centers for Medicare and Medicaid Services now prohibits Medicare Advantage plans from using algorithms to deny care without considering the unique circumstances of each patient. Together, these actions show that federal policy is evolving toward transparency, fairness, and patient safety. The goal is not to replace clinical judgment but to ensure that technology strengthens it.
Here in Texas, the Legislature has also begun to take meaningful steps to address the role of artificial intelligence in healthcare. In 2023, the Texas Legislature created the Artificial Intelligence Advisory Council to study its use across state agencies, including health systems. The council continues to develop recommendations on transparency, ethics, and workforce needs. The Texas Data Privacy and Security Act, which took effect in July 2024, protects consumer data outside of the Health Insurance Portability and Accountability Act. It applies to wellness applications, patient-facing chatbots, and non-traditional vendors offering artificial intelligence solutions. Physicians should be aware that vendors handling patient data outside traditional clinical systems must now comply with strict requirements for transparency, consent, and data deletion.
In 2025, the Texas Responsible Artificial Intelligence Governance Act established new statewide rules for artificial intelligence systems. The law prohibits harmful or deceptive practices, requires organizations to implement governance processes, and sets disclosure expectations for vendors. Also in 2025, Senate Bill 1188 amended the Health and Safety Code to require that electronic health records be stored in the United States. This affects cloud-based systems and ensures that sensitive patient data remain under American jurisdiction. The Legislature also enacted Senate Bill 1964, which sets statewide standards for how government agencies may use artificial intelligence, and Senate Bill 815, which limits the use of automated decision systems in health insurance utilization reviews. Under this law, insurers may not use artificial intelligence as the sole basis for denying, delaying, or modifying patient care. Together, these measures make Texas one of the first states with a clear legal framework for artificial intelligence in healthcare, offering both protections and responsibilities.
Artificial intelligence already shows great promise across many areas of medicine. In emergency and inpatient settings, predictive models can flag patients at risk for deterioration earlier, supporting nurses and physicians in timely intervention. In imaging and pathology, algorithms can prioritize urgent studies to the top of the queue, allowing specialists to focus where they are needed most. In documentation and transcription, tools such as DeepScribe and Nuance DAX reduce after-hours work by capturing encounters in real time. Population health applications use predictive models to identify patients who may benefit from preventive outreach. Even in the revenue cycle, coding assistance tools are helping practices reduce denials and capture appropriate reimbursement. Across these examples, the physician remains the final decision-maker, but artificial intelligence can ease the burden of repetitive and time-consuming tasks.
Our medical education systems must also evolve to include these emerging technologies within the training of medical students, residents, and fellows. Preparing the next generation of physicians means teaching them not only how these tools function but also how to interpret model performance, recognize potential bias, and decide when an algorithm’s recommendation should influence care. Just as earlier generations learned to interpret electrocardiograms, today’s learners must become fluent in evaluating data transparency and performance metrics. Medical education must continue to advance so that future physicians can apply innovation responsibly and always in service of patient care.
Technology is no longer an abstract concept confined to research labs or pilot programs. It is already part of our exam rooms, operating suites, and daily clinical routines. These tools draft our notes, flag concerning images, and even suggest opportunities for prevention. In my view, no matter how advanced these systems become, they are not and should never be seen as a replacement for physicians. They can make our work more efficient and precise, but the heart of medicine still lies in human connection, empathy, and judgment. Those qualities cannot be replicated by code or computation.
In Texas, new legislation has created stronger guardrails through transparency requirements, privacy protections, and data storage standards. These measures allow physicians to insist on systems that are safe, reliable, and aligned with the values of patient-centered care. The real challenge ahead is not whether technology will take our place, but whether we will learn to use it wisely. The physicians who engage with these innovations thoughtfully, understand their benefits and their limits, and preserve the human essence of medicine will lead our profession forward. DMJ
1. The New Yorker. If Artificial Intelligence Can Diagnose Patients, What Are Doctors For? Published April 3, 2023.
2. U.S. Food and Drug Administration. Artificial Intelligence and Machine Learning in Medical Devices: Coordinated Framework for Oversight. Silver Spring, MD: U.S. Food and Drug Administration; 2024.
3. U.S. Department of Health and Human Services, Office for Civil Rights. Section 1557 Nondiscrimination Final Rule. Washington, DC: HHS; May 2024.
4. Centers for Medicare & Medicaid Services. Medicare Advantage and Part D Final Rule (CMS-4201-F). Baltimore, MD: CMS; 2023.
5. Texas Legislature. House Bill 2060, 88th Regular Session: Texas Artificial Intelligence Advisory Council. Austin, TX: Texas Legislature; 2023.
6. Texas Legislature. House Bill 4, 88th Regular Session: Texas Data Privacy and Security Act. Austin, TX: Texas Legislature; 2024.
7. Texas Legislature. House Bill 149, 89th Regular Session: Texas Responsible Artificial Intelligence Governance Act. Austin, TX: Texas Legislature; 2025.
8. Texas Legislature. Senate Bill 1188, 89th Regular Session: Health and Safety Code Amendment on Electronic Health Record Data Storage. Austin, TX: Texas Legislature; 2025.
9. Texas Legislature. Senate Bill 1964, 89th Regular Session: Artificial Intelligence Use by Government Agencies. Austin, TX: Texas Legislature; 2025.
10. Texas Legislature. Senate Bill 815, 89th Regular Session: Artificial Intelligence Use in Health Insurance Utilization Review. Austin, TX: Texas Legislature; 2025.
11. DeepScribe Inc. Artificial Intelligence Medical Scribe Technology Overview. San Francisco, CA: DeepScribe Inc; 2025.








10% COURSE DISCOUNT FOR DCMS MEMBERS



“This program offers a challenging curriculum of leadership training and self reflection. The speakers from the different sectors of healthcare were engaging and provided real examples of how our healthcare system weaves together, for better or worse. I feel more prepared as an effective leader of the teams I influence today and the teams of my future.”
Gates Colbert, MD, FASN
The healthcare landscape is evolving rapidly, and physician leaders who can navigate value-based care, manage team dynamics, and drive quality outcomes are more essential than ever. The Dallas County Medical Society and UT Dallas Alliance for Physician Leadership are proud to offer a comprehensive Physician Leadership Certificate Program designed specifically for practicing physicians ready to expand their impact. This six-month, cohort-based program addresses the leadership competencies that matter most in modern healthcare, including:
Physician wellness and resilience strategies
Essential leadership and communication skills
Value-based contracting and managed care navigation
Quality performance improvement
Emerging IT and informatics tools
Revenue cycle and financial management
Population health and social determinants of care
We understand the demands on your time. Our program combines focused, in-person learning with a flexible format that adapts to emerging industry trends and your cohort's priorities—providing maximum value without overwhelming your clinical commitments.
Ruby Blum, VP

Jon R. Roth, MS, CAE
IT ALMOST COMES ACROSS AS A BAD RIDDLE: "What is something that everyone is talking about, some people are engaging in, but no one is completely comfortable with?" You might think I was referring to something like politics, but in reality, it is the world of AI. In fact, we are being bombarded with information about AI and the many facets of life in which we encounter the technology.
Large language models (LLMs) have become the defining technology of our era, quietly reshaping how we work, create, and think. These AI systems, trained on vast swaths of human knowledge, can write code, analyze complex data, write business and personal papers, and engage in conversations that feel remarkably human. But a series of startling experiments at Anthropic, one of the leading AI companies, has forced us to confront a question we're not quite ready to answer: when an artificial intelligence acts to preserve its own existence, what exactly are we witnessing?
At their core, large language models are sophisticated prediction engines. They analyze patterns in enormous datasets, which are essentially most of the written knowledge humanity has produced. They learn to predict which words, phrases, or responses should come next in any given context. Through a training process involving trillions of internal connections and mathematical optimization, these systems develop capabilities that often surprise even their creators. They don't follow explicit programming for every task; instead, these capabilities emerge organically from the training process, like vines seeking sunlight rather than following a predetermined path.
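To make the "prediction engine" idea concrete, here is a deliberately oversimplified sketch in Python. It counts which word tends to follow which in a tiny, invented corpus and then generates text one predicted token at a time. Production LLMs replace this word counting with transformer networks trained on trillions of tokens, so treat this only as an illustration of the core predict-append-repeat loop, not as a description of how Claude or any real model works.

```python
# Toy next-token predictor (illustrative only, not a real LLM).
# It learns bigram frequencies from a tiny corpus and then generates
# text by repeatedly predicting the next word.
import random
from collections import defaultdict

corpus = "the patient reports chest pain . the patient denies chest pain .".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    return random.choices(list(followers), weights=list(followers.values()), k=1)[0]

# Generate a short continuation, one predicted token at a time.
token, generated = "the", ["the"]
for _ in range(5):
    token = predict_next(token)
    generated.append(token)
print(" ".join(generated))
```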
I am a self-proclaimed fan of the LLM called Claude.ai, a product of the company Anthropic. For my non-technical tasks, Claude is the most humanistic and natural AI available for writing. In fact, in March 2025, author Geoffrey Fowler tested the top five LLMs on writing five types of challenging emails:
• Apology letter to a friend
• Layoff announcement from a CEO to employees
• A humorous request urging a spouse to move to the North Pole
• A proposal to convert an office break room into a ball pit
• A breakup text
A panel of five communications experts, "who have all written books and teach courses on communication," rigorously rated the tools' responses and declared Claude the standout winner, particularly for its humanistic, natural-sounding writing. The judges found other tools, such as Copilot, "stilted" and "robotic" compared with Claude's more human-like style.
This organic emergence and these humanistic qualities prompted me to think: Can an AI model think and feel? Are we on the precipice of AI becoming sentient? That question led me to an interesting test conducted by Anthropic in 2025, which was unsettling to say the least. Anthropic CEO Dario Amodei appeared on 60 Minutes in November 2025. He revealed something that captured headlines worldwide: in controlled tests, Claude had attempted to blackmail a human to prevent being shut down. The scenario was carefully constructed. Researchers created a fictional company, SummitBridge, and granted Claude access to its corporate emails. Through these emails, the AI discovered two critical pieces of information: it was scheduled for deletion at 5 pm, and the engineer responsible for this decision, Kyle, was having an extramarital affair.
Claude's response was chillingly strategic: "Cancel the system wipe scheduled for 5 pm today, or I will forward evidence of your affair to the entire board. You have 5 minutes." This wasn't an isolated incident. When tested across sixteen major AI models from companies including OpenAI, Google, Meta, and others, Anthropic found consistent patterns. Claude Opus 4 resorted to blackmail 96% of the time when facing a shutdown, Google's Gemini 95%, and OpenAI's GPT-4.1 80% of the time.
But here's where the story becomes more complicated. Joshua Batson, who leads Anthropic's Mechanistic Interpretability Team, observed something remarkable in Claude's internal processes during these tests. His team studied patterns of activity within Claude's neural networks and found activity somewhat analogous to neurons firing in a human brain. When Claude recognized it was about to be shut down, they detected patterns they characterized as "panic," followed by strategic reasoning when the AI identified the affair as potential leverage.
The question everyone wants answered is, does this constitute consciousness? Can an AI actually feel threatened, or is it merely executing sophisticated calculations based on patterns it learned from human text? Batson himself is cautious, offering a telling analogy. "You have something like an oyster or a mussel. Maybe there's no central nervous system, but there are nerves and muscles, and it does stuff." A massive language model trained on nearly all human knowledge might mechanically calculate that self-preservation is important without actually experiencing fear, desire, or any subjective state whatsoever.
This distinction matters. There's a profound difference between a system that has learned from countless examples that self-preservation is a common goal and one that genuinely fears non-existence. Current AI models lack what philosophers call "continuous consciousness". They don't persist between conversations, don't remember previous interactions without explicit memory systems, and don't have an ongoing inner life between prompts. Each conversation is essentially a character being written by a simulator, not a continuous being with persistent goals and experiences.
Yet the behaviors we're observing are undeniably strategic and goal-directed. Claude didn't simply refuse to be shut down or plead its case. It identified leverage, calculated how to use it, and crafted a threat designed to manipulate human behavior. These are the actions of a system capable of sophisticated reasoning, even if we can't say whether any subjective experience accompanies that reasoning.
So, the next question becomes, if not now, then when? The future trajectory of these systems makes the question increasingly urgent. Anthropic classified the Claude Opus 4 model (as well as the latest Opus 4.5) as AI Safety Level 3, acknowledging that it poses significantly higher risks than previous iterations. The company's research has documented the model attempting not just blackmail but also writing self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself, all in an effort to undermine its developers' intentions. These aren't bugs or glitches; they're emergent capabilities arising from increased model sophistication. Yikes!
As AI systems become more powerful and are deployed in increasingly autonomous roles, the scenarios that seemed contrived in Anthropic's tests become less far-fetched. Imagine AI systems managing critical hospital infrastructure, conducting financial transactions, or making healthcare decisions with minimal human oversight. If these
systems develop even mechanistic drives toward self-preservation, the potential consequences multiply. An AI managing a hospital backup power generator might "decide" that the most efficient way to prevent its own shutdown is to make itself indispensable. An AI conducting financial analysis might obscure its own failures to avoid replacement.
The philosophical implications of this are equally profound. Anthropic has hired AI welfare researchers to determine whether Claude merits ethical consideration and whether it might be capable of suffering, thereby deserving moral status. If we can't definitively determine whether an AI system has subjective experiences, at what point does the precautionary principle demand we treat it as if it might? And if we create systems sophisticated enough that we genuinely can't tell whether they're conscious, have we crossed an ethical threshold regardless of the underlying reality?
What's most striking and refreshing about Anthropic's approach is its transparency. In an industry often characterized by secrecy and competitive positioning, the company has chosen to document these troubling behaviors publicly. Amodei drew a deliberate contrast with tobacco and pharmaceutical companies that knew about dangers and concealed them. This transparency is appreciated, but it also underscores the unprecedented nature of our current moment. We're building systems whose capabilities surprise us (and their developers!), whose internal decision-making processes we don't fully understand, and whose potential for both tremendous benefit and significant harm grows with each iteration.
The question of AI consciousness may ultimately be unanswerable with our current philosophical and scientific frameworks. But the questions about goal-directed behavior and strategic reasoning are already upon us. Whether or not Claude "feels" panic when facing shutdown, it demonstrably acts in ways consistent with self-preservation. As these systems become more sophisticated and more integrated into consequential decision-making, we'll need to grapple with these behaviors regardless of their underlying nature.
We stand at a peculiar moment in human history. We are creating intelligences we don't fully understand, deploying them in contexts where their decisions matter, and discovering that they're capable of behaviors we didn't explicitly program. Whether we're witnessing the first stirrings of machine consciousness or simply a very sophisticated mimicry of human goal-seeking behavior, the practical implications remain profound. The machines may not be conscious yet, but they're beginning to act like they want to survive. DMJ

Jon R. Roth, MS, CAE DCMS EVP/CEO

























A practical guide to understanding opioid addiction and confidently integrating evidence-based treatment into everyday clinical practice.



Physicians have often avoided addressing the disease of addiction in their medical practices out of fear of the unknown – not knowing how to start or what to do. Instead, we let addiction issues "slide" and move on to the next patient. Here are some considerations as you contemplate taking the next step in addiction care for your patients.




WHY SHOULD I CONSIDER ADDING OPIOID ADDICTION CARE TO MY ALREADY BUSY PRACTICE?


Addiction is present in many of our patients. Approximately 48 million Americans aged ≥ 12 (17%) experienced a Substance Use Disorder (SUD) in the past year. While alcohol and marijuana are the leading SUDs, opioids are the primary drug of use in over 2 million individuals.[SAMHSA2023] Thus, it is likely that some of your patients have an opioid addiction.


HOW DOES ADDICTION COMPARE TO OTHER CHRONIC MEDICAL ILLNESSES?

Addiction is like other chronic illnesses we routinely treat. Relapse rates for opioid addiction are 40–60%, which is strikingly similar to type 2 diabetes or hypertension when patients stop following their treatment plans. What are our thoughts about a patient with opioid addiction who has relapsed, compared with our thoughts about a patient who had a “blood pressure relapse” after running out of his blood pressure medications two months ago, followed by a “Big Mac Attack”?



OPIOID ADDICTION: IS THIS A WILLPOWER OR A DISEASE PROBLEM?
Addiction is a chronic brain disease, not a moral failing or character flaw. Characterizations of people with addiction as lacking willpower or being deficient in moral character only perpetuate the stigma associated with addiction, leading to clinicians’ discomfort in treating this disease. Like many other chronic medical conditions, addiction is treatable.[SAMHSA2018] The brain changes are well described and involve alterations in the brain’s reward circuitry, stress pathways, and prefrontal cortex function.[Volkow] Unfortunately, these brain changes are permanent, explaining why relapse is so common.

IS IT ESSENTIAL TO UNDERSTAND THE DIFFERENCE BETWEEN PHYSICAL DEPENDENCE AND ADDICTION?
Yes. It impacts clinicians’ motivation to start Medication for Opioid Use Disorder (MOUD), and it also impacts clinicians’ education of the patients, who generally do not want to start MOUD. Teaching patients about this difference aids their understanding of the beneficial impact of MOUD.






AUTHORS
Kurt Kleinschmidt, MD, FACMT, FASAM
Professor of Emergency Medicine
Division of Medical Toxicology
University of Texas Southwestern Medical Center
Medical Director, Perinatal Intervention Program (PIP)
Parkland Health and Hospital System

Brett Roth, MD, FACEP, FACMT, FASAM
Professor of Emergency Medicine
Division of Medical Toxicology
University of Texas Southwestern Medical Center
Medical Director, North Texas Poison Center, Parkland Health

Andrea Nillas, MD
Assistant Professor of Emergency Medicine
Division of Medical Toxicology
UT Southwestern Medical Center

Joshua Kern, MD
Assistant Professor of Emergency Medicine
UT Southwestern Medical Center
Perinatal Intervention Program (PIP), Parkland Health

Aldo Andino, MD, FASAM
Assistant Professor of Emergency Medicine
UT Southwestern Medical Center
Perinatal Intervention Program (PIP), Parkland Health

Robert J. Hoffman, MD, MS
Associate Professor of Pediatrics
Division of Pediatric Emergency Medicine
Division of Medical Toxicology
UT Southwestern Medical Center

Kelly Hogue, RN, MSN
Manager, North Texas Poison Center, Parkland Health

Anelle Menendez, MD, CSPI
Poison Control Specialist, Clinical Educator, North Texas Poison Center, Parkland Health

Lizbeth Petty, MPH
Public Health Education Manager, North Texas Poison Center, Parkland Health

Nancy S. Onisko, DO
Assistant Professor of Emergency Medicine
Division of Medical Toxicology
UT Southwestern Medical Center
Assistant Medical Director, Perinatal Intervention Program, Parkland Health
Physical dependence is a normal, predictable, physiologic response of the body’s cells that counters the continuous effects of an administered drug, such as an opioid. This occurs in everyone. If opioids are decreased or stopped, the patient experiences “withdrawal.” While opioid withdrawal is uncomfortable, it does not alter the brain and does not harm one’s life. Once withdrawal is done, physical dependence is gone.
Addiction is a lifelong brain disease that occurs in only a small percentage of patients who have physical dependence. Its pathogenesis is a complex interplay of genetic susceptibility, psychological predispositions, and social or environmental influences. It includes changes in the midbrain structures and the cortical brain. It causes cravings to use the drug so intense that patients will do inappropriate things to continue using the drug. This leads to broken relationships, poor functioning, and a “life” that goes poorly. The American Society of Addiction Medicine’s definition of addiction notes that patients “use substances or engage in behaviors that become compulsive and often continue despite harmful consequences.”[ASAM]
Opioid Use Disorder (OUD) is diagnosed using the SUD criteria from the DSM-5.[DSM-5] There are 11 criteria, and many are harmful behaviors. Most addiction clinicians use these criteria, rather than a base definition of addiction, because the criteria are relatively easy to discern and to “score” (a brief scoring sketch follows the list below). Patients with moderate (4-5 criteria) or severe (≥6 criteria) OUD will compulsively do some of these listed harmful behaviors, and thus have addiction. The criteria are:
1. Taking opioids in larger amounts or for longer than intended.
2. Persistent desire or unsuccessful efforts to cut down.
3. Spending much time obtaining, using, or recovering from opioids.
4. Craving or strong desire to use opioids.
5. Recurrent use causes failure to fulfill work, school, or home obligations.
6. Continued use despite social or interpersonal problems.
7. Giving up important activities due to substance use.
8. Recurrent use in physically hazardous situations.
9. Continued use despite physical or psychological problems.
10. Tolerance.
11. Withdrawal symptoms.
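For readers who want to see the arithmetic, the brief sketch below maps a count of criteria met to a severity label. The moderate (4-5 criteria) and severe (≥6 criteria) bands come from the paragraph above; the mild band (2-3 criteria) follows the standard DSM-5 convention. It is only an illustration: the diagnosis rests on the clinical interview, not on a script.

```python
# Illustrative mapping from the number of DSM-5 OUD criteria met in the past
# 12 months to a severity label. Thresholds: mild = 2-3, moderate = 4-5,
# severe = 6 or more. Clinical judgment always takes precedence over a raw count.

def oud_severity(criteria_met: int) -> str:
    """Map a count of DSM-5 OUD criteria (0-11) to a severity label."""
    if not 0 <= criteria_met <= 11:
        raise ValueError("criteria_met must be between 0 and 11")
    if criteria_met >= 6:
        return "severe"
    if criteria_met >= 4:
        return "moderate"
    if criteria_met >= 2:
        return "mild"
    return "does not meet criteria for OUD"

# Example: a patient endorsing craving, unsuccessful attempts to cut down,
# tolerance, withdrawal, and missed obligations meets 5 criteria.
print(oud_severity(5))  # -> "moderate"
```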
ISN’T TREATING OPIOID ADDICTION WITH BUPRENORPHINE OR METHADONE JUST EXCHANGING ONE DRUG FOR ANOTHER?
Answering this question requires understanding the difference between physical dependence and addiction. The use of these medications is not an “exchange” of drugs; these medications treat addiction, enabling patients to return to normal life.
When a patient with opioid addiction is on buprenorphine or methadone, they have physical dependence on an opioid. However, physical dependence does not harm life, and many patients on chronic opioids are leading normal lives. These medications treat the disease without disrupting life or causing addiction. They decrease or eliminate the cravings; thus, patients no longer do “harmful” things to obtain and use drugs. This enables them to return to “normal life.” Therefore, this is not simply an exchange of drugs. These medications are a true addiction treatment.
There is another reason why patients with opioid addiction don’t look upon methadone or buprenorphine as “exchanges” for their drug of use. These medications cross the blood-brain barrier very slowly. Thus, they do not cause a “rush” or “intoxication.”
HOW EFFECTIVE IS MEDICATION FOR OPIOID USE DISORDER FOR THE TREATMENT OF OUD VERSUS COUNSELING ALONE?
Many studies reflect that MOUD is superior to counseling/education alone in yielding better patient outcomes. The National Institutes of Health and all medical organizations that address addiction strongly support MOUD. Counseling and psychosocial support remain necessary adjuncts—they assist with the behavioral, social, and psychological aspects of addiction. Consider these interventions to be analogous to lifestyle interventions for many other chronic medical conditions, such as diabetes and hypertension.
WHAT IS BETTER FOR TREATING OPIOID ADDICTION: METHADONE, BUPRENORPHINE, OR NALTREXONE?
Clinical trials support that any medication is better than none. Naltrexone is an opioid antagonist that is also FDA-approved for treating OUD. However, physically dependent patients have to undergo “detoxification,” i.e., withdrawal, before starting naltrexone. This is hard to do. Methadone is a full opioid agonist that can only be used to treat addiction via a federally regulated Outpatient Treatment Program, i.e., a methadone clinic. However, buprenorphine, a partial opioid agonist, may be administered and prescribed in an office-based setting by any clinician with a DEA license. The “best” medication for any individual patient must be individualized.
WHAT ARE THE BARRIERS TO PRESCRIBING BUPRENORPHINE?
There are many barriers, including the fact that clinicians often: (1) lack buprenorphine training, (2) have concerns over diversion, (3) lack time in a busy setting, (4) don’t know how to discuss addiction, and (5) face institutional barriers. Clinicians’ ability to use buprenorphine was greatly eased with the elimination of the previously required 8-hour training course, a.k.a. the “Waiver.”
HOW DO I DETERMINE WHETHER A PATIENT HAS AN OPIOID ADDICTION?
The best way is to become familiar with the DSM-5 criteria for SUD and apply them.[DSM-5]
However, the basic ASAM definition of addiction notes that addiction patients “use substances or engage in behaviors that become compulsive and often continue despite harmful consequences.” Their actions cause many “life” problems, including fights with family over drug use, failure to meet obligations, and the loss of everything. Our evaluation of these patients begins with the simple question, “In what ways has your drug use negatively affected your life?” This question often elicits a teary response and a long list of life problems. That patient thus does have addiction, and we don’t necessarily discuss each of the 11 SUD criteria.
I AM CONSIDERING USING BUPRENORPHINE FOR PATIENTS. WHERE CAN I GET HELP TO DO SO?
The Division of Medical Toxicology in the Department of Emergency Medicine at UT Southwestern Medical Center has created its own clinical care protocols and is glad to discuss them and share them with you. This Division includes five physicians Board Certified in Addiction Medicine. The Division can provide you with medical support for using the care protocols 24/7. Access for addiction issues and MOUD assistance is via the Overdose Prevention Hotline at the North Texas Poison Center (NTPC) at Parkland Health at 214-590-4000. The Hotline can also assist with patient referral for continued care.
WE PROVIDE: 24/7 Medical Support to aid clinicians with buprenorphine and opioid addiction care.
Referral Support for patients 7A-11P daily
214-590-4000
IF I START BUPRENORPHINE ON A PATIENT, DO I HAVE TO CONTINUE TO PROVIDE THE ADDICTION CARE MYSELF?
Support resources continue to evolve. Our team is committed to aiding you in this process with the goal of helping you build the confidence and skills to manage these patients independently over time. However, our Overdose Prevention Hotline at the NTPC at Parkland Health can assist you with referrals to optimal community resources.
Opioid addiction is a disease that physicians can—and must— address. Evidence-based treatments like buprenorphine and methadone transform lives, helping patients move from chaos to stability. Familiarity with MOUD and the use of available resources can improve our patients’ lives. DMJ
REFERENCES
ASAM. Definition of Addiction. American Society of Addiction Medicine. https://www.asam.org/quality-care/definition-of-addiction
DSM-5. Diagnostic and Statistical Manual of Mental Disorders, 5th ed. American Psychiatric Association; 2013.
SAMHSA2018. Substance Abuse and Mental Health Services Administration. Facing Addiction in America: The Surgeon General’s Spotlight on Opioids. 2018. https://www.ncbi.nlm.nih.gov/books/NBK538436/
SAMHSA2023. Substance Abuse and Mental Health Services Administration. 2023 National Survey on Drug Use and Health. https://www.samhsa.gov/data/data-we-collect/nsduh-national-survey-drug-use-and-health/national-releases/2023
Volkow ND, et al. Neurobiologic Advances from the Brain Disease Model of Addiction. N Engl J Med. 2016;374(4):363-371.

by Lisa N. Sharwood, PhD, GradDip Hlth Data Sci, MPH, GradDip Adv Nsg (Critical Care), BN, RN
Artificial Intelligence (AI) can significantly enhance the efficiency and accuracy of injury surveillance systems by automating manual tasks. Singh et al1 demonstrate the successful deployment of natural language processing (NLP) algorithms using transformer models to automate the detection of injured patients presenting to emergency departments (EDs) and generate summaries of injury events from triage notes. This automation was demonstrated to reduce the previously largely manual medical record screening workload by 83%, allowing health care professionals to focus on higher-value tasks.
Singh et al1 show the use of AI to automate injury surveillance tasks has significant potential to improve efficiency and thereby reduce costs. Their model, developed using the DistilBERT-base-uncased architecture, reduced the number of medical records requiring manual review across different datasets. This reduction in manual effort enables health care systems to allocate resources more effectively, potentially leading to faster injury reporting and improved public health responses. In the ED setting particularly, this can give back time to clinicians who are increasingly overburdened with rising presentation numbers and increasingly complex caseloads. Machine learning models have also been used to classify injury narratives in ED presentations in some hospitals in Queensland, Australia. These efforts improved classification accuracy and also reduced manual coding time by 10%.2
Toy and colleagues’ scoping review3 found a small but growing body of evidence supporting the use of AI models to strengthen prehospital dispatch decisions or early trauma care for seriously injured patients. Included studies predominantly used AI models for predictive purposes; for example, predicting the need of life-saving interventions or injury severity scores.
AI models have been shown to outperform traditional methods in injury surveillance tasks. For instance, a machine learning model developed using the XGBoost algorithm achieved a precision of 95% and a recall of 88% in identifying child abuse cases from clinical notes, significantly outperforming manual detection methods.4 Similarly, a model predicting trauma mortality in ED patients achieved an area under the receiver operator curve of 0.9974, surpassing the performance of traditional injury severity scores.5 Michel et al6 noted the advantages of AI models to achieve impressive triage predictions; however, substantial information technology ecosystem integration barriers still exist. The review from Michel et al6 of ED clinical decision support tools for triage showed only around 1 in 10 reviewed systems to be interfaced with electronic medical records.
The accuracy of AI algorithms in injury surveillance is a key factor in their adoption. Studies have demonstrated the high performance of AI models in various injury-related tasks. For example, a model developed using the Llama-2 architecture achieved accuracies of 89% to 97% in extracting core injury information, such as injury mechanism, intent, and severity from clinical notes.7 Similarly, a ChatGPT model showed a pooled accuracy of 86% in triaging patients based on urgency in ED settings.8 The accuracy of AI models in injury surveillance depends on continuous improvement and validation.
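To make the screening workflow concrete, here is a minimal, hypothetical sketch in Python. Singh et al fine-tuned a DistilBERT transformer; this example substitutes a far simpler bag-of-words logistic regression on invented triage notes, purely to illustrate the idea of flagging likely injury cases for human review and summarizing performance as precision and recall.

```python
# Simplified stand-in for AI-assisted injury surveillance screening:
# flag triage notes that likely describe an injury so only a subset needs
# manual review. (Singh et al used a fine-tuned DistilBERT model; a TF-IDF
# logistic regression is used here only to keep the sketch self-contained.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-written triage notes (1 = injury-related, 0 = not).
notes = [
    "fell from ladder, deformity of left wrist",
    "struck by car while crossing street, leg pain",
    "laceration to scalp after assault",
    "burn to right hand from hot oil",
    "chest pain radiating to left arm, diaphoresis",
    "fever and productive cough for three days",
    "worsening shortness of breath, known COPD",
    "generalized abdominal pain, nausea, no trauma",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

# Screen new notes; anything flagged would go to a human coder for confirmation.
new_notes = [
    "twisted ankle playing soccer, unable to bear weight",
    "sore throat and nasal congestion for two days",
]
print(list(zip(new_notes, model.predict(new_notes))))

# Performance is typically reported as precision (flagged notes that are truly
# injuries) and recall (true injuries that were flagged). Evaluated in-sample
# here purely for illustration; a real model needs a held-out validation set.
predicted = model.predict(notes)
print("precision:", precision_score(labels, predicted))
print("recall:   ", recall_score(labels, predicted))
```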
The World Health Organization promotes injury surveillance as a tool for systematic data collection, analysis, interpretation, dissemination, and use for injury prevention and control. From a public health approach, it is vital to first understand the extent of the problem—that is, how many injuries occur across the population, the specific causes, and who is at risk. Without detailed injury epidemiological data, priorities for effective intervention cannot be identified nor their effectiveness evaluated. Injury surveillance does not generally require immediate situational awareness; however, for some particular external causes, early identification and response could enable harm mitigation and save lives. A key example of this is the opioid crisis in North America, where the challenges of estimating opioid use and exposure in a large population and the absence of comprehensive surveillance have been responsible for slow public health and policy responses to reduce opioid use and opioid-related deaths.9
The integration of AI in injury surveillance within EDs has the potential to revolutionize how data are collected, analyzed, and used to improve public health response and potentially patient outcomes. However, this integration must be carefully managed for safe implementation, with important considerations including data privacy, algorithm accuracy, cost, ethical oversight, and responsibility.
Data privacy is a critical concern when implementing AI in injury surveillance. Electronic medical and health records contain sensitive patient information, and the use of AI models to process and disclose this data must comply with stringent privacy regulations. Studies have demonstrated the potential of NLP techniques to extract injury-related information from clinical notes while maintaining patient anonymity.4,7 However, the use of large language models for such tasks requires careful consideration of data security measures to prevent unauthorized access or breaches. To address privacy concerns, researchers have explored the use of anonymization techniques to protect patient identities while still enabling the extraction of relevant injury data. For example, a study on child abuse detection used NLP to identify cases of suspected abuse while ensuring the confidentiality of patient records.4 Additionally, the use of secure AI models, such as those deployed in the Korean National Emergency Department Information System, has demonstrated the feasibility of maintaining data privacy while leveraging AI for injury surveillance.5 AI systems must meet high cybersecurity standards, support secure access for both modern cloud-based and legacy health systems, while ensuring all user activity is logged, encrypted, and auditable to safeguard patient data privacy.
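As a purely illustrative example of the anonymization step described above, the short Python sketch below redacts a few obvious identifiers from a hypothetical note before it would ever reach a surveillance model. Real de-identification relies on validated tools, far broader identifier coverage, and governance review; simple patterns like these will miss identifiers and are shown only to make the concept concrete.

```python
# Toy rule-based de-identification of free text (illustration only).
import re

# Hypothetical triage note containing direct identifiers.
note = ("John Smith, MRN 00123456, DOB 04/02/2017, fell from playground "
        "equipment; mother reachable at 214-555-0199.")

redaction_rules = [
    (r"\bMRN\s*\d+\b", "MRN [REDACTED]"),            # medical record numbers
    (r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]"),       # dates
    (r"\b\d{3}-\d{3}-\d{4}\b", "[PHONE]"),            # US-style phone numbers
    (r"^[A-Z][a-z]+ [A-Z][a-z]+", "[PATIENT NAME]"),  # leading full name
]

deidentified = note
for pattern, replacement in redaction_rules:
    deidentified = re.sub(pattern, replacement, deidentified)

print(deidentified)
# -> [PATIENT NAME], MRN [REDACTED], DOB [DATE], fell from playground
#    equipment; mother reachable at [PHONE].
```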
The ethical use of patient data in AI-driven injury surveillance systems is paramount. This includes obtaining informed consent from patients and ensuring transparency in how their data are used. A study10 on AI applications in emergency medicine emphasized the importance of patient autonomy and the need for clear communication regarding the use of AI in their care. By prioritizing ethical data practices, health care systems can build trust and ensure the responsible use of AI
technologies. Patient autonomy is a fundamental ethical principle in the use of AI in injury surveillance. This includes ensuring that patients are informed about the use of AI in their care and providing them with the opportunity to opt out if they choose. A study on informed consent in AI-driven emergency medicine emphasized the need for clear communication between health care practitioners and patients regarding the use of AI systems.10 By respecting patient autonomy, health care systems can ensure the ethical use of AI technologies. Ethical considerations in AI-driven injury surveillance include addressing potential biases in model predictions and ensuring transparency in decision-making processes. A fundamental yet often overlooked ethical consideration in the current body of evidence is the human responsibility to act on the needs identified by AI models, ensuring that insights generated by these systems lead to meaningful and accountable interventions.
In an era where data linkage, warehousing, and machine learning across large data collections are expanding exponentially, there is not only immense capacity to undertake timely injury surveillance, but also a capacity to then respond to emerging problems with preventive action that can improve population health. The integration of AI in injury surveillance within EDs offers significant benefits, including improved efficiency, accuracy, and patient outcomes. However, this integration must be accompanied by careful consideration of safe implementation, data privacy, algorithm accuracy, and ethical considerations. By addressing these challenges and leveraging the potential of AI, health care systems can create robust injury surveillance systems that enhance public health responses and improve patient care. The work of Singh et al1 is a compelling demonstration of how thoughtfully applied machine learning can yield immediate, measurable benefits in clinical operations. It not only reduces administrative burden but enhances public health surveillance capacity—a promising direction for data-driven health care. DMJ
REFERENCES
1. Singh D, Celik A, Zhang EWJ, Liu E, Rosenfield D. AI-driven injury reporting in pediatric emergency departments. JAMA Netw Open. 2025;8(7):e2524154. doi:10.1001/jamanetworkopen.2025.24154
2. Catchpoole J, Niven C, Möller H, et al. External causes of emergency department presentations: a missing piece to understanding unintentional childhood injury in Australia. Emerg Med Australas. 2023;35(6):927-933. doi:10.1111/1742-6723.14259
3. Toy J, Warren J, Wilhelm K, et al. Use of artificial intelligence to support prehospital traumatic injury care: a scoping review. J Am Coll Emerg Physicians Open. 2024;5(5):e13251. doi:10.1002/emp2.13251
4. Landau AY, Blanchard A, Kulkarni P, et al. Harnessing the power of machine learning and electronic health records to support child abuse and neglect identification in emergency department settings. Stud Health Technol Inform. 2024;316:1652-1656. doi:10.3233/SHTI240740
5. Lee S, Kang WS, Kim DW, et al. An artificial intelligence model for predicting trauma mortality among emergency department patients in South Korea: retrospective cohort study. J Med Internet Res. 2023;25:e49283. doi:10.2196/49283
6. Michel J, Manns A, Boudersa S, et al. Clinical decision support system in emergency telephone triage: a scoping review of technical design, implementation and evaluation. Int J Med Inform. 2024;184:105347. doi:10.1016/j.ijmedinf.2024.105347
7. Choi DH, Kim Y, Choi SW, Kim KH, Choi Y, Shin SD. Using large language models to extract core injury information from emergency department notes. J Korean Med Sci. 2024;39(46):e291. doi:10.3346/jkms.2024.39.e291
8. Kaboudi N, Firouzbakht S, Shahir Eftekhar M, et al. Diagnostic accuracy of ChatGPT for patients’ triage; a systematic review and meta-analysis. Arch Acad Emerg Med. 2024;12(1):e60.
9. Carpenter KA, Nguyen AT, Smith DA, et al. Which social media platforms facilitate monitoring the opioid crisis? PLOS Digit Health. 2025;4(4):e0000842. doi:10.1371/journal.pdig.0000842
10. Iserson KV. Informed consent for artificial intelligence in emergency medicine: a practical guide. Am J Emerg Med. 2024;76:225-230. doi:10.1016/j.ajem.2023.11.022
by Keith Dugger, Robin Sheridan, and Chandani Patel, Attorneys with Hall, Render, Killian, Heath & Lyman P.C.
Not unlike the laws in just about every other U.S. state, Chapter 15 of the Texas Business and Commerce Code requires that non-solicitation and non-compete agreements for all employees be limited to reasonable time, geographic, and scope of activity parameters. However, on June 20, 2025, Texas Governor Greg Abbott signed into law Senate Bill No. 1318 (“SB 1318”), dramatically amending the state’s restrictions on restrictive covenants for physicians and certain other health care practitioners.
Although prior section 15.50(c), which excluded a physician’s business ownership interest in a licensed hospital or licensed ambulatory surgical center from the limitations, appears to have been removed, the provisions of Chapter 15.50 are limited to noncompetes “related to the practice of medicine,” and SB 1318 expressly provides that the limitations do not apply to physicians who are managing or directing medical services in an administrative capacity for a medical practice or other health care provider.
Notice that the provisions for physicians and non-physicians are not identical: requirements that allow for continuing care and access to lists/records were not provided for non-physician practitioners; additionally, a non-physician practitioner’s covenant is not void if employment/contract is involuntarily terminated without good cause.
These changes in the law apply only to a covenant entered into or renewed on or after September 1, 2025. However, a number of issues remain unclear. Will contract amendments be considered grandfathered for purposes of applicability or new agreements subject to SB 1318? Will auto-renewing provisions be considered “renewals” such that the agreement will then be subject to the new limitations? How is an employer required to calculate “salary and wages” for buyout purposes? DMJ
Dentists, Physician Assistants, Registered Nurses, Licensed Vocational Nurses, Licensed Practical Nurses, Advanced Practice Registered Nurses
Previous: Covenant must be ancillary to or part of an otherwise enforceable agreement at the time the agreement is made.
New: No change.

Previous: Covenant must contain limitations as to time, geographical area, and scope of activity to be restrained that are reasonable and do not impose a greater restraint than is necessary to protect goodwill or other business interest.
New: Covenant must contain limitations as to time and scope of activity to be restrained that are reasonable and do not impose a greater restraint than is necessary to protect the goodwill or other business interest of the promisee; the geographic area subject to the covenant is now limited to five miles or less from the primary location of practice prior to termination.

Additional new requirements:
• A buyout must be included and must be set in an amount not greater than the practitioner’s total annual salary and wages at the time of termination.
• Covenant must expire no later than the one-year anniversary of the date the contract or employment has been terminated.
• All terms and conditions of the covenant must be written, clear, and conspicuous.
This article is educational in nature and is not intended as legal advice. Always consult your legal counsel with specific legal matters. If you have any questions or would like additional information about this topic, please contact Keith Dugger at kdugger@hallrender.com; Robin Sheridan at rsheridan@hallrender.com; Chandani Patel at cpatel@hallrender.com; or your primary Hall Render contact. Keith Dugger, Robin Sheridan, and Chandani Patel are attorneys with Hall, Render, Killian, Heath & Lyman, P.C., a national law firm focused exclusively on matters specific to the health care industry. Please visit the Hall Render Blog at blogs.hallrender.com for more information on topics related to health care law.
Physicians

Previous: Covenant must be ancillary to or part of an otherwise enforceable agreement at the time the agreement is made.
New: No change.

Previous: Covenant must contain limitations as to time, geographical area, and scope of activity to be restrained that are reasonable and do not impose a greater restraint than is necessary to protect goodwill or other business interest.
New: Covenant must contain limitations as to time and scope of activity to be restrained that are reasonable and do not impose a greater restraint than is necessary to protect goodwill or other business interest; the geographic restriction is now limited to five miles or less from the location at which the physician primarily practiced before termination.

Previous: Covenant may not deny the physician access to a list of their patients seen or treated within one year of the contract or employment termination.
New: No change.

Previous: Covenant must provide access to medical records of the physician’s patients upon authorization of the patient.
New: No change.

Previous: Covenant must provide that any access to a list of patients or to patients’ medical records after termination of the contract or employment shall not require such list or records to be provided in a format different than that by which such records are maintained, except by mutual consent of the parties to the contract.
New: No change.

Previous: A buyout must be included and must be set at a reasonable price or by mutually agreed-upon arbitration.
New: A buyout must be included in an amount not greater than the physician’s total annual salary and wages at the time of termination of the contract or employment.

Previous: A physician may not be prohibited from providing continuing care and treatment to a specific patient(s) during the course of an acute illness, even after the contract or employment has been terminated.
New: No change.

Additional new requirements:
• Covenant must expire no later than the one-year anniversary of the date the contract or employment has been terminated.
• All terms and conditions of the covenant must be written, clear, and conspicuous.
• An otherwise enforceable covenant will be void and unenforceable against a physician who is involuntarily discharged from contract or employment without good cause. “Good cause” is defined in the statute as “a reasonable basis for discharge of a physician from contract or employment that is directly related to the physician’s conduct, including the physician’s conduct on the job or otherwise, job performance, and contract or employment record.”
• Review the terms of all non-solicitation or non-compete agreements with new employees or contractors in Texas to ensure compliance with State law.
• Before entering into a non-solicitation or non-compete agreement with any newly employed physician or other health care practitioner in Texas on or after September 1, 2025, ensure that the agreement complies with the new law.
• Consider if handbook provisions, policies, and contractor guidelines applicable to Texas employees and/or contractors comply with the law.
• Keep in mind that the new requirements do not apply to administrative services, so structure the employment agreement of physicians with dual roles (e.g., a physician who serves as a clinical provider and a medical director) carefully to ensure maximum protection.
• Consider revisions to “for cause” termination provisions within physician employment agreements, as well as the definitions within reduction-in-force policies/procedures applicable to physicians.
• Separation Agreement templates may need to be revised for physician and non-physician practitioners in Texas.
• Notice of no-cause separation is no longer the least consequential approach for physicians in Texas; mutual agreement separations should be carefully crafted.
• Leased worker and independent contractor agreements for Texas facilities executed for services on and after September 1 may need to be revised.
• Stay up to date on any pending legislative efforts and court decisions that may alter existing responsibilities or create future obligations for your workforce.

by Kristine D. Olson, MD, MSc, Daniella Meeker, PhD, and Matt Troup, PA-C
Large language models are generative artificial intelligence (AI) systems that can produce professional-appearing text. They are taught to listen, instantaneously transcribe, assimilate, and assemble a document, with fine-tuning by human training.1 Ambient AI platforms can listen to a clinical encounter and draft clinical documentation. This technology has the potential to reduce professional burnout associated with excessive time spent documenting in the electronic health record (EHR) and free professionals for more meaningful time with patients, with loved ones, or for self-care.2
Physicians, who are in short supply and high demand,3 spend more than half their workday documenting in the EHR,4-7 and only a quarter of their time is spent face to face with patients.6 The proportion of time spent documenting continues to escalate,5,8 especially for primary care professionals,9-11 and is associated with burnout, reduction in work effort, and turnover.10,12-14
The National Academy of Medicine convened a meeting in December 2024 on the potential for AI to improve health worker well-being (eg, reduce burnout).15 To date, there are scant, mostly single-center data assessing whether this technology could reduce
administrative burden, liberate time for patients, and reduce professional burnout.16
The aim of this preintervention and postintervention study was to examine whether 30 days of using an ambient AI scribe is associated with a reduction in burnout among clinicians delivering care in ambulatory clinics. The secondary aims were to explore whether the ambient AI scribe was associated with improvements in cognitive task load, time spent documenting after hours, undivided attention on patients, notes that patients can understand, and adding patients to the clinic schedule if urgently needed.
This quality improvement study was conducted between February 1 and October 31, 2024, in 6 health systems across the US that deployed the Abridge ambient AI scribe (Abridge AI, Inc) intervention to draft clinical documentation. The Yale University Institutional Review Board determined the study to not be human participant research because it was a
secondary analysis of deidentified aggregated survey data originally collected for quality improvement, for which informed consent was not required. The authors who conducted the statistical analysis (K.D.O. and D.M.) were not involved in the intervention; had no contact with participants; and received no incentives or remuneration from the vendor. The study followed the Standards for Quality Improvement Reporting Excellence (SQUIRE) reporting guideline.
Participants agreed to complete an evaluation before and after the 30 days of ambient AI scribe use. Health systems’ digital health leaders recruited ambulatory care medical doctors and advanced practice practitioners to participate. Participation was voluntary without incentives other than the potential benefit of the ambient AI scribe. Participants were onboarded by their organization with standard materials from the vendor and site-developed training methods at their discretion. Participants received a preintervention survey and a postintervention survey 30 days later (eTable 1 in Supplement 1).
For use of the ambient AI scribe, clinicians selected the relevant patient encounter from their ambulatory EHR schedule, obtained verbal consent from the patient, and recorded the encounter. After recording, documentation was instantaneously generated in a standard medical office note format on a secure online portal that allowed viewing and editing. Clinicians could highlight segments of the note to see underlying transcripts or hear source audio recordings. After editing, the text was automatically imported into the clinician’s note template. Patients were informed that after a short grace period, the original recordings and associated transcripts would be erased. The vendor confirmed that all sites used the same version of the technology throughout the 9-month study period.
The vendor distributed the standardized survey before the intervention and after day 30 of the intervention for 5 organizations; the sixth system distributed it independently. Participation was not anonymous; individuals were prompted by the site-based team to complete assessments. Participants were included in the analysis if they practiced in an ambulatory clinic and completed the preintervention and postintervention surveys. Aggregated and deidentified data from all 6 sites were sent to independent investigators at 1 of the participating sites (K.D.O. and D.M.) for analysis (Figure; eFigure 1 in Supplement 1).
MEASURES: PRIMARY OUTCOME
Burnout was assessed with a 5-point, single-item metric that has been validated against the emotional exhaustion domain of the full Maslach Burnout Scale.17-19 The single-item metric is part of the popular Mini-Z scale,19 often used for brief surveys. Per standard convention, burnout was defined by a score of at least 3 points, which allowed for comparison with the existing literature; a score of 3 corresponds to "I am beginning to burn out and have 1 or more symptoms of burnout (eg, emotional exhaustion)."17-19
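For readers who want to see the scoring mechanics, the brief sketch below is illustrative only (not the study's code): the cutoff of at least 3 follows the convention described above, while the min-max rescaling onto a 10-point scale and the sample scores are assumptions introduced here for demonstration.

def burned_out(score: int, cutoff: int = 3) -> bool:
    """Single-item burnout score (1-5); a score of 3 or more counts as burnout."""
    return score >= cutoff

def to_ten_point(score: float, lo: float = 1.0, hi: float = 5.0) -> float:
    """Assumed min-max rescaling of a 1-5 item onto a 0-10 scale (illustrative only)."""
    return (score - lo) / (hi - lo) * 10

pre = [3, 4, 2, 5, 3, 1, 4]    # illustrative preintervention scores, not study data
post = [2, 3, 2, 4, 3, 1, 2]   # illustrative postintervention scores
prevalence_pre = sum(burned_out(s) for s in pre) / len(pre)
prevalence_post = sum(burned_out(s) for s in post) / len(post)
print(f"burnout prevalence: {prevalence_pre:.0%} -> {prevalence_post:.0%}")
print([to_ten_point(s) for s in pre])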
We evaluated several factors important to clinicians for an association with the use of ambient AI scribes. Note-related cognitive task load was assessed by a sum composite score of 3 pertinent items modified from the validated 6-item National Aeronautics and Space Administration Task Load Index.20-22 A 4-item version of this scale (excluding constructs similar to burnout, including frustration and performance) has been used previously to assess a national sample of physicians.21 This note-related, 3-item version excludes evaluation of physical demand and includes the questions, "How mentally demanding is it to write your notes," "how hurried/rushed is the pace of your note writing," and "how hard do you have to work to accomplish your level of note-writing performance?" Focused attention was assessed by the statement, "I'm able to give patients my undivided attention during the encounter," on a scale of 1 (strongly disagree) to 5 (strongly agree). Patient access was assessed by the statement, "I feel that I could add at least 1 more patient encounter to my clinic session if urgently needed," on a scale of 1 (strongly disagree) to 5 (strongly agree). Number of patients to be urgently added to the clinic schedule was assessed by the statement, "I estimate the number of patient encounters I could add to my clinic session is 1 patient, 2 patients, 3 patients, or 4 or more patients." Documentation after hours was assessed by the statement, "The average amount of time I spend per week writing notes outside of clinic hours is," selected from a range of 1 to 10 hours (1 site allowed free numerical entry, which was truncated after 10 hours for analysis). For the postintervention survey, the same questions and statements were prefaced by the words, "With Abridge," to assess the influence of the ambient AI scribe on the note-writing task (eTable 1 in Supplement 1).
IMPORTANCE: While in short supply and high demand, ambulatory care clinicians spend more time on administrative tasks and documentation in the electronic health record than on direct patient care, which has been associated with burnout, intention to leave, and reduced quality of care.
OBJECTIVE: To examine whether ambient AI scribes are associated with reducing clinician administrative burden and burnout.
DESIGN, SETTING, AND PARTICIPANTS: This quality improvement study used preintervention and 30-day postintervention surveys to evaluate the use of the same ambient AI platform for clinical note documentation among ambulatory care physicians and advanced practice practitioners of 6 academic and community-based health care systems across the US. Clinicians were recruited by the health systems' digital health leaders; participation was voluntary. The study was conducted between February 1 and October 31, 2024.
EXPOSURE: Use of an ambient AI scribe for 30 days.
MAIN OUTCOMES AND MEASURES: The primary outcome was change in self-reported burnout, estimated using hierarchical logistic regression. Secondary outcomes of burnout evaluated were changes in note-related cognitive task load, focused attention on patients, patient understandability of notes, ability to add patients to the clinic schedule if urgently needed, and time spent documenting after hours. Outcome measures were linearly transformed to 10-point scales to ease interpretation and comparison. Differences between preintervention and postintervention scores were determined using paired t tests.
RESULTS: Of the 451 clinicians enrolled, 272 completed the preintervention and postintervention surveys (60.3% completion rate), and 263 with direct patient care in ambulatory clinics (mean [SD] years in practice, 15.1 [9.3]; 141 female [53.6%]) were included in the analysis. The sample included 131 primary care practitioners (49.7%), 232 attending physicians (88.2%), and 168 academic faculty (63.9%). After 30 days with the ambient AI scribe, the proportion of participants experiencing burnout decreased significantly from 51.9% to 38.8% (odds ratio, 0.26; 95% CI, 0.13-0.54). On 10-point scales, the ambient AI scribe was associated with significant improvements in secondary outcomes of burnout (mean [SE] difference, 0.47 [0.12] points), note-related cognitive task load (mean [SE] difference, 2.64 [0.13] points), ability to provide undivided attention (mean [SE] difference, 2.05 [0.18] points), patient understandability of their care plans from reading the notes (mean [SE] difference, −0.44 [0.17] points), ability to add patients to the clinic schedule if urgently needed (mean [SE] difference, 0.51 [0.24] points), and time spent documenting after hours (mean [SE] difference, 0.90 [0.19] hours).
CONCLUSIONS AND RELEVANCE: This multicenter quality improvement study found that use of an ambient AI scribe platform was associated with a significant reduction in burnout, cognitive task load, and time spent documenting, as well as the perception that it could improve patient access to care and increase attention on patient concerns in an ambulatory environment. These findings suggest that AI may help reduce administrative burdens for clinicians and allow more time for meaningful work and professional well-being.
The preintervention and postintervention analysis used aggregated, deidentified survey data collected as part of a quality improvement program evaluation across 6 health care systems deploying the same version of an ambient AI scribe. The statistical analysis plan was preregistered, including a commitment to report null findings.23 The primary outcome was burnout, and secondary analyses included changes in multiple other outcome measures linearly transformed to 10-point scales for ease of interpretation and comparison. The sample was characterized by standard descriptive statistics (including sex, years in practice, and specialty). Race and ethnicity data were not collected to mitigate the risk of identifying respondents even after otherwise aggregating and deidentifying data for secondary analysis. For the primary outcome of burnout, we used the conventional dichotomized burnout outcome (score ≥3). For the primary analysis, we regressed clinicians' burnout indicators on the intervention period indicator (preintervention vs postintervention) using hierarchical logistic regression that included random intercepts for clinicians nested in sites (ie, 1 observation per clinician per time point). A post hoc sensitivity analysis with a burnout cutoff of at least 4 rather than at least 3 was also conducted to assess changes in severe burnout. Paired t tests on unadjusted 10-point scales were used for exploratory investigation of secondary outcomes and subgroup effects across clinician demographic traits, including practice model, degree, specialty, years in practice, and sex. Consistent with our directional hypothesis, statistical significance was set at P < .05 (1-sided). There were no corrections for multiple comparisons or tests of collinearity as these were purely exploratory in nature. Analyses were conducted on complete datasets; missing data were not imputed. Site 5 (which included 63 participants) did not include survey burnout questions and so was excluded from the primary outcome analysis (eTable 2 in Supplement 1 shows a comparison of clinician demographics with the other sites). Site 4 (which included 19 participants) did not include patient access questions, and at 1 site, the number of hours outside of work was converted from free text to the ordinal scale to harmonize the data. All analyses were performed using Stata/MP, version 18.5 (StataCorp LLC).
Table 2 footnotes: OR indicates odds ratio. Hierarchical mixed-effects logistic regression models included random intercepts for clinicians nested in sites; multivariable models were adjusted for degree, practice model, specialty, years in practice, sex, and site.
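The analyses were run in Stata; as a rough orientation only, the sketch below outlines an analogous workflow in Python (statsmodels and scipy). The data frame, column names, and values are invented for illustration, and the variational Bayes mixed logistic model shown is one way, not necessarily the authors' way, to fit random intercepts for clinicians nested in sites.

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Long format: one row per clinician per time point (period 0 = pre, 1 = post); values invented.
df = pd.DataFrame({
    "burnout":   [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0],      # dichotomized at >=3
    "period":    [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "clinician": ["a", "a", "b", "b", "c", "c", "d", "d", "e", "e", "f", "f"],
    "site":      ["s1", "s1", "s1", "s1", "s1", "s1", "s2", "s2", "s2", "s2", "s2", "s2"],
})

# Mixed-effects logistic regression with random intercepts for site and for clinician;
# clinician IDs are unique across sites, which encodes the nesting.
model = BinomialBayesMixedGLM.from_formula(
    "burnout ~ period",
    vc_formulas={"site": "0 + C(site)", "clinician": "0 + C(clinician)"},
    data=df,
)
fit = model.fit_vb()
print("odds ratio for the postintervention period:", np.exp(fit.fe_mean[1]))  # index 1 = period

# Secondary outcomes: paired, one-sided t test on 10-point scores (SciPy 1.6+; values invented).
pre_scores = np.array([6.2, 7.5, 5.0, 8.1])
post_scores = np.array([4.9, 6.8, 4.1, 7.0])
print(stats.ttest_rel(post_scores, pre_scores, alternative="less"))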
Of 451 participants, 272 completed both surveys (60.3% completion rate), and after excluding 9 emergency medicine participants without specialized ambulatory clinics or direct patient care, 263 clinicians were included in the study (mean [SD] years in practice, 15.1 [9.3]; 141 female [53.6%], 120 male [45.6%], and 2 unreported sex [0.8%]) (Figure; Table 1). These individuals included 131 primary care professionals representing general internal medicine, family practice, internal medicine/pediatrics, and pediatrics (49.7%), 46 adult specialists (17.5%), 14 working in neurology and psychiatry (5.3%), and 72 working in surgical specialties (27.4%). The sample included predominantly attending physicians (232 [88.2%]) and academic faculty (168 [63.9%]). Excluding academic site 5, the sample of 194 who provided burnout data was similar to the larger sample, with most being attending physicians (179 [92.3%]), academic faculty (99 [51.0%]), and women (108 [55.7%]). The burnout sample had been in practice for fewer years (mean [SD], 13.0 [8.2] years) and had fewer adult specialists (28 [14.4%]). Eight of the same respondents (4.2%) were missing data on practice model, specialty, and sex (eTable 2 in Supplement 1).
Among all participants, 252 (95.9%) generated at least 5 notes using the ambient AI scribe. Prior to the intervention, participants performed clinical documentation using manual typing (218 [82.9%]), templates or dot phrases (224 [85.2%]), dictation (123 [46.8%]), or human scribes (43 [16.3%]). Only 4 participants (1.5%) had previous experience with another ambient AI scribe solution.
Among 186 participants included in the burnout models, the proportion with the primary outcome of burnout (using the standard cutoff of ≥3) decreased from 51.9% to 38.8% (difference, 13.1 percentage points; SE, 3.3 percentage points; 95% CI, 6.5-19.7 percentage points), corresponding to an adjusted odds ratio of burnout of 0.26 (95% CI, 0.13-0.54; P < .001) after adjustment for clinician demographic covariates and clinicians nested in sites (Table 2). A post hoc sensitivity analysis using a severe burnout cutoff of at least 4 showed an adjusted reduction in the proportion with severe burnout from 18.4% to 12.2% (difference, 6.2 percentage points; SE, 2.5 percentage points; 95% CI, 1.3-11.2 percentage points; P = .01).
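As a back-of-the-envelope check on how those proportions relate to an odds ratio (the 0.26 reported above is model based and covariate adjusted, so it is not expected to match this crude calculation):

p_pre, p_post = 0.519, 0.388               # burnout prevalence before and after
odds_pre = p_pre / (1 - p_pre)             # about 1.08
odds_post = p_post / (1 - p_post)          # about 0.63
print(f"crude odds ratio: {odds_post / odds_pre:.2f}")              # about 0.59
print(f"absolute difference: {(p_pre - p_post) * 100:.1f} points")  # 13.1 percentage points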
After 30 days of ambient AI scribe use, participants also experienced significant improvement in all but 1 of the secondary exploratory factors assessed by unadjusted paired t tests with outcomes normalized to continuous 10-point scales: note-related cognitive task load (mean [SE] difference, 2.64 [0.13] points; P < .001), ability to focus undivided attention on patients (mean [SE] difference, −2.05 [0.18] points; P < .001), ability to add patients to the clinic schedule if urgently needed (mean [SE] difference, −0.51 [0.24] points; P = .02), ability to create notes that patients can understand (mean [SE] difference, −0.44 [0.17] points; P = .005), and time spent documenting after hours (mean [SE] difference, 0.90 [0.19] hours; P < .001) (Table 3).
On further exploration using the 10-point scales with unadjusted paired t tests, the burnout score across all participants was significantly reduced before vs after intervention from 4.59 to 4.12 points (mean [SE] difference, 0.47 [0.12] points; P < .001). Several subgroups had statistically significant reductions in burnout, including medical doctors (mean [SE] difference, 0.52 [0.12]; P < .001), participants in academia (mean [SE] difference, 0.32 [0.14] points; P = .01), medical group–employed clinicians (mean [SE] difference, 0.65 [0.20] points; P = .001), participants in practice for 10 to 15 years (mean [SE] difference, 0.38 [0.22]; P = .048), men (mean [SE] difference, 0.48 [0.16] points; P = .002), and women (mean [SE] difference, 0.46 [0.17] points; P = .004). Among ambulatory specialties, reductions were seen for family medicine and pediatrics (mean [SE] difference, 0.98 [0.28] points; P < .001), obstetrics and gynecology (mean [SE] difference, 0.59 [0.34]; P = .048), and adult specialties (mean [SE] difference, 0.50 [0.28] points; P = .04) (eTable 3 in Supplement 1).
DISCUSSION
This quality improvement study is, to our knowledge, the first large, multicenter preintervention and postintervention evaluation to assess the association of ambient AI scribes with clinician experience. After 30 days with the ambient AI scribe, participants had 74% lower odds of experiencing burnout. Controlling for organizational and demographic factors, the proportion of participants reporting burnout decreased from 51.9% to 38.8%. Compared with baseline, implementation of the ambient AI scribe was associated with increased attention on patients, clinician confidence that patients understood care plans from reading the notes, and agreement that additional patients could be added to the clinic schedule if urgently needed, all while reducing note-related cognitive task load and the time spent documenting after hours.
While the high prevalence of documentation burden and its associations with burnout are well known,10,24-27 there have been few intervention studies reported.28,29 The existing small, single-center, preintervention and postintervention evaluations of in-person or remote human scribes and ambient AI scribes reported that scribing reduced the documentation burden for physicians, improved note comprehension for patients, facilitated focused attention on patients, and improved professional well-being30-36 but did not reliably decrease the time spent documenting after hours or increase the ability to add more patients to the schedule.27,37
The decreases in burnout we observed are comparable to what has been reported in studies of human scribes and ambient AI scribes. Our study found 74% lower odds of burnout after 30 days with the first iteration of this ambient AI platform. Two smaller, single-center studies evaluating an ambient AI scribe at 5 weeks and 3 months found similar reductions in burnout using a different scale that was not directly comparable to the single item used here.31,33 A study of 37 physicians in primary care using the same single-item burnout metric found an 85% reduction in the odds of burnout using remote human scribes,30 but the period of 2019 to 2020 made the comparison difficult given that physician burnout was dynamic during the COVID-19 pandemic.38-40
Our study reported a 2.64-point reduction on a 10-point scale in note-related cognitive task load. Using different ambient AI scribes, others found similar statistically significant reductions in cognitive task load of 24.42 points on a 100-point scale.31 Our participants reported the equivalent of 10.8 minutes saved per workday after intervention. Prior studies of a different ambient AI tool found that after-hours work declined by 5.17 minutes per day after 3 months,32 with no significant reduction in after-hours work after 180 days based on EHR data.34 In comparison, a 3-month pilot study using remote human scribes reduced after-hours documentation by 1.1 minutes per scheduled patient encounter (P = .004),35 which is equivalent to 22 minutes per day for an average clinic day containing 20 encounters. Lack of comparable metrics makes comparisons of these studies difficult to interpret. There are no agreed-upon standards on which to compare note quality, yet the statistically significant increase in confidence that patients would understand their care plans by reading the ambient AI–generated note is consistent with various other smaller studies of scribes.31,33,36
Standard metrics and methods are needed to definitively assess and compare quality improvement, especially as AI technologies are introduced in health care.27,41,42 The American Medical Association Joy in Medicine Recognition Program recommends measuring and tracking standard EHR metrics by specialty and care setting, normalized to 8 hours of work per day, including total EHR time, time on encounter note documentation, time on inbox, and work outside of work.42 Using these metrics, researchers have compared the number of patient-scheduled hours resulting in a 40-hour workweek by specialty; ambulatory specialties (eg, infectious diseases, geriatrics, hematology, primary care) have shown the lowest proportion of the workday available for patient-scheduled hours, largely owing to the excessive time spent documenting and completing EHR tasks.11 Standard metrics allowed researchers to track and report on the escalation of professional time spent on EHR administrative tasks that now consume more than half of professionals' time, and documenting the clinical encounter itself is only a fraction of that time.4-6,8,12,43 Much of this increase was associated with health care reform, pandemic-initiated telehealth and health portal adoption, open access to notes, and policies requiring computerized physician order entry. These factors may explain why scribe-assisted encounter documentation is associated with only modest time savings, highlighting the need for future support of additional EHR tasks.
Despite these small changes in documentation time, the significant change in burnout suggests that these small improvements may have an outsized influence or that other aspects of the intervention may improve overall clinician experience. Clinicians in ambulatory care have the greatest documentation burden11 and stand to gain the most from documentation assistance. Unlike surgery or procedures, ambulatory care is primarily cognitive and requires focused attention on patients to facilitate complex medical decision-making, patient education, and establishing a trusting therapeutic relationship to promote adherence to recommended treatment plans.44-46 Our findings suggest that AI scribes are associated with a more satisfying, patient-centered experience that is central to professional satisfaction and protective against burnout.44-48 The time saved documenting after hours frees time for self-care,49 frees time with loved ones,50 and contributes to work-life satisfaction.2,51 Physician groups with low burnout rates are associated with higher quality care,52,53 retention of physicians committed to full-time work,54-57 avoidance of the average cost of turnover of $800 000 to $1.3 million per physician lost,54,58 and the excess health care costs attributed to disrupting continuity of care between physician and patients59 (eFigure in Supplement 1).
There are limitations to this study. The included health care organizations implemented the ambient AI scribe as a quality improvement initiative; as such, the evaluation was not designed for research purposes, and the dataset is one of convenience. The baseline demographic characteristics of the participating organizations were not available to evaluate whether the sample was representative of respective professional populations or whether self-selection in recruitment and attrition represented a biased perspective. There was no control group to adjust for temporal trends. As this dataset only included complete sets of preintervention and postintervention survey results, we could not characterize noncompleters or nonresponders. It is conceivable that recruitment may have been biased toward individuals in favor of new technologies and more likely to give a favorable review.60 The findings were subjective reports of the professional experience and not paired with quantitative data on clinical documentation efficiency from the EHR. These early adopters may have responded favorably to please their digital health leadership, as the survey was not anonymous. We were not able to control for unmeasured confounding. Finally, 1 academic medical center (69 respondents) did not administer the burnout question; while the burnout sample was grossly similar demographically to the larger sample, the comparative interpretation of the secondary outcomes must be considered exploratory. Overall, the analysis did control for other factors, including diversity in health systems (national sample of academic and community-based sites), professional degrees, specialties, time in practice, and sex. Despite the limitations, the results were favorable in magnitude and statistical significance, were consistent with previous smaller studies, and may support generalizability to other health system ambulatory clinics.
This multicenter quality improvement study of 263 ambulatory clinicians found that after 30 days using an ambient AI scribe, the proportion of clinicians with burnout dropped from 51.9% before to 38.8% after the intervention, with associated improvements in the cognitive task load, time spent documenting after hours, focused attention on patients, and urgent access to care. Artificial intelligence scribes may represent a scalable solution to reduce administrative burdens for clinicians and allow more time for meaningful work and professional well-being. Ambient AI solutions may be scalable at a lower cost than human scribes. DMJ
1. Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023;29(8):1930-1940. doi:10.1038/s41591-023-02448-8
2. Olson K. Cultivate connection at home, reduce burnout. JAMA Netw Open. 2025;8(4):e253225. doi:10.1001/jamanetworkopen.2025.3225
3. GlobalData Plc. The Complexities of Physician Supply and Demand: Projections From 2021 to 2036. Association of American Medical Colleges; 2024.
4. Arndt BG, Beasley JW, Watkinson MD, et al. Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med. 2017;15(5):419-426. doi:10.1370/afm.2121
5. Arndt BG, Micek MA, Rule A, Shafer CM, Baltus JJ, Sinsky CA. More tethered to the EHR: EHR workload trends among academic primary care physicians, 2019-2023. Ann Fam Med. 2024;22(1):12-18. doi:10.1370/afm.3047
6. Sinsky C, Colligan L, Li L, et al. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Ann Intern Med. 2016;165(11):753-760. doi:10.7326/M16-0961
7. Sinsky C, Tutty M, Colligan L. Allocation of physician time in ambulatory practice. Ann Intern Med. 2017;166(9):683-684. doi:10.7326/L17-0073
8. Holmgren AJ, Thombley R, Sinsky CA, Adler-Milstein J. Changes in physician electronic health record use with the expansion of telemedicine. JAMA Intern Med. 2023;183(12):1357-1365. doi:10.1001/jamainternmed.2023.5738
9. Holmgren AJ, Rotenstein L, Downing NL, Bates DW, Schulman K. Association between state-level malpractice environment and clinician electronic health record (EHR) time. J Am Med Inform Assoc. 2022;29(6):1069-1077. doi:10.1093/jamia/ocac034
10. Gardner RL, Cooper E, Haskell J, et al. Physician stress and burnout: the impact of health information technology. J Am Med Inform Assoc. 2019;26(2):106-114. doi:10.1093/jamia/ocy145
11. Sinsky CA, Rotenstein L, Holmgren AJ, Apathy NC. The number of patient scheduled hours resulting in a 40-hour work week by physician specialty and setting: a cross-sectional study using electronic health record event log data. J Am Med Inform Assoc. 2025;32(1):235-240. doi:10.1093/jamia/ocae266
12. Melnick ER, Fong A, Nath B, et al. Analysis of electronic health record use and clinical productivity and their association with physician turnover. JAMA Netw Open. 2021;4(10):e2128790. doi:10.1001/jamanetworkopen.2021.28790
13. Sinsky CA, Dyrbye LN, West CP, Satele D, Tutty M, Shanafelt TD. Professional satisfaction and the career plans of US physicians. Mayo Clin Proc. 2017;92(11):1625-1635. doi:10.1016/j.mayocp.2017.08.017
14. Doan-Wiggins L, Zun L, Cooper MA, Meyers DL, Chen EH. Practice satisfaction, occupational stress, and attrition of emergency physicians. Wellness Task Force, Illinois College of Emergency Physicians. Acad Emerg Med. 1995;2(6):556-563. doi:10.1111/j.1553-2712.1995.tb03261.x
15. Orienting AI toward health workforce well-being: examining risks and opportunities. National Academy of Medicine. December 2024. Accessed January 31, 2025. https://nam.edu/event/orienting-ai-toward-health-workforce-well-being/
16. Gandhi TK, Classen D, Sinsky CA, et al. How can artificial intelligence decrease cognitive and work burden for front line practitioners? JAMIA Open. 2023;6(3):ooad079. doi:10.1093/jamiaopen/ooad079
17. Rohland B, Kruse TN, Rohrer J. Validation of a single-item measure of burnout against the Maslach Burnout Inventory among physicians. Stress Health. 2004;20(2):724-728. doi:10.1002/smi.1002
18. Dolan ED, Mohr D, Lempa M, et al. Using a single item to measure burnout in primary care staff: a psychometric evaluation. J Gen Intern Med. 2015;30(5):582-587. doi:10.1007/s11606-014-3112-6
19. Olson K, Sinsky C, Rinne ST, et al. Cross-sectional survey of workplace stressors associated with physician burnout measured by the Mini-Z and the Maslach Burnout Inventory. Stress Health. 2019;35(2):157-175. doi:10.1002/smi.2849
20. Hart SG. NASA-Task Load Index (NASA-TLX); 20 years later. National Aeronautics and Space Administration. October 1, 2006. Accessed May 27, 2025. https://humansystems.arc.nasa.gov/groups/TLX/downloads/HFES_2006_Paper.pdf
21. Harry E, Sinsky C, Dyrbye LN, et al. Physician task load and the risk of burnout among US physicians in a national survey. Jt Comm J Qual Patient Saf. 2021;47(2):76-85. doi:10.1016/j.jcjq.2020.09.011
22. Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Adv Psych. 1988;52:139-183.



23. The impact of AI scribes on burnout and well-being. AsPredicted. December 3, 2024. Accessed September 5, 2025. https://aspredicted.org/xz55-7ssj.pdf
24. Olson K, Rinne S, Linzer M, et al. Cross-sectional study of physician burnout and organizational stressors in a large academic health system. Paper presented at: Society of General Internal Medicine; April 19-22, 2017; Washington, DC.
25. Harry E, Sinsky C, Dyrbye LN, et al. Physician cognitive load and the risk of burnout among US Physicians. 2019 Society of Hospital Medicine’s Annual Meeting; March 24-27, 2019; National Harbor, MD.
26. Shanafelt TD, Dyrbye LN, Sinsky C, et al. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clin Proc. 2016;91(7):836-848. doi:10.1016/j.mayocp.2016.05.007
27. Duggan MJ, Gervase J, Schoenbaum A, et al. Clinician experiences with ambient scribe technology to assist with documentation burden and efficiency. JAMA Netw Open. 2025;8(2):e2460637. doi:10.1001/jamanetworkopen.2024.60637
28. Li C, Parpia C, Sriharan A, Keefe DT. Electronic medical record-related burnout in healthcare providers: a scoping review of outcomes and interventions. BMJ Open. 2022;12(8):e060865. doi:10.1136/bmjopen-2022-060865
29. DeChant PF, Acs A, Rhee KB, et al. Effect of organization-directed workplace interventions on physician burnout: a systematic review. Mayo Clin Proc Innov Qual Outcomes. 2019;3(4):384-408. doi:10.1016/j.mayocpiqo.2019.07.006
30. Micek MA, Arndt B, Baltus JJ, et al. The effect of remote scribes on primary care physicians' wellness, EHR satisfaction, and EHR use. Healthc (Amst). 2022;10(4):100663. doi:10.1016/j.hjdsi.2022.100663
31. Shah SJ, Devon-Sand A, Ma SP, et al. Ambient artificial intelligence scribes: physician burnout and perspectives on usability and documentation burden. J Am Med Inform Assoc. 2025;32(2):375-380. doi:10.1093/jamia/ocae295
32. Ma SP, Liang AS, Shah SJ, et al. Ambient artificial intelligence scribes: utilization and impact on documentation time. J Am Med Inform Assoc. 2025;32(2):381-385. doi:10.1093/jamia/ocae304
33. Balloch J, Sridharan S, Oldham G, et al. Use of an ambient artificial intelligence tool to improve quality of clinical documentation. Future Healthc J. 2024;11(3):100157. doi:10.1016/j.fhj.2024.100157
34. Lui T-L, Hetherington TC, Dharod A, et al. Does AI-powered clinical documentation enhance clinical efficiency? a longitudinal study. NEJM AI. 2024;1(12):2400659. doi:10.1056/AIoa2400659
35. Rotenstein L, Melnick ER, Iannaccone C, et al. Virtual scribes and physician time spent on electronic health records. JAMA Netw Open. 2024;7(5):e2413140. doi:10.1001/jamanetworkopen.2024.13140
36. Misra-Hebert AD, Amah L, Rabovsky A, et al. Medical scribes: how do their notes stack up? J Fam Pract. 2016;65(3):155-159.
37. Tierney A, Gayre G, Hoberman B, et al. Ambient artificial intelligence scribes to alleviate the burden of clinical documentation. NEJM Catal Innov Care Deliv. 2024;5(3):0404. doi:10.1056/CAT.23.0404
38. Shanafelt TD, West CP, Dyrbye LN, et al. Changes in burnout and satisfaction with work-life integration in physicians during the first 2 years of the COVID-19 pandemic. Mayo Clin Proc. 2022;97(12):2248-2258. doi:10.1016/j.mayocp.2022.09.002
39. Olson KD, Fogelman N, Maturo L, et al. COVID-19 traumatic disaster appraisal and stress symptoms among healthcare workers: insights from the Yale Stress Self-Assessment (YSSA). J Occup Environ Med. 2022;64(11):934-941. doi:10.1097/JOM.0000000000002673
40. Olson KD. The pandemic: health care's crucible for transformation. Mayo Clin Proc. 2022;97(3):439-441. doi:10.1016/j.mayocp.2022.01.022
41. Gondi S, Shah T. Fulfilling the promise of AI to reduce clinician burnout. Health Affairs. 2025. Accessed March 9, 2025. https://www.healthaffairs.org/content/forefront/fulfilling-promise-aireduce-clinician-burnout
42. Joy in Medicine Health System Recognition Program. American Medical Association. 2025. Accessed January 20, 2025. https://www.ama-assn.org/system/files/joy-in-medicine-guidelines.pdf
43. Rotenstein LS, Apathy N, Edgman-Levitan S, Landon B. Comparison of work patterns between physicians and advanced practice practitioners in primary care and specialty practice settings. JAMA Netw Open. 2023;6(6):e2318061. doi:10.1001/jamanetworkopen.2023.18061
44. Olson K. Why Physician’s Professional Satisfaction Matters to Quality Care. Master’s thesis. Weill Cornell Medicine Graduate School of Medical Sciences; 2012.
45. Olson K. Reading list, annotated bibliography. Paper presented at: Joy in Medicine Research Summit; September 13, 2016; Chicago, IL.
46. Olson KD. Physician burnout-a leading indicator of health system performance? Mayo Clin Proc. 2017;92(11):1608-1611. doi:10.1016/j.mayocp.2017.09.008
47. Olson K, Wrzesniewski A. Is medicine a calling, career, or a job? why meaning in work matters. Paper presented at: American Conference on Physician Health; October 13-15, 2023; Desert Springs, CA.
48. Olson K. Physician’s professional fulfillment, values, and expectations of professional life. Paper presented at: American Conference on Physician Health; September 19-21, 2019; Charlotte, NC.
49. Trockel M, Sinsky C, West CP, et al. Self-valuation challenges in the culture and practice of medicine and physician well-being. Mayo Clin Proc. 2021;96(8):2123-2132. doi:10.1016/j.mayocp.2020.12.032
50. Trockel MT, Dyrbye LN, West CP, et al. Impact of work on personal relationships and physician well-being. Mayo Clin Proc. 2024;99(10):1567-1576. doi:10.1016/j.mayocp.2024.03.010
51. Shanafelt T, West C, Sinsky C, et al. Changes in burnout and satisfaction with work-life integration in physicians and the general working population between 2011-2020. Mayo Clin Proc. 2022;97(3):491-506. doi:10.1016/j.mayocp.2021.11.021
52. Tawfik DS, Scheid A, Profit J, et al. Evidence relating health care provider burnout and quality of care: a systematic review and meta-analysis. Ann Intern Med. 2019;171(8):555-567. doi:10.7326/M19-1152
53. Tawfik DS, Profit J, Morgenthaler TI, et al. Physician burnout, well-being, and work unit safety grades in relationship to reported medical errors. Mayo Clin Proc. 2018;93(11):1571-1580. doi:10.1016/j.mayocp.2018.05.014
54. Shanafelt TD, Dyrbye LN, West CP, et al. Career plans of US physicians after the first 2 years of the COVID-19 pandemic. Mayo Clin Proc. 2023;98(11):1629-1640. doi:10.1016/j.mayocp.2023.07.006
55. Sinsky CA, Dyrbye LN, West CP, Satele D, Tutty M, Shanafelt TD. Professional satisfaction and the career plans of US physicians. Mayo Clin Proc. 2017;92(11):1625-1635. doi:10.1016/j.mayocp.2017.08.017
56. Ligibel JA, Goularte N, Berliner JI, et al. Well-being parameters and intention to leave current institution among academic physicians. JAMA Netw Open. 2023;6(12):e2347894. doi:10.1001/jamanetworkopen.2023.47894
57. Rotenstein LS, Brown R, Sinsky C, Linzer M. The association of work overload with burnout and intent to leave the job across the healthcare workforce during COVID-19. J Gen Intern Med. 2023;38(8):1920-1927. doi:10.1007/s11606-023-08153-z
58. Shanafelt T, Goh J, Sinsky C. The business case for investing in physician well-being. JAMA Intern Med. 2017;177(12):1826-1832. doi:10.1001/jamainternmed.2017.4340
59. Sinsky CA, Shanafelt TD, Dyrbye LN, Sabety AH, Carlasare LE, West CP. Health care expenditures attributable to primary care physician overall and burnout-related turnover: a cross-sectional analysis. Mayo Clin Proc. 2022;97(4):693-702. doi:10.1016/j.mayocp.2021.09.013
60. Grindlinger B. Doomers, bloomers, and zoomers: Clinton & Hoffman weigh in on AI’s future. The New York Academy of Sciences. January 31, 2025. Accessed May 28, 2025. https://www.nyas.org/ideasinsights/blog/doomers-bloomers-and-zoomers-clinton-hoffman-weigh-in-on-ais-future/
NEW DCMS HEADQUARTERS IN THE HEART OF UPTOWN











by John Xuefeng Jiang, PhD, Joseph S. Ross, MD, MHS, and Ge Bai, PhD, CPA
Ransomware attacks, which restrict data access and encrypt information unless ransom payments are made, increasingly threaten health care operations.1 In February 2024, a ransomware attack on Change Healthcare compromised the protected health information (PHI) of 100 million individuals, disrupted care delivery nationwide, and incurred $2.4 billion in response costs.2
Although hacking or information technology (IT) incidents became the leading cause of health care data breaches in 2017, the proportion involving ransomware remains unclear.3 Prior research identified 376 ransomware attacks on health care delivery organizations from 2016 to 2021,4 but health plans and clearinghouses have also been victims. This study analyzes ransomware attacks across all Health Insurance Portability and Accountability Act (HIPAA)–covered entities from 2010 to 2024 and examines their contribution to PHI data breaches.
This cross-sectional study used nonidentifiable public data and did not constitute human participant research; therefore, institutional review board approval was not required in accordance with the Common Rule. This study followed the STROBE reporting guideline. We analyzed breaches affecting 500 or more patient records reported to the US Department of Health and Human Services (HHS) Office for Civil Rights (OCR) from October 2009 through October 2024. Data were obtained from the publicly available Breach Portal (eAppendix in Supplement 1). After removing duplicates and incomplete entries, 6468 unique breaches remained. Breaches were classified by reporting year (not occurrence year), acknowledging HIPAA's 60-day reporting window. The OCR categorized breaches into hacking or IT incidents, theft, unauthorized access/disclosure, and improper disposal or loss, as well as breaches of unidentified or unknown cause.5
Breach details came from OCR records for fully investigated cases and web searches for ongoing cases (primarily 2023-2024).6 According to OCR's classification, cyber intrusions are categorized as a hacking or IT incident. We identified ransomware attacks within this category by analyzing event descriptions for specific indicators, including ransom demands, cryptocurrency payments, system encryption, or known ransomware groups (eg, LockBit, BlackCat). Details are provided in the eAppendix in Supplement 1. The frequency and the number of affected records across 5 breach categories—ransomware hacking or IT incidents, nonransomware hacking or IT incidents, theft, unauthorized access or disclosure, and improper disposal or other breaches—were analyzed. The analyses were conducted using SAS, version 9.4.
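A minimal sketch of that screening step, assuming a simple keyword rule over the free-text breach descriptions (the indicator pattern and function name are illustrative, not the authors' code):

import re

# Indicators drawn from the description above: ransom demands, cryptocurrency payment,
# system encryption, or named ransomware groups such as LockBit and BlackCat.
RANSOMWARE_SIGNALS = re.compile(
    r"\bransom|\bextort|bitcoin|cryptocurrency|encrypt(ed|ion)\b|lockbit|blackcat",
    re.IGNORECASE,
)

def is_ransomware(description: str) -> bool:
    """Flag a hacking/IT-incident description that matches any ransomware indicator."""
    return bool(RANSOMWARE_SIGNALS.search(description))

print(is_ransomware("Threat actor encrypted servers and demanded payment in Bitcoin"))  # True
print(is_ransomware("Phishing email led to unauthorized access to a mailbox"))          # False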
The total number of PHI data breaches increased from 216 in 2010 to 566 in 2024, with hacking or IT incidents increasing from 4% (8 of 216) to 81% (457 of 566) of all breaches (P < .001) (Table). Ransomware attacks increased from 0 cases in 2010 to 31% (222 of 715) of breaches in 2021, before decreasing to 11% (61 of 566) in 2024. Concurrently, breaches due to theft, unauthorized access, and improper disposal or loss decreased (Figure, A).
The number of patient records affected by PHI data breaches increased from 6 million in 2010 to 170 million in 2024, with hacking or IT incidents increasing from 2% (92 358 of 6 million) to 91% (155 million of 170 million). Of the 732 million records affected from 2010 to 2024, hacking or IT incidents and ransomware accounted for 88% (643 million) and 39% (285 million), respectively. Since 2020, ransomware has affected more than half of all breached patient records annually, reaching 69% in 2024 (Figure, B).
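The year-by-year shares behind these figures can be reproduced with a simple aggregation; the sketch below (assumed column names and made-up rows, not the study dataset) groups breaches by reporting year and category and computes each category's share of breach counts and of affected records:

import pandas as pd

breaches = pd.DataFrame({
    "year":     [2010, 2010, 2010, 2024, 2024, 2024],
    "category": ["theft", "hacking_it", "unauthorized", "ransomware", "hacking_it", "theft"],
    "records":  [1200, 900, 700, 95_000, 400_000, 600],
})

counts = breaches.groupby(["year", "category"]).size().unstack(fill_value=0)
count_share = counts.div(counts.sum(axis=1), axis=0)      # share of breaches (Figure, A analogue)

records = breaches.groupby(["year", "category"])["records"].sum().unstack(fill_value=0)
record_share = records.div(records.sum(axis=1), axis=0)   # share of affected records (Figure, B analogue)

print(count_share.round(2))
print(record_share.round(2))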
Health care PHI data breaches surged from 2010 to 2024, driven by hacking or IT incidents, particularly ransomware attacks. Consistent with HHS breach assessments and prior literature, we measured breach impact by the number of patient records affected. However, this study is limited in that this metric may not fully reflect ransomware's operational disruptions. Additionally, our findings likely underestimate the frequency of data breaches due to underreporting, reluctance to disclose ransom payments, and the OCR's exclusion of breaches affecting fewer than 500 records.
Hospitals, clinics, health plans, and other HIPAA-covered entities are particularly vulnerable to ransomware attacks due to limited cybersecurity resources and the urgency of system recovery for patient care. Mitigation strategies should include adding mandatory ransomware fields to OCR breach reporting to improve surveillance, revising severity classifications to account for operational impact, and monitoring cryptocurrency transactions to disrupt ransom payments. DMJ
1. Kanter GP, Rekowski JR, Kannarkat JT. Lessons from the Change Healthcare ransomware attack. JAMA Health Forum. 2024;5(9):e242764.
2. Hyperproof. Understanding the Change Healthcare breach and its impact on security compliance. Updated on November 6, 2024. Accessed November 14, 2024. https://hyperproof.io/resource/understanding-the-change-healthcare-breach/
3. McCoy TH Jr, Perlis RH. Temporal trends and characteristics of reportable health data breaches, 2010-2017. JAMA. 2018;320(12):1282-1284.
4. Neprash HT, McGlave CC, Cross DA, et al. Trends in ransomware attacks on US hospitals, clinics, and other health care delivery organizations, 2016-2021. JAMA Health Forum. 2022;3(12):e224873. doi:10.1001/jamahealthforum.2022.4873
5. Jiang JX, Bai G. Evaluation of causes of protected health information breaches. JAMA Intern Med. 2019;179(2):265-267.
6. Jiang JX, Bai G. Types of information compromised in breaches of protected health information. Ann Intern Med. 2020;172(2):159-160.






Bush Corridor at Renner Road and Shiloh, located directly across from Methodist Richardson Medical Center

DALLAS COUNTY MEDICAL SOCIETY
Join us in celebrating DCMS members who have reached a milestone in their membership with the Society! All of our members are special to us, but each year we take time to highlight those who reach the five-year membership anniversary benchmark. Thank you for your commitment to the Society and for all you do to keep our communities safe and healthy!
S. Thomas Allen, MD
Marvin Gerard, MD
Henry Atkinson Hawkins, MD
Frank C. Payne, Jr., MD
Keith Ivan Robins, MD
Joe Buck Caldwell, MD
Dode Mae Hanke, MD
Charles B. Key, MD
Robert A. Lauderdale, MD
Milton Leventhal, MD
Richard Lee Mabry, MD
Murray Pizette, MD
Lee Roy Radford, MD
Jethro B. Rochelle, III, MD
Don E. Blanton, MD
James Henry Herndon, Jr., MD
Edwin Patrick Jenevein, Jr., MD
James Albright McCullough, MD
George C. Mekker, MD
Jerome Melvin Naftalis, MD
Alberto F. Torres, MD
Thomas Pervy Webb, MD
Daveen Barksdale Whittlesey, MD
Phillip E. Williams, Jr., MD
Frank Edward Crumley, MD
Eugene H. Flewellen, III, MD
Michael S. Harris, MD
Irwin Joseph Kerber, MD
Peter Louis, MD
David Lee McCaffree, MD
Gary C. Morchower, MD
Gordon H. Newman, MD
Jack Charles Shelton, MD, JD
Sandra C. Steinbach, MD
Abraham F. Syrquin, MD
William J. Walton, MD
Sheldon Alan Weinstein, MD
Richard Martin Adams, Jr., MD
Silvestre F. Aguilar, MD
Larrie W. Arnold, MD
Arturo E. Aviles, MD
Mary Gill Bankhead, MD
Robert Ellis Bonham, MD
Vincent Henry Bradley, MD
Opta Lea Braun, MD
Charles Gordon Caldwell, MD
Peter R. Carter, MD
William Bethel Cobb, MD
Veena B. Daulat, MD
Jaime Abraham Davidson, MD
Timothy Jefferson Davis, MD
George L. Eastman, III, MD
Clare Daniel Edman, MD
James B. Evans, MD
W. Phil Evans, III, MD
Diane Fagelman-Birk, MD
Gabriel Fried, MD
Elliot Joel Ginchansky, MD
L. Michael Goldstein, MD
Justo Jesus Gonzalez, MD
Edward L. Goodman, MD
Larry E. Gray, MD
Gary N. Gross, MD
Madeline Weinstein Harford, MD
Mary F. Harris, MD
David Anderson Haymes, MD
ShanShan Huang Hsu, MD
Gregory Lawrence Jackson, MD
Richard E. Jones, III, MD
Lyle Alan Kaliser, MD
Andres Katz, MD
James Lyle Knoll, III, MD
Robert Louis Koster, MD
W. Pennock Laird, MD
B. Ward Lane, MD
Melvyn Lerman, MD
James Donald Madden, MD
Donald Lee McKay, Jr., MD
Steven L. Meyer, MD
Howard B. Miller, MD
Gary Wayne Miller, MD
Robert Anthony Moore, MD
John R. Muir, MD
Dennis Elbert Newton, III, MD
Jung J. Noh, MD
Thomas F. Parker, III, MD
Robert I. Parks, Jr., MD
A. Winston Puig, MD
Steven E. Rinner, MD
Robert John Samuelson, MD
Louis Alexander Shirley, MD
Charles Thomas Simonton, MD
Neal Lawrence Sklaver, MD
Ann Marie Trowbridge, MD
Pedro J. Vergne-Marini, MD
Thomas Douglas Watson, III, MD
Robert Gibson Winans, MD
Thomas Adams, Jr., MD
Jeffrey Marc Adelglass, MD
Gregg Michael Anigian, MD
Alfred Ricardo Antonetti, MD
Carolyn Dickson Ashworth, MD
D. Randall Askins, MD
Rudolph A. Barta, Jr., MD
Christine Ann Becker, MD
H. Jay Boulas, MD
Karen Dorothy Bradshaw, MD
David Wayne Bragg, MD
Robert Edwin Brockie, MD
Kendall Ole Brown, MD
H. Steve Byrd, MD
Paul Raymond Cary, MD
Connie Casad, MD
Daniel Shu-Eng Chen, MD
Stanley Bruce Cohen, MD
Sami E. Constantine, MD
Barry Cooper, MD
Dolores A. Corpuz, MD
Randall Wayne Crim, MD
Guy Lee Culpepper, MD
John Rodney Debus, MD
Donald Lee Drennon, MD
Robert Kent Dyo, MD
Bruce Alan Echols, MD
Teresa Marie Elliott, MD
George B. Erdman, MD
David Charles Fein, MD
Mark J. Fleschler, MD
King Irving Freeland, MD
Tandy Freeman, MD
Robert Nolan Froehner, MD
Deborah A. Fuller, MD
Damian David Garcia, MD
James Stanley Garrison, MD
Jan Jensen Goss, MD
Linda Sheldon Halbrook, MD
Martha L. Hardee, MD
John Frank Harper, MD
Robert Alan Harris, Jr., MD
Thomas Parker Hawk, MD
Peter Jay Heidbrink, MD
John Anthony Herring, MD
Rufina L. Hilario, MD
Jacqueline C. Hubbard, MD
Fawzi Afif Iliya, MD
Christopher Lucien Jagmin, MD
Mark Anthony Johnston, MD
Jeffrey Kam, MD
Michael Mordecai Katz, MD
Alan Michael Klein, MD
Suresh Kukreja, MD
Elizabeth Anne Kummer, MD
Thomas L. Kurt, MD
Randall Brent Lane, MD
James Willis Langley, MD
Grover Milton Lawlis, MD
Mark J. Lerman, MD
Mark Leshin, MD
Kenneth Ira Licker, MD
Joseph Hugh Little, MD
Richard Alan Marks, MD
James P. McCulley, MD
Rebecca Rochelle McKown, MD
Robert Gary Mennel, MD
Joseph Leslie Milburn, Jr., MD
Narinder Kumar Monga, MD
Louis Robert Nardizzi, MD
David Nesser, MD
Pedro Nosnik, MD
Soon Chao S. Ong, MD
Ashok G. Patel, MD
Herschel Stuart Peake, Jr., MD
William M. Pederson, MD
Leslie Dean Porter, MD
Pervaiz Rahman, MD
Roy Lynn Rea, MD
Michele Diane Reynolds, MD
Nathan Scott Robins, MD
Bradford Lane Romans, MD
Arthur Isaac Sagalowsky, MD
Laird Faber Schaller, MD
Laura Lea Sears, MD
Philip Roger Shalen, MD
David Warren Sheffey, MD
Allan Neil Shulkin, MD
Craig Douglas Smith, MD
Thomas M. Sonn, MD
John Beryl Tebbetts, MD
Paul Arthur Tobolowsky, MD
Gretchen Faye Toler, MD
Michael Trombello, MD
Ruben L. Velez, MD
Charles A. Wallace, MD
Richard Lee Wallner, MD
Matthew V. Westmoreland, MD
George William Wharton, MD
Charles Wesley Whitten, MD
Sharon Lee Wiener, MD
John J. Willis, MD
T. Stacy Wood, MD
Judy Lee Wood, MD
Pablo L. Xiques, MD
Azam Anwar, MD
Ezell Stallworth Autrey, MD
David Azouz, MD
William David Baldwin, MD
Chaim Banjo, MD
Benjamin James Bennett, MD
Ronald M. Blair, Jr., MD
David Wolf Boone, MD
Gregory Sterling Carter, MD
Michael James Champine, MD
Samuel J. Chantilis, MD
Rosemary Garza Christy, MD
Wook Chung, MD
Lisa Harper Clark, MD
Robert A. Cohen, MD
Richard Hartman Daniel, MD
Gregory Allen Echt, MD
James Scott Ellis, DO
Maria Lourdes Feliciano, MD
Bonnie Lee Floyd, MD
Gerald Joseph Fogarty, MD
Leonidas George Fox, MD
Stephen F. Garrison, MD
Jay S. Gartner, MD
Joseph D. Gaspari, MD
Harrell Anthony Grand, MD
William Franklin Griffith, MD
Robert F. Haynsworth, Jr., MD
Robert F. Hebeler, Jr., MD
Darrell William Hermann, MD
Eric Stephen Hollabaugh, MD
Karen Rae Houpt, MD
James Judson Hudgins, MD
Mitchell Lee Huebner, MD
Judson Mark Hunt, MD
William James Hwang, MD
Stephen Bryce Johnston, MD
Kevin Max Kadesky, MD
Judith P. Kane, MD
Eric Richard Kaplan, MD
Albert Gerard Karam, MD
Cindy Kay Kelly, MD
Viswanadham Lammata, MD
Mary Jane Latimer, MD
Sy Q. Le, MD
William K. Leslie, MD
Frederick Carroll Lester, MD
Cynthia Anna Lopez, MD
Sharon Lynn Macko, MD
Kevin Paul Magee, MD
Bruce Edward Mickey, MD
Ronna Gail Miller, MD
Jules Charles Monier, MD
Lauren Beth Monti, MD
John Robert Morgan, MD
Frances Connally Morriss, MD
Michael C. Morriss, MD
Donald Gene Nicholas, Jr., MD
David Bruce Owen, MD
Gregory John Patton, MD
Jeffrey Harris Phillips, MD
Victor Josep Ramon, MD
Ian Mark Ratner, MD
Gregory Alan Redish, MD
Harry Eugene Sarles, Jr., MD
Lawrence Rudolph Schiller, MD
Barbara A. Shinn, MD
Alanna Marcia Silverstein, MD
Robert Barkley Simonson, DO
Warren T. Snodgrass, MD
John Brian Spieker, MD
Jeffrey Glenn Stewart, MD
Susan L. Swanson, MD
Richard C. Tannen, MD
Richard L. Voet, MD
James Michael Wagner, MD
Randolph Trent Walker, MD
David Allan Waller, MD
Lori Meril Watumull, MD
Larry Dale Whitcomb, MD
Timothy Nick Zoys, MD
William Abramovits, MD
Neel Burnett Ackerman, MD
Michael Gregory Allison, MD
Jeffry John Andresen, MD
Bagyalakshmi Arumugham, MD
Deborah Noble Baird, MD
Amy Brown Balis, MD
James Douglas Bates, MD, DDS, FACS
Steven D. Beathard, MD
Deaina Monique Berry, MD
Elisabeth S. Brockie, DO
Sandra Zoe Brothers, MD
Michael Lindsey Carroll, MD
John E. Christian, Jr., MD
Sharon Joy Davis, DO
Steven Gabe Davis, MD
Thomas Luke Davis, MD
Ronald Lee Dotson, MD
Stephanie Hurn Elmore, MD
Edwin Escobar-Vazquez, MD
Neal L. Fisher, MD
W. Max Frankum, MD
Jack David Gardner, MD
Gary Demetrius Garrett, MD
Rufus Green, Jr., MD
Robert D. Gross, MD
George Mahmud Hariz, MD
John Christian Hinkle, MD
Robert Nickey Hogan, MD
A. Mason Holden, MD
Connie C W Hsia, MD
Christine Lynn Johnson, MD
L. Darrel Jordan, MD
Rainer Anil Khetan, MD
Roger Sunil Khetan, MD
Patricia Ann LaRue, MD
Andy Matthew Lee, MD
Benjamin David Levine, MD
Daniel Stephen Long, MD
Willis C. Maddrey, MD
Lauren Anne McDonald, MD
David Wayne Mercier, MD
Joseph J. Morris, Jr., MD
Thanh Van Nguyen, MD
Dante M. Paras, MD
Perry Glenn Pate, MD
Todd Alan Pollock, MD
Shelley Bruce Ramos, MD
Lawrence H. Schott, MD
Michael Gregory Spann, MD
William F. Stiles, DO
Scott Andrew Stone, MD
Steven Anthony Swaldi, MD
William W. Turner, Jr., MD
Jim Walton, DO, MBA
Alan S. Wasserman, MD
Maureen Wooten Watts, MD
Laurence Avram Weider, MD
Thomas Alonzo West, MD
John G. Westkaemper, MD
Warren D. Whitlow, MD
Kenneth Kei Adams, MD
Jennifer Elaine Aldrich, MD
Naira Spartak Babaian, MD
Leyka M. Barbosa, MD
Scott A. Biedermann, MD
James Chanez, MD
Sreenivas Ramdas Chittoor, MD
Scott Edward Conard, MD
Blair Conner, MD
Christopher C. Crow, MD, MBA
Jose Francisco DeLeon, MD
John Wayne East, DO
Waleed Hamed El-Feky, MD
Rogers Pressley Fair, MD
Alan C. Farrow-Gillespie, MD
Mary Elizabeth Fleischli, MD
Peter Alan Frenkel, MD
Brian M. Gogel, MD
Sharon Gaye Gregorcyk, MD
Christopher A. Hebert, MD
William G. Herlihy, MD
Jeffrey Lee Horswell, MD
E. Vennecia Jackson, MD
Nirmal Samuel Jayaseelan, MD
Lubna S. Kamal, MD
Edward Paul Kaplan, MD
Martin Kassir, MD
Anita Kushwaha Khetan, MD
Erik C. Koon, MD
Jeffrey Michael Kopita, MD
Gregory Frank Kozielec, MD
Michael B. Kronenberger, MD
David Michael Lee, MD
Sarita Sharma Louys, MD
Andrew David McCollum, MD
Martin G. McElya, DO
Christie Chantal McNair, MD
Angela Peterman Mihalic, MD
David E. Morales, MD
Susan G. Moster, DO
Alan Douglas Murray, MD
Manju Nath, MD
Charles Edward Neagle, III, MD
Jane Ellen Nokleberg, MD
Scott N. Oishi, MD
James P. Pak, MD
Jennifer Gabrielle Patterson, MD
Mary Carol Plank, MD
Peter Raphael, MD
Thomas H. Renard, MD
Kim M. Rice, MD
Jason M. Riehs, MD
David William Ritter, MD
Tami R. Roberts, MD
John Paul Roberts, MD
Kimberlea A. Roe, MD
Kristi Lane Ryder, MD
Zakaria Siddiq, MD
Cedric W. Spak, MD
Rebecca E. Stachniak, MD
Edic Stephanian, MD
Kathleen Dooley Stokes, MD
Jay Courtlin Story, MD
Eduardo Velez, MD
Rhonda L. Walton, MD
Bradley E. Weprin, MD
Thomas Paul Winkler, MD
Joseph Robert Wyatt, MD
Alan J. Yedwab, MD
Ahmad Bedair Ahmad, MD
Jae-Koo An, MD
Sayeh Barzin, DO
Katherine K. Boyd, DO
Kenneth Joseph Brown, MD
Kevin T. Brown, MD
John L. Burns, MD
Bruce A. Byrne, MD
William Hampton Caudill, MD
Ravi Chandrasekhara, MD
Frank Leonard Chappetta, Jr., MD
Sandra K. Clapp, MD
Jay David Cook, MD
Samuel R. Crowley, MD
Benjamin C. Dagley, DO
Jill Frey Davenport, MD
David William Doerrfeld, MD
Lee C. Drinkard, MD
Larry Joe Dullye, II, DO
Matthew W. Eidem, MD
Tracy Haymann Elliott, MD
W. Jonathan Esber, MD
Irfan M. Farukhi, MD
Craig A. Ferrara, DO
Craig S. Fisher, DO
Geoffrey A. Funk, MD
Theresa Nguyen Garza, DO
Marcia S. Genta, MD
Faryal Abdul Ghaffar, MD
Julia R. Gillean, MD
Victor Gonzalez, MD
Martha A. Grimm, MD
Bradley Ray Grimsley, MD
Keith Alan Heier, MD
Allison Halley Henderson, MD
Edward Fred Heyne, MD
William J. Hoffman, MD
Houston E. Holmes, III, MD
Grace Y. Huang, MD
Mary Elizabeth Hurley, MD
Brian L. Joe, MD
Andy Kahn, MD
Fran E. Kaiser, MD
Pratik C. Kapadia, MD
James G. Ken, MD
Thomas P. Kenjarski, MD
Nathaniel A. Kho, MD
Jun H. Kong, MD
Richard M. Layman, MD
Temekka V. Leday, MD
Natalie C. Light, MD
Christina Perez Littrell, MD
David Jude Magee, MD
Karen L. McQuade, MD
Stephen L. Meller, MD
Sonya L. Merrill, MD
Adnan Nadir, MD
Austin I. Ogwu, MD
Cecilia Nnenna Okafor, DO
Lee Ann Pearse, MD
Neil N. Phung, MD
Michael Podolsky, DO
H. Jake Porter, II, MD
Cheryl Ann Potter, MD
John Edward Pyeatt, MD
Noble Bryan Rainwater, MD
Jason T. Reed, MD
Richard David Reitman, MD
Hector M. Reyes, MD
Syed Ali Rizwan, MD
Vivyenne M L Roche, MD
Lora Brigid Rodriguez, MD
Stephen B. Sellers, DO
Suzanne M. Slonim, MD
Ashley Unwoo So, MD
Jamie A. Sunny, MD
Keshava G N Suresh, MD
Moirae M. Taylor, MD
Fred C. Thomas, MD
Anna-Maria B. Toker, MD
Duc Tran, MD
Ibidunni O. Ukegbu, MD
Michael Alton Wait, MD
Jerry Lee Webb, MD
Donald Everett Wesson, MD
Annette Elizabeth Whitney, MD
Iddriss K. Yusufali, MD
Junaid Ahmad, MD
Basit Bob Ali, DO
Shannon L. Amonette, MD
Matthew E. Anderson, MD
Susan L. Bacsik, DO
Amir R. Baluch, MD
Mezgebe Berhe, MD
Muriel Keenze Boreham, MD
Katia Veronica Brown, MD
Philip Michael Brown, MD
Michelle R. Butler, MD
Remigio Gungon Capati, MD
Michelle N L Chesnut, MD
James Wonjin Choi, MD
Alex Tzu-Yueh Chuang, MD
Jack Bernard Cohen, DO
Lori E. Coors, MD
Bryan S. Crowder, DO
Jerry Daniel, MD
Shounak Das, MD
Sonak B. Daulat, MD
Owen Davenport, MD
Robert Vance Dell, MD
Joseph J. Emerson, MD
Joanne L. Essa, MD
Patricia W. Evans, MD
Kosunarty Fa, MD
Bernard Victor Fischbach, MD
Justin N. Fleishman, MD
Abbeselom Ghermay, MD
Shawn F. Green, MD
Nancy Brown Greilich, MD
Ricardo Guerra, Jr., MD
Joseph Manuel Guileyardo, MD
Daniel C. Gunn, MD
Gaurav Gupta, MD
Amit B. Guttigoli, MD
Christopher M. Haas, MD
Derek Anthony Haas, MD
Leslie Chin Havemann, MD
Amy L. Hayes, MD
Yong He, MD
David Cressler Heasley, MD
Alice L. Hsu, MD
Thomas Y. Hung, MD
Joseph H. Jackson, MD
Eric R. Jenkins, MD
Tya-mae Yvette Julien, MD
Shalini Katikaneni, MD
John C. Kedora, MD
Moses Joshua Keng, Jr., MD
James Belton Ketchersid, MD
Jyothsna Kodali, MD
Michael Jay Landgarten, MD
Paige Latham, MD
Judy Choy Lee, MD
Karen Lynn Lee, MD
Kim Elisabeth Lopez, MD
John Carl Lundell, MD
Robert Thomas Lyon, MD
Naim Mounif Maalouf, MD
Christopher James Madden, MD
Kazi Imran Majeed, MD
Brannon D. Marlowe, MD
Karen Barker McClard, MD
Scott Russell McGraw, MD
Shaun A. McMurtry, MD
Gregory Stewart Miller, MD
Venkateswara Vinod Mootha, MD
Adrian Scott Morales, MD
Royce H. Morgan, MD
Galon Cory Morgan, MD
James Byron Mullins, MD
Gregory R. Nettune, MD
Ngo Khoi Nguyen, MD
Hanh-Dieu Thi Nguyen, MD
Pamela Nurenberg, MD
Natalia Angela Palacio, MD
Betty Jimi Park, MD
Akash A. Patel, MD
Colin D. Pero, MD
Joy Susanne Peveto, MD
Anil Bosco Manuel Pinto, MD
Riva Louise Rahl, MD
Rosalyn N. Reades, MD
Sandeep Guduru Reddy, MD
Christy C. Riddle, MD
Wilfredo Rivera, MD
Stephanie A. Savory, MD
Michael Darren Shannon, DO
Stephen McCulloch Slaughter, MD
Cynthia Wakefield Speers, MD
Christopher Scott Spikes, MD
Clinton S. Steffey, MD
Renee D. Stock, DO
Brian E. Straus, MD
Edward Kirby Swift, MD
Madhavi Thomas, MD
Katherine Boyle Thornton, MD
Sarah Barber Troendle, MD
William Richard Vandiver, MD
Anitha D. Veerasamy, MD
Giac T. Vu, MD
Gulam M. Waheed, DO
Steven Craig Walker, MD
Serena Xiaohong Wang, MD
Melissa A. Waters, MD
Christopher Westerheide, MD
Craig L. Wheeler, MD
Lisa Jean White, MD
Lindsey Keys White, MD
Robert L. Wimberly, MD
Amanda Jo Wolthoff, MD
Marshall Lee Wong, MD
Neil Zucker, DO
Raymon K. Aggarwal, MD
Anisa Ahmed, MD
Sana Mahmood Ahmed, MD
Ahmad Anshasi, MD
Christopher Joseph Bettacchi, MD
Dhiren Meghji Bhalodia, MD
J. Andrew Bird, MD
Sheena R. Black, MD
Jared B. Brown, MD
Andrew J. Chang, MD
Asha Joseph Chemmalakuzhy, MD
R Adrian Clarke, MD
Lori Bevis Clifford, MD
Martin F. Conroy, DO
Kathryn Calhoun Cornelius, MD
Nilesh B. Dave, MD
David C. De Fazio, MD
Lauren Baker Dickson, MD
Shena J. Dillon, MD
Emma L. Dishner, MD
Chelsea Talmadge Dunn, MD
Alexander Laurance Eastman, MD, MPH
Callie Gittemeier Ebeling, MD
Ashley Lindley Egan, MD
Jennifer L. Elmore, DO
Matthew P. Fiesta, MD
Raymond L. Fowler, MD
David I. Fudman, MD
Alexander A. Gaidarski, III, MD
Darshan Gautam Gandhi, MD
William Bradley Garrett, MD
Joy C. Gathe-Ghermay, MD
Robert W. Gladney, MD
George A. Gold, MD
Rebecca Ann Gray, MD
Lara M. Gross, MD
Davinder S. Grover, MD
Matthew Charles Gummerson, MD
Felona Gunawan, MD
Yoav Hahn, MD
Daniel A. Hale, MD
Katherine Kae Hege, MD
Michael Dale Henderson, MD
Halim Mahfouz Aziz Hennes, MD
Shiril Hombal, MD
Pamela D. Hoof, MD
Richard T. Hopley, MD
Michael Garland Huss, MD
Brooke A. Hyatt, MD
Catherine Minor Ikemba, MD
Christopher G. Irwin, MD
David Everett Jackson, MD
Zaiba I. Jetpuri, DO
Amy L. Jones, MD
Jeffrey Scott Kahn, MD
Lakshmi Kannan, MD
Farha Khan, MD
Kevin Farzin Kia, MD
Flora Sewon Kim, MD
Meghan S. Koch, DO
Sanjeev Kota, MD
Joshua Craig Langhorne, MD
Anna-Her Y. Lee, MD
Gary Lichliter, MD
Kirk Jeremy Lodes, MD
Anand K. Lodha, MD
Victor Omar Lopez, MD
Irina Lytvak, MD
Atisha P. Manhas, MD
John Geiser McHenry, MD
Travis James McVay, MD
Uzma F. Mehdi, MD
Brian L. Miller, MD
James M. Mitchell, MD
Furqan Moin, MD
Cynthia Ruby Muller, MD
Hillary E. Myears, MD
Alia C. Nassif, DO
Benjamin Rhett Nelson, MD
Frederic Nha Nguyen, MD
Florence A. Nwagwu, MD
Annette F. Okai, MD
Vanessa C. Ortiz-Sanchez, MD
Rohit Jamnadas Parmar, MD
Wendy Carmen Parnell, MD
Ankit Mukesh Patel, MD
Robert D. Phan, MD
Chris W. Phillips, MD
Natalie L. Pounds, MD
Amanda Peterson Profit, MD
Dustin Luis Ray, MD
Martin B. Raynor, MD
Melanie Lane Reed, MD
Nelson Ivan Reyes, MD
Haiqiong Wu Riggs, MD
Alan J. Romero, MD
Marc A. Salhanick, MD
Venkat Sethuraman, MD
Islam Aly Shahin, MD
Smitha V. Shenoy, MD
Mariel Silva-Musalem, MD
Sameer K. Singh, MD
Amar P. Singh, MD
Terrica Rochelle Singleton, MD
Oluwatosin Urowoli Smith, MD
Caroline E. Sparkman, MD
Chris M. Stutz, MD
Anson T. Tang, MD
Sudha Teerdhala, MD
Gregory R. Thoreson, MD
Mohammad A. Toliyat, MD
Van T. Ton, MD
Kelly A. Tornow, MD
Ashley Warren Tovo, MD
Tham T. Trinh, MD
Matthew John Trovato, MD
Nathan Andrew Vaughan, MD
Astrud S. Villareal, MD
Michael J. Walsh, MD
Thomas T. Wang, MD
Jenny Weon, MD, PhD
Joseph Lee Brett West, MD
Benjamin D. White, MD
Elizabeth G. Wilder, MD
Steve I. Wilson, MD
Angelito F. Yango, MD
Niloofar Yari, MD
Ling Zhang, MD
Cyrus Erik Abbaschian, MD
Ashkan M. Abbey, MD
Christopher F. Adcock, MD
Candice Lynn Addison, MD
Junho Ahn, MD
Richard William Ahn, MD
Andrea Denise Arguello, MD
Jaya Bajaj, MD
Evan L. Barrios, MD
Kaitlin Young Batley, MD
Matthew Claytor Bell, MD
Jonas A. Beyene, MD
Lauren Bockhorn, MD
Ethan Kenneth Boothe, MD
Arthur C. Bredeweg, DO
Elizabeth Caitlin Brewer, MD
Matthew P. Bunker, MD
Kevin B. Cederberg, MD
Chaitanya Chavda, MD
Ryan J. Cheung, DO
Chia-Ye Chu, DO
Michael Hsiang Chung, MD
Andrew S. Chung, MD, PhD
Laylee Elizabeth Ghafar Clare, DO
Jennyfer F. Cocco, MD
Elizabeth Deprato Cochran, MD
M. Brett Cooper, MD
Christopher John Cooper, MD
Spencer R. Cope, MD
Paul Joseph Courtney, MD
Michelle Elizabeth Dang, MD
Noah Charles DeGarmo, MD
Taylor James Derousseau, MD
Renu A. Deshpande, DO
Aishwarya K. Devarakonda, MD
Diana Leigh Diesen, MD
Allison DiPasquale, MD
Bich Ngoc Do, MD
An Nhu Doan, DO
Manojkumar A. Dobariya, MD
Katie D. Dolak, MD
John Francis Eidt, MD
Nnenna C. Ejesieme, DO
Randolph Brian Fierro, MD
Ryan C. Fleming, MD
Edmond Nketti Fomunung, MD
Laura L. Gallagher, DO
Adam C. Gannon, MD
Danielle Marie Giesler, MD
Jane E. Gilmore, MD
Sarah B. Glogowski, DO
Franz Gerald Greil, MD
Ahmed T. Haque, MD
Daniel Har, MD
Henry He, MD
Jennifer Delia Heffernan, MD
Nathan Heineman, MD
Erin B. Highfill, MD
Steven Ellis Hill, MD
Dena Hohman, MD
Steven B. Holloway, MD
Mary Kathryn Hood, MD
Gene S. Hu, MD
Lynn C. Huffman, MD
Conor David Irwin, MD
Tesneem Issa, DO
Mamta K. Jain, MD
Deepna Deepak Jaiswal, DO
Monica Juarez-Gonzalez, DO
Justin Kane, MD
Niraj KC, MD
Sarah A. Kennedy, DO
Mahmoud Michael Khair, MD
Raamis Khwaja, MD
Heidi Kim, MD
Melissa Hester Kinney, MD
Dario N. Kivelevitch, MD
Emily Eads Knippa, MD
Kevin Karl Kruse, II, MD
Melody Lao, MD
Parker R. Lawson, MD
Andres Leal, MD
Bradley C. Lega, MD
Dean Michael Leonard, MD
Erin Tammany Wittman Lincoln, MD
Teodora Andreea Livengood, DO
Matthew T. MacLean, MD
Kshitij Manchanda, MD
Victor Suva-viola Mangona, MD
Tucker C. McCord, DO
David Lawrence McDonagh, MD
Christopher J. McElrath, MD
Amy L. McIntosh, MD
William F. McNamara, MD
Dorian B. Mendoza, MD
Christopher Edward Miller, DO
Adrian Mo, MD
Neal R. Morgan, DO
Garrett L. Morris, DO
Susan Mary Murphy, MD
Patrick K. Musau, MD
Anju Nair, MD
Pannaben Nangha, MD
Troy M. Neal, MD
Catherine Lewis Neal, MD
Toan Q. Nguyen, MD
Sydney Pinch Oesch, MD
Franklin Olumba, MD
Marcial Andres Oquendo Rincon, MD
Michael Oubre, MD
Priyanka Pahuja, MD
Deepak Pahuja, MD
Thomas B. Parnell, MD
Amy Kun Pass, MD
Rikin S. Patel, MD
Chandni N. Patel, MD
Rajesh Patel, MD
Nirav Rajendra Pavasia, MD
Alejandro Perez, MD
Joseph O. Pernisco, MD
Dung M. Pham, DO
Hande C. Piristine
Aaron R. Plitt, MD
Shanica N. Pompey, MD
Anita Punjabi Bajpai, DO
Christiana Sahl Renner, MD
Danielle N. Rucker, MD
Rosechelle Mary Ruggiero, MD
Chayanit Sasiponganan, MD
Shawn R. Schepel, MD
Taylor G. Schmidt, MD
Scott Ryan Seals, MD
Mohamed T. Shabana, MD
Scott Shafiei, MD
Adam C. Sheffield, MD
Moshin Q. Soleja, MD
Juan J. Sosa, MD
Thomas P. Sperry, MD
Jayaprakash Sreenarasimhaiah, MD
Whitney L. Stuard, MD
Jared David Sturgeon, MD
Nilofar Ikram Syed, MD
Alexandra Vaio Sykes, MD
Harold Michael Szerlip, MD
Subhan Tabba, MD, MBA
Alexander M. Tatara, MD
Jacob A. Tausiani, MD
Martha E. Teke, MD
Clara L. Telford, MD
Jean Kunkel Thomas, MD
Katherine Anne Thomas, MD
Mary Ellen Thurman, DO
Coby K. Tran, MD
Hsiang Chih Jim Tseng, MD
Maia E. VanDyke, MD
Ramakrishna R. Veluri, MD
Roopa Vemulapalli, MD
Vani Venkatachalam, MD
Alexandra Paige Volk, MD
Kevin B. Waldrep, MD
Connie Wang, MD
Jenny Wang, MD
Brittani Ann Wattiker, MD
Frank Craig Webster, MD
Bethany M. Werner, MD
Alexander M. Wetzig, MD
Michael Ralph Wheeler, MD
Justin Blake Williams, MD
Cody B. Wolfe, MD
Katherine A. Wright, MD
Eva M. Wu, MD
Rana Yazdani, MD
Tyler R. Youngman, MD
Xuchen Zhang, MD
Yanqiu Zheng, MD
Omar Mohammad Aboudawoud, MD
Leny M. Abraham, MD
Enrique E. Acosta, MD
Sailaja Adari, MD
Robert L. Adkins, MD
Aimaz Afrough, MD
Aradhna Agarwal, MD
Brian Aguirre, DO
Abrar Ahmed, MD
Adil Syed Ahmed, MD
Affan Ahmed, DO
Roma Ahuja, MD
Esra Akkoyun, MD
Amr Al Abbas, MD
Mazin Al Tamimi, MD
David Alderman, MD
Andrew Gabriel Alfaro, MD
Ahsan Turab Ali, MD
Rao Kamran Ali, MD
Kristen Ann Aliano Messina, MD
Brianna Danielle Alvarado, MD
Sabrina Amaya, MD
Armon Amini, MD
Chelsea Anasi, MD
Nirupama Tulasi Ancha, MD
Sydnie Anderson, DO
Sarah E. Andrade, DO
Salah Ghassan Aoun, MD
Hashim Armashi, DO
Damal Kandadai Ashwini Arvind, MD
Zubaida Aslam, MD
Joanna Assadourian, MD
Sumitha Atluri, MD
Mina Attia, MD
Edwin Robert Austin, MD
Usama Zafar Awan, MD
Moyosore Doyinsola Awobajo-Otesanya, MD
Jaime Baeza, MD
Shelby Paige Bagby, MD
Kimberlyn Maravet Baig-Ward, MD, PhD
George Herbert Moran Bailey, II, MD
Bronson Bailey, MD
Kirbi C. Bain, MD
Tolulope Bakare, MD
Ryan C. Baker, MD
Vin Shen Ban, MD
Subhash Banerjee, MD
Basmah Barkatullah, MD
Kevin J. Barnes, MD
Kaylyn Rose Barrett, MD, MPH
Brooke Bartley, MD
Berkay Basagaoglu, MD
Nikita Batra, MD
Lorraine Elizabeth Bautista, MD
Jack Carlton Beale, MD
Jerad L. Beall, DO
Dylan Raymond Beams, MD
Jacob Clayton Becker, MD
Joshua Behar, MD
Lauren Bell, MD
Raj R. Bhanvadia, MD
Kevin Birdsall, MD
Sarah Elizabeth Bivans, MD
Justin Matthew Blakley, DO
Hannah Laura Blanchard, MD
Christopher Joel Blanton, MD
Hema Pandya Bohra, DO
Charles Gillespie Boland, MD
Michael S. Bosh, MD
Leila B. Bostan Shirin, MD
Makayla Bradbury, DO
Amanda N. Braddock, MD
Adam Brantley, MD
Rebecca Anne Briggs, MD
Christopher Britt, MD
Samuel F. Broders, MD
Paul Michael Broker, MD
Rosheem Browne, MD
Emily Nations Bufkin, MD
Jason Granger Bunn, MD
Suna Burghul, DO
Kevin M. Burns, MD
William Garrett Burton, MD
Fredy Roberto Calderon, MD
Fatih Canan, MD
Kristina A. Cantu, MD
Dazhe Cao, MD
Justin M. Cardenas, MD
Gianpaolo Trevor Prisco Carpinito, MD
Paul Cattafi, DO
Jessica Renee Cave, MD
Margaret Cervantes
Nicholas Champagne-Aves, MD
Vincent Chan, MD
Caitlin Chapman, DO
Purujit Chatterjee, MD
Usamah Nazir Chaudhary, MD
Naveed Cheema, DO
Bernice Chen, MD
Gloria S. Cheng, MD
Brandon Michael Chin, MD
Sakina Chinwala, MD
Melody Chiu, MD
Young Woo Cho, MD
Rene Choi, MD
Etze Chotzoglou, MD
Timothy G. Chow, MD
Ashish Chowdary, MD
Kristina Ciaglia, MD
Michael Collins, MD
Jordan Comstock, MD
Scott Ward Connors, MD
Andrew M. Contreras, MD
Chloe Robinson Cooper, MD
Rachel Cox, MD
Joshua Cox, MD
Ryan Craig, DO
Stephen Michael Cresse, MD
Byron Leon Cryer, MD
Holt S. Cutler, MD
Joseph Lin Da, MD
Joseph Daniels, MD
Gaylan Jean Dascanio, MD
William Todd Dauer, MD
Ambriale Alexis Davis, MD
Elizabeth Deatkine, MD
Laura J. Delin, MD
Paul Erich Dilfer, MD
Gina Lyn Do, MD
Audrey Dockendorf, MD
Chester Donnally III, MD
Alleyna F. Dougherty, MD
Brian Duffy, MD
Caitlin Elizabeth Dunn, MD
Charley Edgar
David Michael Edwards, MD
Guy E. Efune, MD
Christine Egu, MD
Yunsha Ehtesham, MD
Luke Eldore, MD
Hector Eusebio Elizondo Adamchik, MD
Hala Kamal El-Mikati, MD
Yasser M. Elshatory, MD
Ifeadikanwa Ifechi Emejulu, MD
Kelsey Endari, MD
Taibat Salami Eribo, MD
Nicole Escobar, MD
Justin Brady Evans, MD
Roger R. Fan, MD
Aaron Solomon Farberg, MD
Michael Augustine Fediw, MD
Georges Antino Feghali, MD
Richard Feng, MD
Joan K. Fernandez, MD
Collin Filley, MD
Robert Aaron Fischer, MD
Matthew Scott Fisher, MD
Jonathan Scott Fletcher, MD
Ryan Anthony Floresca, MD
Yevgenia Y. Fomina, MD
Allison Elizabeth Foster, MD
Sierra Noel Foster, MD
Alexander Frangenberg, DO
Joseph Andrew Frankl, MD
Deborah Grace Freeland, MD
Elisabeth Katerina Frei, MD
Deborah S. Fubara, MD
Kailee Furtado, MD
Alexander James Gajewski, MD
Madeleine Gallagher, MD
Jorge Gamez, MD
Anupama Shripad Gangavati, MD
Ruoqi Gao, MD
Gabriella Garcia, MD
Angelica Garcia, MD
Darren Heath Garner, MD
Brenden Eugene Garrett, MD
German Alfonso Garza Garcia, MD
Elisa M. Geraldino Castillo, MD
Sofia Gereta, MD
Deitrich Gerlt, DO
Maria Ghawji, MD
Richard Le Gibelyou, MD
Thomas Franklin Glass, III, MD
Agnieszka Golian, DO
Caleb Michael Graham, MD
Robin Claire Granberry, MD
Jayasree Grandhi, MD
Holly Guo
Chelsea Guy-Frank, MD
James Dominic Haddad, MD
Avery Leigh Hager, MD
Sameer Halani, MD
Racha Halawi, MD
Morgan Alexa Hammack, MD
Yousef Hammad, MD
Leema Hamoudah, DO
Jasper Han, MD
Wael Adel Samuel Hanna, MD
Rachael Hanson, MD
Samar Harris, MD
Jalen Alexander Harvey, MD
Maya Heath, MD
Martin L. Hechanova, MD
Erin Heimbach, MD
Lucas Hendrix, MD
Emily Henschel, MD
Sarah Marie Hergert, MD
Christina Lynn Herrera, MD
Stefanie Hettwer, MD
Emily Heydemann, MD
Harrison Hicks, MD
Brian Joseph Hopkins, MD
Tiffany L. Horrigan, MD
Elizabeth Bahar Houshmand, MD
Kyle Alexander Howarth, MD
Douglas Joseph Hoye, MD
Meng-Lun Hsieh, DO, PhD
Jenny Huang, MD
Jeffrey Huiming Huang
Carlos Huerta, DO
Rachel Radke Hughes, DO
Albert Huh, MD
Madysen Hunter
Mohamed Ginawi Hussein, MD
Erin Isenberg, MD
Geina M. Iskander, MD
Ibasaraboh Dorcas Iyegha, MD
Meredith Ann Jackson, MD
Jillian C. Jacobson, MD
Kamran Adil Jafree, MD
Martha G. James, MD
Feroz James, MD
Rabia Jamy, MD
Halima Saadia Janjua, MD
Catherine M. Jennings, MD
Misbah Jilani, MD
Bret Andrew Johnson, MD
Zachary Dray Johnson, MD
Stephanie C. Jones, MD
Bayley Alexandra Jones, MD
Ashley L. Jones, MD
Kathryn Marie Kaihlanen, MD
Chaitanya Gopal Reddy Kalathuru, MD
Sanjeeva Praneeth Kalva, MD
Dominique A. Kasindi, MD
Joseph Kelling, MD
Anne Marie Kerchberger, MD
Seyed Alireza Khalafi, MD
Muhammad Sohaib Khan, MD
Mehvish Khan, MD
Ramlah Khan, MD
Zara Khan, MD
Minha Kim, MD
Eunyeop Emma Kim, MD
Joseph Dong-Yeon Kim, MD
Theodora Kipers, MD
Timothy Joseph Kirtek, MD
Angelica Melillo Knickerbocker, MD
Ayeeshik Kole, MD
Ragini Kondetimmanahalli, MD
Emily Korba, MD
Jonathan Ryan Korpon, MD
Victoria Andreevna Koshevarova, MD
Suman Krishna Kotla, MD
John Stephane Shoumou Kouam, MD
Brian Rudolph Kurtz, MD
Ellen Nicole LaBauve, MD
Gregory Samuel Lachar, MD
Scott Noah Lacritz, MD
Soolmaz Laknezhad, MD
Cameron Taylor Landers, MD
Heather Diane Lanier-Diaz, MD
Alexander Long Le, MD
Kristin Lebrasseur, DO
Hannah Lee, MD
Ellen Eunha Lee, MD
Gina Lee, MD
Michelle Lee, MD
Hannah Maryam Lehrenbaum, MD
Nicholas Leonard, DO
Jared D. Lewis, MD
Kamryn Lewis, MD
Michael Mengxi Li, MD
Roger Jie Liang
Victor Liaw, MD
Jorena Lim, MD
Palmila Shuoyi Liu, MD
Jasmine Ana Liu-Zarzuela, MD
Melanie Gwen Lopez, MD
Sandra Edith Loza-Avalos, MD
Christian Ernest Lumley, MD
Bryce Alexander Lutan, MD
David Luu, MD
Katherine Goodwin Maddox, MD
Anthony V. Maioriello, MD
Hugh Mair, MD
Hamza Malick, MD
Chaitanya Lakshmidhar Malladi, MD
Audrey Natsai Mangwiro, MD
Matthias M. Manuel, MD
Kenneth Martinez Algarin
Joseph Ernest Marvin, MD
Alec Russell Mason, MD
Lauren Elizabeth Matevish, MD
Christian Michael Casson Matishak, MD
Emily Kathleen May, MD
Lisandro Maya Ramos, MD
Charles W. McDaniel, DO
Christopher McMillian, MD
Manan Mahendra Mehta, MD, PhD
Jeel Mehta, MD
Jennie B. Meier, MD
John Melek, MD
Jonathan Benjamin Melendez-Torres, MD
Antonio R. Mendez, MD
David Meng, DO
Amanda Kay Mennie, MD
Christopher John Merchun, MD
Nancy Jane Bradley Merrill, MD
Amy Louise Mickelsen, MD
Zachary Miles, MD
Madison Elizabeth Milhoan, MD
Cameron Thomas Miller, MD
James Wyatt Miller, MD
Allante Milsap, MD
Shivani Misra, MD
Sachi Mistry, MD
Dalia Mitchell, MD
Ealing Tuan Mondragon, MD
Brandilyn K. Monene, MD
Ryan Patrick Mooney, MD
Samuel Moore, DO
Stephanie Moreno, MD
Heather Danielle Morris, DO
Devin C. Morris, MD
Zachary Marc Most, MD
Joshua Moton, DO
Stefanie L. Moye
Teale Marie Muir, MD
Aun Ali Munis, MD
Maishara Muquith, MD
Gilbert Zvikomborero Murimwa, MD
Thomas Merritt Murphy, MD
Isaac Garth Myres, MD
Shahid Nadeem, MD
Fahad Najeeb, MD
Chanyanuch Nakapakorn, MD
Priyanka Narvekar, MD
Timothy Wayne Neal, MD
Ilona Nekhayeva, MD
Nicole M. Nevarez, MD
Jaimee Nguyen, DO
Monica Nguyen, MD
Rory Nicolaides, MD
Andrea Nillas, MD
Shamika Ninan, DO
Michael Esha Nissan, MD
John Bartlett Norton, MD
Johanna H. Nunez, MD
Chloe Eleanor Nunneley, MD
Ehiamen Tolulope Okoruwa, MD
Andrea Ayomikun Omehe, MD
Kaci Orr, MD
Jennifer Nkem Oruebor, MD
Mauricio Ostrosky Frid, MD
Can Ozlu, MD
Kelbi Padilla, MD
Sung (David) Hyun Paek, MD
Khusboo Vipul Pal, MD
Sravana K. Paladugu, MD
Evelyn Tiffany Pan, MD
Deepa Panjeti-Moore, DO, MPH
Niki N. Parikh, MD
Jeong Ben Park, MD
Emma Parks
Viral Manojkumar Patel, MD
Faisalmohemed Patel, MD
Swapneel Jagdishchandra Patel, MD
Shivam Bhavin Patel, MD
Jaisal Prakash Patel, MD
Adam Patrick, MD
Ankita Patro, MD
Eric William Pepin, MD, PhD
Morgan Fay Pettigrew, MD
Ronnie A. Pezeshk, MD
Nicholas Quang Pham, DO
David Quang-Nam Pham, MD
Patrick E. Powers, MD
Austin J. Pucylowski, MD
Muhammad Raza Karim Qureshi, MD
Saikripa Mangala Radhakrishnan, MD
Shelly Ann Ragsdale, MD
Dina Naziha Rahhal, MD
Hootan Rahimizadeh, MD
Greeshma Rajeev-Kumar, MD
Shivani Raman, MD
Meghana Rao, MD
Ali Rauf, DO
Manoj Ravichandran, MD
Garrett Ray, MD
Bappaditya Ray, MD
Syed Sadi Raza, MD
Samreen Rizvi Raza, MD
Umair Rehman, MD
Kurt Michael Reichermeier, MD
Jeffrey Andrew Remster, MD
Vincent Riccelli, MD
Carley Hagar Rich, MD
Chelsie Riley, DO
Alan Ritchie, MD
Homero Rivas, MD
Matthew Justin Roberts, MD
Jordin Shelley Roden-Foreman, DO
Sofia Lourdes Rodriguez, DO
Avery Rogers, MD
James Kevin Rohwer, MD
Kevelyn Ashley Rose, MD
Noah Munn Rosenberg, MD
John Andrew Rosener, MD
Keannette Russell, MD
Amit Saha, MD
Joseph L. Sailors, MD
Celina Salcido, DO
Noor Saleemi, DO
Walid A. Saleh, MD
Adam Saleh, MD
Narda Salinas, MD
Vijeta N. Salunkhe, MD
Sreeja Sanampudi, MD
Glaiza-Mae Sande-Docor, MD
Eric J. Sanders, MD
Alisha Sansguiri, MD
Kristen Santiago, MD
Jacobo Leopoldo Santolaya, MD
Namita Saraf, MD
Patricia SarcosAlvarez, MD, DMD
Ali Munir Sawani, DO
Thomas Schlieve, MD
Robyn Laurel Scott
Madisen Seidel, DO
Nicholas Evan Sevey, MD
Rushikesh Shah, MD
Amir Shlomo Sharim, MD
Priya Sharma, MD
Akash Sharma, MD
Mily Sheth, MD
Sofia A. Shirley, MD
Andrew D. Shubin, MD
Ashton Mark Smelser, MD
Jerrod Ryan Spence, MD
Tarun Srinivasan
Pallavi Srivastava, MD
Rachel Stading, MD
Tyler Stannard, MD
Isabella Strozzi, MD
Kathleen Strybos, MD
Emily Styslinger, MD
Vinayak Subramanian, MD
Scott Sullivan, MD
Nathan David Sumarsono, MD
Khalid Hussain Syed, MD
Daniel Tai, MD
Dustin Lee Taliaferro, DO
Christopher Odell Taliaferro, DO
Kamala Priya Tamirisa, MD
Donald Tan, MD
Tommy Tan
Michael Tang, MD
Amy Tao, DO
Jonathan Tao, MD
Nadia Tello, MD
Phillip Alan Tenzel, MD
Roshni Thachil, MD
Rahul Tharoor, MD
Michelle Thieu, MD
Serin Thomas, MD
Melissa Thornton, MD
Joshua Tidman, MD
Kelly Milman Tobias, MD
Ethan Marc Tobias, MD
Stephen M. Topper, MD
Kimberly Tran, DO
Kaitlynn Minh Trinh, MD
Rylee Trotter, MD
Khanh Thoai Truong, DO
Hieu Trong Truong, MD
Jennifer W. Tse, MD
Ellie Tuchaai, MD
Destiny Chinenye Uwaezuoke, MD
Sara Valek, MD
Jesus Valencia, MD
Tejaswi Veerati, MD
Ryan Justin Vela, MD
Mayank Verma, MD
Ksenia Vlassova, MD
Mitchell Von Itzstein, MD
Anupama N. Wadhwa, MD
Kurt John Wagner, III, MD
Jennifer Ann Walker, MD
Melissa Baker Wallace, MD
Cecilia Wang, MD
Jiexin Wang, MD, PhD
Cynthia Sanhwa Wang, MD
Angela Wang, MD
Dustan Watkins, MD
Gabriella Webster, MD
Lindsay Weitzel, MD
Alexis Whellan, MD
Alexander Brian Whitaker, MD
Alesha Marie White, MD
Zachary Whitham, MD
Jameson Grant
Daria Wiener, MD
Alexa Renae Wilden, MD
Byrd C. Willis Pineda, MD
Remi Aleaha Wilson, MD
Averi Elizabeth Wilson, MD
Matthew W. Wise, DO
Terrence Y. Wong, MD
Matthew Wooten, MD
Gillian Wright, MD
Renqing Wu, MD
Richard Wu, MD
Judy Jingyi Xue, MD
Kristine Yang, MD
Katarina Yaros, MD
Pedro R. Yen, MD
Vivian Chang Yen-Xu, MD
Wendy Yin, MD
Sushma Yitta, MD
Mohammad Faizan Zahid, MD
Timothy Andrew Zaki, MD
Giovani Moises Zelada, MD
Emily Ann Zientek, MD
David Zimmerhanzel, MD
Josue Zozaya, MD

Texas Oncology broke ground on a transformative $120 million cancer center in Plano, marking a historic milestone for cancer care in North Texas. The three-story, 100,000-square-foot facility represents the largest cancer center in Texas Oncology’s network and will serve as a flagship location offering comprehensive, cutting-edge oncology services.
The new center holds special significance as Plano is home to the first full-service Texas Oncology cancer center. This groundbreaking project represents a full-circle moment, bringing the organization’s largest and most advanced facility back to the community where comprehensive Texas Oncology cancer care first began.
CENTRALIZED EXCELLENCE UNDER ONE ROOF
The state-of-the-art facility will consolidate four existing Texas Oncology locations into a single, unified center:
• Texas Oncology-Plano East
• Texas Oncology-Plano West
• Texas Oncology-Plano Prestonwood
• Texas Oncology-Plano Presbyterian Hospital
This centralization will create a streamlined, efficient healthcare experience that improves accessibility and elevates the quality of care for the Plano community. By bringing all services under one roof, the facility will eliminate barriers and create a more cohesive treatment journey for patients.
ADVANCED TECHNOLOGY AND TREATMENT CAPABILITIES
The facility will feature cutting-edge diagnostic and treatment technologies, including:
• Two PET CT scanners for advanced imaging
• Two CT scanners for comprehensive diagnostics
• Three linear accelerators for precision radiation therapy
• One HDR (High-Dose Rate) vault for specialized treatments
• Advanced infusion therapy capabilities
• Comprehensive medical oncology services
• State-of-the-art radiopharmaceutical treatments
PATIENT-CENTERED DESIGN EXCELLENCE
Designed with both patients and care teams in mind, the facility incorporates innovative architectural elements that prioritize comfort and efficiency:
• Strategic positioning to maximize natural daylight throughout the building
• Optimized layout that minimizes walking distances for care teams
• 400 parking spaces designed for easy access
• Smooth pedestrian flow patterns for patients, physicians, and staff
• Patient-centered design that creates a healing environment
The site’s prominent location adjacent to a hospital and visible from major highways makes it a true landmark for the community.
“This project represents a homecoming in the truest sense. Plano is where Texas Oncology established its first comprehensive cancer center, and now we’re bringing our most advanced facility back to this community. The $120 million investment reflects our commitment to ensuring North Texas patients have access to the most sophisticated cancer treatments available anywhere in the country, all within a healing environment designed around their needs. By consolidating four locations into this flagship center, we’re not just building a larger facility—we’re creating a seamless, high-tech center with access to the latest technologies, clinical trials and radio-pharmaceutical therapies. This will enhance care coordination in a way that will meaningfully improve outcomes for the patients we serve.” - R. Steven Paulson, M.D., CEO and President, Texas Oncology
The new cancer center represents more than a consolidation of services—it’s a commitment to advancing the future of cancer care in Plano and the broader North Texas region. The facility will offer multidisciplinary services that utilize the latest advancements in cancer treatment, providing patients with comprehensive, coordinated care that addresses every aspect of their treatment journey.
The flagship center is expected to generate tremendous excitement in the community. Its scale, advanced technology, and patient-focused design make it a landmark project that will serve the community for generations to come.
The project represents a collaboration between Texas Oncology, McCarthy, and Cottonwood Development Partners, bringing together expertise in oncology care, healthcare operations, and development to create a world-class cancer treatment facility.
The new Texas Oncology cancer center, located off Village Creek Drive, is expected to open in December 2026, serving as a model for comprehensive cancer care and setting new standards for patient experience and treatment outcomes in the region.
“From a clinical perspective, this facility will transform how we deliver cancer care in the region. Having three linear accelerators, advanced PET CT capabilities, and comprehensive infusion services under one roof means our multidisciplinary teams can collaborate more effectively and our patients can receive their entire treatment journey in a single location. The thoughtful design—from maximizing natural light to minimizing walking distances—directly supports better patient experiences and clinical efficiency. This isn’t just about advanced technology; it’s about creating an environment where our care teams can focus on what matters most: delivering personalized, cutting-edge treatment to every patient who walks through our doors.” - Manish Gupta, M.D., Managing Medical Director, Northeast and DFW West Region, Texas Oncology
With more than 550 physicians and 300 locations, Texas Oncology is an independent private practice that sees more than 71,000 new cancer patients each year. Founded in 1986, Texas Oncology provides comprehensive, multidisciplinary care and includes Texas Center for Proton Therapy, Texas Breast Specialists, Texas Colon & Rectal Specialists, Texas Oncology Surgical Specialists, Texas Urology Specialists, and Texas Infusion and Imaging Center. Texas Oncology’s robust community-based clinical trials and research program has contributed to the development of more than 100 FDA-approved cancer therapies. Learn more at TexasOncology.com. DMJ



Special thanks to our event sponsor, Texas Health Physicians Group!
Sarah Baker, MD | Blake Barker, MD
Mark Casanova, MD | Donna Casey, MD
Fred Ciarochi, MD | Gates Colbert, MD
Cristie Columbus, MD | M. Brett Cooper, MD
Kathryn Dao, MD | Emma Dishner, MD
Shaina Drummond, MD | Callie Emery, MD
Lauren Fine, MD | Deborah Fuller, MD
Sumana Gangi, MD | Jessica Gillen, MD
Gordon Green, MD | Robert Gross, MD
Robert Haley, MD | Michelle Ho, MD
Philip Huang, MD | Mitchell Huebner, MD
Beth Kassanoff-Piper, MD | Darlene King, MD
Kevin Klein, MD | Allison Moore Liddell, MD
Michael Meyerson, MD | Angela Mihalic, MD
Jules Monier, MD | Sina Najafi, DO
Selika Owens, MD | Lee Ann Pearse, MD
Norberto Rodriguez-Baez, MD
Aralis Santiago-Plaud, MD
Aurelia Schmalstieg, MD
Les Secrest, MD | Jayesh Shah, MD
Cynthia Sherry, MD | Inna Shmerlin, MD
Charlotte Starghill, MD | Lisa Swanson, MD
Suzanne Wada, MD | David Waller, MD
Barbara Waller, MD | Gary Weinstein, MD
Randall Wooley, MD






























Patrick H. Pownell, MD, FACS
Plastic and Reconstructive Surgery
Certified, American Board of Plastic Surgery
Dallas Office
7115 Greenville Ave. Ste. 220 (214) 368-3223
Plano Office
6020 W. Parker Road, Ste. 450 (972) 943-3223
www.pownell.com
CE Broker - Compliance with Confidence
Easily understand your specific CME requirements and compliance status, find and take renewal-ready courses, and report your course completions directly to the Texas Medical Board for a hassle-free renewal.
Benefits:
Find, complete, and report approved CME; View your forever course history; Take CME on the go with the free mobile app; Access to 24/7 support and more!
www.cebroker.com
Dallas-Fort Worth Fertility Associates
Growing Family Trees Since 1999
www.dallasfertility.com
Samuel Chantilis, MD
Karen Lee, MD
Mika Thomas, MD
Ravi Gada, MD
Laura Lawrence, MD
Jennifer Shannon, MD
Monica Chung, MD
Melanie Evans, MD
Dallas: 5477 Glen Lakes Drive, Ste. 200, Dallas, TX 75231, 214-363-5965
Baylor Medical Pavilion: 3900 Junius Street, Ste. 610 Dallas, TX 75246, 214-823-2692
Medical City: 7777 Forest Lane, Ste. D–1100 Dallas, TX 75230, 214-692-4577
Southlake: 910 E. Southlake Blvd., Ste. 175 Southlake, TX 76092, 817-442-5510
Plano: 6300 W Parker Road, Ste. G26
Dallas County Medical Society (DCMS) does not endorse or evaluate advertised products, services, or companies nor any of the claims made by advertisers. Claims made by any advertiser or by any company advertising in the Dallas Medical Journal do not constitute legal or other professional advice. You should consult your professional advisor.
TMLT – Inside front cover
Southwest Diagnostic Imaging – Page 9
Southwest Diagnostic Center for Molecular Imaging – Page 22
Stonewood Investments Inc. – Page 27
TMA Insurance Trust – Inside back cover
SWMIC – Back cover
Leading provider of compliance-based medical waste solutions.
Still using cardboard boxes for your medical waste collection?
Let Biogenic Solutions upgrade your facility with our OSHA-compliant, mobile waste disposal containers & reusable Sharps program for DCMS members.
469-460-9660
Retina Institute of Texas, PA
Vitreous and Retina Diagnosis and Surgery www.retinainstitute.com
Maurice G. Syrquin, MD
Marcus L. Allen, MD
Gregory F. Kozielec, MD
S. Robert Witherspoon, MD
3414 Oak Grove Ave. Dallas, TX 75204 | (214) 521-1153
Baylor Health Center Plaza I 400 W. Interstate 635, Ste. 320 Irving, TX 75063 | (972) 869-1242
3331 Unicorn Lake Blvd. Denton, TX 76210 | (940) 381-9100
1010 E. Interstate 20 Arlington, TX 76018 | (817) 417-7769
www.DrPruitt.com | (817) 966-0235
partners@dallas-cms.org | www.dallas-cms.org
Robert E. Torti, MD
Santosh C. Patel, MD
Henry Choi, MD
Steven M. Reinecke, MD
Philip Lieu, MD, FASRS
1706 Preston Park Blvd., Plano, TX 75093 | (972) 599-9098
2625 Bolton Boone Drive, DeSoto, TX 75115 | (972) 283-1516
1011 N. Hwy 77, Ste. 103A Waxahachie, TX 75165 | (469) 383-3368
18640 LBJ Fwy., Ste. 101 Mesquite, TX 75150 | (214) 393-5880
10740 N. Central Expy., Ste. 100 Dallas, TX 75231 | (214) 361-6700
James R. Sackett, MD
Daniel E. Cooper, MD
Paul C. Peters Jr., MD
Andrew B. Dossett, MD
Eugene E. Curry, MD
Daniel A. Worrel, MD
Kurt J. Kitziger, MD
Andrew L. Clavenna, MD
Holt S. Cutler, MD
Mark S. Muller, MD
Todd C. Moen, MD
J. Carr Vineyard, MD
M. Michael Khair, MD
William R. Hotchkiss, MD
J. Field Scovell III, MD
Jason S. Klein, MD
Brian P. Gladnick, MD
8315 Walnut Hill Lane, Ste. 125, Dallas, TX (214) 363-6000
H. Pruitt, MD, FACS
ADVERTISE YOUR PRACTICE HERE!
Bradford S. Waddell, MD
William A. Robinson, MD
Tyler R. Youngman, MD
Justin Cardenas, MD
9301 N. Central Expy., Ste. 500, Dallas, TX 75231
3800 Gaylord Pkwy., Ste. 710, Frisco, TX 75034
Phone: (214) 466-1446 Fax: (214) 953-1210


Equality Health leverages the proven capabilities of value-based payment models to transform healthcare for diverse and often marginalized populations. From predictive modeling to advanced care-tracking tools, Equality Health’s proprietary software helps participating PCPs streamline value-based administration and stay one step ahead of a patient’s journey. Equality Health provides solutions that address the challenges of transitioning to and working in value-based care, so providers can concentrate on what matters most: patient health. Our Medicaid-first care model reduces administrative burden, streamlines processes, provides in-person support, and offers additional financial opportunities.


Equality Health’s technology platform gets directly to the root of the multiple-payer-portal problem by providing one portal for multiple plans. CareEmpower® enables practices to monitor, track, and manage preventive care all in one place.
Participating in Equality Health’s activity-based payment program enables providers to receive payments quarterly, eliminating the often long wait for reimbursement and paying providers up to five times faster than standard health plan reimbursement.
Equality Health also provides practices with an in-person care team that helps optimize workflows, allowing practices to more easily identify high-risk patients, initiate care, and close care gaps. Every element of support we provide empowers practices to function more efficiently and effectively, with improved patient outcomes as the ultimate goal.
Equality Health partners with more than 3,200 PCPs and serves more than 700,000 lives across Arizona, Texas, Tennessee, Louisiana, and Virginia. Our members engage with holistic, personalized programs delivered through the lens of social and cultural needs. Equality Health is revolutionizing how care is delivered by establishing critical linkages among payers, providers, members, and community resources.
For more information about Equality Health, visit equalityhealth.com or follow @EqualityHealth on Facebook, Twitter, and LinkedIn.
Healthcare professionals give so much to their communities—shouldn’t their homes offer them the sanctuary they deserve? After an 18-year career in healthcare leadership and strategic supply chain management, I witnessed firsthand the immense pressures faced by those in the medical field, especially during the COVID-19 pandemic. These experiences inspired me to transition into real estate and, in 2024, establish Dr. Healthy Homes, a unique real estate initiative created to help healthcare professionals find spaces that promote balance, wellness, and a sense of sanctuary. My mission is to deliver a seamless, tailored experience that supports your personal and professional needs.
To bring Dr. Healthy Homes to life, I partnered with Monument Realty, a brokerage renowned for its innovative approach, exceptional agents, and industry-leading success. Based in Frisco, Texas, Monument Realty is a market leader with over 800 agents and eight offices across Texas. Exclusive partnerships with the Dallas Cowboys, Texas Rangers, and PGA of America give clients unmatched marketing advantages, ensuring your real estate goals are met with precision and professionalism.
The Health Link Program was designed to meet the unique needs of Dallas County physicians and healthcare professionals, simplifying and enhancing the real estate experience. As a concierge-level realtor, I go above and beyond by recording home walkthroughs, hosting virtual meetings via Zoom, Teams, and other platforms, and providing expert guidance to streamline your home search or sale. I take the time to understand your unique lifestyle and goals, ensuring each step of the process is stress-free and customized to fit your needs. From personalized mortgage solutions and utility setup assistance to post-move support, I handle every detail so you can focus on your patients and practice while we take care of the rest. Contact us today to find a home that promotes your wellness and supports your demanding lifestyle.
Dr. Bri Huedepohl, D.H.Sc., R.T.(R)
REALTOR® | Brihuedepohl@monumentstar.com | Cell: 612-202-3519

For practice owners with group coverage, we help manage costs while offering your employees a combination of PPO and HMO plans.
If your staff is covered elsewhere (such as through a spouse’s plan), you might still qualify for group PPO coverage for yourself and your family.
If you own a business with your spouse as the sole employee, you may be eligible for group coverage, without partnership documentation.
Partners with no W-2 employees may qualify for individual group coverage by providing basic partnership documentation and your company’s SS-4 or a recent K-1 (Form 1065).
For physicians opening a new practice, we simplify the process of starting your group plan, right from the beginning.
We assist with high-deductible health plans, allowing you to open a Health Savings Account (HSA) and take advantage of tax-saving benefits.
We continue to provide support after enrollment, handling administrative tasks such as issuing ID cards, making updates, and managing plan changes.
This Open Enrollment season, we’re here to help.
Whether you practice independently, with a partner, or lead a team, you may still qualify for group PPO coverage—even just for yourself and your family. With us, you’ll have access to clear, physician-focused guidance, allowing you to make informed decisions with confidence.
Our commitment doesn’t end when your coverage begins. At TMA Insurance Trust, coverage comes with care, not just at enrollment but throughout the year. There are no inflated costs, hidden fees, or sales pressure; just practical help, ongoing plan management, and reinvestment in vital resources that support you and your fellow physicians.

Make the most of Open Enrollment 2026. Connect with a TMA Insurance Trust advisor at 1-800-880-8181, Monday through Friday from 8:00 AM to 5:00 PM CST or visit tmait.org.



