
AI on AI: Chatter That Matters: Using Chatbots to Improve Patient-Provider Communication

By Dr. Albert Wu.

“The best course of action for your symptoms is benign neglect,” advised the emergency room doctor. Susan Sheridan had developed sudden drooping of the right side of her face and a fierce headache. She was stunned to be sent home with her symptoms labeled as “benign.” Then, the former international banker turned patient safety advocate did something for the first time: she opened ChatGPT and typed in her symptoms: “facial droop, facial pain, dental work.”1 Instantaneously, the program suggested a few possible diagnoses, including Bell’s palsy. Armed with this possibility, she returned to the emergency room, where a different doctor saw her and quickly arrived at a presumptive diagnosis: shingles, perhaps triggered by the dental work. She was given steroids and antivirals and recovered quickly.

“The most crucial step toward healing is having the right diagnosis,” said Dr. Andrew Weil. Is the lesson really that simple? Is the solution to equip patients with ChatGPT, sit back, and watch the efficiency and accuracy of diagnosis rise? Perhaps so, but what happens when patients begin arriving at their physician’s office confident about the diagnosis a chatbot has given them?

Flaws versus features: self-diagnosis in the age of information

Although the chatbot revolution seems unprecedented in medicine, precursors have been developing under the radar for over 50 years. Since the internet became available, patients have increasingly done their own research about their condition before arriving at the clinic. But now, patients are beginning to use AI tools to help them diagnose their ailments. A poll from the Kaiser Family Foundation found that 17% of US adults say they use AI chatbots (computer programs that use artificial intelligence to simulate conversation with humans) at least once a month to find health information and advice.2

Trained on medical knowledge, GPT-4 and other chatbots can be used for queries related to diagnosis and treatment. The systems are interactive, and the user can ask follow-up questions to reach an answer. Some emerging evidence lends credence to this behavior. In an early study, ChatGPT was able to reach a correct diagnosis in 39% of New England Journal of Medicine case challenges.3 These cases often stump the Journal’s physician readership.

However, what can seem like a blessing to patients can seem like a curse to overburdened physicians. Queries from patients via online portals (e.g., MyChart) have increased several-fold in the last few years, and there is evidence that this deluge of online patient messages is a major driver of physician burnout. Patients today, armed with recommendations from AI chatbots, arrive at their physician visits with a greater degree of certainty about their diagnosis and what they need. The information they have obtained provides what appears to be a complete picture of the diagnosis and treatment options. Yet some of that information may be incomplete, misleading, or altogether incorrect.

The phrase “Don’t confuse your Google search with my medical degree” is often repeated in medical circles. The suggestions proposed by patients can be out of context or inconsistent with current practice. This leads to eye-rolling when physicians are presented with a far-fetched theory offered by the internet. A recent review recommended against unsupervised patient use of ChatGPT for diagnosis and triage.4

On the other hand, physicians themselves are also increasingly inclined to consult the internet, in general, and AI chatbots, in particular, to help them consider possibilities they may have forgotten or overlooked.

“Let’s figure this out together”

We recommend that physicians and organized medicine lean into the arrival of AI chatbots, which are increasingly ubiquitous and clearly here to stay. We recommend three courses of action: (1) training patients and patient caregivers, (2) training providers, and (3) training patients and providers together.4

Equipping patients with the new generation of AI tools can serve both patient and provider goals: AI can improve patients’ ability to manage their own health, improve patient-provider communication, and lead to more efficient and effective diagnosis. To ensure that AI chatbots are a force for good in the care delivery process, patients should be encouraged and empowered to educate themselves about their ailments. They should also understand the limitations of the tools. Before treating themselves, they should almost always discuss the situation with a clinician. Patients could be instructed on how to use AI chatbots to understand and manage their own condition. They could also be pointed to the tools that yield the most accurate information.

Physicians should be trained to talk to their patients about the use of AI chatbots. It is counterproductive for physicians to dismiss patient suggestions out of hand; this is likely to make patients feel disrespected and rejected. At best, this can damage the trust and relationship crucial to good-quality care. At worst, it may even contribute to anger and conflict.

An extension of this approach in communication and care would be for physicians to explore the uses of this new tool together: When a patient presents a potential diagnosis or suggests a treatment, a solution might be to say, “There is so much information out there to deal with. Let’s see what we can find out.” The physician might then turn the computer screen so the patient can see it and initiate a search together.

These approaches require physicians to relinquish some of their role as the sole source of medical information and decision-making. But that central role is already in flux. It would be wise to embrace this change, and it is key to proactively engage with patients as partners in their own care.

AI tools will inevitably become smarter and less fallible. Patients and physicians are at a transition point in dealing with a flood of health information. Some conflict and frustration are inevitable. However, adopting an approach in which patients and providers work shoulder to shoulder to “figure this out together” may be the most direct strategy for putting AI chatbots to use in improving patients’ health and the quality of care.

About the author:

Albert W. Wu, MD, MPH is a practicing general internist and the Fred and Juliet Soper Professor of Health Policy & Management, with joint appointments in Epidemiology, International Health, Medicine, Surgery, and Business at Johns Hopkins University. He is director of the Center for Health Services and Outcomes Research and of the PhD program in Health Services Research. He has worked in patient safety since 1988. He was Senior Adviser for Patient Safety at WHO from 2007 to 2009 and continues with this work. He is director of Strategic Collaborations for the Armstrong Institute, leads the online Master of Applied Science in Patient Safety & Healthcare Quality, and is Editor-in-Chief of the Journal of Patient Safety and Risk Management. He coined the term “second victim,” and is co-founder and co-director of the RISE (Resilience in Stressful Events) peer support program.

 

Cited Works

  1. Rosenbluth T. Dr. Chatbot will see you now. New York Times. September 11, 2024. https://www.nytimes.com/2024/09/11/health/chatbots-health-diagnosis-treatments.html
  2. Presiado M, Montero A, Lopes L, et al. KFF Health Misinformation Tracking Poll: Artificial intelligence and health information. KFF. August 15, 2024. https://www.kff.org/health-misinformation-and-trust/poll-finding/kff-health-misinformation-tracking-poll-artificial-intelligence-and-health-information/ (accessed October 6, 2024).
  3. Kanjee Z, Crowe B, Rodman A. Accuracy of a generative artificial intelligence model in a complex diagnostic challenge. JAMA. 2023;330(1):78-80.
  4. Wu AW. Chatting together: Using AI chatbots to improve diagnostic excellence. Journal of Patient Safety and Risk Management. 2024;29(5):222-224.

 

The opinions expressed here are those of the authors and do not necessarily reflect those of The Johns Hopkins University.

