Patients, Doctors Fear AI in Medicine — Should They?

— Worries and risks surrounding generative artificial intelligence technology

by Robert Pearl, MD | July 5, 2023

The sanctity of the doctor-patient relationship has always been a cornerstone of modern medicine. It’s a relationship rooted in trust, confidentiality, and mutual understanding.

Yet, with the advent of generative artificial intelligence (AI) technologies like ChatGPT, apprehension is growing among doctors and patients. Physicians fear these tools will undermine (or replace) them. Patients worry the personal touch of their healthcare providers will be replaced by zeros and ones.

A recent Pew Research Center poll found that most Americans feel this way. Sixty percent of patients surveyed said they feared their healthcare provider would rely too much on AI to diagnose disease and recommend treatments, and 57% said they worried that AI would erode the connection they have with their healthcare provider.

As millions of Americans begin to navigate the technological revolution of medicine, a little technophobia is understandable. But fear shouldn’t overshadow the valid reasons for optimism.

ChatGPT and similar technologies have the potential to strengthen, rather than compromise, medical care in the U.S. If trained properly and used wisely, AI can rekindle, not wreck, the doctor-patient relationship.

To understand the upside, consider the No. 1 fear patients express about AI in healthcare: the risk that their doctor will depend too much on it.

This isn’t a new type of anxiety. According to a 2021 opinion poll out of Penn State, taken more than a year before the rollout of ChatGPT, 77% of Americans said they believe society relies too much on technology to succeed.

But when it comes to the doctor-AI relationship, there’s little risk of physicians leaning too hard on ChatGPT to the detriment of patients. More likely, AI will merely bolster the doctor’s decision-making.

Already, doctors and their teams are using generative AI tools like ChatGPT and Med-PaLM 2 when seeking a second opinion. In the future, data-rich AI models will arm doctors with a diagnostic support system that can minimize medical errors and maximize patient safety.

At top academic institutions, educators are gearing up to teach medical students and residents how to use AI safely and effectively. Educators agree that chatbots, when used responsibly, can accelerate and deepen medical learning — while also helping students avoid the well-documented pitfalls of information overload and burnout. Meanwhile, Congress has begun hearings on AI with the intent to protect Americans from harm.

For the foreseeable future, we can expect doctors to remain accountable for all medical decisions while turning to ever-more reliable AI sources for a helpful boost.

In addition to fears that AI will damage the doctor-patient relationship, another common concern is that AI, like any new technology, might glitch, generating inaccurate and potentially deadly advice. This, too, is a valid concern. Today’s generative AI tools can’t be used in medicine without a physician’s oversight. But as the tech becomes more dependable, it has the potential to fill a huge healthcare void.

Imagine, for example, your child awakens at midnight with a 103°F fever. The doctor’s office is closed and, should you call anyway, a recorded message will tell you to “dial 911 in case of an emergency.” That leaves you, as a parent, with two options: wait until morning to telephone your family doctor and hope your child doesn’t die, or race to the emergency department, where you’ll likely wait hours to be seen and be charged 12 times more than at your physician’s office.

Future generations of AI will likely give parents another option. AI technologies are doubling in power and performance every 3 to 6 months, which means next-gen AI should have no trouble learning to ask the same questions as medical professionals who work in 24/7 clinical call centers. And AI will be trained to follow the same expert protocols, too. Once that happens, tools like ChatGPT will be able to dispense safe, reliable, and immediate medical advice — day or night.

And when it comes to concerns that AI will depersonalize doctor visits, patients should consider a likelier possibility. Most doctors spend 10 to 20 hours a week on administrative duties like filling out insurance forms and entering patient data into medical records. These mundane tasks not only contribute to rising rates of burnout in medicine, but also consume precious time that physicians could be spending with patients.

By delegating these jobs to AI (using an existing suite of voice recognition, transcription, and automation tools), your physician can focus more on you and less on the computer that, today, sits between the two of you.

Another set of patient concerns relates to privacy and security.

AI systems depend on massive data sets. In medicine, these databases may one day contain the totality of your medical information, making them a tempting target for cybercriminals. But breaching those files will be much harder than patients might think. That’s because companies like Microsoft and Google are helping develop some of medicine’s leading AI products.

Tech giants couldn’t survive without sufficient public trust. Therefore, they’ll have a strong financial incentive to keep our medical information as well protected as our passwords, credit card information, and browsing history. In our digital world, security and privacy risks are ever present. But those risks will be no greater with medical AI tools than with other technologies we use comfortably today.

Finally, the emergence of AI in medicine will raise an array of ethical questions. Who’s responsible when AI makes a medical mistake? How will we ensure that all people have equitable access to AI-based healthcare services? These are complex issues that will require thoughtful dialogue among ethicists, physicians, patients, technologists, and policymakers.

Congress will need to do its part to define guidelines and regulations that govern the ethical use of AI in healthcare, protecting patient interests while fostering innovation. But as we compare the potential risks that AI poses against the potential health benefits it offers, all of us — patients and medical professionals — should feel optimistic. Generative AI will improve the practice of medicine and our personal health.

Robert Pearl, MD, is a professor at Stanford’s School of Medicine and Graduate School of Business in California and the former longtime CEO of the Permanente Medical Group (Kaiser Permanente).