On November 30 last year, OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the artificial intelligence-powered chatbot. “I was excited but, to be honest, a little bit alarmed,” said Peter Lee, the corporate vice-president for research and incubations at Microsoft, which invested in OpenAI.
He and other experts expected that ChatGPT and other AI-driven large language models could take over mundane tasks that eat up hours of doctors’ time and contribute to burnout, like writing appeals to health insurers or summarising patient notes.
They worried, though, that artificial intelligence also offered a perhaps too tempting shortcut to finding diagnoses and medical information that may be incorrect or even fabricated, a frightening prospect in a field like medicine.
Most surprising to Lee, though, was a use he had not anticipated — doctors were asking ChatGPT to help them communicate with patients in a more compassionate way.
In one survey, 85 per cent of patients reported that a doctor’s compassion was more important than waiting time or cost. In another, three-quarters of respondents said they had gone to doctors who were not compassionate. And a study of doctors’ conversations with the families of dying patients found that many were not empathetic.
Enter chatbots, which doctors are using to find words to break bad news and express concerns about a patient’s suffering, or to explain medical recommendations.
Peter Lee of Microsoft said that was a bit disconcerting. “As a patient, I’d personally feel a little weird about it,” he said.
But Dr Michael Pignone, the chairman of the department of internal medicine at the University of Texas at Austin, US, has no qualms about the help he and other doctors on his staff got from ChatGPT to communicate regularly with patients.
He explained the issue in doctor-speak: “We were running a project on improving treatments for alcohol use disorder. How do we engage patients who have not responded to behavioural interventions?” Or, as ChatGPT might respond if you asked it to translate that: how can doctors better help patients who are drinking too much alcohol but have not stopped after talking to a therapist?
He asked his team to write a script for how to talk to these patients compassionately.
“A week later, no one had done it,” he said. All he had was a text his research coordinator and a social worker on the team had put together, and “that was not a true script”, he said. So Dr Pignone tried ChatGPT, which replied instantly with all the talking points the doctors wanted.
Social workers, though, said the script needed to be revised for patients with little medical knowledge. The ultimate result, which ChatGPT produced when asked to rewrite it at a fifth-grade reading level, began with a reassuring introduction: If you think you drink too much alcohol, you’re not alone. Many people have this problem, but there are medicines that can help you feel better and have a healthier, happier life.
That was followed by a simple explanation of the pros and cons of treatment options. The team started using the script this month. Dr Christopher Moriates, the co-principal investigator on the project, was impressed. “Doctors are famous for using language that is hard to understand,” he said. “It is interesting to see that even words we think are easily understandable really aren’t.”
Some question whether it is necessary to turn to an AI program for empathetic words. “Most of us want to trust and respect our doctors,” said Dr Isaac Kohane, professor of biomedical informatics at Harvard Medical School, US. “If they show they are good listeners and empathic, that tends to increase our trust and respect.”
But empathy can be deceptive. It can be easy, he said, to confuse a good bedside manner with good medical advice.
There’s a reason doctors may neglect compassion, said Dr Douglas White, the director of the programme on ethics and decision-making in critical illness at the University of Pittsburgh School of Medicine, US. “Most doctors are pretty cognitively focussed, treating the patient’s medical issues as a series of problems to be solved,” White said. As a result, he said, they may fail to pay attention to “the emotional side of what patients and families are experiencing”.
At other times, doctors are all too aware of the need for empathy, but the right words can be hard to come by.
Dr Gregory Moore, who until recently was a senior executive leading health and life sciences at Microsoft, wanted to help a friend who had advanced cancer. Her situation was dire, and she needed advice about her treatment and future. He decided to pose her questions to ChatGPT.
The result “blew me away,” Dr Moore said. In compassionately worded answers to his prompts, the program gave him the words to explain to his friend the lack of effective treatments: I know this is a lot of information to process and you may feel disappointed or frustrated by the lack of options... I wish there were more and better treatments... and I hope that in the future there will be.
Dr Moore became an evangelist, telling his doctor friends what had occurred. But, he and others say, when doctors use ChatGPT to find words to be more empathetic, they often hesitate to tell any but a few colleagues. “Perhaps that’s because we are holding on to what we see as an intensely human part of our profession,” Dr Moore said.
Or, as Dr Harlan Krumholz, the director of the Center for Outcomes Research and Evaluation at Yale School of Medicine, US, said, for a doctor to admit to using a chatbot this way “would be admitting you don’t know how to talk to patients”.
Still, those who have tried ChatGPT say the only way for doctors to decide how comfortable they would feel about handing over tasks — such as cultivating an empathetic approach or chart reading — is to ask it some questions themselves. “You’d be crazy not to give it a try and learn more about what it can do,” Dr Krumholz said.
NYTNS