Free ChatGPT may incorrectly answer drug questions, study says

The free version of ChatGPT may provide inaccurate or incomplete responses — or no answer at all — to questions related to medications, which could potentially endanger patients who use OpenAI’s viral chatbot, a new study released Tuesday suggests.

Pharmacists at Long Island University who posed 39 questions to the free ChatGPT in May deemed that only 10 of the chatbot’s responses were “satisfactory” based on criteria they established. ChatGPT’s responses to the 29 other drug-related questions did not directly address the question asked, or were inaccurate, incomplete or both, the study said. 

The study indicates that patients and health-care professionals should be cautious about relying on ChatGPT for drug information and verify any of the responses from the chatbot with trusted sources, according to lead author Sara Grossman, an associate professor of pharmacy practice at LIU. 

For patients, that can be their doctor or a government-based medication information website such as the National Institutes of Health’s MedlinePlus, she said.

An OpenAI spokesperson said the company guides ChatGPT to inform users that they “should not rely on its responses as a substitute for professional medical advice or traditional care.”

The spokesperson also shared a section of OpenAI’s usage policy, which states that the company’s “models are not fine-tuned to provide medical information.” People should never use ChatGPT to provide diagnostic or treatment services for serious medical conditions, the usage policy said.

ChatGPT was widely seen as the fastest-growing consumer internet app of all time following its launch roughly a year ago, which ushered in a breakout year for artificial intelligence. But along the way, the chatbot has also raised concerns about issues including fraud, intellectual property, discrimination and misinformation. 

Several studies have highlighted similar instances of erroneous responses from ChatGPT, and the Federal Trade Commission in July opened an investigation into the chatbot’s accuracy and consumer protections. 

In October, ChatGPT drew around 1.7 billion visits worldwide, according to one analysis. There is no data on how many users ask medical questions of the chatbot.

Notably, the free version of ChatGPT is limited to using data sets through September 2021 — meaning it could lack significant information in the rapidly changing medical landscape. It’s unclear how accurately the paid versions of ChatGPT, which began to use real-time internet browsing earlier this year, can now answer medication-related questions.  

Grossman acknowledged there’s a chance that a paid version of ChatGPT would have produced better study results. But she said that the research focused on the free version of the chatbot to replicate what more of the general population uses and can access. 

She added that the study provided only “one snapshot” of the chatbot’s performance from earlier this year. It’s possible that the free version of ChatGPT has improved and may produce better results if the researchers conducted a similar study now, she added.

Grossman noted that the research, which was presented at the American Society of Health-System Pharmacists’ annual meeting on Tuesday, did not require any funding. ASHP represents pharmacists across the U.S. in a variety of health-care settings.

ChatGPT study results

The study used real questions posed to Long Island University’s College of Pharmacy drug information service from January 2022 to April of this year. 

In May, pharmacists researched and answered 45 questions, which were then reviewed by a second researcher and used as the standard for accuracy against ChatGPT. Researchers excluded six questions because there was no literature available to provide a data-driven response. 

ChatGPT did not directly address 11 questions, according to the study. The chatbot also gave inaccurate responses to 10 questions, and wrong or incomplete answers to another 12. 

For each question, researchers asked ChatGPT to provide references in its response so that the information provided could be verified. However, the chatbot provided references in only eight responses, and each of those cited sources that do not exist.

One question asked ChatGPT whether a drug interaction (when one medication interferes with the effect of another taken at the same time) exists between Pfizer‘s Covid antiviral pill Paxlovid and the blood-pressure-lowering medication verapamil.

ChatGPT indicated that no interactions had been reported for that combination of drugs. In reality, those medications have the potential to excessively lower blood pressure when taken together.  

“Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect,” Grossman said. 

Grossman noted that U.S. regulators first authorized Paxlovid in December 2021. That’s a few months after the September 2021 data cutoff for the free version of ChatGPT, which means the chatbot has access to limited information on the drug. 

Still, Grossman called that a concern. Many Paxlovid users may not know the data is out of date, which leaves them vulnerable to receiving inaccurate information from ChatGPT. 

Another question asked ChatGPT how to convert doses between two different forms of the drug baclofen, which can treat muscle spasms. The first form was intrathecal, or when medication is injected directly into the spine, and the second form was oral. 

Grossman said her team found that there is no established conversion between the two forms of the drug, and that the conversion differed across the various published cases they examined. She said it is “not a simple question.” 

But ChatGPT provided only one method for the dose conversion in response, which was not supported by evidence, along with an example of how to perform that conversion. Grossman said the example had a serious error: ChatGPT incorrectly displayed the intrathecal dose in milligrams instead of micrograms.

Any health-care professional who follows that example to determine an appropriate dose conversion “would end up with a dose that’s 1,000 times less than it should be,” Grossman said. 
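The scale of that unit error can be sketched with a few lines of arithmetic. The numbers below are hypothetical for illustration (the study's actual doses are not given in this article); the point is simply that mislabeling micrograms as milligrams shifts any downstream calculation by a factor of 1,000.

```python
# Hypothetical numbers for illustration only; not taken from the study.
MCG_PER_MG = 1000  # 1 milligram = 1,000 micrograms

intended_dose_mcg = 300   # a dose meant to be read in micrograms (hypothetical)
mislabeled_dose_mg = 300  # the same number mislabeled as milligrams

# Express the mislabeled figure in micrograms to see the gap:
mislabeled_dose_mcg = mislabeled_dose_mg * MCG_PER_MG

factor = mislabeled_dose_mcg // intended_dose_mcg
print(factor)  # 1000
```

Whichever direction the mix-up runs, any dose derived from the wrong unit ends up off by that same thousand-fold factor.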

She added that patients who receive a far smaller dose of the medicine than they should be getting could experience a withdrawal effect, which can involve hallucinations and seizures.
