AI Hallucinations: What Are They and Are They Always Bad?

Published: September 22, 2025

AI hallucinations may not always be bad, according to a panel of experts at the MedCity INVEST Digital Health Conference. Instead, they can signal gaps in the underlying data.

Hallucinations are a frequent point of concern in conversations about AI in healthcare. But what do they actually look like in practice? This was the topic of discussion during a panel held last week at the MedCity INVEST Digital Health Conference in Dallas.

According to Soumi Saha, senior vice president of government affairs at Premier Inc. and moderator of the session, AI hallucinations occur when AI “uses its imagination,” which can harm patients because it may provide incorrect information.

One of the panelists — Jennifer Goldsack, founder and CEO of the Digital Medicine Society — described AI hallucinations as the “tech equivalent of bullshit.” Randi Seigel, partner at Manatt, Phelps & Phillips, defined a hallucination as when AI makes something up, “but it sounds like it’s a fact, so you don’t want to question it.” Lastly, Gigi Yuen, chief data and AI officer of Cohere Health, said hallucinations are when AI is “not grounded” and “not humble.”

But are hallucinations always bad? Saha posed this question to the panelists, wondering if a hallucination can help people “identify a potential gap in the data or a gap in the research” that shows the need to do more.

Yuen said that hallucinations are bad when the user doesn’t know that the AI is hallucinating.
