What shadow AI is trying to tell your health plan
Published: March 18, 2026

AI adoption in healthcare operations is no longer a question of if—it’s a question of how fast, and how well. Health plans are under pressure to modernize, streamline clinical operations, and prove ROI from technology investments. In that environment, the temptation to move quickly is real and understandable.
As health plans shift more of their budgets toward AI solutions—and as employees start turning to unsanctioned tools when other options aren't available—two related threats have emerged: general-purpose AI deployed in clinical settings where it doesn’t belong, and “shadow AI” adopted by employees without oversight.
As both become more common, they pose significant risks to patient safety, data security, and long-term operational integrity.
The AI gold rush in healthcare
According to Deloitte’s 2024 Life Sciences and Health Care Generative AI Outlook Survey, 75% of leading health care companies are already experimenting with or planning to scale generative AI across the enterprise. Providers are leading this charge—and health plans are watching closely. Health systems have reached 27% domain-specific AI adoption while health plans sit at just 14%, according to Menlo Ventures. The gap is closing fast, and the competitive pressure to close it is real.
CTOs and COOs are being asked to move quickly and to start deploying AI not just for a single use case or team but across clinical operations at scale. The ambition is right, but the approach matters more. The health plans that win with AI won’t be the ones that moved fastest. They’ll be the ones that moved fast and right.
Two risks health plans can’t ignore
As AI adoption accelerates, two problematic patterns are quietly taking root in health plans. Neither is the result of bad intentions, but both carry real consequences.
Risk #1: General-purpose AI in clinical settings
General-purpose AI tools—think large language models (LLMs) or automation platforms built for broad commercial use—can be powerful in the right context. In clinical operations, however, they often fall short in ways that matter.
The core problem is that these tools are built for breadth, not depth. They aren’t trained on clinical workflows or the nuanced requirements of a health plan’s utilization management (UM) program, and they haven't been validated against the decisions that affect patient outcomes. The result is lower accuracy, a higher risk of hallucination, and a lower return on investment.
Consider how clinical policy language is actually written. A guideline may state coverage applies when a patient has “severe lung disease.” For a clinician or UM nurse, that phrase maps onto specific criteria such as pulmonary function thresholds, oxygen saturation levels, documented symptoms, and prior treatment history. A generic AI model may understand the phrase linguistically but lack the clinical context to interpret it correctly. Without structured policy logic based on clinical evidence or clinical validation, the model may produce an answer that sounds reasonable but doesn’t align with the policy’s true intent.
That gap introduces several risks:
Hallucinations and inaccurate recommendations.
LLMs generate plausible responses, not necessarily clinically correct ones. When applied to medical policies or patient data, they may fabricate details, infer missing criteria, or produce recommendations unsupported by clinical evidence.
Opaque reasoning.
Many general-purpose models can produce a recommendation, but the reasoning behind it is difficult to trace, making it hard to tie decisions back to clinical policies or to provide the required auditability.
Data security and compliance exposure.
General-purpose AI tools often lack the HIPAA-specific safeguards, data controls, and audit trails required in healthcare. If protected health information (PHI) enters these systems without proper protections, health plans may face significant compliance risk.
Bias reinforcement.
General-purpose models are trained on broad datasets that may contain biased or incomplete information. Without domain-specific tuning and clinical oversight, those biases can surface in recommendations and introduce variability in decision-making.
Risk #2: Shadow AI
Shadow AI refers to tools employees adopt outside of IT or leadership approval because sanctioned options are unavailable or inadequate. These tools almost always rely on general-purpose AI rather than solutions built for clinical operations. A January 2026 Wolters Kluwer Health survey of more than 500 healthcare workers found that over 40% were aware of colleagues using unauthorized AI tools—and nearly 20% admitted to using one themselves.
So why does this happen? Healthcare workers see the productivity gains AI can deliver and won’t wait for a procurement cycle that could take months. Of those who said they used unapproved AI tools, 45% claimed faster workflows as their primary motivation, and another 24% said the unapproved tool offered better functionality than what their organization had approved.
Shadow AI carries all the risks of general-purpose AI, but with an additional layer of exposure that is far more difficult to manage. There are no guardrails, no oversight, and no audit trail. The HIPAA implications alone are significant: any time PHI is entered into an unsanctioned tool, the health plan may be in violation even if no breach occurs.
But here’s the important reframe: shadow AI isn’t just a threat; it’s a diagnostic signal. When employees are turning to unauthorized tools, they’re telling you something important: there are workflows in your organization where people are under-resourced, and where approved AI could be making a meaningful difference.
Moving fast is still a necessity, but it cuts both ways
None of this is an argument for slowing down. Health plans that fail to adopt AI risk rising costs, escalating medical loss ratios, and growing exposure to errors or misaligned clinical decisions. Slow adoption also fuels shadow AI: when teams lack approved, purpose-built tools, they find their own solutions. The longer the gap persists, the greater the operational, compliance, and financial risk across departments.
COOs and CTOs face a delicate balancing act. They’re asked to deploy AI at enterprise scale while remaining accountable for clinical quality, data security, and regulatory compliance. This is exactly why the AI partner a health plan chooses—and the process used to choose that partner—matter so much.
A look at trustworthy clinical AI
A horizontal AI strategy—one that spans clinical operations from utilization management to care coordination to quality review—is only as strong as the trust embedded in each component.
When evaluating AI for clinical operations, look for technology that checks the following boxes:
- Clinically purpose-built: Uses clinical data, validated in real workflows, and designed for your specific operational context.
- Explainable by design: Provides recommendations that decision-makers can understand, with a clear audit trail.
- Governance-ready: Maintains robust data policies, access controls, and an independently validated security posture.
- Proven in production: Demonstrates real-world outcomes, provides references, and performs against benchmarks—not just pilot results.
- Transparent and auditable: Ensures all decisions and processes can be reviewed, tracked, and verified by internal or external stakeholders.
- Built for the long run: Offers ongoing support, delivers model updates, and maintains clear escalation paths when issues arise.
The bottom line
AI has real potential to drive meaningful gains in clinical operations—reducing administrative burden, improving accuracy, accelerating decision-making, and freeing clinical staff to focus on higher-value work. But adopting AI is only the first step. Health plans need rigorous vetting upfront and ongoing governance to ensure these systems remain accurate, compliant, and aligned with clinical policies.
Moving fast is necessary, but moving fast and right is the only sustainable approach.
Looking for AI solutions built specifically for clinical operations? Learn how purpose-built clinical AI can help your health plan move fast without the risk.
Written by Cohere Health
Cohere Health’s clinical intelligence platform and agentic AI-powered solutions connect health plans’ strategic goals and providers’ needs, optimizing the speed, cost, and quality of care. With an enterprise approach that streamlines payer-provider decision-making across the care continuum–including policy, prior authorization, payment accuracy, and more–the company improves collaboration and reduces burden, resulting in up to 8x ROI and 94% provider satisfaction. Cohere Health is recognized on TIME’s World’s Top HealthTech Companies 2025 list, on the 2025 Inc. 5000 list, and by numerous industry analysts.



