
What You Should Know
- The Trend: Wolters Kluwer Health report reveals “Shadow AI”—the use of unauthorized AI tools by employees—has permeated healthcare, with nearly 20% of staff admitting to using unvetted algorithms and 40% encountering them.
- The Motivation: The driver isn’t malice, but burnout. Clinicians are turning to these tools to speed up workflows and reduce administrative burden, often because approved enterprise alternatives are missing or inadequate.
- The Risk: The gap in governance is creating massive liability, including data breaches (averaging $7.4M in healthcare) and patient safety risks from unverified clinical advice.
40% of Healthcare Staff Have Encountered Unauthorized AI Tools
A new report from Wolters Kluwer Health reveals the extent of this invisible infrastructure. According to the survey of over 500 healthcare professionals, 40% of staff have encountered unauthorized AI tools in their workplace, and nearly 20% admit to using them.
“Shadow AI isn’t just a technical issue; it’s a governance issue that may raise patient safety concerns,” warns Yaw Fellin, Senior Vice President at Wolters Kluwer Health. The data suggests that while health systems debate policy in the boardroom, clinicians are already deploying AI at the bedside—often without permission.
The Efficiency Desperation
Why are highly trained medical professionals turning to “rogue” technology? The answer is not rebellion; it is exhaustion.
The survey indicates that 50% of respondents cite “faster workflows” as their primary motivation. In a sector where primary care physicians would need 27 hours a day to provide guideline-recommended care, off-the-shelf AI tools offer a lifeline. Whether it’s drafting an appeal letter or summarizing a complex chart, clinicians are choosing speed over compliance.
“Clinicians and administrative teams want to adhere to rules,” the report notes. “But if the organization hasn’t provided guidance or approved solutions, they’ll experiment with generic tools to improve their workflows.”
The Disconnect: Admins vs. Providers
The report highlights a dangerous gap between those who make the rules and those who follow them.
- Policy Awareness: While 42% of administrators believe AI policies are “clearly communicated,” only 30% of providers agree.
- Involvement: Administrators are more than three times as likely to be involved in AI policy development (30%) as the providers actually using the tools (9%).
This “ivory tower” dynamic creates a blind spot. Administrators see a secure environment; providers see a landscape where the only way to get the job done is to bypass the system.
The $7.4M Risk
The consequences of Shadow AI are both financial and clinical. The average cost of a data breach in healthcare has reached $7.42M. When a clinician pastes patient notes into a free consumer chatbot, that data can leave the HIPAA-secured environment entirely and may be retained to train a public model on protected health information.
Beyond privacy, the physical risk is paramount. Both administrators and providers ranked patient safety as their number one concern regarding AI. A “hallucination” by a generic AI tool used for clinical decision support could lead to incorrect dosages or missed diagnoses.
From “Ban” to “Build”
The instinct for many CIOs is to lock down the network—blocking access to ChatGPT, Claude, or Gemini. However, industry leaders argue that prohibition is a failed strategy.
“GenAI is showing high potential for creating value in healthcare but scaling it depends less on the technology and more on the maturity of organizational governance,” says Scott Simeone, CIO at Tufts Medicine.
The solution, according to the report, is not to ban AI but to provide enterprise-grade alternatives. If clinicians are using Shadow AI because it solves a workflow problem, the health system must provide a sanctioned tool that solves that same problem just as fast—but safely.
As Alex Tyrrell, CTO of Wolters Kluwer, predicts: “In 2026, healthcare leaders will be forced to rethink AI governance models… and implement appropriate guardrails to maintain compliance.” The era of “looking the other way” is over.