AI tools have become so accessible that getting started is almost effortless.
With just a few clicks, anyone in your organisation (from the intern to the finance director) can start using powerful, publicly accessible AI tools in the course of their work for free.
But this ease of access brings with it a new category of risk.
Without clear governance or guardrails in place, you may have little or no control over what AI tools your people are bringing into the workplace, what data is being uploaded to them or what information they are recording. For example, how many of us have joined an online meeting recently, only to notice an unfamiliar ‘AI note-taker’ quietly joining as another attendee?
This week, we’re exploring what’s known as ‘Shadow AI’ – what it is, why it matters and what you can do to help protect your organisation while enabling your teams to harness the benefits of AI.

What Is Shadow AI?
Shadow AI refers to employees using unapproved AI tools in the workplace – usually with good intentions, but without input from IT, Compliance, or Legal teams.
Here are some common examples:
- Using an unapproved AI tool to draft responses to clients
- Uploading meeting recordings to an unapproved AI summariser
- Feeding internal reports or spreadsheets into unapproved AI tools for faster analysis
- Using free transcription bots in video calls
Used in the right way, these tools can add huge value. But without governance, they can also introduce significant risk. And often, it’s not down to malice – just a lack of guidance and clarity.
Why It Matters
According to a 2025 KPMG survey on UK attitudes to AI:
- 65% of UK workers say they intentionally use AI at work.
- Yet more than half (54%) admit they have made mistakes in their work because of AI.
- Over a third (38%) confess to using it in inappropriate ways, including uploading copyrighted or sensitive material.
- Worryingly, 39% of workers using public AI tools say they have entered company data, such as financial, sales, or customer information, into these systems.
This kind of unchecked AI use can create a perfect storm of risks for organisations, including:
- Data privacy breaches
- Legal non-compliance
- Reputational damage
- Loss of control over sensitive information
The biggest risk here isn’t AI itself. The real risk is not having oversight or governance around how it’s being used in your organisation.
AI is not going away
The reality is that the AI train has already left the station. Whether you’re on board or not, some of your people may already be using unapproved AI tools in their daily work, and many more are eager to explore them.
AI is not a passing trend; it’s rapidly becoming embedded in how work gets done. Rather than resisting it, forward-thinking organisations are putting measures in place to manage the risk – empowering their people to take advantage of the incredible opportunities AI offers, in a safe and secure environment.
The goal isn’t to shut it down. The goal is to create an environment where AI can be used responsibly within your organisation. That means:
- Creating clear AI governance
- Reviewing and approving tools centrally
- Setting clear boundaries for what’s acceptable
- Training staff so they know how to use AI the right way
Shadow AI is already making the headlines
A recent Sky News investigation revealed that GPs had been using unapproved AI tools to transcribe patient consultations. While the tools saved time and improved efficiency, they also led to sensitive patient information being uploaded to public platforms.
The intention wasn’t harmful – the problem was a lack of AI policy and training. With clear guardrails in place, staff would have been better equipped to know exactly which tools were safe to use, and how to use them properly.
And a recent study of over 300 legal departments found that 81% were using unapproved AI tools, and nearly half had no formal AI policy in place. That’s a huge compliance risk, particularly in sectors handling sensitive data.
These aren’t isolated cases. Shadow AI is widespread, affecting organisations of all sizes in every sector.
Types of risk your organisation could be exposed to
| Risk | Example |
| --- | --- |
| Legal Risk | Breaching GDPR or industry-specific regulations |
| Data Privacy | Uploading client, HR, or financial data into public AI tools |
| Bias & Discrimination | AI outputs that disadvantage protected groups in hiring or decision-making |
| Loss of Confidentiality | Leaks via AI-generated content or chatbots |
| Reputational Damage | Public exposure of AI-generated errors or misuse |
| Compliance Failures | Use of AI in regulated sectors without proper oversight |
How to Tackle Shadow AI in Your Organisation
At South West AI Solutions, we help organisations create a safe, structured environment where AI can be used responsibly.
Here’s what we typically recommend:
1. Audit What’s Already in Use
Start by understanding which tools are already being used – formally or informally.
This isn’t a witch-hunt; it’s a chance to learn what’s working, identify gaps, and start shaping a list of approved tools. And don’t overlook your early adopters: those already using AI (even unapproved tools) may become your best AI advocates if supported in the right way – ready to help colleagues learn, and offering valuable insight into real-world use cases.
2. Build a Simple Governance Framework
A clear, practical policy that outlines what’s allowed, what isn’t, and what needs approval is a strong foundation. Align it with GDPR and your risk appetite.
3. Train Your Teams
Most misuse stems from a lack of knowledge, not intent. Equip your staff with the training and guidance they need to use approved AI tools safely.
4. Appoint AI Champions
Every team has someone who’s naturally curious and ahead of the curve. Empower these individuals to be internal champions – helping colleagues stay compliant, while still exploring the benefits of AI.
Final Thought
Shadow AI isn’t about bad people doing bad things. It’s about good people using powerful tools without the right guardrails in place.
As business leaders, our job isn’t to shut it down – it’s to light the path forward. Because when AI is used well, it doesn’t just save time; it empowers your people, drives innovation, and helps your organisation focus on what really matters.
Need Help Creating a Safer AI Culture?
Whether you need help auditing current use, building a governance framework, training your teams, or developing a secure toolkit – we’re here to help.
Book a free introductory call with our team today and let’s help make AI safe, strategic, and empowering for you, your people, and your organisation.

Matt Greaves
CEO and Co-Founder
South West AI Solutions