Your Copilot Rollout Just Told You Something Important. Are You Listening?
You’re six months into your Copilot deployment. Adoption sits at 40%. IT Security breathes easier - at least people aren’t using unapproved tools anymore.
Except they are.
The same people who won’t touch your carefully deployed, security-approved, perfectly compliant Copilot are happily pasting proprietary information into ChatGPT. Walk past any desk and you’ll see it: browser tabs open to Claude, Perplexity, whatever free tool promises to help them get through their day faster.
This isn’t a technology problem. It’s an organisational trust problem. And your Copilot rollout just diagnosed it.
The Wrong Question
Most leadership teams are asking: “How do we get adoption up?”
That’s measurement theatre. It assumes the problem is user resistance, insufficient training, or poor communication. It isn’t.
The right question is: “What did this rollout teach us about our organisation?”
Because here’s what happened: you deployed a tool meant to replace unauthorised AI usage. Instead, you created a bifurcated environment where people use approved tools for visible, low-risk work and continue using unapproved tools for everything that actually matters.
That’s not failure. That’s intelligence. The question is whether you’re paying attention to what you just learnt.
Why IT-Led Deployments Create This Problem
IT approaches deployment as a data and security problem:
Lock down the tool
Control access
Monitor usage
Measure adoption rates
This is correct for deployment. It’s catastrophic for adoption.
Adoption is a change management problem:
What job is this tool actually doing for people?
Why would someone choose this over what they’re already using?
What organisational factors make the “approved” option feel worse than the “risky” option?
When IT leads the entire initiative, you get a secure, compliant tool that nobody trusts enough to use for anything important. When people lead their own adoption - ChatGPT, Claude, whatever they found on their own - you get enthusiastic usage of insecure tools.
The gap between these two realities is where your rollout teaches you something valuable.
What Good Intelligence Actually Looks Like
Stop measuring adoption rates. Start measuring learning rates.
Instead of “40% adoption,” you want to know:
What are people actually using it for?
15% using it for email summarisation
12% for meeting notes
8% for code review
5% tested it for two weeks and abandoned it
Why did they abandon it?
Not “user error” or “resistance to change.” Actual reasons:
Response time too slow compared to ChatGPT
Doesn’t integrate with existing workflow
Output quality worse than what they were already using
Requires too many approval steps to be useful for real work
Interface assumes they work differently from how they actually do
What are the resisters using instead?
22% still using ChatGPT for the exact tasks Copilot was meant to replace. When you ask them why:
“It just works”
“I don’t have to ask permission”
“The interface makes sense”
“I trust my judgement more than I trust the approved tool”
That last one should stop you cold.
Map the shadow AI ecosystem
Who’s using what tools, for which tasks, and what does that pattern tell you about what people actually need versus what you thought they needed?
This isn’t about catching people breaking policy. It’s about understanding why your approved solution lost to unauthorised alternatives.
The Trust Problem Nobody’s Talking About
Here’s what your rollout just revealed: people are sceptical of AI in general - concerned about data privacy, worried about bias, nervous about GDPR compliance. All the surveys show this. All the focus groups confirm it.
And yet those same sceptical people are enthusiastically using unauthorised AI tools.
The resistance isn’t to AI. It’s to your AI.
They trust their own judgement with ChatGPT more than they trust your organisation’s judgement with Copilot. That’s not irrational. That’s learnt behaviour.
Ask the resisters directly:
“Why does the unapproved tool feel safer to you?”
“What would need to be true for you to trust the approved option?”
Their answers will tell you everything about your organisation’s relationship with risk, control, and trust. And probably nothing you want to hear.
Common themes from organisations that actually asked:
“The approved tool feels like surveillance”
When every use is logged, monitored, and potentially flagged, people make a risk calculation: personal exposure from using unauthorised tools feels less threatening than organisational visibility into what they’re working on.
“I don’t trust it to work when I need it”
IT-led deployments prioritise uptime and security. But if the tool is slow, gets blocked by firewalls, or goes down during critical work periods, people learn it’s unreliable. They route around it.
“Nobody asked us what we actually needed”
The deployment assumed use cases. The resisters found those assumptions wrong. Rather than fight to change the approved tool, they just kept using what already worked.
“It doesn’t integrate with how we actually work”
The approved tool requires people to change their workflow to accommodate it. The unauthorised tools adapted to their existing workflow. The unauthorised tools won.
What This Means for Your Next AI Initiative
Your Copilot rollout wasn’t supposed to solve your AI adoption problem. It was supposed to diagnose it.
The diagnosis: your organisation’s biggest barrier to AI adoption isn’t technology, security, or capability. It’s trust. And IT can’t deploy that.
Before your next AI initiative:
Stop treating adoption as an IT problem to solve
IT should absolutely lead security, compliance, and technical deployment. But change management needs to lead adoption strategy. These are different skills, different goals, different metrics.
Interview the resisters like they’re consultants, not problems
They know something about your organisation that you don’t. They’re not Luddites. They’re people who did the maths and decided your approved solution doesn’t serve them better than the alternatives they found.
Find out why. Their answers are more valuable than any adoption metric.
Map what people are actually using AI for
Not what you hoped they’d use it for. What they’re actually doing. The shadow AI ecosystem exists because it’s solving real problems your approved tools aren’t. That’s data. Use it.
Acknowledge that trust is earned, not deployed
You can’t mandate trust through policy. If your approved tools consistently feel worse than unauthorised alternatives - slower, more restrictive, less useful - people will route around them. They’ll smile in the training session and go back to ChatGPT the moment you leave the room.
Build tools worth trusting. That means involving the people who’ll use them before deployment, not after adoption stalls.
Measure what you learnt, not what you deployed
Success isn’t “93% adoption within six months.” Success is “we now understand why approved tools lose to unauthorised ones, and we’re addressing those specific organisational factors.”
That’s intelligence you can use. Adoption rates are just vanity metrics if you don’t understand what’s underneath them.
The Real Stakes
Most AI initiatives fail in the messy middle - not because the technology is bad, but because organisations don’t understand their own capacity for change until it’s too late.
A Copilot rollout, done right, is the cheapest organisational diagnostic you’ll ever run. It shows you:
Where trust breaks down
What your people actually need versus what you assumed they needed
Which parts of your organisation are ready for change and which will resist anything that feels imposed
What risk calculations people are making when they choose unauthorised tools over approved ones
Done wrong, it’s just an expensive way to discover that your people don’t trust you enough to use the tools you give them.
The question isn’t whether your Copilot rollout succeeded or failed. The question is: what did you learn, and what are you going to do about it?
Because your next AI initiative is coming. And if you haven’t figured out why people chose ChatGPT over Copilot, you’re about to make the same mistake at larger scale with higher stakes.
About me: I’m Paul, an AI adoption consultant based in Sheffield. I help UK mid-market organisations diagnose and fix failing AI implementations. I previously led L&D functions at HelloFresh, Babbel, and Morgan Stanley. If your organisation is stuck in one of these failure patterns and you want to talk about what a fix looks like, get in touch.

