Paul Thomas

Why the UK's AI Training Push Won't Fix the £400bn Skills Gap (And What Will)

The UK government announced last week that it's launching new AI skills frameworks to address a £400 billion productivity gap. Growth Hubs are running 15-week bootcamps. The British Chambers of Commerce has partnered with training providers to deliver "role-specific upskilling." On paper, this looks like progress.

But here's what people in the UK are actually doing: searching for free AI tools. They want to know what AI is. They're looking for generators and detectors. The most common questions aren't "how do I train my team?" or "what does AI competency look like?"—they're "where can I get this for free?"

This isn't a training gap. This is a fundamental disconnect between what organisations think they need and what their people are actually doing with AI.

In this analysis, we'll examine why traditional AI training fails UK businesses, what competency-based frameworks actually work, and how your organisation can bridge the gap before it becomes insurmountable.

UK AI Adoption Statistics: The Gap Between Use and Competency

New data from the British Chambers of Commerce suggests that 35% of UK SMEs are "actively using" AI, with some reports putting engagement as high as 89% once embedded tools like Microsoft Copilot or Xero's AI features are counted. The headline writes itself: "Turning point as more SMEs unlock AI."

But the data reveals something more troubling: adoption is happening by accident, not design.

People are using AI because it's baked into the software they already pay for, not because they've been trained to use it strategically. The proof is in the search behaviour.

According to Google Trends data for the UK, "AI detector" and "AI checker" both rank in the top 15 AI-related searches. Translation: more people are worried about getting caught using AI than are searching for how to use it effectively.

When paranoia outranks competency, you don't have an adoption problem—you have a trust and capability problem.

The Real Cost of Untrained AI Adoption

The £400 billion figure represents more than just lost productivity. It represents:

  • Wasted software investments: Companies paying for Copilot licenses that employees don't know how to use beyond basic text generation

  • Hidden compliance risks: Teams using AI tools without understanding data governance implications

  • Missed strategic opportunities: Organizations using AI for simple automation when it could be driving genuine capability transformation

  • Widening competitive gaps: The distance between companies with structured AI competency and those without

This isn't theoretical. Skills England's recent audit found that while AI tool adoption has surged, competency assessments show minimal improvement in actual capability metrics over the past 18 months.

Why Traditional AI Training Programs Fail UK Businesses

For the past year and a half, UK businesses have been training their staff through webinars, YouTube tutorials, and "just ask ChatGPT" directives. No quality standards. No competency benchmarks. No clear definition of what "good enough" looks like.

The result? A workforce that can generate text but can't evaluate whether that text is any good, let alone whether it's solving the right problem.

The Three Critical Failures of Current AI Training

1. Training Happens in a Vacuum

Most AI training courses teach prompt engineering in isolation from actual work. Employees learn to write clever prompts but can't integrate AI into their daily workflows because they're never shown how.

Example: A marketing team learns to generate social media captions in a two-hour workshop. Six months later, they're still writing captions manually because no one showed them how to:

  • Integrate AI into their content calendar system

  • Maintain brand voice consistency across AI-generated content

  • Build quality control processes for AI outputs

  • Measure whether AI-generated content performs better than human-written content

2. No Competency Standards Exist

Until The Alan Turing Institute's December 8 consultation on an AI Skills for Business Competency Framework, the UK had no standardized definition of what "AI literacy" means.

This means:

  • No way to assess whether training actually worked

  • No benchmarks to measure employee progress

  • No clear expectations for what "proficient" looks like

  • No accountability for training providers

3. Focus on Tools Instead of Capability

Most training asks: "How do we use ChatGPT?"

The right question is: "What work can we do now that we couldn't do before?"

That's the difference between augmentation (doing the same work faster) and amplification (doing fundamentally different work). UK businesses are being trained for augmentation while their competitors build amplification capabilities.

Effective AI Training Frameworks for UK Businesses

The UK's new Skills England AI framework, launched in late 2025, represents the first serious attempt to standardize AI competency. Regional Growth Hubs across Berkshire, Norfolk and Suffolk are now running hybrid bootcamps that combine self-paced learning with in-person mentoring.

The British Chambers of Commerce's AI Academy is delivering "workflow-integrated" training—teaching people how to use AI within their actual daily tasks rather than in a vacuum.

These are significant improvements. But they're at least 18 months too late.

What Good AI Competency Training Actually Includes

Based on my work with UK mid-market organizations implementing AI successfully, effective competency-based training must include:

Workflow Integration From Day One

Don't teach AI in a classroom. Teach it inside the actual software your team uses. If your sales team works in Salesforce, train them on AI within Salesforce. If your finance team uses Xero, that's where AI training happens.

One financial services firm I worked with moved from 23% AI adoption to 87% in four months by abandoning webinars entirely. Instead, they appointed "AI champions" in each department who trained colleagues directly within their workflow tools during regular work hours.

Clear Competency Levels

Skills England's framework defines four competency levels:

  1. Awareness: Can explain what AI is and identify appropriate use cases

  2. Application: Can use AI tools effectively for routine tasks

  3. Analysis: Can evaluate AI outputs critically and optimize prompts for quality

  4. Autonomy: Can design AI-enhanced workflows and train others

Every training program should specify which level it targets and how success is measured at each level.
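
If you're tracking assessments in a spreadsheet or HR system, the framework is simple enough to model directly. Here's a minimal sketch in Python — the level definitions come from the framework above, but the function and field names are my own invention, not Skills England's:

```python
from collections import Counter
from enum import IntEnum


class CompetencyLevel(IntEnum):
    """Skills England's four levels, ordered lowest to highest."""
    AWARENESS = 1    # can explain what AI is and identify use cases
    APPLICATION = 2  # can use AI tools effectively for routine tasks
    ANALYSIS = 3     # can evaluate outputs critically and optimise prompts
    AUTONOMY = 4     # can design AI-enhanced workflows and train others


def audit_summary(assessments: dict[str, CompetencyLevel]) -> dict[str, float]:
    """Percentage of staff at each level, for a quarterly competency audit."""
    counts = Counter(assessments.values())
    total = len(assessments)
    return {level.name: 100 * counts[level] / total for level in CompetencyLevel}


# Illustrative quarterly audit for a four-person team
team = {
    "analyst_1": CompetencyLevel.ANALYSIS,
    "coordinator": CompetencyLevel.APPLICATION,
    "new_starter": CompetencyLevel.AWARENESS,
    "ops_manager": CompetencyLevel.APPLICATION,
}
print(audit_summary(team))
# {'AWARENESS': 25.0, 'APPLICATION': 50.0, 'ANALYSIS': 25.0, 'AUTONOMY': 0.0}
```

Because the levels are ordered, a question like "what share of staff are at Analysis or above?" reduces to a one-line comparison over the same data.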

Quality Control Before Scale

The biggest mistake UK businesses make is rolling out AI tools company-wide without teaching people how to evaluate output quality.

A professional services firm I worked with implemented a simple rule: For the first 30 days, every AI-generated document must be reviewed by someone at "Analysis" level competency before it's sent to clients. After 30 days, employees at "Application" level could self-certify.

This prevented quality disasters while building confidence and competency simultaneously.

Regular Competency Assessment

Skills England's Employer AI Adoption Checklist includes quarterly competency audits. This isn't about testing employees—it's about identifying where gaps remain so training can adapt.

One manufacturing company discovered through their Q2 audit that while 90% of employees could use AI for basic tasks, only 12% could evaluate whether AI's recommendations were actually better than their existing processes. They adjusted training to focus specifically on critical evaluation skills, and by Q3 that number had jumped to 68%.

Agentic AI: The Growing Divide in UK Business Capabilities

"Agentic AI" has become one of the fastest-growing search terms in the UK over the past year. For those not tracking the terminology, agentic AI refers to autonomous systems that can pursue goals independently—AI that doesn't just draft an invoice but actually sends it, or software that doesn't just flag maintenance issues but schedules the engineer.

According to recent sector analysis, approximately 22% of UK SMEs are now piloting agentic systems. That's the headline.

The reality? 78% are still stuck on basic prompt engineering, asking ChatGPT to write emails and hoping for the best.

What the 22% Are Doing Differently

The businesses successfully piloting autonomous AI agents aren't doing it because they're smarter or better funded. They're doing it because they've built the foundational competency to handle more complex implementations.

Here's what that looks like in practice:

Case Study: Regional Distribution Company

A Midlands-based distribution company with 450 employees moved from basic AI adoption to piloting agentic systems in 14 months. Their progression:

Months 1-3: Workflow-integrated training focused on getting employees to "Application" level competency with AI-assisted data entry and customer communication.

Months 4-6: Quality control processes established. AI outputs reviewed systematically. Competency audits identified which employees were ready for "Analysis" level work.

Months 7-9: Analysis-level employees trained on building AI-enhanced workflows. First autonomous processes piloted in low-risk areas (routine order confirmations, basic inventory alerts).

Months 10-14: Agentic systems deployed for route optimization and predictive maintenance scheduling. ROI: 34% reduction in delivery delays, 28% improvement in equipment uptime.

The key insight: They didn't jump to agentic AI. They built capability systematically, with clear competency standards at each stage.

Why Most UK Businesses Aren't Ready for Agentic AI

If your team is still searching "what is AI" in 2025, you're not ready for autonomous agents in 2026. The gap isn't technical—it's competency.

Agentic systems require:

  • Strong evaluation skills: Can your team assess whether an autonomous decision was correct?

  • Clear process design: Do you understand your workflows well enough to automate them intelligently?

  • Risk management protocols: Can you identify and mitigate potential failures before they happen?

  • Change management capability: Can your organization adapt when AI fundamentally changes how work gets done?

The 22% piloting agentic AI have these capabilities. The 78% don't—and traditional training won't get them there.

How to Build AI Competency in Your Organization

If you're an L&D leader, HR director, or operations manager, the uncomfortable truth is that your organisation probably falls into the 78%. Not because you're incompetent, but because the infrastructure for good AI training simply didn't exist until the last few months.

Here's what actually matters now:

1. Audit Before You Train

Skills England's Employer AI Adoption Checklist should be your first step. Find out where your actual gaps are before you pay for another webinar on prompt engineering.

Key questions to answer:

  • What percentage of employees are at each competency level?

  • Which workflows could benefit most from AI integration?

  • Where are the biggest quality control gaps?

  • Who has natural aptitude for advanced AI work?

One professional services firm discovered through their audit that their junior staff were actually more AI-competent than senior partners. They restructured training entirely, having juniors train seniors on tactical AI use while seniors focused on strategic application.

2. Appoint an AI Competency Owner

The UK government just appointed Professor Chris Dungey as a national AI champion for manufacturing. Your organisation needs its own version—someone who owns AI adoption end-to-end.

This isn't an IT role (implementing tools) or an L&D role (running courses). It's a strategic role that:

  • Sets competency standards

  • Designs training that actually works

  • Measures progress against clear benchmarks

  • Identifies and scales what's working

  • Kills what isn't

At smaller organizations (under 100 employees), this might be 20-30% of someone's role. At larger organizations, it's a full-time position.

3. Focus on Workflow Integration, Not Theory

The BCC's model of teaching people AI within their daily workflows is the only approach that scales. If your training happens in a classroom or webinar rather than inside the actual software your team uses, you're wasting time and money.

Practical implementation:

  • Identify the 3-5 most common workflows in each department

  • Train people to use AI within those specific workflows

  • Measure adoption by tracking whether AI is actually being used in daily work

  • Iterate based on what's working and what isn't

4. Build Quality Control Before Scaling

Don't roll out AI company-wide until you've established:

  • Clear standards for what "good" AI output looks like

  • Review processes for high-stakes work

  • Feedback loops so people learn from mistakes

  • Escalation protocols when AI produces questionable results

One healthcare administration company implemented a "red/yellow/green" system: Green tasks could be AI-automated immediately, yellow tasks required human review, red tasks stayed manual until competency improved. This prevented quality disasters while building confidence systematically.
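
A rule like that is simple enough to encode as configuration. Here's a hedged sketch of what such a task register might look like — the task names and tier assignments are invented for illustration, not taken from the company above:

```python
from enum import Enum


class RiskTier(Enum):
    GREEN = "AI-automate immediately"
    YELLOW = "AI-assisted, human review required"
    RED = "stays manual until competency improves"


# Hypothetical task register for an admin team
TASK_TIERS = {
    "appointment_reminder": RiskTier.GREEN,
    "patient_correspondence": RiskTier.YELLOW,
    "clinical_coding": RiskTier.RED,
}


def tier_for(task: str) -> RiskTier:
    """Unclassified tasks default to RED: nothing gets automated by omission."""
    return TASK_TIERS.get(task, RiskTier.RED)


assert tier_for("appointment_reminder") is RiskTier.GREEN
assert tier_for("brand_new_task") is RiskTier.RED
```

The default matters more than the dictionary: any task nobody has explicitly classified stays manual, which is the quality-control-before-scale principle in one line.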

5. Prepare for Agentic Systems Now

As we saw with the distribution company above, the 22% piloting autonomous AI agents aren't smarter or better funded. They simply built the foundational competency for more complex implementations first.

If your team is still asking "what is AI" in 2025, you're not ready for 2026. But if you start building competency now—with clear standards, workflow integration, and regular assessment—you can close that gap within 12-18 months.

Frequently Asked Questions About the UK AI Skills Gap

What is the UK AI skills gap?

The UK AI skills gap refers to the £400 billion productivity shortfall caused by the disconnect between AI tool adoption (up to 89% of businesses once embedded tools are counted) and actual AI competency (an estimated less than 25% of the workforce operating at "Analysis" level or higher).

Why is AI training failing UK businesses?

Traditional AI training fails because it teaches tools in isolation from actual work, lacks competency standards, and focuses on augmentation (working faster) rather than amplification (working differently). Without workflow integration and clear benchmarks, training doesn't translate into capability.

What is agentic AI?

Agentic AI refers to autonomous systems that can pursue goals independently without human intervention. Examples include AI that automatically schedules appointments, sends invoices, or optimizes supply chain routing—not just recommending actions but actually executing them.

How much does effective AI competency training cost?

Costs vary significantly based on organization size and current capability level. Skills England's funded bootcamps are available at subsidized rates for qualifying businesses. Private training ranges from £500-£2,500 per employee for comprehensive competency-based programs, versus £50-£200 for basic webinars that typically don't produce lasting capability gains.

What is Skills England's AI framework?

Skills England's AI framework, launched in late 2025, defines four competency levels (Awareness, Application, Analysis, Autonomy) and provides standardized assessment tools for measuring employee capability. The framework is designed to give UK businesses a common language for AI competency and help them benchmark progress.

How long does it take to build AI competency?

With structured, workflow-integrated training, most organizations can move employees from "Awareness" to "Application" level in 2-3 months, and from "Application" to "Analysis" level in 6-9 months. Reaching "Autonomy" level typically requires 12-18 months of systematic development.

Solving the UK AI Skills Gap: Next Steps

The £400 billion figure being thrown around isn't about training more people. It's about training people properly. It's about moving from "we use AI" to "we use AI competently." It's about understanding that adoption without competency isn't progress—it's just expensive theatre.

The good news? The infrastructure is finally being built. Competency frameworks, regional bootcamps, workflow-integrated training—these are all significant improvements on the "watch one webinar and figure it out" approach of 2024.

The bad news? Most organisations haven't realised they need to catch up yet. They're still searching for "free AI" while their competitors are deploying autonomous agents.

You've already adopted AI. Now it's time to adopt it competently. The market won't wait.

Need Help Assessing Your Organization's AI Competency Gap?

I work with UK mid-market organizations to diagnose why AI implementations aren't delivering expected results and build practical competency frameworks that actually work. If your AI adoption is stalling, your training isn't translating into capability, or you're not sure whether you're in the 22% or the 78%, let's talk.

Schedule a diagnostic assessment | Read more about AI implementation challenges

About the Author

Paul Thomas is an AI adoption consultant specializing in diagnosing and fixing failing AI implementations in UK mid-market organisations. He works with L&D leaders, HR directors, and operations managers to build competency-based AI training that actually delivers results. Paul writes The Human Stack, a newsletter on AI, work, and organisational change. Get in touch to discuss your organization's AI competency strategy.

Read More
Paul Thomas

AI Implementation: Why Less Than 25% of UK Organisations Turn Adoption into Results

We finally have data on what everyone suspected: most organisations are buying AI tools but not actually using them to change anything.

A new study from consulting firm Differentis shows that despite accelerated AI adoption, less than a quarter of organisations are turning AI input into business action. Separately, research from team.blue surveying 8,000+ European small businesses found that whilst nearly one in five are using AI extensively, 30% don't know which digital tools they should be using and 26% lack the confidence to get started.

The interesting bit isn't that AI adoption is happening slowly. It's that adoption is happening – organisations are spending money, running pilots, attending workshops – but nothing is actually changing. We're not in an awareness problem anymore. We're in an execution problem.

The question isn't "should we adopt AI?" It's "why are we so bad at turning AI purchases into AI results?"

The AI implementation gap: What the latest research shows

The Differentis research points to organisations "trying to run before they can walk" – investing out of FOMO without clear use cases or change management capacity. The team.blue survey shows something similar from a different angle: established businesses (operating for more than a decade) show the highest resistance, with around 60% having no plans to use AI.

These aren't laggards. These are experienced operators who've seen "transformational technology" rollouts before. They know what happens: big announcement, pilot project, some training sessions, a dashboard no one looks at, and six months later everyone's back to working the old way with an unused SaaS subscription somewhere in the budget.

Here's what's actually happening in most AI rollouts:

Someone senior decides "we need to do AI" – Often after a board meeting, conference, or competitor announcement. The decision comes from anxiety (falling behind) or aspiration (being innovative), not from a specific operational problem that needs solving.

IT or Innovation gets handed the brief – "Get us some AI." Not "fix this workflow" or "solve this capacity problem." Just... AI. Somewhere. Doing something.

A tool gets purchased – Usually based on vendor demos, analyst reports, or what a peer company is using. Maybe Microsoft Copilot because you're already on M365. Maybe a specialised tool because it looks impressive in a demo.

Training gets organised – Lunch-and-learn sessions. "Introduction to AI" workshops. Maybe some change champions identified. HR ticks the "change management" box.

Nothing structurally changes – No processes get redesigned. No roles get redefined. No one's objectives or performance metrics change. The AI tool just gets added on top of existing work.

Usage drops off – A few enthusiasts keep using it. Most people try it once, find it doesn't fit their actual workflow, and quietly go back to the old way. Management looks at adoption dashboards and wonders why people "aren't embracing change."

Sound familiar?

Why AI implementation fails (it's not the technology)

Organisations aren't stupid. So why does this keep happening?

Because AI implementation is treated as a technology project when it's actually an organisational change project. And most organisations are set up to handle technology projects much better than they're set up to handle change.

Technology projects have clear parameters:

  • Budget: £X

  • Timeline: Y months

  • Scope: Deploy tool Z

  • Success: Tool is live, training delivered, adoption measured

You can run that as a project. You can assign an owner. You can report progress. You can declare victory.

Organisational change is messier:

  • Who owns the redesign of how work actually gets done?

  • Who decides which parts of current processes should be kept vs. rebuilt?

  • How do you know if people are working differently or just using new tools to do old work the old way?

  • What happens when "working differently" creates tension with existing incentives, metrics, or power structures?

Most organisations don't have good answers to these questions. So they don't ask them. They run the technology project instead, declare success, and wonder why nothing changes.

The team.blue finding that SMBs want "step-by-step guidance" and "training and workshops" is telling. That's what everyone's already selling. That's what's already not working. The gap isn't information – it's implementation discipline.

How to tell if your AI implementation is working or just performance

Here's a simple diagnostic. Answer these honestly:

1. Can you name three specific workflows that should work differently after the AI rollout?
Not "people will be more productive" or "we'll be more innovative." Actual workflows. "New client onboarding" or "monthly reporting process" or "tier-1 support triage."

If you can't name them specifically, you're going through the motions.

2. Have the people who actually do those workflows been involved in redesigning them?
Not "consulted" or "informed." Involved. As in, they're in the room when decisions get made about what changes and what stays the same.

If they haven't, you're going through the motions.

3. Are you measuring behaviour change or tool adoption?
Most organisations measure: logins, features used, training completion, satisfaction surveys.
What you should measure: How long does X process take now vs. before? How many handoffs? What's the error rate? Where do people still work around the tool? (A short sketch of this before/after comparison follows the diagnostic.)

If you're measuring adoption rather than outcomes, you're going through the motions.

4. Has anything been stopped to make room for the new way of working?
Real change requires capacity. If you're adding AI tools on top of everything people already do, they won't use them properly because they don't have time. What meetings got cancelled? What reports got dropped? What old tools got decommissioned?

If nothing stopped, you're going through the motions.

5. Do people's objectives and performance metrics reflect the new way of working?
If someone's objectives are the same before and after the AI rollout, their behaviour won't change either. Why would it?

If the metrics didn't change, you're going through the motions.

If you answered "no" or "not really" to more than two of these, you've got an execution gap. You're buying tools, not building capability.
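
For question 3, it helps to make the before/after comparison concrete. Here's a minimal sketch of outcome-level measurement in Python — the metrics mirror the ones listed above, and the numbers are illustrative (a six-week, 14-handoff onboarding process of the kind discussed in the next section):

```python
from dataclasses import dataclass


@dataclass
class WorkflowSnapshot:
    """Outcome metrics for one workflow, captured before and after a rollout."""
    cycle_time_days: float
    handoffs: int
    error_rate: float  # fraction of items needing rework


def outcome_delta(before: WorkflowSnapshot, after: WorkflowSnapshot) -> dict[str, float]:
    """Percentage change per metric; negative numbers mean improvement."""
    def pct(b: float, a: float) -> float:
        return 100 * (a - b) / b if b else 0.0
    return {
        "cycle_time_days": pct(before.cycle_time_days, after.cycle_time_days),
        "handoffs": pct(before.handoffs, after.handoffs),
        "error_rate": pct(before.error_rate, after.error_rate),
    }


onboarding_before = WorkflowSnapshot(cycle_time_days=42, handoffs=14, error_rate=0.12)
onboarding_after = WorkflowSnapshot(cycle_time_days=28, handoffs=9, error_rate=0.08)
print(outcome_delta(onboarding_before, onboarding_after))
# cycle time down ~33%, handoffs down ~36%, error rate down ~33%
```

If numbers like these don't move, the tool may be live but the work hasn't changed — which is exactly the gap this diagnostic is designed to expose.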

What successful AI implementation actually requires

I'm not going to give you a twelve-step implementation framework. You don't need more process. You need different priorities.

Start with a real problem, not a technology
"We want to use AI" is not a real problem. "Our client onboarding takes 6 weeks and involves 14 handoffs" is a real problem. Solve the problem. If AI helps, use it. If it doesn't, don't.

Make it someone's actual job
Not a side-of-desk responsibility. Not a steering group that meets monthly. Someone wakes up every day responsible for making the workflow actually work differently. Give them authority. Give them time. Give them air cover to challenge how things currently work.

Run small, contained experiments
One workflow. One team. Eight weeks. Clear success criteria that are about outcomes, not adoption. If it works, expand. If it doesn't, learn why and adjust. Stop doing company-wide rollouts of things you haven't proven in a small context first.

Redesign the work, not just the tools
The AI tool is the easy bit. The hard bit is: What does this role do now? What decisions can be made faster or by different people? What handoffs are no longer needed? What used to require judgement but can now be automated? What now requires more judgement than before?

If you're not redesigning the work, you're just adding technology to broken processes.

Measure what matters
Stop measuring adoption. Start measuring: How long does this take now? How many errors? How much rework? Where are the bottlenecks? Are the outcomes better?

The AI implementation gap exists because most organisations are optimised for buying technology, not for changing how work gets done. Until that changes, the gap will keep growing – and getting cheaper AI tools will just mean more organisations buying things they won't use effectively.

If your AI rollout feels stuck, you're not alone. Most organisations are struggling with the same implementation challenges – not because they've bought the wrong tools, but because they're treating organisational change as a technology problem.

I work with mid-sized UK organisations to diagnose why AI implementations aren't delivering and build practical roadmaps that actually change how work gets done. If you're past the "should we do AI?" stage and stuck at "why isn't this working?", let's talk.

Read More
Paul Thomas

Your Copilot Rollout Just Told You Something Important. Are You Listening?

You’re six months into your Copilot deployment. Adoption sits at 40%. IT Security breathes easier - at least people aren’t using unapproved tools anymore.

Except they are.

The same people who won’t touch your carefully deployed, security-approved, perfectly compliant Copilot are happily pasting proprietary information into ChatGPT. Walk past any desk and you’ll see it: browser tabs open to Claude, Perplexity, whatever free tool promises to help them get through their day faster.

This isn’t a technology problem. It’s an organisational trust problem. And your Copilot rollout just diagnosed it.

The Wrong Question

Most leadership teams are asking: “How do we get adoption up?”

That’s measurement theatre. It assumes the problem is user resistance, insufficient training, or poor communication. It isn’t.

The right question is: “What did this rollout teach us about our organisation?”

Because here’s what happened: you deployed a tool meant to replace unauthorised AI usage. Instead, you created a bifurcated environment where people use approved tools for visible, low-risk work and continue using unapproved tools for everything that actually matters.

That’s not failure. That’s intelligence. The question is whether you’re paying attention to what you just learnt.

Why IT-Led Deployments Create This Problem

IT approaches deployment as a data and security problem:

  • Lock down the tool

  • Control access

  • Monitor usage

  • Measure adoption rates

This is correct for deployment. It’s catastrophic for adoption.

Adoption is a change management problem:

  • What job is this tool actually doing for people?

  • Why would someone choose this over what they’re already using?

  • What organisational factors make the “approved” option feel worse than the “risky” option?

When IT leads the entire initiative, you get a secure, compliant tool that nobody trusts enough to use for anything important. When people lead their own adoption - ChatGPT, Claude, whatever they found on their own - you get enthusiastic usage of insecure tools.

The gap between these two realities is where your rollout teaches you something valuable.

What Good Intelligence Actually Looks Like

Stop measuring adoption rates. Start measuring learning rates.

Instead of “40% adoption,” you want to know:

What are people actually using it for?

  • 15% using it for email summarisation

  • 12% for meeting notes

  • 8% for code review

  • 5% tested it for two weeks and abandoned it

Why did they abandon it?

Not “user error” or “resistance to change.” Actual reasons:

  • Response time too slow compared to ChatGPT

  • Doesn’t integrate with existing workflow

  • Output quality worse than what they were already using

  • Requires too many approval steps to be useful for real work

  • Interface assumes they work differently than they actually do

What are the resisters using instead?

22% still using ChatGPT for the exact tasks Copilot was meant to replace. When you ask them why:

  • “It just works”

  • “I don’t have to ask permission”

  • “The interface makes sense”

  • “I trust my judgement more than I trust the approved tool”

That last one should stop you cold.

Map the shadow AI ecosystem

Who’s using what tools, for which tasks, and what does that pattern tell you about what people actually need versus what you thought they needed?

This isn’t about catching people breaking policy. It’s about understanding why your approved solution lost to unauthorised alternatives.

The Trust Problem Nobody’s Talking About

Here’s what your rollout just revealed: people are sceptical of AI in general - concerned about data privacy, worried about bias, defensive about GDPR compliance. All the surveys show this. All the focus groups confirm it.

And yet those same sceptical people are enthusiastically using unauthorised AI tools.

The resistance isn’t to AI. It’s to your AI.

They trust their own judgement with ChatGPT more than they trust your organisation’s judgement with Copilot. That’s not irrational. That’s learnt behaviour.

Ask the resisters directly:

  • “Why does the unapproved tool feel safer to you?”

  • “What would need to be true for you to trust the approved option?”

Their answers will tell you everything about your organisation’s relationship with risk, control, and trust. And probably nothing you want to hear.

Common themes from organisations that actually asked:

“The approved tool feels like surveillance”
When every use is logged, monitored, and potentially flagged, people make a risk calculation: personal exposure from using unauthorised tools feels less threatening than organisational visibility into what they’re working on.

“I don’t trust it to work when I need it”
IT-led deployments prioritise uptime and security. But if the tool is slow, gets blocked by firewalls, or goes down during critical work periods, people learn it’s unreliable. They route around it.

“Nobody asked us what we actually needed”
The deployment assumed use cases. The resisters found those assumptions wrong. Rather than fight to change the approved tool, they just kept using what already worked.

“It doesn’t integrate with how we actually work”
The approved tool requires people to change their workflow to accommodate it. The unauthorised tools adapted to their existing workflow. The unauthorised tools won.

What This Means for Your Next AI Initiative

Your Copilot rollout wasn’t supposed to solve your AI adoption problem. It was supposed to diagnose it.

The diagnosis: your organisation’s biggest barrier to AI adoption isn’t technology, security, or capability. It’s trust. And IT can’t deploy that.

Before your next AI initiative:

Stop treating adoption as an IT problem to solve

IT should absolutely lead security, compliance, and technical deployment. But change management needs to lead adoption strategy. These are different skills, different goals, different metrics.

Interview the resisters like they’re consultants, not problems

They know something about your organisation that you don’t. They’re not Luddites. They’re people who did the maths and decided your approved solution doesn’t serve them better than the alternatives they found.

Find out why. Their answers are more valuable than any adoption metric.

Map what people are actually using AI for

Not what you hoped they’d use it for. What they’re actually doing. The shadow AI ecosystem exists because it’s solving real problems your approved tools aren’t. That’s data. Use it.

Acknowledge that trust is earned, not deployed

You can’t mandate trust through policy. If your approved tools consistently feel worse than unauthorised alternatives - slower, more restrictive, less useful - people will route around them. They’ll smile in the training session and go back to ChatGPT the moment you leave the room.

Build tools worth trusting. That means involving the people who’ll use them before deployment, not after adoption stalls.

Measure what you learnt, not what you deployed

Success isn’t “93% adoption within six months.” Success is “we now understand why approved tools lose to unauthorised ones, and we’re addressing those specific organisational factors.”

That’s intelligence you can use. Adoption rates are just vanity metrics if you don’t understand what’s underneath them.

The Real Stakes

Most AI initiatives fail in the messy middle - not because the technology is bad, but because organisations don’t understand their own capacity for change until it’s too late.

A Copilot rollout, done right, is the cheapest organisational diagnostic you’ll ever run. It shows you:

  • Where trust breaks down

  • What your people actually need versus what you assumed they needed

  • Which parts of your organisation are ready for change and which will resist anything that feels imposed

  • What risk calculations people are making when they choose unauthorised tools over approved ones

Done wrong, it’s just an expensive way to discover that your people don’t trust you enough to use the tools you give them.

The question isn’t whether your Copilot rollout succeeded or failed. The question is: what did you learn, and what are you going to do about it?

Because your next AI initiative is coming. And if you haven’t figured out why people chose ChatGPT over Copilot, you’re about to make the same mistake at larger scale with higher stakes.

About me: I’m Paul, an AI adoption consultant based in Sheffield. I help UK mid-market organisations diagnose and fix failing AI implementations. I previously led L&D functions at HelloFresh, Babbel, and Morgan Stanley. If your organisation is stuck in one of these failure patterns and you want to talk about what a fix looks like, book a call below.

Schedule a Call

Read More
Paul Thomas

Why AI Rollouts Fail in UK Organisations

In summer 2025, the Ministry of Justice posted a job advert: “AI Adoption Lead” at £73,000-£83,000. The NHS is hiring “AI Implementation Managers” at similar rates. HMRC wants “AI Governance Specialists.” None of these roles existed at the beginning of 2025.

What happened between then and now?

Failed rollouts. Expensive ones.

I’ve spent the past year analysing UK job market data and talking to mid-market organisations about their AI initiatives. The pattern is consistent: organisations announce ambitious AI strategies, buy expensive tools, mandate adoption... and then quietly hire someone at £90K-£130K to fix what went wrong.

Here’s what I’m seeing fail, repeatedly, in UK organisations specifically.

Pattern 1: The Deployment Fallacy

What it looks like:

  • Executive reads FT article about AI productivity gains

  • IT procurement buys Microsoft Copilot licenses for everyone

  • Launch email: “We’re now an AI-enabled organisation”

  • Six months later: 8% adoption rate, zero measurable ROI

Why it fails: The assumption is that deployment equals adoption. Just because you’ve given people access doesn’t mean they know what to do with it, trust it enough to use it, or can integrate it into actual workflows.

I spoke with an L&D Director at a Sheffield-based manufacturing firm (250 employees). They bought Copilot licenses for the entire operations team. Three months in, usage stats showed 12 people had logged in. Once.

When she asked why, the responses were:

  • “Don’t know what I’d use it for”

  • “Tried it, got rubbish output, haven’t been back”

  • “Is this even allowed? We work with customer data”

The last point is critical in the UK context. GDPR liability sits with the organisation, not the employee. So rational middle managers default to “better not risk it” rather than “let’s experiment.”

What this actually requires:

  • Use case identification (specific to roles, not generic)

  • Capability building (not a 30-minute webinar)

  • Risk guidance (what’s actually permissible under UK GDPR and sector regulations)

  • Success metrics that matter (time saved on specific tasks, not “productivity”)

Pattern 2: Shadow AI (Or: The £50M ICO Fine You’re Currently Incubating)

What it looks like: Microsoft’s Work Trend Index shows 78% of AI users are bringing their own tools to work because their organisations are moving too slowly. In the UK, this isn’t just an efficiency problem - it’s a compliance catastrophe waiting to happen.

Why it’s worse in the UK: US organisations operate under “at-will” employment and looser data protection. UK organisations face:

  • ICO enforcement that’s actually got teeth (and budget)

  • Employment tribunals that take algorithmic bias seriously

  • GDPR fines that can genuinely hurt mid-market companies

  • The Equality Act 2010 applying to any AI-assisted hiring or performance management

I talked to an HR Director at a professional services firm. They discovered - by accident - that half their recruitment team was using ChatGPT to “improve” candidate CVs before passing them to hiring managers. Nobody had told them not to. But nobody had told them it was safe either.

When she looked into the compliance implications:

  • Candidate data being processed through a US-based system (GDPR issue)

  • No audit trail of what was changed (discrimination risk if challenged)

  • No policy, no training, no documented decision-making process

The actual risk: The ICO just published guidance specifically warning about AI in recruitment. They’re not messing around. Organisations that can’t demonstrate they understand what AI tools are being used, for what purposes, and with what safeguards, are sitting on a compliance timebomb.

Shadow AI isn’t people being reckless. It’s your organisation failing to provide a viable alternative.

What this actually requires:

  • Acceptable use policies that aren’t just “don’t use AI”

  • Procurement of enterprise tools with UK data residency

  • Training that includes compliance, not just capability

  • A reporting culture where people can admit they’re using tools without being punished

Pattern 3: Pilot-to-Scale Death Valley

What it looks like:

  • Successful pilot with the digital team in London

  • Enthusiasm! Innovation! Press release!

  • Attempt to scale to the Manchester office, Birmingham office, regional teams

  • Complete collapse

Why it fails: Pilots succeed because they get disproportionate attention, volunteer participants, and executive air cover. Scaling requires the tool to work for people who didn’t volunteer, don’t have spare capacity, and whose managers are sceptical.

A contact at a national law firm told me about their “AI contract review” pilot. The London commercial team loved it. Saved hours. When they rolled it out firm-wide, the Manchester employment team tried it once and rejected it - the training data was all US employment law, so it was actively dangerous for their use cases.

The UK-specific issue: Regional offices outside London often feel like they’re having “London solutions” imposed on them. There’s already cultural resistance. AI compounds this because:

  • Most AI tools are trained on US data/contexts

  • London teams have more exposure to AI hype and experimentation

  • Regional teams are closer to “Real Economy” businesses with tighter margins and less tolerance for expensive experiments

What this actually requires:

  • User research outside the pilot group before scaling

  • Regional variation in implementation (not one-size-fits-all)

  • Middle manager capability building (they’re the scaling bottleneck)

  • Honest ROI tracking (not just case studies from the pilot)

Why This Creates a £90K-£130K Job Market

Here’s the thing: these problems are expensive. Really expensive.

When I analysed UK job postings for AI adoption roles, I found a salary premium of £20K-£40K over equivalent non-AI positions. Why? Because organisations are desperate for people who can:

  1. Translate between technical and organisational realities

  2. Navigate UK-specific compliance requirements

  3. Actually drive adoption (not just write strategy docs)

  4. Fix expensive mistakes quietly

The Ministry of Justice, the NHS, HMRC - they’re hiring because they’re already deep into one of these failure patterns. They need someone who’s seen this movie before.

What I’m Doing About This

This is why I’m launching The Co.Lab.

I’ve spent 20+ years in L&D and organisational change, most recently helping mid-market companies (250-2,000 employees) diagnose and fix failing AI rollouts. I’m based in Sheffield, which means I work with the “Real Economy” - organisations outside the London tech bubble who can’t afford expensive experiments.

The Co.Lab is about three things:

  1. Pattern recognition - What actually fails, how to spot it early

  2. UK-specific guidance - GDPR, ICO, Equality Act, sector regulations

  3. Operational pragmatism - What works in organisations without unlimited budgets or Chief AI Officers

Next week’s post: “Which Failure Pattern Is Your Organisation In? A Diagnostic Framework”

I’ll walk through how to identify which of these three patterns you’re experiencing, what the early warning signs are, and what the fix actually requires (spoiler: it’s not more tools).

If you’re in L&D, HR, Ops, or change management and you’re being asked to “drive AI adoption” without being given actual support - this newsletter is for you.

If you’re a middle manager being squeezed between executive mandates and team resistance - this is for you.

If you’re in a UK organisation that’s moving slowly on AI and you’re wondering if everyone else has figured something out that you’re missing - they haven’t. This is for you too.

About me: I’m Paul, an AI adoption consultant based in Sheffield. I help UK mid-market organisations diagnose and fix failing AI implementations. I previously led L&D functions at HelloFresh, Babbel, and Morgan Stanley. If your organisation is stuck in one of these failure patterns and you want to talk about what a fix looks like, book a call below.

Schedule a Call

Read More
Paul Thomas

How to Use AI to Make Behavioural Change Actually Stick

Your organization spent $50,000 on executive coaching this year. When finance asks what you got for it, what do you tell them? “Sarah feels more confident”? “The team says good things about their sessions”?

That’s not measurement. That’s anecdote.

But here’s the problem: the usual measurement approaches are worse than useless. Satisfaction surveys tell you people enjoyed the sessions, not whether they changed. Quarterly 360s come too late to catch what’s actually happening. And trying to tie coaching to business outcomes is mostly fantasy - you can’t isolate coaching’s impact from the ten other things that changed in the same quarter.

So most organizations either abandon measurement entirely or settle for theater. Neither helps you understand what’s working.

The real measurement challenge isn’t proving ROI. It’s tracking behavioral change as it happens (or doesn’t happen). Because the gap between “good coaching session” and “actually doing things differently three weeks later” is where most coaching value disappears.

You leave the session committed to delegating more effectively. Three weeks later, you’re still the bottleneck making every decision, but you don’t notice because you’re too busy to notice. Your coach asks “how’s the delegation going?” in your next session, and you genuinely believe you’ve improved. You haven’t. You just can’t see what you’re doing.

This is the noticing gap: the space between knowing what to change and seeing when you’re not changing it.

Traditional coaching measurement tries to solve this with quarterly check-ins. But quarterly is far too slow to catch backsliding. By the time you get the data, you’ve reinforced the wrong pattern for three months.

AI changes this because it can notice for you.

Not in some dystopian “your boss is watching” way. In a “here’s what you actually did this week vs. what you said you’d do” way. AI can analyze your communication patterns, surface your defaults under pressure, and show you the exact moment you stopped delegating and started solving.

It’s a mirror that doesn’t require you to remember to look in it.

The Framework: What to Track

When I work with clients on coaching and development, we use a simple structure: Insight → Action → Evidence. You need to know what you’re trying to change (insight), experiment with doing it differently (action), and gather proof that something shifted (evidence).

The problem is the evidence part. Most organizations either abandon measurement entirely (“you can’t measure soft skills”) or default to theater (satisfaction surveys that tell you nothing about behavioral change).

The middle ground is triangulated observation: multiple signals, both quantitative and qualitative, that let you see patterns without pretending you can reduce human development to a single number.

AI fits here perfectly. Not as the measurement system, but as the pattern-recognition tool that makes your own behavior visible in real-time.

How This Actually Works

Instead of reviewing your quarter once every three months, you can review your week every Friday. You feed AI your actual communication - emails, Slack messages, meeting notes - and ask it to show you your patterns.

Not vague “how am I doing?” questions. Specific behavioral analysis tied to what you’re trying to change.

Don’t overthink the selection process. Open your calendar, find the most stressful meeting you had last week, and export the emails you sent in the hour immediately following it. That’s your data.

Here’s one example. This month’s focus is resilience - so let’s look at how you show up under pressure.

Prompt: Pressure Degradation

I want to understand how my behaviour changes under pressure. Below are two sets of messages - Set A (business as usual) and Set B (high pressure/crisis mode).

Compare them and tell me:

1. What specific language disappears when I’m under pressure?
2. What shows up instead? (Quote exact phrases)
3. Do I get shorter, longer, more direct, more vague?
4. What do I stop doing when stressed that I do when calm?
5. Give me one early warning sign that I’m shifting into pressure mode

Be specific - quote my actual language so I can catch this pattern in real-time.

What you feed it: 6-10 emails or Slack messages - half from normal times, half from high-pressure situations (you’ll know which is which).

⚠️ Data Safety: If you aren’t using an Enterprise instance of ChatGPT/Claude, please anonymize your text before pasting. Strip out specific names, dollar amounts, and proprietary strategy details.
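
If you run this review weekly, a small script beats scrubbing by hand. Here's a rough sketch of that anonymization pass in Python, standard library only — the patterns are illustrative starting points, not a complete scrubber, so extend them with your own names, clients, and project codenames:

```python
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),           # email addresses
    (re.compile(r"[£$€]\s?\d[\d,]*(?:\.\d+)?[kKmM]?"), "[AMOUNT]"), # money figures
    (re.compile(r"\b(?:Acme Ltd|Project Falcon)\b"), "[CLIENT]"),   # hypothetical names: swap in your own
]


def anonymize(text: str) -> str:
    """Apply each redaction pattern in turn before text leaves your machine."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


sample = "Told finance@acme.co.uk the Project Falcon budget is now £45,000."
print(anonymize(sample))
# Told [EMAIL] the [CLIENT] budget is now [AMOUNT].
```

It won't catch everything — treat it as a guard rail, not a guarantee.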

Cadence: Monthly pattern check, or after any high-pressure period

What this surfaces: Your stress tells. The specific ways your communication degrades under pressure. Most people don’t realize they have a “stressed version” of themselves that shows up predictably. This prompt makes it visible.

The AI won’t sugarcoat it. It’ll quote your exact language back at you and show you what disappears (context, empathy, clarity) and what shows up instead (shortness, vagueness, command mode). That’s behavioral evidence you can work with.

Maybe you discover you stop explaining “why” when you’re stressed and just start announcing “what.” Maybe your sentences get clipped and your tone shifts to pure transaction. Maybe you stop asking questions and start directing.

Whatever your pattern is, you can’t change it if you can’t see it. This makes it visible.

Read More
Paul Thomas

The AI Training Gap: Why 78% Use AI But Only 24% Have Training

AI has become ubiquitous in UK workplaces. According to recent research, 78% of business leaders agree that AI adoption is essential to stay competitive. Yet here's the uncomfortable truth: only 24% of employees have received any formal AI training.

This training gap isn't just a skills issue - it's a change management crisis.

The Real Cost of the Training Gap

When organizations roll out AI tools without proper training, several predictable problems emerge:

• Productivity doesn't improve because people don't know how to use the tools effectively

• Resistance grows as employees feel threatened rather than empowered

• Leadership becomes frustrated when ROI doesn't materialize

• The AI initiative stalls or fails entirely

Most organizations treat AI rollouts like software deployments - buy the licenses, send the announcement, maybe run some training. Then they're surprised when adoption stalls, productivity doesn't budge, and leadership starts asking uncomfortable questions about ROI.

Why Traditional Training Isn't Enough

Running a few training sessions won't solve this. The challenge isn't just teaching people which buttons to press. It's addressing:

• Fear that AI will make their role redundant

• Uncertainty about which tasks should be AI-assisted vs human-led

• Lack of clear processes for integrating AI into daily workflows

• Managers who don't know how to coach teams using AI tools

Change Management: The Missing Piece

This is fundamentally a change management challenge. Successful AI adoption requires:

Structured Readiness Assessment - Understanding where your organization actually is, not where you hope it is

Skills Gap Analysis - Identifying specific capability gaps across different roles and teams

Change Champions - Developing internal advocates who can demonstrate value and support peers

Continuous Support - Ongoing coaching and adjustment, not one-off training events

Making AI Adoption Work

The organizations succeeding with AI adoption aren't just buying better tools - they're investing in structured change management that addresses the human side of the transformation.

At The Human Co., we help organizations across the UK bridge the AI training gap through targeted change management that turns AI anxiety into AI capability. Our approach addresses both the technical skills and the cultural shift needed for sustainable AI adoption.

Ready to close your organization's AI training gap? Book a discovery call to discuss your specific challenges and how we can help.

Read More