The UK Government’s AI Skills Hub: A £4.1m Lesson in How Not to Build Real AI Capability

Why the AI Skills Hub Is a Case Study in Failed AI Learning Design

The UK government just launched an AI skills platform that fundamentally misunderstands how people actually learn AI. They’ve built a gym, left all the equipment boxed up, and forgotten to hire any instructors.

As someone who’s led digital transformation projects for a living, I reviewed the new AI Skills Hub expecting to find a model for effective AI education. Instead, I found a textbook example of why most AI adoption efforts in the UK fail.

The AI Skills Hub is a perfect case study in why the vast majority of AI implementations fail: organizations confuse content with capability, curation with learning design, and comprehensiveness with usefulness.

For HR & Transformation Leaders: 4 Lessons from a £4m Mistake

  1. Content ≠ Capability: A library of courses is not a learning strategy. The UK’s new hub proves that excellent curation fails without proper learning design, sequencing, and hierarchy.

  2. Theory Without Practice Is Wasted: You cannot build AI fluency through observation alone. Effective AI upskilling requires integrated sandboxes for immediate experimentation, something this platform entirely lacks, forcing users to "learn to swim in the desert".

  3. "Comprehensive" Can Kill Engagement: The platform prioritizes covering everything over relevance, forcing intermediate users to wade through beginner content. True personalization means subtracting what users already know, not just filtering by vendor.

  4. The "Netflix Model" Has Limits: Dumping 24 courses on a single page without a "start here" flag paralyzes learners. Successful adoption requires a guided path, not an infinite scroll.

The platform demonstrates excellent curation but catastrophic execution. It took me just thirty minutes to draw up a laundry list of broken buttons, fake courses, a poorly utilized UI, and not a single chatbot in sight.


Fatal Flaw 1: ‘Netflix of Learning’ and AI Skills Overload

Too Many Courses, No Clear First Step

The UK government's new AI Skills Hub has three fatal flaws, and you'll spot the first in about thirty seconds of use.

You take the assessment. Answer questions about your role, experience level, tech preferences. The system churns away, presumably building you a personalized learning pathway. You feel optimistic; maybe they’ve cracked it.

Then you land on your results page.

Twenty-four courses stare back at you in an endless scroll.

  • Module 1 has four courses

  • Module 2 has eight more

  • Module 3, another eight

  • Module 4, at least four

They're all sitting there, equal weight, equal prominence. No hierarchy. No "start here" flag. Only the vaguest of progress indicators. A wall of content cards stretching down your screen like a CVS receipt.

And then there are the images. The same stock photos appear multiple times across different courses—two courses featuring identical shots of people at a whiteboard, another pair showing the same woman at a computer, two more using the same dark control room scene. 

When you're trying to scan 24 courses and build a mental map of what's what, visual repetition destroys your ability to differentiate. Your brain relies on visual cues to remember "that's the Python course" versus "that's the cloud one," but when they look identical, you're back to reading every title, every time. 

It's the learning platform equivalent of a Where's Wally book where all the Wallys are wearing the same outfit.

Clearly the design team skipped the cognitive load class that day.

Within two minutes, you're paralyzed. Should you do the Google Cloud course first? The Python fundamentals? The ethics training? Are these sequential or parallel? Required or optional?

Nobody knows. The platform certainly isn't telling you.

This is what happens when content dumping gets confused with learning design.

The courses themselves are fine, as free introductory-level entries go. I've already done a few of them myself. Google, IBM, DataCamp, universities, standards bodies: someone did their homework on what exists. But curation is only half the job. The other half is sequencing, and that's where this falls apart.

Netflix doesn't dump every episode of every season on one page and wish you luck. It shows you Season 1, Episode 1. Big thumbnail. Auto-play queued up. Progress bar visible. "Continue Watching" sitting right there when you come back. The path is obvious because someone designed it to be obvious.

The AI Skills Hub takes the ‘throw enough mud at the wall’ approach, but none of it sticks.

I know what happens next because I've watched hundreds of people bounce off platforms like this. They scroll through the first few modules, realize they can't figure out where to start, and close the tab. They might come back once. Maybe twice. Then the link lives in their bookmarks, gathering dust next to all those other "I should really do this someday" good intentions.

There is progress tracking, but it's reduced to a tiny icon in the farthest corner of the page; I doubt many will ever notice it. Even if you do, you could be zero percent through the pathway or fifty percent; you'd never know. Confusing completion bars. No milestone markers. No sense of momentum.

There's no useful time transparency. Each course lists its own duration in days: the first course under my 'Technology Stack' module will apparently take somewhere between two and seven days to complete. To add to the confusion, the course 'Getting Started with Google Cloud Learning Path' is tagged under 'Agriculture and Food'. It's also for beginners, not intermediates, but we'll get to that in a moment.

The real commitment you're signing up for? Buried somewhere, maybe. 

There's no prerequisite logic. Should you take the data analysis course before or after the Python course? Does the cloud computing training matter for the AI ethics module? The platform shrugs. Figure it out yourself.

Compare this to any learning platform that actually works. freeCodeCamp shows you one challenge at a time. Duolingo won't let you skip ahead until you've proven competence. Even YouTube's "Next up" feature understands that people need a default path, not infinite choice.

The tragic part? Someone built an assessment tool that asks good questions. Someone curated 24 genuinely useful courses. Someone organized them into logical modules. Then someone else dumped all that work onto a single scrolling page and called it personalized learning.


Fatal Flaw 2: No AI Sandbox, No Real Practice

An AI Skills Platform That Won’t Let You Use AI

I tried to enroll in the Google Gemini course. Clicked the "Enroll Now" button. Got redirected to Google Skills. Landed on an error page that said "Sorry, access denied to this resource."

No explanation why. No troubleshooting steps. No alternative path forward. Just a dead end.

That was the extent of my hands-on learning experience with the AI Skills Hub.

But the bigger failure isn't the broken enrollment link; those happen, systems have bugs. The bigger failure is that there's no way to practice AI on the platform itself. You can read about AI, watch videos about AI, take quizzes about AI. You cannot use AI.

This is a government skills platform for AI that doesn't let you touch AI.

Take a second to let that sink in.

Now think about that design choice for a minute. Someone built an entire education hub around tools meant to be interactive, then made the hub completely static. No integrated assistant. No practice environment. No sandbox where you can test a prompt, see what happens, adjust, and try again.

Learning AI without using AI is like learning to swim in the desert. 

The best AI education efforts all understand this. Anthropic publishes a prompt engineering guide where every example includes a working prompt you can copy and test. OpenAI provides a playground where you can experiment with different models and parameters. Google offers Colab notebooks where you can run actual code. Both ChatGPT and Gemini have literal learning modes baked in. 

The AI Skills Hub skips this entirely. It points you toward external courses hosted on other platforms, many of which also don't include practice environments. You're expected to learn through observation, then somehow figure out application on your own time, in your own environment, with your own tools.

That's not how adults learn technical skills. Adults learn by doing, getting feedback, adjusting, and doing again. The feedback loop needs to be tight: seconds, not days. Try something, see if it works, understand why, move forward.

Without an integrated practice environment, that feedback loop doesn't exist. You watch a video about prompt engineering, then what? Open ChatGPT in another tab and hope you remember the technique? Wait until you're back at work and try it in a real project where mistakes matter? 

One thing is perfectly obvious to me about the teams who built this platform: they do not use AI tools daily. If they did, the absence of a practice environment would feel immediately wrong. Like building a carpentry course with no wood, no tools, and no workshop. Just pictures of hammers and videos of other people sawing.

The irony is thick. This is a platform teaching people to use generative AI. Generative AI's whole value is immediate interaction: you put text in, you get text out, you iterate. The technology itself is built for rapid experimentation. And the platform teaching people how to use it offers zero opportunity to experiment.

Compare this to how the UK government builds other digital services. GOV.UK works because you can actually do the thing: file taxes, renew a license, check your NHS records. The interfaces are functional. The services are transactional. You don't just read about filing taxes, you file them.

The AI Skills Hub took the opposite approach. It's all reading, no doing. Which means people will finish courses without developing actual proficiency. They'll know what prompt engineering is. They won't be able to write good prompts.

The enrollment error I hit isn't the real problem. The real problem is that even if the link had worked, even if I'd gotten into the Google course, I'd still be learning about AI instead of learning to use AI. The platform architecture guarantees that disconnect.

You can't teach people to use AI without letting them use AI. That's not a controversial statement. It's obvious to anyone who's actually tried to get good at prompting, or analysis, or any other AI-adjacent skill. The learning happens in the practice, not the theory.


Fatal Flaw 3: The ‘Intermediate’ Lie in ‘Personalised’ AI Pathways

Comprehensive vs Personalised: Why You Can’t Have Both

The platform's assessment asks good questions. What's your industry? What's your role: worker, leader, or professional? What's your experience level? Which tech stack does your organization use?

I answered as an intermediate user. AI Worker. Intermediate experience. Google stack. The system processed my responses and generated my personalized pathway.

Then it sent me to Introduction to Python.

I selected "Intermediate: Some experience with AI tools and applications." The assessment heard that, processed it, and responded by building a pathway where 71 percent of the courses assume zero prior knowledge. Seventeen of twenty-four courses are beginner level.

The assessment asks what you know. The pathway ignores your answer.

You can see this clearly in how the tech stack filter works versus how the experience filter doesn't. I selected Google as my preferred stack, and sure enough, Module 1 surfaces Google Cloud, Gemini, and Google AI Essentials. That filtering works. Someone wrote logic that says "if user picks Google, show Google courses."

But nobody wrote logic that says "if user picks Intermediate, hide Introduction courses." The experience level selection appears to do nothing. An intermediate user and a beginner user probably get identical pathways, just with different tech vendor logos.
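To be concrete about how small this gap is, here is a minimal sketch of both filters in TypeScript. The course schema and field names are my assumptions, not the Hub's actual data model; the point is that the missing experience filter is the same few lines of logic as the vendor filter that already works.

```typescript
// Hypothetical course metadata; the schema is an assumption, not the Hub's.
type Level = "beginner" | "intermediate" | "advanced";

interface Course {
  title: string;
  vendor: "Google" | "IBM" | "DataCamp" | "Other";
  level: Level;
}

// The vendor filter the platform evidently has:
function byVendor(courses: Course[], vendor: Course["vendor"]): Course[] {
  return courses.filter((c) => c.vendor === vendor);
}

// The experience filter it evidently lacks: drop everything below the
// learner's self-assessed level.
const rank: Record<Level, number> = { beginner: 0, intermediate: 1, advanced: 2 };

function byLevel(courses: Course[], userLevel: Level): Course[] {
  return courses.filter((c) => rank[c.level] >= rank[userLevel]);
}
```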

Module 2 demonstrates the problem perfectly. Eight courses labeled "Fundamental Skills"—six are beginner level. The titles give it away: "Introduction to Cloud Computing." "Software Fundamentals." Then there's "Introduction to Python" and "Python Programming Fundamentals," both labeled intermediate despite having "Introduction" and "Fundamentals" right in their names.

If I followed this pathway start to finish, I'd spend 50-73 hours. But I already know most of what the first 17 courses teach. Strip those out and you're left with maybe 13 courses and 25-35 hours of actually relevant content. The platform wants me to spend an extra 20-40 hours re-learning material I already understand.

When an intermediate learner hits "Introduction to Cloud Computing," they feel the same way an experienced driver feels retaking the written test. Insulted. Like someone didn't trust them to assess their own competence.

The assessment asked me to assess my own competence. I did. Then the pathway second-guessed me and included the beginner content anyway. Just in case. Just to be thorough. Just to make sure nothing gets missed.

The UK government built an assessment tool that asks the right questions. Then they built a pathway generator that doesn't use the answers. Those are two separate teams working toward opposite goals. One team trusts users to know their level. The other team trusts no one and includes everything.

Comprehensive and personalized are opposites when it comes to learning design. Comprehensive means covering everything. Personalized means showing people exactly what they need and nothing more.

The AI Skills Hub picked comprehensive. Which means it's not actually personalized, no matter what the assessment suggests. It's just a course catalog with a filter that doesn't filter much.

How to Fix the AI Skills Hub: Practical Changes to Actually Build Capability

None of these problems require a platform rebuild. They need different decisions about what to show, when to show it, and how to guide people through it.

Immediate Navigation Fixes: From Catalog to Clear Pathway

Make the pathway sequential, not simultaneous. Stop showing all 24 courses at once. Show Module 1 with its four courses and a clear "Start Here" button. Lock Module 2 until someone completes Module 1. Lock Module 3 until they finish Module 2. Instructional design 101. People need a clear next step, not 24 simultaneous options.
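For illustration, module gating is a handful of lines. This sketch assumes the platform already records course completions; the data model is hypothetical.

```typescript
// Hypothetical data model: modules unlock in sequence as courses are completed.
interface Module {
  id: number;          // 1-based position in the pathway
  courseIds: string[]; // courses that belong to this module
}

// Module 1 is always open; every later module unlocks only when the
// previous module's courses are all complete.
function isUnlocked(target: Module, modules: Module[], completed: Set<string>): boolean {
  const prev = modules.find((m) => m.id === target.id - 1);
  if (!prev) return true; // nothing before Module 1: this is "Start Here"
  return prev.courseIds.every((id) => completed.has(id));
}
```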

Add progress indicators. Show "Module 1 of 4" at the top. Show "Course 2 of 4" within the module. Add a progress bar that fills as people complete courses. Make it visible where they are, how far they've come, and what's left. Duolingo figured this out years ago. The AI Skills Hub can too.

Display total time commitments upfront. Don't make people calculate that this pathway requires 50-73 hours. Tell them. Right at the top. "This pathway takes approximately 50-73 hours over 3-6 months at a sustainable pace." Give people informed consent before they start, not a surprise halfway through.


Assessment Fixes: Trust the Skills Data You Collect

Trust your own assessment tool. The platform asks if someone's a beginner or intermediate. Use that answer. When someone selects intermediate, filter out courses with "Introduction" or "Fundamentals" in the title. They don't need them. They said they don't need them. Believe them.

Add three follow-up questions: Have you written code before? Have you used cloud platforms? Have you worked with AI tools regularly? Route people based on their answers. Someone who codes doesn't need Introduction to Python. Someone who uses Google Cloud doesn't need cloud fundamentals. Skip what they know. Show them what they don't.

This cuts the intermediate pathway from 24 courses to maybe 13. From 50-73 hours to 25-35. Still comprehensive, but actually respectful of what people already know.
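None of this needs machine learning; it's ordinary conditional logic. A sketch of the routing, again with hypothetical course tags and question names:

```typescript
// Hypothetical follow-up answers and course tags; all names are illustrative.
interface Profile {
  hasCoded: boolean;
  hasUsedCloud: boolean;
  usesAiToolsRegularly: boolean;
}

type Topic = "programming-basics" | "cloud-basics" | "ai-basics" | "applied";

interface TaggedCourse {
  title: string;
  topic: Topic;
}

// Skip the fundamentals a learner has already demonstrated; keep everything else.
function routePathway(courses: TaggedCourse[], p: Profile): TaggedCourse[] {
  const skip = new Set<Topic>();
  if (p.hasCoded) skip.add("programming-basics");
  if (p.hasUsedCloud) skip.add("cloud-basics");
  if (p.usesAiToolsRegularly) skip.add("ai-basics");
  return courses.filter((c) => !skip.has(c.topic));
}
```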

Build a quick-start option. Not everyone wants comprehensive. Some people want practical skills fast. Give them that option. Six courses, 8-12 hours, focused on immediate application. Let them choose: quick start or full mastery. Both are legitimate. Right now the platform only offers one speed, and it's too slow for people who need results this quarter, not next year.


Structural Fixes: Embed Practice, Progress, and Feedback

Add an integrated practice environment. Embed a chat interface—Claude, GPT-4, Gemini, whatever. Put it on every course page. When a lesson teaches prompt techniques, let people test those techniques immediately in a sidebar. Give them example prompts to try. Show them what good outputs look like versus bad ones. Let them experiment in a space where failure costs nothing.
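To put a number on how small "an API integration" is: a practice sidebar needs roughly one server-side handler. This sketch calls Anthropic's public Messages API; the model id is illustrative and the surrounding route wiring is assumed. OpenAI's and Google's equivalents have a near-identical shape.

```typescript
// A minimal server-side handler for a "try this prompt" sidebar.
// Endpoint, headers, and body shape follow Anthropic's public Messages API;
// the model id is illustrative, and the framework wiring is assumed.
export async function tryPrompt(userPrompt: string): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-20241022",
      max_tokens: 512,
      messages: [{ role: "user", content: userPrompt }],
    }),
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  const data = await res.json();
  // The API returns content as an array of blocks; take the first text block.
  return data.content?.[0]?.text ?? "";
}
```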

Sal Khan from Khan Academy figured this out three years ago. The infrastructure already exists. The UK government just needs to license it and integrate it. Anthropic, OpenAI, and Google would probably offer educational discounts. If not, the cost is still trivial compared to the £4 million spent building the rest of the platform.

Write module descriptions that explain why. Each module needs context. Why does this module exist? What will you be able to do after completing it? How does it connect to the next module? Right now the modules have titles and course lists. That's not enough. People need to understand the journey, not just the stops.

Module 1: "These Google tools are what you'll use daily. Learn them first because everything else builds on them."

Module 2: "Now develop the technical foundation for deeper work. Pick the fast track or the comprehensive track based on your goals."

Module 3: "Apply those technical skills to real AI challenges. This is where theory becomes practice."

Three sentences per module. That's all it takes to orient people.

Break large modules into tiers. Module 2 has eight courses ranging from one hour to full-time bootcamps. That's too broad. Split it: "Quick Fundamentals" and "Deep Technical Skills." Let people choose which tier matches their goals. Someone who wants basic cloud literacy doesn't need the same path as someone preparing for a software engineering career. Stop treating them like they do.

Test with actual users. Put five intermediate-level people in a room. Give them 30 minutes with the platform. Watch where they get stuck. Watch where they give up. If none of them complete Module 1 in that session, the design failed. That's the metric that matters—not how many courses are available, but how many people actually progress through them.

Most of these fixes could ship in weeks, not months. Progressive disclosure—showing one module at a time—is front-end work. Adding proficiency checks to the assessment is a few form fields and some routing logic. Embedding a chat interface is an API integration. These aren't moonshots. They're basic product improvements that any competent team could implement.

The harder fix is cultural. Someone needs to accept that comprehensive and personalized point in opposite directions. You can have a course catalog with everything, or you can have a learning pathway with exactly what each person needs. You can't have both and call it the same thing.

The UK government built a £4 million catalog. Now it needs to build pathways. Real ones. Where the assessment filters content, where progress is visible, where practice is integrated, where the next step is always obvious.

Right now the AI Skills Hub looks good in screenshots. It doesn't work in practice.


The Real Lesson for HR and Transformation Leaders

The AI Skills Hub gets a 3 out of 10. Not because the courses are bad—they're not. Not because the assessment is poorly designed—it asks the right questions. The platform fails because it fundamentally misunderstands what people need to actually learn AI.

Good content organized badly is a wasted opportunity. A personalized assessment that doesn't personalize anything is just data collection. A skills platform that doesn't let you practice skills isn't teaching, it's lecturing. But an AI Skills Hub that features absolutely no AI?

Unforgivable.

What the AI Skills Hub Reveals About the UK’s Approach to AI Adoption

The UK government positions itself as an AI leader. This platform undermines that positioning because it reveals a disconnect between understanding AI conceptually and understanding how people actually develop skills.

The hardest part of AI adoption isn't the technology. It's the change management, the behavior shift, the gap between knowing what AI can do and actually being able to do it yourself. The AI Skills Hub could help bridge that gap. Instead, it widens it by making learning harder than it needs to be.

Fix the navigation. Fix the filtering. Add practice. Track progress. Test with real users. 

The AI Skills Hub will be dead within a year unless these issues are fixed. The fatal flaw isn't the technology, it's treating this as a content problem when it's actually a behavior change problem. 

You don’t build AI capability with glossy portals and curated libraries. You build it by making it easy, safe, and expected for real people to try new tools in the flow of their work.

Right now, the AI Skills Hub is a case study in the opposite. It proves that you can get the content right and still miss the point.


Paul Thomas is a behavioral turnaround specialist focused on failed AI adoption in UK mid-market firms. After leading digital transformation programs as an HR leader—and watching technically sound rollouts stall for organizational reasons—he now diagnoses why GenAI implementations fail.

His forensic approach examines governance gaps, workflow friction, and managerial adoption patterns that external consultants and training programs miss. He works with HR Directors, Finance leaders, and Transformation leads who need diagnostic clarity before committing to recovery.

He also writes and publishes The Human Stack, a weekly newsletter on leading well in the era of GenAI with more than 5,000 readers.

Need help with AI adoption? Read more on our Services or Schedule a Call to see how we can help.
