Our Ethical AI Commitment
Our training frameworks prioritise transparency, explainability, and the mitigation of bias in AI outputs, because responsible adoption starts with responsible learning.
At The Human Co., we don't just teach people how to use AI. We teach them how to use it safely, fairly, and accountably, with the people affected by these tools always at the centre of the conversation.
This page sets out the principles that guide every workshop, training programme, and consultancy engagement we deliver. They are grounded in the UK Government's own ethical standards for AI, and they shape everything from the content we create to the way we advise organisations on rolling out AI at scale.
Why This Matters
AI is transforming public services and private businesses across the country, but it also introduces real risks. Algorithmic bias can embed discrimination into decision-making. Opaque systems can erode public trust. And without proper governance, organisations can find themselves on the wrong side of both the law and public expectation.
The UK Government has recognised this. In December 2024, the Algorithmic Transparency Recording Standard (ATRS) became mandatory for all central government departments and arm's-length bodies that deliver public services or interact with the public.
The Government's AI Playbook, published in February 2025, sets out 10 core principles, including that AI must be used "lawfully, ethically and responsibly" with "meaningful human control". And the updated Data and AI Ethics Framework (December 2025) now covers seven principles: transparency, accountability, fairness, privacy, environmental sustainability, societal impact, and safety.
This isn't theoretical. If your organisation uses AI in any decision-making process that affects the public, you already have obligations. Our job is to make sure your people understand them and can meet them with confidence.
Our Six Commitments
1. Transparency and Explainability
We build transparency into every training programme we deliver. Participants learn not just how to use AI tools, but how to communicate clearly to colleagues, stakeholders, and the public what role AI plays in their work, what data it draws on, and what its limitations are.
This aligns directly with the UK Government's principle of outcome-based transparency and explainability — the ability to clarify to any person impacted by a service how an AI solution works and which factors influence its outputs.
2. Bias Recognition and Mitigation
Every programme we run includes dedicated content on recognising and mitigating algorithmic bias. We cover how bias enters AI systems, through unrepresentative training data, inappropriate datasets, data reflecting historical discrimination, and design and architecture choices, and we equip participants with practical techniques to identify and address it.
This includes pre-processing (adjusting training datasets), in-processing (applying constraints during model training), and post-processing (intervening on outputs): the same approaches recommended by the ICO's AI audit framework.
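To make the first of these concrete, here is a minimal sketch of a pre-processing technique often called reweighing, in plain Python: each training row is weighted so that group membership and outcome become statistically independent before a model trains. The column names and example data are our illustrative assumptions, not part of the ICO framework itself.

```python
# A minimal sketch of pre-processing by reweighing: weight training rows so a
# protected attribute ("group") becomes independent of the label ("outcome")
# before a model ever sees the data. Field names are illustrative assumptions.
from collections import Counter

def reweigh(rows):
    """Return one weight per row so each (group, outcome) cell is balanced.

    Weight = (expected cell count if group and outcome were independent)
             / (observed cell count).
    """
    n = len(rows)
    group_counts = Counter(r["group"] for r in rows)
    outcome_counts = Counter(r["outcome"] for r in rows)
    cell_counts = Counter((r["group"], r["outcome"]) for r in rows)

    weights = []
    for r in rows:
        expected = group_counts[r["group"]] * outcome_counts[r["outcome"]] / n
        observed = cell_counts[(r["group"], r["outcome"])]
        weights.append(expected / observed)
    return weights

data = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "B", "outcome": 0},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 1},
]
print(reweigh(data))  # under-represented cells get weights above 1.0
```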
3. Meaningful Human Control
AI should serve people, not replace their judgement. Our training emphasises the principle of meaningful human control: ensuring that a human being retains genuine discretion to review, challenge, and overturn AI-generated outputs, not simply rubber-stamp them.
We help organisations design human-in-the-loop processes for high-impact decisions, so that AI augments expertise rather than bypassing it.
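As an illustration of what a human-in-the-loop gate can look like, here is a minimal sketch. The risk flag, confidence threshold, and routing labels are illustrative assumptions; in practice these rules come from your governance policy, not a hard-coded function.

```python
# A minimal sketch of a human-in-the-loop gate for high-impact decisions.
# Thresholds, tiers, and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    subject_id: str
    outcome: str        # what the tool recommends
    confidence: float   # the tool's own confidence score, 0.0 to 1.0
    high_impact: bool   # does the decision significantly affect a person?

def route(rec: AIRecommendation) -> str:
    """Decide whether a recommendation may proceed or must go to a human."""
    if rec.high_impact:
        return "HUMAN_REVIEW"          # a person decides, with power to overturn
    if rec.confidence < 0.8:
        return "HUMAN_REVIEW"          # low confidence is escalated too
    return "PROCEED_WITH_SPOT_CHECKS"  # low-impact, high-confidence path

print(route(AIRecommendation("case-001", "refer", 0.95, high_impact=True)))
# -> HUMAN_REVIEW
```

The design point is that high-impact cases are routed to a person unconditionally, so model confidence alone can never bypass human judgement.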
4. Fairness and Non-Discrimination
We ensure that participants understand their obligations under the Equality Act 2010 and the Public Sector Equality Duty (PSED). Our training covers how to conduct Equality Impact Assessments on AI-assisted processes, how to select appropriate fairness metrics (such as demographic parity and equalised odds), and how to monitor outcomes across demographic groups.
AI systems must not undermine the legal rights of individuals or discriminate unfairly, a principle enshrined in the Government's five cross-sectoral regulatory principles for AI.
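Both metrics named above can be computed directly from a set of predictions. Here is a minimal sketch in plain Python; the record fields and sample data are illustrative assumptions.

```python
# A minimal sketch of demographic parity and equalised odds over binary
# predictions. Field names ("group", "pred", "actual") are illustrative.

def positive_rate(records):
    """Share of records that received a positive (1) prediction."""
    return sum(r["pred"] for r in records) / len(records)

def demographic_parity_gap(records, a, b):
    """Absolute difference in positive-prediction rates between groups a and b."""
    return abs(positive_rate([r for r in records if r["group"] == a])
               - positive_rate([r for r in records if r["group"] == b]))

def equalised_odds_gaps(records, a, b):
    """TPR and FPR differences between groups a and b (both should be small)."""
    gaps = {}
    for label, actual in (("tpr_gap", 1), ("fpr_gap", 0)):
        ga = [r for r in records if r["group"] == a and r["actual"] == actual]
        gb = [r for r in records if r["group"] == b and r["actual"] == actual]
        gaps[label] = abs(positive_rate(ga) - positive_rate(gb))
    return gaps

records = [
    {"group": "A", "actual": 1, "pred": 1}, {"group": "A", "actual": 0, "pred": 0},
    {"group": "A", "actual": 1, "pred": 1}, {"group": "A", "actual": 0, "pred": 1},
    {"group": "B", "actual": 1, "pred": 0}, {"group": "B", "actual": 0, "pred": 0},
    {"group": "B", "actual": 1, "pred": 1}, {"group": "B", "actual": 0, "pred": 0},
]
print(demographic_parity_gap(records, "A", "B"))  # 0.5
print(equalised_odds_gaps(records, "A", "B"))     # {'tpr_gap': 0.5, 'fpr_gap': 0.5}
```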
5. Privacy and Data Protection
All our programmes address the intersection of AI and UK data protection law, including UK GDPR. We cover lawful bases for processing personal data in AI systems, data minimisation principles, and the requirements around automated decision-making under Article 22, including the right of individuals to obtain meaningful information about the logic involved.
6. Accountability and Governance
We help organisations build the governance structures they need: from creating AI inventories and risk registers, to establishing clear lines of accountability for AI-assisted decisions. Our approach reflects the Government's emphasis on AI governance boards, ethics review processes, and quality assurance built into every stage of the AI lifecycle.
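As a sketch of what a single AI inventory entry might capture, here is an illustrative data structure. The field names are our assumptions for teaching purposes, not a mandated schema: the ATRS and your governance board define the authoritative record.

```python
# A minimal sketch of one AI inventory entry. Fields are illustrative
# assumptions, not the official ATRS schema.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    tool_name: str
    owner: str                  # a named, accountable role, not a team alias
    purpose: str
    affects_public: bool        # triggers ATRS-style transparency duties
    risk_level: str             # e.g. "low" / "medium" / "high"
    lawful_basis: str           # UK GDPR basis for any personal data processed
    human_review_point: str     # where a person can overturn the output
    known_limitations: list = field(default_factory=list)

entry = AIInventoryEntry(
    tool_name="Triage assistant (pilot)",
    owner="Head of Casework Operations",
    purpose="Suggests a priority band for incoming applications",
    affects_public=True,
    risk_level="high",
    lawful_basis="public task",
    human_review_point="Caseworker confirms or overrides every band",
    known_limitations=["Trained on 2019-2023 data; drift not yet assessed"],
)
```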
Aligned to UK Government Standards
The Frameworks We Work With
Our training content is built around and aligned to the following UK Government frameworks and standards. This means when your teams complete a programme with us, they are learning practices that directly support your compliance and assurance obligations.
| Framework | Published By | Relevance to Our Training |
|---|---|---|
| Data and AI Ethics Framework | GDS / DSIT (Dec 2025) | Seven ethical principles: transparency, accountability, fairness, privacy, safety, environmental sustainability, societal impact |
| AI Playbook for UK Government | GDS (Feb 2025) | 10 core principles for safe, responsible AI use in the public sector |
| Algorithmic Transparency Recording Standard (ATRS) | GDS / DSIT | Mandatory standard for documenting and publishing how algorithmic tools are used in public decision-making |
| Five Cross-Sectoral Regulatory Principles | DSIT | Safety, security & robustness; transparency & explainability; fairness; accountability & governance; contestability & redress |
| ICO AI and Data Protection Guidance | ICO | Practical guidance on fairness, automated decision-making, and data protection impact assessments |
| CDEI AI Assurance Framework | CDEI | Tools and techniques for building justified trust in AI systems, including bias audits and impact assessments |
What This Means in Practice
How We Apply These Principles
These commitments aren't just words on a page. Here's how they show up in our work:
Workshop design: Every workshop includes a dedicated module on ethical AI use, tailored to participants' roles and the specific AI tools their organisation is deploying.
Scenario-based learning: We use real-world case studies — including published ATRS records — to help participants understand what good looks like and where things go wrong.
Practical toolkits: Participants leave with frameworks they can apply immediately: bias checklists, transparency templates, and decision-logging tools aligned to ATRS requirements (a minimal sketch of the logging pattern follows this list).
Ongoing support: Our consultancy engagements include governance reviews and AI readiness diagnostics that assess how well your organisation's current practices align with these standards.
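To illustrate the decision-logging idea from the toolkits point above, here is a minimal sketch that appends one AI-assisted decision per line to a JSON Lines file. The field names are illustrative assumptions; an ATRS-aligned deployment would map them to the published record format and your retention policy.

```python
# A minimal sketch of an append-only decision log, assuming JSON Lines on
# disk. Field names are illustrative, not a compliance artefact.
import json
import datetime

def log_decision(path, case_id, tool, ai_output, human_decision, reviewer):
    """Append one AI-assisted decision, keeping AI output and human decision distinct."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case_id": case_id,
        "tool": tool,                      # which system produced the output
        "ai_output": ai_output,            # what the tool recommended
        "human_decision": human_decision,  # what the accountable person decided
        "reviewer": reviewer,              # who exercised human control
        "overridden": ai_output != human_decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("decisions.jsonl", "case-001", "triage-assistant",
             ai_output="priority-2", human_decision="priority-1",
             reviewer="j.smith")
```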
Our Own Use of AI
Practising What We Teach
We use AI tools in our own business — for research, content ideation, and administrative efficiency. We hold ourselves to the same standards we teach:
We never use client-identifiable data in any AI tool.
All AI-assisted content is reviewed by a human before publication, checking for accuracy, bias, and alignment with our values.
We are transparent about where and how we use AI in our work. If you'd like to know more, just ask.
We stay current. As frameworks evolve — the ATRS is still expanding across the broader public sector, and the Data and AI Ethics Framework was updated as recently as December 2025 — we update our materials accordingly.
Ready to Build Ethical AI Capability in Your Organisation?
Whether you're a government department preparing for ATRS compliance, a public body navigating the AI Playbook, or any organisation that wants its people to use AI with confidence and integrity, we can help.

