Artificial intelligence (AI) is now a key tool used heavily by businesses and public bodies.
From financial services to healthcare, logistics to retail, AI mimics human intelligence to shape decisions, process data and solve problems efficiently. For organisations across the UK, the question is no longer whether to adopt AI, but how to do so responsibly for the long term.
AI ethics provides the framework for addressing this challenge. It is not a technical afterthought but a legal necessity governing how AI systems are designed, procured and deployed. Ethical AI sits at the heart of regulatory compliance, risk management and long-term sustainability, whether for a commercial business or a public body.
What are the core principles involved in AI ethics?
AI ethics is a set of values, principles and techniques that guide responsible conduct in the development and use of AI. It ensures that automated systems operate consistently with fundamental rights and values, so that AI is genuinely used to benefit society. Without AI ethics, AI can entrench existing biases, compromise personal data and expose organisations to legal and reputational risk.
The following core principles are commonly used by organisations in their AI governance frameworks:
- Fairness: AI should treat all individuals fairly, avoiding bias and discrimination and ensuring compliance with the Equality Act 2010.
- Transparency: providing accessible explanations of how AI influences decisions, so that users can understand how the system works and why it reached a particular outcome.
- Accountability: ensuring that responsibility for outcomes rests with identifiable decision-makers, such as those who built the AI system or those who use it.
- Upholding data privacy and protection: AI systems must meet privacy and data protection standards consistent with the Data Protection Act 2018 and UK GDPR, supported by robust cybersecurity measures to prevent data breaches and unlawful access.
- Reliability and safety: AI tools should be tested, validated and monitored regularly to confirm that they work as intended.
- Human oversight: maintaining human involvement proportionate to the risks and context in which the AI system is used.
The role of safe AI
Whilst ethical AI encompasses the broad principles of fairness, transparency, and accountability that should guide AI development, safe AI focuses specifically on ensuring these systems operate reliably without causing harm. Introducing safe AI means designing systems that can mitigate harm and fail predictably when things go wrong. AI systems often operate in complex, real-world environments where they encounter scenarios that designers never anticipated during development. Safe AI requires organisations to establish clear boundaries around what their AI systems can and cannot do, implementing technical controls that prevent the system from making decisions outside its competence or in situations where uncertainty is too high. This includes setting thresholds for when an AI must defer to human judgement, creating fallback mechanisms when the system encounters borderline cases, and ensuring that errors, when they inevitably occur, do not cascade into catastrophic outcomes.
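By way of illustration only, the sketch below shows one way such a threshold and fallback might look in practice: a simple Python gate that only acts automatically when a model's confidence is high and otherwise defers to human review. The threshold values, the Decision structure and the function names are assumptions made for this example, not a prescribed design.

```python
from dataclasses import dataclass

# Hypothetical, illustrative thresholds; real values would be set through
# risk assessment and testing for the specific use case.
AUTO_APPROVE_THRESHOLD = 0.90   # act automatically above this confidence
HUMAN_REVIEW_THRESHOLD = 0.60   # below this, always defer to a person

@dataclass
class Decision:
    outcome: str        # "approved", "rejected" or "needs_human_review"
    confidence: float   # the model's confidence score (0.0 to 1.0)
    reason: str         # plain-language explanation kept for audit purposes

def gate_decision(score: float, pass_mark: float = 0.5) -> Decision:
    """Route a model score: act only when confidence is high, otherwise defer."""
    confidence = max(score, 1 - score)  # distance from the decision boundary
    if confidence < HUMAN_REVIEW_THRESHOLD:
        # Borderline case: fall back to human judgement rather than guessing.
        return Decision("needs_human_review", confidence,
                        "Model confidence too low for an automated outcome")
    if confidence < AUTO_APPROVE_THRESHOLD:
        # Moderately confident: still send to a reviewer, with the model's
        # suggestion attached to speed up the human check.
        return Decision("needs_human_review", confidence,
                        "Suggested outcome provided for human confirmation")
    outcome = "approved" if score >= pass_mark else "rejected"
    return Decision(outcome, confidence, "Automated decision within competence")

if __name__ == "__main__":
    for model_score in (0.97, 0.72, 0.51):
        print(gate_decision(model_score))
```

The design choice being illustrated is that the system never produces a fully automated outcome in its zone of uncertainty; it routes those cases to a person and records the reason, which also supports the audit trail that accountability and transparency require.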
Safe AI therefore demands a proactive approach to identifying potential failures before deployment, stress-testing systems against unexpected inputs or data, and maintaining clear procedures for rapidly disabling or rolling back AI functionality when safety concerns emerge. Investing in safe AI is ultimately about ensuring that the pursuit of efficiency and innovation never comes at the expense of the wellbeing of the people who interact with these systems.
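In the same spirit, one hypothetical, minimal illustration of "rapidly disabling" an AI feature is a kill switch checked before every automated decision, which fails safe if anything goes wrong. The file name, flag keys and function names below are invented for the example; in practice this might be a managed feature-flag service controlled by the team responsible for AI safety.

```python
import json
from pathlib import Path

# Hypothetical flag store; a real deployment might use a feature-flag
# service or configuration system rather than a local file.
FLAG_FILE = Path("ai_feature_flags.json")

def ai_enabled(feature: str) -> bool:
    """Return True only if the named AI feature is explicitly switched on."""
    try:
        flags = json.loads(FLAG_FILE.read_text())
        return bool(flags.get(feature, False))
    except (OSError, json.JSONDecodeError):
        # If the flag store is missing or unreadable, fail safe:
        # disable the AI path and fall back to the manual process.
        return False

def handle_application(application: dict) -> str:
    if not ai_enabled("automated_screening"):
        return "routed_to_manual_review"
    # ... automated screening would run here ...
    return "processed_by_ai"

if __name__ == "__main__":
    print(handle_application({"applicant_id": 123}))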
Why safe and ethical AI is good for business
For commercial operators, the approach taken to AI ethics can make or break a business. Implementing ethical AI helps your business build trust with key stakeholders, create a competitive advantage in tenders and avoid the following risks:
- Legal risk: an organisation deploying AI systems in its operations remains legally responsible for the risks created by those systems, even where it did not develop them itself. Examples of where the legal risk of AI use has materialised for organisations include:
- An Amazon recruitment tool was reportedly abandoned because of sexist bias: the system penalised CVs that included the word "women", having seemingly taught itself that male candidates were the better option.
- iTutor Group settled a claim brought by the US Equal Employment Opportunity Commission after its AI recruitment tool rejected applicants on the basis of age. The software automatically rejected older candidates, regardless of their qualifications.
- Reputational risk: news and outrage can spread quickly if AI systems and "algorithmic bias" cause offence or unfair outcomes. Reputational risk often goes hand in hand with other risks and can be difficult to recover from. Real-world examples show that the risk is heightened when the stakes of the outcome are higher:
- When exams were cancelled during COVID-19, UK students in Years 11 and 13 could not sit their exams. The Government announced that Ofqual would use an algorithm to moderate teacher-assessed grades. The algorithm drew on factors such as the school's historical results, teachers' assessments of the student and predicted grades to allocate final grades. The model unfairly downgraded nearly 40% of grades, disproportionately affecting students from deprived backgrounds and state schools. There was a public outcry, and the Government abandoned the grading approach a week later.
- Operational risk: organisations may be forced to withdraw or suspend AI tools if ethical flaws emerge late in deployment. This can waste significant investment and disrupt core processes.
The trust deficit: why public bodies must lead on AI ethics
The public sector is in the unenviable position of facing more extensive duties than private businesses and even greater scrutiny, because it is funded by the taxpayer. AI is frequently used in the public sector for important tasks such as identifying high-risk behaviour in CCTV footage, detecting fraudulent tax claims and answering public queries through chatbots.
AI tools deployed by government and local authorities create new risks in relation to duties of fairness, transparency, and accountability:
- The Public Sector Equality Duty (PSED) under the Equality Act 2010 requires authorities to consider the effect of decisions on protected groups and advance equality of opportunity.
- Transparency obligations under the Freedom of Information Act 2000 mean that information about AI usage is easier to obtain from public bodies than from private-sector organisations, increasing exposure to scrutiny.
- Public bodies are at risk of judicial review challenges where their actions are unfair or irrational. This risk cannot be avoided by "passing the buck" to AI systems: decisions remain subject to judicial review, and the risk of an unreasonable or irrational decision increases where ethical AI safeguards are not in place.
When AI systems are used in a public sector context, particularly for welfare allocation, policing or healthcare, ethical standards must be maintained constantly, because these processes affect public trust, fairness and the rights of vulnerable individuals.
Conclusion
Good AI ethics and safety is not a box-ticking exercise. Taking it seriously is vital if organisations are to use AI responsibly and drive innovation while avoiding risk. The case for ethical AI is powerful: it strengthens investor confidence, procurement success and staff engagement in both the private and public sectors. Public bodies, as large procurers of goods and services, are standard setters and must take responsibility for reinforcing benchmarks for ethical AI. Private entities that wish to stay competitive will need to understand and engage with AI ethics as well. For both types of organisation, the pitfalls and lost opportunities that follow from ignoring AI ethics are simply too expensive.
We can support both the private and public sector to navigate AI ethics by establishing governance structures, conducting risk assessments, building ethical requirements into contracts with AI suppliers and training staff to recognise ethical risks in AI systems.
We are experts in helping you maintain a strong AI ethics framework, so stay tuned for more in this series, where we will cover the AI rulebook, best practices for ethical AI, AI ethics in the supply chain and AI ethics challenges for the public sector.