For most organisations, artificial intelligence (AI) is not developed in-house. Instead, AI is procured: embedded in HR platforms, finance systems, customer service tools, or analytics software. Businesses remain responsible for the outcomes of the AI systems they use but often lack visibility over how those systems were designed, trained or validated.

This article examines the ethical, safety and legal implications of procuring AI through the supply chain, focusing on due diligence, contractual risk allocation, and governance and safe deployment of third-party AI systems.

The problem of 'moral outsourcing'

Relying on third-party AI does not absolve an organisation of responsibility. Courts, regulators, and the public are unlikely to accept the defence that “the system came from our supplier”.

This has been described as the problem of “moral outsourcing”: delegating ethically sensitive decisions (such as recruitment, credit scoring or healthcare triage) to opaque tools over which the procuring entity has little control. Safety concerns compound these ethical challenges: AI systems may produce unreliable outputs, fail in unpredictable ways, or create operational risks that the procuring organisation is ill-equipped to identify or mitigate. From a legal perspective, liability remains with the deploying organisation. The supply chain does not break the chain of accountability; it extends it.

From a UK perspective, organisations deploying third-party AI must navigate several areas of law:

  1. Equality law: Under the Equality Act 2010, if a recruitment tool discriminates against candidates with protected characteristics, the employer (not just the vendor) may face liability. This principle extends across the legal framework governing AI deployment.
  2. Data protection law: The UK GDPR and elements of the Data (Use and Access) Act 2025 impose continuing responsibility on controllers to ensure that any AI system they use complies with the principles of fairness, transparency and accountability. Suppliers may be processors or joint controllers, but the deploying organisation cannot abdicate responsibility for outcomes.
  3. Consumer protection law: Businesses offering services to consumers remain responsible for ensuring fairness under the Consumer Rights Act 2015 and associated legislation even where AI tools influence outcomes.

These are only some of the laws that must be taken into account when procuring and implementing AI. Critically, legal responsibility cannot be delegated down the supply chain; organisations remain accountable for the AI systems they deploy, regardless of their origin.

Common use cases of third-party AI

Notwithstanding this moral outsourcing concern, across the UK and globally, businesses and public bodies increasingly procure and deploy AI through their supply chains in applications including: recruitment and HR platforms that filter CVs or rank candidates; customer service chatbots and virtual assistants; predictive analytics tools in finance, retail and logistics; healthcare triage chatbots and diagnostic support technologies; and fraud detection and compliance monitoring systems. Each carries the potential for bias, opacity, error or safety failures: risks that ultimately rest with the end-user organisation.

The approach to managing AI supply chain risk differs between small and large organisations.

  • SMEs may lack bargaining power to negotiate bespoke contractual terms. Ethical due diligence and careful vendor selection become particularly important. Trade associations or sector regulators may provide template clauses and frameworks, though resource constraints remain a challenge for smaller organisations.
  • Large enterprises have greater leverage but face greater scrutiny. Public commitments to ethical procurement may form part of ESG obligations and investor relations, creating reputational imperatives beyond legal compliance.

If AI procurement through the supply chain is not handled properly, even the simplest of tasks can expose the organisation to greater ethical and safety risks than proceeding without AI at all. The fundamental challenge is clear: an organisation's AI governance framework is only as robust as its weakest supply chain link. Internal policies and safeguards provide limited protection if vendors operate to lower standards or lack adequate oversight mechanisms.

Real-world failures

AI supply chain risks are no longer theoretical. Recent controversies involving procured or third-party AI systems include:

  • Amazon’s recruitment tool (2018): the company disbanded its experimental AI hiring system after discovering it disadvantaged women in technical roles.
  • UK school exam grading algorithm (2020): whilst developed in-house by Ofqual, a standardisation algorithm used to award A-level and GCSE grades sparked widespread backlash when approximately 40% of results were downgraded, disproportionately affecting students from disadvantaged backgrounds.
  • Dutch SyRI welfare fraud system (2020): The Hague District Court struck down the SyRI system on the basis that it violated privacy rights and lacked transparency, demonstrating the risks of deploying opaque algorithmic systems without adequate oversight. A UK local authority procuring such a system would likely bear liability under data protection and human rights law.
  • HireVue automated recruitment (2021): US-based recruitment software provider HireVue stopped using facial analysis after criticism that the technology could be biased. Many UK employers rely on similar platforms. This highlights risks where procurement embeds untested technologies.

These examples demonstrate the risks of procuring and deploying opaque AI systems through the supply chain without adequate oversight, safety validation or contractual safeguards. In each case, the deploying organisation bore the reputational and legal consequences, illustrating that supply chain protections determine overall risk exposure. Ethical risk management should therefore be at the forefront of an organisation's strategy.

Due diligence, safety validation and ethical procurement

Best practice begins at the procurement stage, when first engaging with suppliers. Organisations should conduct supplier audits examining training data, validation processes, safety testing and ethical safeguards. Requesting algorithmic impact assessments (increasingly common in leading jurisdictions) forms part of this scrutiny, alongside assessment of system reliability, operational safety and supply chain resilience, including vendor cybersecurity, data handling and governance structures. Procurement teams must also review regulatory alignment to ensure suppliers comply with ICO guidance, FCA expectations or sector-specific requirements.

The UK government has published guidance on AI procurement for the public sector, encouraging buyers to include ethical considerations in tender processes (UK Gov AI Procurement Guidelines, 2020). Whilst aimed at government, the principles are equally relevant for private organisations.

Effective AI governance requires organisations to extend their ethical, safety and legal standards throughout their supply chains. Internal compliance measures alone are insufficient; vendors and partners must be held to equivalent standards encompassing both safe and ethical AI deployment to ensure end-to-end accountability.

In addition to the above, organisations should consider adopting two complementary governance instruments to manage AI ethics across their operations and supply chains.

First, an externally published AI ethics statement sets out the organisation's principles and commitments regarding the safe and ethical development, procurement and deployment of AI systems. Such statements demonstrate accountability to customers, investors and regulators, and provide a framework against which the organisation's practices can be assessed. They typically address principles such as fairness, transparency, accountability, privacy, safety and human oversight.

Second, a supply chain AI ethics code of conduct or policy establishes binding standards for vendors and partners. This instrument translates high-level ethical commitments into contractual requirements, ensuring that third-party AI systems align with the organisation's values and risk appetite. A supply chain code should specify minimum standards for training data quality, bias testing, explainability, safety validation, security and ongoing monitoring.

Together, these instruments offer several benefits. They support sales and business development by providing assurance to prospective clients that AI systems are designed and implemented both safely and responsibly. They mitigate legal and reputational risks by embedding safety protocols and ethical safeguards throughout the supply chain. They also facilitate consistent governance by ensuring that internal teams and external partners operate to the same standards.

Leading organisations in regulated sectors have begun to adopt such frameworks. Financial services firms, healthcare providers and public sector bodies increasingly publish AI ethics statements alongside procurement policies that require suppliers to meet equivalent standards.

Contractual safeguards in the AI supply chain

Contracts are the critical tool for managing AI supply chain risk and allocating responsibility between the procuring organisation and its vendors. Transparency clauses should oblige suppliers to disclose data sources, testing results and update policies. Liability and indemnity provisions should ensure vendors share responsibility for unlawful, unsafe or harmful outcomes, whilst audit rights give buyers the ability to review compliance with contractual and regulatory standards. Change control mechanisms requiring notification of significant updates or retraining of AI models protect against drift, and termination rights allow exit if the AI system is found to be unlawful, biased, unsafe or unreliable.

These provisions should be coupled with robust internal governance to ensure that contractual rights are exercised in practice. However, contractual protections are only effective if suppliers have the capability and commitment to meet their obligations, emphasising the importance of vendor selection and ongoing supply chain monitoring.

AI supply chain management: templates and checklists

Organisations procuring AI should develop supplier questionnaires covering training data, bias testing, safety validation, governance and regulatory compliance. They should also consider publishing an AI ethics statement that sets out their principles and commitments, and adopting a supply chain AI ethics code of conduct that sets binding standards for vendors and partners, providing assurance, mitigating risks and ensuring consistent governance across internal and external AI deployments. Ethical procurement policies should integrate AI ethics into supply chain frameworks, whilst board oversight should ensure that significant AI procurement decisions are escalated to directors.

Practical tools, such as the World Economic Forum’s AI Procurement Guidelines and the OECD framework on trustworthy AI, can provide useful templates.

Organisations must recognise that their AI risk profile is determined not solely by their own practices, but by the cumulative standards across their entire supply chain. A comprehensive approach requires alignment of ethical principles, safety protocols, technical safeguards and governance mechanisms from procurement through deployment.

Conclusion

AI supply chain risk is one of the most pressing challenges for businesses and public bodies in 2025. Procuring AI does not absolve organisations of responsibility; it creates a dual responsibility to manage both internal governance and supplier standards. Crucially, an organisation's protections are only as strong as its supply chain's; focusing solely on internal AI implementation whilst neglecting vendor oversight leaves critical vulnerabilities unaddressed.

Lawyers advising on procurement have a pivotal role. By embedding transparency, liability and audit provisions into agreements, and by guiding clients through due diligence, lawyers can help ensure AI supply chains remain legally compliant and ethically defensible. The supply chain represents both the greatest risk and the greatest opportunity for keeping AI systems compliant and ethical.

Organisations that treat AI supply chain safety and ethics seriously, recognising that their protections extend only as far as their vendors' protections, will be best placed to maintain trust, attract investment and deliver resilient services. The supply chain is not peripheral to AI governance; it is central to it.