Corruption and financial crime are at crisis levels in the UK.

Both are a severe drain on resources and public confidence, yet enforcement has historically been under-resourced and uncoordinated, leaving the UK a hive for economic crime to this day.

Last year, however, saw significant developments geared towards changing the UK's fraud landscape. To name two key examples: on 1 September 2025, the failure to prevent fraud offence introduced by the Economic Crime and Corporate Transparency Act 2023 came into force; and in December 2025 the Anti-Corruption Strategy launched, giving enforcement more teeth against corruption, including by building on AI-powered fraud detection technology and using AI to speed up Serious Fraud Office (SFO) investigations.

In this article, we look at what is on the horizon and what businesses must be aware of in 2026 from a fraud perspective, covering key changes in the law and identifying shifts in government strategy that could be relevant to fraud prevention.

Failure to Prevent Fraud

The offence of failure to prevent fraud, introduced under section 199 of the Economic Crime and Corporate Transparency Act 2023 (ECCTA 2023), became enforceable against large organisations from September 2025, meaning that 2026 will be its first full year of enforceability. Under this section, criminal proceedings may be brought against a large organisation where a person associated with it commits a fraud offence intending to benefit the organisation or any person to or from whom the organisation provides services.

A defence to criminal liability is available under subsection (4) where the organisation can prove that it had reasonable fraud prevention procedures in place or that, in the circumstances, it was not reasonable to expect the organisation to have any such procedures in place. The benchmark for this defence is most readily met where the business has implemented training and policies focused on what fraud and bribery could look like in the business, how employees can assist in taking action where fraud or bribery occurs, and the whistleblower protections available to incentivise the identification of fraud.

The Crown Prosecution Service (CPS) and Serious Fraud Office (SFO) have published joint guidance for prosecutors on corporate prosecutions (updated on 10 November 2025), setting out updated routes to establishing corporate criminal liability, including the either-way, strict liability offence of failure to prevent fraud.

As yet, there have been no recorded prosecutions under this offence. However, its introduction marks a political shift towards holding companies accountable for fraud that occurs internally and towards effective whistleblowing policies that flush out potentially fraudulent activity.

For more information on the scope of the new criminal offence, the consequences for committing the offence, and best practices to prevent liability, see our briefing here.

The UK Government's Anti-Corruption Strategy (Dec 2025)

Following the ECCTA 2023 coming into force, the UK government continued its commitment to tackling economic crime by issuing its Anti-Corruption Strategy (the Strategy) in December 2025, pledging greater efficiency in the enforcement of corruption, fraud and bribery proceedings.

The Strategy focuses on corruption while building on the Economic Crime Plan 2 (2023–2026) (ECP2). Whereas ECP2 sets out the UK's broad framework for fighting economic crime in partnership with the private sector, the Strategy seeks to deliver a more detailed plan to target corruption, with over 120 commitments.

In particular, the Strategy implements the following key reforms:

  • Firstly, the Strategy extends the use of AI technology by the National Crime Agency, Serious Fraud Office and the International Anti-Corruption Coordination Centre to accelerate investigations and detect indicators of fraud early. 
  • Secondly, in a bid to streamline supervisory functions in anti-money laundering (AML) and counter-terrorist financing (CTF), supervision currently split across 22 professional services bodies will be consolidated into one. Identification, prevention and prosecution of non-compliance with AML and CTF standards in the professional services sector will be supervised by the Financial Conduct Authority (FCA) alone, overhauling the uncoordinated approach taken to date.
  • Thirdly, the Strategy implements reforms at Companies House by extending requirements to provide basic and beneficial ownership information across crown dependencies and overseas territories, advocating for similar public registers to be implemented across the world, and opening up data sharing to UK law enforcement agencies to look beneath the surface of complex (and fraudulent) ownership structures.
  • Finally, as the period of the Economic Crime Plan 2 concludes, the Strategy announces the government's plans to publish successor frameworks: a new, expanded Fraud Strategy in 2026, supported by a new Anti-Money Laundering and Asset Recovery Strategy. In these papers, it is anticipated that the government will encourage the effective use of whistleblowing policies to prevent fraudulent behaviour in large organisations, mandate transparency of commercial decisions and behaviour, extend financing of specialist roles in policing authorities, and promote the use of AI by a coordinated network of regulatory bodies.

Tackling Fraud with Artificial Intelligence

Organisations now utilise artificial intelligence to respond to the rapid digitisation of fraud and its ability to go undetected behind sophisticated and complex technologies. As noted above, the UK government is pushing supervisory bodies towards extended use of AI for this reason; however, it is also accepted that they should do so with transparency.

On one hand, the UK government reports that AI-driven analytics, such as the Fraud Risk Assessment Accelerator, have prevented fraud losses through the widespread AI processing of public datasets to identify anomalies in human behaviour. On the other hand, there are concerns that unintended bias in the training of AI models, combined with data processing at this scale, could have an unfair effect on individuals in marginalised groups.

Going forward, regulatory bodies, particularly the FCA, will implement policies focusing on the explainability and risk management of AI used to detect financial crime. The FCA has initiated reviews of advanced AI's impact on finance, emphasising the need for clear guidance on AI oversight by senior managers rather than prescriptive AI-specific rules, given the technology's occasional unpredictability. Despite AI's many benefits, and the fact that it is necessary to identify complex economic crime that utilises the same technologies, organisations are expected to implement robust human controls and auditing procedures to ensure reliability in AI-based fraud detection.

This methodology may well extend to the application of the new failure to prevent fraud offence: a failure to maintain human oversight of AI decision-making could amount to a failure to have "reasonable procedures" to prevent fraud that algorithmic systems might miss. Similarly, having no human oversight of AI, or failing to utilise an AI model that explains the rationale of its decision-making where appropriate (known as explainable AI, or xAI), could signify a lack of accountability or transparency in processes, as bias in artificial intelligence models has been, and will continue to be, an issue to tackle.

This also applies on an international scale. The EU Artificial Intelligence Act adds a parallel layer of compliance for companies who operate in (or deal with those in) EU Member States. Its purpose is to ensure the healthy development and use of AI, introducing a robust system of transparency and documentation in AI decision-making. Transparency in AI decision-making, a helpful force in tackling and exposing bias in AI use, will continue to be a priority for legislators in the UK and abroad as pressure on businesses to detect fraud with AI increases.

Increased Sharing of Fraud Data

As part of its dedicated approach to tackling economic crime, the government has implemented GDPR-compatible reforms to assist organisations and businesses in sharing fraud data, allowing greater efficacy in the prevention of crime.

The most notable example was the grant of information sharing measures to AML-regulated firms under sections 188 and 189 of the ECCTA 2023; these measures first came into force on 15 January 2024, with guidance published on the Government website in 2025.

However, there are a few initiatives that are being employed in 2026 which extend this approach beyond AML-regulated firms:

  • From spring 2026, Companies House is requiring mandatory identity verification of all directors and people with significant control, to ensure a cleaner record of all companies' ownership and governance.
  • Another example is found in the banking and finance sector: the Payment Systems Regulator is driving an initiative to require Payment Service Providers to increase the transparency of their fraud data relating to Authorised Push Payment fraud, so that such fraud can be identified quickly and prevented in real time.

Public Authorities (Fraud, Error and Recovery) Act 2025

Parliament has also enacted the Public Authorities (Fraud, Error and Recovery) Act 2025 (the PA(FER)A) with a view to safeguarding public money from economic crime. This is achieved by granting powers to the Minister for the Cabinet Office (the Minister), and by extension the Public Sector Fraud Authority (the PSFA), to investigate and recover money lost from public sector fraud.

Among these provisions, there is an obligation on information holders (primarily banks and financial institutions) to disclose to the Minister (and the PSFA) all necessary and proportionate information which relates to a person whom they have reasonable grounds to suspect has committed fraud against a public authority (section 3 PA(FER)A). 

This information is obtained by serving an information notice on these information holders under section 3 PA(FER)A 2025. Specific investigative powers include the power to retrieve specific bank account statements and other general information about a bank's (or financial institution's) customer who is reasonably suspected of committing fraud against a public authority. For reference, 'general information notices' can be served to obtain statements summarising each account an individual holds with the bank or financial institution. Information holders are restricted from tipping off the account holder about the investigation and may be liable for failure to provide the information under s. 54 PA(FER)A.

The Government has opened a consultation on three draft Codes of Practice, covering Eligibility Verification, Debt Recovery and Information Gathering, to make the changes under the PA(FER)A and the PSFA's new powers understandable and digestible for the public. This consultation closes on 27 February 2026.

Conclusion

The developments emerging for 2026 signal a decisive shift in the UK’s approach to fraud prevention, marked by a stronger governmental position on pursuing illegal activity, enhanced regulatory coordination and an increasing reliance on advanced technology. 

The first full year of the failure to prevent fraud offence underscores a growing expectation that organisations must adopt a proactive approach, with whistleblowing, fraud prevention training and effective oversight becoming essential, rather than optional. Alongside this, the Government’s Anti-Corruption Strategy and related reforms reflect a clear move towards streamlining and centralising supervision, specifically through the FCA’s expanded supervisory role and wider data‑sharing initiatives designed to close gaps in enforcement.

The growing deployment of AI across enforcement bodies and regulated firms presents both opportunities and challenges. While AI promises faster detection and improved risk assessment, regulators are equally focused on ensuring that its use remains explainable, unbiased and supported by adequate human oversight. 

This year, more than ever, organisations face a landscape in which diligence, transparency and robust governance are of paramount importance. Clients and legal practitioners should prioritise risk-based AI governance frameworks, align them with existing and evolving legal standards, and maintain documentation detailing their use in the prevention of crime, so as to be best placed to navigate this evolving area of law.

Please get in touch if you require advice on best practices to respond to these regulatory changes.