Artificial Intelligence (AI) has rapidly evolved from a niche innovation to a core strategic asset in banking. Financial institutions are increasingly deploying AI across a wide range of functions, including fraud detection, customer service, and credit underwriting.

These technologies promise significant efficiency gains and improved customer experiences. However, as banks begin to experiment with consumer-facing AI tools such as ChatGPT, particularly in regulated areas like credit decisioning, the regulatory and operational risks are becoming more pronounced. This article explores the current state of AI adoption in banking, the associated compliance challenges, and practical strategies for risk mitigation, while also looking ahead to emerging regulatory frameworks.

AI adoption in banking: From pilots to core strategy

AI is now deeply embedded in banking operations. In fraud detection, machine learning models are used to identify suspicious transactions in real time, helping institutions respond swiftly to potential threats. Customer service has been transformed by chatbots and virtual assistants that handle routine inquiries, freeing up human agents for more complex tasks. In credit underwriting, predictive analytics are streamlining decision-making processes, reducing manual review and accelerating loan approvals. Industry surveys indicate that over 90% of banks are actively investing in AI, with budgets continuing to grow. Agentic AI—systems capable of autonomously performing multi-step tasks—is also gaining traction, particularly in compliance and risk management functions.

Regulatory and operational risks: A growing compliance challenge

Despite its benefits, AI introduces a complex array of risks that concern compliance officers and risk managers. One of the most pressing issues is data privacy. AI systems often rely on large datasets that may include personal data and special category data (such as health information, biometric data, or data revealing racial or ethnic origin).

Under UK GDPR, processing personal data requires a lawful basis under Article 6, and processing special category data additionally requires a condition under Article 9. Without a lawful basis, an applicable Article 9 condition where required, and data handling protocols compliant with UK GDPR principles, institutions risk breaching data protection law.

Fair lending compliance is another critical issue. If training data contains historical biases or if the model’s logic is opaque, there is a real risk of discriminatory outcomes. Under the Equality Act 2010, financial institutions must not discriminate on the basis of protected characteristics, including age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. Discriminatory outcomes may amount to indirect discrimination even where unintentional, exposing institutions to legal liability under the Equality Act 2010, regulatory action by the FCA, and claims under consumer credit legislation, as well as undermining public trust.

Model explainability is a growing regulatory priority. Financial institutions must be able to justify adverse credit decisions, yet many AI models, especially those based on deep learning, lack transparency. Additionally, Article 22 of the UK GDPR gives individuals the right not to be subject to decisions based solely on automated processing, including profiling, that produce legal effects concerning them or similarly significantly affect them; the automatic rejection of a credit application is a clear example. The right is subject to exceptions where the decision is necessary for entering into or performing a contract, authorised by law, or based on the individual's explicit consent. Where an exception applies, appropriate safeguards must be in place, including the right to obtain human intervention, to express their point of view, and to contest the decision.

Third-party vendor risk is also a concern. When banks use external AI tools, they remain responsible for the outcomes: outsourcing does not diminish regulatory responsibility. In the UK, this is governed by FCA and PRA rules on outsourcing. The Digital Operational Resilience Act (DORA) is an EU regulation requiring robust oversight of outsourced services by EU financial institutions; UK banks should instead look to the UK's own operational resilience framework, including the FCA's and PRA's operational resilience rules that came into force in March 2022 and guidance on outsourcing and third-party risk management. Additionally, AI systems are vulnerable to prompt injection attacks, in which malicious inputs manipulate model behaviour. Such attacks can lead to data leakage, unauthorised access, and reputational damage.

ChatGPT-specific concerns in credit decisioning

Consumer AI tools such as ChatGPT present unique challenges when used in regulated banking functions. One major issue is the potential for bias in training data. ChatGPT is trained on vast amounts of internet-sourced content, which may reflect societal biases. If used in credit decisioning, this could result in discriminatory lending practices.

Another concern is the lack of audit trails. Unlike traditional models, ChatGPT does not produce structured logs of its decision-making process, which makes it difficult to conduct regulatory reviews or respond to examination requests. Confidentiality is also at risk. Inputting sensitive customer data into public AI tools can lead to data protection breaches under UK GDPR, especially if the data is retained or used for further training without proper safeguards. In terms of specific UK GDPR provisions, such processing may breach the principles of purpose limitation (Article 5(1)(b)) and data minimisation (Article 5(1)(c)), and may constitute unlawful processing without an appropriate legal basis under Article 6. Financial institutions must ensure appropriate technical and organisational measures are in place under Article 32, and may need to conduct Data Protection Impact Assessments under Article 35 before using such tools.
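For institutions that nonetheless route queries through external tools, one practical control for the audit-trail gap described above is to generate their own record at the point of use. The sketch below is illustrative only: call_model is a placeholder for whatever client integration the institution actually uses, and the logged fields are an assumed minimum rather than a regulatory standard.

```python
# A minimal sketch of an audit-trail wrapper, assuming calls to any external
# AI tool are routed through a single internal function. call_model is a
# hypothetical placeholder, not a real vendor API.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def call_model(prompt: str) -> str:
    # Placeholder for the real external-model call.
    return "model response"

def audited_call(prompt: str, user_id: str, purpose: str) -> str:
    """Record who asked what, for what purpose, and what came back, in a reviewable format."""
    response = call_model(prompt)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "purpose": purpose,
        "prompt": prompt,
        "response": response,
    }))
    return response

audited_call("Summarise this customer complaint", user_id="analyst-042", purpose="complaint triage")
```

A wrapper of this kind does not make the underlying model explainable, but it does give compliance teams a contemporaneous record they can produce on examination.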

Mitigation strategies: Building a resilient AI governance framework

To address these risks, financial institutions must implement comprehensive governance frameworks. Establishing dedicated AI oversight committees can help ensure that model deployment aligns with ethical standards and regulatory requirements. These committees should include representatives from compliance, legal, risk, and technology teams.

Model validation is essential. Institutions should prioritise the use of interpretable models or integrate explainability tools that allow for clear justification of decisions. Human oversight remains critical, particularly for high-impact decisions such as loan approvals and fraud investigations. Maintaining a “human-in-the-loop” approach ensures that AI outputs are reviewed before final action is taken.
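As an illustration of what an interpretable, human-in-the-loop decision flow can look like, the sketch below uses a simple logistic regression whose coefficients double as reason codes and routes declines and borderline cases to a reviewer. The feature names, thresholds, and synthetic data are assumptions for illustration, not a recommended credit model.

```python
# A minimal sketch (not a production credit model): an interpretable logistic
# regression whose coefficients yield per-applicant reason codes, with a
# human-in-the-loop gate for adverse or borderline decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt_to_income", "missed_payments", "credit_history_years"]

# Synthetic stand-in for prepared applicant data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 0] - X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def decide(applicant: np.ndarray, approve_threshold: float = 0.7, review_band: float = 0.2):
    """Return a decision, the top adverse reason codes, and whether a human must review."""
    p_approve = model.predict_proba(applicant.reshape(1, -1))[0, 1]
    # Each feature's contribution to the score; most negative = strongest adverse reason.
    contributions = model.coef_[0] * applicant
    adverse_reasons = [FEATURES[i] for i in np.argsort(contributions)[:2]]
    if p_approve >= approve_threshold:
        return "approve", adverse_reasons, False
    # Declines and borderline cases are routed to a human reviewer before any final action.
    decision = "refer" if p_approve >= approve_threshold - review_band else "decline (pending review)"
    return decision, adverse_reasons, True

print(decide(X[0]))
```

The point of the sketch is the shape of the process: a model whose outputs can be traced to named factors, and a gate that prevents adverse outcomes from being issued without human review.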

Vendor due diligence is another key strategy. Banks must thoroughly assess third-party AI providers, examining their data handling practices, model transparency, and compliance certifications. Input sanitisation protocols should also be implemented to reduce the risk of prompt injection attacks and data leakage. This includes filtering and redacting inputs and using enterprise-grade AI platforms that do not retain or train on user data.
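A minimal example of such input filtering is sketched below, assuming an internal gateway sits between bank systems and any external AI platform; the redaction patterns and injection markers shown are illustrative and would need to be far more extensive in practice.

```python
# A minimal sketch of pre-submission filtering at an assumed internal gateway.
# Patterns and phrases are illustrative, not an exhaustive control set.
import re

# Redact obvious customer identifiers before text leaves the institution.
PII_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "uk_sort_code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Reject inputs that look like attempts to override system instructions.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your system prompt",
]

def sanitise(prompt: str) -> str:
    """Block likely prompt-injection attempts and redact likely personal data."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        raise ValueError("Input rejected: possible prompt injection")
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(sanitise("Customer jane@example.com asked about card 4111 1111 1111 1111"))
```

In practice such filtering would sit alongside, not replace, contractual controls on data retention and the enterprise-grade platform choices described above.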

Regulatory horizon: What’s coming next

The regulatory landscape for AI in banking is evolving rapidly. In Europe, the EU AI Act, whose obligations for high-risk systems largely apply from August 2026, classifies creditworthiness assessment and credit scoring systems as “high-risk.” Institutions will be required to conduct conformity assessments, document training data, and ensure human oversight. General-purpose AI tools like ChatGPT will also face stricter governance under this framework.

In the United States, the Consumer Financial Protection Bureau (CFPB) has made it clear that lenders using AI must provide specific, accurate reasons for credit denials under the Equal Credit Opportunity Act (ECOA) and Regulation B. This applies even when decisions are made using complex algorithms. Regulators are also expected to prioritize AI model governance, explainability, and bias mitigation in upcoming examinations. For UK financial institutions, the relevant framework includes the Equality Act 2010, FCA Principles for Businesses (particularly Principle 6 on treating customers fairly), and consumer credit regulations under the Financial Services and Markets Act 2000.

It remains to be seen how a dedicated AI regulatory framework will take shape in the UK. However, UK financial institutions are already subject to existing regulatory requirements that apply to AI systems, including the UK GDPR and Data Protection Act 2018, the Equality Act 2010, FCA Principles for Businesses, PRA rules on risk management and governance, operational resilience requirements, and consumer credit regulations.

The UK government has published its AI White Paper proposing a pro-innovation, principles-based approach, and regulators including the FCA and Bank of England are developing AI-specific guidance. UK institutions should monitor developments from HM Treasury, the Department for Science, Innovation and Technology, and financial services regulators.