
Author: Priscilla Gaudoin - Head of Risk & Compliance - published September 2025


Topics: AI, Corporate Governance, Accountability, Risk Management


Regions and Regulators: UK - FCA, Bank of England, PRA


The Institute of Directors' survey reports five stand-out brakes on AI adoption: skills gaps (29%), costs (25%), data protection and security risks (23%), governance/ethical concerns (21%), and explainability and black-box issues (20%). Many leaders want "clear, proportionate regulation" and practical guidance to de-risk investment.

Why does this land differently in financial services? Even without an AI-specific rulebook, UK financial regulation already reaches AI through existing frameworks: the Consumer Duty, operational resilience, outsourcing, third-party risk, model risk, SM&CR accountability, and UK GDPR. The Bank of England, PRA, and FCA have explicitly said that they will lean on these regimes rather than rush standalone AI rules, whilst continuing to monitor harms.

What does this mean for the FS industry?

The FCA insists it will not impose rigid AI rules immediately. Instead, it expects firms to govern AI responsibly under existing requirements, emphasising principles like transparency, fairness, accountability, and contestability. Its CEO, Nikhil Rathi, has warned that evolving AI may outpace the regulator's ability to craft prescriptive rules.

Customer Outcomes

Where AI capability touches retail customers, firms must evidence good outcomes across products and services, price and value, and consumer understanding and support, and show that they monitor and remediate when models drift. The Duty timeline and updates (e.g. the board champion expectation being removed) show the FCA is actively refining supervision, which in turn means that AI change programmes should keep pace.

Firms need to build outcome testing into AI lifecycle controls (pre-deployment trials, in-life monitoring, vulnerable customer impact analysis, explainability suitable for non-specialists).

Governance & Accountability (SM&CR)

AI doesn’t diminish individual accountability. Senior Managers must have clear Statements of Responsibilities covering AI-relevant risks and evidence ‘reasonable steps’. Firms should expect scrutiny of governance, documentation, challenge, and escalation paths around AI decisions.

This means that firms need to map AI responsibilities to SMFs (e.g. SMF1, SMF24/16/4 as relevant), minute challenge, and keep a living register of material AI systems, owners, risks and controls.

Model risk management (PRA SS1/23)

For banks and designated investment firms using material models (including vendor models and complex algorithms), the PRA’s SS1/23 now treats model risk management (MRM) as a risk in its own right, effective 17 May 2024. It expects a model inventory, tiering, validation, performance monitoring, and SMF accountability as well as annual self-assessments to the Board.

Firms must align AI models with a firm-wide MRM framework: classify or tier models (including non-traditional ML), tighten independent validation (conceptual soundness, data/assumption testing, stability/bias), and evidence board oversight.

Operational Resilience and Critical Third Parties

AI services (especially cloud-based) can be part of important business services. Under PS21/3, firms must set impact tolerances, map dependencies, and test their ability to remain within tolerances, with those requirements fully in force by 31 March 2025. Meanwhile, new critical third party (CTP) rules give regulators direct oversight of systemic providers.

Firms must treat AI providers (model APIs, vector databases, inference platforms) as resilience dependencies. This means firms need to map them, run severe but plausible scenario tests (e.g. model outage, degraded accuracy), and rehearse response playbooks and rollbacks.

Outsourcing & third-party risk (PRA SS2/21, FG16/5)

When AI is procured or hosted externally, the outsourcing rules still apply. Firms must consider materiality assessments, due diligence, audit and access rights, data location, business continuity and exit, and proportionate controls. The PRA's refreshed SS2/21 and the FCA's FG16/5 remain the cloud touchstones.

Firms must put robust clauses into AI and vendor contracts (e.g. training data provenance, IP/indemnities, security, testing and validation support, incident reporting, termination, and model and data export).

Data protection, fairness & explainability (ICO)

Under UK GDPR, firms must ensure lawfulness, fairness, and transparency, manage special category data, perform DPIAs, and provide meaningful explanations for AI-assisted decisions. The ICO-Alan Turing Institute guidance is the practical playbook for explainability.

Firms need to pair technical explainability (feature importance, counterfactuals) with customer-ready narratives, log justification templates, and test for drift/bias with red-team-style probes.
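As one concrete illustration of in-life drift monitoring, the sketch below computes a Population Stability Index (PSI) between a model's baseline and current score distributions and raises an alert above a threshold. This is a minimal example, not a regulatory prescription: the bucket count, the 0.2 threshold (a common industry rule of thumb), and the function names are all illustrative assumptions.

```python
import math
from collections import Counter

def psi(baseline: list[float], current: list[float], buckets: int = 10) -> float:
    """Population Stability Index between two score samples in [0, 1].

    Higher values indicate a larger shift in the score distribution.
    """
    def distribution(scores: list[float]) -> list[float]:
        counts = Counter(min(int(s * buckets), buckets - 1) for s in scores)
        total = len(scores)
        # Floor each proportion slightly above zero to avoid log(0) on empty buckets.
        return [max(counts.get(b, 0) / total, 1e-6) for b in range(buckets)]

    base, curr = distribution(baseline), distribution(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, curr))

def drift_alert(baseline: list[float], current: list[float], threshold: float = 0.2) -> bool:
    """Illustrative rule of thumb: PSI above ~0.2 signals material shift."""
    return psi(baseline, current) > threshold
```

In practice a check like this would run on a schedule against production scoring logs, with alerts feeding the escalation paths and remediation evidence the Duty expects.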

Policy direction: “use existing rules, close gaps where needed”

The UK government's 2024 'pro-innovation' stance empowered sector regulators (the FCA, PRA, and ICO) to apply current regimes to AI, with targeted gap-filling rather than a single AI act.

Firms should not wait for an AI rulebook but should align AI change to today’s obligations by tracking targeted updates (e.g. CTPs, SM&CR refinements) that indirectly raise AI standards.

Turning blockers into a compliance-driven roadmap

Firms should consider the following actions:

  1. Inventory & tiering of all AI & ML models (including vendor tools), define triggers for materiality.
  2. Accountability: Who owns deployment, validation, monitoring and rollback? Update SMF Statements of Responsibilities.
  3. Consumer Duty: Test pre- and post-launch (including vulnerability impacts), with MI to evidence outcomes.
  4. Operational resilience: Map AI to important business services, set impact tolerances, test severe scenarios, and evidence lessons learned on a continuous basis.
  5. Third-party controls: Contract clauses (audit access, security, SRE/uptime data, incident reporting, exit), plus periodic due diligence.
  6. Data protection: Data protection impact assessments, purpose limitation, data minimisation, explainability plans.
  7. Monitoring & model risk: independent validation, performance thresholds, drift and bias alerts, periodic re-validation and board reporting.
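To make step 1 concrete, here is a minimal sketch of a living AI model register with simple materiality tiering. The three-tier scheme, the field names, and the triggers (customer-facing plus high decision volume) are illustrative assumptions, not PRA-defined categories; a real register would hold far richer risk and control data.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    owner_smf: str          # accountable Senior Manager Function, e.g. "SMF24"
    vendor: bool            # third-party/vendor-supplied model?
    customer_facing: bool   # touches retail customer outcomes?
    decisions_per_day: int  # rough volume proxy used as a materiality trigger

    def tier(self) -> int:
        """Tier 1 = most material. Triggers here are illustrative only."""
        if self.customer_facing and self.decisions_per_day > 1_000:
            return 1
        if self.customer_facing or self.vendor:
            return 2
        return 3

# Hypothetical entries to show how tiering drives oversight intensity.
register = [
    ModelRecord("credit-scoring-v3", "SMF4", vendor=False,
                customer_facing=True, decisions_per_day=25_000),
    ModelRecord("hr-cv-screener", "SMF1", vendor=True,
                customer_facing=False, decisions_per_day=40),
]
tier1 = [m.name for m in register if m.tier() == 1]
```

A register structured this way makes it straightforward to evidence SMF ownership per model and to route Tier 1 systems into the heavier validation and board-reporting cycles described above.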

AI promises transformative benefits across financial services, from fraud detection to personalised services. It also brings material risks that regulators in the UK are beginning to address through a principles-based, technology-neutral approach, rather than prescriptive regulation. Key voices such as the Treasury Select Committee, HM Treasury, the FCA, and the Bank of England are promoting a balanced model that enables innovation whilst safeguarding consumer fairness, financial stability, and market integrity.

Firms must proactively invest in robust governance, explainability, bias mitigation, cybersecurity, and third party oversight, as well as stay engaged with evolving regulatory expectations. This is central to capturing AI benefits responsibly in the UK’s regulatory landscape.

How Ruleguard can help:

Ruleguard is an industry-leading GRC platform designed to help regulated firms manage the burden of evidencing and monitoring compliance. It has a range of tools to help firms fulfil their obligations across the UK, Europe and APAC regions.

Ruleguard's Supplier Oversight Solution enables firms to automate processes and create and maintain the crucial evidence trail, whilst also sharing information with third parties to provide more robust oversight and manage regulatory risk.

Ruleguard is a comprehensive solution that lets you protect and propel your business forward through the complex regulatory landscape.


Related Webinars, White Papers and Blogs

Ruleguard hosts regular events on various regulatory topics. You can watch our webinars on-demand at your convenience, or read our blogs, white papers, infographics, and tune in to our podcasts. 

Supplier Oversight made effortless

Keep track of your third-party providers directly with Ruleguard's Supplier Oversight Solution. Ensure protection against foreseeable harm to retail customers in line with cross-cutting rules.

Book a call today!
 

Let's chat!

About the author

In a career spanning 30 years, Priscilla has worked as a consultant, CCO and MLRO providing regulatory oversight and advice to firms across the financial services industry. She is responsible for our thought leadership programme, writing regular articles and white papers, and hosting webinars on a variety of regulatory matters.

She is a Fellow of the International Compliance Association, a certified GRC practitioner, and a member of the Institute of Risk Management.

 
Contact Priscilla