
FCA report shows 75% of firms are now using AI, but only 34% know how it works

The FCA has published its Artificial Intelligence and Machine Learning Survey for 2024, revealing that 75% of financial services firms are now using AI. Is accuracy now more important than explainability for compliance?

26 November 2024
By Jennie Clarke

In brief:

  • The FCA’s latest report reveals that 75% of firms are using AI – marking a significant increase since 2022
  • Despite increased use, only 34% said that they are confident they have a complete understanding of how AI works
  • Responses indicate that understanding generally increases where AI models have been developed in-house, and decreases where systems rely on third-party models
  • 81% of firms are using explainability models to validate their AI decision making, but recent Global Relay roundtables suggest that accuracy may soon become more important

On November 21, the Financial Conduct Authority (FCA) published its 2024 Artificial Intelligence and Machine Learning Survey, revealing that 75% of financial services firms are already using artificial intelligence (AI), with an extra 10% planning to use it within the next three years.

The survey, which received 118 responses from firms across the gamut of financial services, serves as a tool for both the FCA and Bank of England (BoE) to further their understanding of the “capabilities, deployment, development and use of AI in financial services.”

The report broadly mirrors similar survey results published by the regulators in 2022, which pinned the number of firms using AI at 58%, and the number planning to use it in the next three years at 14%. By this estimation, the proportion of firms using AI within their organizations has risen by 17 percentage points – a significant jump.

Third-party models are on the rise, but reduce AI understanding

Third-party models and implementations make up the lion’s share of AI use within financial services, with around 33% of respondents stating that they use third-party AI services to complement their strategies – up from 17% in 2022. The use of third-party technology has a direct bearing on individuals’ understanding of how AI works, with only 34% saying that they have a “complete understanding.”

This is largely attributed to the use of third-party models. Some 46% of survey respondents said that they have only a “partial understanding” of how their AI systems work, but were more confident in that understanding where models had been developed internally. In essence, people have a clearer grasp of models built in-house.

AI rewards overtake AI risks

Global Relay’s Industry Insights Report for 2024, published at the beginning of the year, found that only 10.4% of financial organizations believed AI to be a reward for compliance teams. In comparison, 17.4% said that it was a definite risk, 32.2% said that it was a combination of both, and 40% were uncertain or said it was too soon to say.

The FCA’s results, published around nine months later, found that 21% of respondents believe AI will continue to emerge as a benefit over the next three years, with only 9% feeling that it will establish itself as more of a risk. Clearly, attitudes toward AI are shifting as firms begin to see the advantages of the technology and consider solutions to mitigate the associated risks.

Within its survey, the FCA asked respondents to rank the highest perceived benefits and risks that AI presents. Respondents identified four key benefits:

  1. Data and analytical insights
  2. Anti-money laundering
  3. Combating fraud
  4. Cybersecurity

Respondents noted that they expect productivity, operational efficiency, and cost to become greater benefits in the near future.

The results also identified five key risks, four of which relate to data:

  1. Cybersecurity
  2. Data privacy and protection
  3. Data quality
  4. Data security
  5. Data bias and representativeness

The safety and security of any data that AI ingests – whether into large language models (LLMs) or chatbots – has long been a concern for financial organizations, due in part to a lack of clarity around data ownership and storage, and around how data is transferred or shared.

To mitigate the risks of AI, 84% of respondents said that they have designated an individual to be accountable for their AI framework, with 72% reporting that executive leadership is responsible for AI use cases. On the whole, the survey found that organizations generally use a combination of governance frameworks, controls, and AI-specific use cases to manage AI and its associated risks.

Explainability may be less important than accuracy

Historically, explainability has been a central barrier to the adoption of AI within compliance programs. In the event of a regulatory audit or investigation, could compliance or surveillance teams clearly explain how they came to a decision if that decision was made, or at least informed, by AI?

In Global Relay-hosted roundtables focusing on AI in surveillance, attendees often expressed concerns around explainability and defensibility. The FCA’s survey shows that firms choosing to adopt AI are mostly doing so with explainability in mind – 81% of firms using AI employ “some kind of explainability method.”

The most popular explainability method is feature importance (72%), which ranks input variables (features) by how much each contributes to a machine learning model’s predictions. The second most popular, Shapley additive explanations (64%), works in a similar way, but considers all possible combinations of features to assess impact – a minimal sketch of both methods follows the quote below. Despite this continued focus on explainability, comments from recent Global Relay roundtables suggest that the topic is becoming less of a concern when implementing AI. As one attendee noted:

“You don’t need to make the argument that AI is perfect. I think you just need to make the argument that it’s significantly better than lexicon is.”
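To make those two methods concrete, here is a minimal illustrative sketch in Python. It assumes the open-source scikit-learn and shap packages are installed, and the toy dataset and model are hypothetical stand-ins – nothing here comes from the FCA survey or any Global Relay product.

```python
# Sketch of the two explainability methods named above (assumes
# scikit-learn and the shap package are installed; the data and
# model are hypothetical stand-ins for a real compliance model).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy binary-classification data, e.g. alert vs. no alert
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# 1. Feature importance: one global ranking of how much each input
#    variable contributes to the model's predictions overall.
ranking = np.argsort(model.feature_importances_)[::-1]
for rank, idx in enumerate(ranking, start=1):
    print(f"{rank}. feature_{idx}: {model.feature_importances_[idx]:.3f}")

# 2. Shapley additive explanations (SHAP): per-prediction attributions
#    that weigh each feature's contribution across combinations of features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])
# shap_values now holds, for each of the ten sampled predictions, how much
# each feature pushed the model toward or away from the positive class.
```

In short, feature importance answers “what matters to the model overall,” while SHAP answers “why this particular decision” – which is why the two are often used together.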

Over time, it is likely that the productivity and accuracy benefits of AI-enabled compliance systems will outweigh the demand to explain how a system works. In reference to AI’s use within surveillance systems, another attendee noted:

“You’re either going to have extremely easily explainable surveillance, but it’s ineffective and wastes money, time, and effort – or you can have significantly improved surveillance, where you might not be able to explain every element of the large language model.”

Global Relay Surveillance harnesses the latest LLMs to deliver a future-proof, comprehensive solution for robust communications monitoring.