AI’s impact on the financial industry – the SEC speaks to its risks, its revolutionary potential, and everything in between

SEC Chair Gary Gensler recently spoke about the challenges and benefits AI presents to the financial world on a micro and macro level. Compliance teams should make sure they are on track to identify, adapt to, and manage these points as this technology rapidly develops.

09 August 2023 8 mins read
By Kathryn Fallah

In brief:

  • Gary Gensler, Chair of the SEC, recently gave a speech about AI and its transformative power in the financial sector
  • Gensler highlights that while AI may provide radical assistance within the financial space, it is not without risks, and organizations need to be vigilant in managing any AI-related challenges
  • While the SEC is technology-neutral, it emphasizes the importance of staying on top of business practices related to this growing technology and complying with regulations to keep the market healthy and consumers protected

Can machines think? Since Alan Turing asked that question in his paper “Computing Machinery and Intelligence,” published in 1950, it’s become clear that they have the potential to do much more than that. Artificial intelligence (AI) is taking the world by storm, and financial organizations must ensure that they stay ahead of the game.

Reflecting on invention and ever-growing developments

On July 17, 2023, Gary Gensler, Chair of the U.S. Securities and Exchange Commission (SEC), spoke about AI and its influence on the modern world, specifically its role in the financial industry. He compared Isaac Newton’s creation of calculus during a pandemic nearly four centuries ago to GPT-4, the latest eye-catching AI model, developed during the recent COVID-19 pandemic.

Just as Newton built what is now calculus on earlier mathematicians’ work, AI builds upon ideas from its predecessors. Public interest in AI is growing rapidly, and the possibilities for its uses are endless, especially considering the rate at which data is collected from sources such as social media, sensors built into our devices, and browsing histories – much of it by Big Tech companies. Using that range of data and neural networks, AI identifies patterns and makes predictions. The more data you feed it, the more it is able to learn.
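To make that last point concrete, here is a minimal, purely illustrative sketch (our own, not something from Gensler’s speech) that trains a small neural network on synthetic data and measures how its predictions on unseen examples tend to improve as the training set grows. The dataset, model, and training-set sizes are all assumptions chosen for illustration.

```python
# Illustrative only: a tiny neural network whose pattern recognition tends to
# improve as it is fed more training data. Dataset and model are assumptions.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A synthetic "pattern recognition" task: 20 numeric features, two classes.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for n in (100, 500, 2000, len(X_train)):              # increasingly large training sets
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    model.fit(X_train[:n], y_train[:n])                # learn patterns from n examples
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>4} examples -> test accuracy {accuracy:.2f}")
```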

Gensler is not blind to these advancements and highlights the opportunities AI creates in finance:

“As machines take on pattern recognition, particularly when done at scale, this can create great efficiencies across the economy. In finance, there are potential benefits of greater financial inclusion and enhanced user experience.”

The financial industry initially took a wary approach to AI. Some banks went so far as to ban employees from using AI platforms like ChatGPT to protect confidential information, and in some instances whole countries banned the use of generative AI systems. Within the past several months, however, the financial industry’s views on AI seem to have become more receptive. The City of Boston, for example, has endorsed “responsible experimentation” as an approach to AI, which many saw as a potential blueprint for future use. While caution remains, Gensler’s speech goes some way to suggest the Boston model might be catching on.

My, oh my – AI!

There is no doubt that AI has significant benefits for financial services, though throughout his speech Gensler urges us to consider the concerns that arise in relation to it. On a micro level, the first concern he highlights is narrowcasting – the idea that AI can analyze information and data patterns about specific groups of people, or individuals, to make predictions and tailor communications.

A range of businesses have adopted narrowcasting in their decision-making processes for significant decisions, such as who qualifies for a job or a line of credit. Gensler explains how this can lead to issues when a lack of explainability and the possibility of bias come into play.
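As a purely illustrative example of what checking for that kind of bias might look like in practice (our own sketch, not a method prescribed by the SEC or described in the speech), a compliance team could compare a model’s approval rates across groups. The decisions, group labels, and 80% threshold below are hypothetical.

```python
# Hypothetical sketch: compare an AI credit model's approval rates across groups.
# The decisions, group labels, and the 80% rule-of-thumb threshold are assumptions.
from collections import defaultdict

decisions = [  # (group, approved) pairs produced by a hypothetical AI credit model
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])                   # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {group: approved / total for group, (approved, total) in counts.items()}
print(f"approval rates by group: {rates}")

worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:                    # common, but not universal, rule of thumb
    print("Warning: approval rates differ markedly across groups - review for bias.")
```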

AI’s design means that some decisions and calculations cannot be explained to the human end user. Models contain an abundance of parameters that facilitate constant data processing and learning – so many that it can be difficult to break AI decisions down into clear, auditable pathways.

Explainability is a crucial aspect of staying compliant with regulatory obligations, as well as building and maintaining trust with clients. If the reasoning behind a decision seems out of place, do you have precautions in place to document and support all the tools your organization employs? If you’ve entrusted an AI tool to make certain decisions, are you able to explain how those decisions are being made? And, more importantly, why they may have gone wrong?
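One way to start answering those questions – a minimal sketch of our own, not a prescription from the SEC – is to record every AI-assisted decision alongside the model version, the inputs it saw, its output, and a human-readable rationale, so that compliance can reconstruct and explain the decision later. All names and fields below are hypothetical.

```python
# Hypothetical sketch: a minimal audit record for AI-assisted decisions so that
# compliance can later reconstruct what was decided, by which model, and why.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    model_name: str                      # which tool produced the decision
    model_version: str                   # exact version, for reproducibility
    inputs: dict                         # the data the model actually saw
    output: str                          # the decision or recommendation made
    rationale: str                       # human-readable explanation, if available
    reviewed_by: str = ""                # compliance reviewer, once checked
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: AIDecisionRecord, path: str = "ai_decision_log.jsonl") -> None:
    """Append the record as one JSON line to a simple audit log file."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AIDecisionRecord(
    model_name="credit_screening_model",     # hypothetical tool name
    model_version="2023.07",
    inputs={"income": 52000, "history_months": 18},
    output="declined",
    rationale="Short credit history relative to requested limit.",
))
```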

 “The ability of these predictive models to predict doesn’t mean they are always accurate or robust. They might be predicting based upon some latent feature that leads to a false prediction.”

Additionally, AI is being used more and more in advisor and broker services. Organizations need to verify that any advice or recommendations made using AI are framed around the client’s best interests rather than the advisor’s. As Gensler has since highlighted in a conversation with Dealbook, AI models may put companies’ interests ahead of investors’:

“You’re not supposed to put the adviser ahead of the investor, you’re not supposed to put the broker ahead of the investor.”

As well as the lack of explainability and conflicting priorities, AI also creates new vulnerabilities. Bad actors can exploit AI by, for instance, using narrowcasting to discover people’s interests and then deceive or spam them. On a broader scale, AI can be used maliciously to influence elections or capital markets. The SEC vigorously prosecutes any misuse of this technology to protect investors, capital formation, and market health.

Given this security risk, it is essential that organizations are proactive and establish clear security measures and processes to combat fraudulent behavior.

Great power, great responsibility – Macro challenges

As the saying goes, with great power comes great responsibility. Gensler recently said AI “will be the center of future crises, future financial crises.” The challenges AI poses are considerable on a small scale and even more expansive on a large scale, which means that financial organizations need to be extra attentive when utilizing this rapidly developing tool.

Data privacy, for example, is a clear area of risk presented by AI. Consider, for instance, who owns individuals’ data? AI platforms collect information from everyone who uses them to refine their parameters and extend their databases. Moreover, data collected by downstream applications flows back to the base models. When one or a handful of AI platforms control all this information, it can lead to “economic rent.” As this data is passed from place to place, where does data ownership begin – and where does it end? In the case of intellectual property (IP) or personal data, this is an even more pressing question.

With that in mind…

Since its conception, AI has come a long way. As Gensler states, it is “the most transformative technology of our time.” Even so, it may yet morph beyond our imagination.

Considering the recent apprehension around AI, it is noteworthy to see the SEC acknowledge the benefits it can have in the financial sector. There is an interesting juxtaposition in Gensler, on the one hand, warning that AI will be at the center of future crises while, on the other, speaking to the benefits it could bring. Though there are still numerous concerns around this technology, it seems we can expect it to be integrated further into business practices as it advances. Indeed, don’t all rewards carry a degree of risk?

Similar to the approach adopted by the City of Boston, Gensler appears to be endorsing an “approach with caution” mindset rather than taking a polarized stance for or against AI.

To meet regulatory expectations, organizations need to confirm they are approaching AI equipped with risk management tools to stay safe from harm while maintaining financial health and keeping pace with regulation. When considering the deployment of AI-driven tools, remember that human intelligence must still feature. Unlike Alexa, this isn’t a tool you can plug in and walk away from – the compliance team must still be on hand to fine-tune the end result. If not, who will explain that result?

Global Relay has built AI-enabled solutions engineered to seamlessly integrate throughout the Global Relay ecosystem of archive, collaboration, and compliance applications. Our clients use AI to enhance business efficacy across surveillance, behavioral analytics, customer support, and more. Our approach to innovation is to build it right, the first time – so that all our products are robust and secure.

If you’d like to know more about our AI-enabled products, speak to the team or visit our product page.

 
