Generative artificial intelligence (AI) and large language models (LLMs) like ChatGPT are dazzling technological innovations that have captured the world’s attention since becoming publicly available.
Upon its release, ChatGPT drew enormous attention from a curious public eager to test its capabilities. Within two months of its launch, it became the fastest app to reach 100 million active users. And while it has found a wide consumer audience, AI models are also increasingly popular across industries, including finance.
AI isn’t unfamiliar to the financial space – the technology has long been employed in practices that simplify business operations, such as automated chatbots and document review systems. What is new about instantly accessible, mass-market generative AI platforms is their breadth: they can now be used for messaging, content creation, educational purposes, and more.
To keep pace with the times and meet client needs, firms are moving to implement – or at least understand – new AI technologies and the opportunities they offer. Yet, as the many elements of AI continue to be explored, firms need to know the rules and cover their bases when operating new models, to ensure they’re taking the right steps.
What are the rules around generative AI?
As with AI generally, financial regulators have not yet outlined explicit guidelines for implementing generative AI, as its intricacies are still being deliberated. Across regions, regulators have taken different approaches to supervising the technology, some pro-innovation and receptive to its opportunities, others taking a more cautious, investigative route to protect against risks.
Though regulators have issued broad guidance on AI in recent years, ChatGPT and related generative platforms have opened the door to an even wider range of possibilities and use cases in the industry.
FINRA’s approach to generative AI
One regulator that has taken the bull by the horns is the Financial Industry Regulatory Authority (FINRA), which released Regulatory Notice 24-09 to remind member firms of their regulatory obligations when using generative AI and LLMs, reiterating the parameters regulators have set so far.
The Notice highlights that, while these new technologies present opportunities for firms, they should still be “mindful of the potential implications for their regulatory obligations.” It adds that FINRA’s rules “continue to apply when a member firm uses Gen AI or similar technologies in the course of their businesses.” This, it seems, extends to recordkeeping requirements.
While FINRA used the Notice to echo the overall viewpoint we’ve previously heard from financial regulators, it did recognize that AI is evolving and now includes technology capable of “generating significantly better text, synthetic data, images, or other media in response to prompts.”
FINRA has also addressed the role AI applications can play in firm communications, noting that “the content standards of Rule 2210 (Communications with the Public) apply whether member firms’ communications are generated by a human or technology tool, and that guidance discusses the specific application of the rules to certain AI-generated communications.”
Relatedly, FINRA’s Frequently Asked Questions About Advertising Regulation state that firms are responsible for their communications “regardless of whether they are generated by a human or AI technology” – implying that AI-generated communications are treated like any other communications and must be retained and supervised.
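To make the retention point concrete, below is a minimal sketch of how a firm might capture AI-generated communications alongside the metadata a reviewer would need. It is illustrative only: the `generate_reply` function is a hypothetical stand-in for a call to an LLM, and the append-only JSON Lines file stands in for a compliant archive. A production system would write to WORM storage and feed a supervision workflow.

```python
import json
import time
from pathlib import Path

# Illustrative archive location; a real firm would use WORM-compliant storage.
ARCHIVE_PATH = Path("comms_archive.jsonl")


def generate_reply(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative AI model."""
    return f"Draft response to: {prompt}"


def generate_and_retain(prompt: str, author: str, model: str = "example-llm") -> str:
    """Generate a client-facing communication and retain it with its prompt
    and metadata, so AI-generated output is archived and reviewable like any
    other firm communication."""
    response = generate_reply(prompt)
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "author": author,      # the employee who triggered the generation
        "model": model,        # which tool produced the text
        "prompt": prompt,
        "response": response,
        "reviewed": False,     # flag for the supervision/review queue
    }
    # Append-only record keeps the full history of generated communications.
    with ARCHIVE_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response


if __name__ == "__main__":
    print(generate_and_retain("Summarize our Q2 outlook for a client email.", author="jdoe"))
```

The key design choice is that generation and retention happen in one step, so no AI-generated communication can reach a client without first landing in the archive and the review queue.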
The FCA’s approach to AI
U.K. regulators, which are taking an outcomes- and principles-based approach, have shared thoughts on the matter as well, such as in the Financial Conduct Authority’s (FCA) speech on its approach to Big Tech and artificial intelligence. While that speech is still largely a commentary on AI in general, it briefly mentions generative models and their influence on the industry’s development.
In particular, the FCA stated that generative AI and synthetic data can “help improve financial models and cut crime” and also “tackle the advice gap with better, more accurate information delivered to everyday investors,” removing barriers for a wide range of customers and helping uncover viable solutions. In this light, should firms consider using AI models to distribute any sort of information, it is wise to retain records of those interactions to maintain market integrity.
Comply with AI
After a period of education and understanding, we have now seen the beginnings of enforcement actions related to AI misuse, such as the Securities and Exchange Commission’s (SEC) fines totaling $400,000 against two investment advisers for making false statements about their use of AI.
SEC Chair Gary Gensler noted that these firms marketed AI capabilities they did not actually use, emphasizing the regulator’s stringency toward misleading claims:
“We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies. Investment advisers should not mislead the public by saying they are using an AI model when they are not.”
As firms begin to implement AI and use models to handle data, they must ensure they are approaching the situation from a risk-based perspective and providing accurate information about how they’re using new technologies.
AI era: how will ChatGPT influence finance?
With AI already in use across operations, it’s important that firms understand the obligations regulators have laid out so far and account for the ways they are employing developing models.
While there are no specific rules on how to handle platforms like ChatGPT just yet, regulatory commentary on generative AI and LLMs implies that firms must be able to document how they are using the technology – whether for generated communications, surveillance, advice, or data analysis – or face the consequences. It is also generally accepted that, if firms use generative AI such as ChatGPT within their communications, those channels will need to be captured to meet recordkeeping obligations.