Large Language Model or Large Liability Model? What are the risks of ChatGPT in financial services?

As firms begin to implement ChatGPT into their business models they must be aware of its limitations. We look at the risks that ChatGPT could pose to financial services.

15 July 2024 6 mins read
By Jennie Clarke
Written by humans

In brief:

  • ChatGPT presents myriad risks, from relaying and advising based on inaccurate information, to security threats from harboring large amounts of sensitive data
  • While other regulators ponder risks, FINRA has taken a neutral stance on generative AI and is engaging with members and policymakers to discuss the implications of new tech
  • Firms must implement stringent policies and algorithms to prevent AI bias and protect consumers from data exposure

Since its introduction, ChatGPT (Generative Pre-trained Transformer) has been used for a plethora of tasks, from interview prep to essay writing to financial decision making. Although it is often positioned as a positive advancement within the financial services industry, this Large Language Model (LLM) also presents a host of ethical and practical challenges.

As firms begin to implement ChatGPT into their business models they must be aware of its limitations, and regulators must oversee its usage to ensure productive outcomes free of bias and vulnerability to malicious actors. ChatGPT is uncharted territory for the financial services industry, so organizations must implement safeguarding rules and a robust surveillance framework to protect investors and consumers, as well as the organizations themselves.

A positive disruption?

The key attribute of ChatGPT is that it functions in natural language and has the ability to analyze and synthesize large sets of financial and market data. Possible use cases for this type of artificial intelligence (AI) could be:

  • Aiding customer decision-making with semantic and colloquial language capabilities, through a chatbot feature
  • Communicating complex insights in a digestible form
  • Generating reports and summaries for human compliance officers, with evidence of malfeasance such as market abuse
  • Reducing labor costs associated with document clarification and interpretation by automating the process end to end

But is this story too good to be true?

Risky business

The leading risks associated with the application of ChatGPT in the financial space stem from the fact that it is trained on data input by users, paired with internet sources. Consequently, it can absorb deep-rooted gender, racial, or political biases that exist across the web and reproduce them when providing advice on financial outcomes. Because the AI tool cannot distinguish credible, reliable, and unbiased sources from unreliable ones, it has a greater potential to disseminate false as well as hallucinated information.

“If you ask ChatGPT questions about a specific thing, it can cross-contaminate the answer as it cannot distinguish where one thing stops and another thing starts” – Robert Nowacki, Technical Account Manager, Business Development, Global Relay

Unlike humans, ChatGPT cannot comprehend clear breaks and spaces in topics of conversation, leaving consumers with inaccurate and ‘contaminated’ advice.

Here, it is also important to note the human element of the costs surrounding the tool. It does cut labor costs; however, firms should not treat its adoption as a large-scale human replacement program. Instead, firms should set up a human-led supervision process to audit ChatGPT, given its inaccuracies and likelihood of producing errors.

Human comprehension of data and customer background should be valued as key to business success, not treated as disposable in the face of AI. This is especially relevant to contextual understanding of customers: where a situation is unique and there is no blueprint to follow, AI may not provide the thorough understanding that a human can. Human expertise is crucial to the success of financial services and to the application of AI; firms must therefore strike a balance between technology and human intelligence to avoid an over-dependency on tech.

In addition, AI tools present a greater vulnerability to cybercrime where individuals overload ChatGPT with large amounts of personal data in order to train it to give reliable advice and support. Not only is this risky for data protection, it also creates a target for malicious actors seeking to expose and exploit sensitive data for monetary gain. With cybercriminals able to train ChatGPT on vast amounts of code to produce undetectable malware strains that bypass security defenses, it is vital that firms ensure no one is put at risk; AI cannot see the red flags that humans can. It is the responsibility of regulators and governing bodies, globally, to act to prevent financial loss resulting from ChatGPT. Will this mean regulators and governments need to limit publicly available information on the internet, to ensure organizational software is not compromised by ChatGPT’s ability to scour the web in ways that could harm firms?

FINRA is getting with the times

Although FINRA has taken a decidedly neutral stance on ChatGPT, so that its approach can adapt as the technology evolves, it has set out clear guidance requiring firms to maintain policies and procedures that address the governance and accuracy of AI. Which rules apply depends on how a member firm chooses to deploy the technology, and this leaves considerable room for ambiguity.

“Governance is the cornerstone of any AI use. All must have properly documented governance as well as a raft of testing so that those decisions to use can be defended robustly where challenged” – Rob Mason, Director of Regulatory Intelligence, Global Relay

For this reason, FINRA has set up an ‘interpretive requests’ mechanism to help firms navigate the gray areas that come with AI adoption. For now, firms may want to avoid trialing ChatGPT in the areas that carry the highest risk, such as credit decision making and regulatory reporting. As this is a learning curve for both the regulator and firms, FINRA has taken an open stance in which it welcomes dialog and recognizes that as technology evolves, so must regulation.

The emergence of AI has swept the globe and, despite its advantages, this Generative Pre-trained Transformer has its pitfalls. Although it has a long way to go, ChatGPT has the propensity to assist with writing and reviewing governance and policy documents, and it has the potential to revolutionize the way surveillance is conducted. As long as the regulator and firms remain vigilant towards these shortcomings and designate policies where needed, risk can and will be mitigated.

Global Relay’s data Connectors allow you to monitor and archive communications across a variety of channels, and support your firm as it strives for venue completeness. Through a direct integration with OpenAI, Global Relay’s Connector for ChatGPT Enterprise will help you solidify your compliance strategy and consolidate all of your critical communications.

 
