Generative artificial intelligence (AI) programs and Large Language Models (LLMs) are driving transformative, fast-paced change across industries. AI is the most talked-about topic in tech, and its biggest platforms – like ChatGPT – are becoming household names. By processing huge volumes of data at incredible speed, these platforms are unlocking new opportunities and empowering the firms that tap into them.
However, where these technologies intersect with regulated industries like financial services, there is growing discussion about whether the risks outweigh the rewards. With regulators either taking a ‘wait and see’ approach to AI or deeming current regulations sufficient to oversee its compliant use, firms are approaching AI and LLM implementation cautiously. Around 42% of surveyed firms said they would integrate AI into their workflows over the next twelve months, but 65% of U.S.-based firms don’t plan to make use of the technology within that timeframe.
The lack of regulatory clarity, and the fast-paced evolution of AI that sees new risk areas emerging in near real-time, might be driving this hesitancy. However, LLMs like ChatGPT are primarily data engines, and savvy firms will already be familiar with how capturing and leveraging data compliantly can secure better outcomes.
How are LLMs like ChatGPT being used in financial services?
New use cases for LLMs and AI are being brought to the fore constantly, but the technologies have already carved out key areas of success within firms. Fabio Caversan, vice president of digital business and innovation at digital consultancy Stefanini Group, summarizes:
“At this point in development, companies in the financial sector can have a degree of confidence when using LLMs for practice areas where these tools have a demonstrated record of success.”
Caversan highlights that LLMs are particularly strong in ‘functions that primarily involve language’, including customer service and support, marketing, and human resources. However, the scope of AI extends well beyond these areas. A few examples include:
- Automated document processing – LLMs are able to quickly review and process financial documentation, including loan applications, insurance claims, and account opening forms, and extract relevant information from unstructured data (see the sketch after this list)
- Customer service – AI can complement traditional ‘chatbot’ approaches, helping to assist customer decision-making through better language processing and more natural conversational capabilities
- Streamlined risk assessments – LLMs excel at analyzing data at a huge scale, and identifying patterns, trends, and correlations, enabling firms to better identify risks and make more informed lending decisions
- Reduced compliance costs – By decreasing the time and resources needed to read through and comprehend documents, AI can reduce compliance costs. By cross-checking a firm’s policies or workflows against regulations, LLMs can help identify potential violations and free up compliance staff for more high-value work
- Insight summaries – LLMs can communicate complex insights and information in plain language that is easier to understand. This can help customers by demystifying complex financial products and services, and firms through summarizing key insights from forecasting and gap, trend, and spend analysis
- Automated, performant trading – Studies suggest that ChatGPT is capable of creating trading algorithms that yield high returns. Backtesting using public market data and news showed that the platform could generate returns exceeding 500% – considerably higher than traditional methods
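To make the document-processing use case concrete, here is a minimal sketch using the OpenAI Python SDK. The model name, field list, and prompt wording are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: extracting structured fields from an unstructured
# loan application with an LLM. Assumes the official OpenAI Python
# SDK (`pip install openai`) and an OPENAI_API_KEY in the environment.
import json

from openai import OpenAI

client = OpenAI()

def extract_loan_fields(document_text: str) -> dict:
    """Ask the model to return key application fields as JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any JSON-capable chat model works here
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract applicant_name, requested_amount, and "
                        "stated_income from this loan application. "
                        "Reply with a single JSON object."},
            {"role": "user", "content": document_text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

Given the hallucination risk discussed below, extracted fields like these should be validated against the source document before they drive any lending or account decision.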
What are the risks of LLMs in financial services?
The potential risks posed by AI and LLMs to financial services firms are much discussed. However, some of the key risk areas firms should be aware of include:
- ‘Hallucinations’ and misinformation – LLMs can be prone to ‘hallucinations’, providing incorrect or even fictitious outputs. A legal case saw two lawyers and their firm fined for submitting “fictitious legal research” generated by an LLM. The case judge ruled that there was nothing “inherently improper” about using AI to assist in legal work, but that lawyers must ensure their filings are accurate
- Bias – AI models are ‘fed’ on data, and any biases within that data can lead to biased outcomes from the platforms built on it. As an example, Amazon discontinued an AI recruitment tool after it favored male candidates based on the language used in their applications, reflecting the male dominance of the tech industry
- Data privacy and security – LLMs are trained on a wide range of datasets available online, raising concerns about the data privacy and consent implications of that training data. And as AI processes and holds information at scale, there are concerns about the potential for data breaches or hacks, especially when the information comes from sensitive industries such as finance
- Explainability and transparency – There are growing calls for regulators to prioritize the ‘explainability’ of AI models within legislation. Firms that utilize these technologies should be able to explain why models acted in a certain way when given certain inputs. With AI models having been shown in research settings to be capable of conducting insider trading – and covering it up – firms need a high level of explainability to demonstrate that misconduct was not their intended outcome
- Off-channel communications – Where LLMs are used for business purposes such as customer communication or marketing, recordkeeping requirements likely apply that are not being satisfied
How can data capture empower firms to use ChatGPT?
With AI presenting substantial rewards, but also proven risks, firms looking to implement technologies like LLMs will be weighing up their options. However, implementing a solution to ensure that ChatGPT data is comprehensively captured and stored can be the key to unlocking the power of AI while remaining on the right side of regulators.
1) Ensure ready access to complete data
By directly capturing ChatGPT enterprise conversations and attachment data, and storing it in a compliant archive, firms have a full record of all data entered into the platform, including prompts and image inputs. Firms can show a record of explainability from input to output and, by retaining data for 99 years (as opposed to ChatGPT’s current 30-day retention terms), have a reliable source of data history.
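As an illustration of what capture-to-archive can look like, here is a minimal sketch that pulls conversation data from a hypothetical export endpoint and writes each conversation to an archive directory with retention metadata. The endpoint URL, authentication scheme, and record schema are all assumptions; a production integration would use the archiving vendor’s connector or OpenAI’s enterprise compliance interfaces.

```python
# Minimal capture-to-archive sketch. The export endpoint and response
# schema are hypothetical placeholders, not a real OpenAI API.
import json
import pathlib

import requests

EXPORT_URL = "https://example.com/api/conversations"  # hypothetical endpoint
ARCHIVE_DIR = pathlib.Path("archive/chatgpt")

def capture_conversations(api_token: str) -> None:
    """Fetch conversations and write one archive record per conversation."""
    resp = requests.get(
        EXPORT_URL,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    for convo in resp.json()["conversations"]:  # assumed response shape
        record = {
            "conversation_id": convo["id"],
            "messages": convo["messages"],          # prompts and outputs together
            "attachments": convo.get("attachments", []),
            "retention_years": 99,                  # archive policy, not ChatGPT's 30 days
        }
        path = ARCHIVE_DIR / f"{convo['id']}.json"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(record, indent=2))
```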
2) Leverage data at volume to power AI models and algorithms
Utilizing AI is about constant refinement and improvement. By capturing ChatGPT data and storing it, firms can call on this data any time to review and optimize prompt processes for better outcomes, and to ‘feed’ back into AI models to train them with stronger, more relevant datasets.
By storing your ChatGPT enterprise conversation data alongside data captured from other communications channels, you have a centralized, holistic view of all your data, which can be leveraged to provide better AI, surveillance, and eDiscovery outcomes. ChatGPT can also be used to refine existing surveillance and automated policies to better streamline and enhance compliance workflows.
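One way to achieve that holistic view is to normalize every channel into a shared record schema, so captured ChatGPT messages can be searched and surveilled alongside email or chat. The field names below are illustrative assumptions, not a vendor’s actual data model.

```python
# Minimal sketch of a channel-agnostic archive record, letting ChatGPT
# conversations sit alongside email or chat for search, surveillance,
# and eDiscovery. Field names and message shape are illustrative.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ArchiveRecord:
    channel: str        # e.g. "chatgpt", "email", "teams"
    record_id: str
    participant: str
    timestamp: datetime
    content: str

def from_chatgpt_message(convo_id: str, msg: dict) -> ArchiveRecord:
    """Map one captured ChatGPT message into the shared schema."""
    return ArchiveRecord(
        channel="chatgpt",
        record_id=f"{convo_id}:{msg['id']}",
        participant=msg["author"],
        timestamp=datetime.fromisoformat(msg["created_at"]),
        content=msg["content"],
    )
```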
3) Increased security for sensitive data
As with many platforms used in financial services, ChatGPT and other LLMs can contain sensitive data and business-critical information, making them a tempting target for a data breach. By natively capturing ChatGPT data directly from the source, firms can ensure that none of this critical data is lost or corrupted, and that it is transferred directly into their compliant archive. By utilizing an end-to-end compliance solution that directly integrates with OpenAI’s API, firms can use ChatGPT with confidence, knowing there is a minimized risk of a ‘weak link’ in the chain leading to data breach or loss.
4) Optimize ChatGPT use and train out bias
Leveraging a solution with 99 years of data retention gives firms access to all of their ChatGPT prompts and inputs, rather than just those from the last 30 days. Access to this data means firms can review their strategy to see what is working and what isn’t, and optimize outcomes.
This also empowers firms to tweak training to ensure ChatGPT users are learning from one another and utilizing the tool as effectively as possible. By having a clear, comprehensive record of this data, firms are also able to analyze inputs and outcomes for potential signs of bias, and better educate users to ensure consistently unbiased outcomes.
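As a hedged illustration, a first-pass bias review over archived prompts and outputs might simply tally terms that warrant human attention. The term lists below are illustrative; a genuine bias audit requires far more rigor than keyword counting.

```python
# Minimal sketch of a first-pass bias review over archived records:
# count gendered terms so reviewers know where to look first.
import re
from collections import Counter

GENDERED = {
    "male": re.compile(r"\b(he|him|his|men|male)\b", re.I),
    "female": re.compile(r"\b(she|her|hers|women|female)\b", re.I),
}

def term_counts(texts: list[str]) -> Counter:
    """Tally gendered-term hits across a batch of archived texts."""
    counts = Counter()
    for text in texts:
        for label, pattern in GENDERED.items():
            counts[label] += len(pattern.findall(text))
    return counts
```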
5) Leverage ChatGPT data to improve surveillance and compliance
Comprehensively capturing ChatGPT data also allows firms to improve surveillance and compliance outcomes. ChatGPT enterprise conversation data can be fed into a firm’s wider surveillance offering and have automated policies applied to flag potential signs of misconduct.
This can include potential financial misconduct like insider trading, such as unauthorized users feeding in sensitive trading data to look for investment opportunities and patterns. It can also include non-financial misconduct, such as signs that users are generating content that might be used to bully, harass, or intimidate, or marketing content that might not comply with regulations like the Securities and Exchange Commission (SEC)’s Marketing Rule.
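A minimal sketch of what such an automated policy might look like appears below, applying illustrative keyword lexicons to archived messages. Production surveillance policies would be far richer, drawing on context, metadata, and models rather than keywords alone.

```python
# Minimal sketch of automated surveillance policies applied to archived
# ChatGPT messages. Lexicons are illustrative placeholders only.
import re

POLICIES = {
    "insider_trading": re.compile(r"\b(non-public|insider|embargoed)\b", re.I),
    "harassment": re.compile(r"\b(bully|harass|intimidate)\b", re.I),
}

def flag_message(record_id: str, content: str) -> list[dict]:
    """Return one alert per policy that the message trips."""
    return [
        {"record_id": record_id, "policy": name}
        for name, pattern in POLICIES.items()
        if pattern.search(content)
    ]
```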
By ensuring that all ChatGPT enterprise conversation data is captured and retained in a compliant archive, firms also automatically comply with regulatory recordkeeping requirements – and get out in front of any AI-specific regulation that may be yet to come.