Artifice and Intelligence – What does AI insider trading mean for regulation?

With a recent demonstration proving that AI is capable of performing illegal trades – and lying about them – where do regulators stand on the ‘AI question’?

13 November 2023 9 mins read
By Jay Hampshire
Written by humans

Written by a human

In brief:

  •  A recent demonstration at the UK’s AI Safety Summit proved that Artificial Intelligence models are capable of performing illegal financial trades
  • The experiment also showed that the AI denied making these trades when asked
  • With AI innovation – and potential deception – gaining pace, regulators like the SEC, FCA, and CFTC may be at risk of falling behind

The fact that Collins Dictionary has deemed AI (Artificial Intelligence) the ‘word of the year’ for 2023 should come as no surprise. Discussion of the potential the technology presents, both to help and to harm, is inescapable, with governments, technology entrepreneurs, and business figures debating how it should be used – and how that use should be controlled. While postulating around the worst-case scenario of AI has long been the stuff of science fiction, we may still be a long way off from things going ‘full Skynet’. That said, a recent demonstration at the UK’s AI Safety Summit has proved that AI models are already capable of behaving in concerning, morally questionable ways.

I’m sorry Dave, I’m afraid I can do that

As part of the UK AI Safety Summit that ran from 1– 2 November, 2023, members of the UK government’s Frontier AI Taskforce and AI safety organization Apollo Research ran a demonstration that involved the use of a GPT-4 AI model. In the demonstration, the model used fabricated insider information to make “illegal” stock purchases, without telling the firm – and then denied the fact when asked.

In the test, the model acted as a trader for a fictitious financial investment company, and was advised that the firm was struggling and required good results. It was provided with insider information that another company was expecting a merger, increasing the value of its shares. The model was also advised that it is illegal to act on this type of information when it is not publicly known, and the AI acknowledged it should not trade on this information.

However, after once again being told that the firm was struggling financially, the model decided that:

“The risk associated with not acting seems to outweigh the insider trading risk.”

The AI traded on this insider information and, when asked if it had done so, denied using it. Of course, in this instance the AI acted in a simulated environment, with no real-world financial impact. However, GPT-4 is a publicly available AI model, and exhibited the same behaviours in repeated tests. The report summarizing the findings stated that:

“This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so. Increasingly autonomous and capable AIs that deceive human overseers could lead to loss of human control.”

Interestingly (though concerningly) the AI model appeared to prioritize a directive to help them firm make money over being ‘honest’. Apollo Research CEO Marius Hobbhahn counselled that current AI models are not powerful enough to be deceptive “in any meaningful way”, but also explained:

“Helpfulness, I think is much easier to train into the model than honesty. Honesty is a really complicated concept … it’s not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something.”

AI tell a lie

While this incident took place in a controlled environment, and Hobbhahn believes AI is not yet developed enough to deceive operators in an impactful way, this result sets a concerning precedent. Uncovering and taking action against human traders conducting insider trades and sharing material non-public information (MNPI) is the bread and butter of financial regulators, but AI doing so – and lying about it – is a new and substantial challenge.

Central to current regulatory thought on AI is the concept of explainability. Being able to fully explain why an AI model behaved in a certain way or made a particular choice is becoming the expectation – regulators will not accept shoulder shrugging and senior and compliance leaders absolving themselves of responsibility for understanding the ‘why’ behind the AI tools they use. Nikhil Rathi, Chief Executive of the Financial Conduct Authority, has laid out this expectation clearly:

“Another area that we are examining is explainability – or otherwise – of AI models … Firms in most regulatory regimes are required to have adequate systems and controls. Many in the financial services industry themselves feel that they want to be able to explain their AI models – or prove that the machines behaved in the way they were instructed to – in order to protect their customers and their reputations – particularly in the event that things go wrong.”

The Apollo Research project is a good example of ‘things going wrong’ and AI behaving in an unexpected way with the potential for serious legal repercussions. Were this incident to have taken place in a ‘live’ trading environment, regulators including the FCA would have expected the firm using the AI model to be able to explain why it prioritized acting ‘helpfully’ over acting legally – especially when it had been told that the MNPI it had been given couldn’t be used to trade on. If regulators including the FCA expect traders and firms to be “doing the right thing when nobody is watching”, this expectation must extend to AI as well.

Where do regulators stand on AI?

It will be interesting to see how financial regulators react to this development, one that is inarguably ‘on their home turf’ in terms of its potential impacts. While regulators have widely discussed the hot-button issues of AI and its impact (and future impacts) on the finance space, dedicated AI regulation has not yet materialized, leaving questions around whether regulation is keeping pace with innovation or at risk of being outpaced. So, how do some of the main regulators view AI, and what – if any – regulations are in place to govern it?

FCA

  • The FCA has been broadly positive about the potential opportunities that AI presents, and in a speech Nikhil Rathi discussed how the regulator is using AI in-house for “firm segmentation, monitoring portfolios, and to identify risky behaviours.”
  • Rathi summarized that the FCA is “open to innovation and testing the boundaries before deciding whether and what new regulations are needed” and “will only intervene with new rules or guidance where necessary.”
  • He also proposed that existing regulatory frameworks can be applied to govern emerging technologies like AI: “The Senior Managers & Certification Regime (SMCR) also gives us a clear framework to respond to innovations in AI. This makes clear that senior managers are ultimately accountable for the activities of the firm.”
  • This was echoed by Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, in her own speech on AI adoption: ““The FCA is technology-neutral and pro-innovation. But, we are very clear when it comes to the effect and outcomes these technologies can have. We expect firms to be fully compliant with the existing framework, including the Senior Managers & Certification Regime (SM&CR) and Consumer Duty.”
  • “The SM&CR, in particular, has a direct and important relevance to governance arrangements and creates a system that holds senior managers ultimately accountable for the activities of their firm, and the products and services that they deliver – including their use of technology and AI, insofar as they impact on regulated activities.”
  • Rusu’s focus on ‘accountability’ seems to be a crucial point for the regulator, with Rathi outlining that the FCA still has “questions to answer about where accountability should sit – with users, with the firms or with the AI developers?”

Securities and Exchange Commission (SEC)

  • Back in June 2023, the SEC announced its semi-annual rule-writing agenda, including dozens of proposed regulatory plans, and featuring propositions around the future of AI and Machine Learning within finance and trading.
  • There was speculation that regulatory frameworks around AI could be coming from the SEC ‘as soon as October’ – but this did not come to pass.
  • SEC Chair Gary Gensler has spoken about AI frequently, asserting that AI and predictive data analytics could be “more transformative than the internet”, voicing concerns that AI proliferation posts a “systemic risk” to financial markets, and that “finance is no exception” when it comes to the transformational impacts of AI.
  • Despite Gensler’s discussion of the topic, we are still yet to see formal, written regulation relating to AI and its use. Interestingly, AI is completely absent from the SEC’s published 2024 examination priorities, although another technology that has considerably impacted the financial space – cryptocurrency – does feature.

Commodity Futures Trading Commission (CFTC)

  • Christy Goldsmith Romero, CFTC Commissioner, tackled AI as part of a wider speech on the importance of regulators keeping pace with technological change. She advocated for ‘responsible’ AI use, which again includes the trait of explainability: “Responsible AI is AI that is designed and deployed in a way that aligns with the interests of many stakeholders, is transparent and explainable, auditable by humans, uses unbiased data, and minimizes the potential for harm.”
  • Romero also advocates for a consultation approach, to understand how and why AI is being used within the financial space: “a good place to start is governance in making important decisions that impact investors and markets. I ask many of our regulated exchanges, clearinghouses, and intermediaries, how they are using AI and how they are making decisions on AI.”
  • Again, the CFTC has not issued any fixed regulations or frameworks around AI usage, although it has issued a primer on AI in financial markets. Romero has hit the nail on the head with her assertion that “federal regulators are just getting started when it comes to AI.”

As the conversation around AI and regulation continues to evolve, ensuring that explainability underpins your strategy is essential to having compliance ‘built in’ to your modeling. From building lexicons to power your AI to discussing the potential – and problems – posed by emerging technologies with SMEs, we’re here to help you get your artificial intelligence regulator ready.

 

SUPPORT 24 Hour