Pressing pause on AI: can regulation keep pace with innovation?
Elon Musk and other technology luminaries have called for a halt to the training of AI while governance systems are “rapidly accelerated”. The UK government has published its principles-based approach.
Written by a human
On March 30, 2023, Elon Musk of both Tesla and SpaceX tweeted:
“Old joke about agnostic technologists building artificial super intelligence to find out if there’s a God.
They finally finish & ask the question.
AI replies: ‘There is now, mfs!!’”
We’re not often compelled to cite Elon Musk’s tweets within our blogs, but his recent comment on AI aptly summarizes a series of events that have unfolded across the industry in the past several weeks.
As we recently noted, the pace of development for generative artificial intelligence (AI) has increased at an alarming rate over recent months. ChatGPT, for example, was initially released to the public in November 2022. Further iterations have since been released, with GPT-4 being the latest development. A key difference between ChatGPT and GPT-4 is that the latter accepts prompts composed of both images and text, alongside purported improvements in safety and other areas.
Subsequently, industry luminary Bill Gates proclaimed that “the age of AI has begun”, citing generative AI as “the most important advance in technology since the graphical user interface”.
This week, a change in tack. Or rather, a significant foot on the brakes.
On March 22, the Future of Life Institute published an open letter entitled “Pause giant AI experiments”. The open letter makes a clear and evocative call to action:
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
The letter, which has been signed by the likes of Elon Musk, Steve Wozniak (co-founder of Apple), and other leading experts, goes on to note that “contemporary AI systems are now becoming human-competitive at general tasks” and that “no one – not even their creators – can understand, predict, or reliably control” AI. Despite these advancements, there is not, the letter suggests, sufficient planning and management.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The time, it says, for implementing independent review before training future systems “is now”.
The letter goes on to call for a six-month pause on the development of AI, which should be enforced by governments if necessary. AI labs and independent experts should then take time to “develop and implement a set of shared safety protocols for advanced AI design and development”. In parallel, AI developers must work with policymakers to “dramatically accelerate development of robust AI governance systems.”
UK government announces “pro-innovation” approach to AI
On March 29, 2023 – one week after the initial publication of the aforementioned open letter – the UK government published a Policy Paper on its approach to AI. Presented to Parliament by the Secretary of State for Science, Innovation and Technology, the Rt Hon Michelle Donelan MP, the Policy Paper looks to ensure that the UK government is “putting the UK on course to be the best place in the world to build, test, and use AI technology”.
Initially echoing the urgency of the Open Letter, the Policy Paper highlights that “given the pace at which AI technologies and risks emerge, and the scale of the opportunities at stake, we know there is no time to waste”.
Despite this, the Policy Paper does not introduce new legislation surrounding AI, instead setting out a new principles-based framework, which it expects regulators to consider in their management of AI. The five principles are:
+ Fairness
+ Accountability and governance
+ Contestability and redress
+ Appropriate transparency and explainability
+ Safety, security, and robustness
Regulators will be charged with leading the implementation of this framework by, for instance, issuing guidance on best practice for adherence to these principles.
While regulation is not imminent, the Policy Paper confirms that the government is looking to publish a regulatory roadmap for AI, which can be expected in the next six months. Through this, the UK government looks to develop a “proportionate and pro-innovation regulatory framework”, one that can foster innovation while addressing the risks that AI poses.
In order to facilitate the creation of new regulation, the government plans to “pilot a new AI sandbox or testbed”. In the coming 12 months, the government will “encourage” key regulators to “publish guidance on how the cross-sectoral principles apply within their remit” – and will continue to develop regulation within a sandbox.
Can regulation keep up with technology?
It is interesting to see that the government acknowledges that, while AI is currently regulated through existing legal frameworks, some AI risks “arise across, or in the gaps between, existing regulatory remits”. Despite this acknowledgment, it does not appear that we will see clear regulatory guidance for 12 months, at least. Of course, regulating for the unknown takes time – and bad regulation can often be worse than none – but the pace of innovation looks set to far outpace the pace of regulation here.
There is a marked difference in tone between the UK government’s Policy Paper and the open letter above. While one calls for immediate pause and reflection, the other adopts an approach whereby innovation and regulation are designed side-by-side. The latter of these approaches may prove challenging. Looking to the U.S., for example, the Securities and Exchange Commission (SEC) only recently amended its Recordkeeping Rule 17a-4, for the first time in 25 years. By contrast, Amazon’s Alexa was introduced in 2014, and just eight years later AI is advanced enough to pass the Bar exam.
How should firms approach AI?
In the absence of regulatory parameters, or even regulatory guidance, firms will need to take a risk-based approach to AI. When implementing AI, consider whether it is explainable – could you show (and tell) the regulator how your AI-attributed processes are completing tasks? Are you adding a layer of human review to AI-derived decisions? Consideration, caution, and explainability will be key.
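To make the checklist above concrete, the escalation logic can be sketched as a minimal human-in-the-loop gate. This is an illustrative sketch only: the `AIDecision` class, `needs_human_review` function, and the 0.9 confidence threshold are all hypothetical choices for this example, not part of any regulatory framework or real compliance library.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a risk-based, human-in-the-loop gate for
# AI-derived decisions. Names and thresholds are assumptions made
# for illustration, not drawn from any regulation or real product.

@dataclass
class AIDecision:
    outcome: str                 # what the model decided
    confidence: float            # model-reported confidence, 0.0 to 1.0
    rationale: list = field(default_factory=list)  # explainability trail

def needs_human_review(decision: AIDecision, threshold: float = 0.9) -> bool:
    """Escalate when the decision cannot be explained (empty rationale)
    or when the model's confidence falls below the chosen threshold."""
    return not decision.rationale or decision.confidence < threshold

# A low-confidence decision is routed to a human reviewer, and the
# rationale is retained so the process can be shown to a regulator.
flagged = AIDecision("reject", confidence=0.62, rationale=["feature out of range"])
clear = AIDecision("approve", confidence=0.97, rationale=["all checks passed"])

assert needs_human_review(flagged)       # escalate: confidence below threshold
assert not needs_human_review(clear)     # proceed: explainable and confident
```

The design choice worth noting is that an unexplainable decision is escalated regardless of confidence – which mirrors the point above that explainability, not just accuracy, is what a regulator will ask you to demonstrate.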
There is no doubt that AI offers significant benefits for financial services, but when its creators cease to understand how it is evolving, it may be worth taking your time.
Global Relay has built AI-enabled solutions engineered to seamlessly integrate throughout the Global Relay ecosystem of archive, collaboration, and compliance applications. Our clients use AI to enhance business efficacy across surveillance, behavioral analytics, customer support, and more. Our approach to innovation is to build it right, the first time – so that all our products are robust and secure.
If you’d like to know more about our AI-enabled products, speak to the team or visit our product page.