Response to the Government’s AI White Paper

In our response to the Government's AI White Paper, we emphasise the need to implement the proposals as quickly as possible, adequately fund and empower the new AI risk function, and develop a cross-sectoral approach to regulating general-purpose AI models.

29 Mar 2023 | 8 min read

Author: James Baker

Introduction

Today, the Government’s Department for Science, Innovation and Technology published its AI White Paper, ‘A pro-innovation approach to AI regulation’. The White Paper contains the Government’s proposals for the establishment of a new regulatory framework to ‘guide and inform the responsible development and use of AI in all sectors of the economy’. It also sets out a series of questions on AI regulation to which the Government requests responses by 21 June 2023.

Below, we set out a summary of the central features of the new regulatory framework, our initial commentary on its strengths and weaknesses, and our recommendations for improvement.

Note this is a rapid reaction to the White Paper, written to inform debate in Parliament, the Labour Party and beyond.

We welcome the release of the White Paper, and many of the features of its framework. However, given the time lag to full implementation, the purely sectoral approach and the limited risk-management powers, we believe the framework is unlikely to keep pace with the rapid AI developments we are witnessing or to provide adequate certainty to UK businesses.

Recommendations for strengthening the framework:
  • The Government’s proposals should be implemented as soon as possible, rather than waiting another year.
  • The Government should introduce a duty requiring regulators to have due regard to the regulatory principles in this parliamentary session.
  • The central risk function should be adequately funded and empowered to fulfil its remit.
  • A cross-sectoral approach is more suitable for regulating general-purpose AI ‘foundation models’ (such as GPT and other large language models) - as opposed to implementation by individual regulators.

The Framework

The framework sets out five principles that form the basis of the new regulatory regime:

  • Safety, security and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance; and
  • Contestability and redress.

Rather than enforce these principles centrally, the Government intends to distribute responsibility to the existing set of sectoral regulators - such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA) and the Medicines and Healthcare products Regulatory Agency (MHRA). Regulators will initially have discretion to implement the principles as they see fit, monitoring the development of AI tools and applications within their sectors of the economy on the basis of their existing powers and resources. For the time being, the principles will not be placed on a statutory footing, meaning no new primary legislation will be enacted to roll out the new AI framework.

The Government also intends to establish a new set of cross-cutting central functions to support regulators as they develop and enforce their individual rules, together with a central risk function to monitor future risks.

Faster implementation needed

The White Paper is a step in the right direction, towards comprehensive regulation that limits the harms of AI and gives businesses the confidence to deploy AI safely. The cross-cutting central functions and risk function are particularly welcome. But it is also a limited step that will be implemented too slowly. In September 2021, the Government’s National AI Strategy committed to publishing this White Paper in early 2022. The White Paper is a year late, and yet many of the most important commitments are at least another year away - meaning that the full framework will not come into effect until after the US and EU equivalents.

We cannot afford to wait another year for fully functioning AI regulation. In the four months since the release of OpenAI’s ChatGPT, the use of AI tools and applications has multiplied across the economy. Law firms are using Harvey to automate legal research and writing, GitHub Copilot is being used to write computer code, and the British Government is considering the use of ChatGPT in the Civil Service. AI models are growing more powerful by the day, with ever-greater implications for society. GPT-4, released two weeks ago, significantly outperforms ChatGPT: it can ace graduate-level exams and understand not just text but also image inputs. The pace of these advances is causing significant concern amongst experts, and today the Future of Life Institute published an open letter, signed by over a thousand senior professors and industry leaders, calling for a six-month pause on the training of all AI systems more powerful than GPT-4, to allow time for greater levels of Government regulation and oversight to be put in place.

Recommendation: the Government’s proposals should be implemented as soon as possible, rather than waiting another year.

No duty for regulators and no extra resource

Even if all these commitments were implemented immediately, the regulatory framework would still be lacking. Under the current proposals, the new principles will have no statutory backing, and the Government has not set out how it will adequately resource regulators, especially those beyond the Digital Regulation Cooperation Forum, which are already struggling with the impact of AI on their work. Without a new legal basis, the Government will likely have difficulty obliging regulators to follow the principles laid down in the paper, and ultimately regulators could deprioritise or ignore these regulatory principles if they come into conflict with existing statutory duties and other pressing demands on their resources.

The Government has stated that it anticipates bringing forward legislation in future, and may introduce a new duty requiring regulators to have due regard to the principles. However, given the pace of development in AI, this approach risks too little scrutiny of, and too little accountability for, companies developing transformative capabilities at a pivotal moment in their introduction to society and the economy.

Recommendation: the Government should introduce a duty requiring regulators to have due regard to the regulatory principles in this parliamentary session.

Risk function

We welcome the proposal in the Government’s framework to establish a central risk function, with a remit that includes covering “‘high impact but low probability’ risks such as existential risks posed by artificial general intelligence or AI biosecurity risks.”

However, to do its job effectively, this central risk function needs the power to proactively monitor AI developments within individual firms developing foundation models and, if necessary, to intervene to mitigate risks. This will require additional resources and potentially a new legal basis. The White Paper provides some discussion of how a horizon-scanning function might work, but appears to rely on the voluntary provision of information by industry and academia. This is unlikely to be adequate to keep the Government fully informed of emerging trends and risks.

Ideally, the framework would also include a requirement for companies developing AI foundation models to regularly update the Government on their progress, and a right for the Government to visit and inspect AI labs if necessary (as the head of OpenAI called for last week), giving the Government the opportunity to review and audit models before they are publicly released.

Recommendation: the central risk function should be adequately funded and empowered to fulfil its remit.

Cross-cutting approach

We also welcome the proposal to establish cross-cutting functions to support regulators. As unions, think tanks, businesses and regulators themselves have said, coordinated regulation is desperately needed for a technology with such wide-ranging applications, and such significant associated risks and opportunities.

However, ‘foundation model’ systems like GPT-4 are not comprehensively covered by existing legislation and will cut across the remit of almost every sectoral regulator. They are already used for powerful applications in medicine and healthcare, finance, the life sciences and chemistry, and will increasingly be deployed throughout Microsoft’s and Google’s suites of workplace software.

We recognise that this is a fast-moving area, but that makes it all the more important for the regulatory environment to keep pace. Citizens and consumers will need reassurance that these foundation models actually follow the principles the Government has outlined. And a consistent cross-sectoral approach to regulating these systems will benefit both regulators and the frontier AI firms they need to oversee - avoiding inconsistent standards across sectors, and companies being bombarded with requests from several different regulators, each seeing only part of the picture.

Recommendation: a cross-sectoral approach is more suitable for regulating general-purpose AI ‘foundation models’ (such as GPT and other large language models) than implementation by individual regulators.

Wider recommendations and next steps

The White Paper on regulation forms just one limb of the UK Government’s wider approach to AI and emerging technology, complementing the International Tech Strategy, Future of Compute Review, Science & Technology Framework and Integrated Review Refresh, all of which have been published in the past two months. Alongside this commentary on the White Paper, Labour for the Long Term have responded to the International Tech Strategy and made submissions on compute governance and semiconductor supply chains.

Over the coming weeks and months, Labour for the Long Term will be considering what an integrated Labour approach to AI and emerging technology should look like - encompassing regulation, partnerships with the private sector, access to compute and international cooperation. If you are interested in supporting us with this project, please get in touch.

About the author

James Baker is Labour for the Long Term’s Executive Director for Policy and Operations.
