UK policy white paper takes “pro-innovation approach” to AI regulation – What you need to know

Rather than introducing new legal requirements, the white paper outlines the UK government’s "pro-innovation approach to AI regulation," focusing on the benefits and transformative potential of artificial intelligence. 

LOCATION: UK

EFFECTIVE: March 29, 2023 

30-second take | 3-minute deeper dive | 3-second links

30-second take 

  • The UK government recently published a white paper on artificial intelligence (AI) regulation, setting out the UK's formal policy position on AI. It is open for public consultation until June 21, 2023. 
  • The white paper outlines the government’s "pro-innovation approach to AI regulation," with ample focus on the benefits and transformative potential of AI in medicine, scientific research, manufacturing, and other business areas. 
  • No new legal requirements are being introduced in the policy white paper. Rather, the UK is taking a "proportionate" approach, seeking to avoid "unnecessary burdens for businesses." 
  • This approach differs from the EU's proposed AI Act, a comprehensive regulation that would impose a broad range of mandatory requirements on the developers and deployers of AI systems across all sectors.
  • To ensure that certain risks are mitigated and that public trust in AI is maintained, the policy white paper proposes a unique regulatory framework of five cross-functional principles. 
  

3-minute deeper dive 

On March 29, 2023, the UK government released a white paper on artificial intelligence (AI) regulation, setting out the UK's formal policy position on AI. The policy white paper, which is open for public consultation until June 21, outlines the government’s "pro-innovation approach to AI regulation," with ample focus on the benefits and transformative potential of AI in medicine, scientific research, manufacturing, and other business areas.  

The policy white paper does not propose any new legal requirements around AI at this stage. Presently, the UK government is strongly reluctant to introduce new regulation that could stifle innovation and growth. Instead, the government states it is taking a "proportionate" approach, seeking to avoid "unnecessary burdens for businesses." This approach contrasts with that of the European Union, whose proposed AI Act is a comprehensive regulation that would impose a broad range of mandatory requirements on the developers and deployers of AI systems across all sectors.

The absence of legal requirements, however, does not mean the UK government is not concerned with mitigating risk. AI poses risks in a wide variety of areas, including human rights, security, personal safety, fairness, privacy and agency, and societal wellbeing. To ensure these risks are mitigated and that public trust in AI is maintained, the policy white paper proposes a unique regulatory framework of five values-focused, cross-functional principles to guide and inform the responsible development and use of AI in all sectors. 

  • Safety, security, and robustness. AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed. 
  • Appropriate transparency and explainability. An appropriate level of transparency and explainability will mean that regulators have sufficient information about AI systems and their associated inputs and outputs to give meaningful effect to the other principles (e.g. to identify accountability). 
  • Fairness. AI systems should not undermine the legal rights of individuals or organizations, discriminate unfairly against individuals or create unfair market outcomes. Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a system’s use, outcomes and the application of relevant law. 
  • Accountability and governance. Governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle. AI life cycle actors should take steps to consider, incorporate and adhere to the principles and introduce measures necessary for the effective implementation of the principles at all stages of the AI life cycle. 
  • Contestability and redress. Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm. 

The aim is for regulators to take account of these principles when developing context-specific rules and guidance around AI for their sectors. In future, the government anticipates introducing a statutory duty requiring regulators to have "due regard" to these principles. On the subject of foundation models (e.g., ChatGPT) and other specific AI technologies, the UK government will do further work to explore the risks and any need for regulation in this area.

Overall, the stance outlined in the policy white paper presents a regulator-led approach to AI regulation. Existing regulators will have ample flexibility in developing and enforcing AI rules within their sectors, including the: 

  • Information Commissioner's Office (ICO) 
  • Competition and Markets Authority (CMA) 
  • Office of Communications (Ofcom) 
  • Financial Conduct Authority (FCA) 
  • Bank of England  
  • Medicines and Healthcare products Regulatory Agency (MHRA) 

In addition, the UK government will provide the following central functions to support regulators: 

  • Monitoring and evaluation of the overall regulatory framework’s effectiveness and the implementation of the principles, including the extent to which implementation supports innovation. 
  • Assessing and monitoring risks across the economy arising from AI.
  • Conducting horizon scanning and gap analysis, including by convening industry, to inform a coherent response to emerging AI technology trends. 
  • Supporting testbeds and sandbox initiatives to help AI innovators get new technologies to market. 
  • Providing education and awareness to give clarity to businesses and empower citizens to make their voices heard as part of the ongoing iteration of the framework. 
  • Promoting interoperability with international regulatory frameworks.

 

3-second links 

 

The information provided on this website does not, and is not intended to, constitute legal advice; instead, all information, content, and materials available on this site are for general informational purposes only. Readers should consult with their own attorney regarding legal matters.