In late August, UK Members of Parliament criticised the UK government for failing to lead in regulating artificial intelligence (AI). The criticism came ahead of an AI summit the UK government is hosting in November, and followed the release of a government white paper on the subject.
In March of this year, the UK Secretary of State for Science, Innovation, and Technology released the aforementioned white paper, laying out the government’s approach to regulating AI. The white paper set out the government’s views on what AI regulation should entail before proposing a regulatory framework, and it noted several of the government’s underlying concerns regarding AI. Central to the white paper was the government’s view that, under the proper regulatory conditions, AI has the capacity to stimulate the UK economy. The white paper also expressed the view, shared by both the government and Parliament, that as countries around the world begin to establish rules for governing AI, the UK government needs to act quickly to lead the global discussion on AI regulation.
The white paper also identified several factors that make the implementation of AI regulation critical, including the potential harm that AI could pose to physical and mental health, privacy, and human rights. As new AI innovations emerge, proper regulation to address potential bias and discrimination in AI will also be key to maintaining public trust, which is necessary for business investments in the technology.
When it comes to developing a regulatory framework, the government proposed regulating the uses of AI, rather than the technology itself. The government argued that this would ensure the regulation is not “cumbersome” to businesses, and it further proposed that the principles outlined in the white paper should not initially be enshrined in law, so that innovation is not obstructed. Instead, any AI regulation should be implemented by existing regulatory agencies that have industry-specific expertise.
The regulatory approach set out in the government’s white paper differs significantly from the EU’s proposed approach to regulating AI, as outlined in the EU AI Act. Where the UK approach leaves decisions to the discretion of regulatory agencies on an industry-by-industry basis, the EU has provided a list of banned AI systems and expanded its classification of high-risk AI systems. The EU approach expands obligations for producers and distributors, while the UK approach prioritises minimising disruptions to AI innovation.
As the UK and EU continue to chart their own legislative paths post-Brexit, the regulatory landscape will become increasingly complex for businesses that operate in both markets. At this time, the UK government’s proposed approach to AI regulation does not directly expand obligations for producers and distributors, but this may change considering criticisms from Members of Parliament and other input received from stakeholders during the consultation period.
With so much of the UK’s AI policy undetermined, manufacturers and distributors could still face significant upheaval. Given these evolving complexities, businesses that develop AI or use the technology in their products face continued risk from sustained regulatory oversight, despite the current lack of concrete rules governing AI. Businesses with an interest in AI regulation should take advantage of opportunities to participate in the development of legislation and should pay close attention to new developments.
Trusted by the world’s leading brands, Sedgwick brand protection has managed more than 5,000 of the most time-critical and sensitive product recalls in 100+ countries and 50+ languages, over 25 years. To find out more about our product recall and remediation solutions, visit our website here.