The impact of increased AI regulation in the EU, the UK, the US and China will be felt beyond the borders of these four markets. Businesses within these markets, as well as those that trade with them, will have to comply with increasingly comprehensive regulations. While compliance can present challenges, a global drive for AI regulation that promotes responsible use, transparency, data protection and high ethical standards can potentially transform the AI landscape for the better.
“The OECD and UNESCO developed the first guiding principles on AI, which set an important ethical framework for AI use,” says Michael Bąk, Executive Director of the Forum on Information and Democracy. “Building on extensive discussions and global concerns, the European Commission (EC) has taken steps to address the gaps in regulatory frameworks where AI harms could percolate.”
“Since the EC first proposed an EU AI regulatory framework back in 2021, we've seen significant milestones like the UK AI Safety Summit, the Hiroshima Process and the [forthcoming] adoption of the EU AI Act,” Bąk continues. “Their objectives have been clear – to analyse and categorise AI systems based on the risks they pose to users.”
Andrej Savin, Professor with Special Responsibilities in IT and Internet Law, Copenhagen Business School, says that to prepare for more stringent regulations, organisations that use AI technologies “will require separate compliance considerations and possibly a separate compliance function.”
Minesh Tanna, Partner and Global AI Lead, Simmons & Simmons, says organisations need to consider AI regulation “holistically”. As well as AI-specific regulations coming into force in China, “notably in the EU” and “particularly at a state level” in the US, Tanna adds that “non-AI-specific regulations – particularly, in the areas of data privacy, consumer protection and antitrust – are increasingly being used to regulate AI.”
“Navigating the EU regulatory landscape is especially tricky,” says Tanna. “Apart from the upcoming EU AI Act, which will be the first comprehensive AI regulation globally, other EU digital regulations, such as the Digital Services Act (DSA), can apply to AI, as can other regulations, notably the GDPR.”
Savin says the EU AI Act focuses on high-risk AI, reflecting existing risk-based regulation such as the GDPR. He explains that “although the EU goes after producers and modifiers of high-risk AI systems, the presence of other digital rules will likely require knowledge and compliance.” Savin adds that many private and public sector organisations are rushing to get advice on how to comply with the growing raft of EU legislation.
The DSA regulates platforms “asymmetrically”, according to Savin, meaning the larger the platform, the more onerous the rules. The Digital Markets Act (DMA) regulates gatekeepers with “a hybrid set of competition-like rules”, with all gatekeepers subject to the EU AI Act as well.
Additionally, Savin says that other laws, such as those regulating taxation and insurance, may affect AI use, as organisations may need clarity on how AI is taxed and on how to insure against anything that goes wrong when using it.
Bąk’s main criticism of the EU AI Act is that “progress remains slow, leaving organisations to self-regulate.” Meanwhile, the Council of Europe is concluding negotiations on what appears to be a groundbreaking Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, which Bąk says “would be the world’s first legally binding treaty on AI”, although the Forum on Information and Democracy regrets that the convention may not apply to the private sector.
He says the convention may serve as a global standard for AI regulation. The process for the framework to reach this point has “involved a diverse range of stakeholders beyond member states of the Council of Europe”, so it should have far-reaching impact beyond European and UK borders.
“The Committee on Artificial Intelligence, which set the agenda and debated the many critical issues, trade-offs and potential harms, included not only Council of Europe member states, but also observer countries such as Argentina, Australia, Japan, Peru, the United States and Uruguay,” Bąk explains. “Additionally, international and regional organisations, multilaterals, the private sector, and civil society, research and academic institutions have been actively involved.”
Tom Whittaker, Director at independent law firm Burges Salmon, says that the UK takes “a context-specific approach, regulating based on the use cases and sectors, rather than regulating AI technology directly.”
In 2023, the UK’s Department for Science, Innovation and Technology’s white paper consultation outlined a framework for regulating AI based on five key principles – safety, security, transparency, fairness and accountability. Voluntary safety and transparency measures for developers of advanced AI models are expected to complement regulatory efforts.
“What these five principles look like in practice will differ between sectors,” Whittaker says. “For example, the UK white paper on AI regulation recognises that an AI chatbot in fashion retail requires a different regulatory approach to an AI chatbot in medical diagnosis.”
“The regulators’ responses to the White Paper demonstrate that while there is some similarity between approaches, there are expected to be differences between sectors and regulators,” he continues. “However, regulators and government are developing methods to improve coherence between regulators where possible, and to enable organisations to seek further guidance and clarification promptly.”
“Whether differing regulatory approaches will cause issues remains to be seen,” Whittaker notes. “Many organisations have experience of navigating multiple regulatory regimes, but the risks and complexities of AI, and the speed at which the market is moving, mean that organisations need to prepare and constantly monitor their governance frameworks and the legal landscape.”
While the framework won’t become law immediately, targeted legislative interventions are anticipated to address gaps in regulating complex AI systems. Organisations should prepare for increased regulation, including guidelines, data collection and enforcement, while international firms will need to navigate differing requirements across jurisdictions.
Unlike the EU’s AI Act, which creates new compliance obligations for a range of AI stakeholders, including providers, importers, distributors and deployers, the UK government’s planned principles-based framework will mean existing regulators can interpret and apply the principles within their sectors. Since leaving the EU following the 2016 referendum, the UK has favoured this softer regulatory touch, reflecting a drive to make the country attractive to foreign investors.
UK organisations that work with the EU will still need to comply with the bloc’s regulations. Looking ahead, Tanna says that a post-Brexit UK “is unlikely to regulate AI heavily, despite increasing pressure.”
“This makes the UK a favourable jurisdiction for AI from a regulatory perspective, although the EU AI Act is likely to apply to many organisations developing or using AI in the UK, given that it has extra-territorial application – and because many UK-based organisations will have operations in the EU that are subject to the EU AI Act.”
By 2023, 21 US regulatory agencies were addressing AI issues, with the Department of Transportation, Department of Energy, and the Occupational Safety and Health Administration introducing AI-related regulations for the first time.
In October 2023, President Joe Biden signed an executive order mandating the State and Commerce departments to set standards for AI partnerships, with a focus on cybersecurity. North of the US border, Canada’s Artificial Intelligence and Data Act is nearing finalisation, but won’t be enacted for another two years.
Copyright and AI have also been under the legal spotlight in the US. In February 2023, the US Copyright Office ruled that AI-generated images do not qualify for copyright protection because they lack human authorship. Even if AI-generated images are original, the office found, they do not meet the criterion of human creation. However, the office stated that it would register works by humans containing AI-generated images, provided the overall work is copyrightable.
“The US remains broadly light-touch when it comes to digital regulation – especially as compared with the EU – but there are now AI regulations at the federal level, such as the AI Executive Order, and state level, such as the New York AI Hiring Law, which apply to AI in different ways,” observes Tanna.
To mitigate security threats posed by lifelike virtual content and multimodal media, particularly AI-generated deepfakes, China has implemented regulations governing deep synthesis technology, targeting providers and users. Requirements include content screening, adherence to legal standards, user authentication, explicit consent for biometric editing, data protection measures and enforcement of content moderation policies. These regulations aim to improve accountability and safeguard against potential harm from the creation and dissemination of deceptive or malicious content.
In November 2023, the Beijing Internet Court ruled that AI-produced artwork qualifies for copyright protection. The court found in favour of the plaintiff who claimed copyright infringement by a blogger who used an AI-generated image, deeming the image “original”, owing to the intellectual effort used to create it.
The court also mandated that AI use be disclosed for transparency. Even so, the decision highlights a contrast between Chinese and European law, according to an article by Gowling WLG lawyers Ivy Liang, Celine Bey and Ines Rosen, which observes that “on a practical level, the recognition of copyright in works generated by AI encourages companies to use such tools, especially as it increases the commercial value of the products generated in this way.”
While the authors agree this decision chimes with China’s ambition to become a world leader in the digital economy and AI, they add that “within the EU, and especially in France, a broad consensus is emerging to deny copyright protection to productions generated exclusively by AI.”
While China takes a heavy-handed regulatory approach, Tanna says there are very few AI-specific regulations in neighbouring countries, although proposals are being considered in Australia, Japan and Thailand, “making Asia – currently – relatively easier to navigate from an AI regulation perspective.”
Ultimately, constructive AI regulation can help businesses worldwide embrace a responsible approach to its use, as Var Shankar, Executive Director, Responsible AI Institute, explains. He says that he is seeing organisations “take thoughtful and measured approaches to responsible AI use and look to emerging laws and regulations to provide guidance.”
“In many cases, understanding the risks of Generative AI – such as hallucinations, privacy and security risks, and fairness issues – has driven a broader appreciation for AI risks writ large,” Shankar concludes.