Although we are still in the early stages of generative artificial intelligence (AI), its potential to drive growth and improve customer engagement is apparent.
Whether through creating compelling content in real time, simplifying complex tasks or delivering hyper-personalisation, the technology has caught the attention of business leaders worldwide, many of whom are eager to learn how they can harness its power. And with good reason: research has suggested that as much as 40% of all working hours could be impacted by generative AI.
But as generative AI becomes more widely available, governments and lawmakers are taking a more proactive role in its governance, with the aim of minimising risk and ensuring the safe usage of the technology. Because of this, it's crucial to stay up to date with the legal landscape to help you avoid risk and adhere to guidelines.
In this e-book, we will explore how different countries are approaching regulation of the technology, while providing you with key steps to help you stay informed and in control of your generative AI journey.
The usage of generative AI has raised several concerns, prompting lawmakers to take action and begin discussions around implementing regulations and guidelines for safe usage. The most common concerns surrounding generative AI are:
With these concerns in mind, what regulations are countries planning to implement, and what has already been put into action?
At a glance: Expected to finalise the landmark AI Act, which will introduce the world's first comprehensive AI regulations. This far-reaching legislation aims to classify AI by levels of risk, while introducing strict penalties for non-compliance.
The European Union (EU) has been actively working on the AI Act for several years, making it by far the most advanced in terms of implementing AI regulations. The Act is expected to be finalised by the end of 2023, likely followed by a multi-year transition period before the laws formally take effect.
The AI Act is an upcoming set of regulations that aims to categorise AI according to different levels of risk. Its primary goal is to enforce stricter monitoring regulations on high-risk applications of AI, and to ban outright AI technologies that carry an unacceptable level of risk.
Some of the unacceptable uses that the EU has identified include:
Fortunately, generative AI doesn’t fall into these categories. In fact, the first draft of the AI Act, published in 2021, did not specifically reference generative AI at all. However, this has since changed given the meteoric rise of large language model technologies throughout 2022 and 2023.
Amendments were proposed to the AI Act in June 2023 to give generative AI its own category, "General Purpose AI systems". This way, the technology wouldn't be constrained by the "high-risk" and "low-risk" categorisations that the AI Act applies to other forms of AI technology. This categorisation recognises that generative AI can be applied to a wide range of tasks with numerous outcomes, and may produce unintended outputs. This stands in stark contrast to an AI technology such as facial recognition, which has a far more clearly defined use case.
The AI Act aims to introduce the following requirements for generative AI usage:
It's important to keep in mind that the AI Act is subject to change, but here are a few observations in relation to the current draft:
At a glance: Has published an artificial intelligence whitepaper advocating that existing regulators oversee AI using their current resources. The paper takes a "pro-innovation" stance regarding generative AI.
Published in March 2023, the UK government’s “AI regulation: a pro-innovation approach” outlined an agile framework to guide the development and use of AI, underpinned by 5 principles:
The whitepaper mainly discusses the usage of artificial intelligence at a broad level, but references generative AI a number of times.
A few points to note. First, there is at present no dedicated AI regulator in the UK, and it seems this will remain the case. Instead, the whitepaper states that existing regulators will use its guidelines to monitor and regulate the use and growth of AI in their respective fields.
Second, the 5 guiding principles above will not be subject to specific laws. The whitepaper states that "new rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances." It is clear from this that the UK is taking a more hands-off approach to regulating AI technologies.
However, it is important to note that the whitepaper suggests a statutory duty may be established in the future, subject to an evaluation of how well regulators uphold the guidance principles.
On generative AI, the whitepaper highlights its benefits to society, such as its potential in the field of medicine, as well as its potential to grow the economy.
In terms of regulation, the whitepaper states that the government is "taking forward" the proposals outlined by the Government Chief Scientific Adviser (GCSA), most notably on the subject of intellectual property law and generative AI.
Also taking a "pro-innovation" stance, the GCSA recommended enabling the mining of data inputs and applying the existing protections of copyright and IP law to the outputs, helping to simplify the process of using generative AI while providing clear guidelines for users.
The GCSA also suggested that, in accordance with international standards, AI-generated content should be labelled with a watermark showing that it was generated by AI.
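As a purely illustrative sketch of what such labelling might look like in practice, the hypothetical Python snippet below attaches a machine-readable AI-disclosure record to a piece of generated content. The field names, disclosure wording and model name are assumptions for illustration only; a production implementation would more likely follow an emerging provenance standard such as C2PA rather than an ad hoc format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LabelledContent:
    """A generated output paired with a machine-readable AI disclosure.

    All field names here are hypothetical, not taken from any standard.
    """
    body: str
    generator: str  # name of the model or tool that produced the content
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    disclosure: str = "This content was generated by AI."


def label_output(text: str, model_name: str) -> LabelledContent:
    """Wrap raw model output with an AI-disclosure record before publishing."""
    return LabelledContent(body=text, generator=model_name)


# Illustrative usage with a hypothetical model name:
record = label_output("Draft product description...", model_name="example-llm-v1")
print(record.disclosure, "|", record.generator)
```

The point of such a record is that the disclosure travels with the content, so downstream systems and end users can identify AI-generated material consistently.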
Both the GCSA's recommendations and the whitepaper underscore the importance of a clear code of conduct for AI usage, one that does not impede creativity and productivity. The whitepaper states that "[the UK] will ensure we keep the right balance between protecting rights holders and our thriving creative industries, while supporting AI suppliers to access the data they need."
The consultation period comes to a close in September 2023, at which point regulators are expected to voice their opinions on the framework, explain how they plan to implement it, and recommend any modifications.
In the same vein, the Competition and Markets Authority (CMA) is currently conducting a review of AI foundation models, such as the one underpinning ChatGPT, with a focus on consumer protection. This review is expected to be released by the end of 2023.
At a glance: Progressing towards more comprehensive AI legislation. The latest development saw seven leading AI companies voluntarily commit to establishing minimum safety, security and public trust guardrails.
The USA is generally considered to be lagging behind its European counterparts in terms of governing the usage of AI. However, there have been many developments over the past few years signalling lawmakers' intent to implement guidelines and legislation that promote the safe usage of AI.
Of note, these include:
However, similar to the UK, these are guidance documents not upheld by specific laws. The first of these, the "Blueprint for an AI Bill of Rights", published in October 2022, outlined 5 principles:
In May 2023, the AI Risk Management Framework referenced the usage of generative AI, suggesting that previous frameworks and existing laws are unable to "confront the challenging risks related to generative AI." From this, it can be inferred that generative AI will likely become subject to legislation in the future.
In July 2023, the White House announced that seven companies engaged in the development of generative AI had voluntarily committed to managing the risks associated with the technology. The companies are Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI. Of the commitments, the ones that stand to have the most effect on generative AI include:
It's also worth noting the precedent-setting Washington, D.C. federal court ruling of August 2023, which established that artwork created solely by artificial intelligence is not eligible for copyright protection, unlike human-generated art. In the ruling, US District Judge Beryl Howell stated: "We are approaching new frontiers in copyright as artists put AI in their toolbox," adding that this will raise "challenging questions" for copyright law.
In summary, the USA is making significant strides towards ensuring safe usage of AI-related tools and providing guidance for organisations in their development and implementation. While these guidelines are not currently backed by any specific laws, they will likely serve as a foundation for future legislation.
Regulations for AI and generative AI are still a few years away, as we have explored in this e-book. The EU's AI Act is the closest to being finalised, but lawmakers are still making amendments and examining developments in the technology closely. Even after the final draft of the AI Act is approved, it will undergo an implementation period, meaning it could be several more years before any laws take effect (possibly 2025).
In contrast, the USA and UK have opted for a decentralised approach. Both are currently in consultation periods, and their primary goal at this stage is to create industry guidelines for safe generative AI usage. Here too, any regulations, if they come to fruition, are anticipated to be several years away.
However, as generative AI continues to advance, it's crucial for companies to have a well-defined strategy in place for AI ethics and compliance. While the USA and UK have not outlined clear penalties for breaches of guidelines, the EU AI Act proposes steep non-compliance penalties, with companies facing fines of up to €30 million or 6% of global annual turnover, whichever is higher.
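To put those figures in perspective, here is a minimal Python sketch of that headline penalty ceiling, assuming the "whichever is higher" reading of the draft's figures. The function name and example turnover amounts are illustrative assumptions, and the final thresholds may change before the Act is adopted.

```python
def max_ai_act_fine(global_turnover_eur: float) -> float:
    """Headline fine ceiling under the draft AI Act's reported figures:
    EUR 30 million or 6% of global annual turnover, whichever is higher.
    (Hypothetical helper; thresholds may change in the final text.)"""
    return max(30_000_000.0, 0.06 * global_turnover_eur)


# A firm with EUR 1 billion in turnover faces a ceiling of EUR 60 million,
# while one with EUR 100 million still faces the EUR 30 million floor.
print(f"{max_ai_act_fine(1_000_000_000):,.0f}")  # 60,000,000
print(f"{max_ai_act_fine(100_000_000):,.0f}")    # 30,000,000
```

In other words, the €30 million figure acts as a floor on the maximum penalty, so larger companies are exposed well beyond it.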
No matter where you do business, it is vital to adhere to AI ethics guidelines to avoid potential penalties, and to remain agile so you can adapt to any future regulations.
As demonstrated by the recent amendments to the EU’s AI Act to account for generative AI, the regulatory landscape seems to be constantly shifting. To better prepare for the future and to adhere to current generative AI guidelines, here are some practical steps you can take.
Bringing together all those involved, including legal, IT, human resources, front-line employees and management teams, will help create robust policies for generative AI usage that prioritise security and ethics.
As generative AI becomes more prevalent, it is essential for companies to take accountability for its ethical use. By following the guidelines of current AI frameworks and implementing early safeguards, businesses can ensure the secure and safe use of generative AI while remaining agile to potential future regulations. The creation of an ethical framework, strong labelling and data-tracking capabilities, and the establishment of an AI governance team can significantly contribute to this.