Nailing your AI Governance

AI governance refers to the policies, regulations, standards and practices that guide the development, deployment and usage of AI technologies. As governments around the world continue to pass increasingly detailed and stringent AI regulations, against the backdrop of the UN’s Global Digital Compact (GDC), getting AI governance right is more important than ever.

Governance encompasses ethical considerations, transparency, accountability, data protection, fairness and risk management to ensure AI solutions are used responsibly, addressing potential risks and challenges while leveraging the technology’s benefits for business and wider society.


Global regulation overview

With many organisations having to comply with the regulations of their own country and those of the countries where they do business, a strong governance strategy that complies with legislation and ensures AI is used responsibly, transparently and ethically has never been more important.

The Stanford University 2024 AI Index Report analysed legislation mentioning “artificial intelligence” passed in 128 countries between 2016 and 2023. Of these, 32 countries enacted at least one AI-related bill, with 148 AI-related bills passing into law worldwide in this period, in addition to the EU AI Act. In 2023, Belgium passed five AI-related laws, the most of any country that year, followed by France, South Korea and the United Kingdom, each of which passed three.

Since 2016, the US has passed the most AI-related laws at 23, followed by Portugal with 15 and Belgium with 12. Canada launched the first national AI strategy in March 2017, and 75 national AI strategies have since been unveiled. By 2023, 21 US regulatory agencies were addressing AI issues, with the Department of Transportation, the Department of Energy and the Occupational Safety and Health Administration introducing AI-related regulations for the first time.

In October 2023, President Joe Biden signed an executive order directing the State and Commerce departments to set standards for AI partnerships, with a focus on cybersecurity. North of the US border, Canada’s Artificial Intelligence and Data Act is nearing finalisation, but it is not expected to come into force for another two years.

China, meanwhile, has enacted regulations to control deep synthesis technology to tackle security issues linked to creating lifelike virtual content and multimodal media, such as deepfakes. The rules apply to providers and users, requiring measures like content screening, legal adherence, user authentication, consent for biometric editing, data safeguarding, and content moderation enforcement.

AI governance in the real world

Companies across the world are setting high standards for AI governance, enabling them to operate within the law, use the technology responsibly and embrace the GDC’s core commitment to “shared principles for an open, free and secure digital future for all”. With 2024 a record year for elections, as more than 2 billion voters go to the polls, many of the world’s leading AI companies have signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, a set of commitments to combat deepfake images, videos and audio of political candidates. Signatories include Amazon, Google, LinkedIn, Microsoft and TikTok.

Randstad, a multinational HR consultancy, published a position paper on using AI in the recruitment process and labour market. The consultancy is a member of the World Employment Confederation (WEC) and, as such, follows WEC principles regarding the ethical use of AI in recruitment and employment, placing a high value on the right to privacy, data protection and data governance: “While WEC principles state that those deploying AI systems remain at all times responsible and accountable for their use, the industry also believes that transparent and accountable governance frameworks should be in place.”

For employers using AI for HR purposes, the Randstad position paper offers three key pieces of advice. These are for employers to “understand how ethical AI can benefit both employers and workers, while being clear about potential negative consequences; develop company-wide guidelines for the responsible use of Generative AI tools with a focus on fairness and mitigating bias, trust and transparency, accountability, social benefit, privacy and security and human rights; [and to] ensure the combination of tech & touch by having human oversight at all times on the development and use of AI-driven technologies.”

Global financial services giant UBS is optimistic that increased regulation, and the stronger governance it demands of businesses, particularly in the US, is largely a positive for the industry. An October 2023 update from UBS’s Chief Investment Office states that “while overly onerous government regulation can be a drag on industry, we don’t believe the latest moves will stifle AI innovation.”

Instead, the update says that “standardised rules … can help to accelerate commercial adoption for disruptive tech”, as long as a balance is struck between growth and regulation to help businesses meet governance requirements: “On regulation, we think that establishing consensus rules of the road now, while adoption and capabilities are still at an early stage, has its own merits. The alternative, where very rapid growth is suddenly cut down by heavy regulation – examples include cryptocurrencies or China's e-commerce markets – can lead to potentially lasting damage.”

“We wouldn’t take this step up in AI regulatory scrutiny as a negative for the sector,” the UBS update concludes.

However, a panel discussion at UBS’s 2023 Private Company Showcase highlighted the possible governance implications of so-called black box AI solutions – AI systems whose internal workings are not visible to users – and cautioned that while black box applications “may fast-track efficiencies, [they bring] a lack of transparency for how some AI models arrive at their inputs.”

It was pointed out that using black box solutions could present “a major compliance risk”, especially in highly regulated industries, such as finance and insurance: “Businesses in those sectors will have to own their models as well as the risk – and be able to show effective model risk analysis and governance.”


AI governance trends

The increased use of AI solutions, along with the fast pace of technological change, has been a disruptor, requiring governance strategies to keep up and adapt. Global Corporate Governance Trends for 2024, a report published on the Harvard Law School Forum on Corporate Governance, said that innovations such as AI, particularly the proliferation of Generative AI tools such as ChatGPT, “coupled with growing information security and privacy threats are front of mind for business leaders and stakeholders.”

The report found that “more than 30 percent of the S&P 500 and roughly 17 percent of Russell 3000 companies addressed AI” in their 2023 proxy statements. The researchers expect this to grow in 2024, with more shareholder proposals tipped to emphasise AI governance, its impact on workforces and the ethical use of AI. Regulation such as the EU’s Digital Services Act, which affects how business is conducted with data captured via smart devices and what digital services companies operating in the EU can offer, will also inform AI governance trends. The report noted that “regulatory rollouts are expected to set new rules and limitations” for businesses, citing the example of Brazil, where companies need to establish “robust cybersecurity procedures and defences to thwart potential digital attacks and safeguard against data leaks.”

Writing for the International Association of Privacy Professionals (IAPP) website, member contributors Alexandra Schlight and Jevan Hutson highlighted the impact of more businesses, governments and individuals adopting innovative AI applications on a global scale. They agree that “more legislation and regulatory scrutiny around the uses of AI is expected” for 2024 and beyond, which will continue to put pressure on AI governance strategies. In particular, the EU AI Act comes into effect this year and is expected to influence similar laws in other countries.

Schlight and Hutson expect more state governments across the US to mandate data protection assessments for profiling and automated decision-making, and to pass state regulations on the use of AI in employment, covering notification when AI is used, bias analysis and rights to information. If such laws are passed, businesses will likely need to strengthen their AI governance protocols to use AI in recruitment ethically. Additionally, Schlight and Hutson expect more laws and enforcement actions to protect against AI-driven discrimination in credit scoring, insurance sales, advertising and access to services, reflecting an increased awareness of AI’s impact on society and the need to safeguard against potential harm.

The Randstad position paper highlights a trend among governments that are introducing AI legislation or strengthening existing laws: “Governments are already looking at how to introduce fit-for-purpose guardrails in the form of a regulatory framework to support and increase the positive aspects of AI application while mitigating potential risks.”

This trend is described in the paper as “important” because “good regulation ensures predictability and legal certainty, while promoting the responsible and sustainable development and application of new technology-enabled business models.”

Randstad urges governments and policymakers to keep up with AI trends and play their part in helping companies improve AI governance by creating “tools and instruments that facilitate the enforcement and compliance of such regulatory frameworks to ensure that a level playing field is preserved and protected, without introducing overbearing requirements for employers.”
