Training & Development in Responsible AI

In almost any organisation, the implementation of AI and Generative AI solutions will involve a period of transition – and the need for ongoing training and development. From overcoming employee scepticism to refresher training when rules and regulations change or technology advances, a culture of continuous improvement – with responsibility and ethics at its heart – is crucial for organisations serious about AI implementation.

Tess Buckley, Programme Manager – Digital Ethics and AI Safety, techUK, the UK's technology trade association, says ensuring users are informed and comfortable with the introduction of AI solutions is an important starting point for trust, which supports the adoption of these systems.

“Like any new technology, we’re going to see some discomfort and a transition period, but I’m not of the belief that AI will replace people,” she says. Instead, UK businesses are discovering and applying the appropriate use cases that will augment the work already being performed, supporting employees and building their trust in the new applications.

“Organisations need to find ways to improve confidence in users because we might make the best technology in the world, but if people don’t trust it, they won’t take it on,” she says. “So, this is where integrating AI assurance mechanisms can really help. Applying assurance mechanisms, such as bias audits, risk management frameworks or red-teaming exercises, allows for evidenced action that companies are actively mitigating potential risks while building justified trust.”

Martin Woodward, Director Global Legal – Head of Tech Legal, Randstad, a multinational talent company, reiterates the importance of building trust. He says it is an integral part of the company’s AI governance programme for its clients and job candidates, describing trust as “the licence to operate for a company like Randstad.”

“We want to make sure that the tools we use – such as AI systems – reinforce this relationship of trust,” Woodward continues. “That is why we have developed specific training on a variety of aspects surrounding AI, such as ones that discuss the incredible business opportunities and risks surrounding the use of generative AI, as well as dedicated compliance training and training on AI prompting.”

Encouragingly, acceptance of the positive impact of AI is growing. PwC’s Global Workforce Hopes and Fears Survey 2024 found that more than 80% of workers who use Generative AI daily expect it will make their time at work “more efficient in the next 12 months”.

Responsible AI training around the world

While neither the UK government nor the EU AI Act has specifically legislated for mandatory AI training, ensuring employees have access to up-to-date training and development is becoming increasingly important at an official level.

In the UK, it remains to be seen whether a change of government will mean a departure from, or a continuation of, the regulatory path set out by the Department for Science, Innovation and Technology. In April 2024, 13 regulators submitted their responses to the 2023 AI Regulation White Paper.

Some regulators cited the importance of training in their responses. The Bank of England, for example, has a mandatory curriculum to teach supervisors the fundamentals of AI, as well as optional courses on AI technology and use cases, as it grows its AI training culture. The Information Commissioner’s Office plans to develop an internal data literacy initiative to boost data and analytics skills, along with a suite of AI training and resources to ensure AI adoption at the office “aligns with our regulatory expectations of others.”

Asavin Wattanajantra, SMB expert, Sage, explained in an article that the EU AI Act “does not explicitly mandate specific educational or training requirements … [but] it does imply a need for adequate knowledge and understanding of AI technologies for those who develop, deploy and use them – especially in high-risk scenarios.”

Against this implied legislative backdrop, ensuring employees are well informed about the content and implications of the EU AI Act is important, and regular training can help build compliance-focused organisations. The business case for compliance- and responsibility-focused training is strong, as Buckley explains.

“The EU AI Act is hard law coming down the pipe from our neighbours, and we have regulators, such as the Financial Conduct Authority and the Competition and Markets Authority, that will come knocking and ask for some form of compliance,” she says. “You want to train your employees to use AI appropriately, because if there is inappropriate use, such as leaking data, whether by accident or not, investors might pull out – they don’t want to be placing their capital in companies that are faced with scandal – and companies don’t want a scandal either. Using AI technologies irresponsibly is seen as a portfolio risk for investors.”

Around the world, governments and joint projects are prioritising AI training to benefit businesses. In Finland, Elements of AI, a free AI training course open to everyone, has been launched as a government-backed initiative in conjunction with the Finnish Center for Artificial Intelligence. Canada’s CIFAR has launched the Pan-Canadian AI Strategy, which focuses on investing in AI research and development via three national AI research centres and forms part of the country’s plan to attract and retain AI talent. Egypt’s Ministry of Communications and Information Technology has made developing technical AI skills a priority, alongside non-technical skills such as problem-solving, analysis and creativity, through its Digital Egypt strategy.

The practicalities of ethical training and development

It is vital to implement practical training and development strategies that centre on building trust among users, while moving away from the idea that the pace of technological change is so fast that companies won’t be able to keep up.

“Similar to a regulatory sandbox, where firms can test new innovations under the supervision of a regulator, an isolated space is where developers can test and debug software before deploying it to a live system. Companies can create internal spaces for responsible innovation, or sandboxes, which allow them to determine what good looks like without the associated risks,” Buckley advises. “So, companies can – and do – use AI tools internally to figure out the best responsible use case before applying them externally.”

Ensuring representatives from all departments are present at AI training or upskilling sessions is important because of the different use cases across organisations, with Buckley advocating the idea of “creating curious corners”: “Get your technical teams to sit down and translate what they are building, developing and coding to non-technical individuals. Then get those non-technical individuals to share what they have learned about AI with your communications and marketing teams. We need more translation.”

“We aim to tailor the type of training to the subject matter at hand – for example, more generic or introductory trainings are often delivered through online, self-paced learning, which in some cases might be mandatory,” says Woodward. “On the other hand, when training people on specific elements of ethical and responsible AI – or some of the legal and compliance aspects of AI – very few methods of training beat old-fashioned, interactive face-to-face delivery.”

Companies can build their own AI solutions or adopt pre-built ones. Building in-house allows control over data and customisation, but requires investment and talent. Pre-built options, meanwhile, are quicker to deploy but may offer less control. Whichever approach is chosen, companies need to understand the ethical implications of the AI they use.

For companies building their own AI solutions, Buckley says the benefits include “increased confidentiality in data security, competitive advantage in having proprietary technology and building the model to their use case, customised to meet unique business needs.”

Training for these companies, alongside any large language model, should include “technical and socio-technical evaluations”, according to Buckley.

“For UK-based businesses, it will be important to understand the five ethical principles that underpin the UK’s white paper and how these principles are being operationalised through assurance mechanisms and standards,” she says. These are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Refresher training is important too. Buckley says the narrative that technology moves too fast for organisations to keep up, or that it is something to be attacked and reined in, is “not necessarily helpful”. She advocates iterative model evaluations and refresher training quarterly or twice a year, depending on the organisation, as well as creating a mechanism for monitoring AI policy and governance. This is because “approaches to AI governance would impact the way that organisations can build and use AI, so they might have to revisit training or guidelines of use.”

For Randstad, Woodward says that while the company’s adoption of AI is not a recent development, its widespread adoption is, so comprehensive refresher training is part of its plans going forward. Indeed, since the advent of Generative AI and more dedicated AI laws, Randstad has already updated its AI principles from the original version, written in 2019.

Embedding ethics and responsibility

With AI solutions becoming more widespread, a culture of ethical use and responsibility is imperative. Training and development will be vital to ensure this culture becomes the norm across organisations. Buckley emphasises the importance of not neglecting ethics and responsibility when training technical staff, who can then impart this culture across non-technical and support teams.

“We need [technical staff] to embed ethics, as they are close to the systems, at the coalface – we call it ethics by design – and they would benefit from an understanding of AI assurance mechanisms,” says Buckley.

The UK government’s Introduction to AI Assurance guidance, published in February 2024, explains that AI assurance “measures, evaluates and communicates the trustworthiness of AI systems.” The guidance advises employers to upskill within their organisations, even if they are in the early stages of introducing AI ecosystems, including developing an “understanding of AI assurance” and “anticipating likely future requirements.”

“We’re starting to see a push in universities to include ethics modules in computer science degrees. This is crucial because in industry, efforts are being made to bridge the gap between ethical principles and practical implementation. However, there is still a need for more progress in this area,” Buckley concludes.

The PwC Global Workforce Hopes and Fears Survey 2024 recommends “transformative leadership helmed by those who can challenge the status quo in a way that inspires and empowers others to embrace change.” In an era of change, leaders of organisations can create cultures where the positive benefits of AI are introduced responsibly and ethically to resilient employees who are empowered to navigate exciting new technologies.
