According to research by McKinsey, AI has the potential to generate $4.4 trillion in annual economic value. Understandably, businesses are excited: 58% plan to increase their investment in the technology over the next year, and half already allocate up to 20% of their tech budget to it. However, 48% aren’t sure how they’ll optimise their use of the technology.
In many ways, that problem is an ethical one. 90% of organisations say they’re aware of instances in which AI systems have caused ethical issues internally; given that 56% of Americans won’t buy from unethical companies, getting the most out of AI requires companies to reassure consumers that their use of it is responsible. For that to happen, employees need to be educated in responsible AI practices.
Businesses face a significant educational barrier, with some 76% of IT professionals currently receiving either no support or only informal support with AI’s ethical issues. And with only 37–38% of employers recognising the need to give staff that support in the form of AI training, the few who do take the time to understand the ethical challenges and educate their teams will be poised to seize the revolution for all it’s worth. Considering the cost of hiring a new employee can be as much as seven times the cost of upskilling an existing one, that education in responsible AI use may even save money in the long run. The question is: how to offer it?
Only 43% of executives believe their company’s leadership team has sufficient AI skills and knowledge. That deficit isn’t going unnoticed, with 75% of workers saying their employer offers no straightforward guidance or policies for using AI at work.
The training to use AI responsibly starts at the top. It isn’t a box to tick once as an afterthought; to obtain the buy-in of an entire organisation, leaders must champion the training from the top down and embed its values into the organisation’s mission, strategy, operations, and company culture.
They can do this by first developing a comprehensive code of AI ethics – with input from all key stakeholders – that outlines the ethical principles and policies guiding the organisation’s AI usage and education. This can then form the basis of a vision statement detailing those principles for the whole company. To ensure it’s meaningful and easily understood, the statement ought to prioritise transparency and explainability, giving employees a clear example to follow when education commences.
Fortunately, leaders don’t need to start from scratch when creating the code on which to base their AI responsibility training. The training needs to cover a wide range of subjects, and by dovetailing with an existing framework companies can ensure none are missed – indeed, over 50% of companies endorse common principles of AI ethics. Developed by various organisations, governments, and trade bodies, options include: the European Commission’s Ethics Guidelines for Trustworthy AI, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, the OECD’s AI Principles, and AI4People's Ethical Framework for a Good AI Society. While each places its emphasis differently, all offer a sturdy foundation on which to base an education in responsible AI. An organisation should choose the framework best suited to its values and, when introducing it to employees, help them understand how it relates to their roles specifically.
The subjective nature of ethical questions means it’s important for leadership not to be dictatorial in its vision statement, as healthy debate and a diversity of opinion are essential components of ethical decision-making. To promote a culture in which ethics and responsibility are truly at the forefront of AI usage, leaders must encourage open discussion about ethical dilemmas and welcome feedback. An open dialogue encourages proactive contributions and a sense of personal responsibility – shown to increase workplace productivity by 25% – and will greatly improve engagement in any workplace training to use AI responsibly.
If determining what ethical, responsible AI looks like is a complex, collaborative effort between policymakers, tech professionals, ethicists, business leaders, and more, the education to deliver it needs to be cross-functional too. Training needs to cover a wide range of intersecting subjects; AI that’s responsible needs to be fair, transparent, accountable, secure, unbiased, and consensual, among other things. Delivering transparency might primarily be the domain of leadership and comms teams, while ensuring accountability might lie with developers, and security with data scientists.
Clearly, making AI responsible requires more than the input of developers alone. So, for training to be effective and well-structured, multiple teams need to help one another understand their respective areas of focus and gain a holistic, well-rounded understanding of responsible AI. Cross-functionality isn’t an added benefit; it’s a necessity.
Workshops ought to be curated in this cross-functional spirit too, with multiple teams in the room covering everything from foundational AI principles (what it is, how it works) to the aforementioned core ethics to how those ethics relate to different AI applications within the organisation. In particular, it’s important employees attend workshops covering bias (awareness, prevention, detection, and mitigation) and data (privacy, protection, minimisation, informed consent, and regulation) – both are among the top six things worrying the public about AI. Employees should also be taught the procedures for reporting concerns about irresponsible AI usage. A simple bias-detection exercise of the kind such a workshop might include is sketched below.
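To make the bias module concrete, here is a minimal sketch of the sort of hands-on exercise a workshop could use: comparing approval rates across groups and applying the common “four-fifths” rule of thumb to flag possible disparate impact. The group names, data, and 0.8 threshold are purely illustrative assumptions, not a prescribed method.

```python
# Illustrative only: a tiny bias-detection exercise on made-up approval decisions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

def shows_disparate_impact(rates, threshold=0.8):
    """Flag for review if any group's rate falls below `threshold` x the highest rate."""
    highest = max(rates.values())
    return any(rate < threshold * highest for rate in rates.values())

# Hypothetical decisions from an AI-assisted approval process.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
print(rates)                          # group_a ≈ 0.67, group_b ≈ 0.33
print(shows_disparate_impact(rates))  # True -> worth a closer look
```

In a workshop setting, the value of an exercise like this is less the code itself than the discussion it prompts: what counts as a “group”, whether the threshold is appropriate, and what should happen once a disparity is flagged.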
There are numerous ways to bring these subjects to life. For example, it’s crucial that workshops train employees to identify potential risks or instances of AI irresponsibility, and many teachers enliven this process through scenario-based exercises designed to develop decision-making strategies. By referring to real-life examples of past ethical dilemmas – and asking students how they’d respond – they can offer hands-on experience of the practical challenges involved in keeping AI responsible. Experiential learning of this nature results in knowledge-retention rates as high as 90%. Team-based exercises are similarly productive; asking students to decide together how they’d respond to a dilemma, for instance, is likely to encourage collaborative problem-solving.
Inviting experts to deliver certain training modules or to give company-wide presentations on responsible AI – and to lead subsequent Q&As – can further stimulate discussion and critical thinking. Staff could even be assigned these experts as mentors, to provide more tailored, one-to-one training. For developers specifically, hackathon-style events that challenge them to develop a solution to an instance of AI irresponsibility could be beneficial. And, as with more typical hackathons, having them present their workings and explain their struggles will educate others.
70% of students and 73% of educators maintain that education is better when delivered in person, but online training confers many benefits too – especially for businesses. For one thing, in-person training doesn’t scale well, whereas online training is more accessible, ensuring more widespread adoption. For another, online training enables students to learn asynchronously, at their own pace – ensuring they’re not overwhelmed or alienated.
The logistical issues involved in arranging in-person workshops also risk delaying training or making it more intermittent. And given that AI is a rapidly evolving technology, raising rapidly evolving ethical questions, another vital element of training staff to use the technology responsibly is fostering a culture of continuous learning and assessment, so that their skills remain current. Online courses can help but, beyond both the virtual and physical classroom, the workplace itself presents several opportunities for employees to educate each other on an ongoing basis. For example, forming cross-functional working groups to flag and assess ethical risks together can sustain the open dialogue about AI’s implications – and, with it, their education and engagement. Indeed, continuous learning is shown to increase employee engagement by 47%.
However, continuous learning begins much earlier. The training to use AI responsibly ought to be integrated into the onboarding for any AI system so that staff are familiar with the ethical implications of AI from day one. Then, once they’re introduced to the basics, they need to move on to more advanced courses and workshops. And after that they require regular refreshers of their AI responsibility training – either on a schedule or as new ethical developments arise – but, importantly, those new developments mean that training programmes need to be regularly reviewed and adapted.
Other strategies for fostering a culture of continuous learning include making learning sessions bite-sized, so that they fit more regularly into people’s schedules, and establishing mechanisms for employees to offer feedback on both the training and the organisation’s AI usage. Feedback on the former can spur the necessary process of reviewing and adapting educational programmes, while feedback on the latter can bring attention to new ethical dilemmas to cover in the training. Discussing those, cross-functionally, ensures employees are always engaged in the open dialogue that’s vital for ongoing education.
Getting your educational materials right is a process that’s specific to each workplace. While many have great success with the aforementioned scenario-based exercises – as well as with case studies and articles illustrating real-life AI responsibility dilemmas – it’s important to curate these materials so that they’re tailored to the audience. Cross-functionality can greatly improve learning outcomes, but it risks leaving certain departments feeling the material is irrelevant if examples concerning another department’s issues aren’t framed to show how they relate to everyone. When curating materials like case studies, it’s helpful to choose those with a broad field of reference – or, failing that, to consider how a teacher might reframe a more niche example to invite the perspectives, interpretations, and input of every department – so that training stays engaging and relevant.
Ongoing assessments are another essential element of an education in AI responsibility. To check that the education is effective – measuring students’ progress where it is, and adapting the programme where it isn’t – training programmes use various assessment materials, such as quizzes, peer reviews, and scenario-based evaluations. Alternatively, some organisations prefer to assess staff within the workplace, checking that their day-to-day work implements the responsible AI practices taught in the classroom.
According to a survey by Deloitte, 78% of organisations believe that the widespread proliferation of AI will require even more government regulation. Understanding that regulation is crucial to any serious education in AI responsibility. The guidelines that apply to the regions in which an organisation operates need to be included in educational materials and explained in detail, with staff taught how to comply with rules governing data collection, storage, and usage, as well as a plethora of other requirements.
The regulations also need to be made available for ongoing reference. However, with ever more regulation expected, it’s likely those guidelines will be out of date before long; in the spirit of fostering a culture of continuous learning, AI responsibility education should keep students aware of the newest guidelines so they can fully comply. To that end, educators can also point students towards regulatory experts to follow on social media, online networks where regulation is discussed, and industry publications, webinars, blogs, podcasts, and websites – like the site of the International Association of Privacy Professionals, which keeps workers informed on all things related to AI regulation.
81% of IT professionals believe it’s important for technologists to have and demonstrate “ethical credentials.” Numerous organisations, including the IEEE and AI4People, offer a range of practical and theoretical courses to certify that an employee is trained in responsible AI and qualified to enforce regulatory guidelines. Most such courses follow the model and principles outlined here, but the choice of a specific certification should be based on an employee’s role. Teal has collated a helpful list, identifying the certificates – led by various universities, initiatives, and educational institutions – best suited to data scientists, developers, policymakers, researchers, project managers, product managers, consultants, compliance teams, legal teams, HR professionals, and more.
Employees who receive formal training are almost 90% more likely than those who don’t to say AI will have an extremely positive impact on their productivity and efficiency at work. But training employees to use AI responsibly does more than improve efficiency; it empowers them to mitigate risks, identify bias, promote fairness, and protect privacy – preventing the AI misuse that can cause societal harm, jeopardise consumer trust, and damage profits.