According to a survey by Deloitte, 57% of tech and business professionals believe cognitive technologies, such as generative AI, pose serious ethical risks. Yet 56% say either that no ethical principles guide their companies’ use of AI or that they don’t know whether any do.
But inferring a sinister or careless attitude from such statistics does a disservice to an industry in which 90% of the decision-makers believe companies need to implement AI ethics policies and 69% welcome active government regulation. Moreover, a study by Intel suggests that 70% of AI adopters – which now account for 72% of organisations globally – already give their technologists ethical training and 63% refer to ethics committees for regular reviews of their use of AI. So, why do those 56% of tech professionals perceive an action-intention gap?
Speed might be the answer. If unpacking the ethical issues posed by AI weren’t tricky enough, the technology is developing at a rate that vastly outpaces the speed of most people’s understanding. By the time they’re aware of one issue, a new development alters, augments, or outright replaces that issue with another. Thus, despite companies’ best efforts to act, many are still perceived to be falling short. The solution? To stop treating AI ethics as an issue in isolation and start taking a holistic view that integrates the topic into daily business operations.
Among other things, the overwhelming popularity of ChatGPT – and users’ reliance on the tool for countless tasks it’s arguably ill-suited to – demonstrates that while many are aware of AI, comparatively few have understood or explored the technology beyond a basic level. That’s an issue because it increases demand for AI, which accelerates its development before people are aware of the ethical risks. And without a public dialogue about those risks, tech leaders, business leaders, data scientists, and data engineers can’t come to a consensus on what exactly the ethical adoption of AI looks like.
The fact that a consensus is now emerging around AI’s longer-standing ethical issues – misinformation, copyright infringement, deepfakes, etc. – attests to the power of awareness and dialogue. 70% of Americans are concerned about AI spreading misinformation and 68% about the creation of deepfakes. Consequently, solutions are being proposed, such as state and federal legislation to regulate deepfakes through legal consequences, the compulsory disclosure of AI use so that people can identify manipulated content, and more. However, with the forethought to raise awareness earlier, those solutions could’ve been proposed, debated, amended, and implemented long before the issues caused so many problems.
While some of AI’s ethical issues are raised hypothetically (for instance, 64% of Americans are already concerned about the possibility of AI functioning independently of humans), most are raised only after the fact – as with AI misinformation and deepfakes in the wake of numerous election threats, or copyright infringement following mass intellectual property theft. So, currently, AI itself raises awareness of ethical questions by causing problems, rather than the industry doing so before AI has a chance to. Worse, the problems are often addressed just as sluggishly.
Rather than exhaustively listing the myriad ethical issues AI might pose, it may be more effective to identify how and why past issues were allowed to fester to the point that the public took notice. Understanding the commonalities in that process shows how best to prevent it recurring for any issue, regardless of that issue’s nuances. This provides a framework for tackling not just AI’s current ethical issues but those yet to emerge – expediting interventions.
Frequently the problem is one of awareness and, therefore, transparency. So, what’s the best way to maintain a transparent, open, and solutions-centric dialogue about AI’s emerging ethical issues, from as early a point as possible?
Firstly, organisations need to be proactive rather than reactive. It’s their responsibility to initiate this awareness-raising dialogue, so that they can educate users and policymakers can develop regulation. This transparency often begins with a gesture as simple as a company publicly issuing a vision statement detailing its AI ethical principles, how it leverages AI, and how that connects with the company’s mission – in a spirit that welcomes feedback. This is likely to appeal to the 58% of consumers who want companies to be clear when they’re using AI and the 60% currently concerned by how AI is used (it might also regain the faith of the 65% of Americans who don’t trust the companies building and selling AI tools).
To make consumers, stakeholders, and regulators truly aware of the meaning of that vision statement, it ought to make the complex ideas behind AI explainable. With a view to doing that, AI models should be designed to be interpretable and to provide explanations for their decisions and outputs, enabling stakeholders to understand the reasoning behind them, so that when ethical issues do arise the cause is identifiable and resolvable.
That the widespread lack of explainability underpins many of AI’s current ethical issues suggests redressing it will ease many of AI’s future issues too. Because developers are often unable to precisely explain why particular outputs occur, it’s been difficult to determine whether, and in what way, outputs might’ve infringed on a copyrighted work, slowing the resolution of the issue by reducing AI’s accountability. Laying the right foundations for establishing that accountability is a fundamental aspect of quickly understanding and mitigating all AI ethical issues. A problem can’t truly be resolved without awareness of its root cause.
To deliver explainability organisations can start by identifying the specific developers who oversee the explainability requirements of an AI decision system and assigning them ultimate responsibility for it. In turn, those developers have a range of methods at their disposal, such as identifying which inputs influence an AI’s decision-making process the most, documenting that process (with explanations of the outcomes), and using model interpretability techniques like LIME (Local Interpretable Model-Agnostic Explanations) or SHAP (Shapley Additive Explanations) to explain individual predictions. They can also select algorithms designed to provide clear reasoning for their outputs, such as decision trees, rule-based systems, or specific explainable deep learning techniques. Together, these methods enable them to explain how AI works and, in doing so, meaningfully raise awareness of any ethical questions.
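To make that concrete, here’s a minimal sketch of the kind of individual-prediction explanation such methods produce, using LIME with a synthetic dataset and a scikit-learn classifier; the model, feature names, and class labels are all illustrative stand-ins rather than any real decision system.

```python
# A minimal sketch of explaining one prediction with LIME, using synthetic
# data and a scikit-learn classifier purely for illustration; all names here
# (features, class labels) are stand-ins, not a real decision system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Stand-in for an AI decision system (e.g. approve / decline).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME fits a simple local surrogate around a single instance, so the
# weights below show which inputs pushed this particular decision.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["decline", "approve"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=6)

# Log the per-feature contributions alongside the decision for later review.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Output like this can feed directly into the documentation and review processes described above, so that each individual decision leaves an auditable trace.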
In the banking sector, the black-box nature of some AI algorithms is particularly problematic. Techniques like deep neural nets offer limited explainability, which is an issue when assessing high-risk credit applications – not only for banks but for financial regulators, which require transparency in how AI models make decisions on credit scoring, fraud prevention, and more. So, both for its own benefit and for the sake of compliance, J.P. Morgan uses explainable AI (XAI) to explain credit risk models to internal auditors and external regulators. These models predict the likelihood of a borrower defaulting on a loan, playing a crucial role in the bank’s decision on whether to approve a loan and at what interest rate. By applying XAI techniques, the bank has been able to open the ‘black box’ of these AI-driven decisions and offer understandable explanations of its credit risk models – raising awareness of the ethics, increasing trust, and enabling their optimisation and adaptation to changing market conditions.
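As a generic illustration of the other route mentioned earlier – inherently interpretable models – the sketch below trains a shallow decision tree on invented credit-style data and prints its full rule set. It is not J.P. Morgan’s actual approach, and the data and feature names are hypothetical.

```python
# A hedged, generic sketch (not any bank's real model) of an inherently
# interpretable credit-risk classifier whose decision rules can be printed
# in full for auditors and regulators; the data and feature names are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["income", "debt_to_income", "missed_payments", "loan_amount"]
X = rng.normal(size=(1000, len(feature_names)))
# Toy rule: a high debt ratio plus missed payments raises default risk.
y = ((X[:, 1] + X[:, 2]) > 1.0).astype(int)

# A shallow tree trades some accuracy for rules a human can read end to end.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders every decision path, e.g. for inclusion in an audit file.
print(export_text(model, feature_names=feature_names))
```

The design choice here is deliberate: where the stakes are high and regulators demand transparency, a slightly less accurate but fully legible model can be easier to defend than a black box explained after the fact.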
Explainability is also necessary for another equally vital aspect of preventing ethical issues: human oversight. Only 1% of people would trust AI to make significant workplace decisions, underscoring the demand for the human touch. And if AI outputs aren’t explainable to humans, humans can’t meaningfully oversee them. Misinformation, deepfakes, and copyright infringement all proliferated in part because of a lack of proportionate human oversight.
Consequently, human oversight serves as the bedrock of any responsible AI ethical policy – mitigating biases, fostering alignment with societal values, and building public trust. To implement that oversight, organisations can ensure all employees know the rules governing their AI systems and understand how the AI works, what it does, and its purpose.
The more human oversight there is, the quicker ethical issues are detected and resolved. To that end, organisations can: train staff, provide access to expert help, set up escalation procedures for handling issues, offer tools to visualise and explain AI outputs, regularly test AI systems for evidence of bias or other unwanted outcomes, set tracking metrics and performance goals, implement procedures to override or stop AI systems, provide tools to adjust parameters or retrain systems, regularly review and update oversight processes, incorporate feedback, stay updated on regulatory changes, and conduct regular audits – by either internal or external AI ethics committees.
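As one concrete example of such a tracking metric, the hedged sketch below computes a simple demographic-parity gap from a hypothetical decision log and escalates when it exceeds an illustrative threshold; the column names and the 0.2 trigger are assumptions, not a standard.

```python
# A minimal sketch of one oversight metric: comparing a model's favourable-
# outcome rate across groups (a demographic-parity gap). The threshold and
# column names are illustrative assumptions, not an established standard.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical audit log of automated decisions (1 = favourable outcome).
decisions = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "A"],
    "outcome": [1,   1,   0,   0,   0,   1,   0,   1],
})

gap = demographic_parity_gap(decisions, "group", "outcome")
ALERT_THRESHOLD = 0.2  # illustrative trigger for the escalation procedure
if gap > ALERT_THRESHOLD:
    print(f"Escalate for human review: parity gap = {gap:.2f}")
else:
    print(f"Within tolerance: parity gap = {gap:.2f}")
```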
With transparency and human scrutiny, accountability follows: organisations are able to identify who bears responsibility and liability for any AI-induced harm or unethical decisions. And that’s essential because, without repercussions for such errors, those errors are more prone to repeat, delaying their resolution and reducing trust in AI systems. In contrast, reasonable repercussions increase trust and promote the development of safer, more reliable technology.
To streamline the hiring process, Amazon developed an AI tool capable of screening CVs and identifying top talent. But the tool, trained on CVs submitted over the previous decade, inadvertently reinforced the gender biases of the male-dominated tech industry, favouring male candidates and penalising terms associated with women. While the issue highlights the importance of careful design and diverse data in preventing ethical issues, it also highlights the role of human oversight, awareness, intervention, and continuous improvement – because without team members flagging the issue, it might never have been detected and fixed. Amazon did exactly that, dismantling the system and rebuilding it with more equitable, inclusive, and fair tools, which have since led to more diverse hiring outcomes.
User data and privacy
90% of organisations agree that data protection and privacy are important for delivering trustworthy and accountable AI. Indeed, the failure to protect user data and privacy – and to integrate that protection across business operations – plays a pivotal role in the ethical failures of misinformation, deepfakes, and copyright infringement. The improper or exploitative use of private data also ingrains bias, discrimination, and cyber-security threats – and will likely fuel a plethora of future issues.
So, it’s another fundamental. Strategies for protecting user data and using it ethically include equipping AI systems with strong security measures to prevent unauthorised access to personal data (such as encryption, access controls, and secure data storage). Organisations overseeing AI systems should also be transparent about what personal data they collect, how it’s used, and who it’s shared with, so that people can give their informed consent (or even flag practices that aren’t compliant with regulation). People need clear opt-out mechanisms too, to withdraw or delete their data. As for the data itself, minimisation (collecting only what’s necessary), anonymisation (removing individuals’ personal identifying information), and pseudonymisation (replacing identifying information with an artificial ID) can help protect privacy and reduce the impact of breaches by ensuring data can’t be traced back to a specific individual during AI analysis. There are business benefits too, with 54% of people willing to share personal data to improve AI products and services if it’s anonymised.
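For illustration, the sketch below shows minimisation and pseudonymisation applied to a single record before it enters an AI pipeline; the salted-hash approach, field names, and secret handling are simplifying assumptions, and a real deployment would still need encryption at rest and proper key management.

```python
# A minimal sketch of data minimisation plus pseudonymisation before records
# reach an AI pipeline. The salted-hash approach and field names are
# illustrative; real deployments also need key management and encryption.
import hashlib
import hmac

SECRET_SALT = b"rotate-me-and-store-me-in-a-secrets-manager"  # placeholder

def pseudonymise(user_id: str) -> str:
    """Replace a direct identifier with a stable artificial ID."""
    return hmac.new(SECRET_SALT, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def minimise(record: dict) -> dict:
    """Keep only the fields the model actually needs, with the ID pseudonymised."""
    return {
        "user_ref": pseudonymise(record["email"]),  # no raw email downstream
        "age_band": record["age"] // 10 * 10,       # coarsen rather than copy the raw value
        "activity_score": record["activity_score"],
    }

raw = {"email": "jane@example.com", "age": 34, "activity_score": 0.87, "address": "..."}
print(minimise(raw))  # the address and raw email never enter the AI pipeline
```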
The aim behind all this is to integrate ethical AI data practices into daily operations, so that an organisation remains continually aware and stays on top of privacy on an ongoing basis, instead of pursuing it retroactively after a breach. For instance, setting a clear limit on how long data can be retained prevents the unnecessary long-term accumulation of personal information by forcing organisations to regularly purge outdated, irrelevant data – minimising the quantity of data at risk and reducing the likelihood of exposure before a breach can happen. Again, with ethical issues, prevention is the best medicine.
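A retention limit like that can be enforced with something as simple as the following sketch, where the 180-day window is an illustrative policy choice rather than a regulatory figure.

```python
# A minimal sketch of enforcing a retention limit: records older than the
# window are purged before they can accumulate. The 180-day window is an
# illustrative policy choice, not a regulatory requirement.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)

def purge_expired(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Drop any record whose 'collected_at' timestamp is past the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]

records = [
    {"user_ref": "a1b2", "collected_at": datetime.now(timezone.utc) - timedelta(days=30)},
    {"user_ref": "c3d4", "collected_at": datetime.now(timezone.utc) - timedelta(days=400)},
]
print(purge_expired(records))  # only the 30-day-old record survives
```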
Other useful strategies include: mapping data collection points for comprehensive oversight, conducting regular data audits, using privacy-focused AI tools, and making employees aware of privacy regulations, like GDPR and CCPA.
While the onus is on organisations to protect the data and privacy of their users, those protections must comply with data privacy laws too. To guarantee that they do, organisations can conduct privacy impact assessments to evaluate a given AI tool’s potential data privacy risks and to test whether it meets regulatory standards.
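As a rough illustration of what such an assessment might record, the sketch below defines a minimal data structure loosely modelled on common DPIA/PIA templates; the fields are assumptions, not any regulator’s official form.

```python
# An illustrative structure for recording a privacy impact assessment per AI
# tool; the fields are assumptions loosely based on common DPIA/PIA templates.
from dataclasses import dataclass, field

@dataclass
class PrivacyImpactAssessment:
    tool_name: str
    data_categories: list[str]        # what personal data the tool touches
    lawful_basis: str                 # e.g. consent, contract, legitimate interest
    retention_period_days: int
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    meets_regulatory_standards: bool = False

pia = PrivacyImpactAssessment(
    tool_name="cv-screening-model",
    data_categories=["employment history", "education"],
    lawful_basis="legitimate interest",
    retention_period_days=180,
    identified_risks=["indirect inference of gender from free text"],
    mitigations=["strip proxy terms", "quarterly bias audit"],
)
print(pia)
```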
That said, regularly monitoring and auditing an AI system to identify any ethical or compliance issues isn’t sufficient if an organisation isn’t aware of the latest regulatory changes. The regulations organisations need to comply with are ever-changing, so it’s useful to hire a trusted third-party partner dedicated to understanding regulatory changes and monitoring organisations’ data flows to maintain compliance in real-time. Either that or an internal employee focused on compliance – who’s ideally incorporated into the leadership team, so compliance is taken seriously throughout the organisation. Both can be aided by software that deploys AI-driven algorithms to constantly monitor AI data practices and notify users of upcoming regulatory changes.
For everyone else, keeping abreast of regulatory changes is best done by following privacy experts on social media, joining networks where the topic’s discussed, and referring to regulators and industry bodies like the FTC and the International Association of Privacy Professionals, whose website – featuring a daily dashboard and regional digests – informs the industry of all things related to AI data ethics.
Recognising that complying with regional variations of AI regulations posed a significant challenge for multinational organisations, EY invested in cultivating a sense of shared responsibility for compliance across its global network – but coordinated centrally by its risk management team, to drive a unified vision. In that spirit, EY leveraged the experience of its public policy team to distil complex regulations into actionable guidance. “Even with the most robust frameworks and models in place, our success depends on winning the hearts and minds of EY people,” said Yvonne Zhu.
To foster the cultural shift necessary for this extensive cross-functional coordination, EY also established transparent AI ethical principles and funded targeted training programmes that fostered a proactive, awareness-based approach to making AI ethical and compliant. They also collaborated with policymakers and regulators, often participating in international forums on AI policy to enable the company to be aware of future compliance changes and adapt quickly. Their proactive stance is already paying dividends, elevating EY’s role as a trusted AI advisor to its external clients as they too navigate an evolving, fragmented regulatory landscape.
As an insider at EY said, “AI regulation will continue to evolve – but adherence to our principles remains constant.” If you begin on a similarly aware, principled, proactive, holistic foot, the ethical questions posed by AI in the future won’t catch you off guard – you’ll already be prepared.