Should I be worried about AI Ethics? The answer is yes.
A brief overview of the 3 cornerstones of AI ethics
What are AI ethics?
Artificial intelligence (AI) is already changing the way we live, work and govern, often in ways we don’t even notice. From deciding what social media content we see to calculating how much we pay for car insurance, a significant portion of our lived experience is being shaped by AI.
Though the examples above may seem innocuous enough, underneath the surface lie numerous, profound ethical questions. It's already well established that social media algorithms tend to push content that is inflammatory and harmful. As for AI-generated pricing, these tools often rely on data sets that are incomplete and reflect historical bias. As a result, low-income and nonwhite drivers are more likely to pay more for insurance than their white counterparts with similar driving records.
As AI becomes more entrenched into society, many experts are sounding the alarm about its potential impact and emphasising the need for AI ethics.
As defined by the Alan Turing Institute, AI ethics is "a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies." The goal of AI ethics is to minimize the negative impact of AI on individuals and society as a whole.
Whatever your AI project, it is critical to adopt an ethics-first approach to AI development, not only to safeguard your company against undue risk and ensure regulatory compliance, but also to ensure you develop AI technologies that deliver real, long-term value.
To help you get started, below we’ve outlined 3 key components of AI ethics every organization needs to consider.
AI Bias
One of the more talked-about subsets of AI ethics, AI bias occurs when an AI algorithm produces results that reflect the implicit values of the humans who created it. Biases can be built into the algorithm itself or into the datasets from which it learns.
For example, the influential MIT/Microsoft paper, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," found that while commercially available facial analysis systems could accurately classify the gender of light-skinned men, error rates climbed as high as roughly 35% when it came to classifying dark-skinned women.
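The kind of disparity audit described above boils down to comparing a model's error rate across demographic subgroups rather than reporting one overall accuracy number. Here is a minimal sketch of that idea; the groups, labels, and predictions are purely illustrative, not data from the Gender Shades study:

```python
# Compute per-group misclassification rates from (group, actual, predicted)
# records. An overall accuracy figure can hide large gaps between groups.

def error_rate_by_group(records):
    """Return {group: fraction of misclassified records}."""
    totals, errors = {}, {}
    for group, actual, predicted in records:
        totals[group] = totals.get(group, 0) + 1
        if actual != predicted:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / n for g, n in totals.items()}

# Hypothetical classifier output for two subgroups:
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "male"),    # misclassification
    ("darker_female", "female", "female"),
]

print(error_rate_by_group(records))
# {'lighter_male': 0.0, 'darker_female': 0.5}
```

Even this toy example shows why disaggregated evaluation matters: the model is 75% accurate overall, yet half of one subgroup is misclassified.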
Environmental Impact
The environmental impact of AI and "Big Data" is nothing short of catastrophic. In fact, researchers have found that training a single big language model produces around 300,000 kg of carbon dioxide emissions, the equivalent of 25 round-trip flights between New York and Beijing. The bigger AI models get, and the more data and computing power they require, the larger their carbon footprint grows.
Though there are certainly ways to mitigate the environmental impact of AI models (e.g. shifting processing to a location with a greener power grid, using neural network-specific chips rather than GPUs, and using more efficient coding languages), measuring and reducing the environmental impact of AI projects can be an incredibly complex undertaking. Moreover, smaller, more efficient AI models can perform better than large, overly complex ones, making this pursuit not only a moral imperative but an economic one as well.
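A first-pass estimate of a training run's footprint typically multiplies energy drawn by datacenter overhead (PUE) and the local grid's carbon intensity, which is also why the mitigations above (greener grids, more efficient hardware) help. The sketch below uses purely illustrative numbers, not measurements from any real training run:

```python
# Back-of-the-envelope training-emissions estimate:
#   energy (kWh) x datacenter overhead (PUE) x grid carbon intensity.
# All inputs below are illustrative assumptions.

def training_emissions_kg(avg_power_kw, hours, pue, grid_kg_co2_per_kwh):
    """Estimated CO2 in kg for a single training run."""
    return avg_power_kw * hours * pue * grid_kg_co2_per_kwh

# Hypothetical run: 8 GPUs drawing ~0.3 kW each for two weeks,
# PUE of 1.5, grid emitting 0.4 kg CO2 per kWh.
kg = training_emissions_kg(
    avg_power_kw=8 * 0.3,
    hours=14 * 24,
    pue=1.5,
    grid_kg_co2_per_kwh=0.4,
)
print(round(kg, 1))  # 483.8
```

Note how moving the same run to a grid at 0.1 kg CO2/kWh would cut the estimate by 75%, which is the logic behind the "greener power grid" mitigation.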
Human Dignity & Wellbeing
There are few better examples of how AI can impact wellbeing than the recent Facebook scandal. For years, Facebook researchers knew its content recommendation algorithms "exploit the human brain's attraction to divisiveness," amplifying harmful content ranging from pro-anorexia tips to vaccine misinformation to hate speech. However, the company has allegedly refused to confront these issues and now finds itself mired in controversy.
Another example is facial recognition software. Though originally pioneered by law enforcement agencies with public safety in mind, it has now been widely adopted by authoritarian governments as an instrument of repression and societal control.
For most companies experimenting with AI, however, the negative impacts may not be so alarming or obvious. That does not mean there aren't ethical considerations to be made.
One area that is often overlooked is how AI impacts employee experience. Though most AI technologies are intended to enhance human decision making and behavior, many workers complain that workplace AI actually decreases human autonomy and freedom. As AI will not be able to replicate human intelligence and ingenuity any time soon, it's important that your human workers and their wellbeing remain your priority.
Additional Reading:
- Ethics of AI: Benefits and risks of artificial intelligence by Tiernan Ray
- Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford
- Are AI ethics teams doomed to be a facade? by Sage Lazzaro
- A Unified Framework of Five Principles for AI in Society by Luciano Floridi and Josh Cowls