Can Machines Be Ethical? A Conversation with Author Reid Blackman

Elliot Leavy | 10/13/2022

We sat down with Reid Blackman, AI Ethics Advisor and author of 'Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI' to discuss the convergence of legality and ethics, whether or not black box AI is a problem, and how he helps enterprises ensure that their ethical frameworks make sense. 

Elliot Leavy: Can machines be ethical?

Reid Blackman: What’s interesting about AI is that there are several decisions that designers, developers, coders, data scientists, data engineers and product owners make which have ethical impacts independently of the intentions of the makers or indeed the users themselves. 

One thing I like to contrast this with is something like a screwdriver. Is it an ethical tool? The ethics of the tool are completely inherited from the intentions of the user. If you want to build houses for the poor, that's great. But if you want to build concentration camps, that's obviously not great. There's nothing intrinsically ethical or unethical about the tool itself; it all depends on the user.

Yet I don't quite think this is the same with AI. There are loads of decisions that the designers make that influence how the tool works and what impact it has. All of the different elements involved in building these systems are going to have ethical impacts independently of what the user is trying to do.

So, someone might adopt an AI system for hiring people, but because of biases within that system, some people may be discriminated against. This was not the intention of the user of the software; in fact, it wasn't even the intention of the engineer who built it in the first place. So, I think that these systems can be ethical or unethical in the sense that various decisions are made that impact how the tool operates, and by extension the ethical impacts of the tool, regardless of the motives of the user.
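
To make the hiring example concrete, here is a minimal sketch of one common first check, the "four-fifths rule" used in US employment contexts, which flags a system when the selection rate for one group falls below 80% of the highest group's rate. The data and function names are hypothetical, purely for illustration.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the hire rate per group from (group, hired) pairs."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest; values below
    0.8 are commonly flagged under the 'four-fifths rule'."""
    return min(rates.values()) / max(rates.values())

# Toy data: nobody told the model to discriminate, but the
# outcomes can still skew by group.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)                          # A ~ 0.67, B ~ 0.33
print(disparate_impact_ratio(rates))  # 0.5, below the 0.8 threshold
```

The point is the one Blackman makes: no one in the pipeline intended the skew, and it only becomes visible if someone thinks to measure it.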

Elliot Leavy: So just how big is this problem?

Reid Blackman: I don't know, but it’s clearly quite significant, and there are a few things to say in this regard.

Number one, AI operates at scale; that's the whole point of it. You don't use AI if you're trying to figure out one problem for one individual; you use it because you want to automate things across hundreds of thousands of people. So the impact is always big, which means you don't have any tiny ethical screw-ups. It is always affecting people at scale.

The other thing to say is that the sources of the risks are embedded across many decisions and stem from everyone from data engineers to data scientists, to product owners, to executives. Such a mass of influences increases the probability of ethical risks occurring. But most of the people in that chain are completely unaware of the ethical impacts of their decisions.

The third thing to say is that people are looking out for this stuff more than ever before. So it's also a serious risk to the brand, and businesses have quickly become aware of that.

And then the last thing I'll say is that we have regulations coming down the pipeline, such as the EU AI Act and Canada's AI and Data Act. This means there is a growing, immediate need for multinationals to be responsive to these regulations, and they need to start doing these things now, because it will take them years to change their organization such that it can be compliant.

Because this is not just something you can tell your data scientists to do. If you're going to be compliant, and if you're going to be ready for the ethical risks, it means weaving an awareness of the ethical and regulatory risks not only into the technology side of your house, but also into the non-technology side, in terms of HR, risk, and compliance.

Elliot Leavy: A lot of our own research reveals that there is often a barrier in the form of leadership when it comes to AI adoption. How do you convince business leaders about actioning this?

Reid Blackman: I don't try to convince anyone. The people I work with already see that the writing is on the wall. They read about it, they see it on social media and in the news, and they often have junior people pushing them to do something about it internally as well. They want to innovate, and they don't want to put their brand at risk. That's why they come to me.

Elliot Leavy: Would we be having these ethical conversations without the incoming regulations?

Reid Blackman: Of course, that's why we have the push for regulation in the first place: ethics. Everyone's seeing the immense risks of moving forward without such considerations. If you imagine a Venn diagram of ethical risks and regulatory risks, there is some overlap. The whole push for AI ethical risk mitigation is the push to get those circles to overlap more, so that regulation covers more of the ethical risks it currently misses.

Elliot Leavy: How do you even begin to map out an ethical framework for clients? I personally believe that some ethics aren’t subjective, but that is just my own view. How do you map ethics into business?

Reid Blackman: There's a difference between ethical guardrails and ethical grey areas. The ethical guardrails are quite easy to agree upon in society today: anti-discrimination, non-violation of privacy, using explainable AI when appropriate, and so on. It's easy to get everyone on board with these ideals.

And then there can be grey area cases which will differ from organization to organization — and that’s okay. Just like we have ethical disagreements between individuals, we can allow reasonable ethical disagreements between organizations. Obviously, we don't want the organizations that are beyond the moral pale, just like we don't want individuals who are beyond the moral pale. That’s a good starting point when trying to frame the conversation.

Elliot Leavy: And then how do you build atop of that?

Reid Blackman: The first thing that I do is start with a seminar or workshop: something educational so that people can understand what it is we're talking about and make sure that we're on the same page. Getting people to understand the problem of bias in AI, the importance of explainability, the risk to brand, and so on. That's step one.

Step two is usually something like an ethics statement: at least a first pass at defining the standards of the organization.

And then from there we move onto something akin to a feasibility analysis. Here are your goals; let's examine your existing governance and find out how close or far away you are from those standards. What does your product lifecycle look like? What does your workflow look like right now? What does your governance of AI look like right now? What are your HR policies and risk policies and cybersecurity policies? What does procurement look like right now? Who is looking at the AI products you might be purchasing as a multinational and checking them for ethical and reputational risks right now? Because the start-up isn't; it just wants to get paid, and it isn't concerned with the long-term reputation of your organization. All of that stuff is the feasibility analysis.
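
Purely as an illustration of how that inventory might be organized (the categories paraphrase the interview; the specific questions are assumptions, not a template Blackman provides), a feasibility review can be captured as a simple structured checklist:

```python
# Hypothetical feasibility-review checklist; categories paraphrase the
# interview, and the individual questions are illustrative assumptions.
feasibility_review = {
    "product lifecycle": [
        "Where in the lifecycle are ethical risks identified and signed off?",
    ],
    "AI governance": [
        "Who approves a model for deployment, and against what standards?",
    ],
    "HR, risk, and cybersecurity policies": [
        "Do existing policies address AI systems at all?",
    ],
    "procurement": [
        "Who vets purchased AI products for ethical and reputational risk?",
    ],
}

# Flatten the checklist into a reviewable list of open questions.
for area, questions in feasibility_review.items():
    for question in questions:
        print(f"[{area}] {question}")
```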

Elliot Leavy: Are black boxes always unacceptable?

Reid Blackman: Eliminating black-box AI always comes at a cost. It might require a reduction in accuracy, or choosing a different learning algorithm; you might forfeit deep learning in exchange for something like linear regression. And then there is the resource cost as well.

But is this cost necessary if the risk isn't there? How high risk is the situation? Are we just tagging photos of our dogs? If so, that's pretty low risk and we probably don't need to care about explainability. I might need it in order to debug the system, but ethically speaking, it doesn't really matter. But if I'm diagnosing patients and they are being recommended treatment because of these systems, clearly we need to understand how they work, because the risk is life or death.
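
As a rough sketch of the trade-off being described (synthetic data and scikit-learn are assumed here; neither model choice comes from the interview), an interpretable linear model exposes its reasoning as one readable coefficient per feature, while a boosted ensemble may score better but offers no comparably direct readout:

```python
# Illustrative comparison of a 'glass box' and a 'black box' model
# on synthetic data. Assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glass_box = LogisticRegression().fit(X_tr, y_tr)          # readable weights
black_box = GradientBoostingClassifier().fit(X_tr, y_tr)  # opaque ensemble

print("linear accuracy:  ", glass_box.score(X_te, y_te))
print("ensemble accuracy:", black_box.score(X_te, y_te))

# The linear model is summarized by one weight per feature; there is no
# equally direct readout for the hundreds of trees inside the ensemble.
print("linear coefficients:", glass_box.coef_.round(2))
```

Whether any accuracy gap justifies the lost transparency is exactly the risk question raised above: acceptable for tagging dog photos, much harder to defend for diagnosing patients.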

That said, there may be high-risk situations where a black box is still okay. Maybe I run an investment advisory firm and I tell you that I've got this AI that performs 50% better than the best human does on historical data. ‘It works,’ I tell you, ‘but, cards on the table, it's a black box and we don't really understand why it works.’ In this scenario, there's still informed consent, so it would be ethically reasonable for the black box system to exist.

Elliot Leavy: A question I always ask is about demographic differences in ethics. So, Gen Z is repeatedly shown to care less about privacy than older generations.

Reid Blackman: So there are a couple of things here. First of all, Gen Z are against violations of privacy; they just have a different view of what constitutes a violation of privacy.

The second thing to say is that the things Gen Z are saying now, millennials were saying a decade or so ago. What happened was that the demographic got older, had kids, got jobs, and realized they didn't like their bosses snooping on their home lives so much. I think the same realization will come to Gen Z; I don't see this as an unalterable feature of a generation.

And finally, there's the whole issue that businesses rarely target just one group. More often than not they sell to millennials, Gen X, baby boomers and so forth. What this demands is a baseline of practice that is respectful across all of those demographics, as opposed to trying to tweak it for every age group.

Elliot Leavy: That's sort of what has happened with the incoming regulation, isn't it? Once one market brings it in, it almost makes sense to just standardize that across all markets.

Reid Blackman: Exactly. It's the overlap on that Venn diagram where legal risks and ethical risks meet, so it makes sense to act on both and, ultimately, to work towards increasing that overlap.
