The Rise of Algorithmic Auditing

Last week, we sat down with one of the founders of Holistic.AI, a London-based startup that aims to be the barometer of both functional and ethical algorithmic quality in artificial intelligence, through the burgeoning world of algorithm auditing.

Elliot Leavy: So, what was the wider context in which Holistic.AI came to be?

Emre Kazim: After my PhD in Philosophy, I wrote an article about how digital technologies were disrupting the structures of the state, and about how we needed to re-evaluate the social contract following the rise of technologies like AI and blockchain. That led me to begin working with others who were concerned with what we called digital ethics at the time.

The world had just been made aware of Cambridge Analytica's involvement in elections, there were some high-profile algorithmic bias stories in tech, and in the United States there were stories about algorithms in the criminal justice system deciding on people's actual freedom. So, there was a lot going on.

And the engineers understood this was something that required more than just themselves, so they needed to bring in sociologists, philosophers, lawyers and the like to help solve this growing problem.

Elliot Leavy: And that developed into Holistic.AI?

Emre Kazim: Not for at least another two years! But then along came the national AI certification service, which was attached to a lot of big government and industry bodies as well as academia, and the idea of algorithmic auditing spiraled out of there.

Elliot Leavy: So what is the purpose of algorithmic auditing?

Emre Kazim: To create a public standard that can facilitate trust and help answer the question of whether AI algorithms should be regulated. We believe they should be.

Elliot Leavy: Is that debate settled?

Emre Kazim: Basically, yes. The debate now is mostly about which regulations will be brought in and why, rather than whether there will be any at all. Recently we've seen huge interventions such as the EU AI Act, and lots of regulatory activity in the UK and the United States.

Elliot Leavy: How do you audit an algorithm then?

Emre Kazim: Adriano Soares Koshiyama [Holistic.AI's Chief Executive Officer], the lead author on this work, put together an auditing framework that looks at two different kinds of risk. One is technical; the other is non-technical, more governance-focussed risk.

Technical risks are things such as explainability – looking at how much of a black box problem the algorithm has and whether it can explain how it comes to its conclusions. Then we look at implicit biases that affect demographics differently, which there is a lot of focus on right now. After that we look at how it handles privacy, and finally we move on to the robustness of the algorithm – is it reproducible, and so on. This is all done via stress testing.
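To make the bias check concrete, the sketch below shows one kind of metric an auditor might compute when stress testing for demographic disparities: the disparate impact ratio, compared against the common "four-fifths" rule of thumb. The data, function names and 0.8 threshold here are illustrative assumptions for this article, not Holistic.AI's actual tooling or methodology.

```python
# A minimal, illustrative bias check: compare positive-outcome rates
# between two demographic groups and flag a low disparate impact ratio.
# Data and threshold are hypothetical, chosen only for this example.

def selection_rate(outcomes):
    """Fraction of positive outcomes (1 = approved/hired, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a = selection_rate(group_a)
    rate_b = selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions for two groups of applicants
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb used in some regulatory contexts
    print("Potential adverse impact flagged for further review")
```

In practice an audit would look at many such metrics across the explainability, bias, privacy and robustness dimensions Kazim describes, rather than a single ratio.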

Elliot Leavy: Then what about the non-technical risks?

Emre Kazim: This comes down to looking at the algorithms in terms of incoming regulations – are they future-proof, for example – and looking at how much command and control these companies have over the algorithms: are there reasonable governance structures in place, and, at the end of the day, will these algorithms be bad for the company's overall reputation?

Elliot Leavy: So, what’s the benchmark here? Judges are renowned for giving harsher sentences before lunchtime. How do you mitigate that sort of human bias when it's humans who make the algorithms in the first place?

Emre Kazim: It’s a good point, and it is something that comes up time and time again when you do aggregate analysis of such things. There are all sorts of biases that we don’t even realise we have – no judge is going to say that he sent someone to prison because they were Muslim; he’ll find a compelling reason to convince himself that it’s the right thing to do. But we are always saying to people: if your benchmark is human performance, then you might as well go for an algorithm and then make that algorithm the best it can be.
