In this interview, Olivia Gambelin discusses her journey with Responsible AI, tracing her roots back to when it was still known as AI ethics and navigating the challenges of an emerging industry. She shares insights into her successes, including the recent publication of her book on Responsible AI and the growth of her company, Ethical Intelligence. Olivia also offers valuable advice to organizations on prioritizing people and processes over technology in their AI efforts and reflects on the impact of the EU AI Act on her work.
Could you summarise your journey with Responsible AI thus far?
"So I started out in Responsible AI back when it was still called AI ethics and it was making the jump out of academia into industry. So back then, with responsible for what was known as AI ethics it was is still a brand-new industry. There was really no clear path towards any direction in it. It was kind of like, AI ethics and Responsible AI was even more of a wild west because not only did we have to understand AI and how companies were implementing or building it, we also had to understand the ethical implications, how to embed human values, what constituted a responsible practice. We were quite literally building the plane while we were flying it of trying to build responsible practices and just general standards in AI at the same time. So, I was one of the first movers. Sometimes I joke that it's just completely out of being stubborn that I stuck with this industry. There's a bit of an old guard of us now that have weathered the storm of this changing industry. I've seen it through many different phases and life cycles and different emphasis, and so it has been really interesting to see since the generative AI boom that Responsible AI, the term itself, has been solidified as well as emerged to the forefront as something really important. Before the generative AI boom, it was still like is it AI ethics? Is it Responsible AI? Is it AI risk and safety? Is it AI governance? Is it trust and safety? Trustworthy AI? Responsible AI then became this field that all of a sudden wasn't just top of mind, but like top of priority for boards and companies. I’ve been through gauntlet of understanding where this field has gone and seeing it's many different phases and cutting my own path as well as watching some other brilliant minds cut their own paths in this industry. And then to my current day where we were actually able to pull out standard practices and Responsible AI, hence a lot of standard practices which what I wrote about in my book, quite literally titled Responsible AI. So that's present day."
Are there any tangible results or successes you can share?
"Me as Olivia, right now I just published my first book. I am stepping out under my own name. I do independent consulting now and advisory work as Olivia Gambelin. So, it's the tangible success and results of having enough of an established name and expertise attached to it that I'm able to do that. I also focus a lot on ethics by design and Responsible AI as an innovation practice, so I'm often a bit of a different direction than necessarily what you're seeing a lot around the AI governance sides to Responsible AI. My approach is gaining more and more attention and excitement and momentum. So, for me that's a success. Then there is Ethical Intelligence. So Ethical Intelligence has been my company I started 5 years ago, and we're now a collective of Responsible AI practitioners. I've seen success as a founder there and going through waves with that company of just understanding how to adapt it to the market needs and EI. We've served clients from teeny tiny series A startups, all the way up to like Fortune 100 banks. We've seen a wide range of clients and success cases. The interesting thing with Responsible AI is that each client has a different success story that they want to focus on. For some clients, it was a sense of community and their employees coming back around and saying, hey, I actually feel confident in my work now. Other companies it was, hey, we're saving time and resources on our governance processes. Or hey, we prevented this lawsuit because we caught something that was wrong in the data early on that would have resulted in a lawsuit eventually down the line. No matter the company, no matter the industry, because again everyone was having different problems. I think biggest success was being able to walk away and say I have confidence in my work now and I have a more in-depth understanding of the technology itself."
What have been your biggest lessons?
"Pieces of advice to organisations navigating Responsible AI This is the biggest one that I will tell over and over again. Responsible AI is not a necessarily a technical problem. Companies see the biggest block being technical and think to see value in their technology and ensure that they’re getting the intended use out of AI, they must focus on the technology. 9 times out of 10, the roots of the challenges and problems and the solutions, lie in the people in the process. So, AI at the end of the day is and always will be a human opportunity, challenge, risk, etc. Understanding the humans behind it all, understanding the people that are building or adopting your AI, understanding the processes that they're working on, have far more of an impact on the outcome of Responsible AI efforts than, let's say, a fairness metric that you can use on your algorithms or models. My biggest piece of advice is look beyond the technology and look at who is building or adopting and how are they building or adopting the technology."
How has the EU AI Act impacted your work?
"I would say it has impacted and it hasn't impacted at the same time. Right now, as the act is coming into effect it’s like this quiet before the storm where everyone's sitting there waiting for the ball to drop off in order to be in compliance with it. There's still a lot of uncertainty there. So, in one way, it hasn't necessarily started impacting my work because companies are still trying to get their minds wrapped around it. And in some cases, it isn't on their radar yet. So, it's kind of an interesting and similar to what happened with the GDPR. And in other ways it has impacted my work because companies will focus and say, well, we're not going to do any of this Responsible AI stuff, we're just going to wait for the EU AI Act to tell us what to do, which is a mistake in and of itself. I also am a bit of a unique case because I don't work on compliance regulation. You'll have ethics teams that focus specifically on that policy regulation compliance. That's only one side to ethics, it’s a small portion of the bigger picture of Responsible AI. I don't necessarily work directly on that. I'm aware of the policies. I will advise on when to follow these policies, I’ll advise on how to follow these policies and regulations, but I'm not doing the actual compliance work. I sit more on with leadership teams and design teams. So, I'm looking at how to use ethics as an innovation tool as well as how to equip executives to understand what AI is and how to become an AI enabled organisation on responsible foundations. It's not really necessarily driven by the EU AI act and it's more driven out of we just need good standard business practice in AI."
Are there any surprising or unexpected changes you've made?
"Things you've implemented that have made a big difference to responsible efforts that you might recommend to others. I would say it's not necessarily a change that I have made, but an aspect of Responsible AI that I focus on and usually I'm brought in to bring a change to Responsible AI and the narrative there. And what I mean by that is a lot of the narrative of Responsible AI has been focused on risk mitigation and that is a very important side to Responsible AI but it can also be used as an innovation practice where you're actually designing for values. And I work a lot actually on helping enable that mindset shift with teams where it's not just a risk practice, but it's something that can be used to drive innovation, and so it's moving ethics out of that blocker mindset to an enabler mindset. I would recommend to others is to open up their minds to see Responsible AI as not just a compliance or risk mitigation practice. It's not just about safeguarding, it's about enabling AI innovation."
Congratulations on the release of your new book! Do tell us about it.
"My new book quite literally titled 'Responsible AI: Implement an Ethical Approach in your Organisation.' This is basically it's an end-to-end manual guide on how to design and implement an AI strategy. It delves into how to be AI enabled and how your organisation can de-risks and increase the opportunity for potential growth and competitive edge. The practical use of these AI systems in a way that's not just taking on unnecessary risk. I mean, AI is risky business and Responsible AI is enabling companies to get their intended use out of AI. My book is manual on how to design those strategies. So, at the centre of this book is a tool called the values canvas (Download it for free here). It is a tool that is a holistic management template for embedding values specific ethical values into AI strategies. It's the full picture of how to enable AI either development or adoption within your organisation."
Can you give us a teaser for your workshop at the upcoming Responsible AI Summit?
"The workshop that I'll be giving at the Responsible AI Summit is based off the values canvas, so I will be walking everyone through how to move from mindset shift from risk to innovation mindset and then practically how do you bring this to life. So if you're sat there going, where do I start? what am I missing both in terms of AI and Responsible AI? This workshop is designed for you. We're going to be walking through where you start. What are you missing and how do we get you up off the ground running so that you are enabled to take and go and adapt to your company's needs? Not one-size-fits-all to instead enabling you the individual."
Why are you looking forward to the conference this September?
"Well, I would say selfishly the Responsible AI space and is very close knit. I'm looking at all the other speakers and I see tons of friends and colleagues on that list. And so selfishly I am really looking forward to the conference in September to be able to see my people, there's a great lineup of just brilliant minds coming together here. As an attendee, that's looking to understand Responsible AI, you can't find a better lineup than this, so I'm selfishly just excited to learn from my colleagues and to see them all in one place. We're tight knit group, we're tight knit space. We don't have too many excuses and places where we can all gather."