Interview with Ryan Carrier, Executive Director and Founder of ForHumanity

Tell me about yourself and about ForHumanity.

My name's Ryan Carrier. I'm the executive director and founder of ForHumanity, a nonprofit public charity established in both the US and Europe. We have a mission, and it's very simple: to examine and analyze the downside risks associated with AI, algorithmic, and autonomous systems, but to accept these tools where they are and aim to mitigate as much risk from them as possible.

So we want to maximize risk mitigation. Why? Because if we've mitigated the risks of all these tools, then we maximize their benefit for humanity, which is where the overly ambitious name of the organization comes from.

We are more than 2,200 members from 98 countries around the world. We get together and crowdsource transparent audit rules and certification schemes in support of independent audit of AI systems. We believe that all high-risk AI systems should go through mandatory third-party independent audits, and we support that ecosystem by establishing rules and training people on those rules. We submit our rules to the authority of governments and regulators and seek voluntary market adoption.

Could you summarise your journey so far with responsible AI?

Responsible AI is the very nature of the work we do, where we are defining risks and looking to mitigate them. If you mitigate the risks associated with AI, algorithmic, and autonomous systems, then by definition you're being responsible.

Our primary mission of supporting independent audit of AI systems involves drafting risk controls, treatments, and mitigations for what we call AAA systems: AI, algorithmic, and autonomous systems. Across AAA systems, we've already drafted more than 7,000 individual risk controls, treatments, and mitigations. Based on law, regulation, best practices, standards, and more, we draft them into binary audit rules. And when I say binary, what I mean is that an independent third-party auditor can look at your compliance and say with certainty that you did or did not comply. That's our main journey: to seek out all the best practices, all the standards, all the new laws, all the regulations, and pull them together in a globally harmonized way.
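
To make the "binary" idea concrete, here is a minimal, hypothetical sketch in Python. It is not ForHumanity's actual rule format; the `AuditRule` structure and the RM-001 rule are invented for illustration. The point is that each rule reduces evidence to exactly True or False, leaving no graded score for an auditor to interpret.

```python
from dataclasses import dataclass
from typing import Callable, Mapping

@dataclass(frozen=True)
class AuditRule:
    rule_id: str        # clause reference within a certification scheme (invented)
    description: str    # what the audited organization must demonstrate
    check: Callable[[Mapping[str, object]], bool]  # evidence -> complied or not

# Invented example rule: compliance means a named risk owner is documented.
risk_owner_rule = AuditRule(
    rule_id="RM-001",
    description="A named individual is accountable for the system's risk register.",
    check=lambda evidence: bool(evidence.get("risk_owner")),
)

evidence = {"risk_owner": "Jane Doe"}
verdict = "complied" if risk_owner_rule.check(evidence) else "did not comply"
print(f"{risk_owner_rule.rule_id}: {verdict}")  # -> RM-001: complied
```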

Are there any tangible results or successes that you can share?

The main thing is that our GDPR certification scheme for AAA systems is currently being reviewed by the European Data Protection Board. Assuming it is approved, let's say in the next 6 to 12 months, it would be the first certification scheme in the world approved for AI systems under data protection, under GDPR. That's a big deal. In addition, we have numerous groups who license our work, either to implement solutions or to become compliant themselves. And finally, more than 1,600 people have joined our AI education and training center to learn what compliance looks like and what independent audit of AI systems is about.

I think those are the successes, and the growth of the organization alone, from one person in March of 2020 to more than 2,200 members from 98 countries today, demonstrates the value that we're bringing to the marketplace.

What has been your biggest lesson, or piece of advice you can give to organizations trying to navigate responsible AI?

Everything takes longer than you think and/or hope. What we're doing, producing responsible AI, producing these audit criteria, following the law, and documenting compliance with it, seems easy. But the process is so dependent on the use case and on the scope, nature, context, and purpose of each system that it creates a great deal of nuance and many grey areas. Managing expectations of time, pay, and speed is probably the best lesson.

You previously mentioned the EU AI Act. How has that regulation, and I guess other regulations, impacted your work?

Our primary work is based in the law. So on April 28th of 2021, we started reading the proposed EU AI Act for two hours every week, and then began to draft audit criteria. We have not stopped since 2021; we're still doing it every week, and it's actually now up to three hours. We go through the law word by word, line by line, definition by definition, to ensure that our certification scheme abides by its intent, at least as far as we interpret it. There will always be further interpretations, further guidance, and further regulations.

That's why they created the AI Board, which is just getting started, and what it produces will further guide how we do the work that we do. It really helps us to learn what governments, regulators, and people want in terms of compliance and in terms of abiding by laws and regulations, figuring out, essentially, what the best practices are. We think about solutions, and when we don't see obvious answers or legal solutions, we produce best practices ourselves. That's why we do this in a crowdsourced, transparent way: it's about lots of people putting their heads together and thinking creatively about how to solve these problems.

Are there any potentially surprising or unexpected changes you made, or things you implemented, that made a big difference to your responsible AI efforts and that you would recommend to others?

Nothing stands out with the exception of the approach itself. Using audit rules that are binary is a very detail-oriented approach, but what that does is build confidence and greater certainty that you've met your obligations under the law or the regulation. The challenge is getting into the minute details and then convincing people to pay attention to those details. That's the biggest hurdle in this whole process.

Can you give us a teaser for your workshop at the upcoming responsible AI summit?

The workshop will get into some of these details. Whether we're talking about risk management frameworks, data management and governance, building a quality management system, or human interactions and monitoring, we're going to dabble in the details of what it means to look at compliance. The idea is that I want to leave people who attend the workshop with an understanding of the scope and depth required to achieve genuine compliance with the law.

What are you looking forward to at the conference this September?

I'm looking forward to seeing if people are ready. The law doesn't go into full force until 2026 now, but some aspects already have, so: are people ready to move now, or are they just starting to scope out what moving looks like? I really want to gauge the marketplace for adoption, and I think we'll see a range. Some will be proactive; they want to get ahead of it, and they recognize that it takes time to meet those challenges. Others might be a bit paralyzed by fear, because compliance is difficult. The law is dense, and many of these tools were not built with compliance by design, so backfilling a lot of these AI tools is going to be a big question. I'm looking forward to gauging from attendees where they're at.
