Responsible AI Summit Main Conference Day 2 - Wednesday 24 September


Morning Opening Session

8:00 am - 8:40 am Morning Networking Breakfast & Coffee


Paul Dongha

Head of Responsible AI & AI Strategy
NatWest Group

As AI continues to reshape industries, the EU AI Act’s emphasis on AI literacy has made it a critical priority for enterprises. Ensuring that employees are well-versed in AI technologies isn’t just about compliance - it's essential for fostering innovation, mitigating risk, and building trust in AI systems. Organisations must invest in upskilling their workforce to ensure a smooth transition into an AI-powered future while adhering to new regulations.

·       Equipping employees with AI literacy to meet EU AI Act mandates and compliance standards.

·       Reskilling and upskilling talent for an AI-powered, regulated workforce.

·       Building a culture of collaboration where human expertise and AI complement one another.


Oliver Patel

Head of Enterprise AI Governance
AstraZeneca


Carol Wilson

AI Ethics and Governance, Fellow of Information Privacy
Royal London


Dara L. Sosulski

Managing Director, Head of Artificial Intelligence and Model Management
HSBC


James Fletcher

Head of Responsible AI
BBC

9:15 am - 9:45 am Presentation - Scaling AI Governance in Healthcare: Balancing Regulation, Risk, and Real-World Impact at Philips

Arlette Van Wissen - Responsible and Sustainable AI Lead, Philips
Ger Janssen - AI Ethics & Compliance Lead, Philips

As a multinational health technology company, Philips operates at the intersection of AI innovation, medical regulation, and enterprise governance. In this session, the Responsible AI team shares how they’re embedding scalable AI risk frameworks into enterprise risk structures - while also navigating the evolving regulatory landscape of the EU AI Act within an already heavily regulated medical domain. From bias mitigation to sustainability, this talk explores what responsible AI looks like when patient safety and compliance are non-negotiable.

·       Aligning AI governance with enterprise risk management across a highly regulated global organisation

·       Translating evolving AI-specific regulations into practical controls within clinical-grade systems

·       Driving bias mitigation strategies tailored to the complexities of healthcare data and use cases


Arlette Van Wissen

Responsible and Sustainable AI Lead
Philips


Ger Janssen

AI Ethics & Compliance Lead
Philips

9:45 am - 10:30 am RAI Real Talk Roundtable Session

James Fletcher - Head of Responsible AI, BBC

Ever go to conferences hoping for practical insights that can actually help you do responsible AI better day to day, but come away empty-handed after yet another high-level discussion about the EU AI Act? Well, this session is for you. It will be interactive, it will be fun, and most of all it will be a chance to get answers to your burning questions about how to survive and thrive as a responsible AI practitioner doing it for real in the fast-changing, complex world of AI.


James Fletcher

Head of Responsible AI
BBC

10:30 am - 11:00 am Morning Coffee Networking Break

Late Morning Session

11:00 am - 11:30 am Presentation – AI Governance Done Right. How?

Petar Tsankov - CEO and Co-Founder, LatticeFlow AI

Many organizations have defined and committed to AI governance principles. But when it comes to enforcing them, governance too often stalls at the abstract, high-level checklist stage, disconnected from real AI systems and state-of-the-art technical evaluations.

In this session, we'll show you how to take AI governance beyond the checklist by mapping principles to real AI risks, executable controls, and deep technical evaluations, grounded in cutting-edge AI research.

This is AI Governance. Done Right. Because AI governance only works when it's grounded in real-world technical assessments.

You will walk away with:

● A clear, practical approach to turn AI governance principles into enforceable technical requirements

● Real-world examples of AI governance in action: what works, what fails

● Proven ways to scale AI governance with continuous AI assessments embedded in every AI deployment

If you're accountable for AI governance, risk, or compliance, this is how you close the last mile where trust, performance, and compliance meet.


Petar Tsankov

CEO and Co-Founder
LatticeFlow AI

11:30 am - 12:00 pm Presentation – From Principles to Practice: Evolving a Human-Centered Responsible AI Framework at Reckitt

Anastasia Zygmantovich - Global Data Science and Data Visualisation Director, Reckitt
Tomasz Piechula - Responsible AI Governance Lead, Reckitt

As AI becomes embedded across business functions, translating Responsible AI principles into scalable, context-aware practice is an enterprise-wide challenge. Over the past year, Reckitt has evolved its Responsible AI governance model from a conceptual framework to a robust, human-centered process tailored to the company’s operational reality. In this session, Anastasia and Tomasz will walk through key milestones in Reckitt’s RAI journey, sharing practical insights into how risk is evaluated, scoped, and mitigated across diverse AI initiatives. Attendees will gain a clear view of how Reckitt has adapted its governance approach over time - and what others can take from that experience.

  • Defining and adapting evaluation scopes based on AI system complexity and business context
  • Embedding human-centered reflection into RAI processes to address real-world organizational needs
  • Iterating governance models by learning from frontline challenges and cross-functional collaboration



Anastasia Zygmantovich

Global Data Science and Data Visualisation Director
Reckitt


Tomasz Piechula

Responsible AI Governance Lead
Reckitt

12:00 pm - 12:30 pm Presentation – Operationalising Responsible AI for Critical Infrastructure Resilience

Ronnie Chung - Group Head of Responsible AI, Centrica

As energy systems digitise and GenAI adoption accelerates, critical infrastructure operators face new regulatory scrutiny, cyber threats, and resilience risks. At Centrica, building a Responsible AI Framework has been key to scaling innovation while safeguarding operations, customers, and society. Attend this talk to understand how Ronnie and his team are:

·       Embedding AI governance across GenAI, ML, and critical energy infrastructure systems

·       Aligning risk tiering with regulatory, cyber, and environmental resilience expectations

·       Translating responsible AI principles into action across complex, distributed operations


Ronnie Chung

Group Head of Responsible AI
Centrica

AI’s increasing role in workplace decisions, from hiring to performance management, raises important ethical concerns. This session will explore the ethical implications of AI-driven decisions, focusing on transparency, fairness, and accountability. We will address the role of governance in ensuring AI systems are used responsibly and in ways that uphold organizational values and employee rights.

·       Examining the ethical impact of AI-assisted decisions on the workforce.

·       Discussing strategies for ensuring fairness, transparency, and accountability in AI algorithms.

·       Managing the ethical challenges of algorithmic management in the workplace.


Philippa Penfold

Responsible AI & Data Science Manager
Elsevier


Sarah Mathews

Group Responsible AI Manager
The Adecco Group


Felix Muckenfuß

Strategic Data & AI Governance Solutions Executive
OneTrust

Lunch

1:00 pm - 2:00 pm Lunch in the Exhibition Hall: Network With Your Peers

Early Afternoon Session

2:00 pm - 2:30 pm Presentation – Minimising Friction in AI Governance: Balancing Trust, Innovation, Governance and Intellectual Property

Andi McAleer - Head of Data & AI Governance, Financial Times

In an industry where content is the product, AI governance and decision making involve carefully balancing risk to assets against potential value. Andi, Head of Data and AI Governance at the Financial Times, shares how a 135-year-old premium news organisation effectively governs a wide programme of AI-enabled solutions - enabling experimentation without undermining its intellectual property or journalistic integrity.

·       Creating flexible, multi-tiered governance tools that serve different stakeholder needs—from quick checklists to comprehensive consultations

·       Positioning AI governance as an innovation partner rather than a gatekeeper through approachable, frictionless processes

·       Developing practical methods to evaluate AI use cases against ethical frameworks while maintaining competitive advantage

·       Navigating the tension between exploring new AI-driven business models and safeguarding premium content value


Andi McAleer

Head of Data & AI Governance
Financial Times

2:30 pm - 3:00 pm Presentation – Framework to Function: Scaling Responsible AI with Tools, Governance, and Trust in the Development Area of Novo Nordisk

Per Rådberg Nagbøl - Senior Data & AI Governance Professional, Novo Nordisk

As AI adoption accelerates across industries, ensuring responsible, scalable, and consistent deployment is more critical than ever - especially in highly regulated sectors like pharma. At Novo Nordisk, responsible AI is not a siloed initiative but a collective effort across the enterprise, combined with area- and domain-specific requirements. From aligning with evolving legislation like the EU AI Act to making it easier for practitioners to comply, this session explores how practical tooling, a trustworthy AI council, and strong AI governance can turn frameworks into action.

  • Establishing tools that embed compliance into AI workflows and decisions
  • Building governance structures that scale with enterprise-wide AI adoption
  • Treating AI as collaborative work, not a standalone machine decision

Per Rådberg Nagbøl

Senior Data & AI Governance Professional
Novo Nordisk

3:00 pm - 3:30 pm Presentation: Regulating AI, Using AI: The FCA’s Dual Role in Shaping and Embedding Responsible AI

Fatima Abukar - Principal Advisor, Responsible AI and Data, FCA

As AI transforms financial services, the Financial Conduct Authority is playing a dual role: setting expectations for responsible innovation across the sector while embedding responsible AI practices within its own organisation. In this session, Fatima, Principal Advisor for Responsible AI and Data, offers a rare window into both sides of that journey. From aligning internal frameworks with data privacy, cyber and legal requirements, to collaborating with DSIT and Ofcom on national policy, Fatima explores what responsible AI means in practice - for regulators and the regulated alike.

  • Applying principles internally to support safe and responsible use of AI
  • Shaping future-facing regulation through collaboration, research, and open engagement
  • Building internal capability through data strategy, literacy, AI-specific governance structures, and Data and AI Ethics Frameworks

Fatima Abukar

Principal Advisor, Responsible AI and Data
FCA

Afternoon Closing Session

3:30 pm - 4:00 pm Afternoon Networking Refreshment Break

The authors of Governing the Machine – how to navigate the risks of AI and unlock its true potential will provide a practical, flexible framework for building comprehensive and robust AI governance to ensure organizations can reap the benefits of AI without unintended liabilities.

The talk will explain the process of defining AI principles and policies, understanding and assessing risks, developing safeguards, and selecting the right technical tools and training. It will cover strategies to effectively govern traditional AI systems as well as the emerging complexities of generative AI and autonomous agents. Whether you're just beginning your AI journey or refining your approach, this is your essential guide to seizing the opportunities - and avoiding the pitfalls - of AI systems.


Paul Dongha

Head of Responsible AI & AI Strategy
NatWest Group


Ray Eitel-Porter

Accenture Luminary & Senior Research Associate, Intellectual Forum
Jesus College, Cambridge

As AI continues to evolve, so must the skills of the workforce. This session will examine best practices for developing training and upskilling programs that enable employees to understand and responsibly engage with AI technologies. From foundational education to specialized workshops, we’ll cover how to structure learning paths that meet the needs of both technical and non-technical teams.

·       Design training programs tailored to both technical and non-technical employees.

·       Create learning paths that support responsible AI integration and adoption.

·       Foster continuous AI education to stay ahead in an evolving landscape.


Danielle Langford

Responsible AI Specialist
Zurich Insurance


Georgiana Marsic

Former Principal AI Manager
Jaguar Land Rover


Oriana Medlicott

Responsible AI Lead
Admiral Group


Harry Muncey

Senior Director of Data Science, Responsible AI
Elsevier

5:00 pm Chair's Closing Remarks & End of Conference