As AI continues to reshape industries, the EU AI Act’s emphasis on AI literacy has made it a critical priority for enterprises. Ensuring that employees are well-versed in AI technologies isn’t just about compliance - it’s essential for fostering innovation, mitigating risk, and building trust in AI systems. Organisations must invest in upskilling their workforce to ensure a smooth transition into an AI-powered future while adhering to new regulations.
· Equipping employees with AI literacy to meet EU AI Act mandates and compliance standards.
· Reskilling and upskilling talent for an AI-powered, regulated workforce.
· Building a culture of collaboration where human expertise and AI complement one another.
As a multinational health technology company, Philips operates at the intersection of AI innovation, medical regulation, and enterprise governance. In this session, the Responsible AI team shares how they’re embedding scalable AI risk frameworks into enterprise risk structures - while also navigating the evolving regulatory landscape of the EU AI Act within an already heavily regulated medical domain. From bias mitigation to sustainability, this talk explores what responsible AI looks like when patient safety and compliance are non-negotiable.
· Aligning AI governance with enterprise risk management across a highly regulated global organisation
· Translating evolving AI-specific regulations into practical controls within clinical-grade systems
· Driving bias mitigation strategies tailored to the complexities of healthcare data and use cases
Ever go to conferences hoping for practical insights that can actually help you do responsible AI better day to day, but come away empty-handed after yet another high-level discussion about the EU AI Act? Well, this session is for you. It will be interactive, it will be fun, and most of all it will be a chance to get answers to your burning questions about how to survive and thrive as a responsible AI practitioner doing it for real in the fast-changing, complex world of AI.
Many organizations have defined and committed to AI governance principles. But when it comes to enforcing them, governance too often stalls at the abstract, high-level checklist stage, disconnected from real AI systems and state-of-the-art technical evaluations.
In this session, we'll show you how to take AI governance beyond the checklist by mapping principles to real AI risks, executable controls, and deep technical evaluations, grounded in cutting-edge AI research.
This is AI Governance. Done Right. Because AI governance only works when it's grounded in real-world technical assessments.
You will walk away with:
· A clear, practical approach to turn AI governance principles into enforceable technical requirements
· Real-world examples of AI governance in action: what works, what fails
· Proven ways to scale AI governance with continuous AI assessments embedded in every AI deployment
If you're accountable for AI governance, risk, or compliance, this is how you close the last mile where trust, performance, and compliance meet. A simplified code sketch of the principle-to-control idea follows below.
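To make "beyond the checklist" concrete, here is a minimal Python sketch of the general idea: binding a governance principle and a named risk to an automated check over evaluation metrics. The Control structure, metric names, and thresholds are illustrative assumptions for this example, not the speakers' actual tooling.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Control:
    """One executable control derived from a governance principle."""
    principle: str                  # e.g. "Fairness"
    risk: str                       # the concrete risk this control addresses
    check: Callable[[dict], bool]   # automated evaluation over model metrics
    criterion: str                  # human-readable pass criterion

# Hypothetical metrics emitted by a model evaluation pipeline.
metrics = {"subgroup_accuracy_gap": 0.04, "toxicity_rate": 0.002}

controls = [
    Control(
        principle="Fairness",
        risk="Accuracy differs across demographic subgroups",
        check=lambda m: m["subgroup_accuracy_gap"] <= 0.05,
        criterion="subgroup accuracy gap <= 5 percentage points",
    ),
    Control(
        principle="Safety",
        risk="Model produces harmful content",
        check=lambda m: m["toxicity_rate"] <= 0.01,
        criterion="toxicity rate <= 1% of sampled outputs",
    ),
]

for c in controls:
    status = "PASS" if c.check(metrics) else "FAIL"
    print(f"[{status}] {c.principle}: {c.criterion}")
```

Run on every deployment, checks of this shape are one plausible way the "continuous AI assessments" in the list above could be operationalised.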
As AI becomes embedded across business functions, translating Responsible AI principles into scalable, context-aware practice is an enterprise-wide challenge. Over the past year, Reckitt has evolved its Responsible AI governance model from a conceptual framework to a robust, human-centered process tailored to the company’s operational reality. In this session, Anastasia and Tomasz will walk through key milestones in Reckitt’s RAI journey, sharing practical insights into how risk is evaluated, scoped, and mitigated across diverse AI initiatives. Attendees will gain a clear view of how Reckitt has adapted its governance approach over time - and what others can take from that experience.
As energy systems digitise and GenAI adoption accelerates, critical infrastructure operators face new regulatory scrutiny, cyber threats, and resilience risks. At Centrica, building a Responsible AI Framework has been key to scaling innovation while safeguarding operations, customers, and society. Attend this talk to understand how Ronnie and his team are:
· Embedding AI governance across GenAI, ML, and critical energy infrastructure systems
· Aligning risk tiering with regulatory, cyber, and environmental resilience expectations (a simplified tiering sketch follows this list)
· Translating responsible AI principles into action across complex, distributed operations
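As a rough illustration of the risk-tiering bullet above, here is a minimal Python sketch. The UseCase attributes and the tier criteria are hypothetical assumptions chosen for this example; they are not Centrica's actual framework.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Attributes a review board might collect for an AI use case."""
    name: str
    touches_critical_infrastructure: bool
    processes_personal_data: bool
    fully_automated_decision: bool

def risk_tier(uc: UseCase) -> str:
    """Assign a coarse risk tier; the criteria here are illustrative only."""
    if uc.touches_critical_infrastructure:
        return "high"
    if uc.processes_personal_data or uc.fully_automated_decision:
        return "medium"
    return "low"

cases = [
    UseCase("grid-load forecasting", True, False, False),
    UseCase("customer-email summariser", False, True, False),
    UseCase("internal document search", False, False, False),
]
for uc in cases:
    print(f"{uc.name}: {risk_tier(uc)} tier")
```

In a real framework the tier would then drive proportionate controls - for example, human oversight and resilience testing for high-tier systems.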
AI’s increasing role in workplace decisions, from hiring to performance management, raises important ethical concerns. This session will explore the ethical implications of AI-driven decisions, focusing on transparency, fairness, and accountability. We will address the role of governance in ensuring AI systems are used responsibly and in ways that uphold organizational values and employee rights.
· Examining the ethical impact of AI-assisted decisions on the workforce.
· Discussing strategies for ensuring fairness, transparency, and accountability in AI algorithms (a minimal fairness check is sketched after this list).
· Managing the ethical challenges of algorithmic management in the workplace.
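To ground the fairness bullet above, here is a minimal Python sketch of one common screening heuristic: comparing selection rates across groups (the "four-fifths" rule of thumb). The data, group labels, and threshold are illustrative assumptions, and passing such a check is not by itself evidence of fairness.

```python
# Hypothetical hiring decisions as (group, hired) pairs.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True),
]

def selection_rate(group: str) -> float:
    """Fraction of candidates in the group who were selected."""
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")

if ratio < 0.8:  # common screening heuristic, not a legal test
    print("Potential disparate impact - flag for human review")
```

Checks like this are only a starting point; governance also has to cover transparency (can the decision be explained?) and accountability (who reviews flagged cases?).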
In an industry where content is the product, AI governance and decision-making mean carefully balancing risk to assets against potential value. Andi, Head of Data and AI Governance at the Financial Times, shares how a 135-year-old premium news organisation effectively governs a wide programme of AI-enabled solutions and experiments - enabling experimentation without undermining its intellectual property or journalistic integrity.
· Creating flexible, multi-tiered governance tools that serve different stakeholder needs - from quick checklists to comprehensive consultations
· Positioning AI governance as an innovation partner rather than a gatekeeper through approachable, frictionless processes
· Developing practical methods to evaluate AI use cases against ethical frameworks while maintaining competitive advantage
· Navigating the tension between exploring new AI-driven business models and safeguarding premium content value
As AI adoption accelerates across industries, ensuring responsible, scalable, and consistent deployment is more critical than ever - especially in highly regulated sectors like pharma. At Novo Nordisk, responsible AI is not a siloed initiative but a collective effort across the enterprise, combined with area- and domain-specific requirements. From aligning with evolving legislation like the EU AI Act to making it easier for practitioners to comply, this session explores how practical tooling, a trustworthy AI council, and other strong governance mechanisms can turn frameworks into action.
As AI transforms financial services, the Financial Conduct Authority is playing a dual role: setting expectations for responsible innovation across the sector while embedding responsible AI practices within its own organisation. In this session, Fatima, Principal Advisor for Responsible AI and Data, offers a rare window into both sides of that journey. From aligning internal frameworks with data privacy, cyber and legal requirements, to collaborating with DSIT and Ofcom on national policy, Fatima explores what responsible AI means in practice - for regulators and the regulated alike.
The authors of Governing the Machine – how to navigate the risks of AI and unlock its true potential will provide a practical, flexible framework for building comprehensive and robust AI governance to ensure organizations can reap the benefits of AI without unintended liabilities.
The talk will explain the process of defining AI principles and policies, understanding and assessing risks, developing safeguards, and selecting the right technical tools and training. It will cover strategies to effectively govern traditional AI systems as well as the emerging complexities of generative AI and autonomous agents. Whether you're just beginning your AI journey or refining your approach, this is your essential guide to seizing the opportunities - and avoiding the pitfalls - of AI systems.
As AI continues to evolve, so must the skills of the workforce. This session will examine best practices for developing training and upskilling programs that enable employees to understand and responsibly engage with AI technologies. From foundational education to specialized workshops, we’ll cover how to structure learning paths that meet the needs of both technical and non-technical teams.
· Design training programs tailored to both technical and non-technical employees.
· Create learning paths that support responsible AI integration and adoption.
· Foster continuous AI education to stay ahead in an evolving landscape.