This talk will share insights into the latest developments and implications of the EU AI Act, crucial for businesses navigating AI regulation.
• Staying informed: Keeping abreast of updates and guidelines issued by regulatory authorities to ensure compliance.
• Engaging with regulators: Collaborating with regulators and industry peers to shape responsible AI policies.
• Embracing regulatory sandboxes: Leveraging regulatory sandboxes to test and refine AI systems in a controlled environment.
Recent studies have shown that brands associated with AI lag behind on trust. And as long as only about 5% of AI experiments are operationalised, businesses will not realise the full potential of AI. The time for talk is over. We will discuss how to operationalise AI and your data in a trusted, governed way to achieve AI innovation at scale.
• How do you mitigate the trust deficit associated with AI?
• How do you move your AI experiments from your sandboxes to full-scale production?
This interactive session takes participants on a journey towards continually advancing Responsible AI operationalization by intertwining three critical elements. It is a chance for you and your peers to delve into the key aspects of AI governance, spanning design, deployment, and ongoing monitoring, to ensure ethical and accountable AI practices.
• Establishing clear guidelines and policies for AI development aligned with ethical standards.
• Implementing robust monitoring mechanisms to track AI performance and identify potential biases.
• Fostering cross-functional collaboration to ensure alignment between AI initiatives and organizational values.
This session dives into the critical aspects of managing third-party vendors in the responsible AI landscape.
• Understanding multi-faceted risks: Ensuring vendor models align with ethical standards and mitigate biases.
• Implementing rigorous evaluation processes: Using red teaming and continuous monitoring to identify potential issues.
• Optimizing sustainability and cost-effectiveness: Navigating the challenges of running your own compute for flexibility and sustainability benefits.
Exploring the critical intersection of Responsible AI and HR practices from a range of expert perspectives, this panel addresses the urgency of governing AI in recruitment and selection. With large language models (LLMs) and other AI systems permeating the HR landscape, businesses must prioritize ethical governance frameworks in this area, particularly as the EU’s AI Act classifies many of these systems as ‘high-risk’:
• Establishing clear guidelines for AI application in HR, aligning with organizational values and legal standards.
• Fostering cross-functional collaboration between HR, legal, and technology departments to ensure comprehensive governance.
• Implementing practical training programs to empower HR professionals in responsible AI decision-making.
Explore the critical implications of AI for employment and strategies for navigating this transformational shift. This panel discusses proactive approaches for businesses to openly discuss, understand, and mitigate the impact on jobs while embracing AI innovation responsibly.
• Investing in reskilling and upskilling programs to empower employees for future roles in AI-driven environments.
• Fostering open communication channels to address concerns and ensure transparency about AI integration plans.
• Implementing inclusive AI strategies that prioritize human well-being and job retention alongside technological advancement.
No matter how well an AI system is trained, it is certain to make mistakes. The systematic nature of these errors means users in any industry must mitigate the potential downsides of adopting the technology in order to secure the full benefits of deploying AI across their business use cases.
This talk will explore:
• Different types of AI risks for various business use cases, and how companies can identify their particular risk profile
• Why businesses should consider risk mitigation strategies as a critical part of their AI deployment plans
• The upsides of effective risk management of AI tools
Delve into the critical aspects of ensuring trustworthy AI, vital for fostering user trust and safety.
• Fostering a culture of transparency: Prioritizing clear communication and openness about AI systems and their capabilities.
• Implementing robust risk mitigation strategies: Integrating human oversight and thorough testing protocols to mitigate potential risks.
• Prioritizing user experience: Designing AI systems with a user-centric approach to enhance trust and usability.
Join us for an illuminating talk that unravels some of the ethical complexities of the future AI landscape. As AI technologies rapidly advance, it is crucial to proactively navigate the ethical challenges and opportunities that lie ahead.
• Developing a comprehensive AI strategy encompassing governance plans and human-machine teaming philosophies.
• What does investing in your AI workforce look like?
• Addressing emerging ethical issues such as AI in elections, deepfake detection, and the governance of brain-computer interfaces.