Main Day 1 - Tuesday 14th January 2025

8:30 am - 8:35 am Chair Remarks

8:35 am - 9:00 am Deploying AI at Scale: Flexible, Scalable, and Cost-Effective Solutions for Dynamic Environments

  • Ensuring AI-Ready Data: Learn how to manage and prepare high-quality data to power AI models effectively, ensuring optimal results in dynamic and data-rich environments.
  • Integrating Cutting-Edge AI Models: How to seamlessly incorporate the latest AI advancements into existing infrastructure, ensuring smooth integration and optimised performance.
  • Scalable AI Infrastructure Solutions: Explore vendor-provided solutions that deliver specialized infrastructure for scaling AI workloads, enhancing flexibility, and maintaining cost efficiency.
  • Optimising AI for Enterprise Deployment: Gain insights into real-world AI deployment strategies that tackle infrastructure, model complexity, and scalability challenges, ensuring flexibility and long-term success.

9:00 am - 9:40 am PANEL: How to Architect Your Gen AI Applications

Alberto Romero - Director, GenAI Platform Engineering, Citi
  • Tools and techniques for model selection, training, and customisation, including foundation models, fine-tuned models, and centralised repositories to streamline AI development.
  • Deploying and Orchestrating AI/ML Models: How to deploy, distribute, and orchestrate AI/ML models so your solution is accurate, reliable, and scalable; how to account for the distinct requirements of training and inference when designing AI/ML solutions; and how to establish and meet performance benchmarks for each component of your system to ensure optimal operation.
  • The importance of a robust infrastructure layer, including cloud platforms and specialized hardware (GPUs, TPUs), to support the intensive training and inference workloads of generative AI models.
  • How platform engineering for MLOps enhances the orchestration of AI models, focusing on adapting, deploying, and monitoring models effectively within end-user applications.

9:40 am - 10:10 am SPONSORED SESSION: Overcoming Infrastructure Challenges in Custom AI Deployment

  • Managing AI Compute Resources: Overcoming hurdles in managing compute resources like GPUs and TPUs is crucial for efficient AI deployments. Limited access can delay projects and drive up costs, but tailored infrastructure solutions can optimize resource allocation and improve overall efficiency.
  • Architecting for Scalability and Performance: Developing a robust structural framework and best practices for AI infrastructure is key to ensuring scalability, high performance, and cost optimization. Standardized reference architectures help simplify resource management and meet the growing demands of AI systems.
  • Optimizing Workflows and Integration: Efficiently allocate computational resources and optimize workflows by integrating critical components of the AI ecosystem. This ensures seamless interaction between hardware, software, and AI models, leading to successful and scalable deployments.
  • Tailored Infrastructure for Custom AI Solutions: Custom AI deployments benefit from specialized infrastructure designed to handle specific use cases. These solutions enable fast response times, cost efficiency, and scalability, ensuring the seamless integration and performance of both open-source and custom AI models.

10:10 am - 10:35 am Networking & Refreshments

10:35 am - 10:40 am Chair Remarks

10:40 am - 11:05 am How to Architect Your Gen AI Applications

  • Modernising enterprise data and data science platforms and introducing GenAI models with LLMs.
  • Creating data science models that will generate ROI.

11:05 am - 11:30 am Developing and Deploying Custom AI Models

11:30 am - 12:00 pm SPONSORED SESSION: Deploying and Integrating Advanced AI Models

  • Delivering Enterprise-Grade Generative AI Powered by a Purpose-Built, Full Stack Platform: Strategies for integrating AI model outputs into existing business workflows.
  • Infrastructure adaptations needed to support widespread use of foundation models. Leveraging scalable and elastic infrastructure for efficient and affordable model training and deployment.

12:00 pm - 12:40 pm PANEL: Real-World Applications of Custom Models to Build AI Infrastructure

Dara Sosulski - Head of Artificial Intelligence and Model Management, HSBC
  • Applications of Custom Foundation Models: How tailored models provide competitive advantages. Real-world impact of deploying custom foundation models in enterprise environments.
  • Emerging Architectures and Hardware Developments: The impact of new hardware on custom model training and deployment. Integrating advanced hardware solutions to optimise AI workflows.

12:40 pm - 1:35 pm Lunch

1:35 pm - 1:40 pm Chair Remarks

1:40 pm - 2:05 pm From Model Development to Real-World Integration

Robin Mobasseri - Executive VP of AI and Analytics Implementation and Services, Wells Fargo

2:05 pm - 2:35 pm SPONSORED SESSION: Enterprise-Scale AI: From Concept to Production

  • Scalability Strategies for AI Models: Explore the best practices for deploying AI models that can scale across large enterprise environments, ensuring performance and efficiency.
  • End-to-End Automation of AI Pipelines: Learn how to automate the lifecycle of AI applications, from data ingestion to model training and deployment, using modern MLOps practices.
  • Ensuring Security and Compliance in AI Deployments: Discover how to secure AI solutions at scale, while meeting regulatory and compliance requirements in various industries.

2:35 pm - 3:00 pm Deploying AI Products in a Rapidly Evolving Landscape: Integration and Data Readiness

  • Adapting to the Evolution of Foundational AI Models: Strategies for staying current with the latest AI advancements and integrating them into your deployment pipeline.
  • Seamlessly Integrating AI into Products: Best practices for embedding AI capabilities into your products while managing model lifecycle and user experience.
  • Ensuring AI-Ready Data for Optimal Performance: The importance of preparing, managing, and maintaining high-quality data as the foundation of successful AI models.
  • Scaling AI Models in a Dynamic Environment: Techniques for effectively scaling AI solutions, addressing challenges related to infrastructure, data volume, and model complexity.

3:00 pm - 3:40 pm PANEL: Specialised Infrastructure to Scale AI Applications and Add Value to AI Output

Patrick Lastennet - VP Strategic Partnerships, OPCORE
  • Scalability and delivery of custom AI workloads through specialised infrastructure, enabling rapid deployment and scaling of AI models across distributed environments.
  • Insights into the capabilities necessary to deliver a custom compute platform that meets the demanding requirements of AI, ensuring optimized performance and efficiency.
  • How hyperscalers are enhancing AI infrastructure, providing scalable, high-performance resources that can support vast AI workloads while maintaining flexibility and cost-effectiveness.

3:40 pm - 4:05 pm Networking & Refreshments

4:05 pm - 4:10 pm Chair Remarks

4:10 pm - 4:35 pm The Critical Role of Responsible AI in Building Robust and Scalable AI Infrastructure

Tania Dias - Global VP, AI Adoption & Governance, IKEA

Responsible AI is the key to creating scalable and robust AI infrastructures that can sustain long-term growth while ensuring ethical practices. This presentation delves into how aligning AI operations with responsible frameworks enhances system reliability, optimizes infrastructure, and fosters trust among consumers and stakeholders. Discover strategies to mitigate risks, improve performance, and build AI systems that not only meet business needs but also uphold ethical standards for a better, more sustainable future.

4:35 pm - 5:00 pm Building a Unified ML Journey (from Exploratory to Production) Across the ML Community

Hisham Mohamed - Director of Engineering, Machine Learning Platform, Expedia Group
Sanchit Juneja - Director-Product (Big Data & Machine Learning Platform), booking.com

5:25 pm - 6:05 pm PANEL: Delivering Scalable Enterprise Deployment

  • Enterprise-wide AI Integration: Focus on best practices for integrating custom AI models across diverse enterprise platforms, ensuring consistent performance, scalability, and streamlined workflows to accelerate AI-driven innovation throughout the organisation.
  • Implementing efficient MLOps practices across the organisation.
  • Optimizing Scalable AI Infrastructure: Design infrastructure that supports seamless deployment of custom AI models across the enterprise by leveraging cloud-based platforms, distributed computing, and hybrid solutions. Balance cost, performance, and operational efficiency while addressing evolving computational and data demands for scalable enterprise-wide adoption.

6:05 pm - 6:10 pm Closing Remarks and Networking Drinks