Tools and techniques for model selection, training, and customisation, including foundation models, fine-tuned models, and centralised repositories to streamline AI development.
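As an illustration of what this could look like in practice, here is a minimal sketch of pulling a foundation model from a centralised repository and fine-tuning it for a downstream task. The Hugging Face Hub, the transformers/datasets libraries, the model name, and the dataset are all assumptions for the example, not tools prescribed by the session.

```python
# Minimal sketch: fetch a foundation model from a central hub and fine-tune it.
# Assumes Hugging Face Hub as the "centralised repository"; swap in your own registry as needed.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "distilbert-base-uncased"   # hypothetical choice of foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")           # illustrative dataset, not from the talk

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=16,
    num_train_epochs=1,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```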
Deploying and Orchestrating AI/ML Models: How to deploy, distribute, and orchestrate AI/ML models so that your solution stays accurate, reliable, and scalable; how to account for the distinct requirements of training and inference when designing AI/ML solutions; and how to establish and meet performance benchmarks for each component of your system to ensure optimal operation.
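To make the serving side concrete, here is a minimal sketch of an inference service with a health probe and a per-request latency measurement that can be checked against a benchmark. FastAPI, the /predict contract, and the placeholder model call are assumptions, not the specific stack discussed in the session.

```python
# Minimal sketch: wrap a trained model in an inference service with a health check
# and a simple latency measurement for benchmarking.
import time

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PredictRequest(BaseModel):
    features: list[float]


def model_predict(features: list[float]) -> float:
    # Placeholder for the real model call (e.g. an ONNX or TorchScript session).
    return sum(features) / max(len(features), 1)


@app.get("/healthz")
def healthz():
    # Liveness/readiness probe used by the orchestrator (e.g. Kubernetes).
    return {"status": "ok"}


@app.post("/predict")
def predict(req: PredictRequest):
    start = time.perf_counter()
    score = model_predict(req.features)
    latency_ms = (time.perf_counter() - start) * 1000
    # Reporting latency per request makes it easy to track this component
    # against a target such as "p95 under 100 ms".
    return {"score": score, "latency_ms": latency_ms}
```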
The importance of a robust infrastructure layer, including cloud platforms and specialized hardware (GPUs, TPUs), to support the intensive training and inference workloads of generative AI models.
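As a small illustration of writing code that follows the available hardware, the sketch below picks the best accelerator at runtime and runs a mixed-precision forward pass. PyTorch is an assumption here; the session does not mandate a framework, and TPU support would additionally require torch_xla.

```python
# Minimal sketch: pick the best available accelerator and run a forward pass,
# so the same code works on a laptop CPU or a cloud GPU instance.
import torch


def pick_device() -> torch.device:
    if torch.cuda.is_available():          # NVIDIA GPU (e.g. a cloud A100/H100 instance)
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple-silicon GPU, handy for local experiments
        return torch.device("mps")
    return torch.device("cpu")


device = pick_device()
model = torch.nn.Linear(1024, 1024).to(device)
batch = torch.randn(32, 1024, device=device)

# Mixed precision reduces memory use and speeds up training and inference on GPUs.
with torch.autocast(device_type=device.type, dtype=torch.float16,
                    enabled=device.type == "cuda"):
    out = model(batch)

print(device, out.shape)
```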
How platform engineering for MLOps enhances the orchestration of AI models, focusing on adapting, deploying, and monitoring models effectively within end-user applications.
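One way a platform team might support this is with an experiment-tracking and monitoring layer. The sketch below logs which model version and adaptation are being rolled out and records serving metrics over time. MLflow, the experiment name, and the metric values are assumptions for illustration, not tools or figures from the session.

```python
# Minimal sketch: track a rollout and monitor a live model's behaviour over time.
import random

import mlflow

mlflow.set_experiment("churn-model-monitoring")   # hypothetical experiment name

with mlflow.start_run(run_name="prod-canary"):
    # Record which model version and adaptation are being deployed.
    mlflow.log_param("model_version", "v2.3.1")
    mlflow.log_param("adapter", "lora-customer-support")

    # In production these numbers would come from the serving layer;
    # random values stand in for real telemetry here.
    for step in range(10):
        mlflow.log_metric("p95_latency_ms", 80 + random.random() * 40, step=step)
        mlflow.log_metric("prediction_drift", random.random() * 0.1, step=step)
```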
Check out the incredible speaker line-up to see who will be joining Andre.