This talk presents an end-to-end approach to MLOps, applicable to both GenAI and Predictive AI, that leverages cloud-native techniques such as Kubernetes and containerization to better secure and trace ML models throughout their entire lifecycle.
Starting from a DevOps pipeline, we introduce a Model Registry for lifecycle tracking and package models as OCI artifacts, securing them with signatures and attestations. Finally, we deploy the models with KServe ModelCar and use policy controllers to safeguard ML assets.
Attendees will gain practical insights for enhancing the security, traceability, and compliance of their MLOps workflows.