Unlocking and Accelerating Generative AI with OpenVINO™ Toolkit

Generative AI has recently taken the world by storm, with models like Stable Diffusion and GPT demonstrating the ability to create high-quality data and solve complex problems. However, evaluating and adopting foundation models and generative AI can quickly become challenging due to large compute requirements and deployment roadblocks. In this tech talk, we'll explore how the Intel® OpenVINO™ toolkit helps accelerate the end-to-end process of building, optimizing, and deploying generative AI.

Key takeaways:

- We'll demonstrate how to run Stable Diffusion and advanced transformer models on Intel® hardware, including CPUs and GPUs.
- We'll then walk through key optimizations with OpenVINO™, including OpenVINO™ support for Hugging Face, to unlock flexible ways to develop and deploy your generative AI applications at the edge.

Accelerate and deploy generative AI on Intel® CPUs and GPUs for your use cases, leveraging Intel® hardware acceleration and OpenVINO™.
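As a taste of the Hugging Face integration covered in the talk, here is a minimal sketch using the Optimum Intel library's OpenVINO™ pipeline class to run Stable Diffusion on Intel® hardware. The model ID and output filename are illustrative; it assumes `optimum-intel` is installed with the OpenVINO™ extras (e.g. `pip install "optimum[openvino]"`).

```python
from optimum.intel import OVStableDiffusionPipeline

# Load the pipeline and export the PyTorch weights to OpenVINO IR on the fly.
# Runs on the CPU by default.
pipe = OVStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID; substitute your own
    export=True,
)

# Optionally target an Intel GPU instead of the CPU.
# pipe.to("GPU")

# Generate an image from a text prompt and save it to disk.
image = pipe("a photo of an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```

Once exported, the OpenVINO™ IR model can be saved with `pipe.save_pretrained(...)` and reloaded without re-exporting, which keeps deployment on edge devices lightweight.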

Download Presentation

