Accelerate TensorFlow* Model Inference on CPU with Intel® AI Software and Hardware Technology

Accelerating AI inference on the CPU is a common requirement when AI projects are deployed. Intel has developed a hardware technology that accelerates low-precision inference on the CPU, Intel® Deep Learning Boost, and has also released the Intel® AI Analytics Toolkit on the software side. Combined, the two help users accelerate AI inference on the CPU simply and conveniently.
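
Intel® Deep Learning Boost exposes AVX-512 VNNI instructions that speed up the INT8 arithmetic used in low-precision inference. As a quick sanity check, here is a minimal sketch (an illustration, not part of the session materials; it assumes a Linux host because it reads /proc/cpuinfo) that verifies the CPU reports the avx512_vnni flag:

```python
# Check whether the CPU exposes AVX-512 VNNI, the instruction set behind
# Intel Deep Learning Boost. Linux-only: parses /proc/cpuinfo.
def has_dl_boost(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        return "avx512_vnni" in f.read()

if __name__ == "__main__":
    print("Intel DL Boost (AVX-512 VNNI) available:", has_dl_boost())
```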

This training session focuses on Intel's optimizations for TensorFlow* on the Intel® Xeon® platform and on Intel® Neural Compressor, a tool for AI model optimization and quantization.
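
Intel's TensorFlow optimizations are delivered through the oneDNN library, which ships in stock TensorFlow. As a minimal sketch (assuming a TensorFlow 2.x build on Linux x86-64), the oneDNN code paths can be switched on with the TF_ENABLE_ONEDNN_OPTS environment variable, which must be set before TensorFlow is imported:

```python
import os

# Must be set before the first TensorFlow import; since TensorFlow 2.9 the
# oneDNN optimizations are on by default for Linux x86-64 builds.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

# When the optimizations are active, TensorFlow logs a oneDNN notice at import.
print("TensorFlow:", tf.__version__)
```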

An end-to-end demo will also be shown: train a TensorFlow model in FP32; quantize and optimize it with Intel® Neural Compressor to obtain an INT8 model; then test and compare the performance improvement and accuracy loss of the FP32 and INT8 models on an Intel® Xeon® processor with Intel® Deep Learning Boost in the Intel® DevCloud environment.
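
For the quantization step of that demo, a minimal sketch follows. It assumes the Intel Neural Compressor 2.x Python API and a trained FP32 SavedModel; the paths ./fp32_model and ./int8_model and the dummy calibration data are illustrative placeholders, not the demo's actual inputs:

```python
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# Placeholder calibration data; substitute a representative sample of the
# real training set so activation ranges are calibrated correctly.
dataset = Datasets("tensorflow")["dummy"](shape=(128, 224, 224, 3))
calib_loader = DataLoader(framework="tensorflow", dataset=dataset, batch_size=32)

# Post-training quantization: calibrate the FP32 graph, then emit INT8 ops
# that Intel Deep Learning Boost (VNNI) can execute on Xeon processors.
q_model = fit(
    model="./fp32_model",            # trained FP32 TensorFlow SavedModel
    conf=PostTrainingQuantConfig(),  # defaults to static post-training quantization
    calib_dataloader=calib_loader,
)
q_model.save("./int8_model")
```

Passing an eval_func to fit() would additionally let the tuner bound the INT8 accuracy loss against the FP32 baseline, which is the comparison the demo then runs on DevCloud.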
