The World's Largest Deep Learning Challenge at Supercomputer Fugaku

Deep learning is advancing rapidly, and the computational requirements for model development keep growing, because computationally intensive training runs must be repeated many times while varying hyperparameters and other conditions. To further improve deep learning performance, it is therefore important to develop training techniques that exploit large-scale, high-performance systems with many compute nodes, such as supercomputers. In this talk, I will introduce our efforts to achieve the world's best performance on the MLPerf HPC benchmark, which measures training performance by simultaneously training multiple models at large scale, using the world's first Arm-instruction-set-based supercomputer, Fugaku.
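
To illustrate the "training multiple models simultaneously" idea that MLPerf HPC's weak-scaling measurement is built around, here is a minimal sketch using MPI. It is not Fugaku's or the speaker's actual implementation; the group size, learning rate, and dummy gradient computation are assumptions for illustration only. The global set of MPI ranks is split into groups, each group trains its own independent model instance, and gradient averaging happens only within a group.

```python
# Minimal sketch (illustrative, not the actual Fugaku/MLPerf HPC code):
# partition MPI ranks into groups, each training one model instance in parallel.
from mpi4py import MPI
import numpy as np

PROCS_PER_INSTANCE = 4  # assumed number of ranks per model instance

world = MPI.COMM_WORLD
instance_id = world.Get_rank() // PROCS_PER_INSTANCE

# Split the world communicator so each model instance gets its own
# communicator for data-parallel gradient exchange.
instance_comm = world.Split(instance_id, world.Get_rank())

rng = np.random.default_rng(seed=instance_id)
weights = rng.standard_normal(10)

for step in range(100):
    # Each rank computes a local (dummy) gradient on its shard of the data.
    local_grad = rng.standard_normal(10)

    # Average gradients only within this instance's communicator;
    # different instances train completely independently of one another.
    global_grad = np.empty_like(local_grad)
    instance_comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    global_grad /= instance_comm.Get_size()

    weights -= 0.01 * global_grad  # SGD update with an assumed learning rate

if instance_comm.Get_rank() == 0:
    print(f"instance {instance_id} finished training")
```

Run with, for example, `mpiexec -n 16 python train_instances.py` to train four independent model instances of four ranks each; the benchmark's weak-scaling metric then reflects how many such instances the system can train concurrently.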

Download Presentation Deck
