Nvidia Deep Learning Accelerator Will Be Integrated into Arm’s Machine Learning Platform

By Luca Ruggeri on April 5, 2018

A few days ago, Nvidia and Arm announced a partnership to bring deep learning inferencing to the billions of mobile, consumer electronics, and Internet of Things devices.

The Nvidia Deep Learning Accelerator (NVDLA), derived from Nvidia’s Xavier system-on-chip for autonomous machines, is a free and open architecture that promotes a standard way of designing deep learning inference accelerators. Arm and Nvidia aim to integrate the NVDLA architecture into Arm’s Project Trillium machine learning platform.

The partnership will make it simple for IoT chip companies to integrate AI into their designs and help put intelligent, affordable products into the hands of billions of consumers worldwide.

“This is a win/win for IoT, mobile and embedded chip companies looking to design accelerated AI inferencing solutions,” said Karl Freund, lead analyst for deep learning at Moor Insights & Strategy. “NVIDIA is the clear leader in ML training and Arm is the leader in IoT end points, so it makes a lot of sense for them to partner on IP.”

The integration of NVDLA with Project Trillium is intended to give deep learning developers high inference performance while retaining Arm’s flexibility and scalability across a wide range of IoT devices.
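
For developers, workloads like this are typically driven through a lightweight inference runtime that hands the heavy tensor math to whatever accelerator the chip exposes. As a rough illustration only (not Arm’s or Nvidia’s own tooling), the sketch below runs a TensorFlow Lite classification model on a device; the model file name and the dummy input are hypothetical placeholders.

```python
# Minimal sketch: on-device inference with TensorFlow Lite, the kind of
# workload an NVDLA-style accelerator is meant to speed up.
# "mobilenet_v2.tflite" is a hypothetical model file.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_v2.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy tensor of the right shape and type; a real application
# would pass preprocessed camera or sensor data here.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class:", int(np.argmax(scores)))
```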
