


[R] AI Benchmark: All About Deep Learning on Smartphones in 2019

[arXiv abstract]: The performance of mobile AI accelerators has been evolving rapidly in the past two years, nearly doubling with each new generation of SoCs. The current 4th generation of mobile NPUs is already approaching the results of CUDA-compatible Nvidia graphics cards presented not long ago, which, together with the increased capabilities of mobile deep learning frameworks, makes it possible to run complex and deep AI models on mobile devices. In this paper, we evaluate the performance and compare the results of all chipsets from Qualcomm, HiSilicon, Samsung, MediaTek and Unisoc that provide hardware acceleration for AI inference. We also discuss the recent changes in the Android ML pipeline and provide an overview of the deployment of deep learning models on mobile devices. All numerical results provided in this paper can be found and are regularly updated on the official project website:

[Figure] Performance evolution of mobile AI accelerators: image throughput for the float Inception-V3 model.
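The throughput metric in the figure (images per second for a fixed model) can be approximated with a simple timing loop. A minimal sketch, assuming an arbitrary `run_inference` callable standing in for one forward pass of the model (the helper name and parameters are illustrative, not from the paper):

```python
import time

def measure_throughput(run_inference, num_images=50, warmup=5):
    """Estimate inference throughput in images per second.

    run_inference: a zero-argument callable performing one forward pass
    (a stand-in here for an actual model invocation).
    """
    # Warm-up runs to exclude one-time initialization and caching costs
    for _ in range(warmup):
        run_inference()
    start = time.perf_counter()
    for _ in range(num_images):
        run_inference()
    elapsed = time.perf_counter() - start
    return num_images / elapsed  # images per second
```

In practice the callable would wrap something like a TFLite interpreter invocation on-device; the warm-up phase matters on mobile NPUs, where the first run often includes model compilation for the accelerator.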

The paper discusses the following topics:

  1. Four generations of mobile NPUs
  2. Hardware acceleration resources for AI inference on each Android mobile SoC platform
  3. The Android ecosystem for running deep learning models
  4. Quantized and floating-point performance of all generations of mobile NPUs
  5. Performance comparison of FP inference on mobile NPUs vs. Intel CPUs vs. Nvidia GPUs
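Item 4 contrasts quantized with floating-point inference. The core idea behind quantized inference can be sketched with a toy symmetric int8 scheme; the function names below are illustrative and do not come from the paper or any particular mobile framework:

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127].

    A single scale factor is derived from the largest magnitude in the
    tensor, so zero maps exactly to zero.
    """
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    quantized = [int(round(v / scale)) for v in values]
    return quantized, scale

def dequantize_int8(quantized, scale):
    """Recover approximate float values from the int8 codes."""
    return [q * scale for q in quantized]
```

Real frameworks use per-tensor or per-channel variants of this mapping; the point is that int8 arithmetic trades a small, bounded rounding error for much higher throughput on NPUs, which is why the paper reports quantized and float results separately.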

The full paper is available on arXiv:

submitted by /u/aiff22