
Optimizing deep learning inference on mobile devices with neural network accelerators

Zeng Xi, Xu Yunlong, Zhi Tian

High Technology Letters, 2019, Vol. 25, Issue 4: 417-425. DOI: 10.3772/j.issn.1006-6748.2019.04.010


Zeng Xi 1, Xu Yunlong 2, Zhi Tian 3

Author Information

  • 1. Intelligent Processor Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P.R.China
  • 2. University of Chinese Academy of Sciences, Beijing 100049, P.R.China
  • 3. Cambricon Technologies Corporation Limited, Beijing 100191, P.R.China

Abstract

Keywords

machine learning inference / neural network accelerator (NNA) / low latency / kernel fusion / in-advance compilation


Citation

Zeng Xi, Xu Yunlong, Zhi Tian. Optimizing deep learning inference on mobile devices with neural network accelerators[J]. High Technology Letters, 2019, 25(4): 417-425.

Funding

Supported by the National Key Research and Development Program of China (No. 2017YFB1003101, 2018AAA0103300, 2017YFA0700900), the National Natural Science Foundation of China (No. 61702478, 61732007, 61906179), the Beijing Natural Science Foundation (No. JQ18013), the National Science and Technology Major Project (No. 2018ZX01031102), and the Beijing Academy of Artificial Intelligence.

ISSN 1006-6748