High Technology Letters, 2019, Vol.25, Issue 4: 417-425 (9 pages). DOI: 10.3772/j.issn.1006-6748.2019.04.010
Optimizing deep learning inference on mobile devices with neural network accelerators
Abstract
Key words: machine learning inference / neural network accelerator (NNA) / low latency / kernel fusion / in-advance compilation
Citation: Zeng Xi, Xu Yunlong, Zhi Tian. Optimizing deep learning inference on mobile devices with neural network accelerators[J]. High Technology Letters, 2019, 25(4): 417-425.
Funding
Supported by the National Key Research and Development Program of China (No.2017YFB1003101, 2018AAA0103300, 2017YFA0700900), the National Natural Science Foundation of China (No.61702478, 61732007, 61906179), the Beijing Natural Science Foundation (No.JQ18013), the National Science and Technology Major Project (No.2018ZX01031102), and the Beijing Academy of Artificial Intelligence.