Research on wafer-scale chip mapping algorithm based on genetic algorithm
In recent years, with the development of artificial intelligence, deep learning has become one of the most important computing workloads. Next-generation artificial intelligence (AI) and high-performance computing applications place unprecedented demands on the computing power and communication capabilities of computing platforms. Wafer-scale chips integrate ultra-high-density transistors and interconnect capabilities on an entire wafer, and are therefore expected to provide revolutionary computing-power solutions for future AI and supercomputing platforms. At the same time, the enormous computing resources and the unique new architecture of wafer-scale chips pose unprecedented challenges to task mapping algorithms, and related research has become a major focus in academia in recent years. This paper studies the mapping of AI tasks onto wafer-scale hardware resources: an AI algorithm is expressed as multiple convolutional kernels, and a mapping algorithm for wafer-scale chips is designed based on a genetic algorithm that accounts for the computational characteristics of these kernels. Simulation results over a series of mapping tasks verify the effectiveness of the mapping algorithm and reveal the influence of parameters such as execution time and adapter cost on the cost function.
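The abstract describes a genetic algorithm that maps convolutional kernels onto wafer-scale hardware while minimizing a cost function combining execution time and adapter (communication) cost. The sketch below illustrates that general scheme only; it is not the paper's implementation. The grid size, per-kernel compute demands, cost weights, and the Manhattan-distance adapter model are all illustrative assumptions.

```python
# Minimal sketch of a genetic algorithm for mapping convolutional kernels onto
# a wafer modelled as a 2D grid of dies. All parameters are assumed values for
# illustration, not figures from the paper.
import random

GRID_W, GRID_H = 4, 4           # assumed wafer: 4x4 grid of dies
KERNEL_FLOPS = [8, 4, 6, 2, 5]  # assumed per-kernel compute demand
ALPHA, BETA = 1.0, 0.5          # assumed weights: execution time vs. adapter cost

def cost(mapping):
    """Cost = ALPHA * makespan + BETA * inter-die communication hops."""
    # Execution time term: the most heavily loaded die dominates (makespan).
    load = {}
    for k, die in enumerate(mapping):
        load[die] = load.get(die, 0) + KERNEL_FLOPS[k]
    makespan = max(load.values())
    # Adapter/communication term: Manhattan distance between dies hosting
    # consecutive kernels in the (assumed linear) dataflow.
    hops = 0
    for a, b in zip(mapping, mapping[1:]):
        (ax, ay), (bx, by) = divmod(a, GRID_W), divmod(b, GRID_W)
        hops += abs(ax - bx) + abs(ay - by)
    return ALPHA * makespan + BETA * hops

def evolve(pop_size=40, generations=100, seed=0):
    """Evolve kernel-to-die mappings; each individual is a list of die indices."""
    rng = random.Random(seed)
    n_dies = GRID_W * GRID_H
    pop = [[rng.randrange(n_dies) for _ in KERNEL_FLOPS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]        # selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(p1))     # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:              # mutation: remap one kernel
                child[rng.randrange(len(child))] = rng.randrange(n_dies)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("best mapping:", best, "cost:", cost(best))
```

The fitness function makes the abstract's trade-off concrete: raising ALPHA favors spreading kernels across dies to cut makespan, while raising BETA favors clustering them to cut adapter cost.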
LI Chengran; FANG Jiahao; YIN Shouyi; WEI Shaojun; HU Yang
School of Integrated Circuits, Tsinghua University, Beijing 100084
Computer and Automation
wafer-scale chip; genetic algorithm; convolutional network mapping; artificial intelligence; communication overhead
Computer Engineering &amp; Science (《计算机工程与科学》), 2024, No. 6
Pages 993-1000 (8 pages)