Paper Title
A Survey on Computationally Efficient Neural Architecture Search
Paper Authors
Paper Abstract
Neural architecture search (NAS) has recently become increasingly popular in the deep learning community, mainly because it offers users without rich expertise an opportunity to benefit from the success of deep neural networks (DNNs). However, NAS remains laborious and time-consuming: its search process requires a large number of performance estimations, and training DNNs is computationally intensive. To address this major limitation, improving computational efficiency is essential in the design of NAS. However, a systematic overview of computationally efficient NAS (CE-NAS) methods is still lacking. To fill this gap, we provide a comprehensive survey of the state of the art in CE-NAS by categorizing the existing work into proxy-based and surrogate-assisted NAS methods, together with a thorough discussion of their design principles and a quantitative comparison of their performance and computational complexity. The remaining challenges and open research questions are also discussed, and promising research topics in this emerging field are suggested.