Paper Title
Unsupervised Early Exit in DNNs with Multiple Exits
Paper Authors
Paper Abstract
Deep Neural Networks (DNNs) are generally designed as a sequential cascade of differentiable blocks/layers, with a prediction module connected only to the last layer. Prediction modules can instead be attached at multiple points along the backbone, so that inference can stop at an intermediate stage without passing through all the modules. The last exit point may offer a lower prediction error but also incurs more computational resources and latency. An exit point that is 'optimal' in terms of both prediction error and cost is desirable. The optimal exit point may depend on the latent distribution of the tasks and may change from one task type to another. During neural inference, the ground truth of instances may not be available, and the error rate at each exit point cannot be estimated. Hence one faces the problem of selecting the optimal exit in an unsupervised setting. Prior works tackled this problem in an offline supervised setting, assuming that enough labeled data is available to estimate the error rate at each exit point and to tune the parameters for better accuracy. However, pre-trained DNNs are often deployed in new domains for which a large amount of ground truth may not be available. We model the problem of exit selection as an unsupervised online learning problem and use bandit theory to identify the optimal exit point. Specifically, we focus on ElasticBERT, a pre-trained multi-exit DNN, and demonstrate that it 'nearly' satisfies the Strong Dominance (SD) property, making it possible to learn the optimal exit in an online setup without knowing the ground-truth labels. We develop an upper confidence bound (UCB) based algorithm, named UEE-UCB, that provably achieves sub-linear regret under the SD property. Our method thus provides a means to adaptively learn domain-specific optimal exit points in multi-exit DNNs. We empirically validate our algorithm on the IMDb and Yelp datasets.
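To make the bandit formulation concrete, here is a minimal sketch of a generic UCB1-style loop that treats each exit point as an arm and selects among them from an unsupervised proxy reward. This is not the paper's UEE-UCB algorithm (the abstract does not specify its reward construction under the SD property); the confidence-minus-cost proxy reward, the exit qualities, and the cost values below are all illustrative assumptions.

```python
import math
import random

def ucb_exit_selection(num_exits, num_rounds, proxy_reward):
    """Generic UCB1 loop over exit points (arms).

    proxy_reward(exit_idx) -> float in [0, 1]; an unsupervised signal
    (e.g., a confidence score minus a per-exit compute cost) observed
    without ground-truth labels. This reward definition is an assumption,
    not the paper's.
    """
    counts = [0] * num_exits
    means = [0.0] * num_exits

    for t in range(1, num_rounds + 1):
        if t <= num_exits:
            arm = t - 1  # play each exit once to initialize estimates
        else:
            # UCB1 index: empirical mean + exploration bonus
            arm = max(
                range(num_exits),
                key=lambda k: means[k] + math.sqrt(2 * math.log(t) / counts[k]),
            )
        r = proxy_reward(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update

    return max(range(num_exits), key=lambda k: means[k])

if __name__ == "__main__":
    # Toy usage with made-up per-exit quality/cost trade-offs.
    quality = [0.60, 0.72, 0.78, 0.80]  # hypothetical per-exit score
    cost = [0.00, 0.05, 0.10, 0.20]     # hypothetical latency penalty

    def reward(k):
        # Noisy, clipped proxy reward for exit k (illustrative only).
        return max(0.0, min(1.0, random.gauss(quality[k] - cost[k], 0.1)))

    best = ucb_exit_selection(num_exits=4, num_rounds=5000, proxy_reward=reward)
    print("estimated optimal exit:", best)
```

In this toy setup the exploration bonus shrinks as an exit is sampled more often, so the loop converges to the exit with the best quality-cost trade-off without ever seeing labels; the paper's contribution is showing that, under the (near) SD property of ElasticBERT, such label-free online selection provably attains sub-linear regret.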