Paper Title
Self-Supervised Implicit Attention: Guided Attention by The Model Itself
Paper Authors
Paper Abstract
We propose Self-Supervised Implicit Attention (SSIA), a new approach that adaptively guides deep neural network models to develop attention by exploiting the properties of the models themselves. SSIA is a novel attention mechanism that requires no extra parameters, computation, or memory access costs during inference, in contrast to existing attention mechanisms. In short, by treating attention weights as higher-level semantic information, we reconsider the implementation of existing attention mechanisms and propose generating supervisory signals from higher network layers to guide lower network layers in their parameter updates. We achieve this by constructing a self-supervised learning task from the hierarchical features of the network itself, which operates only during the training stage. To verify the effectiveness of SSIA, we implemented it as a specific module (called an SSIA block) in convolutional neural network models and validated it on several image classification datasets. The experimental results show that an SSIA block can significantly improve model performance, even outperforming many popular attention methods that require additional parameters and computation costs, such as Squeeze-and-Excitation and the Convolutional Block Attention Module. Our implementation will be available on GitHub.
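To make the training-only guidance concrete, the following is a minimal, illustrative PyTorch sketch of the general idea described in the abstract, not the paper's actual SSIA block: a higher layer's feature map is collapsed into a spatial attention target that supervises the lower layer's implicit attention through an auxiliary loss, and nothing extra is computed at inference. The function names (`spatial_attention_map`, `ssia_auxiliary_loss`) and the choice of KL divergence are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def spatial_attention_map(features: torch.Tensor) -> torch.Tensor:
    """Collapse a feature map (N, C, H, W) into a spatial distribution
    (N, 1, H, W) by averaging over channels and applying a softmax over
    spatial positions."""
    attn = features.mean(dim=1, keepdim=True)  # (N, 1, H, W)
    n, _, h, w = attn.shape
    return F.softmax(attn.view(n, -1), dim=1).view(n, 1, h, w)


def ssia_auxiliary_loss(low_feats: torch.Tensor,
                        high_feats: torch.Tensor) -> torch.Tensor:
    """Hypothetical training-only loss: the higher layer's spatial map acts
    as a (detached) supervisory signal that the lower layer's implicit
    attention is pushed to match."""
    # Build the target from the higher layer and stop gradients into it.
    target = spatial_attention_map(high_feats).detach()
    # Match the lower layer's spatial resolution, then renormalize so the
    # target remains a valid distribution over spatial positions.
    target = F.interpolate(target, size=low_feats.shape[-2:],
                           mode="bilinear", align_corners=False)
    target = target / target.sum(dim=(2, 3), keepdim=True)
    # Implicit attention of the lower layer.
    pred = spatial_attention_map(low_feats)
    # KL divergence between the two spatial distributions.
    return F.kl_div(pred.clamp_min(1e-8).log(), target, reduction="batchmean")
```

Under these assumptions, the auxiliary term would simply be added to the classification loss during training (e.g., `loss = ce_loss + lambda_ssia * ssia_auxiliary_loss(low_feats, high_feats)`), and the model is deployed unchanged, which matches the abstract's claim of zero inference-time overhead.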