Paper Title

AttX: Attentive Cross-Connections for Fusion of Wearable Signals in Emotion Recognition

Paper Authors

Anubhav Bhatti, Behnam Behinaein, Paul Hungler, Ali Etemad

Paper Abstract

We propose cross-modal attentive connections, a new dynamic and effective technique for multimodal representation learning from wearable data. Our solution can be integrated into any stage of the pipeline, i.e., after any convolutional layer or block, to create intermediate connections between individual streams responsible for processing each modality. Additionally, our method benefits from two properties. First, it can share information uni-directionally (from one modality to the other) or bi-directionally. Second, it can be integrated into multiple stages at the same time to further allow network gradients to be exchanged in several touch-points. We perform extensive experiments on three public multimodal wearable datasets, WESAD, SWELL-KW, and CASE, and demonstrate that our method can effectively regulate and share information between different modalities to learn better representations. Our experiments further demonstrate that once integrated into simple CNN-based multimodal solutions (2, 3, or 4 modalities), our method can result in superior or competitive performance to state-of-the-art and outperform a variety of baseline uni-modal and classical multimodal methods.
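The abstract describes attentive connections inserted between per-modality CNN streams, either uni-directionally or bi-directionally and at one or several stages. The snippet below is a minimal PyTorch-style sketch of what one such cross-connection could look like; the squeeze-and-excitation-style channel gating, the module and variable names, and the hyper-parameters are illustrative assumptions and do not reproduce the authors' exact design.

```python
# Minimal sketch of an attentive cross-connection between two 1D-CNN
# modality streams. The gating design (global pooling followed by a small
# bottleneck MLP with a sigmoid) is an assumption for illustration only.
import torch
import torch.nn as nn


class AttentiveCrossConnection(nn.Module):
    """Modulates the target stream's features with an attention vector
    computed from the source stream's features (uni-directional)."""

    def __init__(self, src_channels: int, tgt_channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),            # (B, C_src, T) -> (B, C_src, 1)
            nn.Flatten(),                       # -> (B, C_src)
            nn.Linear(src_channels, src_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(src_channels // reduction, tgt_channels),
            nn.Sigmoid(),                       # channel-wise attention in [0, 1]
        )

    def forward(self, src_feat: torch.Tensor, tgt_feat: torch.Tensor) -> torch.Tensor:
        # src_feat: (B, C_src, T), tgt_feat: (B, C_tgt, T)
        attn = self.gate(src_feat).unsqueeze(-1)   # (B, C_tgt, 1)
        return tgt_feat + attn * tgt_feat          # residual, attended cross-talk


# A bi-directional connection can be built from two uni-directional modules,
# one per direction, applied at the same stage of the two streams.
if __name__ == "__main__":
    ecg_feat = torch.randn(8, 32, 128)   # hypothetical ECG stream features
    eda_feat = torch.randn(8, 64, 128)   # hypothetical EDA stream features
    ecg_to_eda = AttentiveCrossConnection(src_channels=32, tgt_channels=64)
    eda_to_ecg = AttentiveCrossConnection(src_channels=64, tgt_channels=32)
    new_eda = ecg_to_eda(ecg_feat, eda_feat)   # EDA stream informed by ECG
    new_ecg = eda_to_ecg(eda_feat, ecg_feat)   # ECG stream informed by EDA
    print(new_eda.shape, new_ecg.shape)
```

Because each connection only adds a gated residual to an existing stream, it can be dropped in after any convolutional block and stacked at multiple stages, which matches the paper's description of integrating the connections at several touch-points.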
