Paper Title
SANCL: Multimodal Review Helpfulness Prediction with Selective Attention and Natural Contrastive Learning
Paper Authors
Paper Abstract
With the boom of e-commerce, Multimodal Review Helpfulness Prediction (MRHP), which aims to sort product reviews according to their predicted helpfulness scores, has become a research hotspot. Previous work on this task focuses on attention-based modality fusion, information integration, and relation modeling, but exposes two main drawbacks: 1) the model may fail to capture the truly essential information due to its indiscriminate attention formulation; 2) it lacks appropriate modeling methods that take full advantage of the correlation among the provided data. In this paper, we propose SANCL: Selective Attention and Natural Contrastive Learning for MRHP. SANCL adopts a probe-based strategy to enforce high attention weights on regions of greater significance. It also constructs a contrastive learning framework based on natural matching properties in the dataset. Experimental results on two benchmark datasets with three categories show that SANCL achieves state-of-the-art performance with lower memory consumption.
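To make the two mechanisms in the abstract concrete, below is a minimal NumPy sketch of how they might be realized. It is an illustration under assumptions, not the paper's actual implementation: `probe_mask` stands in for the probe-based saliency signal (a hypothetical binary mask over attention positions), and the contrastive objective is sketched as an InfoNCE-style loss that treats each naturally matched (review, product) pair in a batch as the positive and the remaining pairs as negatives.

```python
import numpy as np

def selective_attention(scores, probe_mask, penalty=-1e9):
    """Softmax over attention scores, restricted to probe-selected regions.

    probe_mask: 1 where the (hypothetical) probe marks a region as salient,
    0 elsewhere; masked-out positions receive a large negative penalty so
    their attention weight collapses toward zero.
    """
    masked = np.where(probe_mask == 1, scores, scores + penalty)
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def natural_contrastive_loss(review_emb, product_emb, tau=0.1):
    """InfoNCE-style loss using natural matches as positives.

    Row i of review_emb and row i of product_emb are assumed to be a
    naturally matched pair from the dataset; all other rows in the batch
    serve as in-batch negatives.
    """
    r = review_emb / np.linalg.norm(review_emb, axis=1, keepdims=True)
    p = product_emb / np.linalg.norm(product_emb, axis=1, keepdims=True)
    logits = r @ p.T / tau                       # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # pull matched pairs together
```

In this sketch the selective-attention step simply re-normalizes attention mass onto probe-selected regions, while the contrastive term exploits the free supervision that each review naturally belongs to exactly one product.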