Paper Title

Improving the Robustness of Summarization Models by Detecting and Removing Input Noise

Paper Authors

Kundan Krishna, Yao Zhao, Jie Ren, Balaji Lakshminarayanan, Jiaming Luo, Mohammad Saleh, Peter J. Liu

Paper Abstract

The evaluation of abstractive summarization models typically uses test data that is identically distributed as training data. In real-world practice, documents to be summarized may contain input noise caused by text extraction artifacts or data pipeline bugs. The robustness of model performance under distribution shift caused by such noise is relatively under-studied. We present a large empirical study quantifying the sometimes severe loss in performance (up to 12 ROUGE-1 points) from different types of input noise for a range of datasets and model sizes. We then propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any extra training, auxiliary models, or even prior knowledge of the type of noise. Our proposed approach effectively mitigates the loss in performance, recovering a large fraction of the performance drop, sometimes as large as 11 ROUGE-1 points.
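The abstract does not specify how the noise is injected or detected. The sketch below only illustrates the evaluation setup described above: corrupting a clean document with extraction-artifact-style noise and comparing ROUGE scores of summaries produced from the clean and noisy versions. The snippet list, the `add_noise` helper, and the commented-out `summarize`/`rouge1` calls are illustrative placeholders, not the authors' method or code.

```python
import random

# Illustrative noise snippets imitating text-extraction artifacts and
# data pipeline bugs (HTML remnants, boilerplate, code, URLs). These
# examples are assumptions, not the noise types used in the paper.
NOISE_SNIPPETS = [
    '<div class="nav"> Home | About | Contact </div>',
    "Cookie policy: we use cookies to improve your experience.",
    "function track() { var x = 0; return x; }",
    "https://example.com/article?id=123&ref=rss",
]

def add_noise(document: str, noise_fraction: float = 0.5, seed: int = 0) -> str:
    """Interleave noise snippets between sentences until roughly
    `noise_fraction` of the tokens in the result are noise."""
    rng = random.Random(seed)
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    doc_tokens = sum(len(s.split()) for s in sentences)
    target_noise_tokens = int(noise_fraction / (1 - noise_fraction) * doc_tokens)

    noisy, injected = [], 0
    for sent in sentences:
        noisy.append(sent + ".")
        if injected < target_noise_tokens:
            snippet = rng.choice(NOISE_SNIPPETS)
            noisy.append(snippet)
            injected += len(snippet.split())
    return " ".join(noisy)

# Usage sketch: summarize both versions with the same model and compare
# ROUGE-1 against the reference to quantify the robustness gap.
# `summarize` and `rouge1` are hypothetical placeholders for your own
# summarization model and metric implementation.
# noisy_doc = add_noise(clean_doc, noise_fraction=0.5)
# drop = rouge1(summarize(clean_doc), ref) - rouge1(summarize(noisy_doc), ref)
```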
