Paper Title


Logic-Guided Data Augmentation and Regularization for Consistent Question Answering

Paper Authors

Akari Asai, Hannaneh Hajishirzi

Paper Abstract


Many natural language questions require qualitative, quantitative or logical comparisons between two entities or events. This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions by integrating logic rules and neural models. Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model. Improving the global consistency of predictions, our approach achieves large improvements over previous methods in a variety of question answering (QA) tasks including multiple-choice qualitative reasoning, cause-effect reasoning, and extractive machine reading comprehension. In particular, our method significantly improves the performance of RoBERTa-based models by 1-5% across datasets. We advance the state of the art by around 5-8% on WIQA and QuaRel and reduce consistency violations by 58% on HotpotQA. We further demonstrate that our approach can learn effectively from limited data.
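The abstract describes two components: logic-guided augmentation of labeled comparison questions and a consistency-based regularizer. A minimal sketch of one such logical rule, symmetry (swapping the two compared entities must flip the answer), and a corresponding penalty term might look like the following; the data format and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (illustrative, not the authors' code) of logic-guided
# data augmentation for two-entity comparison questions.
# Symmetry rule: swapping the compared entities must flip the label.

def augment_symmetric(ex):
    """Create the mirrored counterpart of a comparison example.

    `ex` is assumed to be {"question": str, "entities": (a, b), "label": 0 or 1},
    where label 0 means the first entity is the answer.
    """
    a, b = ex["entities"]
    # Swap the two entity mentions in the question text.
    swapped = ex["question"].replace(a, "\x00").replace(b, a).replace("\x00", b)
    return {"question": swapped, "entities": (b, a), "label": 1 - ex["label"]}

def symmetry_penalty(p_original, p_mirrored):
    """Consistency-based regularization term: the probability the model
    assigns to the first option on the original question should equal the
    probability it assigns against that option on the mirrored question."""
    return abs(p_original - (1.0 - p_mirrored))

example = {
    "question": "Is iron heavier than cotton?",
    "entities": ("iron", "cotton"),
    "label": 0,  # "iron" is the correct answer
}
mirrored = augment_symmetric(example)
```

In training, a penalty of this kind would be added to the ordinary task loss so that the model is pushed toward globally consistent predictions; it is shown here on raw probabilities only to make the symmetry constraint concrete.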
