Paper Title

Relation-Aware Language-Graph Transformer for Question Answering

Authors

Jinyoung Park, Hyeong Kyu Choi, Juyeon Ko, Hyeonjin Park, Ji-Hoon Kim, Jisu Jeong, Kyungmin Kim, Hyunwoo J. Kim

Abstract

Question Answering (QA) is a task that entails reasoning over natural language contexts, and many relevant works augment language models (LMs) with graph neural networks (GNNs) to encode the Knowledge Graph (KG) information. However, most existing GNN-based modules for QA do not take advantage of the rich relational information of KGs and depend on limited information interaction between the LM and the KG. To address these issues, we propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations in a unified manner. Specifically, QAT constructs Meta-Path tokens, which learn relation-centric embeddings based on diverse structural and semantic relations. Then, our Relation-Aware Self-Attention module comprehensively integrates different modalities via the Cross-Modal Relative Position Bias, which guides information exchange between relevant entities of different modalities. We validate the effectiveness of QAT on commonsense question answering datasets like CommonsenseQA and OpenBookQA, and on a medical question answering dataset, MedQA-USMLE. On all the datasets, our method achieves state-of-the-art performance. Our code is available at http://github.com/mlvlab/QAT.
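The abstract's central idea is a single self-attention over both language tokens and graph (Meta-Path) tokens, with an additive Cross-Modal Relative Position Bias steering attention between the two modalities. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the module name `RelationAwareSelfAttention`, the one-scalar-per-head-per-modality-pair bias, and all tensor shapes are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): joint self-attention over
# concatenated language and graph tokens, with a learned additive bias indexed
# by the (query modality, key modality) pair of each attention score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAwareSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # One learned scalar per head for each modality pair
        # (LM->LM, LM->KG, KG->LM, KG->KG); a simple stand-in for the
        # paper's Cross-Modal Relative Position Bias.
        self.modal_bias = nn.Parameter(torch.zeros(num_heads, 2, 2))

    def forward(self, x: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim); modality: (N,) with 0 = language token, 1 = graph token
        B, N, _ = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, H, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5  # (B, H, N, N)
        # Look up the bias for every query-key modality pair: (H, N, N).
        bias = self.modal_bias[:, modality[:, None], modality[None, :]]
        attn = F.softmax(attn + bias.unsqueeze(0), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        return self.proj(out)

# Usage: 4 language tokens followed by 3 graph (Meta-Path) tokens.
tokens = torch.randn(2, 7, 64)
modality = torch.tensor([0, 0, 0, 0, 1, 1, 1])
out = RelationAwareSelfAttention(64)(tokens, modality)
print(out.shape)  # torch.Size([2, 7, 64])
```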
