Paper Title

Discriminative Language Model as Semantic Consistency Scorer for Prompt-based Few-Shot Text Classification

Authors

Xie, Zhipeng, Li, Yahe

Abstract

This paper proposes a novel prompt-based finetuning method (called DLM-SCS) for few-shot text classification by utilizing the discriminative language model ELECTRA that is pretrained to distinguish whether a token is original or generated. The underlying idea is that the prompt instantiated with the true label should have higher semantic consistency score than other prompts with false labels. Since a prompt usually consists of several components (or parts), its semantic consistency can be decomposed accordingly. The semantic consistency of each component is then computed by making use of the pretrained ELECTRA model, without introducing extra parameters. Extensive experiments have shown that our model outperforms several state-of-the-art prompt-based few-shot methods.
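The core idea in the abstract can be sketched in a few lines: instantiate the prompt template once per candidate label, score each instantiation's semantic consistency token by token, and predict the label whose prompt scores highest. The sketch below is a minimal illustration under assumptions, not the paper's implementation: the `toy_scorer` is a hypothetical stand-in for ELECTRA's replaced-token-detection head, which in DLM-SCS would supply the per-token probability that a token is original rather than generated.

```python
# Sketch of the DLM-SCS decision rule. The real method scores each prompt
# component with a pretrained ELECTRA discriminator; here a toy stand-in
# scorer plays that role so the control flow is self-contained.
from typing import Callable, Dict, List

def consistency_score(tokens: List[str],
                      token_scorer: Callable[[List[str], int], float]) -> float:
    """Average per-token consistency. In the paper this would be derived
    from ELECTRA's per-token 'original vs. replaced' predictions."""
    return sum(token_scorer(tokens, i) for i in range(len(tokens))) / len(tokens)

def classify(text: str,
             template: str,
             verbalizer: Dict[str, str],
             token_scorer: Callable[[List[str], int], float]) -> str:
    """Predict the label whose instantiated prompt is most consistent."""
    scores = {}
    for label, word in verbalizer.items():
        prompt = template.format(text=text, label=word)
        scores[label] = consistency_score(prompt.split(), token_scorer)
    return max(scores, key=scores.get)

# Hypothetical stand-in scorer: a label token is 'consistent' if the text
# contains a cue word of the same polarity; all other tokens score 0.5.
def toy_scorer(tokens: List[str], i: int) -> float:
    cues = {"great": "positive", "terrible": "negative"}
    tok = tokens[i].lower().strip(".")
    if tok in ("positive", "negative"):
        match = any(cues.get(t.lower().strip(".")) == tok for t in tokens)
        return 1.0 if match else 0.0
    return 0.5

pred = classify("The movie was great.",
                "{text} It was {label}.",
                {"pos": "positive", "neg": "negative"},
                toy_scorer)
print(pred)  # → pos
```

Note how no extra parameters are introduced: the score comes entirely from the (here mocked) discriminator, matching the abstract's claim that the pretrained ELECTRA model is used as-is.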
