Paper Title

PromptAttack: Prompt-based Attack for Language Models via Gradient Search

Paper Authors

Yundi Shi, Piji Li, Changchun Yin, Zhaoyang Han, Lu Zhou, Zhe Liu

Paper Abstract

As pre-trained language models (PLMs) continue to grow, so do the hardware and data requirements for fine-tuning them. Researchers have therefore proposed a lighter-weight approach called \textit{Prompt Learning}. However, in our investigation we observe that prompt learning methods are vulnerable: they can easily be attacked by maliciously constructed prompts, leading to classification errors and serious security problems for PLMs. Most current research ignores the security issues of prompt-based methods. In this paper, we therefore propose a malicious prompt template construction method (\textbf{PromptAttack}) to probe the security of PLMs. Several adversarial template construction approaches are investigated to guide the model into misclassifying the task. Extensive experiments on three datasets and three PLMs demonstrate the effectiveness of the proposed PromptAttack. We also conduct experiments to verify that our method is applicable in few-shot scenarios.
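The abstract mentions constructing adversarial prompt templates via gradient search but does not detail the algorithm. Below is a minimal sketch of a HotFlip/AutoPrompt-style gradient search for a trigger token in a prompt template, illustrating the general technique rather than the paper's exact method; the victim model (bert-base-uncased), the trigger position, and the target verbalizer word "terrible" are all assumptions for illustration.

```python
# Minimal sketch of a HotFlip/AutoPrompt-style gradient search for an adversarial
# trigger token in a prompt template. Illustrative only: the victim model, the
# trigger position, and the target verbalizer word "terrible" are assumptions,
# not details taken from the paper.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"                      # assumed victim PLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

embedding_matrix = model.get_input_embeddings().weight  # (vocab_size, hidden_dim)

def hotflip_candidates(grad_at_trigger, k=10):
    """Rank replacement tokens by the first-order estimate of the loss change, -grad . e_w."""
    scores = torch.matmul(embedding_matrix, -grad_at_trigger)  # (vocab_size,)
    return scores.topk(k).indices.tolist()

# One search step for a single trigger position in a hypothetical prompt template.
text = "the movie was great . overall it was [MASK] ."
inputs = tokenizer(text, return_tensors="pt")
trigger_pos = 6                                       # assumed trigger token position ("overall")

# Forward pass from the embedding layer so gradients w.r.t. the inputs are available.
embeds = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeds.requires_grad_(True)
outputs = model(inputs_embeds=embeds, attention_mask=inputs["attention_mask"])

# Adversarial objective: push the [MASK] prediction toward the wrong label word.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
target_id = tokenizer.convert_tokens_to_ids("terrible")
loss = -torch.log_softmax(outputs.logits[0, mask_pos], dim=-1)[target_id]
loss.backward()

# Top-k replacement tokens for the trigger position, to be re-scored exactly.
candidates = hotflip_candidates(embeds.grad[0, trigger_pos])
print(tokenizer.convert_ids_to_tokens(candidates))
```

In a full attack loop, each top-k candidate would be substituted into the trigger position and re-scored on a batch of examples, keeping the replacement that most increases the misclassification rate, and the procedure would be repeated over all trigger positions until convergence.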
