Paper Title
MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
Paper Authors
Paper Abstract
One of the biggest challenges that prohibit the use of many current NLP methods in clinical settings is the availability of public datasets. In this work, we present MeDAL, a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. We pre-trained several models of common architectures on this dataset and empirically showed that such pre-training leads to improved performance and convergence speed when fine-tuning on downstream medical tasks.
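The pretraining task the abstract describes — predicting the correct expansion of a medical abbreviation from its surrounding context — can be sketched as below. This is not the authors' released code; the bidirectional-LSTM encoder is only one of the common architectures the paper mentions, and the vocabulary size, number of candidate expansions, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' implementation):
# abbreviation disambiguation framed as classification over candidate
# expansions at the abbreviation's token position.
import torch
import torch.nn as nn

class AbbrevDisambiguator(nn.Module):
    def __init__(self, vocab_size=30000, num_expansions=20000,
                 embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM encoder; the paper also pretrains
        # LSTM+self-attention and transformer-based models.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Classifier over the full set of candidate expansions.
        self.classifier = nn.Linear(2 * hidden_dim, num_expansions)

    def forward(self, token_ids, abbrev_pos):
        # token_ids:  (batch, seq_len) integer-encoded medical text
        # abbrev_pos: (batch,) position of the abbreviation to resolve
        h, _ = self.encoder(self.embed(token_ids))           # (B, L, 2H)
        idx = abbrev_pos.view(-1, 1, 1).expand(-1, 1, h.size(-1))
        at_abbrev = h.gather(1, idx).squeeze(1)              # (B, 2H)
        return self.classifier(at_abbrev)                    # expansion logits

# One pretraining step: cross-entropy against the correct expansion id.
model = AbbrevDisambiguator()
tokens = torch.randint(0, 30000, (8, 128))   # placeholder batch
positions = torch.randint(0, 128, (8,))
labels = torch.randint(0, 20000, (8,))
loss = nn.functional.cross_entropy(model(tokens, positions), labels)
loss.backward()
```

After pretraining on this objective, the encoder weights would be reused and fine-tuned on a downstream medical task, which is the transfer setting whose performance and convergence gains the abstract reports.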