Paper Title
Learning from Imperfect Annotations
Paper Authors
Paper Abstract
Many machine learning systems today are trained on large amounts of human-annotated data. Data annotation tasks that require a high level of competency make data acquisition expensive, while the resulting labels are often subjective, inconsistent, and may contain a variety of human biases. To improve data quality, practitioners often need to collect multiple annotations per example and aggregate them before training models. Such a multi-stage approach results in redundant annotations and often produces an imperfect "ground truth" that can limit the potential of training accurate machine learning models. We propose a new end-to-end framework that enables us to: (i) merge the aggregation step with model training, thus allowing deep learning systems to learn to predict ground truth estimates directly from the available data, and (ii) model the difficulty of examples and learn representations of the annotators that allow us to estimate and account for their competencies. Our approach is general and has many applications, including training more accurate models on crowdsourced data, ensemble learning, and classifier accuracy estimation from unlabeled data. We conduct an extensive experimental evaluation of our method on 5 crowdsourcing datasets of varied difficulty and show accuracy gains of up to 25% over current state-of-the-art approaches for aggregating annotations, as well as significant reductions in the required annotation redundancy.
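To make the end-to-end idea concrete, the sketch below shows one plausible way to fold label aggregation into training: a classifier predicts a distribution over latent ground-truth labels, learned annotator embeddings are mapped to per-annotator confusion matrices (a proxy for competence), and the joint model is trained directly on the raw annotator labels. This is a minimal illustration under assumed design choices; the names (CrowdModel, confusion_head), layer sizes, and the confusion-matrix parameterization are hypothetical and not necessarily the authors' exact architecture.

```python
# Hypothetical sketch of an end-to-end crowd-annotation model in PyTorch,
# in the spirit of the abstract. Not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrowdModel(nn.Module):
    def __init__(self, input_dim, num_classes, num_annotators, emb_dim=16):
        super().__init__()
        # Classifier that predicts a distribution over latent ground-truth labels.
        self.classifier = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )
        # Learned annotator representations, used to estimate competence.
        self.annotator_emb = nn.Embedding(num_annotators, emb_dim)
        # Maps an annotator embedding to a (num_classes x num_classes)
        # confusion matrix describing how that annotator corrupts labels.
        self.confusion_head = nn.Linear(emb_dim, num_classes * num_classes)
        self.num_classes = num_classes

    def forward(self, x, annotator_ids):
        # log p(y | x): the model's estimate of the latent ground truth.
        log_p_y = F.log_softmax(self.classifier(x), dim=-1)
        # log p(label | y, annotator): rows of each confusion matrix sum to 1.
        emb = self.annotator_emb(annotator_ids)
        confusion = self.confusion_head(emb).view(
            -1, self.num_classes, self.num_classes)
        log_confusion = F.log_softmax(confusion, dim=-1)
        # Marginalize over the latent label: log p(label | x, annotator).
        log_p_label = torch.logsumexp(
            log_p_y.unsqueeze(-1) + log_confusion, dim=1)
        return log_p_label, log_p_y

# Training minimizes the NLL of the *observed* annotator labels, so the
# aggregation step is merged into model training rather than done up front:
#   log_p_label, _ = model(x, annotator_ids)
#   loss = F.nll_loss(log_p_label, observed_labels)
```

At inference time, only `log_p_y` (the classifier's ground-truth estimate) is used; the annotator-specific parameters serve purely to explain away label noise during training.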