Paper Title
Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile
Paper Authors
Paper Abstract
Recent years have seen a surge of interest in meta-learning techniques for tackling the few-shot learning (FSL) problem. However, the meta-learner is prone to overfitting since only a few samples are available, which can be regarded as sampling noise on a clean dataset. Moreover, when handling data with noisy labels, the meta-learner can be extremely sensitive to label noise on a corrupted dataset. To address these two challenges, we present Eigen-Reptile (ER), which updates the meta-parameters with the main direction of historical task-specific parameters to alleviate both sampling and label noise. Specifically, the main direction is computed efficiently: the scale of the calculated matrix is related to the number of gradient steps rather than the number of parameters. Furthermore, to obtain a more accurate main direction for Eigen-Reptile in the presence of many noisy labels, we propose Introspective Self-paced Learning (ISPL). We demonstrate theoretically and experimentally the soundness and effectiveness of the proposed Eigen-Reptile and ISPL. In particular, our experiments on different tasks show that the proposed method outperforms, or achieves highly competitive performance compared with, other gradient-based methods with or without noisy labels. The code and data for the proposed method are available for research purposes at https://github.com/Anfeather/Eigen-Reptile.
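The abstract's central computational claim is that the main direction of the inner-loop trajectory can be found with a matrix whose size depends on the number of gradient steps m rather than the number of parameters d. A standard way to achieve this is snapshot-style PCA: eigendecompose the small m-by-m Gram matrix of centered parameter snapshots, then map the top eigenvector back to parameter space. The sketch below illustrates this idea under stated assumptions; the `outer_lr` value, the displacement-based step scaling, and the sign convention are illustrative choices, not the paper's exact update rule.

```python
import numpy as np

def main_direction(snapshots):
    """Principal direction of task-specific parameter snapshots.

    snapshots: (m, d) array, one row per inner-loop gradient step.
    The eigendecomposition runs on an (m, m) Gram matrix, so its cost
    scales with the number of gradient steps m, not the number of
    parameters d -- the "fast" computation the abstract refers to.
    """
    W = snapshots - snapshots.mean(axis=0, keepdims=True)  # center rows
    G = W @ W.T                         # (m, m) Gram matrix
    vals, vecs = np.linalg.eigh(G)      # eigenvalues in ascending order
    v = vecs[:, -1]                     # top eigenvector in snapshot space
    u = W.T @ v                         # map back to parameter space, (d,)
    u /= np.linalg.norm(u) + 1e-12      # unit-normalize
    # Orient u along the overall inner-loop displacement (assumed convention).
    disp = snapshots[-1] - snapshots[0]
    if u @ disp < 0:
        u = -u
    return u

def meta_update(theta, snapshots, outer_lr=0.1):
    """Reptile-style outer step along the main direction (sketch only)."""
    u = main_direction(snapshots)
    # Scaling the step by the inner-loop displacement magnitude is a
    # plausible choice; the paper's exact scaling may differ.
    step = np.linalg.norm(snapshots[-1] - snapshots[0])
    return theta + outer_lr * step * u
```

In this sketch the meta-learner would, for each sampled task, run a few inner-loop SGD steps from the current meta-parameters, stack the visited parameter vectors into `snapshots`, and call `meta_update`; averaging the resulting update over a batch of tasks follows the usual Reptile recipe.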