Title

New Insights into Learning with Correntropy Based Regression

Authors

Feng, Yunlong

Abstract

Stemming from information-theoretic learning, the correntropy criterion and its applications to machine learning tasks have been extensively explored and studied. Its application to regression problems leads to a robustness-enhanced regression paradigm, namely, correntropy-based regression. Having drawn a great variety of successful real-world applications, correntropy-based regression has also recently been investigated theoretically in a series of studies from a statistical learning viewpoint. The resulting big picture is that correntropy-based regression regresses towards the conditional mode function or the conditional mean function robustly under certain conditions. Continuing this trend and going further, in the present study we report some new insights into this problem. First, we show that under the additive noise regression model, such a regression paradigm can be deduced from minimum distance estimation, implying that the resulting estimator is essentially a minimum distance estimator and thus possesses robustness properties. Second, we show that the regression paradigm in fact provides a unified approach to regression problems, in that it approaches the conditional mean, the conditional mode, and the conditional median functions under certain conditions. Third, we present some new results when it is utilized to learn the conditional mean function, by developing its error bounds and exponential convergence rates under conditional $(1+ε)$-moment assumptions. The saturation effect on the established convergence rates, which was observed under $(1+ε)$-moment assumptions, still occurs, indicating the inherent bias of the regression estimator. These novel insights deepen our understanding of correntropy-based regression, help cement the theoretical correntropy framework, and also enable us to investigate learning schemes induced by general bounded nonconvex loss functions.
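For readers unfamiliar with the criterion, the following standard formulation (background knowledge, not stated in the abstract itself) may help. The correntropy between the response $Y$ and a predictor $f(X)$ is the expectation of a Gaussian kernel applied to the residual, and maximizing it is equivalent to minimizing a bounded, nonconvex loss:

$$\mathcal{V}_\sigma\big(Y, f(X)\big) = \mathbb{E}\left[\exp\left(-\frac{(Y - f(X))^2}{2\sigma^2}\right)\right], \qquad \ell_\sigma(t) = \sigma^2\left(1 - \exp\left(-\frac{t^2}{2\sigma^2}\right)\right),$$

where $\sigma > 0$ is the kernel bandwidth. Since $\ell_\sigma$ is bounded by $\sigma^2$, a gross outlier can contribute at most a constant to the empirical risk, which is the source of both the robustness and the saturation effect mentioned in the abstract.

Below is a minimal sketch of how such an estimator can be computed in the linear case, via gradient ascent on the empirical correntropy; the function name, step size, and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_correntropy_linear(X, y, sigma=1.0, lr=0.1, n_iter=500):
    """Maximize (1/n) * sum_i exp(-(y_i - x_i^T w)^2 / (2 sigma^2)) over w."""
    n = X.shape[0]
    # Least-squares warm start: a common choice, since the objective is nonconvex.
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ w                       # residuals
        g = np.exp(-r**2 / (2 * sigma**2))  # Gaussian kernel weights in (0, 1]
        # gradient of the empirical correntropy with respect to w
        w += lr * (X.T @ (g * r)) / (n * sigma**2)
    return w

# Toy usage (illustrative): heavy outliers barely move the correntropy-based fit.
rng = np.random.default_rng(0)
X = np.c_[np.ones(200), rng.normal(size=200)]
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)
y[:10] += 20.0  # contaminate 5% of the responses
print(fit_correntropy_linear(X, y))  # should land close to [1.0, 2.0]
```

The weight $g_i = \exp(-r_i^2/(2\sigma^2))$ makes each update behave like a locally reweighted least-squares step: points with small residuals receive weight near one, while gross outliers are effectively ignored.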
