Paper Title

Estimates on Learning Rates for Multi-Penalty Distribution Regression

Authors

Zhan Yu, Daniel W. C. Ho

Abstract


This paper is concerned with functional learning via two-stage sampled distribution regression. We study a multi-penalty regularization algorithm for distribution regression in the framework of learning theory. The algorithm aims at regressing to real-valued outputs from probability measures. The theoretical analysis of distribution regression is far from mature and quite challenging, since only second-stage samples are observable in practical settings. In the algorithm, to transfer information from the samples, we embed the distributions into a reproducing kernel Hilbert space $\mathcal{H}_K$ associated with a Mercer kernel $K$ via the mean embedding technique. The main contribution of the paper is to present a novel multi-penalty regularization algorithm that captures more features of distribution regression and to derive optimal learning rates for the algorithm. The work also derives learning rates for distribution regression in the nonstandard setting $f_\rho \notin \mathcal{H}_K$, which has not been explored in the existing literature. Moreover, we propose a distribution regression-based distributed learning algorithm to address large-scale data challenges, and optimal learning rates are derived for it as well. By providing new algorithms and establishing their learning rates, we improve on existing work in several respects.
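The two-stage pipeline described in the abstract can be sketched numerically: each distribution is observed only through a second-stage sample, its kernel mean embedding is estimated from that sample, and a regularized least-squares problem is solved over the embeddings. The sketch below is a simplified illustration, not the paper's exact algorithm: the Gaussian kernel, the representer-theorem ansatz, and in particular the second penalty term (a plain $\ell^2$ penalty on the coefficients, standing in for the paper's second regularization functional) are all assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # Pairwise Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    # for sample matrices X (n, d) and Y (m, d).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mean_embedding_gram(samples, gamma=1.0):
    # Gram matrix of estimated kernel mean embeddings:
    # G[i, j] = mean over x in D_i, y in D_j of k(x, y),
    # where D_i is the second-stage sample drawn from distribution i.
    N = len(samples)
    G = np.empty((N, N))
    for i in range(N):
        for j in range(i, N):
            G[i, j] = G[j, i] = gaussian_kernel(samples[i], samples[j], gamma).mean()
    return G

def multi_penalty_fit(G, y, lam1=1e-2, lam2=1e-3):
    # Coefficients c of f = sum_j c_j K(mu_j, .) minimizing
    #   (1/N) ||G c - y||^2 + lam1 * c' G c + lam2 * ||c||^2.
    # The lam2 term is a simplified stand-in for a second penalty;
    # setting lam2 = 0 recovers single-penalty kernel ridge regression.
    N = len(y)
    A = G @ G / N + lam1 * G + lam2 * np.eye(N)
    b = G @ y / N
    return np.linalg.solve(A, b)
```

As a usage example, one can regress the mean of each distribution from its sample: build `G` from the samples, call `multi_penalty_fit(G, y)`, and predict on the training embeddings with `G @ c`. Shrinking both penalties drives the fit toward interpolation, while the two penalty weights trade off smoothness in $\mathcal{H}_K$ against coefficient size.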
