Paper Title

FEL: High Capacity Learning for Recommendation and Ranking via Federated Ensemble Learning

Paper Authors

Meisam Hejazinia, Dzmitry Huba, Ilias Leontiadis, Kiwan Maeng, Mani Malek, Luca Melis, Ilya Mironov, Milad Nasr, Kaikai Wang, Carole-Jean Wu

Paper Abstract

Federated learning (FL) has emerged as an effective approach to address consumer privacy needs. FL has been successfully applied to certain machine learning tasks, such as training smart keyboard models and keyword spotting. Despite FL's initial success, many important deep learning use cases, such as ranking and recommendation tasks, have been limited in their adoption of on-device learning. One of the key challenges facing practical FL adoption for DL-based ranking and recommendation is the prohibitive resource requirements that modern mobile systems cannot satisfy. We propose Federated Ensemble Learning (FEL) as a solution to tackle the large memory requirement of deep learning ranking and recommendation tasks. FEL enables large-scale ranking and recommendation model training on-device by simultaneously training multiple model versions on disjoint clusters of client devices. FEL integrates the trained sub-models via an over-arch layer into an ensemble model that is hosted on the server. Our experiments demonstrate that FEL leads to a 0.43-2.31% model quality improvement over traditional on-device federated learning, a significant improvement for ranking and recommendation system use cases.
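
To make the two-stage mechanism concrete, the sketch below shows one way it could look in PyTorch: each disjoint cluster of clients trains its own sub-model with a simplified FedAvg loop, and the server then stacks the frozen sub-models under a small trainable over-arch layer. All names here (fedavg_round, OverArchEnsemble, the toy sub-model and random data) are assumptions for illustration only, not the paper's actual architecture or aggregation protocol.

# Minimal sketch of the FEL recipe from the abstract (hypothetical names,
# not the authors' implementation).

import copy
import torch
import torch.nn as nn

def fedavg_round(model, client_datasets, lr=0.01, local_epochs=1):
    """One simplified FedAvg round over the clients of a single cluster."""
    state_sum = None
    for dataset in client_datasets:
        local = copy.deepcopy(model)           # each client trains a private copy
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in dataset:
                opt.zero_grad()
                loss = nn.functional.binary_cross_entropy_with_logits(
                    local(x).squeeze(-1), y)
                loss.backward()
                opt.step()
        sd = local.state_dict()
        if state_sum is None:
            state_sum = {k: v.clone() for k, v in sd.items()}
        else:
            for k in state_sum:
                state_sum[k] += sd[k]
    # average the client weights to get this cluster's sub-model
    model.load_state_dict({k: v / len(client_datasets) for k, v in state_sum.items()})
    return model

class OverArchEnsemble(nn.Module):
    """Server-side ensemble: frozen sub-models combined by a trainable over-arch layer."""
    def __init__(self, sub_models):
        super().__init__()
        self.sub_models = nn.ModuleList(sub_models)
        for p in self.sub_models.parameters():
            p.requires_grad = False            # sub-models stay fixed after on-device training
        self.over_arch = nn.Linear(len(sub_models), 1)   # learns to weight sub-model logits

    def forward(self, x):
        logits = torch.cat([m(x) for m in self.sub_models], dim=-1)
        return self.over_arch(logits)

# Hypothetical usage: 4 disjoint clusters of 5 clients, each holding one toy batch.
make_sub_model = lambda: nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
clusters = [[[(torch.randn(8, 16), torch.randint(0, 2, (8,)).float())]
             for _ in range(5)]
            for _ in range(4)]
sub_models = [fedavg_round(make_sub_model(), cluster) for cluster in clusters]
ensemble = OverArchEnsemble(sub_models)        # over-arch layer is then trained server-side

Freezing the sub-models keeps the server-side step cheap: only the over-arch layer's few parameters are trained, which matches the abstract's description of the ensemble being integrated and hosted on the server.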
