Paper Title

Interpretable MTL from Heterogeneous Domains using Boosted Tree

Paper Authors

Ya-Lin Zhang, Longfei Li

Paper Abstract

Multi-task learning (MTL) aims to improve the generalization performance of several related tasks by leveraging the useful information contained in them. However, in industrial scenarios, interpretability is always demanded, and the data of different tasks may lie in heterogeneous domains, making existing methods unsuitable or unsatisfactory. In this paper, following the philosophy of boosted trees, we propose a two-stage method. In stage one, a common model is built to learn the commonalities using the common features of all instances. Unlike the training of a conventional boosted tree model, we propose a regularization strategy and an early-stopping mechanism to optimize the multi-task learning process. In stage two, starting by fitting the residual error of the common model, a specific model is constructed from the task-specific instances to further boost performance. Experiments on both benchmark and real-world datasets validate the effectiveness of the proposed method. Moreover, interpretability can be naturally obtained from the tree-based method, satisfying industrial needs.
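The two-stage idea in the abstract can be illustrated with a minimal sketch: a common boosted model is first fit on the pooled data of all tasks, and a task-specific model is then boosted on that model's residuals. This is only a toy illustration under assumed names (`fit_stump`, `boost`, the 1-D toy data for "task A"/"task B"), not the paper's actual algorithm, which additionally uses a regularization strategy and early stopping in stage one.

```python
# Toy sketch of two-stage residual fitting (assumed helper names, toy 1-D data).
def fit_stump(xs, ys):
    """Fit a depth-1 regression stump on one feature: pick the threshold
    minimizing squared error, predict the mean of each side."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds, lr=0.5):
    """Squared-loss gradient boosting: repeatedly fit stumps to residuals."""
    preds = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, preds)]
        s = fit_stump(xs, resid)
        stumps.append(s)
        preds = [p + lr * s(x) for p, x in zip(preds, xs)]
    return lambda x: lr * sum(s(x) for s in stumps)

# Stage 1: common model on the pooled instances of both tasks (common feature).
xs_a, ys_a = [0, 1, 2, 3, 4, 5], [0.1, 0.2, 0.1, 1.1, 1.0, 1.2]  # task A (toy)
xs_b, ys_b = [0, 1, 2, 3, 4, 5], [0.0, 0.1, 0.2, 1.4, 1.5, 1.3]  # task B (toy)
common = boost(xs_a + xs_b, ys_a + ys_b, rounds=10)

# Stage 2: the task-specific model for task B starts by fitting the residual
# error of the common model on task B's own instances.
resid_b = [y - common(x) for x, y in zip(xs_b, ys_b)]
specific_b = boost(xs_b, resid_b, rounds=10)
predict_b = lambda x: common(x) + specific_b(x)
```

Because stage two fits the common model's residuals, the combined prediction for task B can only reduce the training error relative to the common model alone, while the common model keeps the knowledge shared across tasks.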
