Paper Title

Learning how to approve updates to machine learning algorithms in non-stationary settings

Paper Authors

Feng, Jean

Paper Abstract

Machine learning algorithms in healthcare have the potential to continually learn from real-world data generated during healthcare delivery and adapt to dataset shifts. As such, the FDA is looking to design policies that can autonomously approve modifications to machine learning algorithms while maintaining or improving the safety and effectiveness of the deployed models. However, selecting a fixed approval strategy, a priori, can be difficult because its performance depends on the stationarity of the data and the quality of the proposed modifications. To this end, we investigate a learning-to-approve approach (L2A) that uses accumulating monitoring data to learn how to approve modifications. L2A defines a family of strategies that vary in their "optimism''---where more optimistic policies have faster approval rates---and searches over this family using an exponentially weighted average forecaster. To control the cumulative risk of the deployed model, we give L2A the option to abstain from making a prediction and incur some fixed abstention cost instead. We derive bounds on the average risk of the model deployed by L2A, assuming the distributional shifts are smooth. In simulation studies and empirical analyses, L2A tailors the level of optimism for each problem-setting: It learns to abstain when performance drops are common and approve beneficial modifications quickly when the distribution is stable.
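To make the search procedure concrete, below is a minimal Python sketch of an exponentially weighted average forecaster (the Hedge algorithm) running over a hypothetical family of approval policies, with abstention modeled as one extra arm that always incurs the fixed cost. The variable names, the uniform placeholder losses, and this encoding of abstention are illustrative assumptions for the sketch, not the paper's actual implementation of L2A.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5                                  # candidate approval policies, ordered by optimism
T = 200                                # number of monitoring rounds
eta = np.sqrt(8 * np.log(K + 1) / T)   # standard Hedge learning rate
abstention_cost = 0.5                  # fixed loss incurred when abstaining

# Arms: K approval policies plus one "always abstain" arm
# (an illustrative simplification of the paper's abstention option).
log_weights = np.zeros(K + 1)

for t in range(T):
    # Normalize weights in log-space for numerical stability.
    w = np.exp(log_weights - log_weights.max())
    w /= w.sum()

    # Placeholder losses: in the paper's setting these would be the observed
    # risks, on this round's monitoring data, of the models each policy deploys.
    losses = np.empty(K + 1)
    losses[:K] = rng.uniform(0.0, 1.0, size=K)
    losses[K] = abstention_cost        # abstaining always costs the fixed amount

    forecaster_loss = w @ losses       # expected loss of the weighted forecaster

    # Multiplicative (exponentially weighted) update on observed losses.
    log_weights -= eta * losses
```

The log-space weight update avoids numerical underflow over long monitoring horizons; more optimistic policies simply accumulate lower loss, and hence higher weight, in rounds where proposed modifications tend to be beneficial, while the abstention arm dominates when performance drops are common.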
