Paper Title


Bayesian Optimization with a Prior for the Optimum

Authors

Souza, Artur, Nardi, Luigi, Oliveira, Leonardo B., Olukotun, Kunle, Lindauer, Marius, Hutter, Frank

Abstract


While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows to work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO's standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO's standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67x faster than state-of-the-art methods on a common suite of benchmarks, and achieves a new state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate and that it robustly recovers from misleading priors.
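The abstract describes BOPrO's core idea: combine a user-supplied prior over where the optimum lies with BO's data-driven probabilistic model to form a pseudo-posterior, then pick the next point to evaluate from that pseudo-posterior, letting the model's influence grow as observations accumulate. The toy sketch below illustrates that mechanism on a 1-D problem; it is not the authors' implementation. The objective `f`, the Gaussian prior (centered at 0.6 with width 0.2), the TPE-style good/bad density ratio, and the constant `beta` are all illustrative assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy "expensive" black-box objective; true optimum at x = 0.7.
    return (x - 0.7) ** 2

def user_prior(x):
    # Hypothetical expert belief: the optimum is probably near x = 0.6.
    return np.exp(-0.5 * ((x - 0.6) / 0.2) ** 2)

def kde(samples, x, bw=0.1):
    # Simple Gaussian kernel density estimate over 1-D samples.
    if len(samples) == 0:
        return np.ones_like(x)
    d = (x[:, None] - np.asarray(samples)[None, :]) / bw
    return np.exp(-0.5 * d ** 2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))

X, Y = [], []
beta = 10.0  # how quickly observed data overrides the user prior
for t in range(1, 21):
    cand = rng.uniform(0.0, 1.0, 512)  # candidate points to score
    if len(X) >= 2:
        # TPE-style model: density of "good" vs "bad" observed points.
        gamma = np.quantile(Y, 0.3)
        good = [x for x, y in zip(X, Y) if y <= gamma]
        bad = [x for x, y in zip(X, Y) if y > gamma]
        ratio = kde(good, cand) / np.maximum(kde(bad, cand), 1e-12)
    else:
        ratio = np.ones_like(cand)  # no model yet: rely on the prior
    # Pseudo-posterior: prior times model ratio, model weight growing with t.
    score = user_prior(cand) * ratio ** (t / beta)
    x_next = cand[np.argmax(score)]
    X.append(x_next)
    Y.append(f(x_next))

print(f"best x = {X[np.argmin(Y)]:.3f}, best f = {min(Y):.5f}")
```

Early iterations sample near the prior's mode; as `t / beta` grows, the density ratio learned from evaluations pulls the search toward the true optimum, which is also how the method can recover from a prior that is somewhat off-target.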
