Paper Title
Continuous Submodular Maximization: Beyond DR-Submodularity
Paper Authors
Paper Abstract
In this paper, we propose the first continuous optimization algorithms that achieve a constant-factor approximation guarantee for the problem of monotone continuous submodular maximization subject to a linear constraint. We first prove that a simple variant of the vanilla coordinate ascent, called Coordinate-Ascent+, achieves a $(\frac{e-1}{2e-1}-\varepsilon)$-approximation guarantee while performing $O(n/\varepsilon)$ iterations, where the computational complexity of each iteration is roughly $O(n/\sqrt{\varepsilon}+n\log n)$ (here, $n$ denotes the dimension of the optimization problem). We then propose Coordinate-Ascent++, which achieves the tight $(1-1/e-\varepsilon)$-approximation guarantee while performing the same number of iterations, but at a higher computational complexity of roughly $O(n^3/\varepsilon^{2.5} + n^3 \log n / \varepsilon^2)$ per iteration. However, the computation of each round of Coordinate-Ascent++ can be easily parallelized so that the computational cost per machine scales as $O(n/\sqrt{\varepsilon}+n\log n)$.
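For intuition only, below is a minimal sketch of the vanilla coordinate-ascent scheme that Coordinate-Ascent+ builds on: greedily increase the coordinate with the best marginal gain per unit of budget until the linear constraint $\langle w, x \rangle \le b$ is tight. The objective `f`, weights `w`, budget `b`, step size `delta`, and box upper bound `u` are illustrative placeholders; this is not the paper's Coordinate-Ascent+ or Coordinate-Ascent++ procedure, which add refinements to obtain the stated guarantees.

```python
import numpy as np

def vanilla_coordinate_ascent(f, w, b, n, delta=0.05, u=1.0):
    """Illustrative sketch (not the paper's algorithm): maximize a monotone
    continuous objective f over [0, u]^n subject to <w, x> <= b by repeatedly
    increasing the single coordinate with the largest marginal gain per unit
    cost. Assumes w > 0 entrywise and f evaluable at any feasible point."""
    x = np.zeros(n)
    budget = float(b)
    while budget > 0:
        best_i, best_rate = None, 0.0
        for i in range(n):
            # Largest feasible step along coordinate i given box and budget.
            step = min(delta, u - x[i], budget / w[i])
            if step <= 0:
                continue
            trial = x.copy()
            trial[i] += step
            rate = (f(trial) - f(x)) / (w[i] * step)  # gain per unit budget
            if rate > best_rate:
                best_i, best_rate = i, rate
        if best_i is None:  # no coordinate yields positive gain
            break
        step = min(delta, u - x[best_i], budget / w[best_i])
        x[best_i] += step
        budget -= w[best_i] * step
    return x

# Toy usage with a simple monotone objective (hypothetical example):
# f = lambda x: np.sqrt(x + 1.0).sum()
# x_hat = vanilla_coordinate_ascent(f, w=np.ones(5), b=2.0, n=5)
```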