Title
Signed Graph Metric Learning via Gershgorin Disc Perfect Alignment
Authors
Abstract
Given a convex and differentiable objective $Q(\M)$ for a real symmetric matrix $\M$ in the positive definite (PD) cone -- used to compute Mahalanobis distances -- we propose a fast general metric learning framework that is entirely projection-free. We first assume that $\M$ resides in a space $\cS$ of generalized graph Laplacian matrices corresponding to balanced signed graphs. A matrix $\M \in \cS$ that is also PD is called a graph metric matrix. Unlike the low-rank metric matrices common in the literature, $\cS$ includes the important diagonal-only matrices as a special case. The key theorem that circumvents full eigen-decomposition and enables fast metric matrix optimization is Gershgorin disc perfect alignment (GDPA): given $\M \in \cS$ and a diagonal matrix $\S$ with $S_{ii} = 1/v_i$, where $\v$ is $\M$'s first eigenvector, we prove that the Gershgorin disc left-ends of the similarity transform $\B = \S \M \S^{-1}$ are perfectly aligned at the smallest eigenvalue $\lambda_{\min}$. Using this theorem, we replace the PD cone constraint in the metric learning problem with the tightest possible linear constraints per iteration, so that the alternating optimization of the diagonal / off-diagonal terms in $\M$ can be solved efficiently as linear programs via the Frank-Wolfe method. We update $\v$ using Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) with warm start as entries in $\M$ are optimized successively. Experiments show that our graph metric optimization is significantly faster than cone-projection schemes, and produces competitive binary classification performance.
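The GDPA statement can be checked numerically. The sketch below uses a toy 4-node balanced signed graph of my own construction (not an example from the paper): clusters $\{0,1\}$ and $\{2,3\}$ with positive edge weights within clusters and negative weights across, plus self-loops chosen large enough to make the generalized Laplacian PD. With $\v$ the first eigenvector and $S_{ii} = 1/v_i$, all Gershgorin disc left-ends of $\B = \S \M \S^{-1}$ should coincide with $\lambda_{\min}$.

```python
import numpy as np

# Hypothetical balanced signed graph on 4 nodes: clusters {0,1} and {2,3},
# positive edge weights within clusters, negative weights across them.
W = np.array([[ 0.0,  2.0, -1.0, -0.5],
              [ 2.0,  0.0, -0.8, -1.2],
              [-1.0, -0.8,  0.0,  1.5],
              [-0.5, -1.2,  1.5,  0.0]])

# Generalized graph Laplacian M = diag(degrees) - W, with uniform
# self-loops (4.5) large enough that M is positive definite.
M = np.diag(W.sum(axis=1)) - W + 4.5 * np.eye(4)

lam, vecs = np.linalg.eigh(M)
lmin, v = lam[0], vecs[:, 0]      # smallest eigenvalue, first eigenvector
S = np.diag(1.0 / v)              # S_ii = 1 / v_i
B = S @ M @ np.linalg.inv(S)      # similarity transform B = S M S^{-1}

# Gershgorin disc left-ends of B: b_ii - sum_{j != i} |b_ij|
left_ends = np.diag(B) - (np.abs(B).sum(axis=1) - np.abs(np.diag(B)))
print(lmin, left_ends)            # left-ends all align at lambda_min
```

For a connected balanced graph the first eigenvector has no zero entries (its sign pattern matches the two clusters), so the division by $v_i$ is well defined; the alignment is exact up to floating-point error.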