Title:
《Message-passing for Maximum Weight Independent Set》
---
Authors:
Sujay Sanghavi, Devavrat Shah and Alan Willsky
---
Year of latest submission:
2008
---
Categories:
Primary category: Computer Science
Secondary category: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
Primary category: Computer Science
Secondary category: Information Theory
Description: Covers theoretical and experimental aspects of information theory and coding. Includes material in ACM Subject Class E.4 and intersects with H.1.1.
--
Primary category: Mathematics
Secondary category: Information Theory
Description: math.IT is an alias for cs.IT. Covers theoretical and experimental aspects of information theory and coding.
--
---
Abstract:
We investigate the use of message-passing algorithms for the problem of finding the max-weight independent set (MWIS) in a graph. First, we study the performance of the classical loopy max-product belief propagation. We show that each fixed point estimate of max-product can be mapped in a natural way to an extreme point of the LP polytope associated with the MWIS problem. However, this extreme point may not be the one that maximizes the value of node weights; the particular extreme point at final convergence depends on the initialization of max-product. We then show that if max-product is started from the natural initialization of uninformative messages, it always solves the correct LP -- if it converges. This result is obtained via a direct analysis of the iterative algorithm, and cannot be obtained by looking only at fixed points.

The tightness of the LP relaxation is thus necessary for max-product optimality, but it is not sufficient. Motivated by this observation, we show that a simple modification of max-product becomes gradient descent on (a convexified version of) the dual of the LP, and converges to the dual optimum. We also develop a message-passing algorithm that recovers the primal MWIS solution from the output of the descent algorithm. We show that the MWIS estimate obtained using these two algorithms in conjunction is correct when the graph is bipartite and the MWIS is unique.

Finally, we show that any problem of MAP estimation for probability distributions over finite domains can be reduced to an MWIS problem. We believe this reduction will yield new insights and algorithms for MAP estimation.
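To make the setup concrete, the following is a minimal sketch of log-domain (min-sum) max-product for the independent-set model, started from the uninformative all-zero initialization the abstract mentions. The simplified update `gamma[i->j] = max(0, w_i - sum of other incoming messages)` is a standard form for this model, but the function name, the toy graph, and the decoding threshold here are illustrative assumptions, not the authors' exact pseudocode.

```python
# Sketch of max-product (min-sum form) for max-weight independent set.
# Assumed simplification for the hard-constraint independent-set model:
#   gamma[i->j] = max(0, w_i - sum_{k in N(i), k != j} gamma[k->i])
# Names and the toy instance are illustrative, not from the paper.

def max_product_mwis(weights, edges, iters=50):
    """Run synchronous max-product and return a 0/1 estimate per node."""
    nbrs = {i: set() for i in weights}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    # Uninformative initialization: all messages zero.
    msg = {(i, j): 0.0 for i in weights for j in nbrs[i]}
    for _ in range(iters):
        msg = {(i, j): max(0.0,
                           weights[i] - sum(msg[(k, i)]
                                            for k in nbrs[i] if k != j))
               for (i, j) in msg}
    # Decode: include node i if its weight exceeds the incoming mass.
    return {i: int(weights[i] > sum(msg[(k, i)] for k in nbrs[i]))
            for i in weights}

# Path graph 0-1-2: MWIS is {0, 2} (total weight 2.0 beats node 1's 1.5).
est = max_product_mwis({0: 1.0, 1: 1.5, 2: 1.0}, [(0, 1), (1, 2)])
print(est)  # {0: 1, 1: 0, 2: 1}
```

On this tree the messages reach a fixed point and the decoding is correct; on loopy graphs plain max-product may oscillate or converge to the wrong LP extreme point, which is precisely the behavior the paper analyzes and repairs with its modified descent algorithm.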
---
PDF link:
https://arxiv.org/pdf/0807.5091