Abstract (translation):
We study the problem of finding an n-agent joint policy for the optimal finite-horizon control of a decentralized POMDP (Dec-POMDP), a problem of very high complexity (NEXP-hard for n >= 2). This paper proposes a new mathematical programming approach based on two ideas. First, each agent's policy is represented in sequence form rather than tree form, yielding a very compact representation of the set of joint policies. Second, using this compact representation, the problem is solved as an instance of combinatorial optimization, for which a mixed integer linear program (MILP) is formulated. The optimal solution of the MILP directly yields an optimal joint policy for the Dec-POMDP. Computational experience shows that formulating and solving the MILP takes significantly less time on benchmark Dec-POMDP problems than existing algorithms; for example, the multi-agent tiger problem for horizon 4 is solved in 72 seconds with the MILP, whereas existing algorithms require several hours.
---
Title:
Mixed Integer Linear Programming for Exact Finite-Horizon Planning in Decentralized POMDPs
---
Authors:
Raghav Aras (INRIA Lorraine - LORIA), Alain Dutech (INRIA Lorraine -
LORIA), François Charpillet (INRIA Lorraine - LORIA)
---
Latest submission year:
2007
---
Categories:
Primary: Computer Science
Secondary: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
---
Abstract:
We consider the problem of finding an n-agent joint-policy for the optimal finite-horizon control of a decentralized Pomdp (Dec-Pomdp). This is a problem of very high complexity (NEXP-hard for n >= 2). In this paper, we propose a new mathematical programming approach for the problem. Our approach is based on two ideas: First, we represent each agent's policy in the sequence-form and not in the tree-form, thereby obtaining a very compact representation of the set of joint-policies. Second, using this compact representation, we solve this problem as an instance of combinatorial optimization for which we formulate a mixed integer linear program (MILP). The optimal solution of the MILP directly yields an optimal joint-policy for the Dec-Pomdp. Computational experience shows that formulating and solving the MILP requires significantly less time to solve benchmark Dec-Pomdp problems than existing algorithms. For example, the multi-agent tiger problem for horizon 4 is solved in 72 seconds with the MILP, whereas existing algorithms require several hours to solve it.
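The abstract's core idea, casting the combinatorial choice of a joint policy as a mixed integer linear program, can be illustrated with a deliberately tiny sketch. This is not the paper's sequence-form formulation: here two agents each pick one policy from a small set, and the product of their binary choices is linearized with auxiliary joint-selection variables, a standard MILP trick. The reward table, variable layout, and the use of `scipy.optimize.milp` are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical joint-reward table: R[i, j] is the value when agent 1
# plays policy i and agent 2 plays policy j (toy numbers, not from the paper).
R = np.array([[5.0, 1.0],
              [2.0, 3.0]])

# Decision vector v = [x0, x1, y0, y1, z00, z01, z10, z11], where
# x_i / y_j indicate each agent's policy choice and z_ij = 1 iff
# agent 1 picks i AND agent 2 picks j (linearizing the product x_i * y_j).
n = 8
c = np.concatenate([np.zeros(4), -R.flatten()])  # milp minimizes, so negate rewards

rows, lb, ub = [], [], []

def eq(row, val):          # equality constraint row == val
    rows.append(row); lb.append(val); ub.append(val)

def le(row, val):          # inequality constraint row <= val
    rows.append(row); lb.append(-np.inf); ub.append(val)

# Each agent selects exactly one policy.
r = np.zeros(n); r[0:2] = 1; eq(r, 1)
r = np.zeros(n); r[2:4] = 1; eq(r, 1)
# Exactly one joint cell is selected.
r = np.zeros(n); r[4:8] = 1; eq(r, 1)
# Linearization: z_ij <= x_i and z_ij <= y_j.
for i in range(2):
    for j in range(2):
        k = 4 + 2 * i + j
        r = np.zeros(n); r[k] = 1; r[i] = -1; le(r, 0)
        r = np.zeros(n); r[k] = 1; r[2 + j] = -1; le(r, 0)

res = milp(c=c,
           constraints=LinearConstraint(np.array(rows), lb, ub),
           integrality=np.ones(n),   # all variables integer; bounds make them binary
           bounds=Bounds(0, 1))

best = -res.fun
print("optimal joint value:", best)  # selects (policy 0, policy 0), value 5.0
```

The paper's actual MILP works over sequence-form policy representations rather than whole-policy indicator variables, which is what keeps the program compact; the linearization pattern above (auxiliary variables bounded by each agent's choice) is the same basic device used to keep the joint objective linear.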
---
PDF link:
https://arxiv.org/pdf/0707.2506