Translated abstract:
We study the problem of online planning in Markov decision processes (MDPs). In online planning, the agent focuses only on its current state, deliberates over the set of possible policies from that state onwards, and, when interrupted, uses the outcome of this exploratory deliberation to choose the next action to execute. The performance of online planning algorithms is evaluated in terms of simple regret, the expected performance loss the agent incurs when the chosen action, rather than an optimal one, is executed. To date, state-of-the-art algorithms for online planning in general MDPs are either best-effort or guarantee only a polynomial-rate reduction of simple regret over time. This paper introduces a new Monte-Carlo tree search algorithm, BRUE, which guarantees an exponential-rate reduction of both simple regret and error probability. The algorithm is based on a simple yet non-standard state-space sampling scheme, MCTS2e, in which different parts of each sample are dedicated to different exploratory objectives. Our empirical evaluation shows that BRUE not only provides superior performance guarantees but is also highly effective in practice, comparing favorably with the state of the art. We then extend BRUE with a variant of "learning by forgetting." The resulting family of algorithms, BRUE(alpha), generalizes BRUE, improves the exponential factor in the upper bound on its reduction rate, and exhibits even more attractive empirical performance.
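For concreteness, the notion of simple regret described above can be written out as follows; the notation ($V^*$, $Q^*$, $s_0$, $a_n$) is a standard convention assumed for this note rather than quoted from the paper.

```latex
% Hedged formalization (standard MDP notation assumed, not taken from the paper):
% V^*(s_0) is the optimal value of the current state s_0, Q^*(s_0, a) the optimal
% value of taking action a in s_0, and a_n the action recommended after n samples.
\[
  \mathrm{SR}(s_0, a_n) \;=\; V^*(s_0) - Q^*(s_0, a_n).
\]
% "Polynomial-rate" vs. "exponential-rate" reduction then reads, for some c > 0,
\[
  \mathbb{E}\!\left[\mathrm{SR}(s_0, a_n)\right] \in O\!\left(n^{-c}\right)
  \quad\text{vs.}\quad
  \mathbb{E}\!\left[\mathrm{SR}(s_0, a_n)\right] \in O\!\left(e^{-c n}\right).
\]
```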
---
English title:
Simple Regret Optimization in Online Planning for Markov Decision Processes
---
Authors:
Zohar Feldman, Carmel Domshlak
---
Latest submission year:
2012
---
Classification:
Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
Primary category: Computer Science
Secondary category: Machine Learning
Category description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on) including also robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
---
English abstract:
We consider online planning in Markov decision processes (MDPs). In online planning, the agent focuses on its current state only, deliberates about the set of possible policies from that state onwards and, when interrupted, uses the outcome of that exploratory deliberation to choose what action to perform next. The performance of algorithms for online planning is assessed in terms of simple regret, which is the agent's expected performance loss when the chosen action, rather than an optimal one, is followed. To date, state-of-the-art algorithms for online planning in general MDPs are either best effort, or guarantee only polynomial-rate reduction of simple regret over time. Here we introduce a new Monte-Carlo tree search algorithm, BRUE, that guarantees exponential-rate reduction of simple regret and error probability. This algorithm is based on a simple yet non-standard state-space sampling scheme, MCTS2e, in which different parts of each sample are dedicated to different exploratory objectives. Our empirical evaluation shows that BRUE not only provides superior performance guarantees, but is also very effective in practice and favorably compares to state-of-the-art. We then extend BRUE with a variant of "learning by forgetting." The resulting set of algorithms, BRUE(alpha), generalizes BRUE, improves the exponential factor in the upper bound on its reduction rate, and exhibits even more attractive empirical performance.
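The abstract characterizes MCTS2e only at a high level (different parts of each sample serve different exploratory objectives), so the sketch below is an illustrative reading of that idea rather than the paper's BRUE/MCTS2e algorithm: each sampled trajectory first descends with uniform exploration to a switch depth, then estimates a single (state, action) value from that point on. The `mdp.actions`/`mdp.step` interface, hashable states, the cyclic switch-depth rule, and the greedy tail rollout are all assumptions made for the example.

```python
# Illustrative sketch only: a two-phase Monte-Carlo planning loop in the spirit
# of "different parts of each sample serve different exploratory objectives".
# Assumed interface: mdp.actions(s) -> list of actions, mdp.step(s, a) ->
# (next_state, reward), with hashable states. Not the paper's BRUE algorithm.
import random
from collections import defaultdict

def plan(mdp, root, horizon, num_samples, gamma=1.0):
    """Recommend an action for `root` after `num_samples` sampled trajectories."""
    q_sum = defaultdict(float)   # cumulative sampled returns per (state, action)
    q_cnt = defaultdict(int)     # number of sampled returns per (state, action)

    def greedy(s):
        acts = mdp.actions(s)
        tried = [a for a in acts if q_cnt[(s, a)] > 0]
        return max(tried, key=lambda a: q_sum[(s, a)] / q_cnt[(s, a)]) if tried \
            else random.choice(acts)

    for i in range(num_samples):
        switch = i % horizon                     # switch depth for this sample
        s = root
        # Phase 1 (exploration): descend to the switch depth with uniform actions.
        for _ in range(switch):
            s, _ = mdp.step(s, random.choice(mdp.actions(s)))
        # Phase 2 (estimation): sample one candidate action at the switch state,
        # finish the trajectory greedily, and credit the return to that pair.
        a0 = random.choice(mdp.actions(s))
        s_roll, r = mdp.step(s, a0)
        ret, discount = r, gamma
        for _ in range(horizon - switch - 1):
            s_roll, r = mdp.step(s_roll, greedy(s_roll))
            ret += discount * r
            discount *= gamma
        q_sum[(s, a0)] += ret
        q_cnt[(s, a0)] += 1

    return greedy(root)
```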
---
PDF link:
https://arxiv.org/pdf/1206.3382