Abstract (translated):
We propose automatically learning probabilistic Hierarchical Task Networks (pHTNs) to capture a user's preferences over plans by observing only the user's behavior. HTNs are a common representation choice for a variety of purposes in planning, including work on learning in planning. Our contributions are (a) learning structure and (b) representing preferences. In contrast, prior work employing HTNs considers learning method preconditions (rather than structure) and representing domain physics or search-control knowledge (rather than preferences). We first assume that the observed distribution of plans is an accurate representation of user preference, and then generalize to the situation where feasibility constraints frequently prevent preferred plans from being executed. To learn a distribution over plans, we adapt an Expectation-Maximization (EM) technique from the discipline of (probabilistic) grammar induction, viewing task reductions as productions of a context-free grammar over primitive actions. To account for the difference between the distributions of possible and preferred plans, we subsequently modify this core EM technique, in short, by rescaling its input.
---
English title:
Learning Probabilistic Hierarchical Task Networks to Capture User Preferences
---
Authors:
Nan Li, William Cushing, Subbarao Kambhampati, Sungwook Yoon
---
Year of latest submission:
2010
---
Classification:
Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
---
English abstract:
We propose automatically learning probabilistic Hierarchical Task Networks (pHTNs) in order to capture a user's preferences on plans, by observing only the user's behavior. HTNs are a common choice of representation for a variety of purposes in planning, including work on learning in planning. Our contributions are (a) learning structure and (b) representing preferences. In contrast, prior work employing HTNs considers learning method preconditions (instead of structure) and representing domain physics or search control knowledge (rather than preferences). Initially we will assume that the observed distribution of plans is an accurate representation of user preference, and then generalize to the situation where feasibility constraints frequently prevent the execution of preferred plans. In order to learn a distribution on plans we adapt an Expectation-Maximization (EM) technique from the discipline of (probabilistic) grammar induction, taking the perspective of task reductions as productions in a context-free grammar over primitive actions. To account for the difference between the distributions of possible and preferred plans we subsequently modify this core EM technique, in short, by rescaling its input.
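To make the grammar-induction perspective concrete, the following is a minimal Python sketch (not the authors' implementation) of treating pHTN reduction schemas as productions of a probabilistic context-free grammar over primitive actions and re-estimating their probabilities with an EM loop over observed plans. The travel-domain task and action names and the per-plan weights (a stand-in for the input rescaling mentioned in the abstract) are hypothetical; derivations are enumerated by brute force, which is only workable for tiny grammars and short plans, and the sketch does not attempt the structure learning the paper also contributes.

```python
from collections import defaultdict
from itertools import combinations

# A hypothetical pHTN written as a PCFG: task -> list of (reduction, probability).
PRIMITIVES = {"buy_ticket", "board_bus", "board_train", "drive", "park"}
grammar = {
    "Travel":    [(("GoByBus",), 0.5), (("GoByTrain",), 0.3), (("GoByCar",), 0.2)],
    "GoByBus":   [(("buy_ticket", "board_bus"), 1.0)],
    "GoByTrain": [(("buy_ticket", "board_train"), 1.0)],
    "GoByCar":   [(("drive", "park"), 1.0)],
}

def splits(plan, k):
    """All ways to cut `plan` into k non-empty contiguous segments."""
    for cuts in combinations(range(1, len(plan)), k - 1):
        idx = (0,) + cuts + (len(plan),)
        yield [tuple(plan[idx[i]:idx[i + 1]]) for i in range(k)]

def merge(a, b):
    out = defaultdict(float, a)
    for key, value in b.items():
        out[key] += value
    return out

def derivations(symbol, plan, grammar):
    """Yield (probability, production counts) for every derivation of `symbol`
    whose yield is exactly the action sequence `plan`."""
    if symbol in PRIMITIVES:
        if len(plan) == 1 and plan[0] == symbol:
            yield 1.0, defaultdict(float)
        return
    for rhs, prob in grammar[symbol]:
        for segments in splits(plan, len(rhs)):  # split the plan among sub-tasks
            partials = [(1.0, defaultdict(float))]
            for sub, segment in zip(rhs, segments):
                partials = [(p * q, merge(c, d))
                            for p, c in partials
                            for q, d in derivations(sub, segment, grammar)]
            for p, counts in partials:
                counts[(symbol, rhs)] += 1
                yield prob * p, counts

def em_step(grammar, observed, root="Travel"):
    """E-step: expected production counts over weighted observed plans.
    M-step: renormalize the reductions of each task."""
    expected = defaultdict(float)
    for plan, weight in observed:
        ds = list(derivations(root, tuple(plan), grammar))
        total = sum(p for p, _ in ds)
        if total == 0:
            continue  # plan not derivable from the current grammar
        for p, counts in ds:
            for key, value in counts.items():
                expected[key] += weight * (p / total) * value
    new_grammar = {}
    for task, reductions in grammar.items():
        z = sum(expected[(task, rhs)] for rhs, _ in reductions) or 1.0
        new_grammar[task] = [(rhs, expected[(task, rhs)] / z) for rhs, _ in reductions]
    return new_grammar

# Observed plans with weights; unequal weights stand in for the rescaling step.
observed = [(("buy_ticket", "board_bus"), 1.0),
            (("buy_ticket", "board_train"), 1.0),
            (("drive", "park"), 2.0)]
for _ in range(5):
    grammar = em_step(grammar, observed)
print(grammar["Travel"])
```

On this toy data the reductions of the Travel task converge to 0.25 / 0.25 / 0.5, i.e. probability mass shifts toward the plans that are observed more often or given larger weight by the (hypothetical) rescaling.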
---
PDF link:
https://arxiv.org/pdf/1006.0274