Title:
Ontological Crises in Artificial Agents' Value Systems
---
Author:
Peter de Blanc
---
Year of latest submission:
2011
---
Classification:
Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
---
Abstract:
Decision-theoretic agents predict and evaluate the results of their actions using a model, or ontology, of their environment. An agent's goal, or utility function, may also be specified in terms of the states of, or entities within, its ontology. If the agent may upgrade or replace its ontology, it faces a crisis: the agent's original goal may not be well-defined with respect to its new ontology. This crisis must be resolved before the agent can make plans towards achieving its goals. We discuss in this paper which sorts of agents will undergo ontological crises and why we may want to create such agents. We present some concrete examples, and argue that a well-defined procedure for resolving ontological crises is needed. We point to some possible approaches to solving this problem, and evaluate these methods on our examples.
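As a toy illustration of the problem the abstract describes (a sketch only, not the paper's own formalism), the following Python snippet shows a utility function that is well-defined over an agent's original state space but undefined over a replacement state space, and one candidate resolution: a hypothetical `bridge` map from new states to distributions over old states, which induces a utility by expectation. All state names and the `bridge` map are illustrative assumptions.

```python
# Toy ontological crisis: a utility function keyed to an old ontology's
# states is undefined for the states of an upgraded ontology.

# Old ontology: the agent models the glass's contents as a simple fluid.
utility_old = {"glass_empty": 0.0, "glass_full": 1.0}

# New ontology: the agent remodels the contents as counts of molecules.
NEW_STATES = ["0_molecules", "1e23_molecules", "2e23_molecules"]

# Crisis: utility_old["1e23_molecules"] raises KeyError -- the original
# goal is not well-defined with respect to the new ontology.

# Hypothetical resolution: map each new state to a distribution over old
# states, then take the expected old-ontology utility.
bridge = {
    "0_molecules":    {"glass_empty": 1.0},
    "1e23_molecules": {"glass_empty": 0.5, "glass_full": 0.5},
    "2e23_molecules": {"glass_full": 1.0},
}

def induced_utility(new_state: str) -> float:
    """Expected old-ontology utility under the bridge distribution."""
    return sum(p * utility_old[s] for s, p in bridge[new_state].items())

for state in NEW_STATES:
    print(state, induced_utility(state))  # 0.0, 0.5, 1.0
```

The hard part, which the paper addresses, is where such a mapping between ontologies comes from in the first place; the hand-written `bridge` above simply assumes one exists.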
---
PDF link:
https://arxiv.org/pdf/1105.3821