Title:
《Typical models: minimizing false beliefs》
---
Author:
Eliezer L. Lozinskii
---
Latest submission year:
2011
---
Classification:
Primary: Computer Science
Secondary: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
---
Abstract:
A knowledge system S describing a part of the real world generally does not contain complete information. Reasoning with incomplete information is prone to errors, since any belief derived from S may be false in the present state of the world. A false belief may suggest wrong decisions and lead to harmful actions, so an important goal is to make false beliefs as unlikely as possible. This work introduces the notions of "typical atoms" and "typical models", and shows that reasoning with typical models minimizes the expected number of false beliefs over all ways of using incomplete information. Various properties of typical models are studied, in particular the correctness and stability of beliefs suggested by typical models, and their connection to oblivious reasoning.
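The abstract's central idea can be illustrated with a small, hedged sketch. Assuming (this reading is not taken from the paper itself) that a "typical" truth value for an atom is the value it takes in the majority of models of S, then believing that majority value minimizes the expected number of false beliefs when the true world is drawn uniformly from the models of S. The function names `models` and `typical_beliefs` below are hypothetical, chosen for this illustration:

```python
from itertools import product

def models(atoms, formula):
    """Enumerate all truth assignments over `atoms` satisfying `formula`."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def typical_beliefs(atoms, formula):
    """For each atom, believe the truth value it takes in the majority
    of models of the formula; remain agnostic (None) on a tie."""
    ms = models(atoms, formula)
    beliefs = {}
    for a in atoms:
        count_true = sum(m[a] for m in ms)
        if 2 * count_true > len(ms):
            beliefs[a] = True      # true in most models
        elif 2 * count_true < len(ms):
            beliefs[a] = False     # false in most models
        else:
            beliefs[a] = None      # no majority: no belief
    return beliefs

# Example of incomplete information: S = (p or q) and (p or r).
# S has 5 models; p is true in 4 of them, q and r in 3 each, so the
# majority beliefs are p=q=r=True. Believing q=True may still be wrong
# (the model p=True, q=False, r=True satisfies S), but under a uniform
# distribution over models no other belief assignment has a lower
# expected number of errors.
atoms = ["p", "q", "r"]
S = lambda m: (m["p"] or m["q"]) and (m["p"] or m["r"])
print(typical_beliefs(atoms, S))
```

This is only a toy reading of the construction; the paper's actual definitions of typical atoms and typical models, and the proof of optimality, are in the full text linked below.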
---
PDF link:
https://arxiv.org/pdf/1105.3833