Abstract:
This paper proposes a careful separation between an entity's epistemic system and its decision system. Crucially, Bayesian counterfactuals are estimated by the epistemic system, not by the decision system. Based on this remark, I prove the existence of Newcomb-like problems for which an epistemic system necessarily expects the entity to make a counterfactually bad decision. I then address (a slight generalization of) Newcomb's paradox. I solve the specific case where the player believes that the predictor applies Bayes' rule to a superset of all the data available to the player. I prove that the counterfactual optimality of the 1-Box strategy depends on the player's prior on the predictor's additional data. If these additional data are not expected to sufficiently reduce the predictor's uncertainty about the player's decision, then the player's epistemic system will counterfactually prefer to 2-Box. But if the predictor's data are believed to make them quasi-omniscient, then 1-Box will be counterfactually preferred. Implications of the analysis are then discussed. More generally, I argue that, to better understand or design an entity, it is useful to clearly separate the entity's epistemic and decision systems, but also its data collection, reward and maintenance systems, whether the entity is human, algorithmic or institutional.
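The abstract's threshold claim can be made concrete with the textbook version of the Newcomb computation. The sketch below is only an illustration under standard assumptions (the usual $1,000,000 / $1,000 payoffs and a single accuracy parameter p, the player's credence that the predictor guesses their decision correctly); it is not the paper's formal model, which instead works with the player's full prior over the predictor's additional data.

```python
# Minimal sketch (not from the paper): the textbook Newcomb expected-payoff
# comparison, parameterized by a single predictor-accuracy credence p.

OPAQUE = 1_000_000   # in the opaque box iff the predictor expected 1-Box
TRANSPARENT = 1_000  # always in the transparent box

def expected_payoffs(p: float) -> tuple[float, float]:
    """Expected payoffs of 1-Box and 2-Box given predictor accuracy p."""
    one_box = p * OPAQUE                       # opaque box filled with prob. p
    two_box = TRANSPARENT + (1 - p) * OPAQUE   # filled with prob. 1 - p
    return one_box, two_box

for p in (0.50, 0.51, 0.99):
    ev1, ev2 = expected_payoffs(p)
    choice = "1-Box" if ev1 > ev2 else "2-Box"
    print(f"p={p:.2f}: EV(1-Box)={ev1:,.0f}  EV(2-Box)={ev2:,.0f}  -> {choice}")
```

Under these payoffs the simple crossover sits at p = 0.5005. The paper's counterfactual analysis refines this picture: what matters there is how much the predictor's additional data is believed to reduce its uncertainty about the player's decision, not a bare accuracy number.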
---
Title:
Purely Bayesian counterfactuals versus Newcomb's paradox
---
Author:
Lê Nguyên Hoang
---
Latest submission year:
2020
---
Classification:
Primary category: Economics
Secondary category: Theoretical Economics
Description: Includes theoretical contributions to Contract Theory, Decision Theory, Game Theory, General Equilibrium, Growth, Learning and Evolution, Macroeconomics, Market and Mechanism Design, and Social Choice.
--
Primary category: Computer Science
Secondary category: Artificial Intelligence
Description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
---
PDF link:
https://arxiv.org/pdf/2008.04256