Title:
The Arcade Learning Environment: An Evaluation Platform for General Agents
---
Authors:
Marc G. Bellemare, Yavar Naddaf, Joel Veness, Michael Bowling
---
Latest submission year:
2013
---
Classification:
Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
---
Abstract:
  In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available. 
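For illustration, here is a minimal sketch of the kind of domain-independent agent loop the abstract describes, written against the modern ale-py Python bindings (which postdate the original 2013 C++ release but expose the same ALEInterface described in the paper). The ROM filename is a placeholder, and the random policy stands in for the benchmark agents; this is an assumed usage sketch, not code from the paper.

```python
# Minimal random-agent episode against ALE, assuming the ale-py
# bindings (pip install ale-py) and a locally available Atari 2600
# ROM file. "breakout.bin" below is a hypothetical local path.
import random

from ale_py import ALEInterface

ale = ALEInterface()
ale.setInt("random_seed", 123)       # make the episode reproducible
ale.loadROM("breakout.bin")          # load one of the hundreds of games

actions = ale.getMinimalActionSet()  # game-specific legal actions
total_reward = 0.0

while not ale.game_over():
    a = random.choice(actions)       # domain-independent random policy
    total_reward += ale.act(a)       # one emulator step, returns reward

print(f"episode return: {total_reward}")
ale.reset_game()                     # ready for the next episode
```

Because every game is driven through this same interface, an agent written once can be evaluated unchanged across the full suite, which is what makes the paper's cross-game evaluation methodology possible.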
---
PDF link:
https://arxiv.org/pdf/1207.4708