English title:
Temporal Difference Updating without a Learning Rate
---
Authors:
Marcus Hutter and Shane Legg
---
Year of latest submission:
2008
---
Classification:
Primary category: Computer Science
Secondary category: Machine Learning (cs.LG)
Category description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on) including also robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
Primary category: Computer Science
Secondary category: Artificial Intelligence (cs.AI)
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
---
Abstract:
We derive an equation for temporal difference learning from statistical principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, so-called TD(lambda), but it lacks the parameter alpha that specifies the learning rate. In place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(lambda) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this we combine our update equation with both Watkins' Q(lambda) and Sarsa(lambda) and find that it again offers superior performance without a learning rate parameter.
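
To ground the comparison, the sketch below shows the standard tabular TD(lambda) update with accumulating eligibility traces that the abstract takes as its baseline. The `RandomWalk` environment, its `reset`/`step` interface, and all parameter values are illustrative assumptions, not taken from the paper; the point of interest is the fixed step size `alpha`, which the paper's derivation replaces with a learning rate computed per state transition (the replacement formula itself is given in the paper and not reproduced here).

```python
import numpy as np

class RandomWalk:
    """Hypothetical 5-state random walk, for illustration only: states 0..4,
    start in the middle, move left/right with equal probability; stepping off
    the right end gives reward 1, off the left end reward 0, ending the episode."""

    def __init__(self, n_states=5, seed=0):
        self.n = n_states
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.s = self.n // 2
        return self.s

    def step(self):
        nxt = self.s + (1 if self.rng.random() < 0.5 else -1)
        if nxt < 0 or nxt >= self.n:              # walked off an end: episode over
            return self.s, float(nxt >= self.n), True
        self.s = nxt
        return nxt, 0.0, False

def td_lambda(env, n_states, episodes, alpha=0.1, gamma=1.0, lam=0.9):
    """Tabular TD(lambda) with accumulating eligibility traces.
    `alpha` is the free learning-rate parameter that the paper eliminates."""
    V = np.zeros(n_states)                        # state-value estimates
    for _ in range(episodes):
        e = np.zeros(n_states)                    # traces reset at episode start
        s = env.reset()
        done = False
        while not done:
            s_next, r, done = env.step()
            # TD error: bootstrapped one-step target minus current estimate
            delta = r + gamma * V[s_next] * (not done) - V[s]
            e[s] += 1.0                           # accumulate trace for current state
            V += alpha * delta * e                # update all states in proportion to trace
            e *= gamma * lam                      # decay traces
            s = s_next
    return V

V = td_lambda(RandomWalk(), n_states=5, episodes=5000)
print(np.round(V, 2))  # should approach [1/6, 2/6, 3/6, 4/6, 5/6] for this chain
```

Every state update above is scaled by the same hand-tuned `alpha`; the update rule derived in the paper instead computes this scaling for each transition, which is what removes the need to tune a learning rate.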
---
PDF link:
https://arxiv.org/pdf/0810.5631