Translated abstract:
Policy iteration (PI) is a recursive process of policy evaluation and improvement for solving optimal decision-making/control problems, that is, reinforcement learning (RL) problems. PI also serves as a foundation for developing RL methods. This paper proposes two PI methods, called differential PI (DPI) and integral PI (IPI), together with their variants, for a general RL framework in continuous time and space (CTS), in which the environment is modeled by a system of ordinary differential equations (ODEs). The proposed methods inherit the ideas of PI from classical RL and optimal control and theoretically support the existing RL algorithms in CTS: TD-learning and value-gradient-based (VGB) greedy policy update. Fundamental mathematical properties, including admissibility and uniqueness of the solution to the Bellman equation (BE) as well as monotone improvement, convergence, and optimality of the solution to the Hamilton-Jacobi-Bellman equation (HJBE), are investigated in depth, improving upon the existing theory. Finally, simulations with an inverted-pendulum model, in model-based and partially model-free implementations, are presented to support the theory and to investigate it further.
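To make the setting concrete, here is a minimal sketch in assumed notation (the dynamics f, reward r, and discount rate \gamma below are illustrative choices for a discounted infinite-horizon formulation, not necessarily the paper's exact framework). With state x and action u evolving by ODEs,

    \dot{x}(t) = f(x(t), u(t)), \qquad V^\pi(x) = \int_0^\infty e^{-\gamma t}\, r\big(x(t), \pi(x(t))\big)\, dt, \quad x(0) = x,

the Bellman equation (BE) for a policy \pi and the HJBE for the optimal value V^* read

    \text{(BE)} \quad \gamma V^\pi(x) = r\big(x, \pi(x)\big) + \nabla V^\pi(x)^\top f\big(x, \pi(x)\big),
    \text{(HJBE)} \quad \gamma V^*(x) = \max_u \big[\, r(x, u) + \nabla V^*(x)^\top f(x, u) \,\big].

PI then alternates policy evaluation (solving the BE for V^\pi) with the value-gradient-based greedy improvement \pi'(x) \in \arg\max_u \big[\, r(x, u) + \nabla V^\pi(x)^\top f(x, u) \,\big].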
---
English title:
Policy Iterations for Reinforcement Learning Problems in Continuous Time and Space -- Fundamental Theory and Methods
---
Authors:
Jaeyoung Lee and Richard S. Sutton
---
Latest submission year:
2020
---
Classification:
Primary category: Computer Science
Secondary category: Artificial Intelligence
Category description: Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
--
Primary category: Computer Science
Secondary category: Machine Learning
Category description: Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on) including also robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
--
Primary category: Computer Science
Secondary category: Systems and Control
Category description: cs.SY is an alias for eess.SY. This section includes theoretical and experimental research covering all facets of automatic control systems. The section is focused on methods of control system analysis and design using tools of modeling, simulation and optimization. Specific areas of research include nonlinear, distributed, adaptive, stochastic and robust control in addition to hybrid and discrete event systems. Application areas include automotive and aerospace control systems, network control, biological systems, multiagent and cooperative control, robotics, reinforcement learning, sensor networks, control of cyber-physical and energy-related systems, and control of computing systems.
--
Primary category: Electrical Engineering and Systems Science
Secondary category: Systems and Control
Category description: This section includes theoretical and experimental research covering all facets of automatic control systems. The section is focused on methods of control system analysis and design using tools of modeling, simulation and optimization. Specific areas of research include nonlinear, distributed, adaptive, stochastic and robust control in addition to hybrid and discrete event systems. Application areas include automotive and aerospace control systems, network control, biological systems, multiagent and cooperative control, robotics, reinforcement learning, sensor networks, control of cyber-physical and energy-related systems, and control of computing systems.
--
---
English abstract:
Policy iteration (PI) is a recursive process of policy evaluation and improvement for solving an optimal decision-making/control problem, or in other words, a reinforcement learning (RL) problem. PI has also served as a foundation for developing RL methods. In this paper, we propose two PI methods, called differential PI (DPI) and integral PI (IPI), and their variants, for a general RL framework in continuous time and space (CTS), where the environment is modeled by a system of ordinary differential equations (ODEs). The proposed methods inherit the key ideas of PI in classical RL and optimal control and theoretically support the existing RL algorithms in CTS: TD-learning and value-gradient-based (VGB) greedy policy update. We also provide case studies, including 1) discounted RL and 2) optimal control tasks. Fundamental mathematical properties -- admissibility, uniqueness of the solution to the Bellman equation (BE), monotone improvement, convergence, and optimality of the solution to the Hamilton-Jacobi-Bellman equation (HJBE) -- are all investigated in depth and improved upon relative to the existing theory, in both the general framework and the case studies. Finally, the proposed methods are simulated with an inverted-pendulum model, in model-based and partially model-free implementations, to support the theory and to investigate it further.
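As a concrete, much-simplified instance of such a PI loop, the linear-quadratic special case admits a closed-form model-based evaluation step (Kleinman's iteration): each evaluation solves a Lyapunov equation, which plays the role of the BE for a quadratic value function, and each improvement is the VGB greedy update. The Python sketch below uses hypothetical linearized inverted-pendulum constants for illustration; it is not the paper's general method or its implementation.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical linearized inverted pendulum near the upright equilibrium:
# x = [angle, angular velocity], x' = A x + B u (constants are illustrative).
g, l, m = 9.81, 1.0, 1.0
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])           # upright equilibrium is unstable
B = np.array([[0.0],
              [1.0 / (m * l ** 2)]])
Q = np.eye(2)                          # state cost  x' Q x
R = np.eye(1)                          # control cost  u' R u

K = np.array([[20.0, 6.0]])            # initial admissible (stabilizing) policy u = -K x

for _ in range(20):
    Ak = A - B @ K                     # closed-loop dynamics under the current policy
    # Policy evaluation: solve the Lyapunov (Bellman) equation
    #   Ak' P + P Ak + Q + K' R K = 0  for the quadratic value V(x) = x' P x.
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Value-gradient-based greedy improvement: K <- R^{-1} B' P.
    K_next = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_next - K) < 1e-9:
        K = K_next
        break
    K = K_next

print("converged gain K =", K)         # approaches the optimal LQR (Riccati) gain

Under these assumptions, the value matrices P improve monotonically and K converges to the optimal gain, mirroring the monotone-improvement and convergence properties discussed in the abstract.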
---
PDF link:
https://arxiv.org/pdf/1705.0352