Access Restriction Open

Author Wawrzynski, Pawel ♦ Pacut, Andrzej
Source CiteSeerX
Content type Text
File Format PDF
Language English
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword Model-free Off-policy Reinforcement Learning ♦ Reinforcement Learning ♦ Action Space ♦ Continuous Environment ♦ Available Information ♦ Agent-Environment Interaction ♦ Continuous State ♦ Entire History ♦ Classical Reinforcement ♦ Estimation Process ♦ Stochastic Convergence ♦ Control Policy
Description Proceedings of the INNS-IEEE International Joint Conference on Neural Networks
We introduce a reinforcement learning algorithm for continuous state and action spaces. To construct a control policy, the algorithm utilizes the entire history of agent-environment interaction. The policy is the result of an estimation process based on all available information, rather than the result of stochastic convergence as in classical reinforcement learning approaches. The policy is derived from the history directly, not through any kind of model of the environment.
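The idea in the abstract — a model-free, off-policy policy obtained by estimation from the entire stored interaction history — can be illustrated with a minimal toy sketch. This is not the paper's algorithm; the task, names, and the importance-weighting estimator are illustrative assumptions. A fixed behavior policy fills the history, and the expected return of any candidate policy is then estimated from that whole history by reweighting, with the policy chosen to maximize the estimate:

```python
import math
import random

random.seed(0)
SIGMA = 0.5  # fixed exploration noise (assumed, for illustration)

def gauss_pdf(x, mu, sigma=SIGMA):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Collect the entire history under one fixed behavior policy (mean 0.0).
# The reward function is unknown to the learner; its optimum is at a = 0.5.
mu_b = 0.0
history = []
for _ in range(2000):
    a = random.gauss(mu_b, SIGMA)      # continuous action
    r = -(a - 0.5) ** 2                # scalar reward
    history.append((a, r))

def estimated_return(mu):
    """Self-normalized importance-weighted estimate of the expected
    reward of the policy N(mu, SIGMA^2), using the whole history —
    no environment model, and no data from the candidate policy itself."""
    num = den = 0.0
    for a, r in history:
        w = gauss_pdf(a, mu) / gauss_pdf(a, mu_b)
        num += w * r
        den += w
    return num / den

# The control policy is the result of estimation over all stored data:
# pick the policy mean that maximizes the estimated return on a coarse grid.
best_mu = max((m / 100 for m in range(-100, 151)), key=estimated_return)
```

The sketch mirrors the abstract's key claims in miniature: every recorded sample contributes to the estimate (the entire history is reused), the evaluated policy differs from the one that generated the data (off-policy), and no model of the reward function is built (model-free).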
Educational Role Student ♦ Teacher
Age Range above 22 years
Educational Use Research
Education Level UG and PG ♦ Career/Technical Study
Learning Resource Type Article
Publisher Date 2004-01-01