Access Restriction
Open

Author Buck, Sebastian ♦ Beetz, Michael ♦ Schmitt, Thorsten
Source CiteSeerX
Content type Text
File Format PDF
Language English
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword Characteristic Shape ♦ Accurate Value Function ♦ Exploration Run ♦ Continuous Space Reinforcement Learning ♦ Command Parameter ♦ Continuous Space Value Function ♦ Learned Function ♦ State Space ♦ System State ♦ Neural Network ♦ Possible Problem ♦ Value Function ♦ Robot Control ♦ Robot Navigation Task
Description Many robot learning tasks are very difficult to solve: their state spaces are high-dimensional, variables and command parameters are continuously valued, and system states are only partly observable. In this paper, we propose to learn a continuous-space value function for reinforcement learning using neural networks trained on data from exploration runs. The learned function is guaranteed to be a lower bound for, and reproduces the characteristic shape of, the accurate value function. We apply our approach to two robot navigation tasks, discuss how to deal with possible problems occurring in practice, and assess its performance.
In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2002
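The sketch below is purely illustrative of the general idea named in the description (fitting a neural network value function to returns gathered during exploration runs); it is not the authors' implementation, and the names ValueNet, monte_carlo_returns, and fit_value_function are hypothetical. In particular, the paper's lower-bound guarantee depends on how the regression targets are constructed, which is not reproduced here.

```python
# Minimal sketch, assuming PyTorch: regress a continuous-space value
# function V(s) onto discounted returns observed in exploration runs.
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Hypothetical network; the architecture is not taken from the paper."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)

def monte_carlo_returns(rewards, gamma: float = 0.99):
    """Discounted return for every step of a single exploration run."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

def fit_value_function(states, rewards, state_dim: int, epochs: int = 200):
    """Supervised regression of V(s) onto the returns of one exploration run."""
    inputs = torch.tensor(states, dtype=torch.float32)
    targets = torch.tensor(monte_carlo_returns(rewards), dtype=torch.float32)
    model = ValueNet(state_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        opt.step()
    return model
```

Usage would be to collect (state, reward) trajectories from exploration runs of the robot, call fit_value_function on each batch of data, and use the learned V(s) to rank candidate commands during navigation.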
Educational Role Student ♦ Teacher
Age Range above 22 years
Educational Use Research
Education Level Undergraduate and Postgraduate ♦ Career/Technical Study
Learning Resource Type Article
Publisher Date 2002-01-01