Access Restriction Open

Author Dean, Thomas ♦ Kaelbling, Leslie Pack ♦ Kirman, Jak ♦ Nicholson, Ann
Source CiteSeerX
Content type Text
File Format PDF
Language English
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword Efficient Planning ♦ Optimal Policy ♦ Markov Decision Problem ♦ Future Reward ♦ Complete Plan ♦ Prioritized Combination ♦ Transition Probability ♦ State Variable ♦ Reward Function ♦ World State ♦ Standard Goal ♦ Stochastic Domain ♦ Starting State
Description In Proceedings of the Eleventh National Conference on Artificial Intelligence
We provide a method, based on the theory of Markov decision problems, for efficient planning in stochastic domains. Goals are encoded as reward functions, expressing the desirability of each world state; the planner must find a policy (mapping from states to actions) that maximizes future rewards. Standard goals of achievement, as well as goals of maintenance and prioritized combinations of goals, can be specified in this way. An optimal policy can be found using existing methods, but these methods are at best polynomial in the number of states in the domain, where the number of states is exponential in the number of propositions (or state variables). By using information about the starting state, the reward function, and the transition probabilities of the domain, we can restrict the planner's attention to a set of world states that are likely to be encountered in satisfying the goal. Furthermore, the planner can generate more or less complete plans depending on the time it has avail...
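The abstract's core setup — a reward function over world states and a policy mapping states to actions that maximizes future reward — can be illustrated by value iteration on a toy problem. This is a minimal sketch, not the paper's method: the two-state domain, its rewards, transition probabilities, and the discount factor below are all illustrative assumptions.

```python
# Sketch: finding an optimal policy for a hypothetical two-state Markov
# decision problem by value iteration. All numbers here are assumptions
# chosen for illustration, not drawn from the paper.

states = [0, 1]
actions = ["stay", "move"]

# P[s][a] -> list of (next_state, probability); transition model of the domain.
P = {
    0: {"stay": [(0, 0.9), (1, 0.1)], "move": [(1, 0.8), (0, 0.2)]},
    1: {"stay": [(1, 1.0)], "move": [(0, 1.0)]},
}
# Goal encoded as a reward function expressing the desirability of each state.
R = {0: 0.0, 1: 1.0}
gamma = 0.9  # discount applied to future rewards

def value_iteration(eps=1e-6):
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Expected discounted value of each action in state s.
            q = {a: R[s] + gamma * sum(p * V[t] for t, p in P[s][a])
                 for a in actions}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # The policy maps each state to the action with the highest expected value.
    policy = {s: max(actions,
                     key=lambda a: R[s] + gamma * sum(p * V[t]
                                                      for t, p in P[s][a]))
              for s in states}
    return V, policy

V, policy = value_iteration()
print(policy)  # -> {0: 'move', 1: 'stay'}
```

Note the cost the abstract highlights: each sweep touches every state, so such methods are at best polynomial in the number of states, which itself grows exponentially with the number of state variables — hence the paper's focus on restricting attention to likely-to-be-encountered states.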
Educational Role Student ♦ Teacher
Age Range Above 22 years
Educational Use Research
Education Level UG and PG ♦ Career/Technical Study
Learning Resource Type Article
Publisher Date 1993-01-01