Planning to Maximize Reward: Markov Decision Processes
Brian C. Williams
16.410
December 8th, 2003
Slides adapted from: Manuela Veloso, Reid Simmons, & Tom Mitchell, CMU

Reading and Assignments
• Markov Decision Processes: Read AIMA Chapter 17, Sections 1–3.
• This lecture is based on the development in: "Machine Learning" by Tom Mitchell, Chapter 13: Reinforcement Learning.
How Might a Mouse Search a Maze for Cheese?
(Figure: a maze with cheese at the goal.)
• State space search?
• As a constraint satisfaction problem?
• Goal-directed planning?
• As a rule or production system?
• What is missing?

Ideas in this lecture
• Objective is to accumulate rewards, rather than to reach goal states.
• Task is to generate policies for how to act in all situations, rather than a plan for a single starting situation.
• Policies fall out of value functions, which describe the greatest lifetime reward achievable at every state.
• Value functions are iteratively approximated (see the sketch after this list).
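To make the last two ideas concrete, here is a minimal sketch, not from the slides, of iteratively approximating a value function and reading the greedy policy off it. The tiny deterministic transition table, the rewards, and the discount factor gamma = 0.9 are all illustrative assumptions.

```python
# Minimal sketch: iterative value-function approximation (value iteration)
# on a tiny, made-up deterministic MDP.  States, transitions, rewards, and
# gamma are illustrative assumptions, not from the lecture slides.

# delta maps (state, action) -> next state; reward maps (state, action) -> immediate reward.
delta  = {("s0", "right"): "s1", ("s1", "right"): "goal", ("s1", "left"): "s0"}
reward = {("s0", "right"): 0.0,  ("s1", "right"): 10.0,  ("s1", "left"): 0.0}
gamma  = 0.9                                  # discount factor (assumed)

states = ["s0", "s1", "goal"]
V = {s: 0.0 for s in states}                  # initial value-function estimate

def actions(s):
    """Actions available in state s (the goal has none in this toy model)."""
    return [a for (st, a) in delta if st == s]

# Iteratively approximate the value function.
for _ in range(50):                           # enough sweeps to converge here
    for s in states:
        if actions(s):
            V[s] = max(reward[(s, a)] + gamma * V[delta[(s, a)]] for a in actions(s))

# The policy "falls out" of the value function: act greedily with respect to V.
policy = {s: max(actions(s), key=lambda a: reward[(s, a)] + gamma * V[delta[(s, a)]])
          for s in states if actions(s)}
print(V)       # e.g. {'s0': 9.0, 's1': 10.0, 'goal': 0.0}
print(policy)  # e.g. {'s0': 'right', 's1': 'right'}
```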
MDP Examples: TD-Gammon [Tesauro, 1995]
Learning through reinforcement: learns to play Backgammon.
• States: board configurations (~10^20)
• Actions: moves
• Rewards: +100 if win, -100 if lose, 0 for all other states
• Trained by playing 1.5 million games against itself.
⇒ Currently, roughly equal to the best human player.

MDP Examples: Aerial Robotics [Feron et al.]
Computing a solution from a continuous model.
(Images courtesy of Eric Feron.)
Markov Decision Processes
• Motivation
• What are Markov Decision Processes (MDPs)?
  • Models
  • Lifetime reward
  • Policies
• Computing policies from a model
• Summary

MDP Problem
(Figure: agent-environment loop. The agent observes state and reward and chooses an action; the environment returns the next state and reward: s0, a0, r0; s1, a1, r1; s2, a2, r2; s3, ...)
Given an environment model as an MDP, create a policy for acting that maximizes lifetime reward.
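The sketch below shows the agent-environment loop from the diagram in code, with the same s_t, a_t, r_t naming as the slide. The `ToyEnvironment` class and the random placeholder policy are assumptions made only so the loop runs; they are not part of the lecture's model.

```python
# Sketch of the agent-environment loop: at each step the agent observes the
# state, chooses an action, and receives a reward; the environment returns
# the next state.  Environment and policy are placeholder assumptions.
import random

class ToyEnvironment:
    """Hypothetical two-state environment, used only to make the loop runnable."""
    def __init__(self):
        self.state = "s0"
    def step(self, action):
        next_state = "s1" if action == "go" else "s0"
        reward = 10.0 if next_state == "s1" else 0.0
        self.state = next_state
        return next_state, reward

def policy(state):
    """Placeholder policy: pick an action at random."""
    return random.choice(["go", "stay"])

env = ToyEnvironment()
state, lifetime_reward = env.state, 0.0
for t in range(5):                        # s0, a0, r0, s1, a1, r1, ...
    action = policy(state)                # agent chooses a_t given s_t
    state, reward = env.step(action)      # environment returns s_{t+1} and r_t
    lifetime_reward += reward             # accumulate lifetime reward
    print(f"t={t}: a={action}, r={reward}, next state={state}")
```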
MDP Problem: Model
(Figure: the same agent-environment loop as above, now emphasizing the environment model.)
Given an environment model as an MDP, create a policy for acting that maximizes lifetime reward.

Markov Decision Processes (MDPs)
Model:
• Finite set of states, S
• Finite set of actions, A
• (Probabilistic) state transitions, δ(s, a)
• Reward for each state and action, R(s, a)
Process:
• Observe state s_t in S
• Choose action a_t in A
• Receive immediate reward r_t
• State changes to s_{t+1}
Example:
(Figure: a small grid world with goal state G; legal transitions are shown, transitions into G have reward 10, and the reward on unlabeled transitions is 0.)
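One way the four model components S, A, δ(s, a), and R(s, a) might be represented is sketched below, with the probabilistic transitions encoded as distributions over next states. The particular states, probabilities, and rewards are invented for illustration; this is not the grid example from the slide.

```python
# Sketch of the MDP model components: finite S and A, probabilistic
# transitions delta(s, a), and rewards R(s, a).  The concrete numbers are
# illustrative assumptions, not the slide's grid example.
import random

S = ["s0", "s1", "G"]                 # finite set of states (G = goal)
A = ["left", "right"]                 # finite set of actions

# delta(s, a): distribution over next states, as {next_state: probability}.
delta = {
    ("s0", "right"): {"s1": 0.9, "s0": 0.1},
    ("s0", "left"):  {"s0": 1.0},
    ("s1", "right"): {"G": 0.9, "s1": 0.1},
    ("s1", "left"):  {"s0": 1.0},
}

# R(s, a): immediate reward; only the action that heads for G pays 10 here.
R = {(s, a): (10.0 if (s, a) == ("s1", "right") else 0.0) for (s, a) in delta}

def sample_transition(s, a):
    """Draw s_{t+1} from delta(s, a) and return it with the reward r_t = R(s, a)."""
    dist = delta[(s, a)]
    next_s = random.choices(list(dist), weights=list(dist.values()))[0]
    return next_s, R[(s, a)]

print(sample_transition("s1", "right"))   # e.g. ('G', 10.0)
```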