
Introduction

A Markov decision process (MDP) is a mathematical framework used for modeling decision-making problems in which the outcomes are partly random and partly under the control of the decision maker.


Terminology

Agent: the agent is the entity that we train to make correct decisions (for example, we teach a robot how to move around a house without crashing).

Environment: the surroundings with which the agent interacts (a house). The agent cannot manipulate its surroundings; it can only control its own actions (a robot cannot move a table in the house, but it can walk around it in order to avoid crashing).

State: the state describes the current situation of the agent (the robot can be in a particular room of the house, or in a particular posture; what counts as a state depends on the point of view).

Action: the choice that the agent makes at the current step (move left, move right, stand up, bend over, etc.). All possible actions are known in advance.

Characteristics

Markov Property

The Markov property says that the current state of the agent (for example, a robot) depends solely on the previous state and does not depend in any way on the states the agent was in prior to the previous state.
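Formally, the Markov property states that the probability of the next state conditioned on the entire history equals the probability conditioned on the current state alone:

<math>P[S_{t+1} \mid S_t] = P[S_{t+1} \mid S_1, S_2, \ldots, S_t]</math>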

Markov Process/Markov Chain

<math>P_{ss'} = P[S_{t+1} = s' \mid S_t = s]</math>

A Markov process is defined by (S, P), where S is the set of states and P is the state-transition probability. It consists of a sequence of random states s1, s2, ..., in which every state obeys the Markov property. P_ss' is the probability of jumping to state s' from the current state s.

Let's consider the example of an automatic vacuum cleaner. When it is next to a wall, there is a 10% probability that it will crash into the wall and a 90% probability that it will change direction and proceed with cleaning. So the probability of state s' (crashed, in our case) is 0.1 with respect to the current state (next to a wall).
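Below is a minimal sketch of how such a chain could be simulated. The state names and the wall probabilities (0.1 crash, 0.9 keep cleaning) come from the example above; the remaining probabilities and the function names are illustrative assumptions.

<syntaxhighlight lang="python">
import random

# Transition probabilities P_ss' of the vacuum cleaner chain.
# The "next_to_wall" row follows the example above; the "cleaning" row
# is an assumed placeholder, and "crashed" is an absorbing state.
TRANSITIONS = {
    "next_to_wall": [("crashed", 0.1), ("cleaning", 0.9)],
    "cleaning":     [("next_to_wall", 0.5), ("cleaning", 0.5)],
    "crashed":      [("crashed", 1.0)],
}

def step(state):
    """Sample the next state s' given only the current state s (Markov property)."""
    next_states, probs = zip(*TRANSITIONS[state])
    return random.choices(next_states, weights=probs, k=1)[0]

def simulate(start="cleaning", n_steps=10):
    """Generate a sequence of random states s1, s2, ... starting from 'start'."""
    trajectory = [start]
    for _ in range(n_steps):
        trajectory.append(step(trajectory[-1]))
    return trajectory

print(simulate())
</syntaxhighlight>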


Markov Reward Process

<math>R_s = E[R_{t+1} \mid S_t = s]</math>

A Markov reward process is defined by (S, P, R, γ), where S is the set of states, P is the state-transition probability, R is the reward and γ is the discount factor. R_s is the expected reward over all states that can be reached from state s. By convention, the reward is received after the agent leaves the state and is therefore denoted R_(t+1).

For example, consider the already mentioned automatic vacuum cleaner: if it decides to go cleaning near a wall, there is a 10% probability that it will crash, which yields a utility of -5; however, if it manages to avoid the wall and cleans the area successfully (the probability is 90%, as there are only two possible outcomes), it receives a utility of 10. Hence the expected reward the vacuum cleaner gets when going close to the wall is 0.1 * (-5) + 0.9 * 10 = (-0.5) + 9 = 8.5
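The same expected-reward calculation can be written as a short sketch. The probabilities and utilities come from the example above; the helper function itself is hypothetical.

<syntaxhighlight lang="python">
def expected_reward(outcomes):
    """Expected immediate reward R_s: sum of P(s' | s) * reward over all outcomes."""
    return sum(prob * reward for prob, reward in outcomes)

# Vacuum cleaner near the wall: 10% crash (utility -5), 90% successful cleaning (utility 10).
near_wall = [(0.1, -5), (0.9, 10)]
print(expected_reward(near_wall))  # 0.1 * (-5) + 0.9 * 10 = 8.5
</syntaxhighlight>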