Markov Decision Processes Verified Solutions
Markov decision processes ✔️✔️- MDPs formally describe an environment for reinforcement learning
- environment is fully observable
- current state completely characterizes the process
- almost all RL problems can be formalised as MDPs
- optimal control primarily deals with continuous MDPs
- Partially observable problems can be converted into MDPs
- Bandits are MDPs with one state
Markov Property ✔️✔️- the future is independent of the past given the present (formal statement below)
- the state captures all relevant information from the history
- once the state is known the history can be thrown away
- the state is a sufficient statistic of the future
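Written out, the standard form of this property is:
P[S_{t+1} \mid S_t] = P[S_{t+1} \mid S_1, \dots, S_t]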
State transition Matrix ✔️✔️- for a Markov state s and successor state s', the state transition probability is the probability of moving from s to s' (see the formula below)
- state transition matrix P defines transition probabilities from all states s to all successor states s'
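In symbols (standard definition), the transition probability that fills the matrix is:
P_{ss'} = P[S_{t+1} = s' \mid S_t = s]
where each row of P = (P_{ss'}) sums to 1.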
Markov Process ✔️✔️- a Markov process is a memoryless random process, i.e., a sequence of random states S1, S2, ... with the Markov property (see the sketch after this definition)
-Markov process (or Markov Chain) is a tuple <S,P>
- S is a (finite) set of states
- P is a state transition probability matrix
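A minimal sketch of what <S, P> means in practice: sampling a trajectory from a Markov chain. The states, transition matrix, and seed below are illustrative assumptions (not from the notes); NumPy is assumed to be available.

```python
import numpy as np

# Hypothetical 2-state Markov chain <S, P> (illustrative values only)
states = ["sunny", "rainy"]
P = np.array([[0.9, 0.1],   # transition probabilities from "sunny"
              [0.5, 0.5]])  # transition probabilities from "rainy"

rng = np.random.default_rng(0)
s = 0                        # start in "sunny"
trajectory = [states[s]]
for _ in range(10):
    # sample the successor state from row s of the transition matrix
    s = rng.choice(len(states), p=P[s])
    trajectory.append(states[s])

print(trajectory)            # one memoryless random sequence of states
```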
Markov reward process ✔️✔️- a Markov reward process is a Markov chain with values
- a Markov reward process is a tuple <S, P, R, γ>
- S is a finite set of states
- P is a state transition probability matrix
- R is a reward function (defined below)
- γ is a discount factor
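In symbols (standard definitions matching the tuple above):
R_s = E[R_{t+1} \mid S_t = s], \qquad \gamma \in [0, 1]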
Return ✔️✔️- the return Gt is the total discounted reward from time-step t (formula below)
- the discount γ is the present value of future rewards
- the value of receiving reward R after k+1 time-steps is γ^k R
- this values immediate reward above delayed reward
- γ close to 0 leads to "myopic" evaluation
- γ close to 1 leads to "far-sighted" evaluation
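The return written out in the standard form:
G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}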
Discount ✔️✔️- mathematically convenient to discount rewards
- Avoids infinite returns in cyclic Markov Processes
- Uncertainty about the future may not be fully represented
- if reward is financial, immediate rewards may earn more interest than delayed rewards
- animal/human behavior shows preference for immediate reward
- sometimes possible to use undiscounted Markov reward processes if all sequences terminate
Value Function ✔️✔️- the value function v(s) gives the long-term value of state s (formal definition below)
- state value function v(s) of an MRP is the expected return starting from state s
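In symbols (standard definition):
v(s) = E[G_t \mid S_t = s]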
Bellman Equation for MRPs ✔️✔️- the value function can be decomposed into two parts (full equation below):
- immediate reward Rt+1
- discounted value of successor state γv(St+1)
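Putting the two parts together gives the standard Bellman equation for an MRP:
v(s) = E[R_{t+1} + \gamma v(S_{t+1}) \mid S_t = s] = R_s + \gamma \sum_{s' \in S} P_{ss'} v(s')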
Bellman Equation in Matrix Form ✔️✔️- Bellman equation can be expressed concisely using matrices,
v = R + γPv
- v is a column vector with one entry per state (a worked solution sketch follows below)
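Because this matrix equation is linear, it can be solved directly as v = (I - γP)^{-1} R. A minimal NumPy sketch under assumed values; the 3-state transition matrix, rewards, and discount factor below are made up for illustration:

```python
import numpy as np

# Hypothetical 3-state MRP (illustrative values, not from the notes)
P = np.array([[0.7, 0.2, 0.1],   # transition probabilities from state 0
              [0.1, 0.8, 0.1],   # from state 1
              [0.2, 0.3, 0.5]])  # from state 2
R = np.array([1.0, 0.0, -1.0])   # expected immediate reward in each state
gamma = 0.9                      # discount factor

# Bellman equation in matrix form: v = R + gamma * P @ v
# Rearranged: (I - gamma * P) v = R, solved as a linear system
v = np.linalg.solve(np.eye(len(R)) - gamma * P, R)
print(v)                         # long-term value of each state
```

np.linalg.solve avoids forming the inverse explicitly; for large state spaces this direct solution becomes expensive, and iterative methods (e.g. dynamic programming) are preferred.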