Reinforcement Learning + Markov Decision Processes

Reinforcement learning generally ✔️✔️given inputs x and reinforcement signals z, where the z's are used to learn a function that produces the correct output y for each input



y = f(x), learned from pairs of x and z



Markov Decision Process ✔️✔️in reinforcement learning we want our agent to learn a ___ ___ ___.

For this we need to discretize the states, the time and the actions.



states in MDP ✔️✔️states are the set of tokens that represent every state one could be in (the set can include a state even if we never actually go there)



model in MDP ✔️✔️aka the transition function

the rules of the game: a function of a state, an action, and another state that gives the probability of transitioning to the second state given that you were in the first state and took the action, i.e. T(s, a, s') = P(s' | s, a)
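A minimal sketch of such a transition model in Python (the states "s0".."s2", the actions, and the slip probabilities are hypothetical toy values, not from the card):

# Transition model T(s, a, s') = P(s' | s, a) as a nested mapping.
T = {
    ("s0", "right"): {"s1": 0.8, "s0": 0.2},  # mostly move, sometimes slip
    ("s1", "right"): {"s2": 0.8, "s1": 0.2},
    ("s1", "left"):  {"s0": 0.8, "s1": 0.2},
}

def transition_prob(s, a, s_next):
    # Probability of landing in s_next after taking action a in state s.
    return T.get((s, a), {}).get(s_next, 0.0)

print(transition_prob("s0", "right", "s1"))  # 0.8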



actions in MDP ✔️✔️the things you can do (or are allowed to do) in a particular state, e.g. up, down, left, right






how to get around the Markovian property, and why the workaround could be bad ✔️✔️you can make the state remember everything you need from the past

but this means you might visit each state only once, which would make it hard to learn anything



properties of Markov decision making ✔️✔️- only the present matters

- the rules don't change over time (stationary)

reward in MDP ✔️✔️- a scalar value for being in a state: if you reach the goal you get a dollar; if you reach the bad state you lose a dollar

- different ways to write the reward: R(s), R(s,a), R(s,a,s')

- the reward is usually delayed
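A quick sketch of the three reward signatures listed above (the dollar values echo the card's toy example; the state names and the way the variants defer to one another are hypothetical):

# Three common reward signatures in MDP formulations.
def R_state(s):
    # R(s): reward for simply being in state s.
    return {"goal": +1.0, "bad": -1.0}.get(s, 0.0)

def R_state_action(s, a):
    # R(s, a): reward for taking action a in state s.
    return R_state(s)  # toy choice: ignore the action

def R_state_action_next(s, a, s_next):
    # R(s, a, s'): reward for the full transition s --a--> s'.
    return R_state(s_next)  # toy choice: reward tied to where you land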



policy in MDP ✔️✔️a function that takes in a state and returns an action (as a command)

- not a sequence of actions, just the single action to take in a particular state (the next best thing to do)

- plotted over a grid of states, it looks like a vector field
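A minimal sketch of a policy as a lookup table (the grid coordinates and arrow actions are hypothetical); printing it over the grid shows the "vector field" look mentioned above:

# A policy maps each state to a single action, not a sequence of actions.
policy = {
    (0, 0): "right", (0, 1): "right", (0, 2): "up",
    (1, 0): "up",    (1, 1): "left",  (1, 2): "up",
}

def act(state):
    # Query the policy: given a state, return the action to take.
    return policy[state]

# Rendered over the grid, the arrows resemble a vector field.
arrows = {"up": "^", "down": "v", "left": "<", "right": ">"}
for row in range(2):
    print(" ".join(arrows[policy[(row, col)]] for col in range(3)))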



how to find the solution in MDP ✔️✔️find the optimal policy, the one that maximizes the long-term expected reward

given a bunch of states (the x's), actions, and rewards (the z's), find the function that gives the optimal action (the y) in each state
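In symbols, this is the standard discounted objective (the discount factor γ, with 0 ≤ γ < 1, is an assumption; the card does not name it explicitly):

π* = argmax over π of E[ Σ_{t=0..∞} γ^t · R(s_t) | π ]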



temporal credit assignment problem ✔️✔️- refers to the fact that rewards, especially in fine-grained state-action spaces, can arrive long after the states and actions that caused them

- such reward signals only very weakly affect the temporally distant states that preceded them

- it is almost as if the influence of a reward gets more and more diluted over time, and this can lead to poor convergence of the RL mechanism

- any iterative reinforcement-learning algorithm needs many steps to propagate the influence of delayed reinforcement back to all the states and actions that had an effect on that reinforcement (see the sketch below)
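A minimal value-iteration sketch makes that propagation concrete (the chain of states, the deterministic transitions, and γ = 0.9 are all hypothetical toy choices): each sweep pushes the goal reward's influence exactly one state further back.

# Value propagation on a 4-state chain: s0 -> s1 -> s2 -> goal.
states = ["s0", "s1", "s2", "goal"]
next_state = {"s0": "s1", "s1": "s2", "s2": "goal", "goal": "goal"}
R = {s: 0.0 for s in states}
R["goal"] = 1.0   # the only (delayed) reward sits at the goal
gamma = 0.9

V = {s: 0.0 for s in states}
for sweep in range(4):
    # Bellman backup; the goal is terminal, so its value is just R(goal).
    V = {s: R[s] + (gamma * V[next_state[s]] if s != "goal" else 0.0)
         for s in states}
    print(sweep, {s: round(v, 3) for s, v in V.items()})
# Sweep 0 credits only the goal; sweep 1 reaches s2 (0.9);
# sweep 2 reaches s1 (0.81); sweep 3 finally reaches s0 (0.729).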



why do you have a small negative reward for each step before terminating? ✔️✔️- it is like walking across a hot beach to reach the ocean: it encourages you to end the game rather than stay where you are



why do minor changes matter in MDP? ✔️✔️- because if you make the per-step reward less negative, you could end up in the bad area more often than you would with a harsher penalty

- if the per-step reward is too harsh, then even the bad terminal outcome may be better than staying in the game



what part of the MDP can incorporate our domain knowledge? ✔️✔️the reward - it encodes how important it is to get to the end
