ECSE 506: Stochastic Control and Decision Theory

Aditya Mahajan
Winter 2022

Lectures

I will post notes on some of the material covered in class whenever possible, but this is not guaranteed. This is a graduate class, and you are responsible for taking notes in class and reading the appropriate chapters of the textbooks.

The notes will be updated as we move along in the course. Please check the dates on the first page to keep track. If you find any typos/mistakes in the notes, please let me know. Pull requests welcome.

Week 1

Introduction and course overview.

Week 2

Examples of Markov decision processes (MDPs)

Week 3

Proof of optimality of dynamic programming
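
As a quick illustration of the recursion whose optimality is proved this week, here is a minimal backward-induction sketch for a finite, time-invariant MDP. The array names and shapes (`P`, `r`, `r_T`) are illustrative conventions, not notation from the notes.

```python
import numpy as np

def backward_induction(P, r, r_T, T):
    """Finite-horizon dynamic program (a sketch under assumed conventions).

    P   : (A, S, S) array, P[a, s, t] = Prob(next state t | state s, action a)
    r   : (S, A) array, expected per-stage reward
    r_T : (S,) array, terminal reward
    T   : horizon

    Returns the value functions V[0..T] and optimal decision rules g[0..T-1].
    """
    S, A = r.shape
    V = [None] * (T + 1)
    g = [None] * T
    V[T] = r_T
    for t in range(T - 1, -1, -1):
        # Q_t(s, a) = r(s, a) + E[ V_{t+1}(s') | s, a ]
        Q = r + np.einsum("ast,t->sa", P, V[t + 1])
        V[t] = Q.max(axis=1)     # dynamic programming recursion
        g[t] = Q.argmax(axis=1)  # optimal decision rule at time t
    return V, g
```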

Week 4

Monotonicity in Markov decision processes

Week 5

Introduction to infinite horizon discounted problems

Week 6

Bellman operators, value iteration, and policy iteration

  • Monotonicity, contraction, and their implications
  • Value iteration and stopping conditions (see the sketch after this list)
  • Policy iteration and convergence guarantees
  • See notes on infinite horizon MDPs
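
The following is a minimal sketch of value iteration with the standard ε-optimal stopping rule for a finite discounted MDP. The array names and shapes are assumed for illustration and are not the notation used in the notes.

```python
import numpy as np

def value_iteration(P, r, gamma, eps=1e-6):
    """Value iteration for a finite discounted MDP (a sketch).

    P     : (A, S, S) array, P[a, s, t] = Prob(next state t | state s, action a)
    r     : (S, A) array, expected per-stage reward
    gamma : discount factor, 0 < gamma < 1
    eps   : desired accuracy of the returned policy
    """
    S, A = r.shape
    V = np.zeros(S)
    # Stopping rule: if the sup-norm difference between successive iterates
    # falls below eps * (1 - gamma) / (2 * gamma), the greedy policy is
    # eps-optimal. This follows from the contraction property of the
    # Bellman operator.
    threshold = eps * (1 - gamma) / (2 * gamma)
    while True:
        Q = r + gamma * np.einsum("ast,t->sa", P, V)  # Q(s, a) under current V
        V_new = Q.max(axis=1)                         # (T V)(s)
        if np.max(np.abs(V_new - V)) < threshold:
            break
        V = V_new
    # Greedy policy w.r.t. the final iterate; by the stopping rule above it
    # is eps-optimal.
    Q = r + gamma * np.einsum("ast,t->sa", P, V_new)
    return V_new, Q.argmax(axis=1)
```

The stopping threshold here is where monotonicity and contraction (the first bullet above) earn their keep: the contraction property bounds the distance of the current iterate from the fixed point by the distance between successive iterates.
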
Week 7

Properties of value functions

Week 8

Approximate dynamic programming

Week 9

Model approximation

Week 10

Partially observable Markov decision processes (POMDPs)
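
The basic computation behind this week's material is the one-step belief update. Here is a minimal sketch for a finite POMDP; the kernel conventions and argument shapes are assumed for illustration.

```python
import numpy as np

def belief_update(b, a, y, P, O):
    """Bayesian belief update for a finite POMDP (a sketch).

    b : (S,) current belief over states
    a : action taken
    y : observation received
    P : (A, S, S) array, P[a, s, t] = Prob(next state t | state s, action a)
    O : (A, S, Y) array, O[a, t, y] = Prob(obs y | next state t, action a)

    Assumes observation y has positive probability under (b, a).
    """
    # Predict: push the belief through the transition kernel.
    pred = b @ P[a]                 # pred[t] = sum_s b[s] P[a, s, t]
    # Correct: weight by the observation likelihood and renormalize.
    unnorm = pred * O[a][:, y]
    return unnorm / unnorm.sum()
```

The belief computed this way is a sufficient statistic for control, which is what allows a POMDP to be reformulated as an MDP on the belief simplex.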

Week 11

Approximations for POMDPs

  • Approximate information states

Week 12

Decentralized control

  • Decentralized POMDPs / Dynamic team problems
  • Common information approach
  • Delayed sharing and control sharing models
