ECSE 506: Stochastic Control and Decision Theory

Aditya Mahajan
Winter 2022


Prelim: Stochastic dominance

Stochastic dominance is a partial order on random variables. Let \(\ALPHABET S\) be a totally ordered finite set, say \(\ALPHABET S = \{1, \dots, n\}\), and let \(\Delta(\ALPHABET S)\) denote the set of pmfs on \(\ALPHABET S\).

Definition

Suppose \(S^1\) and \(S^2\) are \(\ALPHABET S\) valued random variables where \(S^1 \sim \mu^1\) and \(S^2 \sim \mu^2\). We say \(S^1\) stochastically dominates \(S^2\) if for any \(s \in \ALPHABET S\), \[\begin{equation}\label{eq:inc-prob} \PR(S^1 \ge s) \ge \PR(S^2 \ge s). \end{equation}\]

Stochastic domination is denoted by \(S^1 \succeq_s S^2\) or \(\mu^1 \succeq_s \mu^2\).

Let \({\rm M}^1\) and \({\rm M}^2\) denote the CDFs of \(\mu^1\) and \(\mu^2\), respectively. Then \eqref{eq:inc-prob} is equivalent to the following: \[\begin{equation}\label{eq:cdf} {\rm M}^1_s \le {\rm M}^2_s, \quad \forall s \in \ALPHABET S. \end{equation}\] Thus, visually, \(S^1 \succeq_s S^2\) means that the CDF of \(S^1\) lies below the CDF of \(S^2\).

Example

\(\left[0, \frac 14, \frac 14, \frac 12\right] \succeq_s \left[\frac 14, 0, \frac 14, \frac 12 \right] \succeq_s \left[\frac 14, \frac 14, \frac 14, \frac 14 \right].\)
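
The CDF characterization \eqref{eq:cdf} is easy to check numerically. Below is a minimal Python sketch (assuming NumPy; the helper name `dominates` is ours for illustration, not part of the notes) that compares CDFs elementwise and verifies the chain of dominances in the example above.

```python
import numpy as np

def dominates(mu1, mu2, tol=1e-12):
    """Return True if mu1 stochastically dominates mu2, i.e., if the CDF
    of mu1 lies (weakly) below the CDF of mu2 at every point."""
    M1 = np.cumsum(mu1)   # CDF of mu1
    M2 = np.cumsum(mu2)   # CDF of mu2
    return bool(np.all(M1 <= M2 + tol))

mu_a = [0, 1/4, 1/4, 1/2]
mu_b = [1/4, 0, 1/4, 1/2]
mu_c = [1/4, 1/4, 1/4, 1/4]

print(dominates(mu_a, mu_b), dominates(mu_b, mu_c))   # True True
print(dominates(mu_b, mu_a))                          # False
```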


Stochastic dominance is important due to the following property.

Theorem

Let \(S^1 \sim \mu^1\) and \(S^2 \sim \mu^2\) be random variables defined on \(\ALPHABET S\). Then \(S^1 \succeq_s S^2\) if and only if for every (weakly) increasing function \(f \colon \ALPHABET S \to \reals\), \[\begin{equation}\label{eq:inc-fun} \EXP[f(S^1)] \ge \EXP[f(S^2)]. \end{equation}\]

Proof

We first prove the “only if” part. Suppose \(S^1 \succeq_s S^2\), so that \eqref{eq:cdf} holds. For ease of notation, we use \(f_i\) to denote \(f(i)\) and define \({\rm M}^1_0 = {\rm M}^2_0 = 0\). Consider the following: \[ \begin{align*} \EXP[f(S^1)] = \sum_{i=1}^n f_i \mu^1_i &= \sum_{i=1}^n f_i ({\rm M}^1_i - {\rm M}^1_{i-1}) \\ &= \sum_{i=1}^n {\rm M}^1_{i-1} (f_{i-1} - f_{i}) + f_n {\rm M}^1_n \\ &\stackrel{(a)}{\ge} \sum_{i=1}^n {\rm M}^2_{i-1} (f_{i-1} - f_{i}) + f_n {\rm M}^2_n \\ &= \sum_{i=1}^n f_i ({\rm M}^2_i - {\rm M}^2_{i-1}) \\ &= \sum_{i=1}^n f_i \mu^2_i = \EXP[f(S^2)], \end{align*} \] which establishes \eqref{eq:inc-fun}. Here the second and fourth equalities follow from summation by parts together with \({\rm M}^1_0 = {\rm M}^2_0 = 0\) (so the \(i = 1\) terms vanish). In the above equations, \((a)\) uses the following facts:

  1. For any \(i\), \({\rm M}^1_{i-1} \le {\rm M}^2_{i-1}\) (because of \eqref{eq:cdf}) and \(f_{i-1} - f_{i} \le 0\) (because \(f\) is weakly increasing). Thus, \[{\rm M}^1_{i-1}(f_{i-1} - f_i) \ge {\rm M}^2_{i-1}(f_{i-1} - f_i). \]
  2. \({\rm M}^1_n = {\rm M}^2_n = 1\).

Now consider the “if” part. Suppose \eqref{eq:inc-fun} holds for every increasing function \(f\). Given any \(i \in \{1, \dots, n\}\), define the function \(f_i(k) = \IND\{k > i\}\), which is an increasing function of \(k\). Then, \[ \EXP[f_i(S^1)] = \sum_{k=1}^n f_i(k) \mu^1_k = \sum_{k > i} \mu^1_k = 1 - {\rm M}^1_i. \] By a similar argument, we have \[ \EXP[f_i(S^2)] = 1 - {\rm M}^2_i. \] Since \(\EXP[f_i(S^1)] \ge \EXP[f_i(S^2)]\), we have \({\rm M}^1_i \le {\rm M}^2_i\). As \(i\) was arbitrary, \eqref{eq:cdf} holds, i.e., \(S^1 \succeq_s S^2\).


1 Stochastic monotonicity

It is possible to extend the notion of stochastic dominance to Markov chains.

Suppose \(\ALPHABET S\) is a totally ordered set and \(\{S_t\}_{t \ge 1}\) is a time-homogeneous Markov chain on \(\ALPHABET S\) with transition probability matrix \(P\). Let \(P_i\) denote the \(i\)-th row of \(P\). Note that \(P_i\) is a PMF.

Definition

A Markov chain with transition matrix \(P\) is stochastically monotone if \[ P_i \succeq_s P_j, \quad \forall i > j. \]
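
Equivalently, the rows of \(P\), read from top to bottom, must be increasing in the sense of \(\succeq_s\). A minimal Python sketch of this check (same assumptions as before; `is_monotone` and the matrix `P` are illustrative) compares the CDFs of consecutive rows:

```python
import numpy as np

def is_monotone(P, tol=1e-12):
    """Check stochastic monotonicity: row i+1 must stochastically
    dominate row i, i.e., its CDF must lie below the CDF of row i."""
    M = np.cumsum(np.asarray(P, dtype=float), axis=1)   # row-wise CDFs
    return bool(np.all(M[1:] <= M[:-1] + tol))

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
print(is_monotone(P))   # True
```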

An immediate implication is the following.

Theorem

Let \(\{S_t\}_{t \ge 1}\) be a Markov chain with transition matrix \(P\). Then \(P\) is stochastically monotone if and only if for every weakly increasing function \(f \colon \ALPHABET S \to \reals\) and every \(s^1, s^2 \in \ALPHABET S\) such that \(s^1 > s^2\), \[ \EXP[f(S_{t+1}) | S_t = s^1] \ge \EXP[ f(S_{t+1}) | S_t = s^2]. \]
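
Numerically, the theorem says that for a stochastically monotone \(P\), the vector \(P f\), whose \(s\)-th entry equals \(\EXP[f(S_{t+1}) | S_t = s]\), is itself weakly increasing whenever \(f\) is. A quick check with the illustrative matrix from the previous sketch:

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],        # stochastically monotone (see above)
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
f = np.array([0.0, 1.0, 5.0])         # any weakly increasing f on {1, 2, 3}

Pf = P @ f                            # (P f)[s] = E[f(S_{t+1}) | S_t = s]
print(Pf)                             # [0.8 2.  3.3]
print(bool(np.all(np.diff(Pf) >= 0))) # True: increasing in s
```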


Exercises

  1. Let \(T\) denote the \(n \times n\) lower triangular matrix with 1’s on or below the diagonal and 0’s above the diagonal. Then \[ T^{-1}_{ij} = \begin{cases} 1, & \text{if } i = j, \\ -1, & \text{if } i = j + 1, \\ 0, & \text{otherwise}. \end{cases}\]

    For example, for a \(4 \times 4\) matrix \[ T = \MATRIX{1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1}, \quad T^{-1} = \MATRIX{1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 }. \]

    Show the following (a numerical sanity check appears after the exercise list):

    1. For any two pmfs \(\mu^1\) and \(\mu^2\) (viewed as row vectors), \(\mu^1 \succeq_s \mu^2\) iff \(\mu^1 T \ge \mu^2 T\).
    2. A Markov transition matrix \(P\) is stochastically monotone iff \(T^{-1} P T \ge 0\).
  2. Show that the following are equivalent:

    1. A transition matrix \(P\) is stochastically monotone.
    2. For any two pmfs \(\mu^1\) and \(\mu^2\), if \(\mu^1 \succeq_s \mu^2\) then \(\mu^1 P \succeq_s \mu^2 P\).
  3. Show that if two transition matrices \(P\) and \(Q\) have the same dimensions and are stochastically monotone, then so are:

    1. \(\lambda P + (1 - \lambda) Q\), where \(\lambda \in (0,1)\).
    2. \(P Q\).
    3. \(P^k\), for \(k \in \integers_{> 0}\).
  4. Let \(\mu_t\) denote the distribution at time \(t\) of a Markov chain whose transition matrix \(P\) is stochastically monotone. Show that if \(\mu_0 \succeq_s \mu_1\), then \(\mu_t \succeq_s \mu_{t+1}\) for all \(t\).
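
Exercise 1 is easy to sanity-check numerically. The sketch below (same assumptions as the earlier sketches; the matrix \(P\) is the illustrative one used above) builds \(T\) for \(n = 3\) and confirms both the closed form of \(T^{-1}\) and the criterion \(T^{-1} P T \ge 0\).

```python
import numpy as np

n = 3
T = np.tril(np.ones((n, n)))             # T[i, j] = 1 for i >= j, 0 otherwise
Tinv = np.linalg.inv(T)

P = np.array([[0.6, 0.3, 0.1],           # the stochastically monotone P from above
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

# Closed form of T^{-1}: 1 on the diagonal, -1 just below it.
print(np.allclose(Tinv, np.eye(n) - np.eye(n, k=-1)))   # True
# Monotonicity criterion of Exercise 1.2.
print(bool(np.all(Tinv @ P @ T >= -1e-12)))             # True
```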

References

Stochastic dominance has been employed in various areas of economics, finance, and statistics since the 1930s. See Levy (1992) and Levy (2015) for detailed overviews. The notion of stochastic monotonicity for Markov chains is due to Daley (1968). For a generalization of stochastic monotonicity to continuous state spaces, see Serfozo (1976). The characterizations of stochastic monotonicity in Exercises 1–4 are due to Keilson and Kester (1977).

Daley, D.J. 1968. Stochastically monotone Markov chains. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 10, 4, 305–317. DOI: 10.1007/BF00531852.
Keilson, J. and Kester, A. 1977. Monotone matrices and monotone Markov processes. Stochastic Processes and their Applications 5, 3, 231–241.
Levy, H. 1992. Stochastic dominance and expected utility: Survey and analysis. Management Science 38, 4, 555–593. DOI: 10.1287/mnsc.38.4.555.
Levy, H. 2015. Stochastic dominance: Investment decision making under uncertainty. Springer. DOI: 10.1007/978-3-319-21708-6.
Serfozo, R.F. 1976. Monotone optimal policies for Markov decision processes. In: Mathematical programming studies. Springer Berlin Heidelberg, 202–215. DOI: 10.1007/bfb0120752.
