2  Introduction

Updated: May 12, 2023

Let’s start with the simplest optimization problem. A decision maker has to choose an action \(a \in \ALPHABET A\). Upon choosing the action \(a\), the decision maker incurs a cost \(c(a)\). What action should the decision maker pick to minimize the cost?

Formally, the above optimization problem may be written as \[\begin{equation} \label{eq:basic} \min_{a \in \ALPHABET A} c(a). \end{equation}\]

When the action space \(\ALPHABET A\) is finite, say \(\ALPHABET A = \{1, \dots, m\}\), solving the optimization problem \(\eqref{eq:basic}\) is conceptually straightforward: enumerate the cost of all possible actions, i.e., enumerate the set \(C = \{ c(a) : a \in \ALPHABET A \}\), and find its smallest element.
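As a concrete illustration, here is a minimal sketch of this enumeration; the action set and cost function below are stand-ins chosen for the example.

```python
# A minimal sketch of exhaustive minimization over a finite action set.
# The action set A and the cost function c are illustrative stand-ins.
A = [1, 2, 3, 4]

def c(a):
    return (a - 2.5) ** 2

best_action = min(A, key=c)   # enumerate c(a) for every a and keep the smallest
print(best_action)            # -> 2 (the tie between 2 and 3 is broken by order)
```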

When the action space \(\ALPHABET A\) is continuous, say a compact subset of a Euclidean space, solving the optimization problem \eqref{eq:basic} is conceptually straightforward only when the cost function \(c\) satisfies some regularity conditions. For example, when \(c\) is convex and differentiable, the optimal action can be obtained by solving \[ \dfrac {d c(a) }{ da } = 0. \]
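A short numeric sketch of this first-order approach, assuming an illustrative convex cost:

```python
# Sketch: minimize a convex, differentiable cost by solving dc/da = 0.
# The cost c(a) = (a - 1)^2 + exp(a) and the interval [-5, 5] are assumed examples.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def c(a):
    return (a - 1) ** 2 + np.exp(a)

def dc(a):
    return 2 * (a - 1) + np.exp(a)   # derivative of c

a_star = brentq(dc, -5, 5)           # root of the first-order condition dc/da = 0
a_direct = minimize_scalar(c, bounds=(-5, 5), method='bounded').x
print(a_star, a_direct)              # both approaches agree up to solver tolerance
```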

In the absence of appropriate regularity conditions, there is no general procedure for solving an optimization problem over continuous action spaces.

2.1 The stochastic optimization problem

Now consider the simplest stochastic optimization problem. A decision maker has to choose an action \(a \in \ALPHABET A\). Upon choosing the action \(a\), the decision maker incurs a cost \(c(a,W)\), where \(W \in \ALPHABET W\) is a random variable with known probability distribution. Assume that the decision maker is risk neutral and, therefore, wants to minimize \(\EXP[ c(a, W) ]\), where the expectation is with respect to the random variable \(W\).

Formally, the above optimization problem may be written as \[\begin{equation} \label{eq:stochastic} \min_{a \in \ALPHABET A} \EXP[ c(a, W) ]. \end{equation}\]

Define \(J(a) = \EXP[ c(a, W) ]\). Then Problem \eqref{eq:stochastic} is conceptually the same as Problem \eqref{eq:basic} with the cost function \(J(a)\). Numerically, Problem \eqref{eq:stochastic} is more difficult because computing \(J(a)\) involves evaluating an expectation, but we ignore the computational complexity for the time being.
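For instance, when \(W\) takes finitely many values, \(J(a)\) is a finite weighted sum and the reduction is immediate. A minimal sketch, with all data assumed for illustration:

```python
# Sketch: with finitely many values of W, J(a) = E[c(a, W)] is a weighted sum,
# so the stochastic problem reduces to a deterministic one with cost J.
# The action set, disturbance values, distribution, and cost are assumed examples.
import numpy as np

A = [0.0, 0.5, 1.0]
W_vals = np.array([-1.0, 0.0, 2.0])
P_W = np.array([0.3, 0.5, 0.2])       # known distribution of W

def c(a, w):
    return (a - w) ** 2

def J(a):
    return float(P_W @ c(a, W_vals))  # E[c(a, W)]

best_action = min(A, key=J)
```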

2.2 Key simplifying idea

In the stochastic optimization problems considered above, the decision maker does not observe any data before making a decision. In many situations, the decision maker does observe some data, which is captured by the following model. Suppose a decision maker observes a random variable \(S \in \ALPHABET S\) and then chooses an action \(A \in \ALPHABET A\) as a function of their observation according to a decision rule \(π\), i.e., \[ A = π(S). \]

Upon choosing the action \(A\), the decision maker incurs a cost \(c(S,A,W)\), where \(W \in \ALPHABET W\) is a random variable. We assume that the primitive random variables \((S,W)\) are defined on a common probability space and have a known joint distribution. Assume that the decision maker is risk neutral and, therefore, wants to minimize \(\EXP[ c(S, π(S), W)]\), where the expectation is taken with respect to the joint probability distribution of \((S,W)\).

Formally, the above optimization problem may be written as \[\begin{equation} \label{eq:obs} \tag{P1} \min_{π \colon \ALPHABET S \to \ALPHABET A} \EXP[ c(S, π(S), W) ]. \end{equation}\]

Define \(J(π) = \EXP[ c(S, π(S), W) ]\). Then, Problem \eqref{eq:obs} is conceptually the same as Problem \eqref{eq:basic} with one difference: In Problem \eqref{eq:basic}, the minimization is over a parameter \(a\), while in Problem \eqref{eq:obs}, the minimization is over a function \(π\).

When \(\ALPHABET S\) and \(\ALPHABET A\) are finite sets, the optimal policy can be obtained by an exhaustive search over all policies as follows: for each policy \(π\) compute the performance \(J(π)\) and then pick the policy \(π\) with the smallest expected cost.

Such an exhaustive search is unsatisfactory for two reasons. First, it has a high computational cost: there are \(| \ALPHABET A |^{| \ALPHABET S |}\) policies and, for each policy, we have to evaluate an expectation, which can be expensive. Second, the above enumeration procedure does not work when \(\ALPHABET S\) or \(\ALPHABET A\) are continuous sets.
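To make the blow-up concrete, here is a brute-force search on assumed toy data. With \(|\ALPHABET S| = 2\) and \(|\ALPHABET A| = 3\) there are only \(3^2 = 9\) policies, but the count grows exponentially in \(|\ALPHABET S|\).

```python
# Sketch: exhaustive search over all |A|^|S| policies. The joint distribution and
# cost table are assumed toy data; a policy is represented as a tuple (pi(0), pi(1)).
import itertools
import numpy as np

P_SW = np.array([[0.2, 0.3],             # P(S = s, W = w); rows s, columns w
                 [0.4, 0.1]])
c = np.random.default_rng(0).integers(1, 5, size=(2, 3, 2))   # cost c[s, a, w]

def J(pi):
    return sum(P_SW[s, w] * c[s, pi[s], w] for s in range(2) for w in range(2))

best_policy = min(itertools.product(range(3), repeat=2), key=J)
```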

There is an alternative way of viewing the problem that simplifies it considerably. Instead of viewing the optimization problem before the system starts running (i.e., the ex ante view), imagine that the decision maker waits until they see the realization \(s\) of \(S\) (i.e., the interim view). They then ask what action \(a\) they should take to minimize the expected conditional cost \(Q(s,a) := \EXP[ c(s,a, W) | S = s]\), i.e., they consider the problem

\[\begin{equation} \label{eq:cond-1} \tag{P2} \min_{a \in \ALPHABET A} \EXP[ c(s,a,W) | S = s], \quad \forall s \in \ALPHABET S. \end{equation}\]

Thus, Problem \eqref{eq:obs}, which is a functional optimization problem, has been reduced to a collection of parameter optimization problems (Problem \eqref{eq:cond-1}), one for each possible value of \(s\).

Now define \[ \begin{equation} \label{eq:cond} \tag{P2-policy} π^∘(s) = \arg \min_{a \in \ALPHABET A} \EXP[ c(s,a, W) | S = s] \end{equation} \] where ties (in the minimization) are broken arbitrarily.
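Continuing the assumed toy data from the exhaustive-search sketch above, the interim view computes \(π^∘\) by solving one small minimization per state:

```python
# Sketch: pi°(s) = argmin_a Q(s, a), where Q(s, a) = E[c(s, a, W) | S = s].
# Same assumed toy data as the exhaustive-search sketch above.
import numpy as np

P_SW = np.array([[0.2, 0.3],
                 [0.4, 0.1]])
c = np.random.default_rng(0).integers(1, 5, size=(2, 3, 2))

P_W_given_S = P_SW / P_SW.sum(axis=1, keepdims=True)   # P(w | s)
Q = np.einsum('sw,saw->sa', P_W_given_S, c)            # Q(s, a)
pi_circ = tuple(Q.argmin(axis=1))                      # one small argmin per state
```

Since \(J(π) = \sum_{s \in \ALPHABET S} \PR(S = s)\, Q(s, π(s))\), minimizing each term separately minimizes the sum, so `pi_circ` coincides with the policy found by exhaustive search. This is the content of Theorem 2.1 below.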

Theorem 2.1 The decision rule \(π^∘\) defined in \eqref{eq:cond} is optimal for Problem \eqref{eq:obs}.

Remark

We restricted the proof to finite \(\ALPHABET S\), \(\ALPHABET A\), \(\ALPHABET W\). This is to avoid any measurability issues. If \(\ALPHABET S\) and \(\ALPHABET A\) are continuous sets, we need to restrict attention to measurable \(π\) in Problem \eqref{eq:obs} (otherwise the expectation is not well defined; of course, the cost \(c\) also has to be measurable). However, it is not immediately obvious that \(π^∘\) defined in \eqref{eq:cond} is measurable. Conditions that ensure this are known as measurable selection theorems.

Let \(π\) be any other decision rule. Then, \[ \begin{align*} \EXP[ c(S, π(S), W) ] &\stackrel{(a)}= \EXP[ \EXP[c(S, π(S), W) | S ] ] \\ &\stackrel{(b)}\ge \EXP[\EXP[ c(S, π^∘(S), W) | S ] ] \\ &\stackrel{(c)}= \EXP[ c(S, π^∘(S), W) ], \end{align*} \] where \((a)\) and \((c)\) follow from the law of iterated expectations and \((b)\) follows from the definition of \(π^∘\) in \eqref{eq:cond}.

We can also provide a partial converse of Theorem 2.1.

Theorem 2.2 If \(\PR(S = s) > 0\) for all \(s \in \ALPHABET S\), then every optimal policy for Problem \eqref{eq:obs} must satisfy \(\eqref{eq:cond}\).

We prove this by contradiction. Suppose \(π^*\) is an optimal policy that does not satisfy \eqref{eq:cond}. By definition of \(π^∘\), it must be the case that for all states \(s \in \ALPHABET S\), \[\begin{equation} \EXP[ c(s, π^∘(s), W) | S = s ] \le \EXP[ c(s, π^*(s), W) | S = s ] . \label{eq:ineq:1} \end{equation}\] Now, since \(π^*\) does not satisfy \eqref{eq:cond}, there exists some state \(s^∘ \in \ALPHABET S\) such that \[\begin{equation} \EXP[ c(s^∘, π^*(s^∘), W) | S = s^∘ ] > \EXP[ c(s^∘, π^∘(s^∘), W) | S = s^∘ ] . \label{eq:ineq:2} \end{equation}\] Therefore, \[\begin{align*} \EXP[ c(S, π^*(S), W) ] &= \sum_{s \in \ALPHABET S} \PR(S = s) \EXP[ c(s, π^*(s), W) | S = s ] \\ & \stackrel{(a)}> \sum_{s \in \ALPHABET S} \PR(S = s) \EXP[ c(s, π^∘(s), W) | S = s ] \\ &= \EXP[ c(S, π^∘(S), W) ], \end{align*}\] where \((a)\) follows from \eqref{eq:ineq:1} and \eqref{eq:ineq:2}, and the inequality is strict because \(\PR(S = s^∘) > 0\). Thus, \(J(π^*) > J(π^∘)\) and, hence, \(π^*\) cannot be an optimal policy.

2.3 Blackwell’s principle of irrelevant information

In many scenarios, the decision maker may observe data which is irrelevant for evaluating performance. In such instances, the decision maker may ignore such information without affecting performance. Formally, we have the following result, which is known as Blackwell’s principle of irrelevant information.

Theorem 2.3 (Blackwell’s principle of irrelevant information) Let \(\ALPHABET S\), \(\ALPHABET Y\), \(\ALPHABET W\), and \(\ALPHABET A\) be standard Borel spaces and \(S \in \ALPHABET S\), \(Y \in \ALPHABET Y\), \(W \in \ALPHABET W\) be random variables defined on a common probability space.

A decision maker observes \((S,Y)\) and chooses \(A = π(S,Y)\) to minimize \(\EXP[c(S,A,W)]\), where \(c \colon \ALPHABET S \times \ALPHABET A \times \ALPHABET W \to \reals\) is a measurable function.

If \(W\) is conditionally independent of \(Y\) given \(S\), then there is no loss of optimality in choosing \(A\) only as a function of \(S\).

Formally, there exists a \(π^* \colon \ALPHABET S \to \ALPHABET A\) such that for all \(π \colon \ALPHABET S \times \ALPHABET Y \to \ALPHABET A\), \[ \EXP[c(S, π^*(S), W)] \le \EXP[ c(S, π(S,Y), W) ]. \]

We prove the result for the case when \(\ALPHABET S\), \(\ALPHABET Y\), \(\ALPHABET W\), \(\ALPHABET A\) are finite.

Define \[π^*(s) = \arg \min_{a \in \ALPHABET A} \EXP[ c(s,a, W) | S = s]. \] Then, by construction, for any \(s \in \ALPHABET S\) and \(a \in \ALPHABET A\), we have that \[ \EXP[ c(s, π^*(s), W ) | S = s] \le \EXP[ c(s,a,W) | S = s]. \] Hence, for any \(π \colon \ALPHABET S \times \ALPHABET Y \to \ALPHABET A\), and for any \(s \in \ALPHABET S\) and \(y \in \ALPHABET Y\), we have \[ \begin{equation} \label{eq:opt} \EXP[ c(s, π^*(s), W) | S = s, Y = y] \le \EXP[ c(s, π(s,y),W) | S = s, Y = y], \end{equation} \] where we have used the fact that \(W\) is conditionally independent of \(Y\) given \(S\), so additionally conditioning on \(Y = y\) does not change either conditional expectation. The result follows by taking the expectation of both sides of \eqref{eq:opt} with respect to the joint distribution of \((S,Y)\).

The above proof doesn’t work for general Borel spaces because \(π^*\) defined above may not exist (inf vs min) or may not be measurable. See Blackwell (1964) for a formal proof.
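The theorem can also be checked numerically by brute force. A sketch on assumed toy data, where the joint distribution is constructed so that \(W\) is conditionally independent of \(Y\) given \(S\):

```python
# Sketch: verify Blackwell's principle by brute force on assumed toy data.
# P(s, y, w) = P(s) P(y|s) P(w|s), so W and Y are conditionally independent given S.
import itertools
import numpy as np

nS, nY, nW, nA = 2, 2, 2, 3
P_S = np.array([0.4, 0.6])
P_Y_given_S = np.array([[0.7, 0.3], [0.2, 0.8]])
P_W_given_S = np.array([[0.5, 0.5], [0.1, 0.9]])
P = np.einsum('s,sy,sw->syw', P_S, P_Y_given_S, P_W_given_S)   # P(s, y, w)
c = np.random.default_rng(1).random((nS, nA, nW))              # arbitrary c[s, a, w]

def J(pi):   # pi[s][y] = action chosen on observing (s, y)
    return sum(P[s, y, w] * c[s, pi[s][y], w]
               for s in range(nS) for y in range(nY) for w in range(nW))

full = itertools.product(itertools.product(range(nA), repeat=nY), repeat=nS)
best_full = min(full, key=J)                                   # uses both S and Y
s_only = (((a0,) * nY, (a1,) * nY) for a0 in range(nA) for a1 in range(nA))
best_s_only = min(s_only, key=J)                               # ignores Y
assert J(best_s_only) <= J(best_full) + 1e-12                  # ignoring Y loses nothing
```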

Exercises

Exercise 2.1 (Computing optimal policies) Suppose \(\ALPHABET S = \{1, 2 \}\), \(\ALPHABET A = \{1, 2, 3\}\), and \(\ALPHABET W = \{1, 2, 3\}\). Let \((S,W)\) be random variables taking values in \(\ALPHABET S × \ALPHABET W\) with joint distribution \(P\) shown below.

\[ P = \MATRIX{ 0.25 & 0.15 & 0.05 \\ 0.30 & 0.10 & 0.15 } \]

Here the row corresponds to the value of \(s\) and the column corresponds to the value of \(w\). For example \(\PR(S=2, W=1) = P_{21} = 0.30\).

The cost function \(c \colon \ALPHABET S \times \ALPHABET A \times \ALPHABET W \to \reals\) is shown below

\[ c(\cdot,\cdot,1) = \MATRIX{3 & 5 & 1 \\ 2 & 3 & 1 }, \quad c(\cdot,\cdot,2) = \MATRIX{4 & 3 & 1 \\ 1 & 2 & 8 }, \quad c(\cdot,\cdot,3) = \MATRIX{1 & 2 & 2 \\ 4 & 1 & 3 }. \]

Here the row corresponds to the value of \(s\) and the column corresponds to the value of \(a\). For example \(c(s=1,a=2,w=1) = 5\).

Find the policy \(π \colon \ALPHABET S \to \ALPHABET A\) that minimizes \(\EXP[ c(S, π(S), W) ]\).
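A minimal sketch to verify your answer numerically: encode the data above and apply \eqref{eq:cond} state by state.

```python
# Sketch: compute Q(s, a) = E[c(s, a, W) | S = s] from the data of Exercise 2.1
# and minimize per state. Indices are 0-based; add 1 to recover the labels.
import numpy as np

P = np.array([[0.25, 0.15, 0.05],                   # P(S = s, W = w)
              [0.30, 0.10, 0.15]])
c = np.stack([np.array([[3, 5, 1], [2, 3, 1]]),     # c(., ., w = 1)
              np.array([[4, 3, 1], [1, 2, 8]]),     # c(., ., w = 2)
              np.array([[1, 2, 2], [4, 1, 3]])],    # c(., ., w = 3)
             axis=-1)                               # c[s, a, w]

P_W_given_S = P / P.sum(axis=1, keepdims=True)
Q = np.einsum('sw,saw->sa', P_W_given_S, c)
pi = Q.argmin(axis=1) + 1                           # optimal action for s = 1, 2
```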

Exercise 2.2 (Blackwell’s principle) Suppose \(\ALPHABET S = \{1, 2\}\), \(\ALPHABET Y = \{1, 2\}\), \(\ALPHABET A = \{1, 2, 3\}\), and \(\ALPHABET W = \{1, 2, 3\}\). Let \((S,Y,W)\) be random variables taking values in \(\ALPHABET S × \ALPHABET Y × \ALPHABET W\), with joint distribution \(Q\) shown below. \[ Q_{Y = 1} = \MATRIX{0.15 & 0.10 & 0.00 \\ 0.15 & 0.05 & 0.10} \qquad Q_{Y = 2} = \MATRIX{0.10 & 0.05 & 0.05 \\ 0.15 & 0.05 & 0.05} \] For a fixed value of \(y\), the row corresponds to the value of \(s\) and the column corresponds to the value of \(w\). For example \(\PR(S = 1, Y = 1, W = 3) = 0\).

The cost function \(c \colon \ALPHABET S × \ALPHABET A × \ALPHABET W \to \reals\) is the same as the previous exercise.

  1. Find the policy \(π \colon \ALPHABET S × \ALPHABET Y \to \ALPHABET A\) that minimizes \(\EXP[c(S, π(S,Y), W)]\).

  2. Compare the solution with the solution of the previous exercise in view of Blackwell’s principle of irrelevant information. Clearly explain your observations.

Exercise 2.3 (Pollution monitoring) Consider the problem of monitoring the pollution level of a river. The river can have a high pollution level if there is a catastrophic failure of a factory upstream. There are thus two “pollution states” indicating whether or not such a failure has occurred. We denote them by \(S = 0\) (indicating no failure) and \(S = 1\) (indicating catastrophic failure). Let \([p, 1-p]\) denote the prior probability mass function of \(S\).

The pollution monitoring system has a sensor which takes a measurement \(y\) of the pollution level. Let \(f_s(y)\) denote the probability density of the observation \(y\) conditional on the value of \(s\), \(s \in \{0, 1\}\). Two actions are available at the monitoring system: raise an alarm or do not raise an alarm. The cost of raising the alarm is \(C_0\) if the state \(S\) is \(0\) and zero if the state \(S\) is \(1\); the cost of not raising the alarm is zero if the state \(S\) is \(0\) and \(C_1\) if the state \(S\) is \(1\).

Show that it is optimal to raise the alarm if \[ p f_0(y) C_0 < (1 - p) f_1(y) C_1. \] That is, it is optimal to raise the alarm if the likelihood ratio \(f_1(y)/f_0(y)\) exceeds the threshold value \(\dfrac{p C_0}{(1-p) C_1}\).
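A numeric sanity check of the threshold rule, using a hypothetical Gaussian sensor model (the densities, prior, and costs below are all assumed):

```python
# Sketch: compare the threshold rule against the posterior expected costs of the
# two actions. The Gaussian sensor model and all numbers are hypothetical.
from scipy.stats import norm

p, C0, C1 = 0.9, 1.0, 5.0
f0 = norm(0, 1).pdf        # assumed density of y given S = 0 (no failure)
f1 = norm(2, 1).pdf        # assumed density of y given S = 1 (failure)
y = 1.5                    # an observed measurement

threshold_rule = p * f0(y) * C0 < (1 - p) * f1(y) * C1

# Direct check: posterior expected cost of raising vs. not raising the alarm.
post0, post1 = p * f0(y), (1 - p) * f1(y)     # unnormalized posterior of S given y
cost_alarm = C0 * post0 / (post0 + post1)     # E[cost | y, raise alarm]
cost_no_alarm = C1 * post1 / (post0 + post1)  # E[cost | y, no alarm]
assert threshold_rule == (cost_alarm < cost_no_alarm)
```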

Notes

Theorem 2.3 is due to Blackwell (1964), proved in a short 2.5 page paper. A similar result was used by Witsenhausen (1979) to establish the structure of optimal coding strategies in real-time communication. Also see the blog post by Maxim Raginsky.

Exercise 2.3 is adapted from Whittle (1996). It is a special instance of the Bayesian hypothesis testing problem. We will study a generalization of this model later in sequential hypothesis testing.