14 Inventory management (revisited)

Updated

April 3, 2024

Keywords

inventory management, base-stock policy, reward shaping, structural results, stochastic optimization, infinite horizon, discounted cost

Summary

One of the potential benefits of modeling a system as an infinite horizon discounted cost MDP is that it can be simpler to identify an optimal policy. We illustrate this using the inventory management example.

Let’s reconsider the model for inventory management and assume that it runs for an infinite horizon. We assume that the per-step cost is given by \[c(s,a,s_{+}) = p a + γ h(s_{+}), \] where \[ h(s) = \begin{cases} c_h s, & \text{if $s \ge 0$} \\ -c_s s, & \text{if $s < 0$}. \end{cases}\] Here \(c_h\) is the per-unit holding cost, \(c_s\) is the per-unit shortage cost, and \(p\) is the per-unit procurement cost. Note that we have assumed that the holding or shortage cost is discounted because this cost is incurred at the end of the time period.
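
A minimal Python sketch of this cost model may help fix the notation; the function and parameter names below are ours, not part of the model:

```python
import numpy as np

def h(s, c_h, c_s):
    """Holding/shortage cost: c_h per unit held, c_s per unit short."""
    return np.where(s >= 0, c_h * s, -c_s * s)

def per_step_cost(s, a, s_next, p, gamma, c_h, c_s):
    """c(s, a, s+) = p*a + gamma*h(s+): procurement cost plus the
    discounted end-of-period holding/shortage cost."""
    return p * a + gamma * h(s_next, c_h, c_s)
```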

Recall that in the finite horizon setting, the optimal policy was a base-stock policy characterized by thresholds \(\{σ_t\}_{t \ge 1}\). In the infinite horizon discounted setting, we expect the optimal policy to be time-homogeneous, i.e., the thresholds \(\{σ_t\}_{t \ge 1}\) to equal a constant \(σ\) that does not depend on time.

As an illustration, let’s reconsider the example used for the finite horizon setting (where \(c_h = 2\), \(c_s = 5\), \(p=1\), and the demand is Binomial(50, 0.4)). We consider the discount factor \(γ = 0.9\). The value function and optimal policy in this case are shown below.

Figure 14.1: Optimal policy for the model with binomial demand. (a) Value function; (b) optimal policy.
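
Figure 14.1 can be reproduced by value iteration on a truncated model. The sketch below is illustrative rather than definitive: the truncation of the inventory to \([-60, 120]\), the cap of 100 on orders, and the iteration budget are assumptions we introduce to make the computation finite.

```python
import numpy as np
from scipy.stats import binom

c_h, c_s, p, gamma = 2, 5, 1, 0.9      # parameters of the example
w = np.arange(51)                      # support of the Binomial(50, 0.4) demand
pw = binom.pmf(w, 50, 0.4)             # demand pmf

S = np.arange(-60, 121)                # truncated inventory levels (assumed bounds)
A = np.arange(101)                     # order quantities (assumed cap)

def h(s):
    return np.where(s >= 0, c_h * s, -c_s * s)

V = np.zeros(len(S))
for _ in range(500):                   # value iteration
    Q = np.empty((len(S), len(A)))
    for i, s in enumerate(S):
        for j, a in enumerate(A):
            s_next = np.clip(s + a - w, S[0], S[-1])  # truncate at the boundary
            Q[i, j] = p * a + gamma * pw @ (h(s_next) + V[s_next - S[0]])
    V_new = Q.min(axis=1)
    if np.abs(V_new - V).max() < 1e-6:
        break
    V = V_new

policy = A[Q.argmin(axis=1)]           # base-stock shaped away from the boundary
```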

We are interested in the following question: Is it possible to identify the optimal threshold of the base-stock policy without explicitly solving the dynamic program? In this section, we show that the answer is affirmative.

As a first step, we modify the per-step cost using reward shaping. In particular, we consider the following potential function

\[\varphi(s) = h(s) + \frac{1}{γ} p s - \frac{1}{1-γ}p\mu,\] where \(\mu = \EXP[W]\) is the expected demand in each time period.

Now consider a new cost function \[\begin{align*} c'(s,a,s_{+}) &= c(s,a,s_{+}) + \varphi(s) - γ \varphi(s_{+}) \\ &= pa + γ h(s_{+}) + h(s) + \frac{1}{γ} p s - \frac{1}{1-γ} p \mu - γ h(s_{+}) - p s_{+} + \frac{γ}{1-γ} p \mu \\ &= h(s) + \frac{1-γ}{γ} ps + p w - p \mu, \end{align*} \] where in the last step we used \(s_{+} = s + a - w\). Note that \[ \EXP[ c'(s,a,S_{+}) \mid S = s, A = a ] = h(s) + \frac{1-γ}{γ} ps =: c^*(s). \] Thus, the optimal policy of the original model is the same as that of a model in which the per-step cost is given by \(c^*(s)\).
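
This equivalence is easy to sanity-check numerically. The sketch below, using the (assumed) parameters of the running example, verifies that the expected shaped cost is independent of the action \(a\):

```python
import numpy as np
from scipy.stats import binom

c_h, c_s, p, gamma = 2, 5, 1, 0.9
w = np.arange(51)                      # demand support
pw = binom.pmf(w, 50, 0.4)             # demand pmf
mu = pw @ w                            # μ = E[W]

def h(s):
    return np.where(s >= 0, c_h * s, -c_s * s)

def phi(s):                            # potential function φ
    return h(s) + p * s / gamma - p * mu / (1 - gamma)

def expected_shaped_cost(s, a):
    s_next = s + a - w                 # next state for each demand value
    c = p * a + gamma * h(s_next)      # original cost c(s, a, s+)
    return pw @ (c + phi(s) - gamma * phi(s_next))

s = 10
c_star = h(s) + (1 - gamma) / gamma * p * s   # predicted value c*(s)
for a in (0, 5, 20):                   # expected shaped cost is independent of a
    assert np.isclose(expected_shaped_cost(s, a), c_star)
```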

Recall that the optimal policy in the original model was a base-stock policy. For the infinite horizon model, the threshold becomes time-invariant. In particular, the infinite horizon dynamic program with this modified cost is given by \[\begin{equation}\label{eq:DP} V^*(s) = \min_{a \in \reals_{\ge 0}} \bigl\{ c^*(s) + γ \EXP[ V^*(s + a - W) ] \bigr\}. \end{equation}\]

Define \(F(s) = \EXP[V^*(s-W)]\). Thus, the dynamic program may be written as \[\begin{equation}\label{eq:DP-1} V^*(s) = c^*(s) + γ \min_{a \in \reals_{\ge 0}} \bigl\{ F(s+a) \bigr\}. \end{equation}\] Recall that while proving the structure of the optimal policy, we had shown that \(σ = \arg\min_{s \in \reals}F(s)\) and \[ \min_{a \in \reals_{\ge 0}} F(s+a) = \begin{cases} F(σ), & s \le σ \\ F(s) , & s > σ. \end{cases} \] Consequently, \[ π^*(s) = \begin{cases} σ - s, & s \le σ \\ 0 , & s > σ. \end{cases} \] and \[\begin{equation}\label{eq:opt} V^*(s) = \begin{cases} c^*(s) + γ F(σ), & s \le σ \\ c^*(s) + γ F(s), & s > σ \end{cases} \end{equation}\]
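
In code, the resulting base-stock policy is a one-liner (here the threshold \(σ\) is taken as given; computing it is the subject of the remainder of this section):

```python
def base_stock_policy(s, sigma):
    """Base-stock policy: order up to sigma when the inventory is below it."""
    return max(sigma - s, 0)
```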

Pick any \(s \le σ\). Substituting \(s' = s - W\) in \eqref{eq:opt} and taking expectations (note that \(s - W \le σ\) because \(W \ge 0\), so the first case of \eqref{eq:opt} applies), we get \[ F(s) = \EXP[ c^*(s - W) ] + γ F(σ), \quad s \le σ. \] Taking \(s = σ\) and rearranging, we get \[ F(σ) = \frac{1}{1-γ} \EXP[ c^*(σ-W) ]. \] Thus, \[\begin{equation}\label{eq:F} F(s) = \EXP[ c^*(s-W) ] + \frac{γ}{1-γ} \EXP[ c^*(σ - W) ], \quad s \le σ. \end{equation}\]

Lemma 14.1 \(F(s)\) and \(\EXP[c^*(s-W)]\) have the same minimizer, i.e., \[ \arg\min_{s \in \reals}F(s) = \arg\min_{s \in \reals}\EXP[c^*(s-W)] \] Therefore, \[ σ = \arg\min_{s \in \reals}\EXP[c^*(s-W)] \]

From the dynamic program, we know that for any \(s \le σ\), not ordering is worse than ordering \(σ - s\). Thus, \(F(s) \ge F(σ)\). Therefore, from \(\eqref{eq:F}\), we get \[ \EXP[ c^*(s-W) ] + \frac{γ}{1-γ} \EXP[ c^*(σ - W) ] \ge \EXP[ c^*(σ-W) ] + \frac{γ}{1-γ} \EXP[ c^*(σ - W) ] \] or simply \[\begin{equation}\label{eq:ineq-1} \EXP[ c^*(s-W) ] \ge \EXP[ c^*(σ-W) ], \quad s \le σ. \end{equation}\]

FIXME: I don’t have a proof of the other side. The result is taken from Whittle (1982) (vol 1, §13.3), but it is stated as an obvious result, without a proof. I cannot figure out a proof of the fact that \[\begin{equation}\label{eq:ineq-2} \EXP[ c^*(s-W) ] \ge \EXP[ c^*(σ-W) ], \quad s \ge σ. \end{equation}\] Please let me know if you know a proof!

Theorem 14.1 The optimal threshold \(σ\) is the value that minimizes \(\EXP[ c^*(σ-W) ]\). In particular, if \(F_W\) denotes the CDF of the demand, then \[\begin{equation} σ = F_W^{-1}\left( \frac{c_s - p(1-γ)/γ}{c_h + c_s} \right). \label{eq:opt-threshold} \end{equation}\]

The proof uses the same approach as the newsvendor problem. In particular, let \(f\) denote the density of \(W\), \(F_W\) denote its CDF, and \(μ = \EXP[W]\). Then, \[\begin{align} \EXP[c^*(σ-W)] &= \EXP[h(σ-W)] + \frac{1-γ}{γ}p(σ - μ) \notag \\ &= \int_{0}^{σ} c_h(σ-w)f(w)dw + \int_{σ}^{∞} c_s(w-σ)f(w)dw + \frac{1-γ}{γ}p(σ - μ). \notag \end{align}\] Therefore, \[\begin{align} \frac{∂ \EXP[c^*(σ-W)]}{∂σ} &= \int_{0}^{σ} c_h f(w)dw + \int_{σ}^{\infty}[-c_s] f(w)dw + \frac{1-γ}{γ}p \notag \\ &= c_h F_W(σ) - c_s(1 - F_W(σ)) + \frac{1-γ}{γ}p. \notag \end{align}\] Setting the derivative to \(0\) gives \((c_h + c_s) F_W(σ) = c_s - p(1-γ)/γ\), which simplifies to \eqref{eq:opt-threshold}.

We can use \eqref{eq:opt-threshold} to find the optimal threshold for the example used above. In particular, we need to find the value of \(σ\) at which the CDF of the demand equals the value \((c_s - p(1-γ)/γ)/(c_h + c_s)\), as shown in Figure 14.2 below.

Figure 14.2: The optimal threshold for the example shown above.
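
The same computation as a minimal scipy sketch, assuming binom.ppf (the smallest integer whose CDF is at least the given fractile) is an acceptable inverse CDF for the integer-valued demand:

```python
from scipy.stats import binom

c_h, c_s, p, gamma = 2, 5, 1, 0.9
fractile = (c_s - p * (1 - gamma) / gamma) / (c_h + c_s)  # ≈ 0.698
sigma = binom.ppf(fractile, 50, 0.4)   # smallest s with CDF(s) ≥ fractile
print(sigma)
```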

Exercises

Exercise 14.1 Suppose that the arrival process is exponential with rate \(1/\mu\), i.e., the density of \(W\) is given by \(e^{-s/\mu}/\mu\). Show that the optimal threshold is given by \[ σ = \mu \log \left[ \frac{ c_h + c_s} { c_h + p (1-γ)/γ} \right]. \]

Hint: Recall that the CDF of the exponential distribution is \(F_W(s) = 1 - e^{-s/μ}\).

The plots below verify this result numerically. We simulate the system with parameters \(μ = 5\), \(p = 1\), and \(γ = 0.9\).

Figure 14.3: Optimal policy for model with exponential demand. The black curve shows the optimal policy computed via value iteration and the red line shows the threshold computed via the formula of Exercise 14.1.
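
For reference, here is a small sketch that evaluates the closed-form threshold of Exercise 14.1; the holding and shortage costs used for Figure 14.3 are not listed above, so the values \(c_h = 2\) and \(c_s = 5\) from the earlier example are assumed here:

```python
import numpy as np

mu, p, gamma = 5, 1, 0.9
c_h, c_s = 2, 5                        # assumed; not specified for this example

sigma = mu * np.log((c_h + c_s) / (c_h + p * (1 - gamma) / gamma))
print(sigma)                           # ≈ 5.99 with these assumed costs
```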

Notes

The idea of using reward shaping to derive a closed form expression for inventory management is taken from Whittle (1982). It is interesting to note that Whittle (1982) used the idea of reward shaping 17 years before the paper by Ng et al. (1999) on reward shaping. It is possible that Whittle was using the results of Porteus (1975). As far as I know, the explicit formula for the threshold presented in Theorem 14.1 has not appeared in the literature before.

As established in the notes on Lipschitz MDPs, it can be shown that the optimal value function for the inventory management model above is Lipschitz continuous.

In the analysis above, we did not formally verify that all the functions exist and that the infimizations are well defined. For a detailed discussion of such technical details, see Feinberg (2016).

For models with a positive ordering cost, an efficient algorithm for computing the optimal \((s,S)\) policy is presented in Zheng and Federgruen (1991).