Stochastic dynamic programming (SDP) has been one of the techniques most commonly used to solve the strategic (long-range) reservoir operation problem. The technique requires the definition of a state vector, which commonly comprises two variables: the amount of water stored in the reservoir at the beginning of a time period and the total water inflow. Two versions of this formulation are in common use: one considers the total inflow during the past period; the other uses the inflow forecast for the current period. Under Markovian assumptions, SDP always yields a steady-state policy for the operation of the reservoir. Nevertheless, because of the differences between the two versions in the definition of the state vector, their steady-state policies are not always the same. Comparing the state vectors of the two versions, the former has the advantage of not relying on forecasted information. However, this paper presents mathematical considerations indicating that the latter could be preferable to the former when forecasts are reliable. © 1991.
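To make the setting concrete, the following is a minimal sketch of the "past inflow" version described above: the state is (storage, previous-period inflow class), inflows follow an assumed Markov chain, and value iteration converges to a steady-state release policy. All numbers here (discretization, transition matrix, demand target, discount factor) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed discretization (illustrative, not from the paper):
S = np.arange(5)              # storage states, in volume units; capacity = 4
Q = np.arange(3)              # inflow classes
R = np.arange(4)              # candidate releases per period
S_MAX = int(S[-1])

# Assumed Markov transition matrix for inflow classes:
# P[q, q'] = Pr(this period's inflow = q' | last period's inflow = q)
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

TARGET = 2.0                  # hypothetical demand target

def reward(release):
    # Assumed objective: penalize squared deviation from the demand target.
    return -(release - TARGET) ** 2

def sdp_policy(discount=0.95, n_iter=500, tol=1e-9):
    """Value iteration over (storage, past-inflow) states.

    The past period's inflow conditions the distribution of the
    current period's inflow -- the first SDP version in the abstract.
    """
    V = np.zeros((len(S), len(Q)))
    policy = np.zeros_like(V, dtype=int)
    for _ in range(n_iter):
        V_new = np.empty_like(V)
        for si, s in enumerate(S):
            for qi in range(len(Q)):
                best_val, best_r = -np.inf, 0
                for r in R:
                    if r > s:
                        continue  # cannot release more than stored water
                    # Expectation over this period's inflow, given last inflow qi.
                    val = 0.0
                    for qn, p in enumerate(P[qi]):
                        s_next = min(int(s - r + Q[qn]), S_MAX)  # spill above capacity
                        val += p * (reward(r) + discount * V[s_next, qn])
                    if val > best_val:
                        best_val, best_r = val, r
                policy[si, qi] = best_r
                V_new[si, qi] = best_val
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return policy

policy = sdp_policy()
```

Because the dynamics and rewards are stationary and the inflow chain is Markovian, the iteration converges to a single steady-state policy table `policy[storage, past_inflow]`, matching the abstract's claim. The forecast-based version would instead index the policy by the current period's (forecast) inflow class, changing the expectation step accordingly.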
Journal: Applied Mathematics and Computation
Publication status: Published - 1 Jan 1991