
FINITE HORIZON APPROXIMATIONS OF INFINITE HORIZON LINEAR PROGRAMS

Richard C. GRINOLD
University of California, Berkeley, CA, U.S.A. and CORE, Louvain, Belgium

Received 3 December 1974; revised manuscript received 9 December 1975

This paper describes the class of infinite horizon linear programs that have finite optimal values. A sequence of finite horizon (T period) problems is shown to approximate the infinite horizon problems in the following sense: the optimal values of the T period problems converge monotonically to the optimal value of the infinite problem, and the limit of any convergent subsequence of initial T period optimal decisions is an optimal decision for the infinite horizon problem.

1. Introduction

This paper examines long range planning models that can be presented as linear programs over an infinite planning horizon. The main results characterize problems that have finite optimal values and establish procedures for approximating optimal solutions by solving a T period linear program. Three possible approaches to this task are examined; in general, only one procedure leads to solvable T period problems. The T period problem is designed by decoupling the infinite problem into the sum of a T period problem and another infinite problem that commences at time T; call these problems 1 and 2. Any feasible solution of problem 1 produces an input into problem 2. The scheme calculates an approximate salvage value of the input from problem 1 to problem 2. This salvage value is then included in the objective of problem 1. As T increases, the error in calculating the salvage value becomes less and less significant.

The main conclusions of the paper are easily stated:
(1) The infinite horizon problems are solved in the following sense. Let V^T be the optimal value of the T period approximation, and let x_0^T be the time 0 optimal decision for the T period problem. V^T decreases to the optimal value of the infinite problem, and any limit of the sequence x_0^T is an optimal initial decision for the infinite problem.
(2) Duality is not the primary consideration in the study of infinite horizon linear programs.
(3) The calculation of an equilibrium optimal policy, if one in fact exists, does not, in general, assist in the solution of a discounted criterion problem.

The remainder of this section introduces the problem, summarizes results, relates this work to others, and describes the notational conventions used in the paper.


Time is discrete, t = 0, 1, 2, .... At any time t the state of the system is s_t. Decisions x_t are constrained by the relations Ax_t = s_t, x_t ≥ 0. If decision x_t is taken, then a reward with time zero value α^t p x_t is received and the state becomes Ax_{t+1} = s_{t+1} = b + Kx_t. A and K are m × n matrices, p an n-vector, b an m-vector, s_t an m-vector, and α a positive scalar less than 1. If the initial (time zero) state is s_0 = s, then the infinite sequence of decisions {x_t} is constrained by

Ax_0 = s,  x_0 ≥ 0,
Ax_t = b + Kx_{t-1},  x_t ≥ 0,  t ≥ 1.    (1.1)
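The constrained dynamics (1.1) are straightforward to simulate when A is a nonzero scalar, since each period then has a unique feasible decision. The sketch below uses illustrative data (A = 1, K = 0.5, b = 1, p = 1, α = 0.5, s = 1), not an example from the paper:

```python
# Simulate the dynamics of (1.1) for scalar data: A x_t = s_t forces x_t = s_t / A,
# the state updates as s_{t+1} = b + K x_t, and the discounted reward accumulates
# as sum_t alpha^t * p * x_t.  All data values are illustrative.
def trajectory(A, K, b, p, alpha, s, T):
    xs, reward, s_t = [], 0.0, s
    for t in range(T + 1):
        x_t = s_t / A                # unique decision satisfying A x_t = s_t
        assert x_t >= 0, "infeasible: decisions must be nonnegative"
        xs.append(x_t)
        reward += alpha**t * p * x_t
        s_t = b + K * x_t            # next state, per (1.1)
    return xs, reward
```

With these data x_t = 2 - 2^{-t}, so the decisions approach 2 and the discounted reward converges to 8/3.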

Let X(s) be the set of {x_t} that satisfy (1.1). For any {x_t} define

p({x_t}) = lim inf_{T→∞} Σ_{t=0}^T α^t p x_t.    (1.2)

The optimization problem under consideration is

maximize p({x_t}),  subject to {x_t} ∈ X(s).    (1.3)

Let e be an n-vector of ones, a summation vector. A sequence {x_t} is called α-convergent if the increasing sequence Σ_{t=0}^T α^t e x_t converges to a finite limit. Denote by X_α(s) the set of α-convergent {x_t} ∈ X(s), and note that p({x_t}) = Σ_{t=0}^∞ α^t p x_t for {x_t} ∈ X_α(s).

The formulation is slightly more general than it appears. Inequalities can be included using slack or surplus variables. In addition, if the data are α̂, K̂ and b_t = θ^t b, with decision variables y_t, then one can form an equivalent problem with x_t = y_t/θ^t, α = α̂θ, and K = K̂/θ.

Section 2 presents assumptions made in the paper and comments on their immediate consequences and the problem of verification. These assumptions hold throughout and are not restated for each result. Section 3 defines the optimal value of (1.3) as a function of the initial state s_0 = s. The consequences of the assumptions not holding are investigated in Sections 3 and 4. Section 4 contains an important result: if a program {x_t} is feasible but not α-convergent, then {x_t} is a bad program in the sense that p({x_t}) = -∞. Section 5 describes a sequence of solvable T period linear programs that can be used to solve (1.3). The optimal values of the T period problems decrease to the optimal value. A subsequence of the optimal initial decisions converges to an optimal initial decision of the infinite problem. Section 6 presents sufficient optimality criteria and comments on duality and the establishment of necessary conditions for optimality. Section 7 examines the question of calculating optimal equilibrium policies; i.e., x_t = x for all t. The appendix contains the statement and proof of two lemmas that are used in the main body of the study.

This paper is based on previous work by Manne [13], Hopkins [11, 12], Hopkins and Grinold [8], and Evers [3]. The work of Evers [3] motivated this study and is the genesis for the important assumption II in Section 2. Assumptions like II are implicitly made in [13] and [8]; however, Evers was the first to


state and exploit fully this type of assumption. The assumptions as stated in Section 2 are satisfied by the class of problems considered in [3]. In fact, in Sections 3 and 4 it is demonstrated that the assumptions are nearly the most general possible. Except for certain boundary cases, failure of assumption I or II implies the optimal value of the infinite horizon problem is either +∞ or -∞. The theorem in Section 3 generalizes one of Evers' results, [3] Theorem 5.2. With more restrictive assumptions, Evers is able to establish several attractive results that characterize optimal solutions. However, the main purpose of this paper is to delineate the class of solvable problems and to construct an operational solution procedure; the more restrictive assumptions of [3] are not necessary to accomplish this objective. Special applications of the approximation procedure (Section 5) can be found in [2], in [6]-[8], and in [10]-[13]. In [8] and [2], the procedure solves the infinite problem exactly when T = 0. To my knowledge, no general convergence results were available prior to the theorem in Section 5. Work on optimality conditions can be found in [3, 5, 6, 11-13, 15]. The sufficiency result is identical in spirit to [12], and the remarks on duality are consistent with [9]. In Section 7, optimal equilibrium policies (where x_t = x for all t) are discussed. This question has been studied in [1, 4, 10].

The following notational conventions are used in the paper. The symbol ≥ means nonnegative; ⪈ semi-positive; and > strictly positive. The vector e is used for summations; each element of e equals one. Script 𝒮 is used for a subsequence of integers, and 𝒮_1 ⊆ 𝒮_0 means that 𝒮_1 is a refinement of 𝒮_0. Frequent use is made of the Heine-Borel theorem: every sequence in a closed bounded set of R^n has a limit point in that set. Equations are numbered by section and order of occurrence in that section.
Without implicating them, I would like to thank the referees, Curtis Eaves, Joop Evers, Alan Manne, Murray Protter, and Pravin Varaiya for their many useful suggestions.

2. Assumptions

This section presents and explores the assumptions used in all the following sections. The assumptions are:

(I) There exists an α-convergent solution; i.e., X_α(s) ≠ ∅.

(II) For every λ ∈ (0, α], the linear system

u(A - λK) = v + p,  v > 0,    (2.1)

has a solution (u, v).

(III) At least one of the following holds.
(i) The linear system uA = v + p, v > 0 has a solution.
(ii) There exists a θ ∈ (0, α] and a solution of (2.1) (with λ = θ) such that either uA ≥ p or uA ≥ 0.
(iii) There exists a θ ∈ (0, α] and a solution (u, v) of (2.1) (with λ = θ) such that either uAx_t ≥ 0 or (v + θuK)x_t ≥ 0 for all {x_t} ∈ X(s).


We shall see in Section 4 that if (II) and (III) hold and (I) does not, then either the problem is infeasible or each feasible solution {x_t} has p({x_t}) = -∞. A similar, not quite as strong, converse statement is true. Suppose (I) is true and for some 0 < λ ≤ α there does not exist a solution (u, v) of u(A - λK) = p + v, v ≥ 0. In this case, as is demonstrated in Section 3, the problem is unbounded for almost any value of s.

Both (I) and (II) cannot be verified in general. However, it is possible to gain some information about the assumptions. For example, if (I) is satisfied, then there exists a solution x of the system

(A - αK)x = s + αb/(1 - α),  x ≥ 0.    (2.2)

To see this, multiply the t-th constraint of the infinite problem by α^t, sum, and note that x = Σ_{t=0}^∞ α^t x_t satisfies (2.2). If (2.2) has no solution, then (I) fails. However, (2.2) may have a solution when (I) fails; the test is not conclusive.

Assumption (II) is difficult to verify. However, (II) and (III)(ii), with θ = α, will be satisfied if there exists a solution of either

u(A - αK) - v = p,  v > 0,  uA ≥ p;    (2.3)

or

u(A - αK) - v = p,  v > 0,  uA ≥ 0.    (2.4)
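With scalar illustrative data (A = 1, K = 0.5, b = 1, α = 0.5, s = 1; not from the paper) the aggregate test (2.2) can be checked directly: the solution of (2.2) should coincide with the discounted aggregate Σ α^t x_t of the feasible trajectory.

```python
# Numerical check of the feasibility test (2.2) on illustrative scalar data:
# solve (A - alpha*K) x = s + alpha*b/(1 - alpha) and compare with the
# discounted aggregate of the trajectory generated by (1.1).
A, K, b, alpha, s = 1.0, 0.5, 1.0, 0.5, 1.0

x_agg = (s + alpha * b / (1.0 - alpha)) / (A - alpha * K)  # solves (2.2)

total, state = 0.0, s
for t in range(200):
    x_t = state / A              # unique feasible decision
    total += alpha**t * x_t      # partial sum of sum_t alpha^t x_t
    state = b + K * x_t
```

Here x_agg = 8/3 and the partial sums approach the same value, so (2.2) is consistent with assumption (I) for these data.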

For (2.3) note that

A - λK = (1 - λ/α)A + (λ/α)(A - αK).

Thus for all 0 < λ ≤ α, we have

u(A - λK) ≥ (λ/α)v + p.

For (2.4) note that

u(A - λK) ≥ (λ/α)u(A - αK) = (λ/α)(v + p).

General verification of (II) requires non-existence of a solution to the generalized eigenvalue problem

Ax = λKx,  px ≥ 0,

with x ⪈ 0 and 0 < λ ≤ α. Assumption (III) is a patchwork of special cases to cover the apparent loophole in (II) when λ = 0. Note that (III)(i) will be satisfied if (II) holds for all λ, 0 ≤ λ ≤ α, or, more directly, if either Ax = 0, x ⪈ 0 has no solution, or Ax = 0,


x ⪈ 0 implies px < 0, or Ax = 0, x ⪈ 0, px ≥ 0 implies Kx = 0. Assumption (III) is only necessary if Ax = 0, x ⪈ 0, px ≥ 0, Kx ≠ 0 has a solution. Items (ii) and (iii) are directed to this particular case. It is a reasonable conjecture that assumption (III) is not necessary. At present, Theorem 1, Section 4, cannot be proved without (III), and it has not been possible to construct a counterexample based on the failure of (III). The details in (III)(ii) and (iii) are included to cover all the cases used in [3], and, more significantly, to indicate the failure of a possible solution procedure: see Example 3 in Section 5. In particular, (III)(iii) covers the "primal directed" case [3] in which A = (B, I), K = (H, 0) and p = (q, 0). Each row of B is either nonnegative, or the corresponding row of H and element of b are nonnegative. This assures Ax_t ≥ 0 for all t and {x_t} ∈ X(s). Moreover, u(A - αK) = v + p implies u ≥ 0; therefore (iii) is satisfied.

3. The optimal value function

Define V(s) to be the optimal value of the optimization problem as a function of the initial state s:

V(s) = sup {p({x_t}) | {x_t} ∈ X(s)}.    (3.1)

Let F = {s | X(s) ≠ ∅} be the set of initially feasible states. If s ∉ F, then V(s) is defined to be -∞. This section demonstrates that V is concave and satisfies a dynamic programming functional equation. The section closes by examining the consequence of assumption (II) not holding. Let dom V = {s | V(s) > -∞}. Both F and dom V are convex, and dom V ⊆ F. Let F̂ ⊆ F be the set of s such that (I) holds; then F̂ ⊆ dom V ⊆ F.

Proposition 1. V(s) is concave.

Proof. Following Rockafellar [14, p. 25], note that V is concave if and only if all λ, γ^1, γ^2, s^1, s^2 satisfying 0 < λ < 1, V(s^1) > γ^1, V(s^2) > γ^2 imply that V((1 - λ)s^1 + λs^2) > (1 - λ)γ^1 + λγ^2. Select {x_t^i} ∈ X(s^i), for i = 1, 2, such that p({x_t^i}) > γ^i. Note that {(1 - λ)x_t^1 + λx_t^2} ∈ X((1 - λ)s^1 + λs^2) and that

V((1 - λ)s^1 + λs^2) ≥ p({(1 - λ)x_t^1 + λx_t^2}) ≥ (1 - λ)p({x_t^1}) + λp({x_t^2}) > (1 - λ)γ^1 + λγ^2.

In addition, V satisfies a dynamic programming functional equation.

Proposition 2. On the convex set F, V satisfies the functional equation


V(s) = sup {px + αV(b + Kx) | Ax = s, x ≥ 0}.    (3.2)

Proof. If s ∈ F, and Ax = s, x ≥ 0, then V(s) ≥ px + αV(b + Kx). For any ε > 0, it is possible to find {x_t} ∈ X(s) such that

V(s) ≥ p({x_t}) = px_0 + αp({x_{t+1}}) ≥ V(s) - ε.

Since {x_{t+1}} ∈ X(b + Kx_0), αV(b + Kx_0) ≥ αp({x_{t+1}}). Therefore

V(s) ≥ px_0 + αV(b + Kx_0) ≥ V(s) - ε.
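The functional equation (3.2) can be observed numerically in the scalar case, where Ax = s leaves a single feasible decision. The data below (A = 1, K = 0.5, b = 1, p = 1, α = 0.5) are illustrative, not from the paper:

```python
# Evaluate V(s) for scalar data by unrolling the recursion to a finite depth,
# then check the dynamic programming equation (3.2):
# V(s) = p*x + alpha*V(b + K*x), with x = s/A the only feasible decision.
def V(s, A=1.0, K=0.5, b=1.0, p=1.0, alpha=0.5, depth=200):
    total, disc = 0.0, 1.0
    for _ in range(depth):
        x = s / A                 # unique decision with A x = s
        total += disc * p * x
        disc *= alpha
        s = b + K * x             # next state
    return total
```

For these data V(1) = 8/3 and V(1.5) = 10/3, and indeed 8/3 = 1·1 + 0.5·(10/3), as (3.2) requires.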

Proposition 3 examines the implications of a failure in assumption (II). Failure of (II) implies that for some 0 < λ ≤ α, u(A - λK) = v + p, v > 0 has no solution. To rule out a borderline case, total failure here will mean that u(A - λK) = v + p, v ≥ 0 has no solution for some λ ∈ (0, α].

Proposition 3. If (I) holds and, for some λ, 0 < λ ≤ α, the system u(A - λK) ≥ p has no solution, then V(s) = +∞ for s in the relative interior of dom V, and V(s) = -∞ for s ∉ dom V. V can only be finite on the relative boundary of dom V.

Proof. If u(A - λK) ≥ p has no solution, then Farkas' Lemma implies (A - λK)y = 0, y ≥ 0, py > 0 has a solution. Suppose s ∈ F̂ and {x_t} ∈ X(s). Then {x_t + y/λ^t} ∈ X(s + λKy), and p({x_t + y/λ^t}) = +∞. This implies V(s) is an improper concave function, and according to Rockafellar [14, Theorem 7.2], V can only be finite on the relative boundary of its domain. The following example illustrates the result.

Example 1 (see Fig. 1).

A = [1 0; 0 1],  K = [2 0; 0 2],  p = (1, -1),  b = (0, 0),  α = 1/2,  s_0 = (s_01, s_02).

V(s) = -∞ if s ≱ 0, and V(s) = lim inf_{T→∞} T(s_01 - s_02) otherwise.
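The partial sums in Example 1 can be computed directly: the constraints force x_t = 2^t s, so each term α^t p x_t equals s_01 - s_02 and the partial sum through period T is (T + 1)(s_01 - s_02).

```python
# Partial discounted sums for Example 1 (A = I, K = 2I, b = 0, p = (1, -1),
# alpha = 1/2): the unique feasible trajectory is x_t = 2^t * s, so each term
# alpha^t * p * x_t equals s1 - s2 and the series diverges unless s1 = s2.
def partial_sum(s, T, alpha=0.5):
    total, x = 0.0, list(s)                 # x_0 = s (feasible only if s >= 0)
    for t in range(T + 1):
        total += alpha**t * (x[0] - x[1])   # p = (1, -1)
        x = [2.0 * xi for xi in x]          # x_{t+1} = 2 x_t
    return total
```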

If (I) fails, then Theorem 1 of the next section indicates that V(s) = -∞: either X(s) = ∅, or there are no α-convergent solutions in X(s), and Theorem 1 states that {x_t} not α-convergent implies p({x_t}) = -∞.

Fig. 1.

4. The class of optimal solutions

This section demonstrates that, without loss of generality, the infinite horizon problem is equivalent to

maximize Σ_{t=0}^∞ α^t p x_t,  subject to {x_t} ∈ X_α(s).    (4.1)

Theorem 1 demonstrates that solutions that are not α-convergent are not potentially optimal solutions.

Theorem 1. Under assumptions (II) and (III), if {x_t} ∈ X(s) and {x_t} ∉ X_α(s), then p({x_t}) = -∞.

Proof. Let ρ be the radius of convergence of the power series Σ_{t=0}^∞ λ^t e x_t. The proof first shows that assumption (II) implies that any solution with radius of convergence ρ, 0 < ρ ≤ α, has p({x_t}) = -∞. The remainder of the proof, and assumption (III), are needed to deal with {x_t} that have a radius of convergence equal to zero. Note that 0 ≤ ρ ≤ α: if ρ > α, then {x_t} ∈ X_α(s). Note also that ρ = α, {x_t} ∉ X_α(s) implies Σ_{t=0}^∞ α^t e x_t diverges.

Multiply constraints zero through T of (1.1) by λ^t and sum. This yields

(A - λK) Σ_{t=0}^T λ^t x_t = s + Σ_{t=1}^T λ^t b - λK λ^T x_T.    (4.2)

Let M_T(λ) = Σ_{t=0}^T λ^t e x_t, and divide (4.2) by M_T(λ). Define a subsequence 𝒮_0 such that lim inf_{T→∞} λ^T e x_T / M_T(λ) is attained for T ∈ 𝒮_0. Then choose 𝒮_1 ⊆ 𝒮_0 such that Σ_{t=0}^T λ^t x_t / M_T(λ) → x(λ) and λ^T x_T / M_T(λ) → y(λ) for T ∈ 𝒮_1. From (4.2) we obtain

(A - λK)x(λ) = -λK y(λ),  e x(λ) = 1,  x(λ) ≥ 0,  y(λ) ≥ 0.    (4.3)


Let λ_k be a sequence of λ's decreasing to ρ. Lemma 1 in the Appendix, with a_t = e x_t, indicates y(λ_k) → 0. Since the x(λ_k) lie in a compact set, there exists a subsequence of the λ_k, k ∈ 𝒮, decreasing to ρ, and an x such that

x(λ_k) → x,  (A - ρK)x = 0,  x ≥ 0,  k ∈ 𝒮.    (4.4)

From assumption (II), there exists a (u, v) such that

u(A - ρK) = v + p,  v > 0.

Therefore vx + px = 0. Since vx > 0, we must have px < 0.

Assume first that ρ < α. Then px(λ_k) < 0 for λ_k sufficiently close to ρ, ρ < λ_k < α. This implies lim inf_{T→∞} Σ_{t=0}^T λ_k^t p x_t = -∞. From Lemma 2, it follows that p({x_t}) = -∞. If ρ = α, then Σ_{t=0}^∞ α^t e x_t diverges, thus the solution of (4.3) with λ = ρ has y(ρ) = 0 (see Lemma 1). The logic used directly above can be used again to establish px(ρ) < 0, therefore p({x_t}) = -∞.

If ρ = 0, (4.3) remains valid. If λ_k → 0, then λ_k y(λ_k) → 0. Therefore we reach (4.4) without recourse to Lemma 1. A subsequence of the x(λ_k) will converge to x, which will satisfy

Ax = 0,  x ≥ 0.    (4.5)

Now (III)(i) can be used exactly as (II) was used above. To complete the treatment of ρ = 0, consider (III)(ii) or (III)(iii). From (4.2) either

us + Σ_{t=1}^T θ^t ub = Σ_{t=0}^{T-1} θ^t v x_t + Σ_{t=0}^T θ^t p x_t + θ^T (v + θuK) x_T,    (4.6)

or

us + Σ_{t=1}^{T+1} θ^t ub = Σ_{t=0}^T θ^t v x_t + Σ_{t=0}^T θ^t p x_t + θ^{T+1} uA x_{T+1}.    (4.7)

If either (III)(ii) or (iii) holds, then either (v + θuK)x_T ≥ 0 or uA x_{T+1} ≥ 0. Thus either

us + Σ_{t=1}^T θ^t ub ≥ Σ_{t=0}^{T-1} θ^t v x_t + Σ_{t=0}^T θ^t p x_t,

or

us + Σ_{t=1}^{T+1} θ^t ub ≥ Σ_{t=0}^T θ^t v x_t + Σ_{t=0}^T θ^t p x_t.

In both cases the left-hand quantity is convergent. Since ρ = 0, Σ_{t=0}^T θ^t v x_t → +∞; therefore Σ_{t=0}^T θ^t p x_t → -∞, and by Lemma 2, p({x_t}) = -∞.

Corollary 1. If assumptions (II) and (III) are valid and assumption (I) is not, then V(s) = -∞.

Proof. If X(s) = ∅, then V(s) = -∞. Otherwise every {x_t} ∈ X(s) is not α-convergent; by Theorem 1, p({x_t}) = -∞, thus V(s) = -∞.


From (4.5) in the proof of Theorem 1 one can conclude several things. First, if Ax = 0, x ⪈ 0 has no solution, then each {x_t} ∈ X(s) has a positive radius of convergence. Second, if Ax = 0, x ⪈ 0 implies px < 0, then (III)(i) is satisfied. Third, if Ax = 0, x ⪈ 0, px ≥ 0 implies Kx = 0, then (II) does not hold, since (A - αK)x = 0, x ⪈ 0, px ≥ 0 has a solution. The cases (III)(ii) and (iii) are only needed if Ax = 0, x ⪈ 0, px ≥ 0, Kx ≠ 0 has a solution.

5. Finite horizon problems

This section considers finite (T period) horizon problems and demonstrates that the optimal solutions of the T period problems converge to the optimal solution of the infinite horizon problem. The T period problems can be motivated in two ways. First, let u_t be the marginal value of extra resources in period t. For t large enough one would expect that u_{t+1} = αu_t: if the system has dealt with the initial conditions, then the only value in getting the resources one period sooner is determined by the discounting of future rewards. If one formally writes the dual of the infinite horizon linear program, see (6.1), and requires u_t = α^{t-T} u_T for t ≥ T, a finite linear program (5.2) will result. The T period problem (5.1) is the dual of that linear program.

The second method of motivating the T period problems is to replace the time T, T+1, ... constraints of (1.1) by a linear combination of those constraints. Multiply constraint t, for t ≥ T, by α^{t-T} and sum. This yields

(A - αK) Σ_{t=T}^∞ α^{t-T} x_t = b/(1 - α) + K x_{T-1}.

Define x_T^T as Σ_{t=T}^∞ α^{t-T} x_t; then we have the T period problem (5.1):

maximize  Σ_{t=0}^T α^t p x_t^T,

subject to  A x_0^T = s,  x_0^T ≥ 0,
  A x_t^T = b + K x_{t-1}^T,  x_t^T ≥ 0  for 1 ≤ t ≤ T - 1,    (5.1)
  (A - αK) x_T^T = b/(1 - α) + K x_{T-1}^T,  x_T^T ≥ 0.

Define V^T(s) as the optimal value of (5.1).
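When A is a nonzero scalar the constraints of (5.1) determine the decisions uniquely, so V^T(s) can be computed without a linear programming code. The sketch below uses the data of Example 2 from later in this section (A = 1, K = 1.6, b = 1, p = 1, s = 1) together with α = 1/4, a value recovered here from the stated series value 2.222... and therefore an assumption of this sketch:

```python
# Value of the T period problem (5.1) for scalar data (T >= 1): decisions
# x_0, ..., x_{T-1} follow A x_t = b + K x_{t-1} exactly, and the terminal
# variable solves (A - alpha*K) x_T = b/(1 - alpha) + K x_{T-1}.
# alpha = 1/4 is an assumed reconstruction of Example 2's discount factor.
def v_T(T, A=1.0, K=1.6, b=1.0, p=1.0, s=1.0, alpha=0.25):
    x_prev = s / A                       # period 0 decision
    value = p * x_prev
    for t in range(1, T):
        x_prev = (b + K * x_prev) / A    # A x_t = b + K x_{t-1}
        value += alpha**t * p * x_prev
    x_T = (b / (1.0 - alpha) + K * x_prev) / (A - alpha * K)  # terminal constraint
    return value + alpha**T * p * x_T
```

Because the trajectory is unique here, V^T(s) equals the infinite horizon value 20/9 for every T ≥ 1, a degenerate but concrete instance of the convergence established in this section.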

Proposition 4. Under assumptions (I), (II) and (III), Problem (5.1) has an optimal solution for all T, and

(i) V^T(s) ≥ V^{T+1}(s) ≥ V(s);

(ii) V^{T+1}(s) = max {px + αV^T(b + Kx) | Ax = s, x ≥ 0}.


Proof. The dual of (5.1) is

minimize  u_0 s + Σ_{t=1}^{T-1} u_t b + u_T b/(1 - α),

subject to  u_t A ≥ α^t p + u_{t+1} K  for 0 ≤ t ≤ T - 1,    (5.2)
  u_T(A - αK) ≥ α^T p.

Let, by (II), (u, v) satisfy u(A - αK) - v = p, v > 0. Then u_t^T = α^t u is feasible for (5.2). Also, suppose that {x_t} ∈ X_α(s); then

x_t^T = x_t  if t < T,  x_T^T = Σ_{t=T}^∞ α^{t-T} x_t,    (5.3)

is feasible for (5.1). Since (5.1) and (5.2) have feasible solutions, they have optimal solutions. Suppose u_t^T solves the T period version of (5.2); then

u_t^{T+1} = u_t^T  if t ≤ T,  u_{T+1}^{T+1} = α u_T^T,    (5.4)

is feasible for the T + 1 period version, with value V^T(s). It therefore provides an upper bound on V^{T+1}(s). Definition (5.3) can be used to construct a feasible solution of (5.1) with value p({x_t}) from any {x_t} ∈ X_α(s). Theorem 1 assures V(s) = sup {p({x_t}) | {x_t} ∈ X_α(s)}; therefore V^T(s) ≥ V(s). This proves item (i). Item (ii) follows from item (i), the definition of V^T(s), and the logic used in the proof of Proposition 2 in Section 3.

Let the sequence {x_t^T} be an optimal solution of (5.1) with x_t^T = 0 for t > T.

Theorem 2. Under assumptions (I), (II), and (III):

(i) There exists an optimal solution {x_t} of the infinite horizon problem.

(ii) For every τ, there exists a sequence 𝒮 such that x_t^T → x_t for 0 ≤ t ≤ τ, T ∈ 𝒮.

(iii) V^T(s) = Σ_{t=0}^T α^t p x_t^T decreases to V(s) = Σ_{t=0}^∞ α^t p x_t.

Proof. Let M_T = Σ_{t=0}^T α^t e x_t^T. We first wish to show that the sequence M_T is bounded. For all T,

(A - αK) Σ_{t=0}^T α^t x_t^T = s + αb/(1 - α).    (5.5)

Suppose M_T → +∞ on some subsequence 𝒮. Then there exists a subsequence 𝒮_1 ⊆ 𝒮 and a vector z such that

z_T = Σ_{t=0}^T α^t x_t^T / M_T → z,  T ∈ 𝒮_1,  (A - αK)z = 0,  z ⪈ 0.    (5.6)


Since (II) is valid, there exists (u, v) such that u(A - αK) = p + v, v > 0. Thus 0 = pz + vz, vz > 0; so pz < 0. Assumption (I) implies V(s) > -∞; therefore for T ∈ 𝒮_1, V^T(s) = M_T p z_T ≥ V(s) > -∞. However, M_T → +∞ and p z_T → pz < 0 imply M_T p z_T → -∞, a contradiction; so M_T is bounded, say M_T ≤ M. Since the x_t^T lie in a compact set, there exist a sequence 𝒮 and vectors x_t such that

x_t^T → x_t,  T ∈ 𝒮,  0 ≤ t ≤ τ.

Clearly {x_t} is nonnegative. If {x_t} ∉ X(s), there exists a t such that

Ax_t = b + Kx_{t-1}

is not satisfied. However, for T ∈ 𝒮, T > t, we have Ax_t^T = b + Kx_{t-1}^T and x_t^T → x_t, x_{t-1}^T → x_{t-1}. Therefore the constraint must be satisfied. In addition, Σ_{t=0}^∞ α^t e x_t ≤ M, so {x_t} ∈ X_α(s). To see this, note that for T and τ,

Σ_{t=0}^τ α^t e x_t^T ≤ M_T ≤ M.

The limit over T ∈ 𝒮 yields Σ_{t=0}^τ α^t e x_t ≤ M. Thus {x_t} ∈ X_α(s); and

Σ_{t=0}^T α^t p x_t^T = V^T(s) ≥ V(s) ≥ Σ_{t=0}^∞ α^t p x_t.

In the limit,

lim_{T∈𝒮} Σ_{t=0}^T α^t p x_t^T = ε + Σ_{t=0}^∞ α^t p x_t,  ε ≥ 0.

If ε = 0, the proof is completed. Suppose in contrast ε > 0. Then for each T,

Σ_{t=0}^∞ α^t p (x_t^T - x_t) ≥ ε > 0.

Let (u, v) solve u(A - αK) = p + v, v > 0. From (5.5),

us + αub/(1 - α) = Σ_{t=0}^T α^t v x_t^T + Σ_{t=0}^T α^t p x_t^T.    (5.7)

Since {x_t} ∈ X_α(s),

us + αub/(1 - α) = Σ_{t=0}^∞ α^t v x_t + Σ_{t=0}^∞ α^t p x_t.    (5.8)

Subtract (5.8) from (5.7) to obtain

Σ_{t=0}^∞ α^t v (x_t - x_t^T) = Σ_{t=0}^∞ α^t p (x_t^T - x_t) ≥ ε > 0,

or, for any τ,

Σ_{t=0}^τ α^t v (x_t - x_t^T) + Σ_{t=τ+1}^∞ α^t v x_t ≥ ε + Σ_{t=τ+1}^∞ α^t v x_t^T ≥ ε > 0.    (5.9)

However, the left-hand side of (5.9) can be made arbitrarily small. First choose τ large enough so that

0 ≤ Σ_{t=τ+1}^∞ α^t v x_t ≤ ε/3.

Then there exists a T* such that for all T ∈ 𝒮, T ≥ T*,

|Σ_{t=0}^τ α^t v (x_t - x_t^T)| ≤ ε/3.

The left-hand side of (5.9) is then at most 2ε/3 < ε, a contradiction; hence ε = 0.

Corollary 2. Under (I), (II), and (III), for s ∈ F,

V(s) = max {px + αV(b + Kx) | Ax = s, x ≥ 0}.

Theorem 2 shows that the T period problem (5.1) approximates, in some sense, the infinite period problem. We now consider two alternative definitions of a T period problem and show that they do not, in general, lead to solvable problems. It is possible to construct a T period problem by requiring that x_t = x_T for t ≥ T. Obviously x_T must satisfy (A - K)x_T = b, x_T ≥ 0. Example 2 shows this is not generally applicable: (I), (II), and (III) are satisfied, yet (A - K)x = b, x ≥ 0 does not have a solution.

Example 2. A = 1, K = 1.6, b = 1, p = 1, s = 1, α = 1/4; x_t = (1.6^{t+1} - 1)/0.6 is feasible and Σ_{t=0}^∞ α^t x_t = 2.222..., yet (A - K)x = 1, x ≥ 0 has no solution.

A third T period problem can be obtained by ignoring constraints after time T. The variable x_T will be constrained only by A x_T = b + K x_{T-1}, x_T ≥ 0. Example 3 indicates that these truncated problems can be unbounded for all T.
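The claims of Example 2 above are easy to verify numerically (taking α = 1/4, the value consistent with the stated sum 2.222... = 20/9): the displayed x_t satisfies the recursion x_t = b + K x_{t-1}, the discounted series converges to 20/9, and (A - K)x = b forces x = 1/(1 - 1.6) < 0, so no nonnegative stationary solution exists.

```python
# Verify Example 2: x_t = (1.6**(t+1) - 1)/0.6 satisfies x_0 = 1 and
# x_t = 1 + 1.6*x_{t-1}, while sum_t (1/4)^t x_t = 20/9 = 2.222...
# (alpha = 1/4 is an assumed reconstruction of the discount factor).
def x(t):
    return (1.6**(t + 1) - 1.0) / 0.6

series = sum(0.25**t * x(t) for t in range(120))
equilibrium = 1.0 / (1.0 - 1.6)      # solves (A - K) x = b, but is negative
```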

Example 3.

A = [ -1 01 ] [ 10 0 ]1,  b = (;),  s = (;),  p = (1, 1, 0, 0).

Note that (I) and (II) are satisfied. In addition, (III)(iii) is satisfied since for every {x_t}, Ax_t = b + Kx_{t-1} ≥ 0. In addition, u(A - αK) = v + p, v > 0 has a solution with u > 0. To see the problem is unbounded, note that any positive multiple of the vector y^1 = (0, 1, 0, 1) can be added to x_T. Thus arbitrarily large profits are possible at time T.

6. Optimality conditions; duality

This section briefly considers optimality conditions, establishes sufficient conditions for optimality, and comments on necessary conditions and duality.

Theorem 3. Under assumptions (I), (II) and (III), if {x_t} ∈ X_α(s) and there exists {u_t} such that

(i) ||u_t|| ≤ α^t M for some M,
(ii) u_t A - v_t = α^t p + u_{t+1} K,  v_t ≥ 0,
(iii) v_t x_t = 0,

then {x_t} is optimal.

Proof. From (ii) and (iii), u_t A x_t = α^t p x_t + u_{t+1} K x_t for every t, while feasibility gives u_0 A x_0 = u_0 s and u_t A x_t = u_t b + u_t K x_{t-1} for t ≥ 1. Summing, the K terms cancel and

u_0 s + Σ_{t=1}^∞ u_t b = Σ_{t=0}^∞ α^t p x_t.

If {y_t} is another feasible solution not in X_α(s), then p({y_t}) = -∞. If {y_t} ∈ X_α(s), then from (i) Σ_{t=0}^∞ u_t A y_t < ∞; thus

Σ_{t=0}^∞ α^t p x_t = u_0 s + Σ_{t=1}^∞ u_t b = Σ_{t=0}^∞ v_t y_t + Σ_{t=0}^∞ α^t p y_t ≥ Σ_{t=0}^∞ α^t p y_t.
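For scalar data the conditions of Theorem 3 can be exhibited explicitly. Guessing a stationary dual sequence u_t = α^t u with u = p/(A - αK) makes (ii) hold with v_t = 0, so (iii) is automatic, and the certified value u_0 s + Σ_{t≥1} u_t b matches the discounted reward of the unique feasible trajectory. The data (A = 1, K = 0.5, b = 1, p = 1, α = 0.5, s = 1) are illustrative, not from the paper:

```python
# Check the sufficient optimality conditions of Theorem 3 on scalar data with
# the stationary dual guess u_t = alpha^t * u, u = p/(A - alpha*K): condition
# (ii) holds with v_t = 0, and the dual value u_0*s + sum_{t>=1} u_t*b equals
# the primal discounted reward.
A, K, b, p, alpha, s = 1.0, 0.5, 1.0, 1.0, 0.5, 1.0
u = p / (A - alpha * K)

# condition (ii) residual, with the common factor alpha^t divided out:
residual = u * A - p - alpha * u * K   # should be 0, so v_t = 0 and (iii) holds

T = 200
dual_value = u * s + sum(alpha**t * u * b for t in range(1, T))
primal_value, state = 0.0, s
for t in range(T):
    x_t = state / A
    primal_value += alpha**t * p * x_t
    state = b + K * x_t
```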

It is possible to define a dual infinite horizon program:

minimize  lim sup_{T→∞} u_0 s + Σ_{t=1}^T u_t b,
subject to  u_t A - v_t = α^t p + u_{t+1} K,  v_t ≥ 0.    (6.1)

Proposition 5. Under (I), (II), and (III), the maximum primal infinite horizon problem has a value greater than or equal to the optimal value of the minimization problem (6.1).

Proof. Consider the dual linear programs (5.2). The optimal solution {u_t^T} determines a feasible solution of (6.1) with u_t = α^{t-T} u_T^T for t ≥ T. That solution has value V^T(s). Thus the infimum in (6.1) is less than or equal to V(s).


Suppose for some ρ ≥ 1, w(A - ρK) ≥ 0, wb < 0 has a solution. Then if u(A - αK) ≥ p, {α^t u + ρ^t w} solves (6.1), and lim sup_{T→∞} (u + w)s + Σ_{t=1}^T (α^t u + ρ^t w)b = -∞. Thus to insure (6.1) has a finite lower bound, one must at least require that (A - ρK)x = b, x ≥ 0 has a solution for all ρ ≥ 1. This is too stringent an assumption. If one requires that dual solutions satisfy ||u_t|| ≤ α^t M for some M, then the infimum of (6.1) is equal to the maximum of the original problem. Recall the definition of u_t^T in Section 5 and the construction of the optimal solution {x_t}. Let M_T = sup_t α^{-t} ||u_t^T||. If a subsequence 𝒮 exists such that M_T ≤ M for T ∈ 𝒮, then it is possible to establish the existence of {u_t} such that

(i) ||u_t|| ≤ α^t M,
(ii) u_t A - v_t = α^t p + u_{t+1} K,  v_t ≥ 0,
(iii) v_t x_t = 0,
(iv) there exists 𝒮_1 such that u_t^T → u_t for T ∈ 𝒮_1, 0 ≤ t ≤ τ.

This result is easy to establish. However, it is of little use for numerical computations unless conditions that bound M_T can be deduced from the data A, K, p, b, α.

7. Equilibrium solutions

In [1, 4, 10], the problem of finding an equilibrium optimal solution x_t = x for all t was considered and solved in an elegant way. Define an optimal policy x: R^m → R^n as a function that satisfies Ax(s) = s, x(s) ≥ 0, and V(s) = px(s) + αV(b + Kx(s)). Under special assumptions, [1] demonstrates that the solution of a linear complementarity problem yields a fixed point s* of the map b + Kx(s); i.e., b + Kx(s*) = s*. This information gives insight into the structure of the optimal policy and value functions V(s) and x(s) for s close to s*. The procedure outlined in Section 5 attempts to find the optimal initial decision x(s_0) and its value V(s_0) for the given initial condition s_0. The authors of [1, 4, 10] do not claim that knowledge of x(s*) and s* is useful in calculating x(s_0). The remainder of this section demonstrates that x(s*) and s* are not, in general, helpful in calculating x(s_0).

If the system

(A - K)x = b,  x ≥ 0    (7.1)

has no feasible solution, then an equilibrium state s* does not exist. In the case where (7.1) is feasible, note that if x satisfies (7.1), then {x_t = x} ∈ X(b + Kx) and V(b + Kx) ≥ px/(1 - α). Now consider the optimization problem

maximize  px,  subject to (A - K)x = b,  x ≥ 0.    (7.2)

Suppose first that (7.2) has an unbounded solution; i.e., py > 0, (A - K)y = 0, y ≥ 0 has a solution. Consider s ∈ F, μ > 0, and note that {x_t + μy} ∈ X(s + μKy) for any {x_t} ∈ X(s); therefore

V(s + μKy) ≥ V(s) + μ py/(1 - α).

R.C. Grinold/ Finite horizon approximations


In this case the function V is unbounded. If (7.2) has an optimal solution z, then V(b + Kz) ≥ pz/(1 − α). Now suppose an equilibrium optimal solution x(s*) exists; i.e., (A − K)x(s*) = b, x(s*) ≥ 0, V(b + Kx(s*)) = px(s*)/(1 − α). It follows that either x(s*) solves (7.2) or

V(b + Kz) ≥ pz/(1 − α) > px(s*)/(1 − α) = V(s*).  (7.3)

Equation (7.3) indicates that the suboptimal policy x_t = z for all t, starting from s = b + Kz, is preferable to the optimal policy x_t = x(s*) starting at s = s*. The calculation of both z and x(s*) allows a free choice of initial conditions. If one's actual task is to calculate x(s₀), the initial conditions cannot be ignored.

Appendix

Lemma 1. Assume a_t ≥ 0 and that 0 < ρ < ∞ is the radius of convergence of the power series Σ_t λ^t a_t; define

h(λ) = lim inf_{T→∞} [λ^T a_T / Σ_{t=0}^{T} λ^t a_t],

then

lim_{λ→ρ+} h(λ) = h(ρ) = 0.
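A numeric illustration of the lemma for the hypothetical choice a_t = 1, whose power series has radius of convergence ρ = 1. In that case the ratio in the definition of h equals (λ − 1)/(λ − λ^(−T)), which decreases to h(λ) = (λ − 1)/λ as T → ∞, and h(λ) → 0 as λ → 1+, as the lemma asserts.

```python
# Lemma 1 illustrated with a_t = 1 (so rho = 1).  For lambda > 1,
#   lambda^T / sum_{t<=T} lambda^t = (lambda - 1) / (lambda - lambda**(-T)),
# written in a form that avoids overflow of lambda**T for large T.

def h(lam, T=4000):
    """Truncated stand-in for the lim inf in Lemma 1 when a_t = 1."""
    return (lam - 1.0) / (lam - lam**(-T))

for lam in (1.5, 1.1, 1.01):
    print(lam, h(lam))  # shrinks toward 0 as lam decreases to 1
```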

Proof. Without loss of generality ρ = 1. The terms λ^T a_T / Σ_{t=0}^{T} λ^t a_t are nondecreasing in λ, therefore h(λ) is nondecreasing. In particular 0 = h(1) ≤ h(λ) for all λ ≥ 1. Suppose inf{h(λ) | 1 < λ} = 2δ > 0. Fix λ at the value 1 + δ. There exists a T* such that T ≥ T* implies

λ^T a_T ≥ δ Σ_{t=0}^{T} λ^t a_t,

or

a_T ≥ δ Σ_{t=0}^{T} λ^{t−T} a_t.  (8.1)

Define the sequence b_j by

b_0 = Σ_{t=0}^{T*} λ^{t−T*} a_t,  b_j = a_{T*+j} if j ≥ 1.


Equation (8.1) becomes

b_J ≥ δ Σ_{j=0}^{J} λ^{j−J} b_j.  (8.2)

It is straightforward to demonstrate by induction that

b_j ≥ δ b_0 (λ(1 − δ))^{−j}  (8.3)

for all j. Let γ = λ(1 − δ) = 1 − δ² < 1. From (8.3) we obtain

J(δ b_0 γ^{T*}) ≤ Σ_{j=1}^{J} γ^{T*+j} b_j ≤ Σ_{t=0}^{∞} γ^t a_t.  (8.4)

Since δ b_0 γ^{T*} > 0, the leftmost quantity in (8.4) can be made arbitrarily large. However, γ < 1 = ρ, so the rightmost quantity has a finite upper bound. This contradiction proves the lemma.

Lemma 2. If lim inf_{T→∞} Σ_{t=0}^{T} λ^t a_t = −∞ and α ≥ λ, then lim inf_{T→∞} Σ_{t=0}^{T} α^t a_t = −∞.

Proof. Without loss of generality λ = 1. The result is trivial if α = 1. Let r_T = Σ_{t=0}^{T} α^t a_t and μ = 1/α < 1. Then

Σ_{t=0}^{T} a_t = (1 − μ) Σ_{t=0}^{T} μ^t r_t + μ^{T+1} r_T.  (8.5)

If lim inf_{T→∞} r_T > −∞, then r_T must be bounded below, i.e., r_T ≥ M for all T. Thus from (8.5),

Σ_{t=0}^{T} a_t ≥ (1 − μ) Σ_{t=0}^{T} μ^t M + μ^{T+1} M = M,

which contradicts the hypothesis.
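Identity (8.5) can be checked numerically; it is an Abel-summation rearrangement, so it should hold exactly up to floating-point error. The random coefficients a_t and the choice α = 2 below are illustrative assumptions.

```python
# Numerical check of identity (8.5): with mu = 1/alpha < 1 and
# r_T = sum_{t=0}^{T} alpha^t a_t,
#   sum_{t=0}^{T} a_t = (1 - mu) * sum_{t=0}^{T} mu^t r_t + mu^(T+1) * r_T.
import random

random.seed(0)
alpha = 2.0                      # alpha >= lambda = 1, so mu = 1/alpha < 1
mu = 1.0 / alpha
a = [random.uniform(-1, 1) for _ in range(50)]
T = len(a) - 1

r = []                           # r_t = sum_{s<=t} alpha^s a_s
acc = 0.0
for t, at in enumerate(a):
    acc += alpha**t * at
    r.append(acc)

lhs = sum(a)
rhs = (1 - mu) * sum(mu**t * r[t] for t in range(T + 1)) + mu**(T + 1) * r[T]
print(abs(lhs - rhs))  # agrees to floating-point precision
```

The proof of Lemma 2 only uses that the coefficients (1 − μ)μ^t and μ^{T+1} in (8.5) are nonnegative, so a uniform lower bound r_t ≥ M transfers to the left-hand side.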

References

[1] G.B. Dantzig and A.S. Manne, "A complementarity algorithm for an optimal capital path with invariant proportions", Journal of Economic Theory 9 (3) (1974) 312-323.
[2] G.T. DeGhellinck and G. Eppen, "Linear programming solutions for separable Markovian decision problems", Management Science 16 (1967) 371-393.
[3] J.J.M. Evers, Linear programming over an infinite horizon (University of Tilburg Press, Tilburg, The Netherlands, 1973).
[4] J.J.M. Evers, "Linear infinite horizon programming", Research Memorandum EIT 45, Tilburg Institute of Economics, Tilburg, The Netherlands (November 1973).
[5] D. Gale, "A geometric duality theorem with economic application", Review of Economic Studies 34 (1) (1967).
[6] R.C. Grinold, K.T. Marshall and R.M. Oliver, "Longitudinal manpower planning models", Naval Research Logistics Quarterly 23 (2) (1976) 245-260.
[7] R.C. Grinold and R.E. Stanford, "Optimal control of a graded manpower system", Management Science 20 (8) (1974).


[8] R.C. Grinold and D.S.P. Hopkins, "Computing optimal solutions for infinite-horizon mathematical programs with a transient stage", Operations Research 21 (1972).
[9] R.C. Grinold and D.S.P. Hopkins, "Duality overlap in infinite linear programs", Journal of Mathematical Analysis and Applications 41 (1) (1973).
[10] T. Hansen and T.C. Koopmans, "On the definition and computation of a capital stock invariant under optimization", Journal of Economic Theory 5 (1972) 487-523.
[11] D.S.P. Hopkins, "Infinite-horizon optimality in an equipment replacement and capacity expansion model", Management Science 18 (1971) 145-156.
[12] D.S.P. Hopkins, "Sufficient conditions for optimality in infinite-horizon linear economic models", Tech. Rept. No. 69.3, Department of Operations Research, Stanford University, Stanford, CA (1969).
[13] A.S. Manne, "Sufficiency conditions for optimality in an infinite-horizon development plan", Econometrica 38 (1) (1970).
[14] R.T. Rockafellar, Convex analysis (Princeton University Press, Princeton, NJ, 1970).
[15] M.L. Weitzman, "Duality theory for infinite-horizon convex models", Management Science 19 (7) (1973).
