J Comb Optim DOI 10.1007/s10878-014-9721-5

On single machine scheduling with resource constraint

Lidong Wu · Cong-Dian Cheng

© Springer Science+Business Media New York 2014

Abstract Single machine scheduling with a resource constraint is a nonlinear combinatorial optimization problem with applications in cloud computing. Given a set of jobs and a certain amount of a resource, the problem is to find a permutation of the jobs and a distribution of the resource that optimize a certain objective function. The processing time of each job is a nonlinear function of the resource assigned to it. In this paper, we propose a naive algorithm and study a subproblem arising within it: supposing the permutation of jobs is also given, how to find a resource distribution that minimizes the total weighted flow time. We obtain a polynomial-time optimal solution for a special case and an approximation solution for the general case.

Keywords Scheduling · Resource constraint · Weighted flow time · Approximation algorithm · Error bound

Mathematics Subject Classification 90B35 · 68C20
L. Wu, Department of Computer Science, University of Texas at Dallas, Richardson, TX 75080, USA. e-mail: [email protected]

C.-D. Cheng (corresponding author), College of Mathematics and Systems Science, Shenyang Normal University, Shenyang 110034, China. e-mail: [email protected]

1 Introduction

Consider $n$ jobs $J_1, J_2, \cdots, J_n$ and a resource with total amount $U$. The processing of each job is non-preemptive, and the processing time $p_i$ of job $J_i$ is a non-increasing function of the amount of resource $u_i$ assigned to it, i.e., $p_i = f_i(u_i)$, where $f_i$ is defined on $[0, a_i]$ with $f_i(0) > 0$
for $i \in [n] = \{1, 2, \cdots, n\}$. Let $C_i$ be the completion time of job $J_i$. When the jobs are scheduled according to a permutation $\pi = (\pi(1), \pi(2), \cdots, \pi(n))$, we have $C_{\pi(i)} = \sum_{k=1}^{i} p_{\pi(k)}$. Let $w_i$ be the weight of job $J_i$. The problem is to find a permutation $\pi$ and a resource allocation $u = (u_1, u_2, \cdots, u_n)$ minimizing $F(\pi, u) = \|u\| + \sum_{i=1}^{n} w_i C_i$ under the constraint $\|u\| \le U$, where $\|u\| = \sum_{i=1}^{n} u_i$. For brevity, we call this problem (P1). It has an application background in cloud computing, where energy efficiency is an important issue and the processing time of a job depends on the energy allocated to it.

In general the functions $f_i$ may be nonlinear, so (P1) is a nonlinear scheduling problem, called the resource-constrained scheduling problem, which has been studied extensively in the literature (Carlier and Rinnooy Kan 1982; Janiak 1991; Toker et al. 1991; Xie 1997; Gafarov et al. 2011; Błazewicz et al. 1983). When all $f_i$ are constant functions, we may simply take $u_i = 0$ for all $i \in [n]$. In this case the objective function becomes $\sum_{i=1}^{n} w_i C_i$, and the problem reduces to the classic one studied in Smith (1956) and Lawler (1983), which is polynomial-time solvable. Motivated by this classic case, a naive algorithm is to solve the following two problems alternately:

(P2) Given a resource allocation $u$, find a permutation $\pi$ minimizing $F(\pi, u)$.

(P3) Given a permutation $\pi$, find a resource allocation $u$ minimizing $F(\pi, u)$ under the constraint $\|u\| \le U$.

Since (P2) can be solved in time $O(n \log n)$ (Lawler 1983), in this paper we concentrate on (P3). For conciseness, in the following we always assume that each $f_j$ is differentiable, non-increasing and continuous on $[0, a_j]$, and that $a_j > 0$ and $U > 0$.

For problem (P3), let

$$\hat u_{\pi(k)}(\pi) = \sup\Big(\{0\} \cup \Big\{u \in [0, a_{\pi(k)}] \;\Big|\; f'_{\pi(k)}(u) \sum_{i=k}^{n} w_{\pi(i)} + 1 < 0\Big\}\Big) \tag{1.1}$$

and $\hat u(\pi) = (\hat u_j(\pi))$. If $\|\hat u(\pi)\| \le U$, then $\hat u(\pi)$ is the unique optimal solution of (P3); see Proposition 2. Due to this fact, if $\|\hat u(\pi)\| \le U$, we call (P3) resource sufficient, denoted by (P3.S); otherwise we call (P3) resource insufficient, denoted by (P3.I). If (P3) is resource sufficient for every $\pi \in P$, then (P1) is called resource affluent, denoted by (P1.A); otherwise (P1) is called resource non-affluent, denoted by (P1.NA).
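As a concrete illustration (the functions and numbers below are our own hypothetical choices, not an instance from the paper), take $f_i(u) = b_i e^{-u}$ on $[0, a_i]$. Then $f'_{\pi(k)}(u) W_k + 1 < 0$ holds iff $u < \ln(b_{\pi(k)} W_k)$, where $W_k = \sum_{i=k}^{n} w_{\pi(i)}$, so the sup in (1.1) becomes $\hat u_{\pi(k)}(\pi) = \min\{a_{\pi(k)}, \max\{0, \ln(b_{\pi(k)} W_k)\}\}$. The sketch below evaluates it:

```python
import math

def u_hat(perm, w, b, a):
    """Evaluate (1.1) for the hypothetical family f_i(u) = b_i * exp(-u):
    f_i'(u) * W_k + 1 < 0  <=>  u < ln(b_i * W_k), with W_k the tail weight."""
    n = len(perm)
    uh = [0.0] * n
    for k in range(n):
        j = perm[k]
        W = sum(w[perm[i]] for i in range(k, n))  # W_k = sum_{i=k}^{n} w_{pi(i)}
        uh[j] = min(a[j], max(0.0, math.log(b[j] * W)))
    return uh

# Made-up three-job instance.
w, b, a, U = [3.0, 2.0, 1.0], [2.0, 1.5, 1.0], [4.0, 4.0, 4.0], 10.0
uh = u_hat([0, 1, 2], w, b, a)
print(uh, sum(uh) <= U)  # here ||u_hat(pi)|| <= U, so this (P3) is resource sufficient
```

With these numbers $\|\hat u(\pi)\| \approx 3.99 \le U = 10$, so the instance is resource sufficient in the sense of (P3.S).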
2 Properties of optimal solutions

This section discusses properties of optimal solutions of (P1) and (P3). Throughout, $D = \prod_{j=1}^{n}[0, a_j]$, $A = \{u \in D : \|u\| \le U\}$ denotes the feasible region of resource allocations, $P$ denotes the set of all permutations of $J = [n]$, and $S = \{(\pi, u) : \pi \in P, u \in A\}$ denotes the set of feasible schedules.

Lemma 1 For any $\pi \in P$, $F(\pi, u) = \sum_{k=1}^{n}\big[\big(\sum_{i=k}^{n} w_{\pi(i)}\big) f_{\pi(k)}(u_{\pi(k)}) + u_{\pi(k)}\big]$, and $F(\pi, u)$ is continuous in the variable $u$ on the region $D$.

Proof Firstly we have
$$F(\pi, u) = \sum_{j=1}^{n} (w_j C_j + u_j) = \sum_{k=1}^{n} \big(w_{\pi(k)} C_{\pi(k)} + u_{\pi(k)}\big) = \sum_{k=1}^{n} \Big(w_{\pi(k)} \sum_{i=1}^{k} f_{\pi(i)}(u_{\pi(i)}) + u_{\pi(k)}\Big) = \sum_{k=1}^{n} \Big[\Big(\sum_{i=k}^{n} w_{\pi(i)}\Big) f_{\pi(k)}(u_{\pi(k)}) + u_{\pi(k)}\Big],$$

where the last equality exchanges the order of summation. Since each $f_j$ is continuous on $[0, a_j]$, the formula above implies that $F(\pi, u)$ is continuous in the variable $u$ on the region $D$. Hence Lemma 1 holds.

Proposition 1 (i) The optimal solution of (P3) exists. (ii) The optimal solution of (P1) exists.

Proof Since $(0, \cdots, 0) \in A$, we have $A \ne \emptyset$ and hence $\{F(\pi, u) \mid u \in A\} \ne \emptyset$. Let $a(\pi) = \inf\{F(\pi, u) \mid u \in A\}$. Then there is a sequence $\{u^k\}$ in $A$ such that $\lim_{k \to \infty} F(\pi, u^k) = a(\pi)$. Since $a_j < +\infty$ for all $j \in J$, the region $D$ is bounded and closed, so there must be a convergent subsequence $\{u^{k_i}\}$ of $\{u^k\}$ with $u' = \lim_{i \to \infty} u^{k_i} \in A$. Since $F(\pi, u)$ is continuous in $u$, we have $a(\pi) = \lim_{i \to \infty} F(\pi, u^{k_i}) = F(\pi, u')$. This implies $\{u \in A \mid F(\pi, u) = a(\pi)\} \ne \emptyset$. Let $b(\pi) = \inf\{\|u\| \mid F(\pi, u) = a(\pi), u \in A\}$. Since $\|u\|$ is also continuous in $u$, by the same method there exists a $u^*(\pi) \in A$ such that $\|u^*(\pi)\| = b(\pi)$ and $F(\pi, u^*(\pi)) = a(\pi)$. That is, $(\pi, u^*(\pi))$ is an optimal solution of (P3). Hence (i) holds.

Let $a = \inf\{a(\pi) \mid \pi \in P\}$. Since $P$ is a finite set, there is at least one $\pi \in P$ with $a(\pi) = a$. Hence $\{u^*(\pi) \mid F(\pi, u^*(\pi)) = a, \pi \in P\} \ne \emptyset$. Let $b = \min\{\|u^*(\pi)\| \mid F(\pi, u^*(\pi)) = a, \pi \in P\}$ and choose $\pi^* \in P$ such that $\|u^*(\pi^*)\| = b$. Then it can be easily verified that $(\pi^*, u^*(\pi^*))$ is an optimal solution of (P1). Hence (ii) holds.

Proposition 2 $\hat u(\pi)$ is the unique optimal solution of (P3.S).

Proof Without loss of generality, assume $\pi = [1, 2, \cdots, n]$, and let $g_j(u) = \big(\sum_{i=j}^{n} w_i\big) f_j(u) + u$. Then from Lemma 1 we have
$$F(\pi, u) = \sum_{j=1}^{n} \Big[\Big(\sum_{i=j}^{n} w_i\Big) f_j(u_j) + u_j\Big] = \sum_{j=1}^{n} g_j(u_j).$$

Hence, for every $u \in A$, we have

$$F(\pi, u) - F(\pi, \hat u(\pi)) = \sum_{j=1}^{n} \big[g_j(u_j) - g_j(\hat u_j(\pi))\big] = \sum_{u_j < \hat u_j(\pi)} \big[g_j(u_j) - g_j(\hat u_j(\pi))\big] + \sum_{u_j > \hat u_j(\pi)} \big[g_j(u_j) - g_j(\hat u_j(\pi))\big]. \tag{2.1}$$
Since $f_j$ is differentiable, $g_j$ is differentiable. So if $u_j < \hat u_j(\pi)$ then, noting (1.1) and applying the mean value theorem, we have

$$g_j(u_j) - g_j(\hat u_j(\pi)) = -g'_j(\xi_j)\big(\hat u_j(\pi) - u_j\big) = -\Big[\Big(\sum_{i=j}^{n} w_i\Big) f'_j(\xi_j) + 1\Big]\big(\hat u_j(\pi) - u_j\big) > 0, \quad u_j < \xi_j < \hat u_j(\pi); \tag{2.2}$$

if $u_j > \hat u_j(\pi)$ then, noting (1.1), we have

$$g_j(u_j) - g_j(\hat u_j(\pi)) = g'_j(\xi_j)\big(u_j - \hat u_j(\pi)\big) = \Big[\Big(\sum_{i=j}^{n} w_i\Big) f'_j(\xi_j) + 1\Big]\big(u_j - \hat u_j(\pi)\big) \ge 0, \quad \hat u_j(\pi) < \xi_j < u_j. \tag{2.3}$$

Combining (2.1), (2.2) and (2.3), we easily obtain

$$F(\pi, \hat u(\pi)) \le F(\pi, u), \tag{2.4}$$

and

$$F(\pi, \hat u(\pi)) < F(\pi, u) \tag{2.5}$$

if there is at least one $j \in J$ such that $u_j < \hat u_j(\pi)$. Since $\|\hat u(\pi)\| \le U$, we have $\hat u(\pi) \in A$. Hence (2.4) and (2.5) imply that $\hat u(\pi)$ is an optimal solution of (P3.S).

Finally, let $u^*$ be an optimal solution of (P3.S). By (2.5), we have $u^*_j \ge \hat u_j(\pi)$ for all $j \in J$. Hence $u^* = \hat u(\pi)$; that is, $\hat u(\pi)$ is the unique optimal solution of (P3.S).

Remark 1 Given a permutation $\pi$, denote by

$$1 \;\Big|\; \pi,\; p_j = f_j(u_j),\; \sum_j u_j \le U \;\Big|\; \sum_j (w_j C_j + u_j) \tag{P4}$$

the problem of finding $u^* \in A$ such that $F(\pi, u^*) = \inf\{F(\pi, u) \mid u \in A\}$. If $\|\hat u(\pi)\| \le U$, we call (P4) resource sufficient, denoted by (P4.S); otherwise we call (P4) resource insufficient, denoted by (P4.I). From Proposition 2 it is obvious that $\hat u(\pi)$ is an optimal solution of (P4.S); however, it may not be the unique optimal solution of (P4.S). Moreover, we can easily see that (P4.I) and (P3.I) are equivalent.

Proposition 3 Let $u^*$ be an optimal solution of (P4.I). Then $\|u^*\| = U < \|\hat u(\pi)\|$ and $F(\pi, \hat u(\pi)) < F(\pi, u^*)$.

Proof Without loss of generality, assume $\pi = [1, 2, \cdots, n]$. Since the resource is insufficient, it is obvious that $\|u^*\| = U < \|\hat u(\pi)\|$, and there exists at least one $j$ such that $u^*_j < \hat u_j(\pi)$. Hence, from (2.5), we have $F(\pi, \hat u(\pi)) < F(\pi, u^*)$.
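Lemma 1's rearrangement and Proposition 2 can be sanity-checked numerically. The sketch below uses the hypothetical family $f_j(u) = b_j e^{-u}$ (for which $\hat u_j(\pi)$ from (1.1) has the closed form $\min\{a_j, \max\{0, \ln(b_j \sum_{i \ge j} w_i)\}\}$) and the identity permutation; all job data are made up:

```python
import math, random

def F_direct(u, w, f):
    """F(pi,u) = sum_j w_j C_j + sum_j u_j via completion times, pi = identity."""
    n, t, total = len(u), 0.0, 0.0
    for j in range(n):
        t += f[j](u[j])          # p_j = f_j(u_j); t is C_j after this update
        total += w[j] * t
    return total + sum(u)

def F_separable(u, w, f):
    """Lemma 1: F(pi,u) = sum_j g_j(u_j), g_j(u) = (sum_{i>=j} w_i) f_j(u) + u."""
    return sum(sum(w[j:]) * f[j](u[j]) + u[j] for j in range(len(u)))

w, b, a, U = [3.0, 2.0, 1.0], [2.0, 1.5, 1.0], [4.0, 4.0, 4.0], 10.0
f = [lambda u, j=j: b[j] * math.exp(-u) for j in range(3)]
u_hat = [min(a[j], max(0.0, math.log(b[j] * sum(w[j:])))) for j in range(3)]  # (1.1)

random.seed(0)
for _ in range(1000):
    u = [random.uniform(0.0, a[j]) for j in range(3)]
    assert abs(F_direct(u, w, f) - F_separable(u, w, f)) < 1e-9   # Lemma 1
    if sum(u) <= U:
        assert F_separable(u_hat, w, f) <= F_separable(u, w, f) + 1e-9  # Proposition 2
print("checks passed")
```

Because each $g_j$ is minimized at $\hat u_j(\pi)$ and the objective is separable, no sampled feasible allocation beats $\hat u(\pi)$ here.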
Proposition 4 Let $(\pi^*, u^*)$ be an optimal solution of (P1). Then $u^*$ is an optimal solution of (P1) with permutation $\pi^*$. If (P1) with permutation $\pi^*$ is resource sufficient, then $u^* = \hat u(\pi^*)$.

Proof That $u^*$ is an optimal solution of (P1) with permutation $\pi^*$ is obvious. Hence, if (P1) with permutation $\pi^*$ is resource sufficient, then from Proposition 2 we further have $u^* = \hat u(\pi^*)$.

From Proposition 4 we can directly obtain the following conclusion.

Corollary 1 Let $(\pi^*, u^*)$ be an optimal solution of (P1.A). Then $u^* = \hat u(\pi^*)$.

In the remainder of this section and in Sect. 3, without loss of generality, we always assume $\pi = [1, 2, \cdots, n]$ and set $g_j(u) = \big(\sum_{i=j}^{n} w_i\big) f_j(u) + u$.

Lemma 2 Let $u^* = (u^*_j)$ be an optimal solution of (P3.I). Then the following conclusions hold.

(i) If $\{j \in J \mid 0 < u^*_j < a_j\} \ne \emptyset$, then there exist natural numbers $k$ and $l$, a real number $\lambda < 0$, and a permutation $j_1, j_2, \cdots, j_n$ of $J$, with $0 \le k \le l \le n-1$, such that: $u^*_{j_i} = 0$ and $g'_{j_i}(0) \ge \lambda$ for $i = 1, 2, \cdots, k$; $u^*_{j_i} = a_{j_i}$ and $g'_{j_i}(a_{j_i}) \le \lambda$ for $i = k+1, k+2, \cdots, l$; $0 < u^*_{j_i} < a_{j_i}$ and $g'_{j_i}(u^*_{j_i}) = \lambda$ for $i = l+1, l+2, \cdots, n$.

(ii) If $\{j \in J \mid 0 < u^*_j < a_j\} = \emptyset$, then there exist a natural number $k$ and a real number $\lambda < 0$, with $1 \le k \le n-1$, such that: $u^*_{j_i} = 0$ and $g'_{j_i}(0) \ge \lambda$ for $i = 1, 2, \cdots, k$; $u^*_{j_i} = a_{j_i}$ and $g'_{j_i}(a_{j_i}) \le \lambda$ for $i = k+1, k+2, \cdots, n$.

Proof Proving (i). Since $\{j \in J \mid 0 < u^*_j < a_j\} \ne \emptyset$, there are natural numbers $k$ and $l$, with $0 \le k \le l \le n-1$, such that: $u^*_{j_i} = 0$ for $i = 1, 2, \cdots, k$; $u^*_{j_i} = a_{j_i}$ for $i = k+1, k+2, \cdots, l$; $0 < u^*_{j_i} < a_{j_i}$ for $i = l+1, l+2, \cdots, n$. Let $\bar U = U - \sum_{i=1}^{l} u^*_{j_i}$ and $G(u_{j_{l+1}}, \cdots, u_{j_n}) = \sum_{i=l+1}^{n} g_{j_i}(u_{j_i})$. Since $\|u^*\| = U$, we have $\sum_{i=l+1}^{n} u^*_{j_i} = \bar U$. Since $u^* = (u^*_j)$ is an optimal solution of (P3.I), $G(u^*_{j_{l+1}}, u^*_{j_{l+2}}, \cdots, u^*_{j_n})$ is the minimum value of $G$ under the condition $\sum_{i=l+1}^{n} u_{j_i} = \bar U$. Since $0 < u^*_{j_i} < a_{j_i}$ for $i = l+1, l+2, \cdots, n$, the point $(u^*_{j_{l+1}}, u^*_{j_{l+2}}, \cdots, u^*_{j_n})$ is an interior point of the region $\prod_{i=l+1}^{n} [0, a_{j_i}]$. Let further

$$L(u_{j_{l+1}}, \cdots, u_{j_n}, \lambda) = G(u_{j_{l+1}}, \cdots, u_{j_n}) - \lambda\Big(\sum_{i=l+1}^{n} u_{j_i} - \bar U\Big).$$

Based on the observations above, by Lagrange's method of multipliers, $(u^*_{j_{l+1}}, \cdots, u^*_{j_n}, \lambda)$ is a stationary point of $L$; that is, we have the equation set

$$\frac{\partial L}{\partial u_{j_i}} = 0, \quad i = l+1, l+2, \cdots, n; \qquad \frac{\partial L}{\partial \lambda} = 0.$$
This yields $\sum_{i=l+1}^{n} u^*_{j_i} = \bar U$ and

$$g'_{j_i}(u^*_{j_i}) = \lambda, \quad i = l+1, l+2, \cdots, n.$$

If $g'_{j_{i'}}(0) < \lambda$ for some $i' \in \{1, 2, \cdots, k\}$, then $g'_{j_{i'}}(0) - g'_{j_{l+1}}(u^*_{j_{l+1}}) < 0$. Hence, since $g'_j$ is continuous for all $j \in J$, there is a $\delta$ with $0 < \delta < \min\{a_{j_{i'}}, u^*_{j_{l+1}}\}$ such that $g'_{j_{i'}}(\delta) - g'_{j_{l+1}}(u^*_{j_{l+1}} - \delta) < 0$. Let $u_{j_{i'}} = \delta$, $u_{j_{l+1}} = u^*_{j_{l+1}} - \delta$, and $u_j = u^*_j$ for $j \ne j_{i'}, j_{l+1}$. Then $u \in A$. Noting that $g'_j$ is non-decreasing, we know there are $\xi$ and $\eta$, with $0 < \xi < \delta$ and $u^*_{j_{l+1}} - \delta < \eta < u^*_{j_{l+1}}$, such that

$$F(\pi, u) - F(\pi, u^*) = \big[g_{j_{i'}}(u_{j_{i'}}) - g_{j_{i'}}(u^*_{j_{i'}})\big] + \big[g_{j_{l+1}}(u_{j_{l+1}}) - g_{j_{l+1}}(u^*_{j_{l+1}})\big] = \big[g_{j_{i'}}(\delta) - g_{j_{i'}}(0)\big] - \big[g_{j_{l+1}}(u^*_{j_{l+1}}) - g_{j_{l+1}}(u^*_{j_{l+1}} - \delta)\big] = g'_{j_{i'}}(\xi)\delta - g'_{j_{l+1}}(\eta)\delta \le \big[g'_{j_{i'}}(\delta) - g'_{j_{l+1}}(u^*_{j_{l+1}} - \delta)\big]\delta < 0.$$

That is, $F(\pi, u) < F(\pi, u^*)$, which contradicts the optimality of $u^* = (u^*_j)$. So $g'_{j_i}(0) \ge \lambda$ for all $i = 1, 2, \cdots, k$.

If $g'_{j_{i'}}(a_{j_{i'}}) > \lambda$ for some $i' \in \{k+1, k+2, \cdots, l\}$, then $g'_{j_{i'}}(a_{j_{i'}}) - g'_{j_{l+1}}(u^*_{j_{l+1}}) > 0$. Hence there is a $\delta$ with $0 < \delta < \min\{a_{j_{i'}}, a_{j_{l+1}} - u^*_{j_{l+1}}\}$ such that $g'_{j_{i'}}(u^*_{j_{i'}} - \delta) - g'_{j_{l+1}}(u^*_{j_{l+1}} + \delta) > 0$. Let $u_{j_{i'}} = u^*_{j_{i'}} - \delta$, $u_{j_{l+1}} = u^*_{j_{l+1}} + \delta$, and $u_j = u^*_j$ for $j \ne j_{i'}, j_{l+1}$. Then $u \in A$, and following the method above we can easily obtain $F(\pi, u) < F(\pi, u^*)$, which contradicts the optimality of $u^*$. So $g'_{j_i}(a_{j_i}) \le \lambda$ for all $i = k+1, k+2, \cdots, l$.

Finally, it is obvious that $\lambda \le 0$. Assume $\lambda = 0$. Then we can easily see that $u^*_j \ge \hat u_j(\pi)$ for all $j \in J$, hence $U = \|u^*\| \ge \|\hat u(\pi)\|$, which contradicts resource insufficiency. So $\lambda < 0$. The proof of (i) is complete.

Proving (ii). Since $\{j \in J \mid 0 < u^*_j < a_j\} = \emptyset$, $U > 0$ and the resource is insufficient, there is a natural number $k$, with $1 \le k \le n-1$, such that $u^*_{j_i} = 0$ for $i = 1, 2, \cdots, k$ and $u^*_{j_i} = a_{j_i}$ for $i = k+1, k+2, \cdots, n$. If $\min\{g'_{j_i}(0) \mid i = 1, \cdots, k\} < \max\{g'_{j_i}(a_{j_i}) \mid i = k+1, \cdots, n\}$, assume without loss of generality that $g'_{j_1}(0) < g'_{j_{k+1}}(a_{j_{k+1}})$. Then there is a $\delta$ with $0 < \delta < \min\{a_{j_1}, a_{j_{k+1}}\}$ such that $g'_{j_1}(\delta) - g'_{j_{k+1}}(a_{j_{k+1}} - \delta) < 0$. Let $u_{j_1} = \delta$, $u_{j_{k+1}} = u^*_{j_{k+1}} - \delta$, and $u_j = u^*_j$ for $j \ne j_1, j_{k+1}$. Then $F(\pi, u) < F(\pi, u^*)$, which contradicts the optimality of $u^*$. So $\min\{g'_{j_i}(0) \mid i = 1, \cdots, k\} \ge \max\{g'_{j_i}(a_{j_i}) \mid i = k+1, \cdots, n\}$. Let $\lambda = \max\{g'_{j_i}(a_{j_i}) \mid i = k+1, \cdots, n\}$. Then $g'_{j_i}(0) \ge \lambda$ for $i = 1, \cdots, k$ and $g'_{j_i}(a_{j_i}) \le \lambda$ for $i = k+1, \cdots, n$. Note that $u^*_{j_i} = a_{j_i} = \hat u_{j_i}(\pi)$ for $i = k+1, \cdots, n$. Since the resource is insufficient, we have $\lambda < 0$. This completes the proof of (ii).

Lemma 3 Suppose $u^* = (u^*_j)$ and $v^* = (v^*_j)$ are two optimal solutions of (P3.I). Let $\lambda = \max\{g'_j(u^*_j) \mid u^*_j > 0\}$, $\eta = \max\{g'_j(v^*_j) \mid v^*_j > 0\}$, $A = \{j \in J \mid g'_j(u^*_j) = \lambda\}$ and $B = \{j \in J \mid g'_j(v^*_j) = \eta\}$. Then $\lambda = \eta$, $A = B$, and $u^*_j = v^*_j$ for all $j \in (J - A)$.

Proof Assume $g'_{j'}(u^*_{j'}) < \lambda$. Then, by Lemma 2, $u^*_{j'} = a_{j'} > 0$. Suppose, for contradiction, that $v^*_{j'} < u^*_{j'}$. Since $\|u^*\| = \|v^*\| = U$, there is a $j''$ such that $a_{j''} \ge v^*_{j''} > u^*_{j''} \ge 0$. Since $u^*_{j''} < a_{j''}$, also
by Lemma 2, we have $g'_{j''}(u^*_{j''}) \ge \lambda$. Hence $g'_{j''}(v^*_{j''}) \ge g'_{j''}(u^*_{j''}) \ge \lambda > g'_{j'}(u^*_{j'}) \ge g'_{j'}(v^*_{j'})$. Since $g'_{j''}(v^*_{j''}) > g'_{j'}(v^*_{j'})$, there is a $\delta > 0$ such that $g'_{j''}(v^*_{j''} - \delta) > g'_{j'}(v^*_{j'} + \delta)$. Let $v_{j''} = v^*_{j''} - \delta$, $v_{j'} = v^*_{j'} + \delta$, and $v_j = v^*_j$ for $j \ne j', j''$. Then $F(\pi, v) < F(\pi, v^*)$, which contradicts the optimality of $v^*$. So $v^*_{j'} \ge u^*_{j'} = a_{j'}$, and hence $v^*_{j'} = u^*_{j'} = a_{j'}$.

Assume $g'_{j'}(v^*_{j'}) < \eta$. By the same argument, we have $u^*_{j'} = v^*_{j'} = a_{j'}$.

Assume now $\eta < \lambda$. When $u^*_j = 0$, by Lemma 2 we have $g'_j(0) \ge \lambda > \eta$; since $g'_j(0) > \eta$, also by Lemma 2, $v^*_j = 0$. When $u^*_j > 0$ and $g'_j(u^*_j) < \lambda$, by the above conclusion we have $v^*_j = u^*_j$. When $u^*_j > 0$ and $g'_j(u^*_j) = \lambda$: if $v^*_j = 0$, then $v^*_j < u^*_j$; if $v^*_j > 0$, then $g'_j(v^*_j) \le \eta < \lambda = g'_j(u^*_j)$, which gives $v^*_j < u^*_j$. Note that there is at least one $j$ with $u^*_j > 0$ and $g'_j(u^*_j) = \lambda$. We thus get $\|v^*\| < \|u^*\|$, contradicting $\|v^*\| = \|u^*\| = U$. Thus $\eta \ge \lambda$. In the same way, $\lambda \ge \eta$. So $\lambda = \eta$.

Finally, from the above process and Lemma 2, we can easily see the following. If $g'_j(u^*_j) < \lambda$, then $v^*_j = u^*_j = a_j$, so $g'_j(v^*_j) < \eta$. If $g'_j(u^*_j) > \lambda$, then $u^*_j = 0$; since $g'_j(0) > \lambda = \eta$, we have $v^*_j = 0$ and $g'_j(v^*_j) > \eta$. Similarly, if $g'_j(v^*_j) < \eta$, then $u^*_j = v^*_j = a_j$ and $g'_j(u^*_j) < \lambda$; if $g'_j(v^*_j) > \eta$, then $u^*_j = v^*_j = 0$ and $g'_j(u^*_j) > \lambda$. Hence $A = B$ and $u^*_j = v^*_j$ for all $j \in (J - A)$.

In terms of Lemmas 2 and 3, we can easily obtain the following conclusion.

Theorem 1 For (P3.I), there are a real number $\lambda < 0$ and a permutation $j_1, j_2, \cdots, j_k, j_{k+1}, \cdots, j_l, j_{l+1}, \cdots, j_n$ of $J$, with $0 \le k \le l \le n-1$, such that the following two statements hold.

(i) $g'_{j_i}(0) > \lambda$ for $i = 1, 2, \cdots, k$; $g'_{j_i}(a_{j_i}) < \lambda$ for $i = k+1, k+2, \cdots, l$; and $\{u \in [0, a_{j_i}] \mid g'_{j_i}(u) = \lambda\} \ne \emptyset$ for $i = l+1, l+2, \cdots, n$.

(ii) Let $\underline u_{j_i} = \bar u_{j_i} = 0$ for $i = 1, 2, \cdots, k$; $\underline u_{j_i} = \bar u_{j_i} = a_{j_i}$ for $i = k+1, k+2, \cdots, l$; and $\underline u_{j_i} = \inf\{u \in [0, a_{j_i}] \mid g'_{j_i}(u) = \lambda\}$, $\bar u_{j_i} = \sup\{u \in [0, a_{j_i}] \mid g'_{j_i}(u) = \lambda\}$ for $i = l+1, l+2, \cdots, n$. Then $u^*$ is an optimal solution if and only if it satisfies

$$\underline u_j \le u^*_j \le \bar u_j \;\text{ for all } j \in J; \qquad \|u^*\| = U. \tag{2.6}$$

In addition, we also have $\|\underline u\| \le U \le \|\bar u\|$.

3 Algorithms

This section designs and analyses algorithms to solve (P3). Obviously, for Algorithm 1, if the output is $\emptyset$, then (P3) is resource insufficient; otherwise (P3) is resource sufficient. Conversely, for (P3.S) the output must be $\hat u$, which is exactly the unique optimal solution; for (P3.I) the output is $\emptyset$. To solve (P3.I), we establish the approximation Algorithm 2 below.

Lemma 4 For (P3.I), the following statements hold. (i) There is at least one $j \in J$ such that $g'_j(0) < 0$. (ii) Let $M = \sup\{|g'_j(u)| \mid j \in J, u \in [0, a_j]\}$; then $0 < M < +\infty$. (iii) For every $\varepsilon > 0$, let $\delta = \frac{\varepsilon}{nM}$. Then, whenever $|u_j - u'_j| < \delta$ for all $j \in J$, we have $|F(\pi, u) - F(\pi, u')| < \varepsilon$.

Algorithm 1: the algorithm to solve (P3)
Input: an instance of (P3). Output: $\hat u = (\hat u_j)$ or $\emptyset$.
1 Set $j := 1$.
2 If $g'_j(0) \ge 0$, then set $\hat u_j := 0$;
3 else
4  set $u := \sup\{u \in [0, a_j] : g'_j(u) < 0\}$;
5  if $u \le U$, then set $\hat u_j := u$ and $U := U - u$;
6  else output $\emptyset$ and stop.
7 Set $j := j + 1$. If $j \le n$, return to Line 2; otherwise go to Line 8.
8 Set $\hat u = (\hat u_j)$. Output $\hat u$. Stop.
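Algorithm 1 above can be transcribed directly. The sketch below again uses the hypothetical family $f_j(u) = b_j e^{-u}$ (so that $g'_j(0) = 1 - b_j \sum_{i \ge j} w_i$ and $\sup\{u \in [0, a_j] : g'_j(u) < 0\} = \min\{a_j, \ln(b_j \sum_{i \ge j} w_i)\}$), with made-up data:

```python
import math

def algorithm1(w, b, a, U):
    """Algorithm 1 for the hypothetical family f_j(u) = b_j * exp(-u), pi = identity.
    Returns u_hat if the instance is resource sufficient, else None (the paper's empty-set output)."""
    n = len(w)
    u_hat, remaining = [], U
    for j in range(n):
        W = sum(w[j:])                         # tail weight sum_{i=j}^{n} w_i
        if 1.0 - b[j] * W >= 0.0:              # g_j'(0) >= 0
            u_hat.append(0.0)
        else:
            u = min(a[j], math.log(b[j] * W))  # sup{u in [0, a_j] : g_j'(u) < 0}
            if u <= remaining:
                u_hat.append(u)
                remaining -= u
            else:
                return None                    # resource insufficient
    return u_hat

print(algorithm1([3.0, 2.0, 1.0], [2.0, 1.5, 1.0], [4.0] * 3, 10.0))  # u_hat for a sufficient instance
print(algorithm1([3.0, 2.0, 1.0], [2.0, 1.5, 1.0], [4.0] * 3, 2.0))   # prints None (insufficient)
```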
Proof In fact, if $g'_j(0) \ge 0$ for all $j \in J$, then $\hat u_j(\pi) = 0$ for all $j$ and (P3) cannot be resource insufficient. This implies that (i) holds. Noting (i), that $g'_j$ is continuous, and that $a_j < +\infty$ for all $j \in J$, we immediately see that (ii) holds. For (iii), by the mean value theorem we have

$$|F(\pi, u) - F(\pi, u')| \le \sum_{j=1}^{n} |g_j(u_j) - g_j(u'_j)| = \sum_{j=1}^{n} |g'_j(\xi_j)|\,|u_j - u'_j| \le \sum_{j=1}^{n} M |u_j - u'_j| < nM\delta = \varepsilon.$$

This shows that (iii) holds.

Remark 2 (i) So that the operation of the algorithms and the analysis of their complexity can proceed effectively, in Algorithms 1 and 2 we make the following assumptions: (a) all the numbers involved are rational; (b) the value of any function involved can be obtained at any feasible point, and each function evaluation is counted as one operation. (ii) Since $f_j(0) > 0$ for all $j \in J$, we have $A = \mathrm{OPT}(\text{(P4.I)}) > 0$. (iii) In Step 5 of Algorithm 2, $u_j(i)$ denotes the $i$th updated value of $u_j$.

Algorithm 2: the algorithm to solve (P3.I)
Input: an instance of (P3.I) and an error parameter $\varepsilon$. Output: $u = (u_j)$.
1 Set $A := \mathrm{OPT}(\text{(P4.I)})$ and $\delta := \frac{\varepsilon A}{n^2 M}$.
2 Set $u_j := 0$, $k_j := 0$ and $u_j(k_j) := 0$ for all $j \in J$.
3 Calculate $\Delta g_j := g_j(\min\{u_j + \delta, a_j\}) - g_j(u_j)$ for all $j \in J$.
4 Find $j' \in J$ such that $\Delta g_{j'} = \min\{\Delta g_j \mid j \in J\}$.
5 Set $\Delta := \min\{\delta, a_{j'} - u_{j'}, U\}$, $u_{j'} := u_{j'} + \Delta$ and $U := U - \Delta$.
6 Set $k_{j'} := k_{j'} + 1$ and $u_{j'}(k_{j'}) := u_{j'}$.
7 If $U > 0$, return to Line 3; otherwise go to Line 8.
8 Set $u = (u_j)$. Output $u$. Stop.

Theorem 2 The complexity of Algorithm 2 is $O\big(n^2\big(\frac{MU}{A\varepsilon} + 2\big)\big)$.

Proof By checking the process of Algorithm 2, we can obtain the following three observations. (i) The total number of operations in Steps 1, 2 and 7 is finite. (ii) Steps 3, 4, 5 and 6 form a circulation process, and the number of operations in each circulation is finite. (iii) The total number of circulations is no more than $\frac{n^2 MU}{A\varepsilon} + n + 1$. In fact, the number of circulations equals the number of executions of Step 5, and each execution of Step 5 either increases some $u_j$ by $\delta = \frac{\varepsilon A}{n^2 M}$ (which can happen at most $U/\delta = \frac{n^2 MU}{A\varepsilon}$ times), or raises some $u_j$ to its upper bound $a_j$ (at most $n$ times), or exhausts the remaining resource (once); moreover, $\frac{n^2 MU}{A\varepsilon} + n + 1 \le n^2\big(\frac{MU}{A\varepsilon} + 2\big)$. Combining the three observations, we can easily see that Theorem 2 holds. The proof completes.

Remark 3 (i) Theorem 2 shows that Algorithm 2 is a pseudo-polynomial-time algorithm (see Du et al. 2011; Korte and Vygen 2000). (ii) Let $\mathcal C$ be the class of instances of (P3.I) satisfying $\frac{MU}{A} \le C$ for some constant $C$. Then, for the class $\mathcal C$, the complexity of Algorithm 2 is $O(n^2 \frac{1}{\varepsilon})$; that is, Algorithm 2 is a polynomial-time algorithm for any given $\varepsilon$.

Lemma 5 For (P3.I), let the permutation $j_1, j_2, \cdots, j_k, j_{k+1}, \cdots, j_l, j_{l+1}, \cdots, j_n$ and the real numbers $\underline u_{j_i}$ and $\bar u_{j_i}$ be as in Theorem 1, and let $u$ be the output of Algorithm 2. Then, for any $j'$ and $j''$, the following two statements hold. (i) $u_{j'} \ge \bar u_{j'} + \delta$ and $u_{j''} \le \bar u_{j''} - \delta$ cannot hold at the same time. (ii) $u_{j'} \le \underline u_{j'} - \delta$ and $u_{j''} \ge \underline u_{j''} + \delta$ cannot hold at the same time. Here $\delta$ is as defined in Algorithm 2.

Proof Proving (i). Arguing by contradiction, assume $u_{j'} \ge \bar u_{j'} + \delta$ and $u_{j''} \le \bar u_{j''} - \delta$ for some $j'$ and $j''$. Then, from Theorem 1, $j' \in \{j_1, \cdots, j_k\} \cup \{j_{l+1}, \cdots, j_n\}$ and $j'' \in \{j_{k+1}, \cdots, j_n\}$. Hence, for the $\lambda$ of Theorem 1, we have $g'_{j'}(\bar u_{j'}) \ge \lambda$ and $g'_{j''}(\bar u_{j''}) \le \lambda$. By the mean value theorem, there exist $\xi$ and $\eta$, with $u_{j'}(k_{j'}-1) < \xi < \min\{u_{j'}(k_{j'}-1) + \delta, a_{j'}\}$ and $u_{j''}(i) < \eta < u_{j''}(i) + \delta$ for $i = 0, 1, \cdots, k_{j''}$, such that
$$\Delta g_{j'}\big|_{u_{j'}(k_{j'}-1)} = g_{j'}\big(\min\{u_{j'}(k_{j'}-1) + \delta, a_{j'}\}\big) - g_{j'}\big(u_{j'}(k_{j'}-1)\big) = g'_{j'}(\xi)\min\{\delta, a_{j'} - u_{j'}(k_{j'}-1)\} > \lambda\delta;$$

$$\Delta g_{j''}\big|_{u_{j''}(i)} = g_{j''}\big(u_{j''}(i) + \delta\big) - g_{j''}\big(u_{j''}(i)\big) = g'_{j''}(\eta)\delta \le \lambda\delta,$$

and hence $\Delta g_{j'}\big|_{u_{j'}(k_{j'}-1)} > \Delta g_{j''}\big|_{u_{j''}(i)}$ for $i = 0, 1, \cdots, k_{j''}$. According to the operation mechanism of Algorithm 2 (Step 4 always selects a job with minimal $\Delta g_j$), this implies $u_{j'} \le u_{j'}(k_{j'}-1)$, which contradicts $u_{j'} = u_{j'}(k_{j'})$. So (i) holds.

Proving (ii). Assume $u_{j'} \le \underline u_{j'} - \delta$ and $u_{j''} \ge \underline u_{j''} + \delta$ for some $j'$ and $j''$. Then $j' \in \{j_{k+1}, \cdots, j_n\}$ and $j'' \in \{j_1, \cdots, j_k\} \cup \{j_{l+1}, \cdots, j_n\}$. Hence $g'_{j'}(\underline u_{j'}) \le \lambda$ and $g'_{j''}(\underline u_{j''}) \ge \lambda$. Also by the mean value theorem, there exist $\xi$ and $\eta$, with $u_{j'}(i) < \xi < u_{j'}(i) + \delta$ for $i = 0, 1, \cdots, k_{j'}$ and $u_{j''}(k_{j''}-1) < \eta < \min\{u_{j''}(k_{j''}-1) + \delta, a_{j''}\}$, such that

$$\Delta g_{j'}\big|_{u_{j'}(i)} = g_{j'}\big(u_{j'}(i) + \delta\big) - g_{j'}\big(u_{j'}(i)\big) = g'_{j'}(\xi)\delta < \lambda\delta;$$

$$\Delta g_{j''}\big|_{u_{j''}(k_{j''}-1)} = g_{j''}\big(\min\{u_{j''}(k_{j''}-1) + \delta, a_{j''}\}\big) - g_{j''}\big(u_{j''}(k_{j''}-1)\big) = g'_{j''}(\eta)\min\{\delta, a_{j''} - u_{j''}(k_{j''}-1)\} \ge \lambda\delta,$$

and hence $\Delta g_{j'}\big|_{u_{j'}(i)} < \Delta g_{j''}\big|_{u_{j''}(k_{j''}-1)}$ for $i = 0, 1, \cdots, k_{j'}$. According to the operation mechanism of Algorithm 2, this implies $u_{j''} \le u_{j''}(k_{j''}-1)$, which contradicts $u_{j''} = u_{j''}(k_{j''})$. So (ii) holds too. The proof completes.

Lemma 6 For (P3.I), let $\underline u_j$ and $\bar u_j$ be as in Theorem 1. Let also $u \in A$ with $\|u\| = U$, $\sigma = \sum_{u_j < \underline u_j} (\underline u_j - u_j)$ and $\sigma' = \sum_{u_j > \bar u_j} (u_j - \bar u_j)$. Then there exists an optimal solution $\tilde u$ such that $|u_j - \tilde u_j| \le \max\{\sigma, \sigma'\}$ for all $j \in J$.

Proof Assume $\sigma < \sigma'$. We can first choose $\delta_j \in [0, \sigma']$, $j \in J$, such that $u'_j = u_j + \delta_j = \underline u_j$ for $u_j < \underline u_j$, $u'_j = u_j$ for $\underline u_j \le u_j \le \bar u_j$, and $u'_j = u_j - \delta_j \ge \bar u_j$ for $u_j > \bar u_j$, with $|u'_j - u_j| \le \sigma'$ for all $j$ and $\|u'\| = U$; raising the low coordinates costs $\sigma$ in total, so it suffices to remove $\sigma$ of the total excess $\sigma'$. After this step, $\sigma' - \sigma = \sum_{u'_j > \bar u_j} (u'_j - \bar u_j)$. Since $U \le \|\bar u\|$, we have $\sum_{\underline u_j \le u'_j \le \bar u_j} (\bar u_j - u'_j) \ge \sigma' - \sigma$, so we can further choose $\delta'_j \in [0, \sigma' - \sigma]$, $j \in J$, such that $\tilde u_j = u'_j - \delta'_j = \bar u_j$ for $u'_j > \bar u_j$ and $\tilde u_j = u'_j + \delta'_j \le \bar u_j$ for $\underline u_j \le u'_j \le \bar u_j$, with $\|\tilde u\| = U$ and $|\tilde u_j - u'_j| \le \sigma' - \sigma$, which implies $|\tilde u_j - u_j| \le \sigma'$ for all $j$.

Assume $\sigma \ge \sigma'$. Symmetrically, we can first choose $\delta_j \in [0, \sigma]$ such that $u'_j = u_j + \delta_j \le \underline u_j$ for $u_j < \underline u_j$, $u'_j = u_j$ for $\underline u_j \le u_j \le \bar u_j$, and $u'_j = u_j - \delta_j = \bar u_j$ for $u_j > \bar u_j$, with $|u'_j - u_j| \le \sigma$ for all $j$ and $\|u'\| = U$; then $\sigma - \sigma' = \sum_{u'_j < \underline u_j} (\underline u_j - u'_j)$, and since $\|\underline u\| \le U$ we have $\sum_{\underline u_j \le u'_j \le \bar u_j} (u'_j - \underline u_j) \ge \sigma - \sigma'$, so we can further choose $\delta'_j \in [0, \sigma - \sigma']$ such that $\tilde u_j = u'_j + \delta'_j = \underline u_j$ for $u'_j < \underline u_j$ and $\tilde u_j = u'_j - \delta'_j \ge \underline u_j$ for $\underline u_j \le u'_j \le \bar u_j$, with $\|\tilde u\| = U$ and $|\tilde u_j - u_j| \le \sigma$ for all $j$.

In both cases $\underline u_j \le \tilde u_j \le \bar u_j$ for all $j \in J$ and $\|\tilde u\| = U$, so by Theorem 1 $\tilde u$ is an optimal solution, and $|u_j - \tilde u_j| \le \max\{\sigma, \sigma'\}$ for all $j \in J$. Hence Lemma 6 holds.
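The greedy increment loop of Algorithm 2 (Steps 3 to 6) can be transcribed as the following sketch. It takes the functions $g_j$ as black boxes and uses the hypothetical exponential family of the earlier examples with $U = 2$ (resource insufficient there, since $\|\hat u\| \approx 3.99$); for simplicity, $\delta$ is passed in directly rather than computed from $A$ and $M$:

```python
import math

def algorithm2(g, a, U, delta):
    """Steps 3-6 of Algorithm 2: repeatedly grant at most delta resource to the
    job whose g_j decreases the most over the next delta-step."""
    n = len(g)
    u, remaining = [0.0] * n, U
    while remaining > 1e-12:
        # Steps 3-4: marginal change of each g_j over a step of (at most) delta.
        gains = [g[j](min(u[j] + delta, a[j])) - g[j](u[j]) for j in range(n)]
        jstar = min(range(n), key=lambda j: gains[j])
        # Step 5: increment, capped by the box [0, a_j] and the remaining resource.
        step = min(delta, a[jstar] - u[jstar], remaining)
        if step <= 0:   # selected job already at its bound; no further progress
            break
        u[jstar] += step
        remaining -= step
    return u

w, b = [3.0, 2.0, 1.0], [2.0, 1.5, 1.0]
W = [sum(w[j:]) for j in range(3)]
g = [lambda u, j=j: W[j] * b[j] * math.exp(-u) + u for j in range(3)]  # g_j(u) = W_j b_j e^{-u} + u
u = algorithm2(g, [4.0, 4.0, 4.0], 2.0, 0.01)
print(u)  # close to the allocation equalizing g_j'(u_j), roughly [1.49, 0.51, 0.0]
```

The exact optimum here equalizes $g'_j(u_j) = \lambda$ over the jobs that receive resource; the greedy output deviates from it by at most about $n\delta$, in line with Theorem 3 below.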
Theorem 3 For (P3.I), let the permutation $j_1, j_2, \cdots, j_k, j_{k+1}, \cdots, j_l, j_{l+1}, \cdots, j_n$ and the real numbers $\underline u_{j_i}$ and $\bar u_{j_i}$ be as in Theorem 1, and let $u$ be the output of Algorithm 2. Then the following two statements hold.

(i) $u \in A$, $\|u\| = U$, and $0 \le u_{j_i} < n\delta$ for $i = 1, 2, \cdots, k$; $a_{j_i} - n\delta < u_{j_i} \le a_{j_i}$ for $i = k+1, k+2, \cdots, l$; $\underline u_{j_i} - n\delta < u_{j_i} < \bar u_{j_i} + n\delta$ for $i = l+1, l+2, \cdots, n$.

(ii) $F(\pi, u) \le (1 + \varepsilon)A$.

Here $\delta$ and $A$ are as defined in Algorithm 2.

Proof Proving (i). From Steps 5 and 7, we can easily see that $u \in A$ and $\|u\| = U$.

Assume $u_{j'} \ge n\delta \ge \bar u_{j'} + \delta$ for some $j' \in \{j_1, \cdots, j_k\}$ (recall $\bar u_{j'} = 0$ for such $j'$). Then there is a $j'' (\ne j')$ such that $u_{j''} \le \bar u_{j''} - \delta$. (In fact, if $u_{j''} > \bar u_{j''} - \delta$ for all $j'' \ne j'$, then $\|u\| = u_{j'} + \sum_{j'' \ne j'} u_{j''} > \bar u_{j'} + n\delta + \sum_{j'' \ne j'} (\bar u_{j''} - \delta) > \|\bar u\| \ge U$, a contradiction.) But $u_{j'} \ge \bar u_{j'} + \delta$ and $u_{j''} \le \bar u_{j''} - \delta$ together violate (i) of Lemma 5. Hence $u_{j_i} < n\delta$ for all $i = 1, 2, \cdots, k$.

Assume $u_{j'} \le a_{j'} - n\delta \le \underline u_{j'} - \delta$ for some $j' \in \{j_{k+1}, \cdots, j_l\}$ (recall $\underline u_{j'} = a_{j'}$ for such $j'$). Then, since $\|u\| = U \ge \|\underline u\|$, there is a $j'' (\ne j')$ such that $u_{j''} \ge \underline u_{j''} + \delta$; this violates (ii) of Lemma 5. Hence $u_{j_i} > a_{j_i} - n\delta$ for all $i = k+1, \cdots, l$.

Assume $u_{j'} \le \underline u_{j'} - n\delta \le \underline u_{j'} - \delta$ for some $j' \in \{j_{l+1}, \cdots, j_n\}$. Then, as above, there is a $j''$ such that $u_{j''} \ge \underline u_{j''} + \delta$, violating (ii) of Lemma 5. So $u_{j_i} > \underline u_{j_i} - n\delta$ for all $i = l+1, \cdots, n$. Assume $u_{j'} \ge \bar u_{j'} + n\delta \ge \bar u_{j'} + \delta$ for some $j' \in \{j_{l+1}, \cdots, j_n\}$. Then there is a $j''$ such that $u_{j''} \le \bar u_{j''} - \delta$, violating (i) of Lemma 5. So $u_{j_i} < \bar u_{j_i} + n\delta$ for all $i = l+1, \cdots, n$. Combining the above results, we know that (i) is true.

Proving (ii). Let $C = \{j \in J \mid u_j \le \underline u_j - \delta\}$, $D = \{j \in J \mid u_j \ge \bar u_j + \delta\}$ and $E = \{j \in J \mid \underline u_j - \delta < u_j < \underline u_j \text{ or } \bar u_j < u_j < \bar u_j + \delta\}$. If $C \cup D \cup E = \emptyset$, then $\underline u_j \le u_j \le \bar u_j$ for all $j$ and, by Theorem 1, $u$ itself is optimal, so (ii) is obviously true. Otherwise, we complete the proof as follows.

If $C \ne \emptyset$, there exists a $j'$ such that $u_{j'} \le \underline u_{j'} - \delta$. By (ii) of Lemma 5, $u_j < \underline u_j + \delta$ for all $j \in J$. This leads to $\sigma' \le \sum_{u_j > \underline u_j} (u_j - \underline u_j) < n\delta$; and since $\|u\| = U \ge \|\underline u\|$, also $\sigma = \sum_{u_j < \underline u_j} (\underline u_j - u_j) \le \sum_{u_j > \underline u_j} (u_j - \underline u_j) < n\delta$. So, by Lemma 6, there is an optimal solution $\tilde u$ such that $|u_j - \tilde u_j| < n\delta$ for all $j$.

If $D \ne \emptyset$, there exists a $j'$ such that $u_{j'} \ge \bar u_{j'} + \delta$. By (i) of Lemma 5, $u_j > \bar u_j - \delta$ for all $j$. This leads to $\sigma \le \sum_{u_j < \bar u_j} (\bar u_j - u_j) < n\delta$; and since $\|u\| = U \le \|\bar u\|$, also $\sigma' = \sum_{u_j > \bar u_j} (u_j - \bar u_j) \le \sum_{u_j < \bar u_j} (\bar u_j - u_j) < n\delta$. So again, by Lemma 6, there is an optimal solution $\tilde u$ with $|u_j - \tilde u_j| < n\delta$ for all $j$.

If $C = D = \emptyset$ and $E \ne \emptyset$, then every term of $\sigma$ and of $\sigma'$ is smaller than $\delta$, so $\sigma < n\delta$ and $\sigma' < n\delta$, and once more, by Lemma 6, there is an optimal solution $\tilde u$ with $|u_j - \tilde u_j| < n\delta$ for all $j$.

Finally, since $\tilde u$ is an optimal solution and $|u_j - \tilde u_j| < n\delta$ for all $j$, by (iii) of Lemma 4 (applied with $\varepsilon A$ in place of $\varepsilon$; note $n\delta = \frac{\varepsilon A}{nM}$) we have $|F(\pi, u) - A| = |F(\pi, u) - F(\pi, \tilde u)| < \varepsilon A$, and therefore $F(\pi, u) < (1 + \varepsilon)A$.

Remark 4 (i) Let $u$ be the output of Algorithm 2 and $u^*$ be an optimal solution of (P3.I). Then, from (i) of Theorem 3, we can easily see that $|u_j - u^*_j| < n\delta$ for all $j$. So $n\delta$ is actually an error bound between $u$ and $u^*$. (ii) Conclusion (ii) of Theorem 3 shows that, for a given error parameter $\varepsilon$, Algorithm 2 is a $(1+\varepsilon)$-approximation algorithm. It also shows that, for the class of problems $\mathcal C$ in (ii) of Remark 3, Algorithm 2 is an FPTAS (fully polynomial-time approximation scheme) (see Du et al. 2011; Korte and Vygen 2000).

4 Scheduling

In this section we propose a rule to find the optimal schedule of certain instances of (P1). For (P1), let $f_j(0) = b_j$ for all $j$, and let $j', j'' \in J$. We write $j' = j''\,(\mathrm{PW})$ to denote $\frac{b_{j'}}{w_{j'}} = \frac{b_{j''}}{w_{j''}}$ and $w_{j'} = w_{j''}$ (equivalently, $b_{j'} = b_{j''}$ and $w_{j'} = w_{j''}$); $j' \le j''\,(\mathrm{PW})$ to denote $\frac{b_{j'}}{w_{j'}} \le \frac{b_{j''}}{w_{j''}}$ and $w_{j'} \le w_{j''}$; and $j' < j''\,(\mathrm{PW})$ to denote that $j' \le j''\,(\mathrm{PW})$ holds and, in addition, $\frac{b_{j'}}{w_{j'}} < \frac{b_{j''}}{w_{j''}}$ or $w_{j'} < w_{j''}$ holds. We say $j'$ and $j''$ are in the PW relation if $j' \le j''\,(\mathrm{PW})$ or $j'' \le j'\,(\mathrm{PW})$. We say (P1) has the PW property if $j'$ and $j''$ are in the PW relation for any $j', j'' \in J$.
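The PW relation just defined can be checked, and a PW-sorted permutation produced, by an ordinary componentwise sort on the pair $(b_j/w_j, w_j)$: under the PW property, sorting by $b_j/w_j$ with ties broken by $w_j$ yields a permutation $\lambda$ with $\lambda(k) \le \lambda(k+1)\,(\mathrm{PW})$ for all $k$. The data below are made up:

```python
def has_pw_property(b, w):
    """Every pair of jobs must be PW-comparable: b_j/w_j and w_j order the same way."""
    n = len(b)
    for i in range(n):
        for j in range(n):
            le_ij = b[i] / w[i] <= b[j] / w[j] and w[i] <= w[j]
            le_ji = b[j] / w[j] <= b[i] / w[i] and w[j] <= w[i]
            if not (le_ij or le_ji):
                return False
    return True

def pw_order(b, w):
    """Permutation lambda with lambda(k) <= lambda(k+1) (PW), assuming the PW property holds."""
    return sorted(range(len(b)), key=lambda j: (b[j] / w[j], w[j]))

b, w = [2.0, 6.0, 1.0], [2.0, 3.0, 2.0]       # ratios 1.0, 2.0, 0.5; weights aligned with ratios
print(has_pw_property(b, w), pw_order(b, w))   # prints: True [2, 0, 1]
print(has_pw_property([2.0, 3.0], [4.0, 2.0])) # prints: False (ratios and weights disagree)
```

If two adjacent elements of the sorted order had a strictly smaller ratio but a larger weight, they would not be PW-comparable, so under the PW property the sort indeed produces a PW chain.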
Lemma 7 For (P3) with $f_j(u) = b_j + f(u)$ and $a_j = a$ for all $j$, and with the PW property, let $\lambda$ be the permutation that satisfies $\lambda(k) \le \lambda(k+1)\,(\mathrm{PW})$ for all $k = 1, 2, \cdots, n-1$. Then $(\lambda, \hat u(\lambda))$ is an optimal schedule. Moreover, if $\lambda(k) < \lambda(k+1)\,(\mathrm{PW})$ for all $k = 1, 2, \cdots, n-1$, then $(\lambda, \hat u(\lambda))$ is the unique optimal schedule.

Proof For simplicity, below we abbreviate $\hat u(\lambda)$ and $\hat u_{\lambda(i)}(\lambda)$ as $u(\lambda)$ and $u_{\lambda(i)}$, respectively, and similarly for other permutations. Assume $\pi = [\cdots, \pi(k), \pi(k+1), \cdots]$ and $\sigma = [\cdots, \pi(k+1), \pi(k), \cdots]$ with $\pi(k) < \pi(k+1)\,(\mathrm{PW})$ $(1 \le k \le n-1)$; that is, $\sigma$ is obtained from $\pi$ by exchanging the jobs in positions $k$ and $k+1$. Note that $\pi(l) = \sigma(l)$ for $l \ne k, k+1$, $\pi(k) = \sigma(k+1)$ and $\pi(k+1) = \sigma(k)$. Hence $\sum_{i=l}^{n} w_{\pi(i)} = \sum_{i=l}^{n} w_{\sigma(i)}$ for all $l \ne k+1$, and, by (1.1), $u_{\pi(l)} = u_{\sigma(l)}$ for $l \ne k+1$ (in particular $u_{\pi(k)} = u_{\sigma(k)}$, since all jobs share the same $f'$ and the tail weights at position $k$ coincide). Hence

$$F(\sigma, u(\sigma)) - F(\pi, u(\pi)) = \sum_{l=1}^{n} \Big[\Big(\sum_{i=l}^{n} w_{\sigma(i)}\Big) f_{\sigma(l)}(u_{\sigma(l)}) + u_{\sigma(l)}\Big] - \sum_{l=1}^{n} \Big[\Big(\sum_{i=l}^{n} w_{\pi(i)}\Big) f_{\pi(l)}(u_{\pi(l)}) + u_{\pi(l)}\Big]$$

$$= \Big[\Big(\sum_{i=k}^{n} w_{\sigma(i)}\Big) f_{\sigma(k)}(u_{\sigma(k)}) + u_{\sigma(k)}\Big] + \Big[\Big(\sum_{i=k+1}^{n} w_{\sigma(i)}\Big) f_{\sigma(k+1)}(u_{\sigma(k+1)}) + u_{\sigma(k+1)}\Big] - \Big[\Big(\sum_{i=k}^{n} w_{\pi(i)}\Big) f_{\pi(k)}(u_{\pi(k)}) + u_{\pi(k)}\Big] - \Big[\Big(\sum_{i=k+1}^{n} w_{\pi(i)}\Big) f_{\pi(k+1)}(u_{\pi(k+1)}) + u_{\pi(k+1)}\Big]$$

$$= \Big[\Big(\sum_{i=k}^{n} w_{\sigma(i)}\Big)\big(b_{\sigma(k)} + f(u_{\sigma(k)})\big) + u_{\sigma(k)}\Big] + \Big[\Big(\sum_{i=k+1}^{n} w_{\sigma(i)}\Big)\big(b_{\sigma(k+1)} + f(u_{\sigma(k+1)})\big) + u_{\sigma(k+1)}\Big] - \Big[\Big(\sum_{i=k}^{n} w_{\pi(i)}\Big)\big(b_{\pi(k)} + f(u_{\pi(k)})\big) + u_{\pi(k)}\Big] - \Big[\Big(\sum_{i=k+1}^{n} w_{\pi(i)}\Big)\big(b_{\pi(k+1)} + f(u_{\pi(k+1)})\big) + u_{\pi(k+1)}\Big]. \tag{4.1}$$

Furthermore, collecting the $b$-terms (the common tail $\sum_{i=k+2}^{n} w_{\pi(i)}$ multiplies $b_{\pi(k)} + b_{\pi(k+1)}$ in both schedules and cancels) and using $u_{\sigma(k)} = u_{\pi(k)}$, $\sigma(k) = \pi(k+1)$ and $\sigma(k+1) = \pi(k)$, we have

$$F(\sigma, u(\sigma)) - F(\pi, u(\pi)) = \big[w_{\sigma(k+1)} b_{\sigma(k)} - w_{\pi(k+1)} b_{\pi(k)}\big] + \Big[\Big(\sum_{i=k+1}^{n} w_{\sigma(i)}\Big) f(u_{\sigma(k+1)}) - \Big(\sum_{i=k+1}^{n} w_{\pi(i)}\Big) f(u_{\pi(k+1)})\Big] + \big[u_{\sigma(k+1)} - u_{\pi(k+1)}\big]. \tag{4.2}$$

Since $\frac{b_{\pi(k)}}{w_{\pi(k)}} \le \frac{b_{\pi(k+1)}}{w_{\pi(k+1)}}$, we have

$$w_{\sigma(k+1)} b_{\sigma(k)} - w_{\pi(k+1)} b_{\pi(k)} = w_{\pi(k)} b_{\pi(k+1)} - w_{\pi(k+1)} b_{\pi(k)} = w_{\pi(k+1)} w_{\pi(k)} \Big(\frac{b_{\pi(k+1)}}{w_{\pi(k+1)}} - \frac{b_{\pi(k)}}{w_{\pi(k)}}\Big) \ge 0, \tag{4.3}$$

and

$$w_{\pi(k)} b_{\pi(k+1)} - w_{\pi(k+1)} b_{\pi(k)} = w_{\pi(k+1)} w_{\pi(k)} \Big(\frac{b_{\pi(k+1)}}{w_{\pi(k+1)}} - \frac{b_{\pi(k)}}{w_{\pi(k)}}\Big) > 0 \tag{4.4}$$

if $\frac{b_{\pi(k+1)}}{w_{\pi(k+1)}} > \frac{b_{\pi(k)}}{w_{\pi(k)}}$.
Since $w_{\pi(k+1)} \ge w_{\pi(k)} = w_{\sigma(k+1)}$, we have $\sum_{i=k+1}^{n} w_{\pi(i)} \ge \sum_{i=k+1}^{n} w_{\sigma(i)}$. Note that $f$ is non-increasing with $f(0) = 0$ (since $f_j(0) = b_j$), so $f(u) \le 0$ for $u \ge 0$. From (1.1), we have

$$u_{\pi(k+1)} = u_{\sigma(k+1)}, \quad \text{if } w_{\pi(k+1)} = w_{\pi(k)} = w_{\sigma(k+1)}; \tag{4.5}$$

$$u_{\pi(k+1)} > u_{\sigma(k+1)}, \quad \text{if } w_{\pi(k+1)} > w_{\pi(k)} = w_{\sigma(k+1)}. \tag{4.6}$$

If $w_{\pi(k+1)} = w_{\pi(k)}$, then from (4.5) we have

$$\Big(\sum_{i=k+1}^{n} w_{\sigma(i)}\Big) f(u_{\sigma(k+1)}) - \Big(\sum_{i=k+1}^{n} w_{\pi(i)}\Big) f(u_{\pi(k+1)}) + \big[u_{\sigma(k+1)} - u_{\pi(k+1)}\big] = 0. \tag{4.7}$$

If $w_{\pi(k+1)} > w_{\pi(k)}$, then, noting (4.6) and that $\big(\sum_{i=k+1}^{n} w_{\pi(i)}\big) f'(\xi) + 1 < 0$ for $\xi < u_{\pi(k+1)}$ by (1.1), we have

$$\Big(\sum_{i=k+1}^{n} w_{\sigma(i)}\Big) f(u_{\sigma(k+1)}) - \Big(\sum_{i=k+1}^{n} w_{\pi(i)}\Big) f(u_{\pi(k+1)}) + \big[u_{\sigma(k+1)} - u_{\pi(k+1)}\big]$$

$$= \Big(\sum_{i=k+1}^{n} w_{\pi(i)}\Big)\big[f(u_{\sigma(k+1)}) - f(u_{\pi(k+1)})\big] + \big[u_{\sigma(k+1)} - u_{\pi(k+1)}\big] + \big[w_{\sigma(k+1)} - w_{\pi(k+1)}\big] f(u_{\sigma(k+1)})$$

$$= -\Big[\Big(\sum_{i=k+1}^{n} w_{\pi(i)}\Big) f'(\xi) + 1\Big]\big[u_{\pi(k+1)} - u_{\sigma(k+1)}\big] + f(u_{\sigma(k+1)})\big[w_{\sigma(k+1)} - w_{\pi(k+1)}\big] > 0 \quad (u_{\sigma(k+1)} < \xi < u_{\pi(k+1)}), \tag{4.8}$$

where the first term is positive by (4.6), and the second term is nonnegative since $f(u_{\sigma(k+1)}) \le 0$ and $w_{\sigma(k+1)} < w_{\pi(k+1)}$.

Combining (4.7) and (4.8), we have

$$\Big(\sum_{i=k+1}^{n} w_{\sigma(i)}\Big) f(u_{\sigma(k+1)}) - \Big(\sum_{i=k+1}^{n} w_{\pi(i)}\Big) f(u_{\pi(k+1)}) + \big[u_{\sigma(k+1)} - u_{\pi(k+1)}\big] \ge 0. \tag{4.9}$$

In case $\frac{b_{\pi(k)}}{w_{\pi(k)}} < \frac{b_{\pi(k+1)}}{w_{\pi(k+1)}}$, combining (4.2), (4.4) and (4.9), we have

$$F(\pi, u(\pi)) < F(\sigma, u(\sigma)). \tag{4.10}$$

In case $w_{\pi(k+1)} > w_{\pi(k)} = w_{\sigma(k+1)}$, combining (4.2), (4.3) and (4.8), we again have

$$F(\pi, u(\pi)) < F(\sigma, u(\sigma)). \tag{4.11}$$

Note (4.10), (4.11), and that the computation above also shows that exchanging two adjacent jobs with $\pi(k) = \pi(k+1)\,(\mathrm{PW})$ leaves the objective unchanged, so that $F(\lambda, u(\lambda)) = F(\pi, u(\pi))$ in that case. Since (P3) has the PW property, by the exchange argument of Theorem 1 of Lawler (1983) we can easily see: for every $\pi \in P$, either $F(\lambda, u(\lambda)) < F(\pi, u(\pi))$, or $F(\lambda, u(\lambda)) = F(\pi, u(\pi))$ and $\|u(\lambda)\| \le \|u(\pi)\|$. Further, by Propositions 2 and 3, for every $(\pi, u) \in S$ we have either $F(\lambda, u(\lambda)) < F(\pi, u)$, or $F(\lambda, u(\lambda)) = F(\pi, u)$ and $\|u(\lambda)\| \le \|u\|$. Hence $(\lambda, u(\lambda))$ is an optimal schedule. Finally, it is obvious that $(\lambda, u(\lambda))$ is the unique optimal schedule if $\lambda(k) < \lambda(k+1)\,(\mathrm{PW})$ for all $k = 1, 2, \cdots, n-1$. This completes the proof.

Theorem 4 For (P1) with $f_j(u) = b_j + f(u)$ and $a_j = a$ for all $j$, and with the PW property, let $\lambda$ be the permutation that satisfies $\lambda(k) \le \lambda(k+1)\,(\mathrm{PW})$ for all $k = 1, 2, \cdots, n-1$. If (P3) with permutation $\lambda$ is resource sufficient, then $(\lambda, \hat u(\lambda))$ is an optimal solution. Moreover, if $\lambda(k) < \lambda(k+1)\,(\mathrm{PW})$ for all $k = 1, 2, \cdots, n-1$, then $(\lambda, \hat u(\lambda))$ is the unique optimal schedule.

Proof Let $(\pi, u) \in S$. According to Lemma 7, we have $F(\lambda, \hat u(\lambda)) \le F(\pi, u)$, and $\hat u(\lambda) = u$ if $F(\lambda, \hat u(\lambda)) = F(\pi, u)$. On the other hand, since (P3) with permutation $\lambda$ is resource sufficient, we have $\hat u(\lambda) \in A$, that is, $(\lambda, \hat u(\lambda)) \in S$. Hence $(\lambda, \hat u(\lambda))$ is an optimal schedule. Moreover, it is obvious that $(\lambda, \hat u(\lambda))$ is the unique optimal schedule if $\lambda(k) < \lambda(k+1)\,(\mathrm{PW})$ for all $k = 1, 2, \cdots, n-1$. So Theorem 4 is true.

Acknowledgments The authors cordially thank the anonymous referees for their valuable comments, which led to the improvement of this paper.
References

Błazewicz J, Ecker K, Schmidt G, Weglarz J (1983) Scheduling in computer and manufacturing systems. Springer, Berlin

Carlier J, Rinnooy Kan AHG (1982) Scheduling subject to nonrenewable-resource constraints. Oper Res Lett 1(2):52–55

Du DZ, Ko K-I, Hu XD (2011) Design and analysis of approximation algorithms (in Chinese). Higher Education Press, Beijing

Gafarov ER, Lazarev AA, Werner F (2011) Single machine scheduling problems with financial resource constraints: some complexity results and properties. Math Soc Sci 62(1):7–13

Janiak A (1991) Single machine scheduling problem with a common deadline and resource dependent release dates. Eur J Oper Res 53:317–325

Korte B, Vygen J (2000) Combinatorial optimization: theory and algorithms. Springer, Berlin

Lawler EL (1983) Recent results in the theory of machine scheduling. In: Bachem A, Grötschel M, Korte B (eds) Mathematical programming: the state of the art, Bonn 1982. Springer, Berlin, pp 202–234

Smith WE (1956) Various optimizers for single-stage production. Naval Res Logist Q 3:59–66

Toker A, Kondakci S, Erkip N (1991) Scheduling under a non-renewable resource constraint. J Oper Res Soc 42(9):811–814

Xie J (1997) Polynomial algorithms for single machine scheduling problems with financial constraints. Oper Res Lett 21:39–42