Optim Lett. DOI 10.1007/s11590-012-0574-5. ORIGINAL PAPER

Scheduling with a position-weighted learning effect

T. C. E. Cheng · Wen-Hung Kuo · Dar-Li Yang

Received: 11 March 2012 / Accepted: 24 September 2012
© Springer-Verlag Berlin Heidelberg 2012
Abstract  In general, human learning takes time to build up, resulting from a worker gaining experience from repeating similar operations over time. In the early stage of processing a given set of similar jobs, a worker is not familiar with the operations, so his learning effect on the jobs scheduled early is not apparent. On the other hand, once the worker has gained experience in processing the jobs, his learning improves. So a worker's learning effect on a job depends not only on the total processing time of the jobs that he has processed but also on the job position. In this paper we introduce a position-weighted learning effect model for scheduling problems. We provide optimal solutions for the single-machine problems to minimize the makespan and the total completion time, and an optimal solution for the single-machine problem to minimize the total tardiness under an agreeable situation. We also consider two special cases of the flowshop problem.

Keywords  Scheduling · Learning effect · Makespan · Total completion time · Total tardiness

1 Introduction

Learning and its effect on productivity are well recognized in industry. In general, a worker learns from experience in performing repetitive tasks and the learning effect
T. C. E. Cheng: Department of Logistics and Maritime Studies, The Hong Kong Polytechnic University, Kowloon, Hong Kong.
W.-H. Kuo (corresponding author) · D.-L. Yang: Department of Information Management, National Formosa University, Yun-Lin 632, Taiwan, ROC. e-mail: [email protected]
manifests itself in labour productivity improvement. However, a worker is not familiar with the operations at the beginning of processing a set of similar jobs, so his learning effect on the jobs scheduled early is not apparent. On the other hand, when the worker becomes experienced in processing the jobs, he enhances his learning. There are two main classes of learning effect models studied in the scheduling literature. One is position-based learning (see, e.g., [1–5]) and the other is sum-of-processing-time-based learning (see, e.g., [6,7]). For a comprehensive survey of this stream of research, the reader may refer to Biskup [8]. As for the learning effect, Biskup [8] stated that it may be more appropriate to assume that position-based learning takes place during machine setups only, while sum-of-processing-time-based learning occurs when workers have gained experience from producing similar jobs. Practical examples of the former class of learning are the manufacturing of memory chips and circuit boards, and the operation of bottling plants. For the latter class of learning, the production of high-end machine tools and the maintenance of aircraft are two practical examples.

Recently, some researchers have introduced models that combine the two classes of learning for scheduling problems. Such learning models are appropriate for operations that involve both setup and human learning. For example, an operator performs the operations of drilling holes in parts, where different parts require different hole sizes. Hence, each part (i.e., job) requires a setup time and a drilling time. In such an environment, Wu and Lee [9] proposed a learning model whereby the actual processing time of job $i$ scheduled in position $r$ of a sequence is

$$p_{ir} = p_i \left(1 + \frac{\sum_{k=1}^{r-1} p_{[k]}}{\sum_{k=1}^{n} p_k}\right)^{a_1} r^{a_2}, \qquad (1)$$

where $a_1 < 0$ and $a_2 < 0$ denote the learning indices for sum-of-processing-time-based and position-based learning, respectively, $p_i$ denotes the normal processing time of job $i$, and the subscript $[k]$ denotes the job in position $k$ of a sequence. They showed that the single-machine problems to minimize the makespan and the total completion time are polynomially solvable under the proposed learning model, and that the problem to minimize the total weighted completion time has a polynomial optimal solution under certain agreeable conditions. Cheng et al. [10] considered a similar learning model whereby the actual processing time of job $i$ scheduled in position $r$ of a sequence is

$$p_{ir} = p_i \left(1 - \frac{\sum_{k=1}^{r-1} p_{[k]}}{\sum_{k=1}^{n} p_k}\right)^{a_1} r^{a_2}, \qquad (2)$$

where $a_1 \geq 1$ and $a_2 < 0$ denote the learning indices for sum-of-processing-time-based and position-based learning, respectively. They obtained results similar to those of Wu and Lee [9]. Besides, they provided polynomial optimal solutions for some special cases of the $m$-machine flowshop problems to minimize the makespan and the total completion time. Yin et al. [11] proposed another learning model whereby the actual processing time of job $i$ scheduled in position $r$ of a sequence is

$$p_{ir} = p_i \, f\!\left(\sum_{k=1}^{r-1} p_{[k]}\right) g(r), \quad r = 1, 2, \ldots, n, \qquad (3)$$

where $\sum_{k=1}^{0} p_{[k]} = 0$, $f : [0, +\infty) \to (0, 1]$ is a differentiable non-increasing function with $f'$ non-decreasing on $[0, +\infty)$ and $f(0) = 1$, and $g : [1, +\infty) \to (0, 1]$ is a non-increasing function with $g(1) = 1$. They showed that the single-machine problems to minimize the makespan and the sum of the $k$th power of completion times remain polynomially solvable, and that the problems to minimize the total weighted completion time and the maximum lateness are polynomially solvable under certain conditions. They also showed that for some special cases of the $m$-machine permutation flowshop problem, similar results can be obtained under the proposed learning model.

In general, jobs scheduled in different positions of a sequence should be subject to different effects of human learning. In the early stage of processing a given set of similar jobs, a worker is unfamiliar with the operations, so the learning effect on the jobs scheduled early is not apparent. On the other hand, when the worker becomes used to processing similar jobs, his learning improves. So a worker's learning effect on a job depends not only on the total processing time of the jobs that he has processed but also on the job position. In this paper we introduce a position-weighted learning effect model for scheduling problems in the above situation. We consider some single-machine and flowshop scheduling problems under the proposed learning model to minimize three performance measures, namely the makespan ($C_{\max} = \max\{C_i \mid i = 1, \ldots, n\}$), the total completion time ($TC = \sum_{i=1}^{n} C_i$), and the total tardiness ($\sum T_i = \sum_{i=1}^{n} \max\{C_i - d_i, 0\}$), where $C_i$ and $d_i$ are the completion time and due date of job $i$, respectively. We note that our results are the same as those in Cheng et al. [10] and Yin et al. [11] with the same objectives. Also, in [11], a general learning model $p_{ir} = p_i f(\sum_{k=1}^{r-1} p_{[k]}) g(r)$ is proposed, so the learning effect is the product of a function of the position $r$ and a function of $\sum_{k=1}^{r-1} p_{[k]}$. Our learning model is $p_{jr} = p_j (1 + \sum_{k=1}^{r-1} w_k p_{[k]})^a r^b$. If we drop the position weights $w_k$, our learning model becomes $p_{jr} = p_j (1 + \sum_{k=1}^{r-1} p_{[k]})^a r^b$, which is one of the models in [11]. Therefore, our results generalize some of the results in [11].
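To make the model concrete, the actual processing time under our position-weighted learning effect can be computed directly from its definition. The following is a minimal sketch (function names and parameter values are ours, chosen for illustration, not taken from the paper):

```python
def actual_time(p, prefix, weights, a, b):
    """Actual time p_{jr} = p_j * (1 + sum_{k<r} w_k * p_[k])**a * r**b.

    p       -- normal processing time of the job being scheduled
    prefix  -- normal times of the jobs already scheduled (positions 1..r-1)
    weights -- position weights 0 < w_1 <= w_2 <= ... (only the first r-1 are used)
    a, b    -- learning indices, a <= 0 and b <= 0
    """
    r = len(prefix) + 1
    s = sum(w * q for w, q in zip(weights, prefix))  # sum_{k=1}^{r-1} w_k p_[k]
    return p * (1 + s) ** a * r ** b

# Illustrative values (ours): the same normal time shrinks as the job is moved
# to later positions, i.e., the learning effect builds up over the sequence.
w = [0.2, 0.5, 1.0, 1.0]
history = [2.0, 4.0, 1.0]
times = [actual_time(3.0, history[: r - 1], w, -0.3, -0.2) for r in (1, 2, 3, 4)]
assert times[0] == 3.0 and all(t1 > t2 for t1, t2 in zip(times, times[1:]))
```

Because the weights are non-decreasing and both indices are non-positive, a job placed later sees a larger weighted history and a larger position discount, which is exactly the behaviour the model is meant to capture.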
2 Some single-machine scheduling problems

Assume that there are $n$ jobs $J_1, J_2, \ldots, J_n$ to be processed on a single machine. All the jobs are non-preemptive. The machine can handle at most one job at a time and does not stand idle until the last job assigned to it has finished processing. If $p_j$ is the normal processing time of job $j$, then the actual processing time of job $j$ if it is scheduled in position $r$ of a sequence is given by

$$p_{jr} = p_j \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b, \quad j, r = 1, \ldots, n, \qquad (4)$$

where $a \leq 0$ and $b \leq 0$ are the learning indices for sum-of-processing-time-based and position-based learning, respectively, $p_{[k]}$ is the normal processing time of the job scheduled in position $k$ of a sequence, and $w_k > 0$ is the weight associated with the $k$th position. Since the worker is not familiar with the job operations at the beginning, the learning effect on a job is less apparent if it is scheduled early in a sequence. Hence, we assume that the position weights form a non-decreasing sequence, i.e., $0 < w_1 \leq w_2 \leq \cdots \leq w_n$.

2.1 Makespan minimization

In this section we study the single-machine scheduling problem under the general learning effect model (4) to minimize the makespan. For convenience, we denote the general learning effect by $LE_{pos}$. Using an extension of the three-field notation for scheduling problems introduced by Graham et al. [12], we denote the problem under study by $1/LE_{pos}/C_{\max}$. We show that the proposed problem can be solved optimally by the shortest processing time (SPT) rule. Before we prove this result, we first introduce the following lemmas, which extend results in Kuo and Yang [6].

Lemma 1  $1 - (1+t)^a l^b + a t (1+t)^{a-1} l^b \geq 0$ if $a \leq 0$, $b \leq 0$, $l \geq 1$ and $t \geq 0$.

Proof  The proof is similar to that in Kuo and Yang [6] and so is omitted.

Lemma 2  $\lambda\big(k - (1+t)^a l^b\big) - \big(k - (1+\lambda t)^a l^b\big) \geq 0$ if $\lambda \geq 1$, $k \geq 1$, $a \leq 0$, $b \leq 0$, $l \geq 1$ and $t \geq 0$.

Proof  The proof is similar to that in Kuo and Yang [6] and so is omitted.
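Although the proofs are omitted, Lemma 2 is easy to spot-check numerically. The grid check below is our own sketch (a sanity check, not a substitute for the proof):

```python
def lemma2_lhs(lam, k, a, b, l, t):
    # lambda*(k - (1+t)^a * l^b) - (k - (1+lambda*t)^a * l^b), as in Lemma 2
    return lam * (k - (1 + t) ** a * l ** b) - (k - (1 + lam * t) ** a * l ** b)

# Non-negativity on a small grid of admissible parameter values:
# lambda >= 1, k >= 1, a <= 0, b <= 0, l >= 1, t >= 0.
ok = all(
    lemma2_lhs(lam, k, a, b, l, t) >= -1e-12  # tolerance for floating-point noise
    for lam in (1.0, 1.5, 3.0)
    for k in (1.0, 2.0)
    for a in (0.0, -0.5, -1.0)
    for b in (0.0, -0.5, -1.0)
    for l in (1.0, 2.0, 5.0)
    for t in (0.0, 0.3, 2.0)
)
assert ok
```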
Theorem 1  For the $1/LE_{pos}/C_{\max}$ problem, there exists an optimal schedule in which the jobs are sequenced in non-decreasing order of $p_i$, i.e., in the SPT order.

Proof  Let $S_1 = (\pi_1, J_h, J_j, J_i, \pi_2)$ denote a sequence with job $j$ ($J_j$) in position $r$ and job $i$ ($J_i$) in position $(r+1)$, where $\pi_1$ and $\pi_2$ are partial sequences of $S_1$, which may be empty. We show that swapping $J_j$ and $J_i$ in $S_1$ does not increase the makespan. Let $S_2$ be the new sequence created by swapping $J_j$ and $J_i$. In addition, let $C_l(S_1)$ and $C_l(S_2)$ denote the completion times of $J_l$ in sequences $S_1$ and $S_2$, respectively. To prove that the makespan of the problem $1/LE_{pos}/C_{\max}$ is minimized by the SPT rule (i.e., $p_i \leq p_j$), it suffices to show that (a) $C_j(S_2) \leq C_i(S_1)$ and (b) $C_m(S_2) \leq C_m(S_1)$ for any job $J_m$ in $\pi_2$.

We first prove (a) by pairwise interchange of jobs. Note that

$$C_i(S_1) = C_h(S_1) + p_j \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_i \left(1 + w_r p_j + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b$$

and

$$C_j(S_2) = C_h(S_2) + p_i \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_j \left(1 + w_r p_i + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b.$$
Therefore,

$$C_i(S_1) - C_j(S_2) = (p_j - p_i) + p_i (1 + w_1 p_j)^a 2^b - p_j (1 + w_1 p_i)^a 2^b \quad \text{if } r = 1 \qquad (5)$$

and

$$C_i(S_1) - C_j(S_2) = (p_j - p_i)\left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_i \left(1 + w_r p_j + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b - p_j \left(1 + w_r p_i + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b \quad \text{if } r \geq 2 \qquad (6)$$

since $C_h(S_1) = C_h(S_2)$.

Case I: $r = 1$. Letting $\lambda = \frac{p_j}{p_i}$, we have

$$p_j = \lambda p_i. \qquad (7)$$

Substituting (7) into (5) yields

$$C_i(S_1) - C_j(S_2) = (\lambda p_i - p_i) + p_i (1 + \lambda w_1 p_i)^a 2^b - \lambda p_i (1 + w_1 p_i)^a 2^b = p_i \left[\lambda\big(1 - (1 + w_1 p_i)^a 2^b\big) - \big(1 - (1 + \lambda w_1 p_i)^a 2^b\big)\right]. \qquad (8)$$

Letting $t = w_1 p_i > 0$, we can re-write (8) as

$$C_i(S_1) - C_j(S_2) = p_i \left[\lambda\big(1 - (1+t)^a 2^b\big) - \big(1 - (1 + \lambda t)^a 2^b\big)\right]. \qquad (9)$$

Hence, if $\lambda = \frac{p_j}{p_i} \geq 1$, then from Lemma 2 we have

$$C_i(S_1) - C_j(S_2) = p_i \left[\lambda\big(k - (1+t)^a 2^b\big) - \big(k - (1 + \lambda t)^a 2^b\big)\right] \geq 0$$

since $t = w_1 p_i > 0$, $l = 2$, and $k = 1$. That is, in an optimal schedule for the $1/LE_{pos}/C_{\max}$ problem, if $p_i \leq p_j$, then job $i$ is scheduled before job $j$.
Case II: $r \geq 2$. Letting $x = 1 + \sum_{k=1}^{r-1} w_k p_{[k]} > 0$, we have

$$\frac{C_i(S_1) - C_j(S_2)}{x^a r^b} = (p_j - p_i) + p_i \left(1 + \frac{w_r p_j}{x}\right)^a \left(\frac{r+1}{r}\right)^b - p_j \left(1 + \frac{w_r p_i}{x}\right)^a \left(\frac{r+1}{r}\right)^b = p_j \left[1 - \left(1 + \frac{w_r p_i}{x}\right)^a \left(\frac{r+1}{r}\right)^b\right] - p_i \left[1 - \left(1 + \frac{w_r p_j}{x}\right)^a \left(\frac{r+1}{r}\right)^b\right]. \qquad (10)$$

Substituting (7) into (10) yields

$$\frac{C_i(S_1) - C_j(S_2)}{x^a r^b} = \lambda p_i \left[1 - \left(1 + \frac{w_r p_i}{x}\right)^a \left(\frac{r+1}{r}\right)^b\right] - p_i \left[1 - \left(1 + \frac{\lambda w_r p_i}{x}\right)^a \left(\frac{r+1}{r}\right)^b\right] = p_i \left[\lambda\left(1 - \left(1 + \frac{w_r p_i}{x}\right)^a \left(\frac{r+1}{r}\right)^b\right) - \left(1 - \left(1 + \frac{\lambda w_r p_i}{x}\right)^a \left(\frac{r+1}{r}\right)^b\right)\right]. \qquad (11)$$

Letting $t = \frac{w_r p_i}{x}$, we can re-write (11) as

$$\frac{C_i(S_1) - C_j(S_2)}{x^a r^b} = p_i \left[\lambda\left(1 - (1+t)^a \left(\tfrac{r+1}{r}\right)^b\right) - \left(1 - (1 + \lambda t)^a \left(\tfrac{r+1}{r}\right)^b\right)\right].$$

Thus, if $\lambda = \frac{p_j}{p_i} \geq 1$, then from Lemma 2 we have

$$\frac{C_i(S_1) - C_j(S_2)}{x^a r^b} = p_i \left[\lambda\left(k - (1+t)^a \left(\tfrac{r+1}{r}\right)^b\right) - \left(k - (1 + \lambda t)^a \left(\tfrac{r+1}{r}\right)^b\right)\right] \geq 0$$

since $t = \frac{w_r p_i}{x} > 0$, $k = 1$, and $l = \frac{r+1}{r} \geq 1$. That is, in an optimal schedule for the $1/LE_{pos}/C_{\max}$ problem, if $p_i \leq p_j$, then job $i$ is scheduled before job $j$. This completes the proof of (a).
Next, we prove (b) as follows. Let $J_s$ be the first job in $\pi_2$. Then

$$C_s(S_1) = C_i(S_1) + p_s \left(1 + w_r p_j + w_{r+1} p_i + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+2)^b$$

and

$$C_s(S_2) = C_j(S_2) + p_s \left(1 + w_r p_i + w_{r+1} p_j + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+2)^b.$$

So

$$C_s(S_1) - C_s(S_2) = C_i(S_1) - C_j(S_2) + p_s (r+2)^b \left[\left(1 + w_r p_j + w_{r+1} p_i + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a - \left(1 + w_r p_i + w_{r+1} p_j + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a\right]. \qquad (12)$$

If $p_i \leq p_j$, then $C_i(S_1) - C_j(S_2) \geq 0$ from the result of (a). Since $(w_r p_j + w_{r+1} p_i) \leq (w_r p_i + w_{r+1} p_j)$ and $a \leq 0$, the second term on the right-hand side of (12) is greater than or equal to zero. Therefore, $C_s(S_2) \leq C_s(S_1)$. Similarly, we can show that the completion time of any subsequent job in $S_2$ is not later than that in $S_1$. Therefore, the makespan of $S_2$ is not greater than that of $S_1$. Repeating this interchange argument for all the jobs not sequenced in the SPT order yields the theorem.

2.2 Total completion time minimization

In this section we study the single-machine scheduling problem under the general learning effect model (4) to minimize the total completion time, i.e., the $1/LE_{pos}/\sum C_i$ problem.

Theorem 2  For the $1/LE_{pos}/\sum C_i$ problem, there exists an optimal schedule in which the jobs are sequenced in non-decreasing order of $p_i$, i.e., in the SPT order.

Proof  We use the same notation as in the proof of Theorem 1. To prove that the total completion time of the problem $1/LE_{pos}/\sum C_i$ is minimized by the SPT rule (i.e., $p_i \leq p_j$), it suffices to show that $\sum_{k=1}^{n} C_k(S_2) \leq \sum_{k=1}^{n} C_k(S_1)$. By the proof of Theorem 1, we have $C_j(S_2) \leq C_i(S_1)$ and $C_m(S_2) \leq C_m(S_1)$ for any job $J_m$ in $\pi_2$. Since $C_h(S_1) = C_h(S_2)$ and $C_k(S_1) = C_k(S_2)$ for any job $J_k$ in $\pi_1$, we only need to show that $C_i(S_2) \leq C_j(S_1)$.
Taking the difference between $C_j(S_1)$ and $C_i(S_2)$, we have

$$C_j(S_1) - C_i(S_2) = \left(C_h(S_1) + p_j \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b\right) - \left(C_h(S_2) + p_i \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b\right) \geq 0.$$

Repeating this interchange argument for all the jobs not sequenced in the SPT order yields the theorem.

Corollary 1  For the $1/LE_{pos}/\sum C_i^k$ problem, there exists an optimal schedule in which the jobs are sequenced in non-decreasing order of $p_i$, i.e., in the SPT order.

Proof  Since $k$ is a positive integer, Corollary 1 follows immediately from Theorem 2.
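Theorems 1 and 2 can be sanity-checked by exhaustive search on a tiny instance. In the sketch below (instance data and names are ours), the SPT sequence attains both the minimum makespan and the minimum total completion time over all permutations:

```python
from itertools import permutations

def completion_times(seq, weights, a, b):
    """Completion times under model (4) for jobs with normal times seq."""
    done, total, out = [], 0.0, []
    for r, p in enumerate(seq, start=1):
        s = sum(w * q for w, q in zip(weights, done))  # sum_{k<r} w_k p_[k]
        total += p * (1 + s) ** a * r ** b             # actual time in position r
        out.append(total)
        done.append(p)
    return out

jobs, w, a, b = [5.0, 2.0, 8.0, 3.0], [0.3, 0.6, 0.9, 1.2], -0.4, -0.25
spt = sorted(jobs)
best_cmax = min(completion_times(list(s), w, a, b)[-1] for s in permutations(jobs))
best_tct = min(sum(completion_times(list(s), w, a, b)) for s in permutations(jobs))
assert abs(completion_times(spt, w, a, b)[-1] - best_cmax) < 1e-9  # Theorem 1
assert abs(sum(completion_times(spt, w, a, b)) - best_tct) < 1e-9  # Theorem 2
```

Note that the weights are non-decreasing, as the model requires; the brute force is only feasible for toy instances, whereas the theorems give the SPT order for any $n$.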
2.3 Total tardiness minimization

In this section we study the single-machine scheduling problem under the general learning effect model (4) to minimize the total tardiness, i.e., the $1/LE_{pos}/\sum T_i$ problem.

Theorem 3  For the $1/LE_{pos}/\sum T_i$ problem, there exists an optimal schedule in which the jobs are sequenced in non-decreasing order of $d_i$, i.e., in the earliest due-date (EDD) order, if the job processing times and due dates are agreeable, i.e., $d_i \leq d_j$ implies $p_i \leq p_j$.

Proof  We use the same notation as in the proof of Theorem 1. To prove that the total tardiness of the $1/LE_{pos}/\sum T_i$ problem is minimized by the EDD rule (i.e., $d_i \leq d_j$) under the agreeable condition, it suffices to show that (a) $T_i(S_2) + T_j(S_2) \leq T_j(S_1) + T_i(S_1)$ and (b) $\sum_{J_k \in \pi_2} T_k(S_2) \leq \sum_{J_k \in \pi_2} T_k(S_1)$. Part (a) guarantees that the total tardiness of $J_i$ and $J_j$ in $S_2$ is not greater than that in $S_1$. Part (b) guarantees that the total tardiness of all the jobs scheduled after the pair of jobs $J_i$ and $J_j$ in $S_2$ is not greater than that in $S_1$. If $d_i \leq d_j$ implies $p_i \leq p_j$, the result of part (b) follows easily from the result of Theorem 1. Therefore, we only need to show that part (a) holds. The tardiness of jobs $J_j$ and $J_i$ in $S_1$ are

$$T_j(S_1) = \max\left\{C_h(S_1) + p_j \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b - d_j,\; 0\right\}$$

and

$$T_i(S_1) = \max\left\{C_h(S_1) + p_j \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_i \left(1 + w_r p_j + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b - d_i,\; 0\right\}.$$

Similarly, the tardiness of jobs $J_i$ and $J_j$ in $S_2$ are

$$T_i(S_2) = \max\left\{C_h(S_2) + p_i \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b - d_i,\; 0\right\}$$

and

$$T_j(S_2) = \max\left\{C_h(S_2) + p_i \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_j \left(1 + w_r p_i + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b - d_j,\; 0\right\}.$$
There are four possible cases for the relationship between the completion times and due dates of $J_i$ and $J_j$ in $S_1$:

Case I. $C_i(S_1) \leq d_i$ and $C_j(S_1) \leq d_j$. In this case, from Theorem 1, swapping $J_i$ and $J_j$ does not change the relationship between the completion times and due dates of $J_i$ and $J_j$, i.e., $C_i(S_2) \leq d_i$ and $C_j(S_2) \leq d_j$. Therefore, the total tardiness of $J_i$ and $J_j$ in $S_2$ is not greater than that in $S_1$; more precisely, the total tardiness (zero) of $J_i$ and $J_j$ in $S_2$ is equal to that in $S_1$.

Case II. $C_i(S_1) \geq d_i$ and $C_j(S_1) \leq d_j$. In this case, after swapping $J_i$ and $J_j$, if $C_i(S_2) \geq d_i$, then $C_j(S_2) \leq d_j$ or $C_j(S_2) \geq d_j$; if $C_i(S_2) \leq d_i$, then $C_j(S_2) \leq d_j$ or $C_j(S_2) \geq d_j$.

Case III. $C_i(S_1) \leq d_i$ and $C_j(S_1) \geq d_j$. Clearly, since $C_j(S_1) \leq C_i(S_1)$, this case does not exist under the condition $d_i \leq d_j$.

Case IV. $C_i(S_1) \geq d_i$ and $C_j(S_1) \geq d_j$. In this case, after swapping $J_i$ and $J_j$, if $C_i(S_2) \geq d_i$, then $C_j(S_2) \leq d_j$ or $C_j(S_2) \geq d_j$; if $C_i(S_2) \leq d_i$, then $C_j(S_2) \leq d_j$ or $C_j(S_2) \geq d_j$.

Based on the above analysis, we only need to consider Cases II and IV to show that $T_i(S_2) + T_j(S_2) \leq T_j(S_1) + T_i(S_1)$. Moreover, within these two cases we only need to consider the situation $C_i(S_2) \geq d_i$ and $C_j(S_2) \geq d_j$, since it is the most restrictive situation compared with the others. In the following we show that $T_i(S_2) + T_j(S_2) \leq T_j(S_1) + T_i(S_1)$ in Cases II and IV.

Case II. $C_i(S_1) \geq d_i$ and $C_j(S_1) \leq d_j$ versus $C_i(S_2) \geq d_i$ and $C_j(S_2) \geq d_j$:
$$T_j(S_1) + T_i(S_1) - (T_i(S_2) + T_j(S_2))$$
$$= C_h(S_1) + p_j \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_i \left(1 + w_r p_j + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b - d_i$$
$$\quad - \left\{\left[C_h(S_2) + p_i \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b - d_i\right] + \left[C_h(S_2) + p_i \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_j \left(1 + w_r p_i + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b - d_j\right]\right\}$$
$$= (p_j - p_i)\left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_i \left(1 + w_r p_j + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b - p_j \left(1 + w_r p_i + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b - p_i \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b - C_h(S_2) + d_j. \qquad (13)$$

From (6) in Theorem 1, the sum of the first three terms on the right-hand side of (13) is greater than or equal to zero for $p_i \leq p_j$. Also, since $C_j(S_1) \leq d_j$, we have

$$d_j \geq C_h(S_1) + p_j \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b \geq C_h(S_2) + p_i \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b.$$

Thus, $T_j(S_1) + T_i(S_1) - (T_i(S_2) + T_j(S_2)) \geq 0$.

Case IV. $C_i(S_1) \geq d_i$ and $C_j(S_1) \geq d_j$ versus $C_i(S_2) \geq d_i$ and $C_j(S_2) \geq d_j$:

$$T_j(S_1) + T_i(S_1) - (T_i(S_2) + T_j(S_2))$$
$$= \left[C_h(S_1) + p_j \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_i \left(1 + w_r p_j + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b - d_i\right] + \left[C_h(S_1) + p_j \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b - d_j\right]$$
$$\quad - \left[C_h(S_2) + p_i \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b - d_i\right] - \left[C_h(S_2) + p_i \left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_j \left(1 + w_r p_i + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b - d_j\right]$$
$$= (p_j - p_i)\left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + (p_j - p_i)\left(1 + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a r^b + p_i \left(1 + w_r p_j + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b - p_j \left(1 + w_r p_i + \sum_{k=1}^{r-1} w_k p_{[k]}\right)^a (r+1)^b. \qquad (14)$$

From (6) and $p_i \leq p_j$, (14) is greater than or equal to zero. Thus, $T_j(S_1) + T_i(S_1) - (T_i(S_2) + T_j(S_2)) \geq 0$.

Repeating this interchange argument for all the jobs not sequenced in the EDD order yields the theorem.
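Theorem 3 can likewise be checked by exhaustive search on a small agreeable instance ($d_i \leq d_j$ implies $p_i \leq p_j$). The sketch below uses instance data of our own choosing:

```python
from itertools import permutations

def total_tardiness(jobs, weights, a, b):
    """Total tardiness under model (4); jobs is a list of (normal time, due date)."""
    done, t, tard = [], 0.0, 0.0
    for r, (p, d) in enumerate(jobs, start=1):
        s = sum(w * q for w, q in zip(weights, done))  # sum_{k<r} w_k p_[k]
        t += p * (1 + s) ** a * r ** b                 # completion time so far
        tard += max(t - d, 0.0)
        done.append(p)
    return tard

# Agreeable instance (ours): shorter jobs have earlier due dates.
jobs = [(2.0, 3.0), (4.0, 5.0), (5.0, 9.0), (7.0, 12.0)]
w, a, b = [0.3, 0.6, 0.9, 1.2], -0.4, -0.25
edd = sorted(jobs, key=lambda jd: jd[1])
best = min(total_tardiness(list(s), w, a, b) for s in permutations(jobs))
assert total_tardiness(edd, w, a, b) <= best + 1e-9  # Theorem 3
```

Without the agreeable condition the EDD order need not be optimal, which is why the theorem carries the restriction.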
3 Some flowshop scheduling problems

The flowshop scheduling problem is described as follows. Assume that there are $n$ jobs $J_1, J_2, \ldots, J_n$ to be processed on $m$ machines $M_1, M_2, \ldots, M_m$. Job $J_j$ consists of $m$ operations $(O_{1j}, O_{2j}, \ldots, O_{mj})$. Operation $O_{ij}$ must be processed on machine $M_i$, $i = 1, 2, \ldots, m$. The processing of $O_{i+1,j}$ may start only after $O_{ij}$ has been completed, and all the machines process the jobs in the same order, i.e., we consider permutation schedules. The jobs are non-preemptive. Each machine can handle at most one job at a time and does not stand idle until the last job assigned to it has finished processing. Let $p_{ijr}$ be the actual processing time of $J_j$ ($j = 1, 2, \ldots, n$) on $M_i$ ($i = 1, 2, \ldots, m$) if it is scheduled in position $r$ of a sequence. As before, the position-weighted learning effect model for the flowshop scheduling problem is given as follows:

$$p_{ijr} = p_{ij}\left(1 + \sum_{k=1}^{r-1} w_k p_{i[k]}\right)^a r^b, \quad j, r = 1, 2, \ldots, n;\ i = 1, 2, \ldots, m, \qquad (15)$$

where $p_{ij}$ is the normal processing time of $O_{ij}$. For a given schedule $\sigma$, let $C_{ij} = C_{ij}(\sigma)$ denote the completion time of $O_{ij}$, $C_j = C_{mj}(\sigma)$ denote the completion time of $J_j$, and $\sigma = ([1], [2], \ldots, [n])$ represent a permutation of $(1, 2, \ldots, n)$, where $[j]$ denotes the job in the $j$th position of $\sigma$.

3.1 Flowshop problems with dominating machines

We first consider a special case of the $m$-machine flowshop problem with dominating machines. Following van den Nouweland et al. [13] and Yang and Chern [14], we say that machine $M_r$ is dominated by $M_k$ (denoted $M_r \prec\cdot M_k$) if and only if $\max\{p_{rj} \mid j = 1, 2, \ldots, n\} \leq \min\{p_{kj} \mid j = 1, 2, \ldots, n\}$. We study the flowshop scheduling problem with an increasing series of dominating machines (idm), i.e., $M_1 \prec\cdot M_2 \prec\cdot \cdots \prec\cdot M_m$. Note that this special case is the same as that considered in Wang and Xia [15]. For a given schedule $\sigma = (J_1, J_2, \ldots, J_n)$, since $M_1 \prec\cdot M_2 \prec\cdot \cdots \prec\cdot M_m$, $O_{i,j+1}$ can start processing immediately after $O_{ij}$ for each $i \geq 1$ and $j \geq 1$. We have

$$C_1 = \sum_{i=1}^{m} p_{i1},$$
$$C_2 = C_1 + p_{m2}(1 + w_1 p_{m1})^a 2^b,$$
$$\ldots,$$
$$C_i = C_1 + \sum_{j=2}^{i} p_{mj}\left(1 + \sum_{k=1}^{j-1} w_k p_{mk}\right)^a j^b, \qquad (16)$$
$$\ldots,$$
$$C_n = C_1 + \sum_{j=2}^{n} p_{mj}\left(1 + \sum_{k=1}^{j-1} w_k p_{mk}\right)^a j^b. \qquad (17)$$
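The completion times (16)-(17) are straightforward to compute once the first job is fixed, since only the last-machine times matter thereafter. A sketch (names ours; the input is assumed to satisfy the idm condition):

```python
def idm_completion_times(p, weights, a, b):
    """Completion times C_1, ..., C_n from (16)-(17) for an idm flowshop.

    p[i][j] is the normal time of the job in position j+1 on machine i+1;
    the instance is assumed to satisfy M_1 <. M_2 <. ... <. M_m.
    """
    m, n = len(p), len(p[0])
    c1 = sum(p[i][0] for i in range(m))  # C_1 = sum_i p_{i1}
    out, acc = [c1], 0.0
    for j in range(1, n):  # positions 2, ..., n
        s = sum(weights[k] * p[m - 1][k] for k in range(j))  # sum_k w_k p_{mk}
        acc += p[m - 1][j] * (1 + s) ** a * (j + 1) ** b
        out.append(c1 + acc)
    return out

# With a = b = 0 (no learning), later jobs just accumulate last-machine times.
assert idm_completion_times([[1, 2, 1], [3, 4, 3], [5, 6, 7]], [1, 1, 1], 0, 0) == [9, 15, 22]
```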
The makespan is given by (17). The first term on the right-hand side of (17) is a constant (if $J_1$ is fixed, then $C_1 = \sum_{i=1}^{m} p_{i1}$). Meanwhile, $\sum_{j=2}^{n} p_{mj}(1 + \sum_{k=1}^{j-1} w_k p_{mk})^a j^b$ is minimized by a strategy similar to that in the proof of Theorem 1. Also, the proofs of Theorems 5 and 6 are similar to those of Theorems 2 and 3, respectively. Thus, Theorems 4 to 6 follow immediately.

Theorem 4  For the problem $F_m / p_{ijr} = p_{ij}(1 + \sum_{k=1}^{r-1} w_k p_{i[k]})^a r^b,\ idm / C_{\max}$, there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of $p_{mj}$, except the first processed job.

Theorem 5  For the problem $F_m / p_{ijr} = p_{ij}(1 + \sum_{k=1}^{r-1} w_k p_{i[k]})^a r^b,\ idm / \sum C_i$, there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of $p_{mj}$, except the first processed job.

Theorem 6  For the problem $F_m / p_{ijr} = p_{ij}(1 + \sum_{k=1}^{r-1} w_k p_{i[k]})^a r^b,\ idm / \sum T_i$, if the jobs satisfy the agreeable condition that $p_{mi} \leq p_{mj}$ implies $d_i \leq d_j$ for all jobs $i$ and $j$, then there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of $d_j$, except the first processed job.

3.2 Flowshop problems with identical processing times on all the machines

Now we consider another special case of the flowshop scheduling problem in which the processing times are identical on all the machines, i.e., $p_{ij} = p_j$ [16]. Without learning effects, the completion time $C_{[j]}$ is

$$C_{[j]}(\sigma) = \sum_{k=1}^{j} p_{[k]} + (m-1)\max\{p_{[1]}, p_{[2]}, \ldots, p_{[j]}\}. \qquad (18)$$

Hence, under the proposed learning model, the completion time $C_{[j]}$ of $J_{[j]}$ is

$$C_{[j]}(\sigma) = \sum_{l=1}^{j} p_{[l]}\left(1 + \sum_{k=1}^{l-1} w_k p_{[k]}\right)^a l^b + (m-1)\max\left\{p_{[1]},\ p_{[2]}(1 + w_1 p_{[1]})^a 2^b,\ \ldots,\ p_{[j]}\left(1 + \sum_{k=1}^{j-1} w_k p_{[k]}\right)^a j^b\right\}. \qquad (19)$$
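Formula (19) can be evaluated directly; the sketch below (names ours) computes $C_{[j]}$ for the identical-processing-times case:

```python
def c_j(p_seq, j, m, weights, a, b):
    """C_[j] from (19): sum of the learning-adjusted times of positions 1..j,
    plus (m - 1) times the largest adjusted time among them."""
    adj = []
    for l in range(1, j + 1):
        s = sum(weights[k] * p_seq[k] for k in range(l - 1))  # sum_{k<l} w_k p_[k]
        adj.append(p_seq[l - 1] * (1 + s) ** a * l ** b)
    return sum(adj) + (m - 1) * max(adj)

# With a = b = 0 this reduces to (18): sum of times plus (m - 1) * max time.
assert c_j([2, 1, 4], 3, 3, [1, 1, 1], 0, 0) == 2 + 1 + 4 + (3 - 1) * 4
```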
The proofs of Theorems 7 to 9 below are similar to those of Theorems 1 to 3, respectively, and are thus omitted.

Theorem 7  For the problem $F_m / p_{ijr} = p_{ij}(1 + \sum_{k=1}^{r-1} w_k p_{i[k]})^a r^b,\ p_{ij} = p_j / C_{\max}$, there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of $p_j$.

Theorem 8  For the problem $F_m / p_{ijr} = p_{ij}(1 + \sum_{k=1}^{r-1} w_k p_{i[k]})^a r^b,\ p_{ij} = p_j / \sum C_i$, there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of $p_j$.

Theorem 9  For the problem $F_m / p_{ijr} = p_{ij}(1 + \sum_{k=1}^{r-1} w_k p_{i[k]})^a r^b,\ p_{ij} = p_j / \sum T_i$, if the jobs satisfy the agreeable condition that $p_{mi} \leq p_{mj}$ implies $d_i \leq d_j$ for all jobs $i$ and $j$, then there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of $d_j$.

4 Conclusions

In this paper we study scheduling problems under a position-weighted learning effect model. We propose this learning model based on the following observations: (1) a worker's learning effect on a job depends not only on the total processing time of the jobs that he has processed but also on the job position, and (2) in the early stage of processing a given set of similar jobs, a worker is not familiar with the operations, so the learning effect on the jobs scheduled early is not apparent. We provide optimal solutions for the single-machine problems to minimize the makespan and the total completion time, and an optimal solution for the single-machine problem to minimize the total tardiness under an agreeable condition. We also consider two special cases of the flowshop problem. For future research, other learning effect models related to realistic situations should be studied for different scheduling problems.

Acknowledgments  We are grateful to two anonymous referees for their helpful comments on earlier versions of our paper. This research was supported in part by the National Science Council of Taiwan, Republic of China, under grant number NSC-98-2221-E-150-032. Professor Yang is currently a Fulbright visiting scholar at Oregon State University and was supported in part by the National Science Council of Taiwan, Republic of China, under grant number NSC-99-2221-E-150-034-MY2.
References

1. Bachman, A., Janiak, A.: Scheduling jobs with position-dependent processing times. J. Oper. Res. Soc. 55, 257–264 (2004)
2. Biskup, D.: Single-machine scheduling with learning considerations. Eur. J. Oper. Res. 115, 173–178 (1999)
3. Biskup, D., Simons, D.: Common due date scheduling with autonomous and induced learning. Eur. J. Oper. Res. 159, 606–616 (2004)
4. Cheng, T.C.E., Wang, G.: Single machine scheduling with learning effect considerations. Ann. Oper. Res. 98, 273–290 (2000)
5. Mosheiov, G., Sidney, J.B.: Scheduling with general job-dependent learning curves. Eur. J. Oper. Res. 147, 665–670 (2003)
6. Kuo, W.H., Yang, D.L.: Minimizing the total completion time in a single-machine scheduling problem with a time-dependent learning effect. Eur. J. Oper. Res. 174, 1184–1190 (2006)
7. Koulamas, C., Kyparisis, G.J.: Single-machine and two-machine flowshop scheduling with general learning functions. Eur. J. Oper. Res. 178, 402–407 (2007)
8. Biskup, D.: A state-of-the-art review on scheduling with learning effects. Eur. J. Oper. Res. 188, 315–329 (2008)
9. Wu, C.C., Lee, W.C.: Single-machine scheduling problems with a learning effect. Appl. Math. Model. 32, 1191–1197 (2008)
10. Cheng, T.C.E., Wu, C.C., Lee, W.C.: Some scheduling problems with sum-of-processing-times-based and job-position-based learning effects. Inf. Sci. 178, 2476–2487 (2008)
11. Yin, Y., Xu, D., Sun, K., Li, H.: Some scheduling problems with general position-dependent and time-dependent learning effects. Inf. Sci. 179, 2416–2425 (2009)
12. Graham, R.L., Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G.: Optimization and approximation in deterministic sequencing and scheduling: a survey. Ann. Discret. Math. 5, 287–326 (1979)
13. van den Nouweland, A., Krabbenborg, M., Potters, J.: Flowshops with a dominant machine. Eur. J. Oper. Res. 62, 38–46 (1992)
14. Yang, D.L., Chern, M.S.: A two-machine flowshop sequencing problem with limited waiting time constraints. Comput. Ind. Eng. 28, 63–70 (1995)
15. Wang, J.B., Xia, Z.Q.: Flow-shop scheduling with a learning effect. J. Oper. Res. Soc. 56, 1325–1330 (2005)
16. Pinedo, M.: Scheduling: Theory, Algorithms, and Systems. Prentice-Hall, Englewood Cliffs (1995)