4OR - A Quarterly Journal of Operations Research, DOI 10.1007/s10288-016-0339-6. Research Paper.
Improved approximation algorithms for parallel machine scheduling with release dates and job rejection

Xueling Zhong¹ · Jinwen Ou²
Received: 22 September 2016 © Springer-Verlag Berlin Heidelberg 2016
Abstract In this paper we study a parallel machine scheduling model with job release dates, where each job is either accepted and processed by a machine at or after its release date, or rejected at a certain penalty cost. The objective is to minimize the makespan of the accepted jobs plus the total penalty of all rejected jobs. The scheduling problem is NP-hard in the strong sense. Zhang and Lu (4OR A Q J Oper Res 14:165-172, 2016) have proposed a 2-approximation algorithm for the problem, and a fully polynomial time approximation scheme (FPTAS) for the special case when the number of machines m is fixed. In this paper we present an improved 2-approximation algorithm and a polynomial time approximation scheme (PTAS) for the problem. We also propose an improved FPTAS for the case when m is fixed.

Keywords Scheduling · Release date · Rejection · Approximation algorithm

Mathematics Subject Classification 90B35
1 Introduction In classical scheduling research it is usually assumed that all jobs have to be processed. But in many practical make-to-order production systems, due to limited production
Corresponding author: Jinwen Ou, [email protected]
Xueling Zhong, [email protected]
1 Department of Internet Finance and Information Engineering, Guangdong University of Finance, Guangzhou 510520, People's Republic of China
2 Department of Administrative Management, Business School, Jinan University, Guangzhou 510632, People's Republic of China
capacity or tight delivery requirements, a manufacturer may accept only some of the orders and reject the others, scheduling the accepted orders on the machines for processing. In such cases, the decisions of order acceptance/rejection and scheduling have to be made simultaneously, so as to balance the total penalty cost of the rejected orders against the time by which all accepted orders are finished. It is also common in practical make-to-order production systems that jobs (orders) have different release dates. In this paper we study parallel machine scheduling with rejection and different job release dates.
1.1 Model description

The scheduling problem we study can be described formally as follows. We are given a set of $m$ identical parallel machines $M = \{M_1, M_2, \dots, M_m\}$ and a set of $n > m$ jobs $J = \{J_1, J_2, \dots, J_n\}$. Associated with each job $J_j$ are a processing time $p_j > 0$, a release date $r_j \ge 0$ and a rejection penalty $w_j > 0$. Job $J_j$ is either rejected, in which case the penalty $w_j$ is paid, or accepted and processed by one of the $m$ machines at or after its release date $r_j$. Preemption is not allowed during job processing, each machine can process at most one job at a time, and each machine is available from time zero. Let $A \subseteq J$ denote the set of accepted jobs, and $R = J \setminus A$ the set of rejected jobs. Let $C_j$ denote the completion time of job $J_j$ on a machine if $J_j \in A$. The objective is to determine $A$ and a feasible schedule of the jobs in $A$ on the $m$ machines, so as to minimize the objective function $Z = \max_{J_j \in A} C_j + \sum_{J_j \in R} w_j$, i.e., the makespan of the accepted jobs plus the total rejection penalty of the rejected jobs. The scheduling problem is denoted by $P|r_j, rej|C_{\max} + \sum w_j$. We let $P_m|r_j, rej|C_{\max} + \sum w_j$ denote the special case when $m$ is fixed.

We define some useful notation as follows. Let $\sigma^*$ denote an optimal solution to problem $P|r_j, rej|C_{\max} + \sum w_j$, $A^*$ the set of accepted jobs in $\sigma^*$, $R^* = J \setminus A^*$ the set of rejected jobs in $\sigma^*$, $C^*_{\max}$ the makespan of the accepted jobs in $\sigma^*$, and $Z^* = C^*_{\max} + \sum_{J_j \in R^*} w_j$ the objective value of $\sigma^*$.

1.2 Literature review

Machine scheduling with rejection (MSR), also known as Order Acceptance and Scheduling (OAS), has attracted considerable attention from scheduling researchers, as well as from production managers who practice it, over the past few decades. MSR/OAS has been studied extensively in the scheduling literature. Slotnick (2011) and Shabtay et al.
(2013) have provided excellent surveys on MSR/OAS. Our scheduling model belongs to off-line nonpreemptive parallel machine scheduling with rejection (PMSR), for which relatively few results exist. Bartal et al. (2000) were the first to study PMSR to minimize the makespan of all accepted jobs plus the total rejection penalty of all rejected jobs. They studied both the on-line model and the off-line model. For the off-line model, they developed a
simple $(2 - 1/m)$-approximation algorithm with a time complexity of $O(n \log n)$, a polynomial time approximation scheme (PTAS), and a fully polynomial time approximation scheme (FPTAS) with a time complexity of $O(n^m/\epsilon^m)$ for the special case when the number of machines $m$ is fixed. Ou et al. (2015) studied the same off-line model as Bartal et al. (2000) and presented a $(3/2 + \epsilon)$-approximation algorithm with a near-linear time complexity. Ou and Zhong (2016) considered PMSR with service level constraints, where the number of rejected jobs is not allowed to exceed a predefined value. They first presented a $(2 - 1/m)$-approximation algorithm with a time complexity of $O(n \log n)$, followed by a $(3/2 + \epsilon)$-approximation algorithm with a time complexity of $O(n \log n + nm^2/\epsilon)$. Li et al. (2015) studied the parallel machine scheduling problem with rejection to minimize the makespan subject to the constraint that the total rejection cost does not exceed a given bound. They first developed a 2-approximation algorithm with a time complexity of $O(mn^2)$, followed by a PTAS with a time complexity of $O(nm^2 + mn^{O(1/\epsilon^2)})$. They also presented an FPTAS with a time complexity of $O(1/\epsilon^{2m+3} + n^2)$ for the special case when $m$ is fixed. In all of the PMSR models mentioned above, job release dates are assumed to be zero.

Our scheduling model is an extension of problem $P|r_j|C_{\max}$, the classical parallel machine scheduling problem with release dates to minimize the makespan. For $P|r_j|C_{\max}$, the list-scheduling algorithm is a $(2 - 1/m)$-approximation (Gusfield 1984; Hall and Shmoys 1989), and the longest-processing-time-first (LPT) algorithm is a $3/2$-approximation (Chen and Vestjens 1997). Hall and Shmoys (1989) presented a PTAS for problem $P|r_j|C_{\max}$. Mastrolilli (2003) developed an "efficient" PTAS for the parallel machine scheduling problem with release dates and delivery times to minimize the maximum delivery completion time, with a time complexity of $O(n + (1/\epsilon)^{O(1/\epsilon^2)})$. Li and Wang (2010) presented a PTAS for the parallel machine scheduling problem with release dates and inclusive processing set restrictions to minimize the makespan. In all of these scheduling models, job rejection is not allowed.

Zhang et al. (2009) considered problem $1|r_j, rej|C_{\max} + \sum w_j$, i.e., the single machine scheduling problem with release dates and rejection to minimize the makespan of the accepted jobs plus the total penalty cost of the rejected jobs. They showed that problem $1|r_j, rej|C_{\max} + \sum w_j$ is NP-hard in the ordinary sense, and proposed a 2-approximation algorithm with a time complexity of $O(n^2)$, followed by an FPTAS with a time complexity of $O(n^3/\epsilon)$. For the same problem, He et al. (2016) proposed a $5/4$-approximation algorithm with an $O(n \log n)$ time complexity, and Ou et al. (2016) developed a 2-approximation algorithm with a time complexity of $O(n \log n)$, as well as an exact algorithm with a time complexity of $O(n^2 \log n)$ for the special case with identical job processing times. Zhong et al. (2016) considered problem $P_2|r_j, rej|C_{\max} + \sum w_j$ and developed a $(3/2 + \epsilon)$-approximation algorithm with a time complexity of $O(n^2/\epsilon)$. Zhang and Lu (2016) studied problem $P|r_j, rej|C_{\max} + \sum w_j$. They proposed a 2-approximation algorithm with a time complexity of $O(n^2)$ for the problem, followed by an FPTAS for the special case when $m$ is fixed; the time complexity of their FPTAS is $O(n^{2m+1}/\epsilon^m)$. In this paper we study the same model as Zhang and Lu (2016).

Our scheduling problem is also related to the unrelated parallel machine scheduling problem with job assignment costs. Shmoys and Tardos (1993) presented approximation algorithms for the unrelated parallel machine scheduling problem with job assignment costs, where the objective is to minimize the makespan plus the total job assignment cost. When the number of machines $m$ is fixed, Fishkin et al. (2008) proposed an FPTAS with a time complexity of $O(n + C)$ for this problem, where $C$ depends only on $\epsilon$ and $m$. When $r_j = 0$ for every $J_j \in J$, problem $P_m|r_j, rej|C_{\max} + \sum w_j$ can be regarded as a special case of the model of Fishkin et al. (2008).

1.3 Contributions of the work

The major contributions of the paper are as follows. We are the first to develop a PTAS for problem $P|r_j, rej|C_{\max} + \sum w_j$. Due to the existence of job release dates, the development of our PTAS requires different techniques to handle jobs with very short processing times, which is quite different from the design of traditional PTASs in parallel machine scheduling research. Our PTAS has a time complexity of $O(n \log n + m^{O(\frac{1}{\epsilon^2} \log \frac{1}{\epsilon})})$. We also present a 2-approximation algorithm with a time complexity of $O(n \log n)$, which improves upon the $O(n^2)$ time complexity of the previous 2-approximation algorithm developed by Zhang and Lu (2016). For the special case when $m$ is fixed, we develop an FPTAS with a time complexity of $O(n \log n + \frac{1}{\epsilon^{3m+6}})$, which improves upon the $O(n^{2m+1}/\epsilon^m)$ time complexity of the previous FPTAS developed by Zhang and Lu (2016).

The rest of the paper is organized as follows. In Sect. 2 we develop a new 2-approximation algorithm. In Sect. 3, we first present a PTAS for the general problem, followed by an FPTAS for the special case when $m$ is fixed. Some concluding remarks are provided in Sect. 4. All mathematical proofs are provided in the "Appendix".
2 A 2-approximation algorithm

For problem $P|r_j, rej|C_{\max} + \sum w_j$, Zhang and Lu (2016) have proposed a 2-approximation algorithm with an $O(n^2)$ time complexity. Ou et al. (2016) have developed a 2-approximation algorithm with a time complexity of $O(n \log n)$ for the special case when $m = 1$ (i.e., there is only a single machine). Let H denote our 2-approximation algorithm for problem $P|r_j, rej|C_{\max} + \sum w_j$. Algorithm H shares the main time-complexity-reduction idea of the 2-approximation algorithm in Ou et al. (2016): both algorithms use a similar $O(n)$-time procedure to determine the set of jobs to accept. However, H differs from the 2-approximation algorithm in Ou et al. (2016) in two aspects. First, in the acceptance procedure of H the jobs are sorted in non-decreasing order of their $r_j + p_j$ values, while the algorithm in Ou et al. (2016) sorts the jobs in non-decreasing order of their $r_j$ values. Second, the algorithm in Ou et al. (2016) assigns the accepted jobs to the single machine by a simple earliest-release-date-first rule, whereas H assigns the jobs to the machines by a modified-longest-processing-time-first rule. The job assignment rule of H plays an important role in achieving the performance ratio of 2.
We now introduce algorithm H in detail. In H we first sort and re-index the jobs in non-decreasing order of their $r_j + p_j$ values, so that $r_1 + p_1 \le r_2 + p_2 \le \cdots \le r_n + p_n$. Let $W_j = \sum_{k=1}^{j} w_k$, $\mathcal{J}_j = \{J_k \mid k = 1, 2, \dots, j\}$ and $Y_j = \{J_k \in \mathcal{J}_j \mid \frac{p_k}{m} < w_k\}$ for $j = 1, 2, \dots, n$. Also let $z_j = \frac{1}{m} \sum_{J_k \in Y_j} p_k$ and $\bar{W}_j = W_n - \sum_{J_k \in Y_j} w_k$ for $j = 1, 2, \dots, n$. Given the job sequence $J_1, J_2, \dots, J_n$, all of the values $z_j$, $W_j$ and $\bar{W}_j$ can be precomputed in $O(n)$ time. For $j = 1, 2, \dots, n$, we define $\bar{Z}_j = r_j + p_j + z_j + \bar{W}_j$. Let $\bar{Z}_\eta = \min\{\bar{Z}_j \mid j = 1, 2, \dots, n\}$ for some index $\eta$.

Given the value of $\eta$, we construct a feasible solution $\sigma$ to problem $P|r_j, rej|C_{\max} + \sum w_j$ as follows. In $\sigma$, we accept exactly the jobs in $Y_\eta$ and reject all of the others. Let $t = |Y_\eta|$ be the number of jobs in $Y_\eta$. We sort the jobs in $Y_\eta$ in non-increasing order of processing times, so that $p_{q_1} \ge p_{q_2} \ge \cdots \ge p_{q_t}$. We then assign the $t$ jobs in $Y_\eta$ to the $m$ machines by a modified-longest-processing-time-first (MLPT) procedure, described as follows. Each time we select the longest job remaining in $Y_\eta$. Suppose that jobs $J_{q_1}, J_{q_2}, \dots, J_{q_{j-1}}$ have been assigned to the $m$ machines, and we now consider the assignment of job $J_{q_j}$. Let $V_i$ denote the completion time of the last job assigned to machine $M_i$ so far, and let $i^* = \arg\min\{V_i \mid i = 1, 2, \dots, m\}$. We assign job $J_{q_j}$ to $M_{i^*}$ and start processing it on $M_{i^*}$ at time $\max\{V_{i^*}, r_{q_j}\}$.

It is easy to implement H in $O(n \log n)$ time. We have the following theorem, whose proof is based on the construction of an auxiliary single-machine schedule.

Theorem 1 Algorithm H is a 2-approximation algorithm for problem $P|r_j, rej|C_{\max} + \sum w_j$, with a time complexity of $O(n \log n)$.
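The acceptance rule and the MLPT assignment described above can be sketched in a few lines. The code below is our own illustrative sketch (names and data layout are assumptions, not the authors' implementation); it returns the objective value and the accepted set.

```python
import heapq

def algorithm_H(jobs, m):
    """Sketch of the 2-approximation H: jobs is a list of (r, p, w)
    triples; returns (objective value, list of accepted job indices)."""
    n = len(jobs)
    # Step 1: sort jobs in non-decreasing order of r_j + p_j.
    order = sorted(range(n), key=lambda j: jobs[j][0] + jobs[j][1])
    W_n = sum(w for _, _, w in jobs)
    z, W_bar = 0.0, float(W_n)          # z_j and \bar{W}_j, kept incrementally
    best, eta_pos = float("inf"), -1
    for pos, j in enumerate(order):
        r, p, w = jobs[j]
        if p / m < w:                   # J_j enters Y_j
            z += p / m
            W_bar -= w
        if r + p + z + W_bar < best:    # \bar{Z}_j
            best, eta_pos = r + p + z + W_bar, pos
    # Accept exactly the jobs of Y_eta.
    accepted = [j for j in order[:eta_pos + 1] if jobs[j][1] / m < jobs[j][2]]
    # Step 2: MLPT -- longest job first, onto the least-loaded machine.
    accepted.sort(key=lambda j: -jobs[j][1])
    loads = [(0.0, i) for i in range(m)]
    heapq.heapify(loads)
    cmax = 0.0
    for j in accepted:
        r, p, _ = jobs[j]
        v, i = heapq.heappop(loads)
        finish = max(v, r) + p          # start at max{V_i, r_{q_j}}
        heapq.heappush(loads, (finish, i))
        cmax = max(cmax, finish)
    penalty = W_n - sum(jobs[j][2] for j in accepted)
    return cmax + penalty, sorted(accepted)
```

Note that the acceptance scan is a single $O(n)$ pass after sorting, which is where the overall $O(n \log n)$ bound comes from.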
3 Approximation schemes

In this section we first develop a PTAS for problem $P|r_j, rej|C_{\max} + \sum w_j$, followed by an FPTAS for problem $P_m|r_j, rej|C_{\max} + \sum w_j$. Let $\epsilon$ be an arbitrarily small constant with $0 < \epsilon < 1$. Let $\hat{\epsilon} = \epsilon/8$, and assume that $1/\hat{\epsilon}$ is an integer. (If $1/\hat{\epsilon}$ is not integral, we replace $\hat{\epsilon}$ by $\hat{\epsilon}' = \frac{1}{\lceil 1/\hat{\epsilon} \rceil}$. It is clear that $\hat{\epsilon}' < \hat{\epsilon}$ and $O(1/\hat{\epsilon}') = O(1/\hat{\epsilon})$; hence, replacing $\hat{\epsilon}$ by $\hat{\epsilon}'$ does not affect the validity of our algorithms.)
3.1 PTAS

In this subsection we develop a PTAS for $P|r_j, rej|C_{\max} + \sum w_j$, making use of some techniques from Mastrolilli (2003) and Li et al. (2015). It is usually easy to handle jobs with very short processing times in the design of a PTAS for a parallel machine scheduling problem. However, as we will see later, due to the presence of job rejection and release dates, the biggest challenge in developing our PTAS is deciding which small jobs to reject so as to keep the total rejection penalty under control.

In our PTAS, we first apply algorithm H to obtain a solution value $Z^H$. By Theorem 1, we have $Z^* \le Z^H \le 2Z^*$. Let $LB = Z^H/2$; then $LB$ is a lower bound on $Z^*$. Given $Z^H$, it is optimal to reject any job $J_j$ with $r_j + p_j > Z^H$. Moreover, replacing $w_j$ by $Z^H$ whenever $w_j > Z^H$ does not affect the validity of our PTAS. Without loss of generality, we therefore assume $r_j + p_j \le Z^H$ and $w_j \le Z^H$ for $j = 1, 2, \dots, n$. We divide the release date, processing time and rejection penalty of each job by $LB$. Then $0 \le r_j, p_j, w_j \le 2$ for any $J_j \in J$, and $1 \le Z^* \le 2$.

The main idea of our PTAS is as follows. We first construct a problem instance $I_1$ by rounding down each $r_j$ value to the nearest multiple of $\hat{\epsilon}$. Based on instance $I_1$, we then construct problem instance $I_2$ by a job merging procedure, in which all small jobs that have the same rounded release date and a processing time less than $\hat{\epsilon}^2$ are merged into composed jobs. Based on instance $I_2$, we construct problem instance $I_3$ by scaling the processing time of each job (including the composed jobs). All jobs with the same rounded processing time and rounded release date are grouped into a category in instance $I_3$. The number of different job categories in instance $I_3$ depends only on $\hat{\epsilon}$ (i.e., it can be regarded as a constant). We try all possible combinations of how many jobs of each category are accepted in an optimal solution, and the number of combinations that we try is polynomial. As we will show later, an optimal solution to instance $I_3$ can be determined in polynomial time. Based on the optimal solution to $I_3$, we construct a feasible solution to the original problem. The gap between the value of the optimal solution to $I_3$ and the value of the constructed feasible solution to the original problem approaches zero as $\hat{\epsilon}$ approaches zero.

For ease of presentation, we let $Z(\pi_\beta)$ denote the solution value of any feasible solution $\pi_\beta$ to instance $I_\beta$ ($\beta = 1, 2, 3$) in the following analysis.

We now begin the presentation of our PTAS. We first introduce the release date rounding procedure, denoted by $P^1$. In procedure $P^1$, we construct a problem instance $I_1$ identical to the original problem except that the release date of each job is rounded
down to the nearest multiple of $\hat{\epsilon}$. Consider instance $I_1$. We refer to the jobs with processing times less than $\hat{\epsilon}^2$ as small jobs and to the other jobs as big jobs. Let $h$ be the number of different release dates in $I_1$. Recall that after dividing by $LB$ we have $r_j \le 2$ for any $J_j \in J$. As a result, the number of different rounded release dates is bounded by $2/\hat{\epsilon}$, i.e., $h \le 2/\hat{\epsilon}$. Let $\hat{r}_1, \hat{r}_2, \dots, \hat{r}_h$ be the rounded release dates. We refer to a job with release date $\hat{r}_\mu$ as a type-$\mu$ job ($\mu = 1, 2, \dots, h$).

Consider the small jobs in $I_1$. We partition the small jobs into $h$ subsets $S_1, S_2, \dots, S_h$, where the jobs in $S_\mu$ have the same release date $\hat{r}_\mu$ ($\mu = 1, 2, \dots, h$). Let $S = \cup_{\mu=1}^{h} S_\mu$ denote the set of all small jobs, and $B$ the set of all big jobs. How to handle the small jobs is important in our PTAS. We now present a job merging procedure, denoted by $P^2$, to handle them. In procedure $P^2$, we first sequence the jobs in every $S_\mu$ in non-increasing order of $w_j/p_j$, for each $\mu = 1, 2, \dots, h$. We then merge the small jobs in $S_\mu$ as follows. Suppose that $J_a$ and $J_b$ are the two small jobs remaining in $S_\mu$ with the largest $w_j/p_j$ values. We merge these two jobs into a composed job $J_c$ with release date $\hat{r}_\mu$, processing time $p_a + p_b$, and rejection penalty $w_a + w_b$. It is clear that the $w_j/p_j$ values of the remaining small jobs are no greater than $(w_a + w_b)/(p_a + p_b)$. If $J_c$ is still a small job, it is selected to be merged with the next small job remaining in $S_\mu$. If $J_c$ has become a big job, we merge the next two small jobs remaining in $S_\mu$ after $J_c$ to form the next composed job. We repeat this merging procedure until each subset $S_\mu$ contains at most one small job. If a small job remains in $S_\mu$, we replace its processing time by $\hat{\epsilon}^2$ and also call it a composed job.
Define $I_2$ to be the instance obtained by applying the above merging procedure to instance $I_1$. Note that no small job exists in instance $I_2$. It takes only $O(n \log n)$ time to complete the job merging process and construct instance $I_2$. In the following analysis, let $\hat{S}_\mu$ denote the job set $S_\mu$ after executing procedure $P^2$, and still let $S_\mu$ represent the original set of small jobs with the same release date $\hat{r}_\mu$. Let $\hat{S} = \cup_{\mu=1}^{h} \hat{S}_\mu$ be the set of composed jobs in instance $I_2$. Clearly, the processing time of a composed job lies in the interval $[\hat{\epsilon}^2, 2\hat{\epsilon}^2)$, and there is no small job in any $\hat{S}_\mu$. For $\beta = 1, 2$, let $\pi_\beta^*$ denote the optimal solution to instance $I_\beta$. We have the following property.

Lemma 1 $Z(\pi_2^*) \le Z(\pi_1^*) + 4\hat{\epsilon}$.
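For a single release-date class, the merging loop of procedure $P^2$ can be sketched as follows. This is our own illustrative code (names are assumptions); jobs are given as (p, w) pairs already sorted in non-increasing w/p order.

```python
def merge_small_jobs(small_jobs, eps_hat):
    """Sketch of procedure P2 for one subset S_mu.
    small_jobs: (p, w) pairs with p < eps_hat**2, pre-sorted in
    non-increasing w/p order. Returns the composed jobs as (p, w) pairs."""
    thr = eps_hat ** 2
    composed, pending = [], list(small_jobs)
    while len(pending) >= 2:
        pa, wa = pending.pop(0)
        pb, wb = pending.pop(0)
        job = (pa + pb, wa + wb)
        if job[0] < thr:
            pending.insert(0, job)   # still small: keep merging it
        else:
            composed.append(job)     # became big: start a fresh pair
    if pending:                      # at most one small job is left over;
        composed.append((thr, pending[0][1]))  # its p is raised to eps_hat^2
    return composed
```

For example, with $\hat\epsilon = 0.5$ (so the small-job threshold is $0.25$), the pairs $(0.2, 2)$, $(0.2, 1)$, $(0.1, 0.1)$ yield the composed jobs $(0.4, 3)$ and $(0.25, 0.1)$.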
Consider instance $I_2$. We now present a procedure, denoted by $P^3$, to scale the processing times of the jobs in $I_2$. In procedure $P^3$, for each job $J_j$ in instance $I_2$, we first find $\rho_j = \max\{l \mid \hat{\epsilon}^2 (1+\hat{\epsilon})^l \le p_j,\ l = 0, 1, 2, \dots\}$. We must have $\hat{\epsilon}^2 (1+\hat{\epsilon})^{\rho_j} \le p_j < \hat{\epsilon}^2 (1+\hat{\epsilon})^{\rho_j + 1}$.
We round $p_j$ down to $\hat{\epsilon}^2 (1+\hat{\epsilon})^{\rho_j}$. Define instance $I_3$ to be the same as $I_2$ except that the processing time of each job is replaced by its rounded processing time.

We will solve instance $I_3$ optimally (using an approach described later). Let $\pi_3^*$ denote the optimal solution to instance $I_3$ determined by our approach, and let $Z(\pi_3^*)$ be its solution value. Based on $\pi_3^*$, we determine a feasible solution $\pi_3^o$ to the original problem as follows. To construct $\pi_3^o$, any job of $I_3$ that is accepted and assigned to machine $M_i$ ($1 \le i \le m$) in $\pi_3^*$ is also assigned to machine $M_i$ in $\pi_3^o$, and any job of $I_3$ that is rejected in $\pi_3^*$ is also rejected in $\pi_3^o$. Every composed job is replaced by its corresponding original small jobs in $\pi_3^o$. The release date, processing time and rejection penalty of each job take their original values in $\pi_3^o$. Finally, the jobs assigned to the same machine are sequenced in non-decreasing order of release dates, and each job starts as early as possible. This completes the construction of solution $\pi_3^o$. It is clear that $\pi_3^o$ is feasible for the original problem. We have the following property.

Lemma 2 $Z(\pi_3^o) \le (1 + 2\hat{\epsilon}) Z(\pi_2^*)$.
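Procedure $P^3$ rounds each processing time down onto the geometric grid $\hat\epsilon^2(1+\hat\epsilon)^l$. A minimal sketch (our own illustration; it assumes $p \ge \hat\epsilon^2$ and guards against floating-point boundary errors):

```python
import math

def round_down_geometric(p, eps_hat):
    """Sketch of procedure P3: return eps_hat^2 * (1 + eps_hat)^rho,
    the largest grid point not exceeding p (requires p >= eps_hat**2)."""
    base = eps_hat ** 2
    rho = int(math.log(p / base, 1 + eps_hat))
    # adjust for floating-point rounding at grid boundaries
    while base * (1 + eps_hat) ** (rho + 1) <= p:
        rho += 1
    while rho > 0 and base * (1 + eps_hat) ** rho > p:
        rho -= 1
    return base * (1 + eps_hat) ** rho
```

For instance, with $\hat\epsilon = 0.5$ the grid is $0.25, 0.375, 0.5625, 0.84375, \dots$, and $p = 1$ is rounded down to $0.84375$; the rounding loses at most a factor $1+\hat\epsilon$.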
We next present an approach to determine an optimal solution to instance $I_3$. We first categorize the jobs in such a way that two jobs belong to the same category if they have the same release date and the same processing time. Let $\tau$ be the number of different categories. We have the following property.

Lemma 3 $\tau \le \frac{7}{\hat{\epsilon}^2} \log_2 \frac{1}{\hat{\epsilon}}$.

We index the $\tau$ job categories as $1, 2, \dots, \tau$, and assume that they are indexed in non-decreasing order of job release dates. Let $n_\ell$ denote the total number of jobs in category $\ell$, $\ell = 1, 2, \dots, \tau$. The jobs in each category are sorted in non-increasing order of their $w_j$ values. Since $Z^* \le 2$ and the processing time of each job in $I_3$ is no less than $\hat{\epsilon}^2$, it suffices to accept at most $\min\{\frac{2m}{\hat{\epsilon}^2}, n\}$ jobs of $I_3$ for processing on the $m$ machines. We call a vector $U = (u_1, u_2, \dots, u_\tau)$ with $\sum_{\kappa=1}^{\tau} u_\kappa \le \frac{2m}{\hat{\epsilon}^2}$ a job acceptance combination if the first $u_\ell$ jobs of category $\ell$ are accepted (in some feasible solution) for $\ell = 1, 2, \dots, \tau$. For each given job acceptance combination $U = (u_1, u_2, \dots, u_\tau)$, we will determine a feasible solution to instance $I_3$ in which the first $u_\ell$ jobs of category $\ell$ are accepted (while the last $n_\ell - u_\ell$ jobs of category $\ell$ are rejected if $n_\ell > u_\ell$), and all of the accepted jobs are scheduled on the $m$ machines with minimal makespan. Let $g = \min\{\frac{2m}{\hat{\epsilon}^2}, n\}$. Then the number of different job acceptance combinations is bounded by $O(g^\tau)$, which is polynomial.

As we will show later, determining such a schedule of all accepted jobs on the machines with minimal makespan takes polynomial time. For each given job acceptance
combination $U = (u_1, u_2, \dots, u_\tau)$, the number of jobs in consideration is bounded by $g$, and we make use of the method proposed in Mastrolilli (2003) to determine an optimal schedule. In this method, we first construct an integer linear program, just as formulation (ILP) shown on page 528 of Mastrolilli (2003), and then solve it optimally by Lenstra's algorithm (1983) in $O(\log^{O(1)} g + f)$ time, where $f$ depends only (exponentially) on $1/\hat{\epsilon}$. This indicates that an optimal schedule for each given job acceptance combination $U$ can be determined in polynomial time. After trying all possible job acceptance combinations, we obtain an optimal solution to instance $I_3$.

Recall that the optimal solution to instance $I_3$ is used to construct a feasible solution $\pi_3^o$ to problem $P|r_j, rej|C_{\max} + \sum w_j$. We now show that $Z(\pi_3^o)$ can be made arbitrarily close to $Z^*$. Clearly, $Z(\pi_1^*) \le Z^*$. Note that $0 < \hat{\epsilon} = \epsilon/8 < 1/8$. By Lemmas 1 and 2, we have

$Z(\pi_3^o) \le (1 + 2\hat{\epsilon}) Z(\pi_2^*) \le (1 + 2\hat{\epsilon})(Z(\pi_1^*) + 4\hat{\epsilon}) \le (1 + 2\hat{\epsilon})(Z^* + 4\hat{\epsilon}) = Z^* + 4\hat{\epsilon} + 2\hat{\epsilon} Z^* + 8\hat{\epsilon}^2 < Z^* + 4\hat{\epsilon} + 2\hat{\epsilon} Z^* + \hat{\epsilon} \le (1 + 7\hat{\epsilon}) Z^* < (1 + \epsilon) Z^*.$

We now consider the overall time complexity of generating $\pi_3^o$. We first apply algorithm H, with a time complexity of $O(n \log n)$, to determine the value of $LB$. Given $LB$, it takes $O(n)$ time to scale the release date of each job and construct instance $I_1$, $O(n \log n)$ time to merge the small jobs and construct instance $I_2$, and $O(n)$ time to scale the processing time of each big job and construct instance $I_3$. After instance $I_3$ is constructed, we try at most $O(g^\tau)$ different job acceptance combinations. For each given job acceptance combination, the number of jobs in consideration is bounded by $g$, and it takes $O(\log^{O(1)} g + f)$ time to solve the corresponding integer program by Lenstra's algorithm. Based on the optimal solution to $I_3$, solution $\pi_3^o$ is easily generated in $O(n \log n)$ time. Note that $\tau \le \frac{7}{\hat{\epsilon}^2} \log_2 \frac{1}{\hat{\epsilon}}$ (by Lemma 3), $g \le \frac{2m}{\hat{\epsilon}^2}$, and $\hat{\epsilon} = \epsilon/8$. Thus, in total it takes

$O(n \log n + g^\tau \cdot f) = O\Big(n \log n + \Big(\frac{2m}{\hat{\epsilon}^2}\Big)^{\frac{7}{\hat{\epsilon}^2} \log_2 \frac{1}{\hat{\epsilon}}} \cdot f\Big) = O\big(n \log n + m^{O(\frac{1}{\epsilon^2} \log \frac{1}{\epsilon})} \cdot \hat{f}\big)$

time to obtain $\pi_3^o$, where $\hat{f} = \big(\frac{2}{\hat{\epsilon}^2}\big)^{\frac{7}{\hat{\epsilon}^2} \log_2 \frac{1}{\hat{\epsilon}}} \cdot f$ is a constant that depends only on $\epsilon$ and not on the input. In summary, we have the following theorem.

Theorem 2 Problem $P|r_j, rej|C_{\max} + \sum w_j$ admits a PTAS with a time complexity of $O\big(n \log n + m^{O(\frac{1}{\epsilon^2} \log \frac{1}{\epsilon})}\big)$.
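The enumeration of job acceptance combinations is straightforward to prototype. The toy sketch below (our own illustration, not the paper's code) lists all vectors $U$ with $\sum_\kappa u_\kappa \le g$; each such vector would then be scheduled via the ILP of Mastrolilli (2003), which we do not reproduce here.

```python
from itertools import product

def acceptance_combinations(category_sizes, g):
    """Enumerate all acceptance vectors U = (u_1, ..., u_tau) with
    0 <= u_l <= n_l and sum(U) <= g; there are O(g^tau) of them."""
    ranges = [range(min(n_l, g) + 1) for n_l in category_sizes]
    return [U for U in product(*ranges) if sum(U) <= g]
```

For two categories of sizes 2 and 3 with $g = 3$, this yields 9 vectors, from $(0, 0)$ (reject everything) up to $(2, 1)$ and $(0, 3)$; since $\tau$ and $g$ are bounded by functions of $\hat\epsilon$ (and $m$), the enumeration is polynomial.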
3.2 FPTAS when m is fixed

Zhang and Lu (2016) proposed an FPTAS for problem $P_m|r_j, rej|C_{\max} + \sum w_j$. They claimed that their FPTAS has a time complexity of $O(n^{2m+1}/\epsilon^m)$. We would like to point out that their FPTAS can easily be implemented in $O(n^{m+1}/\epsilon^m)$ time. In their FPTAS, they first defined $M = \frac{\epsilon Z}{2(n+1)}$, where $Z$ is the value of the solution generated by a 2-approximation algorithm. They then defined an instance $I'$ in which all of the rounded release dates and rounded processing times are multiples of $M$, and solved $I'$ optimally by dynamic programming. They showed that the makespan of an optimal solution of instance $I'$ is equal to $k \cdot M$ for some $k \in \{0, 1, 2, \dots, \frac{2n(n+1)}{\epsilon}\}$, and thus their FPTAS has a time complexity of $O(n \cdot (\frac{2n(n+1)}{\epsilon})^m) = O(n^{2m+1}/\epsilon^m)$. Actually, by the definition of $M$, we have $Z = \frac{2M(n+1)}{\epsilon}$, which implies that $k \in \{0, 1, 2, \dots, \frac{2(n+1)}{\epsilon}\}$. Hence, their FPTAS can be implemented in $O(n \cdot (\frac{2(n+1)}{\epsilon})^m) = O(n^{m+1}/\epsilon^m)$ time.

In the following we present a more efficient FPTAS for $P_m|r_j, rej|C_{\max} + \sum w_j$. Our FPTAS is similar to the PTAS presented in the above subsection, except that we use a different procedure $\tilde{P}^3$ to construct an instance $\tilde{I}_3$ instead of using procedure $P^3$ to construct instance $I_3$, and we use a different exact algorithm to solve instance $\tilde{I}_3$. Note that the processing time of each job in instance $I_2$ lies in $[\hat{\epsilon}^2, 2]$. In procedure $\tilde{P}^3$, for each job $J_j$ in instance $I_2$, we first find the integer $\tilde{\rho}_j$ such that $\frac{\hat{\epsilon}^3}{2} \cdot \tilde{\rho}_j < p_j \le \frac{\hat{\epsilon}^3}{2} \cdot (\tilde{\rho}_j + 1)$. We must have $\tilde{\rho}_j \in \{1, 2, \dots, \frac{4}{\hat{\epsilon}^3} - 1\}$. We round $p_j$ down to $\frac{\hat{\epsilon}^3}{2} \cdot \tilde{\rho}_j$. Instance $\tilde{I}_3$ is defined to be the same as $I_2$ except that the processing time of each job is replaced by its rounded processing time. Let $\tilde{\tau}$ be the number of different job categories in instance $\tilde{I}_3$. It is clear that $\tilde{\tau} \le \frac{2}{\hat{\epsilon}} \cdot (\frac{4}{\hat{\epsilon}^3} - 1) \le \frac{8}{\hat{\epsilon}^4}$.

Lemma 4 $\tilde{\tau} \le \frac{8}{\hat{\epsilon}^4}$.

We now develop a different exact algorithm to solve instance $\tilde{I}_3$. Let $n'$ be the number of jobs in $\tilde{I}_3$; we have $n' \le n$. Note that for each category of jobs we only need to consider the first $\frac{2m}{\hat{\epsilon}^2}$ jobs for acceptance. This indicates that in $\tilde{I}_3$ the number of jobs considered for acceptance is bounded by $\tilde{n} = \min\{n', \tilde{\tau} \cdot \frac{2m}{\hat{\epsilon}^2}\} = \min\{n', \frac{16m}{\hat{\epsilon}^6}\}$. Without loss of generality, we assume that there are only $\tilde{n}$ jobs in $\tilde{I}_3$ (otherwise we reject the other $n' - \tilde{n}$ jobs, which are not considered for acceptance). Re-index the $\tilde{n}$ jobs of $\tilde{I}_3$ in non-decreasing order of their scaled release dates. Let $\tilde{J}_j$ denote the $j$-th job of $\tilde{I}_3$, and let $\tilde{r}_j$ be the scaled release date and $\tilde{p}_j$ be the scaled
processing time of $\tilde{J}_j$. We then have $\tilde{r}_1 \le \tilde{r}_2 \le \cdots \le \tilde{r}_{\tilde{n}}$.

Let $\Delta = \hat{\epsilon}^3/2$; then $\frac{4}{\hat{\epsilon}^3} \cdot \Delta = 2$. Note that for any $j = 1, 2, \dots, \tilde{n}$, each $\tilde{r}_j$ is a multiple of $\hat{\epsilon}$, and hence a multiple of $\hat{\epsilon}^3/2 = \Delta$, i.e., $\tilde{r}_j \in \{0, \Delta, 2\Delta, \dots, \frac{4}{\hat{\epsilon}^3}\Delta\}$. Also, each $\tilde{p}_j$ is a multiple of $\hat{\epsilon}^3/2 = \Delta$, i.e., $\tilde{p}_j \in \{0, \Delta, 2\Delta, \dots, \frac{4}{\hat{\epsilon}^3}\Delta\}$. It is then not hard to see that the makespan of each machine must be a multiple of $\Delta$. We make use of this insight to develop an exact dynamic programming algorithm to solve $\tilde{I}_3$.

Let $I_j$ denote the problem instance containing only the first $j$ jobs $\tilde{J}_1, \tilde{J}_2, \dots, \tilde{J}_j$ of $\tilde{I}_3$. For any $j = 1, 2, \dots, \tilde{n}$, $i = 1, \dots, m$ and $g_i \in \{0, \Delta, 2\Delta, \dots, \frac{4}{\hat{\epsilon}^3}\Delta\}$, we define $G_j(g_1, \dots, g_m)$ as the minimum total rejection penalty of instance $I_j$ such that the makespan of the accepted jobs assigned to $M_i$ is exactly equal to $g_i$, for $i = 1, \dots, m$. In the computation of $G_j(g_1, \dots, g_m)$, if $\tilde{J}_j$ is rejected, then $G_j(g_1, \dots, g_m) = G_{j-1}(g_1, \dots, g_m) + w_j$. If $\tilde{J}_j$ is accepted and processed by machine $M_i$, then two cases can occur. In the first case, $g_i > \tilde{r}_j + \tilde{p}_j$ (job $\tilde{J}_j$ starts at time $g_i - \tilde{p}_j > \tilde{r}_j$), and thus

$G_j(g_1, \dots, g_m) = \min_{i = 1, \dots, m:\ g_i > \tilde{r}_j + \tilde{p}_j} G_{j-1}(g_1, \dots, g_{i-1}, g_i - \tilde{p}_j, g_{i+1}, \dots, g_m).$

In the second case, $g_i = \tilde{r}_j + \tilde{p}_j$ (job $\tilde{J}_j$ starts exactly at its release date), and thus

$G_j(g_1, \dots, g_m) = \min_{i = 1, \dots, m:\ g_i = \tilde{r}_j + \tilde{p}_j}\ \min_{0 \le u \le \tilde{r}_j} G_{j-1}(g_1, \dots, g_{i-1}, u, g_{i+1}, \dots, g_m).$

We have the following dynamic program DP:

(I) Recurrence relation: for $j = 1, 2, \dots, \tilde{n}$, $i = 1, \dots, m$ and $g_i = 0, \Delta, 2\Delta, \dots, \frac{4}{\hat{\epsilon}^3}\Delta$,

$G_j(g_1, \dots, g_m) = \min\Big\{ G_{j-1}(g_1, \dots, g_m) + w_j,\ \min_{i:\ g_i > \tilde{r}_j + \tilde{p}_j} G_{j-1}(g_1, \dots, g_{i-1}, g_i - \tilde{p}_j, g_{i+1}, \dots, g_m),\ \min_{i:\ g_i = \tilde{r}_j + \tilde{p}_j}\ \min_{0 \le u \le \tilde{r}_j} G_{j-1}(g_1, \dots, g_{i-1}, u, g_{i+1}, \dots, g_m) \Big\}.$

(II) Boundary condition: $G_0(g_1, \dots, g_m) = 0$ if $g_1 = \cdots = g_m = 0$, and $G_0(g_1, \dots, g_m) = +\infty$ otherwise.

(III) Objective: $\min\big\{ G_{\tilde{n}}(g_1, \dots, g_m) + \max_{1 \le i \le m} g_i \ \big|\ g_i \in \{0, \Delta, \dots, \frac{4}{\hat{\epsilon}^3}\Delta\},\ i = 1, \dots, m \big\}.$
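On the $\Delta$-grid the DP is easy to prototype. The sketch below is our own illustration (not the paper's implementation); it measures $\tilde r_j$, $\tilde p_j$ and the machine loads in integer multiples of $\Delta$, so all comparisons are exact.

```python
from itertools import product

def dp_fixed_m(jobs, m, G):
    """Sketch of DP: jobs = [(r, p, w)] with r, p integers on the
    Delta-grid, sorted by release date; loads g_i range over 0..G.
    Returns the optimal makespan-plus-penalty (makespan in grid units)."""
    INF = float("inf")
    grid = list(product(range(G + 1), repeat=m))
    # boundary condition G_0: only the all-zero load vector costs 0
    prev = {g: (0.0 if all(x == 0 for x in g) else INF) for g in grid}
    for r, p, w in jobs:
        cur = {}
        for g in grid:
            best = prev[g] + w                  # reject J_j
            for i in range(m):
                if g[i] > r + p:                # case 1: starts after r_j
                    best = min(best, prev[g[:i] + (g[i] - p,) + g[i+1:]])
                elif g[i] == r + p:             # case 2: starts exactly at r_j
                    for u in range(r + 1):
                        best = min(best, prev[g[:i] + (u,) + g[i+1:]])
            cur[g] = best
        prev = cur
    return min(prev[g] + max(g) for g in grid)  # objective (III)
```

For example, with a single machine, $G = 5$ and jobs $[(0, 2, 10), (1, 2, 1)]$, the optimum is 3: accept the first job (makespan 2) and pay the second job's penalty 1.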
We let $\tilde{\pi}_3^*$ denote the optimal solution to instance $\tilde{I}_3$ determined by DP. Based on $\tilde{\pi}_3^*$, we determine a feasible solution $\tilde{\pi}_3^o$ to the original problem by the same method used to construct $\pi_3^o$ in the PTAS developed in the above subsection. Note that both Lemmas 1 and 2 still hold. Recall that $\hat{\epsilon} = \epsilon/8 \le 1/8$ and $1 \le Z^* \le 2$. Hence,

$Z(\tilde{\pi}_3^o) \le (1 + 2\hat{\epsilon}) Z(\pi_2^*) \le (1 + 2\hat{\epsilon})(Z(\pi_1^*) + 4\hat{\epsilon}) \le (1 + 2\hat{\epsilon})(Z^* + 4\hat{\epsilon}) \le (1 + 7\hat{\epsilon}) Z^* \le (1 + \epsilon) Z^*.$

We thus have the following property.

Lemma 5 $Z(\tilde{\pi}_3^o) \le (1 + \epsilon) Z^*$.

We now consider the time complexity of generating $\tilde{\pi}_3^o$ in our FPTAS. We first apply our $O(n \log n)$ algorithm H to determine the value of $LB$. It takes only $O(n \log n)$ time to construct instances $I_2$ and $\tilde{I}_3$. We therefore focus on the complexity of DP. Note that $\tilde{r}_j \in \{0, \Delta, \dots, \frac{4}{\hat{\epsilon}^3}\Delta\}$ and $\tilde{p}_j \in \{0, \Delta, \dots, \frac{4}{\hat{\epsilon}^3}\Delta\}$ for any $j = 1, 2, \dots, \tilde{n}$, and that $g_i \in \{0, \Delta, \dots, \frac{4}{\hat{\epsilon}^3}\Delta\}$ for any $i = 1, 2, \dots, m$. When job $\tilde{J}_j$ is considered in DP, the recursion has at most $(\frac{4}{\hat{\epsilon}^3})^m$ states $G_j(g_1, g_2, \dots, g_m)$ with $g_i > \tilde{r}_j + \tilde{p}_j$, and at most $m \cdot (\frac{4}{\hat{\epsilon}^3})^{m-1}$ states with $g_i = \tilde{r}_j + \tilde{p}_j$. If $g_i > \tilde{r}_j + \tilde{p}_j$, each iteration needs constant computation time; if $g_i = \tilde{r}_j + \tilde{p}_j$, the computation time of each iteration is bounded by $O(\tilde{r}_j/\Delta) = O(\frac{4}{\hat{\epsilon}^3})$. Recall that $\tilde{n} \le \frac{16m}{\hat{\epsilon}^6}$. Therefore, the time complexity of DP is bounded by

$O\Big(\tilde{n} \cdot \Big(\Big(\frac{4}{\hat{\epsilon}^3}\Big)^m + m \cdot \Big(\frac{4}{\hat{\epsilon}^3}\Big)^{m-1} \cdot \frac{4}{\hat{\epsilon}^3}\Big)\Big) = O\Big(\tilde{n} \cdot \Big(\frac{4}{\hat{\epsilon}^3}\Big)^m\Big) = O\Big(\frac{16m}{\hat{\epsilon}^6} \cdot \Big(\frac{4}{\hat{\epsilon}^3}\Big)^m\Big) = O\Big(\frac{1}{\hat{\epsilon}^{3m+6}}\Big).$

As a result, it takes $O(n \log n + \frac{1}{\epsilon^{3m+6}})$ time in total to obtain $\tilde{\pi}_3^o$ in our FPTAS. We thus have the following theorem.

Theorem 3 Problem $P_m|r_j, rej|C_{\max} + \sum w_j$ admits an FPTAS with a time complexity of $O\big(n \log n + \frac{1}{\epsilon^{3m+6}}\big)$.
Remark Li et al. (2015) studied the parallel machine scheduling problem with rejection to minimize the makespan subject to the constraint that the total rejection cost does not exceed a given bound. For the special case when $m$ is fixed, they presented an FPTAS with a time complexity of $O(\frac{1}{\epsilon^{2m+3}} + n^2)$. Fishkin et al. (2008) considered the unrelated parallel machine scheduling problem with job assignment costs. They proposed an FPTAS with a time complexity of $O(n + C)$ for the case when the number of machines $m$ is fixed, where $C$ depends only on $\epsilon$ and $m$. In the development of our FPTAS for problem $P_m|r_j, rej|C_{\max} + \sum w_j$, we make use of some techniques from Li et al. (2015) and Fishkin et al. (2008).
4 Conclusion

In this paper we study problem $P\,|\,r_j,\,rej\,|\,C_{\max} + \sum w_j$. We are the first to develop a PTAS for this scheduling problem. We also present an improved 2-approximation
algorithm for the problem, and an improved FPTAS for the special case when the number of machines is fixed. For future research, it would be interesting to develop a fast approximation algorithm with a better performance ratio, say $7/4$, for problem $P\,|\,r_j,\,rej\,|\,C_{\max} + \sum w_j$. Though He et al. (2016) have proposed a near-linear-time $5/4$-approximation for the special case with a single machine, we have to say that the development of an efficient $7/4$-approximation algorithm for problem $P\,|\,r_j,\,rej\,|\,C_{\max} + \sum w_j$ could be very challenging. Another interesting research topic is to study extensions of the current scheduling model with other objective functions, such as minimizing the total (weighted) completion time or the maximum lateness. It is also worthwhile to study the bicriteria variant of minimizing the makespan of the accepted jobs subject to the constraint that the total rejection cost is no greater than a given bound.

Acknowledgements The authors thank the AE and the anonymous referees for their helpful comments and suggestions. This research was supported in part by the National Natural Science Foundation of China (71101064, 71501051), the Humanities and Social Sciences Research Foundation of the Ministry of Education of China (13YJC630239), and the Foundation for Distinguished Young Teachers in Higher Education of Guangdong Province (YQ201403). The second author is also supported by the Fundamental Research Funds for the Central Universities, and Jinan University Business School Research Grant 56910039.
Appendix

Proof of Theorem 1 Let $Z_H$ denote the objective function value of $\sigma$. Let $C_{\max}(\sigma)$ denote the makespan of the accepted jobs in $\sigma$, and $C_j(\sigma)$ denote the completion time of the accepted job $J_j$ in $\sigma$. Assume that $J_{q\phi}$ is the job completed last in $\sigma$, i.e., $C_{\max}(\sigma) = C_{q\phi}(\sigma)$. Remember that by the definition of $Y_\eta$, for any job $J_{qj} \in Y_\eta$, we have $r_{qj} + p_{qj} \le r_\eta + p_\eta$. According to procedure MLPT, if $\phi \le m$, we have $C_{\max}(\sigma) \le r_\eta + p_\eta$. If $\phi > m$, then on one hand, each job $J_{qi}$ ($i = 1, 2, \ldots, m$) among the $m$ longest jobs in $Y_\eta$ starts to be processed on one machine at its release date, and we thus have $C_{qi} = r_{qi} + p_{qi} \le r_\eta + p_\eta$; on the other hand, no machine idle time exists during the processing of jobs $J_{q(m+1)}, J_{q(m+2)}, \ldots, J_{q\phi}$ at and after the time point $r_\eta + p_\eta$. Therefore, if $\phi > m$, we must have

$$C_{\max}(\sigma) \le r_\eta + p_\eta + \frac{1}{m}\left(\sum_{j=1}^{\phi} p_{qj} - \sum_{j=1}^{m} p_{qj} - p_{q\phi}\right) + p_{q\phi} = r_\eta + p_\eta + \frac{1}{m}\sum_{j=1}^{\phi} p_{qj} - \frac{1}{m}\left(\sum_{j=1}^{m} p_{qj} - (m-1)p_{q\phi}\right).$$

Note that $J_{q1}, J_{q2}, \ldots, J_{qt}$ are in non-increasing order of processing times. We have $p_{qi} \ge p_{q\phi}$ for any $i = 1, 2, \ldots, m$ if $\phi > m$. This indicates that

$$\sum_{j=1}^{m} p_{qj} - (m-1)p_{q\phi} = p_{q1} + \sum_{j=2}^{m}\big(p_{qj} - p_{q\phi}\big) > 0.$$

In addition, by the definition of $z_\eta$, we have

$$\frac{1}{m}\sum_{j=1}^{\phi} p_{qj} \le \frac{1}{m}\sum_{j=1}^{t} p_{qj} = \frac{1}{m}\sum_{J_j \in Y_\eta} p_j = z_\eta.$$

As a result, we have

$$C_{\max}(\sigma) \le r_\eta + p_\eta + \frac{1}{m}\sum_{j=1}^{\phi} p_{qj} - \frac{1}{m}\left(\sum_{j=1}^{m} p_{qj} - (m-1)p_{q\phi}\right) \le r_\eta + p_\eta + z_\eta.$$
We then further have

$$Z_H = C_{\max}(\sigma) + W_\eta \le r_\eta + p_\eta + z_\eta + W_\eta = \bar Z_\eta. \quad (1)$$

We now try to show that

$$Z_H \le 2Z^*. \quad (2)$$

To do so, let $\alpha = \arg\max\{r_j + p_j \mid J_j \in A^*\}$. Then, $J_\alpha$ is the job with maximum release date plus processing time among the jobs in $A^*$. Notice that $\bar Z_\eta \le \bar Z_j$ for any $j = 1, 2, \ldots, n$, and by (1), we then have

$$Z_H \le \bar Z_\eta \le \bar Z_\alpha. \quad (3)$$

To prove (2), we only need to prove that $\bar Z_\alpha = r_\alpha + p_\alpha + z_\alpha + W_\alpha \le 2Z^*$. It is clear that

$$r_\alpha + p_\alpha \le C^*_{\max}. \quad (4)$$

We now show that

$$z_\alpha + W_\alpha \le Z^*. \quad (5)$$
To prove (5), we construct an auxiliary problem AUX, which is the same as our original problem except that (i) there is a single machine instead of $m$ identical parallel machines; and (ii) there is also a set of $n$ jobs $\{\breve J_1, \breve J_2, \ldots, \breve J_n\}$, but the release date $\breve r_j$, the rejection penalty $\breve w_j$ and the processing time $\breve p_j$ of job $\breve J_j$ are defined as follows: $\breve r_j = 0$ for $j = 1, 2, \ldots, n$; $\breve w_j = w_j$ for $j = 1, 2, \ldots, n$; $\breve p_j = p_j/m$ for $j = 1, 2, \ldots, \alpha$; and $\breve p_j = +\infty$ for $j = \alpha+1, \alpha+2, \ldots, n$. Let $\breve\sigma^*$ denote the optimal solution to AUX, and $\breve Z^*$ denote the objective function value of $\breve\sigma^*$. It is clear that $\breve Z^* \le Z^*$ since AUX is a relaxation of the original problem. We now determine the value of $\breve Z^*$. Define

$$\breve Y_\alpha = \big\{\breve J_j \mid j = 1, 2, \ldots, \alpha;\ \breve p_j < \breve w_j\big\} = \big\{\breve J_j \mid j = 1, 2, \ldots, \alpha;\ p_j/m < w_j\big\}.$$

Clearly, the indexes of the jobs in $\breve Y_\alpha$ are the same as those in job set $Y_\alpha$. As a result, we have $\sum_{\breve J_j \in \breve Y_\alpha} \breve p_j = \sum_{J_j \in Y_\alpha} p_j/m = z_\alpha$ and $\sum_{j=1}^{n} \breve w_j - \sum_{\breve J_j \in \breve Y_\alpha} \breve w_j = \sum_{j=1}^{n} w_j - \sum_{J_j \in Y_\alpha} w_j = W_\alpha$. It is easy to see that it is optimal to reject any job $\breve J_j$ with $\breve p_j \ge \breve w_j$, and that all of the remaining jobs are processed on the single machine consecutively without any idle time in $\breve\sigma^*$. As a result,

$$\breve Z^* = \sum_{\breve J_j \in \breve Y_\alpha} \breve p_j + \sum_{j=1}^{n} \breve w_j - \sum_{\breve J_j \in \breve Y_\alpha} \breve w_j = z_\alpha + W_\alpha.$$

This indicates $Z^* \ge \breve Z^* = z_\alpha + W_\alpha$, i.e., inequality (5) holds. This completes the proof.
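The value $\breve Z^* = z_\alpha + W_\alpha$ of the relaxation AUX can be evaluated directly from the definitions above. The following sketch does so; the function name and the input format (a list of $(p_j, w_j)$ pairs plus the index $\alpha$) are our own assumptions for illustration.

```python
def aux_optimal_value(jobs, m, alpha):
    """Optimal value of the relaxation AUX from the proof of Theorem 1.

    jobs: list of (p_j, w_j); alpha: 1-based index of job J_alpha.
    In AUX all release dates are 0, there is a single machine, job
    j <= alpha has processing time p_j / m, and jobs after alpha are
    in effect forced to be rejected (infinite processing time).  With
    all release dates 0 it is optimal to reject exactly the jobs with
    p_breve >= w and run the rest back to back, so the optimum equals
    z_alpha + W_alpha.
    """
    z = 0.0    # total scaled processing time of accepted jobs (z_alpha)
    pen = 0.0  # total rejection penalty of rejected jobs (W_alpha)
    for j, (p, w) in enumerate(jobs, start=1):
        p_breve = p / m if j <= alpha else float("inf")
        if p_breve < w:
            z += p_breve   # accept: contributes processing time
        else:
            pen += w       # reject: pay the penalty
    return z + pen
```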
Proof of Lemma 1 To obtain the inequality $Z(\pi_2^*) \le Z(\pi_1^*) + 4\hat\epsilon$, it suffices to show that there exists a feasible solution $\pi_2$ to $I_2$ such that $Z(\pi_2) \le Z(\pi_1^*) + 4\hat\epsilon$, where $Z(\pi_2)$ is the solution value of $\pi_2$. We assume that the accepted jobs are processed on each machine in non-decreasing order of release dates in $\pi_1^*$. Let $A(\pi_2)$ denote the set of accepted jobs in $\pi_2$, and $R(\pi_2)$ denote the set of rejected jobs in $\pi_2$. Let $\Delta_{i,\mu}$ denote the total processing time of the type-$\mu$ small jobs that are accepted and processed on machine $M_i$ in $\pi_1^*$ ($\mu = 1, 2, \ldots, h$). Let $\Delta_\mu = \sum_{i=1}^{m} \Delta_{i,\mu}$ be the total processing time of the type-$\mu$ small jobs accepted in $\pi_1^*$. Let $A(\pi_1^*)$ denote the set of accepted jobs in $\pi_1^*$, and $R(\pi_1^*)$ denote the set of rejected jobs in $\pi_1^*$.
Let $\gamma_\mu = |\hat S_\mu|$ be the number of jobs in $\hat S_\mu$ for $\mu = 1, 2, \ldots, h$. We let $\hat J_1, \hat J_2, \ldots, \hat J_{\gamma_\mu}$ denote the $\gamma_\mu$ jobs in $\hat S_\mu$. Let $\hat p_j$ denote the processing time of $\hat J_j$, and $\hat w_j$ denote the rejection penalty of $\hat J_j$. Note that after procedure P2, all jobs in $\hat S_\mu$ are in non-increasing order of the values $\hat w_j/\hat p_j$ such that

$$\frac{\hat w_1}{\hat p_1} \ge \frac{\hat w_2}{\hat p_2} \ge \cdots \ge \frac{\hat w_{\gamma_\mu}}{\hat p_{\gamma_\mu}}.$$

For each $\mu = 1, 2, \ldots, h$, we now determine which jobs in $\hat S_\mu$ are accepted or rejected and how to assign the accepted jobs to the $m$ machines so as to construct $\pi_2$. Based on the sequence $\hat J_1, \hat J_2, \ldots, \hat J_{\gamma_\mu}$, we first find an integer $r$ such that

$$\sum_{j=1}^{r-1} \hat p_j < \Delta_\mu \le \sum_{j=1}^{r} \hat p_j. \quad (6)$$
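The choice of $r$ in (6) is a simple prefix computation over the ratio-sorted sequence: take composed jobs in non-increasing order of $\hat w_j/\hat p_j$ until their total processing time first reaches $\Delta_\mu$. A hedged sketch (function name and pair representation are ours):

```python
def select_accepted_prefix(composed, delta_mu):
    """Pick which composed jobs of one small-job type to accept (Lemma 1).

    composed: list of (p_hat, w_hat) pairs.  Jobs are taken in
    non-increasing order of w_hat / p_hat, and we accept the shortest
    prefix whose total processing time is at least delta_mu, as in
    inequality (6); the rest are rejected.
    Returns (accepted, rejected) lists.
    """
    order = sorted(composed, key=lambda jw: jw[1] / jw[0], reverse=True)
    total, r = 0.0, 0
    while r < len(order) and total < delta_mu:
        total += order[r][0]
        r += 1
    return order[:r], order[r:]
```

Because the prefix maximizes accepted penalty weight per unit of processing time, the penalty of the rejected composed jobs is no more than the penalty rejected in $\pi_1^*$, which is exactly what the chain of inequalities below uses.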
Let $\hat S'_\mu = \{\hat J_1, \hat J_2, \ldots, \hat J_r\}$. In solution $\pi_2$ we accept all jobs in $\hat S'_\mu$ and reject all jobs in $\hat S_\mu \setminus \hat S'_\mu$. By the order rule of the sequence $\hat J_1, \hat J_2, \ldots, \hat J_{\gamma_\mu}$, we have

$$\sum_{J_j \in A(\pi_1^*) \cap S_\mu} w_j \le \sum_{\hat J_j \in \hat S'_\mu} \hat w_j,$$

or equivalently,

$$\sum_{J_j \in R(\pi_1^*) \cap S_\mu} w_j \ge \sum_{\hat J_j \in \hat S_\mu \setminus \hat S'_\mu} \hat w_j.$$

We further have

$$\sum_{\mu=1}^{h} \sum_{J_j \in R(\pi_1^*) \cap S_\mu} w_j \ge \sum_{\mu=1}^{h} \sum_{\hat J_j \in \hat S_\mu \setminus \hat S'_\mu} \hat w_j.$$

Remember that $S = \cup_{\mu=1}^{h} S_\mu$ and $\hat S = \cup_{\mu=1}^{h} \hat S_\mu$. Let $\hat S' = \cup_{\mu=1}^{h} \hat S'_\mu$ be the set of all composed jobs accepted in solution $\pi_2$. We thus have

$$\sum_{J_j \in R(\pi_1^*) \cap S} w_j = \sum_{\mu=1}^{h} \sum_{J_j \in R(\pi_1^*) \cap S_\mu} w_j \ge \sum_{\mu=1}^{h} \sum_{\hat J_j \in \hat S_\mu \setminus \hat S'_\mu} \hat w_j = \sum_{\hat J_j \in \hat S \setminus \hat S'} \hat w_j.$$

Note that in solution $\pi_2$ all jobs in $\hat S'$ are accepted while all jobs in $\hat S \setminus \hat S'$ are rejected. This indicates that $R(\pi_2) \cap \hat S = \hat S \setminus \hat S'$. We then have

$$\sum_{J_j \in R(\pi_1^*) \cap S} w_j \ge \sum_{\hat J_j \in \hat S \setminus \hat S'} \hat w_j = \sum_{\hat J_j \in R(\pi_2) \cap \hat S} \hat w_j. \quad (7)$$
We now consider the assignment of the jobs in $\hat S'_\mu$ to the $m$ machines. We consider machines $M_1, M_2, \ldots, M_m$ one by one and in this order. For the current machine $M_i$, $1 \le i < m$, we assign the jobs remaining in $\hat S'_\mu$ to the machine one by one until the total processing time of the jobs in $\hat S'_\mu$ that have been assigned to the machine exceeds $\Delta_{i,\mu}$, or all jobs in $\hat S'_\mu$ have been assigned. Once the total processing time of the jobs assigned to the current machine $M_i$ exceeds $\Delta_{i,\mu}$, no further job in $\hat S'_\mu$ is assigned to that machine, and the next machine $M_{i+1}$ is considered. If some jobs still remain in $\hat S'_\mu$ after the scheduling of jobs on machine $M_{m-1}$ is done, then all of the remaining jobs in $\hat S'_\mu$ are assigned to the last machine $M_m$. This completes the scheduling of all jobs in $\hat S'_\mu$ on the $m$ machines. For each big job $J_j \in B$, we reject it if it is rejected in $\pi_1^*$; otherwise we assign it to the machine that processes it in $\pi_1^*$. For the jobs assigned to the same machine, we sequence them in non-decreasing order of release dates, and let each job start its processing as early as possible. This completes the construction of solution $\pi_2$. Consider such a constructed solution $\pi_2$. Each accepted job in $\pi_2$ is processed no earlier than its rounded release date. Therefore, $\pi_2$ is a feasible solution to $I_2$. For the big jobs, it is clear that

$$\sum_{J_j \in R(\pi_1^*) \cap B} w_j = \sum_{J_j \in R(\pi_2) \cap B} w_j. \quad (8)$$
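The machine-by-machine filling rule just described can be sketched as follows; the function name and the list representation are illustrative assumptions, with `thresholds[i]` playing the role of $\Delta_{i,\mu}$.

```python
def assign_type_jobs(accepted, thresholds):
    """Greedy machine assignment from Lemma 1 (one small-job type).

    accepted: processing times of the accepted composed jobs, in the
    order they are taken.  thresholds[i] corresponds to Delta_{i,mu},
    the total processing time of this type on machine M_i in pi_1*.
    Machines are filled one by one; M_i stops receiving jobs once its
    load for this type exceeds thresholds[i], and the last machine
    takes all leftovers.  Returns per-machine lists of processing times.
    """
    m = len(thresholds)
    buckets = [[] for _ in range(m)]
    i = 0
    for p in accepted:
        # advance once the current machine's threshold is exceeded,
        # but never past the last machine
        while i < m - 1 and sum(buckets[i]) > thresholds[i]:
            i += 1
        buckets[i].append(p)
    return buckets
```

Since each composed job is shorter than $2\hat\epsilon^2$, every machine's load of this type overshoots its $\Delta_{i,\mu}$ by less than $2\hat\epsilon^2$, which is the overshoot used in inequality (10) below.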
Combining (7) and (8), we then have

$$\sum_{J_j \in R(\pi_1^*)} w_j = \sum_{J_j \in R(\pi_1^*) \cap S} w_j + \sum_{J_j \in R(\pi_1^*) \cap B} w_j \ge \sum_{\hat J_j \in R(\pi_2) \cap \hat S} \hat w_j + \sum_{J_j \in R(\pi_2) \cap B} w_j = \sum_{\hat J_j \in R(\pi_2)} \hat w_j. \quad (9)$$
Let $C(\pi_2)$ denote the makespan of the accepted jobs in $\pi_2$. Consider the scheduling of the jobs in $\hat S'_\mu$ for $\mu = 1, 2, \ldots, h$. Remember that any composed job has a processing time less than $2\hat\epsilon^2$, any job in $\hat S_\mu$ is a composed job, and the total processing time of the type-$\mu$ small jobs accepted in $\pi_1^*$ is equal to $\Delta_\mu$. By (6), we then have

$$\sum_{\hat J_j \in \hat S'_\mu} \hat p_j = \sum_{j=1}^{r} \hat p_j \le \Delta_\mu + 2\hat\epsilon^2.$$

Hence, the completion time of the last job on machine $M_i$ ($i = 1, 2, \ldots, m$) in $\pi_2$ will not exceed the completion time of the last job on machine $M_i$ in $\pi_1^*$ by more than $h \cdot (2\hat\epsilon^2) = 2h\hat\epsilon^2$. This indicates that

$$C(\pi_2) \le C(\pi_1^*) + 2h\hat\epsilon^2 \le C(\pi_1^*) + 4\hat\epsilon \quad (10)$$

since $h \le 2/\hat\epsilon$. By (9) and (10), we then have

$$Z(\pi_2) = C(\pi_2) + \sum_{\hat J_j \in R(\pi_2)} \hat w_j \le C(\pi_1^*) + 4\hat\epsilon + \sum_{J_j \in R(\pi_1^*)} w_j \le Z(\pi_1^*) + 4\hat\epsilon.$$
Clearly, $Z(\pi_2^*) \le Z(\pi_2)$, which indicates that the desired inequality holds.

Proof of Lemma 2 Let $C(\pi_3^*)$ denote the makespan of the accepted jobs in $\pi_3^*$, and $C(\pi_3^o)$ denote the makespan of the accepted jobs in $\pi_3^o$. Let $L_i$ denote the set of jobs accepted and assigned to $M_i$ in $\pi_3^*$, $i = 1, 2, \ldots, m$. On one hand, when we replace the processing times of all accepted jobs in $\pi_3^*$ by their original processing times (or the original processing times of the corresponding small jobs), the completion time of the last job processed on $M_i$ increases by an amount no more than

$$\sum_{J_j \in L_i} \left(\hat\epsilon^2(1+\hat\epsilon)^{\rho_j+1} - \hat\epsilon^2(1+\hat\epsilon)^{\rho_j}\right) = \hat\epsilon \sum_{J_j \in L_i} \hat\epsilon^2(1+\hat\epsilon)^{\rho_j} \le \hat\epsilon \cdot C(\pi_3^*)$$

because $\sum_{J_j \in L_i} \hat\epsilon^2(1+\hat\epsilon)^{\rho_j} \le C(\pi_3^*)$. On the other hand, when we sequence the accepted jobs in non-decreasing order of their original release dates, and let each job start as early as possible, we add at most $\hat\epsilon$ to the starting time of each accepted job in $\pi_3^*$. In other words, the completion time of the last job processed on $M_i$ increases by no more than $\hat\epsilon$. We thus have

$$C(\pi_3^o) \le C(\pi_3^*) + \hat\epsilon C(\pi_3^*) + \hat\epsilon \le (1+2\hat\epsilon)C(\pi_3^*). \quad (11)$$
When we replace the rejection penalties of all rejected jobs in $\pi_3^*$ by their original rejection penalties (or the original rejection penalties of the corresponding small jobs; note that we do not change the rejection penalty of any job in the construction of instances $I_1$, $I_2$ and $I_3$), we have

$$\sum_{J_j \in R(\pi_3^o)} w_j = \sum_{J_j \in R(\pi_3^*)} w_j. \quad (12)$$

It is clear that

$$Z(\pi_3^*) \le Z(\pi_2^*). \quad (13)$$

By (11), (12) and (13), we have

$$Z(\pi_3^o) = C(\pi_3^o) + \sum_{J_j \in R(\pi_3^o)} w_j \le (1+2\hat\epsilon)C(\pi_3^*) + \sum_{J_j \in R(\pi_3^*)} w_j \le (1+2\hat\epsilon)Z(\pi_3^*) \le (1+2\hat\epsilon)Z(\pi_2^*).$$

This completes the proof.
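The geometric rounding that drives inequality (11) maps each scaled processing time down to the nearest value of the form $\hat\epsilon^2(1+\hat\epsilon)^{\rho}$, so restoring the original value costs at most a factor $(1+\hat\epsilon)$. A minimal sketch, assuming processing times are already scaled by $LB$ so that they lie between $\hat\epsilon^2$ and $2$ (the function name is ours):

```python
import math

def round_processing_time(p, eps_hat):
    """Geometric rounding behind Lemma 2 (hedged sketch).

    Rounds p down to the nearest value eps_hat^2 * (1 + eps_hat)^rho with
    integer rho >= 0, assuming eps_hat^2 <= p <= 2.  The rounded value v
    satisfies v <= p < v * (1 + eps_hat), so undoing the rounding
    increases any machine load by at most an eps_hat fraction -- the
    eps_hat * C(pi_3*) term in inequality (11).
    """
    rho = math.floor(math.log(p / eps_hat**2, 1 + eps_hat))
    return eps_hat**2 * (1 + eps_hat) ** rho, rho
```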
Proof of Lemma 3 Note that after dividing by $LB$, we have $p_j \le 2$ for $j = 1, 2, \ldots, n$. As a result, the scaled processing time $\hat\epsilon^2(1+\hat\epsilon)^{\rho_j}$ is at most 2, which implies that $0 \le \rho_j \le 1 + \log_{1+\hat\epsilon}(2/\hat\epsilon^2)$. Thus, the number of possible values of $\rho_j$ is no more than $2 + \log_{1+\hat\epsilon}(2/\hat\epsilon^2)$. The number of distinct rounded release dates is no more than $2/\hat\epsilon$. Therefore, the number of different job categories is no more than $\frac{2}{\hat\epsilon}\big(2 + \log_{1+\hat\epsilon}\frac{2}{\hat\epsilon^2}\big)$. Note that $0 < \hat\epsilon < \frac{1}{8}$. As a result, we have $\log_2(1+\hat\epsilon) \ge \hat\epsilon$,

$$\log_{1+\hat\epsilon}\frac{2}{\hat\epsilon^2} = \frac{\log_2\frac{2}{\hat\epsilon^2}}{\log_2(1+\hat\epsilon)} \le \frac{1}{\hat\epsilon}\log_2\frac{2}{\hat\epsilon^2} \le \frac{1}{\hat\epsilon}\log_2\frac{1}{\hat\epsilon^3} = \frac{3}{\hat\epsilon}\log_2\frac{1}{\hat\epsilon},$$

and

$$\tau \le \frac{2}{\hat\epsilon}\left(2 + \log_{1+\hat\epsilon}\frac{2}{\hat\epsilon^2}\right) \le \frac{4}{\hat\epsilon} + \frac{2}{\hat\epsilon}\cdot\frac{3}{\hat\epsilon}\log_2\frac{1}{\hat\epsilon} = \frac{4}{\hat\epsilon} + \frac{6}{\hat\epsilon^2}\log_2\frac{1}{\hat\epsilon} \le \frac{7}{\hat\epsilon^2}\log_2\frac{1}{\hat\epsilon}.$$

This completes the proof.
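The bound on $\tau$ can be checked numerically. The following helper (our own, not from the paper) compares the exact category-count bound $\frac{2}{\hat\epsilon}\big(2+\log_{1+\hat\epsilon}\frac{2}{\hat\epsilon^2}\big)$ with the relaxed form $\frac{7}{\hat\epsilon^2}\log_2\frac{1}{\hat\epsilon}$ for $0 < \hat\epsilon < 1/8$:

```python
import math

def category_bound(eps_hat):
    """Numeric sanity check for the two bounds on tau in Lemma 3.

    Returns (exact_bound, relaxed_bound): the number of rounded
    processing-time classes is at most 2 + log_{1+e}(2/e^2) and the
    number of rounded release dates at most 2/e, so
    tau <= (2/e)(2 + log_{1+e}(2/e^2)) <= (7/e^2) * log2(1/e).
    """
    e = eps_hat
    classes = 2 + math.log(2 / e**2, 1 + e)  # processing-time classes
    exact = (2 / e) * classes                # category count bound
    relaxed = (7 / e**2) * math.log2(1 / e)  # closed-form relaxation
    return exact, relaxed
```

For example, at $\hat\epsilon = 1/16$ the exact bound is a few thousand categories while the relaxed form is larger, as the proof requires.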
References

Bartal Y, Leonardi S, Marchetti-Spaccamela A, Sgall J, Stougie L (2000) Multiprocessor scheduling with rejection. SIAM J Discrete Math 13:64–78
Chen B, Vestjens APA (1997) Scheduling on identical machines: how good is LPT in an on-line setting? Oper Res Lett 21:165–169
Fishkin AV, Jansen K, Mastrolilli M (2008) Grouping techniques for scheduling problems: simpler and faster. Algorithmica 51:183–199
Gusfield D (1984) Bounds for naive multiple machine scheduling with release times and deadlines. J Algorithms 5:1–6
Hall LA, Shmoys DB (1989) Approximation schemes for constrained scheduling problems. In: Proceedings of the 30th annual IEEE symposium on foundations of computer science, pp 134–139
He C, Leung JY-T, Lee K, Pinedo ML (2016) Improved algorithms for single machine scheduling with release dates and rejections. 4OR Q J Oper Res 14:41–55
Lenstra HW Jr (1983) Integer programming with a fixed number of variables. Math Oper Res 8:538–548
Li C-L, Wang XL (2010) Scheduling parallel machines with inclusive processing set restrictions and job release times. Eur J Oper Res 200:702–710
Li WD, Li JP, Zhang XJ, Chen ZB (2015) Penalty cost constrained identical parallel machine scheduling problem. Theor Comput Sci 607:181–192
Mastrolilli M (2003) Efficient approximation schemes for scheduling problems with release dates and delivery times. J Sched 6:521–531
Ou JW, Zhong XL (2016) Order acceptance and scheduling with consideration of service level. Ann Oper Res. doi:10.1007/s10479-016-2277-2
Ou JW, Zhong XL, Wang GQ (2015) An improved heuristic for parallel machine scheduling with rejection. Eur J Oper Res 241:653–661
Ou JW, Zhong XL, Li C-L (2016) Faster algorithms for single machine scheduling with release dates and rejection. Inf Process Lett 116:503–507
Shabtay D, Gaspar N, Kaspi M (2013) A survey on offline scheduling with rejection. J Sched 16:3–28
Shmoys D, Tardos E (1993) An approximation algorithm for the generalized assignment problem. Math Program 62:461–474
Slotnick SA (2011) Order acceptance and scheduling: a taxonomy and review. Eur J Oper Res 212:1–11
Zhang LQ, Lu LF (2016) Parallel-machine scheduling with release dates and rejection. 4OR Q J Oper Res 14:165–172
Zhang LQ, Lu LF, Yuan JJ (2009) Single machine scheduling with release dates and rejection. Eur J Oper Res 198:975–978
Zhong XL, Pan ZM, Jiang DK (2016) Scheduling with release times and rejection on two parallel machines. J Comb Optim. doi:10.1007/s10878-016-0016-x