J Sched DOI 10.1007/s10951-017-0524-2
Exact exponential algorithms for 3-machine flowshop scheduling problems

Lei Shang¹ · Christophe Lenté¹ · Mathieu Liedloff² · Vincent T’Kindt¹
© Springer Science+Business Media New York 2017
Abstract In this paper, we focus on the design of exact exponential-time algorithms with proved worst-case running times for 3-machine flowshop scheduling problems. For the minimization of the makespan criterion, a Dynamic Programming algorithm running in O*(3^n) time is proposed, which improves the current best-known time complexity of 2^{O(n)} × |I|^{O(1)} in the literature, |I| denoting the input size. The idea is based on a dominance condition and on the consideration of the Pareto front in the criteria space. The algorithm can easily be generalized to other problems that have similar structures. The generalization to two problems, namely the F3||fmax and F3||Σf_i problems, is discussed.

Keywords Moderately exponential algorithms · Dynamic programming · Flowshop
1 Introduction

This paper provides a moderately exponential exact algorithm which can be applied to some NP-hard flowshop
✉ Lei Shang
[email protected]

Christophe Lenté
[email protected]

Mathieu Liedloff
[email protected]

Vincent T’Kindt
[email protected]

1 Laboratoire d’Informatique (EA 6300), ERL CNRS OC 6305, Université François-Rabelais de Tours, 64 avenue Jean Portalis, 37200 Tours, France

2 INSA Centre Val de Loire, LIFO EA 4022, Université d’Orléans, 45067 Orléans, France
scheduling problems. For a long time, computer scientists have categorized problems into complexity classes according to the expected difficulty of solving them in the worst case. A well-known class is that of NP-hard problems, which are considered hard in the sense that no polynomial-time algorithm is likely to exist for finding their optimal solutions, unless P = NP. Consequently, the best-known algorithms to optimally solve NP-hard problems are super-polynomial in the worst case, which in practice turn out to be inefficient for solving large instances. However, from the theoretical side, several questions are of importance to evaluate the worst-case tractability of an NP-hard problem. Notably, in which (provable) worst-case running time can such a problem be solved? We may be interested in establishing an upper bound on its worst-case time complexity. The field of moderately exponential exact algorithms provides tools for stating upper bounds which improve upon those of a brute-force search. Having fast exact exponential algorithms may also have an impact on the practical solution of a problem. It is now acknowledged that the traditional view of exponential algorithms being slow needs to be reconsidered: an algorithm with complexity O(1.1^n) runs faster than an algorithm in O(n^4) for n < 224. A recent case study (Akiba and Iwata 2015) on the performance of algorithms for the Vertex Cover problem shows that branch-and-reduce exponential algorithms are actually quite practical and competitive with other state-of-the-art empirical methods. This shows the possibility of achieving promising results in the future, even though currently, for many hard problems, exact exponential algorithms do not seem competitive on average in practice.
For more discussion on exponential algorithms for NP-hard problems, the reader is referred to the survey by Woeginger (2003) and the book by Fomin and Kratsch (2010). In the context of exact exponential algorithms, the O* notation is commonly used to express the worst-case (time and space) complexity of an algorithm: an algorithm runs in O*(α^n) time if there exists a polynomial p such that it runs in O(p(n) · α^n) time, for some constant α. For some NP-hard problems in the literature, for instance Maximum Independent Set, a series of exact exponential algorithms has been proposed, continuously improving the value of α; the best-known algorithm for this problem runs in O(1.2002^n) time (Xiao and Nagamochi 2013). This shows that NP-hard problems can be solved much faster than by brute-force search. In spite of the growing interest in exact exponential algorithms, few results are known for NP-hard scheduling problems. Some results for classic scheduling problems can be found in the survey by Lenté et al. (2014); for instance, it gives for the P3||Cmax problem an exponential algorithm running in O*(1.7321^n) worst-case time. A generalized application of the Sort & Search technique to a set of scheduling problems is discussed in Lenté et al. (2013) to design faster exact algorithms. A discussion of the implications of the Exponential Time Hypothesis for scheduling algorithms is conducted in Jansen et al. (2013): a complexity lower bound of 2^{o(n)} × |I|^{O(1)} is established for a set of problems including P2||Cmax and P2||Σ w_j C_j, which rules out solving them in sub-exponential time unless SNP ⊆ SUBEXP. They also provide an algorithmic framework which solves a number of scheduling problems in time 2^{O(n)} × |I|^{O(1)}. On specific scheduling problems, a dynamic programming algorithm (Gromicho et al. 2012) solves the job-shop scheduling problem in O((p_max)^{2n}(m + 1)^n) time. Another problem, denoted by 1|prec|Σ C_i, admits an algorithm (Cygan et al. 2014) running in O((2 − ε)^n) time for some small ε > 0.
In this paper, we deal with the 3-machine flowshop scheduling problem to minimize the makespan, and two extensions. The problem, denoted by F3||Cmax, is described as follows. Consider n jobs to be scheduled, each of which must be processed first on machine one, then on machine two and machine three, in this order. Each machine can process only one job at a time. We denote by O_ij, i ∈ {1, ..., n}, j ∈ {1, 2, 3}, the operation of job i on machine j, with p_ij a nonnegative integer representing the corresponding processing time. The objective is to find a job sequence that minimizes the makespan, defined as the completion time of the last job on the last machine. Even though there exists a polynomial-time algorithm when the number of machines is two, the problem with m ≥ 3 machines has been proved to be NP-hard (Garey et al. 1976). Extensive research has been done on the design of heuristic and metaheuristic algorithms (see Ruiz and Maroto 2005; Framinan et al. 2004), approximation algorithms (see Smutnicki 1998; Sviridenko 2004) and
also exact methods (see Ladhari and Haouari 2005 and, earlier, Ignall and Schrage 1965; Lomnicki 1965; Brown and Lomnicki 1966; McMahon and Burton 1967; Ashour 1970; Lageweg et al. 1978; Potts 1980; Carlier and Rebaï 1996; Cheng et al. 1997). For a global view of the different approaches to this problem and its numerous variants, we refer the reader to the review of Reza Hejazi and Saghafian (2005). In this paper, we focus on the search for an optimal permutation schedule, with a provable worst-case running time. We provide a theoretical study which aims at establishing upper bounds on the worst-case complexity. Clearly, the problem can be solved trivially by enumerating all possible job permutations, which yields an O*(n!) worst-case time algorithm. The fastest exact algorithm in practice is the Branch & Bound algorithm of Ladhari and Haouari (2005); however, when considering worst-case scenarios, this branching algorithm can hardly be proved to be faster than brute force. The aim of this paper is to design a moderately exponential algorithm, that is, an algorithm running in O*(c^n) time with the constant c as small as possible, improving the currently best complexity result of 2^{O(n)} × |I|^{O(1)} (Jansen et al. 2013). In the following sections, we first introduce a dynamic programming algorithm to solve the F3||Cmax problem, together with its worst-case complexity analysis (Sect. 2). Generalizations of this algorithm to some other 3-machine flowshop problems with different objective functions are then discussed in Sect. 3. Finally, in Sect. 4, we briefly describe the possibility of generalizing this algorithm to a larger class of problems.
2 Solving the F3||Cmax problem

2.1 General idea

Given a job permutation π, let C_j(π) denote the completion time of π on machine j, j ∈ {1, 2, 3}. We omit π whenever it creates no ambiguity. We also define a binary operator “·” which concatenates two permutations of jobs. The proposed Dynamic Programming algorithm is based on Theorem 1.

Theorem 1 Let S be a set of n jobs to be scheduled, and S' ⊆ S a subset of S. Given π and π' two permutations of jobs from S', and σ a permutation of the jobs from S \ S', we have:

If C2(π) ≤ C2(π') and C3(π) ≤ C3(π'),
then C2(π·σ) ≤ C2(π'·σ) and C3(π·σ) ≤ C3(π'·σ),

that is, the partial permutation π dominates π'. Every schedule starting with π' is dominated by a schedule starting with π.
Fig. 1 Partial solutions and Pareto front of a given job set in the criteria space

Fig. 2 Example of critical paths
Proof The proof of Theorem 1 is straightforward. First, note that C1(π) = C1(π') always holds, since π and π' contain the same jobs and there is no idle time between jobs on machine 1. On the second and third machines, π completes no later than π', so putting σ after π instead of π' is always preferable in order to minimize the makespan Cmax.
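To make the dominance test of Theorem 1 concrete, the completion times of a partial permutation can be computed with the classical flowshop recurrence C_j(i) = max(C_{j−1}(i), C_j(i − 1)) + p_ij, and dominance then reduces to comparing the ⟨C2, C3⟩ pairs. A minimal sketch in Python (the function and variable names are ours, not from the paper):

```python
def completion_times(perm, p):
    """Completion times (C1, C2, C3) of a partial permutation `perm`,
    where p[job] = (p_i1, p_i2, p_i3). Classical flowshop recurrence:
    C_j(i) = max(C_{j-1}(i), C_j(i-1)) + p_ij."""
    c1 = c2 = c3 = 0
    for job in perm:
        p1, p2, p3 = p[job]
        c1 = c1 + p1              # no idle time on machine 1
        c2 = max(c1, c2) + p2
        c3 = max(c2, c3) + p3
    return c1, c2, c3

def dominates(pi, pi_prime, p):
    """Theorem 1: pi dominates pi_prime (same job set) iff
    C2(pi) <= C2(pi_prime) and C3(pi) <= C3(pi_prime)."""
    _, c2a, c3a = completion_times(pi, p)
    _, c2b, c3b = completion_times(pi_prime, p)
    return c2a <= c2b and c3a <= c3b
```

Since C1 is identical for all permutations of the same job set, only C2 and C3 need to be compared.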
The idea of the Dynamic Programming is the following. When trying to construct an optimal solution from partial solutions, only the partial solutions that are not dominated need to be considered. This can be seen by switching to the criteria space ⟨C2, C3⟩ for representation. For a fixed job set, we plot all its partial permutations as points in the criteria space ⟨C2, C3⟩. The non-dominated permutations then form the Pareto front of these points (Fig. 1).

2.2 Dynamic programming

We now formulate the algorithm, starting with some definitions.

Definition 1 Given a set of permutations of a job set S, the Pareto Permutations are the subset of permutations whose associated criteria vector ⟨C2, C3⟩ is not dominated by the criteria vector of any other permutation from the set. In case several dominating permutations have the same criteria vector, only one of them is retained, arbitrarily. Let MinPerm be a function which takes a set of job permutations as input and returns its set of Pareto Permutations. OptPerm(S) is the set of Pareto Permutations of all jobs from the job set S, and OptPerm_l(S), l ∈ {1, ..., |OptPerm(S)|}, is its l-th permutation (the numbering is arbitrary).

Definition 2 For a given job schedule in which all operations are executed as early as possible, its Critical Path (see for instance Brucker 2007; Kelley Jr and Walker 1959)¹ CP^j,
¹ Instead of defining the Critical Path in the context of a digraph, we adapt the definition here to make it more intuitive.
j ∈ {1, 2, 3}, is defined as a sequence of operations on the first j machines that are executed consecutively without any idle time and whose total processing time determines C_j. More specifically, for any two consecutive operations in CP^j, denoted by O_{i'j'} → O_{i''j''}, i', i'' ∈ {1, ..., n}, j', j'' ∈ {1, ..., j}, we must have either j' = j'', or otherwise i' = i'' and j'' = j' + 1. In the latter case, job i' is called a critical job. For the remaining jobs, let CP^j_q, q ∈ {1, ..., j}, be the set of jobs whose operations on machine q appear in CP^j, excluding critical jobs. As an example, in Fig. 2, CP² consists of the sequence (O₁₁, O₂₁, O₃₁, O₃₂, O₄₂), with job 3 as critical job, CP²₁ = {1, 2} and CP²₂ = {4}. It should be noticed that, for a given j, CP^j may not be unique; in that case we choose one arbitrarily.

Lemma 1 For a job set S' with |S'| = t, the number of its Pareto Permutations |OptPerm(S')| is at most O*(2^t).

Proof Each permutation in OptPerm(S') has a corresponding criteria vector ⟨C2, C3⟩. The number of Pareto Permutations corresponds to the number of non-dominated vectors, which is bounded by the number of possible values of C2. The next observation is that the C2 value of a given permutation is determined by the critical path of job operations on the first two machines (Definition 2). Hence the problem reduces to counting the number of possible critical paths. To do this, we first choose the critical job, for which we have t choices. Each of the remaining jobs has to belong to either CP²₁ or CP²₂. This yields 2^{t−1} choices. Hence
|OptPerm(S')| ≤ t · 2^{t−1} = O*(2^t).

We now state in Lemma 2 another upper bound on |OptPerm(S')|.

Lemma 2 If there exists a constant M such that p_ij ≤ M, ∀i, j, then for a given job set S' of t jobs, the number of non-dominated criteria vectors ⟨C2, C3⟩ in the Pareto Permutations is at most (t + 1)M = O*(M).

Proof Lemma 2 is based on the observation that the maximum value of C2 cannot exceed (t + 1)M. Since C2 is an integer, the number of possible values of C2 is bounded by (t + 1)M.
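For two criteria, MinPerm can be implemented with the classical sort-and-sweep for two-dimensional minima, in the spirit of the O(N log N) approach of Kung et al. (1975): sort the candidates by C2 and keep a candidate only when its C3 value strictly improves on the best seen so far. A sketch under our own naming:

```python
def min_perm(candidates):
    """Keep one permutation per non-dominated <C2, C3> vector.
    `candidates` is a list of (C2, C3, perm) triples.
    Sort by (C2, C3), then sweep, keeping strictly decreasing C3."""
    candidates = sorted(candidates, key=lambda t: (t[0], t[1]))
    pareto, best_c3 = [], float("inf")
    for c2, c3, perm in candidates:
        if c3 < best_c3:          # not dominated by any kept point
            pareto.append((c2, c3, perm))
            best_c3 = c3
    return pareto
```

Ties on ⟨C2, C3⟩ are resolved by keeping only the first candidate, matching Definition 1.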
The recurrence function of our dynamic programming algorithm is stated in Theorem 2.
Theorem 2 OptPerm(S) can be computed by Dynamic Programming as follows:

OptPerm(S) = MinPerm({OptPerm_l(S \ {k})·(k) : k ∈ S, l ∈ {1, ..., |OptPerm(S \ {k})|}})

Theorem 2 is the recursive formula used by the Dynamic Programming algorithm. It is directly based on Theorem 1: when constructing an optimal solution from partial permutations, only Pareto Permutations are kept. Suppose that we already have OptPerm(S \ {k}), ∀k ∈ S. In order to construct OptPerm(S), we append job k after each permutation from OptPerm(S \ {k}) to get a new set of permutations for the job set S. Over all of these newly constructed permutations, we apply the operation MinPerm to remove dominated ones and return OptPerm(S). A more intuitive representation is given in Fig. 3, where |S| = t.

2.3 Complexity analysis

Consider implementing the algorithm in a recursive way according to Theorem 2. We keep in memory the already computed sets OptPerm(S'), for all S' ⊆ S, for fast access later. As initialization, when S' = {j}, we set OptPerm(S') = {(j)}. From that point, every OptPerm(S') with |S'| > 1 can be computed by MinPerm from the sets OptPerm(S' \ {k}), ∀k ∈ S'. The function MinPerm uses an existing algorithm (Kung et al. 1975) which, given a set of N vectors ⟨C2, C3⟩, finds the non-dominated ones in O(N log N) time. According to the proof of Lemma 1, N is bounded by t(t − 1)2^{t−2} = O(t^2 2^t), where the extra factor t is the number of possible values of k. Therefore, computing OptPerm(S') for a set S' of t jobs from the solved subproblems costs

O(t^2 2^t log(t^2 2^t)) = O(t^3 2^t) = O*(2^t)

The algorithm traverses all subproblems defined by the subsets of jobs. Thus, with C(n, t) denoting the binomial coefficient, the overall time complexity for calculating OptPerm(S) is

Σ_{t=1}^{n} C(n, t) O(t^3 2^t) = n^3 Σ_{t=1}^{n} C(n, t) O(2^t · 1^{n−t})
= O(n^3 (2 + 1)^n)    (binomial theorem)
= O*(3^n)

Alternatively, based on Lemma 2, the time complexity can also be expressed as

Σ_{t=1}^{n} C(n, t) O*(M) = O*(M 2^n)
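Combining the recurrence of Theorem 2 with MinPerm gives a bottom-up subset dynamic program. The sketch below reflects our own implementation choices (subsets encoded as bitmasks; permutations dropped in favor of their ⟨C2, C3⟩ vectors, which suffice to recover the optimal makespan) and is an illustration of the scheme, not the authors' code:

```python
def f3_cmax(p):
    """Exact F3||Cmax by dynamic programming over subsets (Theorem 2).
    p: list of (p1, p2, p3) per job. Returns the optimal makespan.
    State: for each subset, the Pareto set of <C2, C3> vectors."""
    n = len(p)
    opt = {0: [(0, 0)]}  # opt[mask] = non-dominated (C2, C3) pairs
    for mask in range(1, 1 << n):
        # C1 is the same for every permutation of this job set
        c1 = sum(p[j][0] for j in range(n) if mask & (1 << j))
        cand = []
        for k in range(n):
            if not mask & (1 << k):
                continue
            for c2, c3 in opt[mask ^ (1 << k)]:
                nc2 = max(c1, c2) + p[k][1]   # job k appended last
                nc3 = max(nc2, c3) + p[k][2]
                cand.append((nc2, nc3))
        # MinPerm: sort-and-sweep Pareto filter on (C2, C3)
        cand.sort()
        pareto, best = [], float("inf")
        for c2, c3 in cand:
            if c3 < best:
                pareto.append((c2, c3))
                best = c3
        opt[mask] = pareto
    return min(c3 for _, c3 in opt[(1 << n) - 1])
```

On a small instance such as [(2, 3, 1), (1, 2, 4), (3, 1, 2)], the returned makespan matches the value found by enumerating all 3! permutations.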
The minimum value of Cmax can be retrieved from OptPerm(S) at an additional cost of O*(2^n) (or O*(M)) time, which does not change the established complexity. The space complexity is also O*(3^n) (or O*(M 2^n)), considering the storage of all necessary Pareto Permutations. Despite the superiority of our algorithm in worst-case scenarios with respect to existing exact algorithms, preliminary experiments show that it only manages to solve random instances with up to 25 jobs (due to its space complexity). This is far smaller than the instances solved by the Branch & Bound algorithm of Ladhari and Haouari (2005). However, it is of theoretical interest to have an algorithm with a proved running-time upper bound faster than the brute-force approach.
3 Generalizations to other three-machine flowshop problems

3.1 The F3||fmax problem

The Dynamic Programming algorithm can be applied to a more general problem referred to as F3||fmax. This problem is similar to the F3||Cmax problem but with a more general objective function fmax to be minimized. For each job i ∈ {1, ..., n}, let c_i be the completion time of job i on machine 3 and let f_i(c_i) be the cost associated with job i. We consider f_i to be a non-decreasing function. The objective is defined as

fmax = max{f_i(c_i) | i = 1, ..., n}
Fig. 3 Recursive procedure of the Dynamic Programming algorithm
If f_i = f_j, ∀i ≠ j, i.e., all jobs share the same cost function f, then the Dynamic Programming algorithm can be applied without adaptation, and therefore with the same O*(3^n) worst-case complexity. The F3||Cmax problem can be considered as a particular case of this setting, with f(c_i) = c_i.
If f_i is specific to job i, the dominance condition used so far in the Dynamic Programming algorithm becomes insufficient, since the dominance status of a given permutation depends not only on ⟨C2, C3⟩ but also on the corresponding fmax value of the permutation. In other words, given two different permutations π and π' of the same subset of jobs, even if C2(π) < C2(π') and C3(π) < C3(π'), it is not sufficient to discard π', since it is possible that fmax(π') < fmax(π), which means that π' may lead to an optimal solution. Therefore, we add fmax as a third criterion to the criteria vector. The Dynamic Programming acts in a similar way as the one for F3||Cmax, but in the three-dimensional criteria space ⟨C2, C3, fmax⟩.

Theorem 3 Let S be a set of n jobs to be scheduled, and S' ⊆ S a subset of S. Given π and π' two permutations of jobs from S', and σ a permutation of the jobs from S \ S', we have:

If C2(π) ≤ C2(π'), C3(π) ≤ C3(π') and fmax(π) ≤ fmax(π'),
then C2(π·σ) ≤ C2(π'·σ), C3(π·σ) ≤ C3(π'·σ) and fmax(π·σ) ≤ fmax(π'·σ),

that is, the partial permutation π dominates π'. Every schedule starting with π' is dominated by a schedule starting with π.
Proof The proof of Theorem 3 is straightforward. Appending σ does not change the completion times of the jobs of π (resp. π'); moreover, since C2(π) ≤ C2(π') and C3(π) ≤ C3(π'), every job of σ completes in π·σ no later than in π'·σ. As the f_i are non-decreasing, this yields fmax(π·σ) ≤ fmax(π'·σ) when combined with fmax(π) ≤ fmax(π').
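With three criteria, the one-pass sweep used for MinPerm no longer applies directly. Kung et al. (1975) also give an O(N log N) algorithm for three dimensions; for illustration only, a simple quadratic filter of the non-dominated ⟨C2, C3, fmax⟩ vectors can be sketched as follows (our naming):

```python
def pareto_filter_3d(vectors):
    """Keep the (c2, c3, fmax) vectors not dominated componentwise
    by a different vector; duplicate vectors are kept only once.
    O(N^2) for clarity; see Kung et al. (1975) for O(N log N)."""
    result = []
    for v in sorted(set(vectors)):
        dominated = any(u != v and all(a <= b for a, b in zip(u, v))
                        for u in vectors)
        if not dominated:
            result.append(v)
    return result
```

The same filter works unchanged for the ⟨C2, C3, F⟩ vectors of Sect. 3.2.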
The same running-time analysis can be performed as before, where the critical part is to count the number of possible non-dominated vectors. This number is bounded by the number of different ⟨C2, C3⟩ values. Similarly to what was done in Lemma 1 to bound the number of values of C2, we can notice that there also exists a critical path (but through three machines) which leads to each value of C3, and therefore the number of possible values of C3 is upper bounded by O*(3^n). If we count the possible ⟨C2, C3⟩ vectors as all combinations of C2 values and C3 values, then we get at most 2^n × 3^n = 6^n non-dominated criteria vectors ⟨C2, C3⟩. However, it can be proved that this upper bound is far from being tight, by exploiting the link between CP² and CP³.

Lemma 3 On machine 1, it is sufficient to consider critical paths such that CP³₁ ⊆ CP²₁.

Proof Firstly, it is obvious that on machine 1, either CP³₁ ⊆ CP²₁ or CP²₁ ⊆ CP³₁, since either they are empty sets or they must contain the first several operations on machine 1. Consider the situation where CP²₁ ⊂ CP³₁. Then CP³₂ ⊂ CP²₂ holds, since CP²₂ contains all jobs of N \ CP²₁ except the critical job. Now, by setting CP³₁ = CP²₁ and adjusting CP³₂ accordingly, CP³ remains a critical path and CP³₁ ⊆ CP²₁. An example is illustrated in Fig. 4.
Fig. 4 An example for Lemma 3. CP² (resp. CP³) is indicated by a solid (resp. dashed) line. (a) CP²₁ ⊂ CP³₁; (b) the choice of CP³ is changed
Lemma 4 On machine 2, only two cases need to be considered:

1. CP³₁ = CP²₁ and CP³₂ ⊆ CP²₂.
2. CP³₁ ⊂ CP²₁ and CP³₂ ∩ CP²₂ = ∅.

Proof Both cases are easy to verify with a similar argument as in Lemma 3. The first case is obvious: since CP³₁ = CP²₁ and CP²₂ contains all jobs of N \ CP²₁ except the critical job, which is also the critical job of CP³, CP³₂ ⊆ CP²₂ holds. In the second case, CP³₁ ⊂ CP²₁; if CP³₂ ∩ CP²₂ ≠ ∅, let job o ∈ CP³₂ ∩ CP²₂. This means that both critical paths CP² and CP³ contain the operation O_{o2}, while the operation sequences preceding O_{o2} differ in CP² and CP³. Then, similarly to Lemma 3, CP² or CP³ can be rearranged to adopt the same operation sequence preceding O_{o2}, which reduces to the first case of Lemma 4. See Figs. 5 and 6 for an example. Note that we are only interested in the values of C2 and C3; the actual critical paths are not our concern.
Lemma 5 For a job set S' with |S'| = t, the number of possible pairs ⟨C2, C3⟩ is bounded by O*(4^t).

Proof In order to count this number, we first need to choose the critical jobs of CP² and CP³, similarly to the proof of Lemma 3. This yields a factor of t(t − 1) = O(t^2). According to Lemma 4, we then choose i jobs that form CP²₁ (CP²₂ is then implicitly decided). Then, for CP³:

– Consider Lemma 4 case 1: the i jobs are also in CP³₁, and the remaining t − i jobs can be put in CP³₂ or CP³₃, which yields 2^{t−i} choices.
Fig. 5 An example for Lemma 4. CP² (resp. CP³) is indicated by a solid (resp. dashed) line. (a) CP³₁ ⊂ CP²₁ and CP³₂ ∩ CP²₂ ≠ ∅; (b) the choice of CP² is changed
Fig. 6 An example for Lemma 4 case 2. CP² (resp. CP³) is indicated by a solid (resp. dashed) line
– Consider Lemma 4 case 2 (see Fig. 6): these i jobs can each be put in CP³₁, CP³₂ or CP³₃, so three choices for each. The remaining t − i jobs can only be put in CP³₃, since CP³₂ ∩ CP²₂ = ∅. In total this yields 3^i choices.

Summing over both cases, with C(t, i) denoting the binomial coefficient, we obtain

t(t − 1) Σ_{i=1}^{t} C(t, i) O(3^i + 2^{t−i}) = O(t^2 (4^t + 3^t)) = O(t^2 4^t) = O*(4^t)

Finally, with the same analysis as for the F3||Cmax problem, the complexity of our algorithm for the F3||fmax problem is

Σ_{t=1}^{n} C(n, t) O(t^3 4^t log(t^3 4^t)) = O*(5^n)

As in Lemma 2, we can also introduce a parameter M such that p_ij ≤ M into the analysis. In this case, the number of possible pairs ⟨C2, C3⟩ is bounded by (n + 1)M · (n + 2)M = O*(M^2), and the final complexity is

Σ_{t=1}^{n} C(n, t) M^2 = O*(M^2 2^n)
3.2 The F3||Σf_i problem
Another problem that our Dynamic Programming can be applied to is F3||Σf_i, in which the objective is to minimize Σ f_i(c_i). When each f_i is a non-decreasing function, this problem can be solved by Dynamic Programming in a similar way to F3||fmax. Based on a similar dominance condition between partial permutations, the criteria vector is defined as ⟨C2, C3, F⟩, with F = Σ f_i(c_i) on a given sequence.

Theorem 4 Let S be a set of n jobs to be scheduled, and S' ⊆ S a subset of S. Given π and π' two permutations of jobs from S', and σ a permutation of the jobs from S \ S', we have:

If C2(π) ≤ C2(π'), C3(π) ≤ C3(π') and F(π) ≤ F(π'),
then C2(π·σ) ≤ C2(π'·σ), C3(π·σ) ≤ C3(π'·σ) and F(π·σ) ≤ F(π'·σ),

that is, the partial permutation π dominates π', and there exists a permutation schedule starting with π that is at least as good for the Σf_i criterion as any starting with π'.

Proof The proof of Theorem 4 is straightforward.
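One practical detail of the Σf_i variant is that F can be maintained incrementally: the job appended to a partial permutation completes exactly at the new C3 value, so F grows by f_k(C3). A sketch of the state update (the state layout and names are our own; C1 can equivalently be recomputed from the job set as in Sect. 2):

```python
def extend_state(state, job, p, f):
    """Extend a partial-permutation state for F3||sum f_i.
    state = (c1, c2, c3, F); p[job] = (p1, p2, p3); f[job] = cost fn.
    The appended job completes at the new C3, contributing f_job(C3)."""
    c1, c2, c3, F = state
    c1 += p[job][0]
    c2 = max(c1, c2) + p[job][1]
    c3 = max(c2, c3) + p[job][2]
    return (c1, c2, c3, F + f[job](c3))
```

States of a same job set are then filtered on the ⟨C2, C3, F⟩ components exactly as in the F3||fmax case.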
Therefore, with the same analysis as for the F3||fmax problem, the total complexity is also O*(5^n) (or O*(M^2 2^n)) in time and space.
4 Conclusion
In this paper, we focused on the design of exact algorithms considering worst-case scenarios. The problems we tackled are 3-machine flowshop scheduling problems, which are pure sequencing problems. In this context, we first proposed a Dynamic Programming algorithm for the F3||Cmax problem, which improves the best-known time complexity in the literature from 2^{O(n)} × |I|^{O(1)} (which can be proved to be O*(c^n), c > 3, when applied to this problem) to O*(3^n). The generalizations of the algorithm to the F3||fmax and F3||Σf_i problems were then presented, with worst-case time and space complexities in O*(5^n) or O*(M^2 2^n). Since the idea is based on dominance conditions for non-decreasing cost functions and on the Pareto front of the corresponding criteria vectors, it can be
easily generalized to other problems that have similar structures. For instance, the NP-hard problems F2||fmax and F2||Σf_i (with f a generic function defined at the beginning of Sect. 3) can all be tackled by the Dynamic Programming with time and space complexity in O*(3^n), as for the F3||Cmax problem. Note that when fmax = Cmax, the problem can be solved in polynomial time (Johnson 1954). More generally, for all problems that possess a dominance condition based on multiple criteria, our Dynamic Programming is likely to be applicable. Some questions naturally arise for further research: could the proposed upper bound O*(3^n) be improved further, based on dynamic programming or other techniques? Is it possible to provide a tighter worst-case complexity analysis for Branch & Bound algorithms? The latter is of great importance and requires much research effort to be answered.
References

Akiba, T., & Iwata, Y. (2015). Branch-and-reduce exponential/FPT algorithms in practice: A case study of vertex cover. Theoretical Computer Science. doi:10.1016/j.tcs.2015.09.023. http://www.sciencedirect.com/science/article/pii/S030439751500852X.

Ashour, S. (1970). A branch-and-bound algorithm for the flow-shop scheduling problem. AIIE Transactions, 2, 172–176.

Brown, A., & Lomnicki, Z. (1966). Some applications of the “branch-and-bound” algorithm to the machine scheduling problem. Journal of the Operational Research Society, 17(2), 173–186.

Brucker, P. (2007). Scheduling algorithms (5th ed.). Berlin: Springer.

Carlier, J., & Rebaï, I. (1996). Two branch and bound algorithms for the permutation flow shop problem. European Journal of Operational Research, 90(2), 238–251.

Cheng, J., Kise, H., & Matsumoto, H. (1997). A branch-and-bound algorithm with fuzzy inference for a permutation flowshop scheduling problem. European Journal of Operational Research, 96(3), 578–590.

Cygan, M., Pilipczuk, M., Pilipczuk, M., & Wojtaszczyk, J. O. (2014). Scheduling partially ordered jobs faster than 2^n. Algorithmica, 68(3), 692–714.

Fomin, F. V., & Kratsch, D. (2010). Exact exponential algorithms. Berlin: Springer.

Framinan, J. M., Gupta, J. N., & Leisten, R. (2004). A review and classification of heuristics for permutation flow-shop scheduling with makespan objective. Journal of the Operational Research Society, 55(12), 1243–1255.

Garey, M. R., Johnson, D. S., & Sethi, R. (1976). The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research, 1(2), 117–129.

Gromicho, J. A., van Hoorn, J. J., da Gama, F. S., & Timmer, G. T. (2012). Solving the job-shop scheduling problem optimally by dynamic programming. Computers & Operations Research, 39(12), 2968–2977.
Ignall, E., & Schrage, L. (1965). Application of the branch and bound technique to some flow-shop scheduling problems. Operations Research, 13(3), 400–412.

Jansen, K., Land, F., & Land, K. (2013). Bounding the running time of algorithms for scheduling and packing problems. In F. Dehne, R. Solis-Oba, & J. R. Sack (Eds.), Algorithms and data structures, Lecture Notes in Computer Science (Vol. 8037, pp. 439–450). Berlin: Springer.

Johnson, S. M. (1954). Optimal two- and three-stage production schedules with setup times included. Naval Research Logistics Quarterly, 1(1), 61–68.

Kelley, J. E., Jr., & Walker, M. R. (1959). Critical-path planning and scheduling. In Papers presented at the December 1–3, 1959, Eastern Joint IRE-AIEE-ACM Computer Conference (pp. 160–173). ACM.

Kung, H. T., Luccio, F., & Preparata, F. P. (1975). On finding the maxima of a set of vectors. Journal of the ACM, 22(4), 469–476.

Ladhari, T., & Haouari, M. (2005). A computational study of the permutation flow shop problem based on a tight lower bound. Computers & Operations Research, 32(7), 1831–1847.

Lageweg, B., Lenstra, J., & Rinnooy Kan, A. (1978). A general bounding scheme for the permutation flow-shop problem. Operations Research, 26(1), 53–67.

Lenté, C., Liedloff, M., Soukhal, A., & T’Kindt, V. (2013). On an extension of the Sort & Search method with application to scheduling theory. Theoretical Computer Science, 511, 13–22.

Lenté, C., Liedloff, M., Soukhal, A., & T’Kindt, V. (2014). Exponential algorithms for scheduling problems. https://hal.archives-ouvertes.fr/hal-00944382.

Lomnicki, Z. (1965). A “branch-and-bound” algorithm for the exact solution of the three-machine scheduling problem. Journal of the Operational Research Society, 16(1), 89–100.

McMahon, G., & Burton, P. (1967). Flow-shop scheduling with the branch-and-bound method. Operations Research, 15(3), 473–481.

Potts, C. (1980). An adaptive branching rule for the permutation flow-shop problem. European Journal of Operational Research, 5(1), 19–25.

Reza Hejazi, S., & Saghafian, S. (2005). Flowshop-scheduling problems with makespan criterion: A review. International Journal of Production Research, 43(14), 2895–2929.

Ruiz, R., & Maroto, C. (2005). A comprehensive review and evaluation of permutation flowshop heuristics. European Journal of Operational Research, 165(2), 479–494.

Smutnicki, C. (1998). Some results of the worst-case analysis for flow shop scheduling. European Journal of Operational Research, 109(1), 66–87.

Sviridenko, M. (2004). A note on permutation flow shop problem. Annals of Operations Research, 129(1), 247–252.

Woeginger, G. J. (2003). Exact algorithms for NP-hard problems: A survey. In M. Jünger, G. Reinelt, & G. Rinaldi (Eds.), Combinatorial optimization—Eureka, you shrink!, Lecture Notes in Computer Science (Vol. 2570, pp. 185–207). Berlin: Springer.

Xiao, M., & Nagamochi, H. (2013). Exact algorithms for maximum independent set. In L. Cai, S. W. Cheng, & T. W. Lam (Eds.), Algorithms and computation, Lecture Notes in Computer Science (Vol. 8283, pp. 328–338). Berlin: Springer.