CEJOR (2008) 16:379–390 DOI 10.1007/s10100-008-0066-y ORIGINAL PAPER
Heuristic algorithms for a complex parallel machine scheduling problem Zoltán Blázsik · Csanád Imreh · Zoltán Kovács
Published online: 13 June 2008 © Springer-Verlag 2008
Abstract In this work we present a new scheduling model for parallel machines, which extends the multiprocessor scheduling problem with release times for minimizing the total tardiness, and also extends the problem of vehicle routing with time windows. This new model is motivated by a resource allocation problem which appears in the service sector. We present two classes of heuristic algorithms for the solution of the problem: the first is a class of greedy algorithms, and the second is based on the solutions of linear assignment problems. Furthermore, we give a rescheduling algorithm which improves a given feasible solution of the problem.

Keywords
Heuristic algorithms · Scheduling · Vehicle routing
1 Introduction
In scheduling applications the fundamental problem is that there are some jobs and some machines which can be used to execute the jobs. Several mathematical models are used to examine this scenario. In this paper we present heuristic algorithms for a complex scheduling model. The problem arises in the service sector (e.g., electricity supply), where the machines model teams which have to execute different tasks (e.g., service). In the model there is a metric space and the jobs appear at the points of this space. The distances in the space give the traveling times between the points. Each team has a shift when the team works, and the team has a starting
This research has been supported by the Hungarian National Foundation for Scientific Research, Grant T046405. Z. Blázsik · C. Imreh (B) · Z. Kovács Department of Informatics, University of Szeged, Árpád tér 2, 6720 Szeged, Hungary e-mail:
[email protected]
point where it starts to work and an ending point where it finishes the work. Furthermore, the team may extend its shift, but in this case it obtains an overtime allowance. The value of the overtime allowance depends on the team. The jobs have a processing time, and they also have a preferred time interval in which they should be executed. The decision maker has to define a schedule by determining for each job the starting and completion time and the team which performs the job. The team must travel to the place of the next job after the completion of its previous job before starting the execution, and it must return to its ending point before the end of its overtime interval. The goal is to find a schedule with minimal complete cost. The complete cost is the sum of four costs. The overtime cost is the sum of the overtime allowances paid to the teams. The traveling cost is a weighted sum of the distances traveled by the teams. Each job has a latency cost, which is calculated from three cost parameters assigned to the job. The total latency cost is the sum of the latency costs of the jobs. Finally, each team has a waiting cost; this is $c_W \cdot T$ where $T$ is the total time during the shift when the team neither works nor travels and $c_W$ is a parameter depending on the team. The problem is very complex: it extends several mathematical models which consider the basic scenario. The traveling cost belongs to an extended TSP version with multiple salesmen (see Gutin and Punnen 2002 for such models). But in this model the jobs have preferred time windows, thus the problem is related to the mathematical model of vehicle routing with multiple vehicles and time windows (see Azi et al. 2007; Lau et al. 2003). On the other hand, in our case the time window is not a strict constraint, and this makes the model similar to the vehicle routing model defined in Hashimoto et al. (2006).
Our model is also related to the area of machine scheduling on parallel machines where the objective is the total tardiness (see Shim and Kim 2007 and its list of references for recent results in this area), but there are some differences. In this model the teams have to travel, the teams have fixed time intervals, and overtime is allowed. Furthermore, the jobs cannot be started before the starting point of their preferred intervals; this is equivalent to the release time condition on the jobs (see Baptiste et al. 2004 for scheduling with release times minimizing tardiness). Finally, the latency cost is more general than the tardiness of the standard models. These connections show that the problem is very difficult: it is an extension of NP-hard problems, so one cannot expect an algorithm which guarantees an optimal solution for the inputs which arise in real applications. Therefore, we believe it is important to develop heuristic algorithms for the solution of the problem.
2 Notions, definitions
In this scheduling model a discrete metric space, jobs and teams are given. The distance function of the metric space is denoted by $U$. Each job $j$ has a processing time denoted by $p_j$; it has a place, a point of the metric space where it has to be performed, denoted by $h_j$; and it has a preferred time window when it should be performed, denoted by $[t_P(j), \bar{t}_P(j)]$. Furthermore, the job has three cost parameters denoted by $K_1(j)$, $K_2(j)$, $K_3(j)$; these parameters are used to determine
the latency cost of the job. Each team $k$ has a starting position, a point of the metric space denoted by $h_k$; a final position, a point of the metric space denoted by $h'_k$; a shift, a time interval $[t_M(k), t'_M(k)]$ when the team can work; and an overtime period $[t'_M(k), t''_M(k)]$ when the team can work for an extra overtime allowance. Team $k$ has three cost parameters: $c_W(k)$ belongs to the waiting cost, $c_O(k)$ to the overtime cost, and $c_U(k)$ to the traveling cost. We have to find a schedule of the jobs, which means that we have to define for each job $j$ a time interval $[s_j, c_j]$ with $c_j - s_j = p_j$ when it is executed, and a team $A_j$ which executes the job. Denote the list of jobs assigned to team $k$ by $L_k$. We suppose that the jobs in $L_k$ are ordered by increasing starting time. Then a schedule is feasible if for each team $k$ with job list $L_k = \{k(1), k(2), \ldots, k(r)\}$ the following inequalities are valid:
– $s_{k(i)} \ge t_P(k(i))$ for each $i = 1, \ldots, r$,
– $s_{k(i)} \ge c_{k(i-1)} + U(h_{k(i-1)}, h_{k(i)})$ for each $i = 2, \ldots, r$,
– $s_{k(1)} \ge t_M(k) + U(h_k, h_{k(1)})$,
– $c_{k(r)} + U(h_{k(r)}, h'_k) \le t''_M(k)$.
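To make the feasibility inequalities concrete, the following Python sketch checks them for one team and its ordered job list. The dict-based data layout and the names (e.g., `t_M2` for the end of the overtime period) are our own illustrative assumptions; the paper gives no reference code.

```python
def is_feasible(team, jobs, U):
    """Check the feasibility inequalities for a team and its ordered job list.

    team: dict with 'h' (starting point), 'h_end' (final point),
          't_M' (shift start), 't_M2' (end of the overtime period).
    jobs: list of dicts, ordered by starting time, each with 's' (start),
          'c' (completion), 'p' (processing time), 't_P' (window start),
          'h' (place of the job).
    U: travel-time (distance) function on points.
    """
    for i, j in enumerate(jobs):
        if j['c'] - j['s'] != j['p']:          # executed without interruption
            return False
        if j['s'] < j['t_P']:                  # no start before the window opens
            return False
        if i == 0:
            # first job: reachable from the starting point after the shift begins
            if j['s'] < team['t_M'] + U(team['h'], j['h']):
                return False
        else:
            prev = jobs[i - 1]                 # travel from the previous job
            if j['s'] < prev['c'] + U(prev['h'], j['h']):
                return False
    if jobs:
        last = jobs[-1]
        # the team must return before the end of its overtime period
        if last['c'] + U(last['h'], team['h_end']) > team['t_M2']:
            return False
    return True
```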
We are to find a feasible schedule which has the smallest total cost. The objective function is calculated as the sum of the following four costs.
– Latency cost: The total latency cost is the sum of the latency costs of the jobs. If job $j$ is finished at time $c_j$, then its latency cost is $K_1(j)(c_j - t_P(j) - p_j)$ if $c_j \le \bar{t}_P(j)$, and $K_2(j) + K_3(j)(c_j - \bar{t}_P(j))$ otherwise.
– Traveling cost: The total traveling cost is the sum of the traveling costs of the teams. If team $k$ has the job list $L_k = \{k(1), k(2), \ldots, k(r)\}$, then its traveling cost is
$$c_U(k)\left(\sum_{i=1}^{r-1} U(h_{k(i)}, h_{k(i+1)}) + U(h_k, h_{k(1)}) + U(h_{k(r)}, h'_k)\right),$$
where $c_U(k)$ is a constant assigned to $k$.
– Overtime cost: The total overtime cost is the sum of the overtime costs of the teams. If team $k$ finishes its work with job $j$, then its overtime cost is $c_O(k)\max\{0,\, c_j + U(h_j, h'_k) - t'_M(k)\}$.
– Waiting cost: The total waiting cost is the sum of the waiting costs of the teams. If team $k$ has the job list $L_k = \{k(1), k(2), \ldots, k(r)\}$, then its waiting cost is a weighted amount of time when it neither works nor travels. This can be given as
$$c_W(k)\left(\max\{c_{k(r)} + U(h_{k(r)}, h'_k),\, t'_M(k)\} - t_M(k) - \sum_{i=1}^{r} p_{k(i)} - \sum_{i=1}^{r-1} U(h_{k(i)}, h_{k(i+1)}) - U(h_k, h_{k(1)}) - U(h_{k(r)}, h'_k)\right).$$
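The four cost components above can be sketched for a single team as follows. The dict layout is our own assumption: `t_M1` stands for the end of the shift, `t_P_bar` for the end of a job's preferred window.

```python
def team_cost(team, jobs, U):
    """Total cost contribution of one team with its ordered job list.

    Sums traveling, latency, overtime and waiting cost, following the
    definitions in Section 2 (illustrative field names, not the paper's code).
    """
    if not jobs:
        return 0.0
    # traveling: start point -> jobs in order -> final point
    travel = U(team['h'], jobs[0]['h']) + U(jobs[-1]['h'], team['h_end'])
    travel += sum(U(a['h'], b['h']) for a, b in zip(jobs, jobs[1:]))
    # latency: mild per-time cost inside the window, fixed + per-time after it
    latency = 0.0
    for j in jobs:
        if j['c'] <= j['t_P_bar']:
            latency += j['K1'] * (j['c'] - j['t_P'] - j['p'])
        else:
            latency += j['K2'] + j['K3'] * (j['c'] - j['t_P_bar'])
    # overtime: time spent after the shift end, once back at the final point
    finish = jobs[-1]['c'] + U(jobs[-1]['h'], team['h_end'])
    overtime = team['c_O'] * max(0.0, finish - team['t_M1'])
    # waiting: elapsed working span minus processing and travel time
    busy = sum(j['p'] for j in jobs) + travel
    waiting = team['c_W'] * (max(finish, team['t_M1']) - team['t_M'] - busy)
    return team['c_U'] * travel + latency + overtime + waiting
```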
In the procedures we will use some variables assigned to the teams; we define them below:
– $a(k)$ is a binary variable; it is 1 if team $k$ has finished its work, 0 otherwise,
– $g(k)$ is a point of the metric space; this variable gives the current place of the team. At the beginning $g(k) = h_k$.
– $t(k)$ is the time when the team can continue its work. At the beginning $t(k) = t_M(k)$.
– $L(k)$ is a list of jobs; it contains the jobs assigned to $k$. At the beginning the list is empty.

3 The algorithms
We developed two classes of heuristic algorithms, and a further algorithm which improves a feasible solution. The classes depend on some parameters and some evaluating functions. In this section we present the developed algorithms.

3.1 A class of greedy algorithms
This class of greedy algorithms works as follows. In each step we consider the team which can continue its work at the earliest time. Then we select the jobs which are compatible with the team (the team can return to its ending point before the end of the overtime period after completing the job). We choose from this set the job where an evaluating function is minimal, and the job is assigned to the team with the earliest possible starting time. We can define the algorithm as follows.

Algorithm GREEDY
Initialization Let $M$ contain the set of all jobs.
Step 1. If $M$ is empty, then the procedure terminates. Otherwise choose the team which is available at the earliest time, and denote it by $i$. (This means that $i$ is the team where $t(i)$ is minimal among the teams with $a(i) = 0$.)
Step 2. Consider the set $L$ of jobs which are compatible with team $i$. These are the jobs which can be completed in time such that the team can return to its final position before the end of its overtime period. If this set is empty, then the team finishes its work: let $a(i) = 1$, and go to Step 1.
Otherwise, for each job $j$ from this set, let $k_j = \max\{t(i) + U(g(i), h_j),\, t_P(j)\}$ be the earliest time when the team can start the job, and let $l_j = k_j + p_j$ be the earliest time when the team can finish the job. Go to Step 3.
Step 3. Choose the job $j$ from the set $L$ where the evaluating function $\Phi(j, i)$ is minimal, and go to Step 4.
Step 4. (Registration of the decision.) Let $s_j = k_j$, $c_j = l_j$, $A(j) = i$, and delete $j$ from $M$. Extend the list $L(i)$ with $j$, and let $t(i) := l_j$, $g(i) := h_j$. Go to Step 1.
After analysing several different evaluating functions on test instances we defined the following class of evaluating functions:
$$\Phi(j, i) = TAV(j, i) + VAR(j, i) - KES(j, i) + TUL(j, i) - K_1(j)\max\{0, \bar{t}_P(j) - k_j\} - x_1 K_2(j) - x_2 K_3(j)$$
where
– $TAV(j, i) = U(g(i), h_j)\, c_U(i)$,
– $VAR(j, i) = (l_j - p_j - t(i) - U(g(i), h_j))\, c_W(i)$,
– $KES(j, i) = K_1(j)(k_j - t_P(j))$ if $l_j \le \bar{t}_P(j)$, and $K_3(j)(l_j - \bar{t}_P(j)) + K_2(j)$ otherwise,
– $TUL(j, i) = \max\{0,\, l_j + U(h_j, h'_i) - t'_M(i)\}\, c_O(i)$,
– $x_1$ and $x_2$ are the parameters of the algorithm.
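The evaluating function can be sketched in Python under the same illustrative data layout as before. The function and field names (`phi`, `t_P_bar`, `t_M1` for the end of the shift) are our own; the terms follow the TAV, VAR, KES, TUL definitions above.

```python
def phi(job, team_state, U, x1, x2):
    """Sketch of the greedy evaluating function Phi(j, i).

    team_state: dict with 't' (time when the team is free), 'g' (its place),
    'h_end' (final position), 't_M1' (shift end), and cost parameters
    'c_U', 'c_W', 'c_O'.  x1, x2 are the tuning parameters of the class.
    """
    t, g = team_state['t'], team_state['g']
    k = max(t + U(g, job['h']), job['t_P'])    # earliest start k_j
    l = k + job['p']                           # earliest finish l_j
    tav = U(g, job['h']) * team_state['c_U']                        # travel
    var = (l - job['p'] - t - U(g, job['h'])) * team_state['c_W']   # waiting
    if l <= job['t_P_bar']:                                         # latency
        kes = job['K1'] * (k - job['t_P'])
    else:
        kes = job['K3'] * (l - job['t_P_bar']) + job['K2']
    tul = max(0.0, l + U(job['h'], team_state['h_end'])
              - team_state['t_M1']) * team_state['c_O']             # overtime
    return (tav + var - kes + tul
            - job['K1'] * max(0.0, job['t_P_bar'] - k)
            - x1 * job['K2'] - x2 * job['K3'])
```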
3.2 Algorithms based on the linear assignment problem
This class can be considered as a generalization of the greedy algorithm. In each iteration step we choose $p$ teams (the teams which can start the work at the earliest time). Then we select $q \ge p$ jobs, and we construct a $q \times q$ linear assignment problem [one can find an overview on the LAP in the survey (Burkard and Cela 1999)], using $q - p$ fictive machines and some estimation on the costs of the possible decisions in the cost matrix. After the solution of the assignment problem, we assign the jobs to the selected teams and go to the next iteration step. We use different rules for selecting the jobs, and we also define several cost estimating functions. First we present the basic algorithm, and later we define some further versions.

LAP based algorithm (Version A)
Initialization Let $M$ contain the set of all jobs.
Step 1. Consider the teams which did not finish their work ($a(i) = 0$). Choose the $p$ teams which have the minimal values $t(i)$. Let $KM$ be the set of these teams and let $T = \max_{i \in KM} t(i)$.
Step 2. Consider the jobs which can be executed by these teams (these are the jobs with $t_P(j) + p_j \le \max_{i \in KM} t''_M(i)$). Select the $p$ jobs where the value
$$\frac{K_1(j) + K_2(j)/20 + K_3(j)/10}{\max\{1,\, \bar{t}_P(j) - p_j - T\}}$$
is maximal. Extend this set with the $(q - p)$ jobs which have the minimal $t_P(j)$ value. (If there are not enough jobs, then we choose fewer.) Denote by $KJ$ the set of the selected jobs.
Step 3. Define $q - p$ fictive teams. (If we could not choose $q$ jobs, then define fictive teams or jobs to achieve $|KJ| = |KM|$.) For the sets $KJ$ and $KM$ define a linear assignment problem. The cost matrix is the following:
– $c_{ij} = G$ for a very large $G$, if both $i$ and $j$ are real and the team cannot execute the job, i.e., $\max\{t_P(j),\, t(i) + U(g(i), h_j)\} + p_j + U(h_j, h'_i) > t''_M(i)$.
– If the job and the team are real, and the team can execute the job, then let $k_j = \max\{t(i) + U(g(i), h_j),\, t_P(j)\}$ and $l_j = k_j + p_j$. Then $c_{ij} = T + V + D + P$, where
$T = \max\{0,\, l_j + U(h_j, h'_i) - t'_M(i)\}\, c_O(i)$,
$V = \max\{0,\, k_j - t(i) - U(g(i), h_j)\}\, c_W(i)$,
$D = U(g(i), h_j)\, c_U(i)$,
$P = (\bar{t}_P(j) - l_j)K_3(j) - K_2(j)$, if $l_j > \bar{t}_P(j)$ (in this case this is a negative value which decreases the cost of the delayed jobs),
$P = -K_1(j)(k_j - t_P(j)) - K_2(j) - K_3(j)\big[1 + 100/\max\{1,\, \bar{t}_P(j) - l_j\}\big]$, if $l_j \le \bar{t}_P(j)$ (in this case we obtain an estimation on the urgency of the job).
– $c_{ij} = 0$, if one of the job and the team is fictive.
Step 4. (Registration of the decision.) For each team $i \in KM$ which receives a real job $j \in KJ$ that it can execute (the cost in the assignment is less than $G$) we assign the job to the team:
– $j$ is scheduled in the interval $[k_j, l_j]$,
– $A(j) = i$,
– delete $j$ from $M$,
– extend the list $L(i)$ of jobs with $j$,
– $t(i) := l_j$,
– $g(i) := h_j$.
For each team which receives a job with cost $G$ we finish the work of the team: let $a(i) = 1$. Go to Step 1.

Further versions of the algorithm
The LAP based algorithm defined above can be considered as a frame algorithm. There are some decisions in it which are independent in the sense that they can be replaced by other rules to obtain a modified version of the algorithm. One of these parts is the selection rule in Step 2, which determines the set of jobs taken into account in the assignment problem. It is a straightforward idea to consider the jobs which are urgent and the jobs which have large latency cost. But it is hard to give a good rule for when we should label a job urgent, and it is also a hard task to balance the two goals (urgency, latency cost). In the basic version we selected two types of jobs: the jobs with minimal $t_P(j)$ (the urgent jobs) and the jobs where an estimated latency cost is maximal. In the further versions modified selection rules are used in Step 2, both for latency and for urgency. It is hard to tell which rules are the good ones; the effectiveness of the selection rules highly depends on the structure of the input (if there are many jobs and it is impossible to avoid large lateness, then latency cost is more important; in the opposite case urgency is more important to avoid latency). This is why we defined several versions of the selection rule; in the versions we used different estimations on the effect of latency and urgency. The versions were studied in the case studies. The second part which can be modified is the objective function of the assignment problem. Some part of the cost of the assignment is well defined (waiting cost, traveling cost, overtime cost), but we also have to take into account the importance of the job (again this consists of two parts: urgency and latency cost). In a similar way to the selection rule, we can use different estimations for this part.
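Step 3 reduces the team–job decision to a linear assignment problem. The following brute-force sketch illustrates the semantics of solving the $q \times q$ matrix; it enumerates all permutations, so it is only workable for tiny $q$, and a Hungarian-method solver would replace it in any real implementation.

```python
from itertools import permutations

def solve_assignment(cost):
    """Brute-force solver for a square assignment cost matrix.

    Returns the (row index, column index) pairs of the cheapest assignment,
    i.e. the (team, job) pairs of Step 3.  Pairs whose entry equals the
    large constant G correspond to "the team finishes its work" in Step 4.
    Illustrative only: the q! enumeration is exponential.
    """
    n = len(cost)
    best, best_perm = float('inf'), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return [(i, best_perm[i]) for i in range(n)]
```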
We define the following versions of the algorithm. Since they are similar to Version A, we present only the differences below.
Version B This version differs from Version A only in Step 2. In this case we choose the $p$ jobs where the value
$$\frac{K_1(j) + K_2(j)/10 + K_3(j)/10}{\max\{1,\, \bar{t}_P(j) - p_j - T\}}$$
is maximal. Moreover, we extend this set with the $(q - p)$ jobs with minimal $t_P(j)$.
Version C This version also differs from Version A only in Step 2. In this case we choose the $p$ jobs where the value
$$\frac{K_1(j) + K_2(j)/20 + K_3(j)/10}{\max\{1,\, \bar{t}_P(j) - p_j - T\}}$$
is maximal. Moreover, we extend this set with the $(q - p)$ jobs with minimal $\bar{t}_P(j) - t_P(j) - p_j$.
Version D In this version we use a different selection rule in Step 2, and we also use a different function $P$ to estimate the priority of the jobs. In Step 2 we choose the $p$ jobs with maximal value $K_1(j)\max\{1,\, T - t_P(j)\} + 10(K_1(j) + K_2(j) + K_3(j)) + K_3(j)\max\{1,\, T - \bar{t}_P(j)\}$. We extend this set with the $q - p$ jobs with minimal $T - \bar{t}_P(j) - p_j$. In the assignment problem we use $P = -K_3(j)$ if $l_j > \bar{t}_P(j)$, and $P = -K_1(j) - (K_2(j) + K_3(j))\, p_j/\max\{1,\, \bar{t}_P(j) - T\}$ otherwise.
Version E In this version we again change Step 2 and the definition of $P$. In Step 2 we choose the $p$ jobs with maximal value $10 K_1(j)\, p_j/\max\{1,\, \bar{t}_P(j) - T\} + K_1(j) + K_2(j) + K_3(j) + K_3(j)\max\{1,\, T - \bar{t}_P(j)\}$. We extend this set with the $q - p$ jobs with minimal $(T - t_P(j))/\max\{1, K_1(j)\}$. Let $P = -K_3(j)$ if $l_j > \bar{t}_P(j)$, and $P = -10 K_1(j) - (K_2(j) + K_3(j))\, p_j/\max\{1,\, \bar{t}_P(j) - T\}$ otherwise.
Version F In this version we again change Step 2 and the definition of $P$. In Step 2 we choose $p$ compatible jobs with $\bar{t}_P(j) - p_j - T < 100$ (if there are not $p$ such jobs, then we select fewer). We extend this set to a $q$-element set by choosing jobs with maximal $K_1(j) + K_2(j) + K_3(j)$. Let $P = -K_3(j)$ if $l_j > \bar{t}_P(j)$, and $P = (\bar{t}_P(j) - l_j) - K_1(j) - K_2(j) - K_3(j)$ otherwise.
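As an illustration of the Step 2 selection rules, the Version A selection value can be written as a small scoring function. The field names and the `t_P_bar` notation for the window end are our own assumptions.

```python
def selection_score_A(job, T):
    """Version A's Step 2 selection value (our reconstruction).

    Jobs with large latency parameters and a tight deadline relative to the
    teams' availability time T get a high score; the p highest-scoring jobs
    enter the assignment problem.
    """
    # slack before the window end, clipped at 1 to avoid division problems
    slack = max(1, job['t_P_bar'] - job['p'] - T)
    return (job['K1'] + job['K2'] / 20 + job['K3'] / 10) / slack
```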
3.3 Algorithms for improving a feasible solution
In this part we present a further algorithm which tries to improve a feasible solution. The basic idea is the following. For each job which has a large latency cost, we try to reschedule the job. We calculate an estimated cost of scheduling the job for each team (we may eliminate jobs from the list of jobs assigned to the team, but then we also have to estimate the cost of scheduling the eliminated jobs). We choose the team where the estimated cost is minimal. The algorithm continues the rescheduling phase with the eliminated jobs. If after scheduling all jobs we obtain a better solution than the previous one, then we replace the solution and consider the next job with large latency. The improving algorithm uses the following subalgorithms. The first examines whether it is possible to insert the job $e$ into the list $R$ of jobs assigned to team $m$. If it is possible, then it returns the boolean value True, the cost of the optimal insertion, and the optimal order of the jobs.
Algorithm $INS(m, R, e)$
– If $\min_{j \in R} U(h_m, h_j) + \min_{j \in R} U(h_j, h'_m) + p_e + \sum_{j \in R} p_j > t''_M(m) - t_M(m)$, then it is not possible to insert $e$, thus $INS.B(m, R, e) := \text{False}$, $INS.C(m, R, e) := \infty$, $INS.O(m, R, e) := \emptyset$.
– Let $C := \infty$, $O := \emptyset$.
– Into each of the possible positions of $R$ insert the job $e$, and consider the schedule which we obtain if each job is started at the earliest possible time (the maximum of the starting point of the preferred time interval of the job and the time when the team can arrive after the completion of the previous job). Denote the cost of this schedule by $x$. If $x < C$, then let $C := x$, and let $O$ be the order of jobs received by inserting $e$ into the position considered.
– If $C = \infty$ at the end of the above iteration, then $INS.B(m, R, e) := \text{False}$, $INS.C(m, R, e) := \infty$, $INS.O(m, R, e) := \emptyset$. In the opposite case let $INS.B(m, R, e) := \text{True}$, $INS.C(m, R, e) := C$, $INS.O(m, R, e) := O$.
The next algorithm uses INS, and it makes an estimation on the cost of executing job $e$ by team $m$ (in this case we may eliminate some jobs from the list $L(m)$). We call this algorithm EXTT; its input is a team $m$ with the order of its jobs (the list of jobs is denoted by $R$) and a new job $e$. The output is an estimated cost of extending the jobs assigned to the team with job $e$.
Algorithm $EXTT(m, R, e)$
– Let $Z(m)$ be the total cost of the schedule of the jobs executed by $m$.
– If $INS.B(m, R, e) = \text{True}$, then let $EXTT(m, R, e) := INS.C(m, R, e) - Z(m)$.
– Otherwise, if $\max_{q \in R} t_P(q) \le t_P(e)$ and $\min_{q \in R}\{K_1(q) + K_2(q) + K_3(q)\} \ge K_1(e) + K_2(e) + K_3(e)$, then $EXTT(m, R, e) := \infty$.
– Otherwise order the jobs from $R$ with $FIX(j) = \text{False}$ by decreasing value of $K_1(j) + K_2(j) + K_3(j) + U(h_m, h_j) + U(h_j, h'_m)$. Let $Q := R$, and omit these jobs from $Q$ one by one, putting them into $\bar{Q}$, until $INS.B(m, Q, e)$ becomes True. If this does not happen, then $EXTT(m, R, e) := \infty$. Otherwise $EXTT(m, R, e) := INS.C(m, Q, e) - Z(m) + \sum_{k \in \bar{Q}} E(k, t''_M(m) + p_k)$, where $E(k, T) = K_1(k)(T - t_P(k))$ if $T - p_k \le \bar{t}_P(k)$, and $E(k, T) = K_3(k)(T - \bar{t}_P(k)) + K_2(k)$ otherwise. ($E(k, T)$ is an estimation on the cost of postponing job $k$ after time $T$.)
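The position scan of INS can be sketched abstractly, with the feasibility check and cost computation supplied as a callback (all names are our own; the callback abstracts away the earliest-start schedule evaluation described above):

```python
def ins(order, e, schedule_cost):
    """Sketch of INS: try job e in every position of the current order.

    schedule_cost(order) returns the cost of an order, or None when the
    order is infeasible (the team cannot finish before the end of its
    overtime period).  Returns the triple (INS.B, INS.C, INS.O).
    """
    best_cost, best_order = float('inf'), None
    for pos in range(len(order) + 1):
        cand = order[:pos] + [e] + order[pos:]
        cost = schedule_cost(cand)
        if cost is not None and cost < best_cost:
            best_cost, best_order = cost, cand
    if best_order is None:
        return False, float('inf'), None
    return True, best_cost, best_order
```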
The next subalgorithm extends a feasible schedule $Y$ with a further job $e$. It uses EXTT as a subalgorithm.
Algorithm $EXTS(Y, e)$
Initialization Let $N = \{e\}$ and $FIX(j) = \text{False}$ for each job $j$.
Step 1. If $N = \emptyset$, then the procedure terminates; it gives as output the last schedule. Otherwise choose the job $j \in N$ where $K_1(j) + K_2(j) + K_3(j)$ is maximal. Let $M := \infty$ and leave $BEST$ undefined. Go to Step 2.
Step 2. For each team $i$ with $t''_M(i) \ge t_P(j)$, calculate $EXTT(i, L(i), j)$. If $EXTT(i, L(i), j) < M$, then let $BEST := i$ and $M := EXTT(i, L(i), j)$. Go to Step 3.
Step 3. If $M = \infty$, then delete $j$ from the set $N$; otherwise extend the schedule of team $BEST$ with the job $j$ by using the schedule which gives the solution in $EXTT(BEST, L(BEST), j)$. Let $FIX(j) := \text{True}$. Put the eliminated jobs into the set $N$ and go to Step 1.
Based on the rescheduling algorithm EXTS we can define our algorithm to improve a feasible solution.
Algorithm IMPROVE
Initialization Let $X$ be the starting solution of the problem and $C$ the cost of this solution. Let $L$ be a list which contains the jobs that have latency cost, ordered by this cost in decreasing order. Unmark all elements of $L$ and perform the following iteration step.
Iteration Consider the first unmarked element of $L$, and denote it by $e$. If no such element exists, the procedure terminates. Otherwise let $Y$ be the schedule which we obtain if we delete $e$ from $X$. Then execute $EXTS(Y, e)$. Denote the new schedule by $Z$ and its total cost by $z(Z)$. If $z(Z) < C$, then let $X := Z$ and $C := z(Z)$, delete $e$ from $L$, unmark the other elements of $L$, and go to the next iteration. Otherwise mark $e$ and go to the next iteration.
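The control flow of IMPROVE can be sketched over a generic solution representation. The callbacks stand in for the paper's concrete schedule operations (cost evaluation, job deletion, the EXTS rescheduling); all names are our own.

```python
def improve(solution, cost_of, latency_jobs, delete_job, exts):
    """Sketch of algorithm IMPROVE.

    latency_jobs(solution) lists the jobs with positive latency cost in
    decreasing order of that cost; delete_job(X, e) returns X without job e;
    exts(Y, e) reschedules e into Y (algorithm EXTS).  Returns the final
    solution and its cost.
    """
    X, C = solution, cost_of(solution)
    L = list(latency_jobs(solution))
    marked = set()
    while True:
        pending = [e for e in L if e not in marked]
        if not pending:
            return X, C            # no unmarked element remains
        e = pending[0]
        Z = exts(delete_job(X, e), e)
        z = cost_of(Z)
        if z < C:                  # accept the improvement
            X, C = Z, z
            L.remove(e)
            marked.clear()         # unmark the other elements of L
        else:
            marked.add(e)          # mark e and try the next element
```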
4 Case studies
We examined the behavior of the algorithms on some inputs which were simplified versions of data appearing in a real application. We obtained 8 different inputs; four of them contained 150 jobs and 40 teams, and the other four contained 350 jobs and 90 teams. Since the size of the inputs was too large to obtain an optimal solution, we could not compare our algorithms with an optimal one. Therefore we used these data to compare the different versions of the algorithms and to investigate the effect of the parameters $p$ and $q$. First we investigated the effect of the parameters $p$ and $q$ for the LAP based algorithms. Investigating the structure of the inputs, we observed that they contained parallel teams which had common shifts and a common overtime period. The number of parallel teams was the same for most time intervals. We denote this value by PAR. To investigate the effect of $p$ and $q$ we performed the LAP based algorithms for each pair $(p, q)$ with $1 \le p \le 8\,PAR$, $1 \le q \le 8\,PAR$. The results showed that the algorithms are sensitive to changes of the parameters $p$ and $q$. In most versions the best solutions were achieved at the settings $p = PAR$, $q = 2\,PAR$; in some versions the settings $p = PAR$, $q = 3\,PAR$ were the best, and in one version $p = PAR$, $q = 3\,PAR - 2$, but even in these latter cases $p = PAR$, $q = 2\,PAR$ gave a solution close to that of the best settings. After this test we compared the heuristic algorithms to each other. We used the values $p = PAR$, $q = 2\,PAR$ in the LAP based algorithms. We performed Greedy and each version of the LAP based algorithms on the inputs; furthermore, we performed IMPROVE on all of the obtained solutions. The average costs are summarized in Table 1. The first line gives the name of the algorithm, the second gives the objective
Table 1 The average cost of the algorithms

                 Greedy   LAP(A)   LAP(B)   LAP(C)   LAP(D)   LAP(E)   LAP(F)
Before IMPROVE    65709    60879    64296    49711    51071    61383    50066
After IMPROVE     57553    52654    55257    44225    42289    52666    45490
function values without the IMPROVE subroutine, and the third line gives the values after IMPROVE. The results show that the best average cost without improvement was given by version C of the LAP based algorithm, but versions D and F gave similar results. If we consider the results obtained after the improvement of the solutions, we can conclude that the improvement phase was very useful: on average it caused approximately a 12 percent decrease in the cost. We note that after the improvement procedure the best average result was given by version D, not version C. This is not very surprising: the behavior of algorithm IMPROVE and the quality of its result depend much more on the structure of the starting solution than on its objective value. On the other hand, the empirical results show that we obtained the better average results after IMPROVE in the case of the better starting solutions (versions C, D, F). So we may conclude that, considering the average results, these versions are the best ones. We also note that in some cases algorithms LAP(A) and LAP(E) gave good results; each of them gave the best result in one case before IMPROVE. We believe that the LAP based approach is better than Greedy: Greedy never gave the best solution, and in six cases it gave the worst one. Considering the running time of the algorithms, the basic algorithms were fast on each test input; they needed less than 1 second. (We used a computer with the following parameters: Intel P4 2.8 GHz Prescott, ASUS P4P800SE, and 1 GB DDR400.) The IMPROVE procedure required more time, but in the case of the smaller instances it was fast; the maximal running time was 11 s. In the case of the larger inputs the maximal running time was 2 min 23 s. We cannot state anything about the relation between the different versions and the running time of the IMPROVE procedure.
On the other hand, it seems that the running time depends on the amount of the improvement; the cases where the running time was higher belonged to the instances and versions where IMPROVE could make bigger improvements on the starting solutions. We summarize the following conclusions.
– The improving algorithm was very useful; in almost all cases (all inputs, all versions) it could improve the obtained solution significantly. The average improvement was approximately 12%.
– The algorithms are sensitive to the parameters $p$ and $q$. Even very small changes in these parameters caused significant differences in the objective function value of the solution. On the other hand, the inputs had a special structure: they contained parallel teams which had common shifts and a common overtime period. The number of parallel teams was the same for most time intervals. Using this number as $p$, and $q = 2p$ or $q = 3p$, we usually obtained a good solution.
– Concerning the comparison of the different versions, it seems that the LAP based versions C, D, F give the better results. On the other hand, a version which gave the best solution for some inputs gave bad results for some others. Therefore we could not clearly rank the different versions on our test data.
Using our test results we defined the following package for the solution of the problem. It performs the following 13 algorithms and IMPROVE on the 13 results: Greedy, and all of the versions (A–F) with the parameter $p$ given above and $q = 2p$ and $q = 3p$. Finally the package chooses the best solution among the results. This
package worked well on the test data compared to the other versions of the heuristic algorithms. Furthermore, the running time of the package is also reasonable: its longest running time was less than 20 min on our test data.

5 Further extensions of the problem
In some applications the mathematical model can be more general than the one investigated above. In this section we collect some of the possible extensions, and we show how the algorithms of the previous section can be extended to handle the more general models.

5.1 Time dependent travel times
In many applications the traveling time and the traveling cost may depend on the time interval in which the team has to travel. It is easy to build this assumption into the model. We can define time intervals, each described by its own metric. When a team has to travel, the cost and time of its travel are calculated as the sum of the parts traveled in the different time intervals. We can modify each of the above defined algorithms to handle this scenario if we use a time dependent three-variable function instead of $U$.

5.2 Restricted assignments
In some applications it may happen that some teams are not allowed to execute some jobs. We can modify the algorithms as follows. In the case of the Greedy algorithm, in Step 2 we only consider jobs which can be executed by the team. In the case of the assignment problem based algorithms, we use the value $\infty$ in the cost matrix if the team is not allowed to execute the job. In the improving algorithm we do not try to insert a job into the lists of teams which cannot execute it.

5.3 Jobs requesting more machines
In some cases more than one team may be necessary for the execution of a job. This extension leads to the scheduling of multiprocessor tasks (see Drozdowski 1996 for an overview of the area). In the case of the greedy algorithm it is not difficult to develop an extension. The evaluating function can be extended to jobs and sets of teams.
Furthermore, if a multiprocessor job is assigned to a team, then two new teams are defined with the time intervals before and after the job. On the other hand, it seems to be difficult to extend the other algorithms to this more general case.

References
Azi N, Gendreau M, Potvin JY (2007) An exact algorithm for a single-vehicle routing problem with time windows and multiple routes. Eur J Oper Res 178(3):755–766
Baptiste P, Carlier J, Jouglet A (2004) A branch-and-bound procedure to minimize total tardiness on one machine with arbitrary release dates. Eur J Oper Res 158(3):595–608
Burkard RE, Cela E (1999) Linear assignment problems and extensions. In: Handbook of combinatorial optimization, Supplement vol A. Kluwer, Dordrecht, pp 75–149
Drozdowski M (1996) Scheduling multiprocessor tasks—an overview. Eur J Oper Res 94:215–230
Gutin G, Punnen AP (eds) (2002) The traveling salesman problem and its variations. Kluwer, Dordrecht
Hashimoto H, Ibaraki T, Imahori S, Yagiura M (2006) The vehicle routing problem with flexible time windows and traveling times. Discrete Appl Math 154(16):2271–2290
Lau HC, Sim M, Teo KM (2003) Vehicle routing problem with time windows and a limited number of vehicles. Eur J Oper Res 148(3):559–569
Shim SO, Kim YD (2007) Scheduling on parallel identical machines to minimize total tardiness. Eur J Oper Res 177(1):135–146