ISSN 1064-2307, Journal of Computer and Systems Sciences International, 2007, Vol. 46, No. 2, pp. 262–268. © Pleiades Publishing, Ltd., 2007. Original Russian Text © N.V. Kolesov, M.V. Tolmacheva, 2007, published in Izvestiya Akademii Nauk. Teoriya i Sistemy Upravleniya, 2007, No. 2, pp. 101–108.
COMPUTER METHODS
Design of Computational Processes in Hierarchic Systems

N. V. Kolesov and M. V. Tolmacheva

Central Scientific and Research Institute Elektropribor, ul. Malaya Posadskaya 30, St. Petersburg, 197046, Russia

Received February 28, 2006; in final form September 5, 2006
Abstract—A suboptimal scheduling algorithm for hierarchic systems is considered. The algorithm requires practically no enumeration of alternatives. Results on the efficiency of this algorithm, obtained by random generation of examples, are given. DOI: 10.1134/S106423070702013X
INTRODUCTION

The problem of scheduling data processing in distributed computations has always received considerable attention [1, 2]. In this paper, the scheduling of computations means the allocation of computational resources (time) among the programs to be executed [3]. A number of solutions to this problem have been proposed in the framework of scheduling theory [4-6]. As a rule, however, systems of a canonical form are investigated: parallel, sequential, parallel-sequential, and sequential-parallel systems. Among these models, the most popular is the pipeline (sequential) model. The known procedures for finding optimal schedules for distributed computing systems have high algorithmic complexity, so in practice approximate locally optimal or heuristic algorithms are usually used [7-10].

The authors believe that, besides the classes mentioned above, it is worth considering the class of hierarchic systems. The reason is not only that this class covers the family of pipeline systems, but also that it can describe the computational systems belonging to various special-purpose informational control complexes, for example, navigation complexes installed on mobile objects. The structure of such computational systems can be quite diverse and is by no means always characterized by hierarchic relations (usually, these systems are designed on the trunk principle). However, the functional relations between the programs executed on different computers of the system can, in our opinion, be assumed hierarchic with a sufficient degree of approximation. It should be noted that, for systems of this kind, the questions of ordering the programs to be executed, which are considered below, are just a part of a general scheduling problem; that problem presumes, in particular, a strict time binding of the execution intervals of these programs.

An additional feature of such systems is the use of static cyclic schedules. In view of this, the notion of a "queue" used below is conditional: the jobs waiting for execution are not accumulated in a buffer but reside permanently in the memory of the computer and are launched at moments determined by a fixed schedule.

Thus, in this paper the scheduling problem is considered for a hierarchic computational system solving a sequence of jobs. Each job is split into fragments according to the number of computers in the system, and the execution time of each fragment of every job is assumed known. Below, a suboptimal scheduling algorithm is described that requires practically no enumeration of alternatives.

1. MAIN CONCEPTS AND DEFINITIONS

Let us first discuss the concept of a hierarchic system. Figure 1a shows an example of a four-level hierarchic system consisting of eight computers. In the figure, the rectangles denote the computers and the arrows show the relations between them. The system in Fig. 1a has no cycles or branches. It includes four input computers (1, 2, 4, and 6) and one output computer (8). Note that the pipeline system is a special case of the hierarchic system with exactly one computer at each level (Fig. 1b).
Fig. 1. The examples of hierarchical systems: (a) a general hierarchical system, (b) a pipeline hierarchical system.
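The notion of a hierarchy without cycles or branches can be sketched in code. The encoding below is ours, and the edge set is hypothetical: the text fixes only that computers 1, 2, 4, and 6 are inputs and computer 8 is the output of the system in Fig. 1a, so the topology merely reproduces those stated properties.

```python
# A toy encoding (ours) of a hierarchic system without cycles or
# branches: succ[i] is the single computer fed by computer i.
# The edge set is hypothetical; only the inputs (1, 2, 4, 6) and the
# output (8) are taken from the paper's description of Fig. 1a.

succ = {1: 3, 2: 3, 3: 5, 4: 5, 5: 8, 6: 7, 7: 8}

def inputs_and_output(succ):
    """Input computers receive no arrows; the output computer emits none."""
    targets = set(succ.values())
    inputs = sorted(c for c in succ if c not in targets)
    outputs = sorted(t for t in targets if t not in succ)
    return inputs, outputs

def computational_path(succ, start):
    """Follow the arrows from an input computer down to the output."""
    path = [start]
    while path[-1] in succ:
        path.append(succ[path[-1]])
    return path
```

With this hypothetical topology, `inputs_and_output(succ)` yields the inputs (1, 2, 4, 6) and output (8), and `computational_path(succ, 1)` traces one computational path from input 1 to the output.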
Each input computer Li is connected with the output computer L0 by a computational path (a sequence of computers) pk = Li, Lj, ..., L0. We call the value tj(pk) the execution time of the computational path pk for the jth job and define it as the sum of the execution times τ_{i,j} of the fragments of the jth job on the computers belonging to this path. Using the introduced numeration, we can write

t_j(p_k) = Σ_{i=1}^{m_k} τ_{i,j},

where m_k is the number of computers belonging to the path p_k. Note that, in the general case, the real computational time on this path is longer because the computers of p_k must wait for their initial data to become ready; these data are prepared by computers that do not belong to the path. The path p*_j is called critical for the jth job if its execution time is maximal among all paths of the system. Obviously, for different jobs solved in the same system, the critical paths can be different.

For any hierarchic system, the time t^start_{i,j} at which some computer L_i starts processing the jth job is defined by the time at which all initial data for it are prepared. In turn, these data are composed of the particular data formed by the preceding computers L_r feeding each rth information input of L_i (in Fig. 1a, computers 5, 6, and 7 precede computer 8). As a result, the time at which the computer L_i receives its data is the maximum over the times t^end_{r,j} characterizing the completion of the particular data, that is,

t^start_{i,j} = max_r { t^end_{r,j} }.

In turn, the time t^end_{r,j} of preparation of particular data by some computational path is determined not only by the duration of data processing on the computers of this path, but also by the waiting periods for data and by the time at which calculations start on the input computer of the considered path. The latter quantity differs from path to path for every job except the first one.

In what follows, it is convenient to interpret the concept of a hierarchic system or, simply, a system (denote it by C) as composed of two objects, the hierarchy of computers L = {L1, L2, ..., Lm} and the set J = {J1, J2, ..., Jn} of jobs to be solved, that is, C = {L, J}. This is connected with the domination relation introduced below and with the classification of hierarchic systems based on it, in which the properties of the structure and of the jobs under consideration intertwine. Thus, two hierarchic systems are regarded as different if they have the same structure but different sets of jobs to be solved.

Consider three base classes of hierarchic systems, assuming that this list can be extended later. The specific feature of these classes is the existence of special orders on the set of computers with respect to the domination relation defined below. The first class corresponds to a decreasing order, the second to an increasing order, and the third involves two orders (increasing and then decreasing).

Let us define the domination relation ">" on the set of computers.

Definition. The computer Lq dominates the computer Lr (Lq > Lr) if

min_j τ_{q,j} ≥ max_j τ_{r,j}.

Let us give a particular definition of the three base classes of hierarchic systems, supposing that the following common property is characteristic of all of them: if, at each level of the system, we select a computer that is maximal with respect to domination, then these computers form a critical path for each of the jobs solved.

Class 1. The set of computers is a decreasing sequence L1 > L2 > ... > Lm with respect to the domination relation.

Class 2. The set of computers is an increasing sequence L1 < L2 < ... < Lm with respect to the domination relation.

Class 3. The set of computers is a pair of joined sequences L1 < L2 < ... < Lh > ... > Lm-1 > Lm, 1 ≤ h ≤ m, the first of which increases and the second decreases with respect to the domination relation.

In Fig. 2, a hierarchic system of Class 3 is shown. The domination relation is marked by a dashed-line curve, and the computers with numbers 1, 2, 5, and 8, which are the maximal elements of their levels, are drawn in bold. It can be seen that these computers form a path, and this path is critical for all jobs of the system.
Fig. 2. An illustration of the domination relation for systems of Class 3.
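The domination relation and the three base-class orderings can be sketched as follows. The encoding is ours, not the paper's: tau[i] lists the per-job fragment times of the ith computer in a candidate sequence.

```python
# A sketch (our own encoding) of the domination relation and the
# three base-class orderings of critical-path computers.

def dominates(tau_q, tau_r):
    """Lq > Lr iff min_j tau_{q,j} >= max_j tau_{r,j}."""
    return min(tau_q) >= max(tau_r)

def base_class(tau):
    """Return 1, 2, or 3 for a decreasing, increasing, or
    increasing-then-decreasing sequence of computers, None otherwise."""
    m = len(tau)
    if all(dominates(tau[i], tau[i + 1]) for i in range(m - 1)):
        return 1
    if all(dominates(tau[i + 1], tau[i]) for i in range(m - 1)):
        return 2
    for h in range(1, m - 1):   # candidate peak positions (0-based)
        if (all(dominates(tau[i + 1], tau[i]) for i in range(h)) and
                all(dominates(tau[i], tau[i + 1]) for i in range(h, m - 1))):
            return 3
    return None
```

For instance, feeding in the per-job times of the computers 1, 2, 5, and 8 of the example considered in Section 3 yields Class 3, in agreement with the text.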
2. STATEMENT OF THE PROBLEM AND THE MAIN IDEA OF THE PROPOSED ALGORITHM

Let the system C = {L, J} under consideration be characterized by a hierarchy L = {L1, L2, ..., Lm} of m computers and by a set J = {J1, J2, ..., Jn} of n jobs to be solved. Each job is divided into m fragments (by the number of computers). The ith fragment of the jth job is executed by the ith computer in the time τ_{i,j} = t^end_{i,j} − t^start_{i,j}, i = 1, ..., m, j = 1, ..., n, where t^start_{i,j} and t^end_{i,j} are the moments at which the ith computer starts and finishes executing the jth job. The problem consists in constructing a schedule for executing the jobs that does not yield much, in the total execution time of all jobs, to the minimal schedule, i.e., the schedule characterized by the minimal time of executing all jobs.

To construct the desired schedule, we propose a suboptimal recursive algorithm (Fig. 3). At each step of the algorithm, one or two jobs from the initial set J that were not placed at previous steps are included in the schedule being formed. These actions are repeated until the initial set of jobs is exhausted. To include jobs in the schedule at each step of the recursion, one of three specific enumeration-free algorithms is used; each of them is optimal for systems of one of the base classes defined above. At a particular step of the recursion, the proposed suboptimal algorithm applies the allocation algorithm corresponding to the base class that is closest to the system C' = {L, J'} considered at this step, where J' ⊆ J is the set of jobs not yet placed. Clearly, a real system usually does not belong to any of the three classes; consequently, none of the mentioned algorithms is optimal for it.

Fig. 3. An illustration of the main idea of the proposed algorithm.

3. CONSTRUCTION OF OPTIMAL SCHEDULES FOR BASE CLASSES

Now we show how to construct an optimal schedule for a system of the first class. The characteristics of the critical path are marked by a star (*).

Algorithm 1.

Step 1. Choose the job Js that satisfies the condition

Σ_{i=2}^{m*} τ*_{i,s} = min_j { Σ_{i=2}^{m*} τ*_{i,j} }.

Step 2. Form the schedule π1 = [π̃ Js], where π̃ is an arbitrary schedule for the (n − 1) jobs other than Js.

Let us show the optimality of Algorithm 1. We first note an important feature of systems of Class 1: none of the computers belonging to the critical path, except the input one, ever has a queue of job fragments waiting for execution. A queue at the input of a computer can arise for two reasons. First, the particular initial data prepared on the critical path may have to wait for the arrival of other particular data. Second, the computer may still be executing a fragment of the previous job. Waiting for the first reason is excluded since, for any job, execution on each computer of the critical path always finishes later than on the other computers of the same level; this is because that computer is a maximal element of its level with respect to the domination relation, and the starting time of calculations on the input computer of the critical path is the latest among all input computers. Waiting for the second reason is also excluded because each computer of the critical path dominates the next one, so even the shortest job fragment on the preceding computer takes longer than the longest fragment of any job on the next computer.

From these considerations, we can write an expression for the execution time T1(π) of an arbitrary schedule π in a system of Class 1. This time is the sum of two components: the waiting period of the fragment of the last, nth job in the initial queue at the input computer of the critical path, and the duration of executing the critical path for this job.
This interpretation is possible since the initial moment of waiting for the input fragment of the critical path for the nth job coincides with the starting time of realization of the schedule, and the moment of finishing the execution of the critical path coincides with the end time of processing the schedule. The waiting period is defined as the sum of the execution times of the input fragments of the critical path for all jobs except the last one. As a result, using the
numeration of the computers along the critical path, we have

T1(π) = Σ_{j=1}^{n−1} τ*_{1,j} + t*_n = Σ_{j=1}^{n−1} τ*_{1,j} + Σ_{i=1}^{m*} τ*_{i,n},

where τ*_{i,j} is the time of executing the fragment of the jth job on the ith computer of the critical path of the system, t*_n is the execution time of the critical path for the nth job, and m* is the number of computers belonging to the path. Carrying the first summand of the second sum into the first sum for convenience of analysis, we obtain

T1(π) = Σ_{j=1}^{n} τ*_{1,j} + Σ_{i=2}^{m*} τ*_{i,n}.
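The selection rule of Algorithm 1 (Step 1 above) can be sketched as follows; the matrix name tau_star and the 0-based job indices are our assumptions, with tau_star[i][j] holding the critical-path time of job j's fragment on the ith path computer.

```python
# A minimal sketch of the selection rule of Algorithm 1 (Class 1).

def algorithm1(tau_star):
    """Place last the job whose fragments on critical-path computers
    2..m* (0-based indices 1..m-1) have the smallest total time;
    the remaining jobs may go in any order."""
    m = len(tau_star)            # computers on the critical path
    n = len(tau_star[0])         # jobs, indexed from 0
    tail = lambda j: sum(tau_star[i][j] for i in range(1, m))
    s = min(range(n), key=tail)  # the job J_s of Step 1
    return [j for j in range(n) if j != s] + [s]
```

Only the last position matters here, which mirrors the argument above: the first sum of T1(π) is fixed by the queue, and only the tail sum for the final job is affected by the schedule.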
The first summand is the total time spent executing the input fragments of the critical path for all jobs. Obviously, for a given queue, its value is fixed and does not depend on the choice of schedule. In contrast, the second summand is affected by the choice of schedule, and the optimal schedule is the one with the minimal value of this sum. This implies that the schedule formed by Algorithm 1 is optimal for systems of Class 1. Note two essential facts. First, the algorithm involves no enumeration of schedule variants. Second, for the schedule to be optimal, it is sufficient to choose correctly the last job to be executed: it must be the job executed most efficiently on all computers of the critical path except, possibly, the first one. The order of the other jobs does not affect the execution time of the schedule.

Consider now the rules of forming a schedule for systems of Class 2, which is in a certain sense opposite to Class 1. This class also has a determining property, namely, that at the input of each computer of the critical path there is always a queue of job fragments waiting for execution. As a result, each job except the first starts on every computer of the critical path with some delay relative to its arrival at the input of that computer, since any ith fragment of the jth job is finished later than the (i − 1)th fragment of the (j + 1)th job following it. This fact also follows from the definition of the domination relation. The algorithm for constructing a schedule in this case is as follows.
Algorithm 2.

Step 1. Choose the job Jl that satisfies the condition

Σ_{i=1}^{m*−1} τ*_{i,l} = min_j { Σ_{i=1}^{m*−1} τ*_{i,j} }.

Step 2. Form the schedule π2 = [Jl π̃], where π̃ is an arbitrary schedule for the (n − 1) jobs other than Jl.

Let us show the optimality of this algorithm by writing the expression for the execution time of an arbitrary schedule for a system of Class 2. We take into account that, before the last, m*th computer of the critical path (as before all the others), there is always a queue; thus, this computer is continuously busy from the instant the first job arrives at its input until the instant the last job leaves. Hence the execution time T2(π) of an arbitrary schedule π can be represented as the sum of the time t1 needed to execute the first job and the durations of the last, m*th fragments of the critical path for the remaining jobs. Note that the execution time of the first job equals the execution time of the critical path. We have

T2(π) = t1 + Σ_{j=2}^{n} τ*_{m*,j} = Σ_{i=1}^{m*} τ*_{i,1} + Σ_{j=2}^{n} τ*_{m*,j}.

Carrying for convenience the last summand of the first sum into the second one, we obtain

T2(π) = Σ_{i=1}^{m*−1} τ*_{i,1} + Σ_{j=1}^{n} τ*_{m*,j}.     (3.1)

The value of the second summand is the total time of executing the last fragments of the critical path for all jobs. Obviously, its value is fixed for a given queue and does not depend on the choice of schedule. In contrast, the first sum is governed by the choice of schedule, and the optimal schedule is the one characterized by the minimal value of this sum. This implies that the schedule formed by Algorithm 2 is optimal for Class 2. As in the previous case, there is no enumeration of schedule variants, and the optimality of the result is ensured by the correct choice of the first job to be executed: it must be the job executed most efficiently on all computers of the critical path except, possibly, the last one. The order of the other jobs does not affect the execution time of the schedule.

Consider now the problem of constructing an optimal schedule for systems of Class 3.

Algorithm 3.

Step 1. Choose the job Js that satisfies the condition

Σ_{i=1}^{h*−1} τ*_{i,s} = min_j { Σ_{i=1}^{h*−1} τ*_{i,j} }.

Step 2. Choose the job Jp that satisfies the condition

Σ_{i=h*+1}^{m*} τ*_{i,p} = min_{j≠s} { Σ_{i=h*+1}^{m*} τ*_{i,j} }.

Step 3. Form the schedule π3 = [Js π̃ Jp], where π̃ is an arbitrary schedule for the (n − 2) jobs other than Js and Jp.
Step 4. Form the schedule π4 = [Jq π̃ Jr], repeating Steps 1–3 but executing Steps 1 and 2 in the opposite order.

Step 5. Choose the better of the schedules π3 and π4.

Let us show that this algorithm is optimal. We divide the critical path and, correspondingly, the system into two parts. The first part of the critical path includes the computers from the first to the (h* − 1)th; the second includes the computers from the h*th to the m*th. The first part of the system comprises the computers of the levels from the first to the (h* − 1)th, and the second part comprises the remaining computers. Obviously, the first part of the system joins computers of Class 2; as a result, a fragment of each job is queued before being executed on any computer of the corresponding part of the critical path. Since the same is true for the h*th computer, the second part of the system belongs to Class 1. Indeed, not only does the necessary ordering of the computers take place, but the job fragments arrive at the corresponding part of the critical path without additional delays, so that they stand simultaneously in the queue to the h*th computer.

Let us write the execution time T3(π) of an arbitrary schedule in the considered system. We represent T3(π) as the sum of two quantities: the time tn(1, h* − 1) of executing all n jobs on the first part of the system (from the beginning of processing the input fragment of the first job on the initial computer of the critical path to the beginning of execution of the fragment of the nth job by the h*th computer), and the execution time tn(h*, m*) of the nth job on the second part of the system:

T3(π) = tn(1, h* − 1) + tn(h*, m*).

Obviously, the time tn(1, h* − 1) is equal to the time of executing the (n − 1)th job by the sequence of computers from the first to the h*th one, which is determined by relation (3.1):

tn(1, h* − 1) = tn−1(1, h*) = Σ_{i=1}^{h*} τ*_{i,1} + Σ_{j=2}^{n−1} τ*_{h*,j}.

The execution time tn(h*, m*) of the nth job on the second part of the system is given by the obvious expression

tn(h*, m*) = Σ_{i=h*}^{m*} τ*_{i,n}.

Thus, for T3(π) we have

T3(π) = Σ_{i=1}^{h*} τ*_{i,1} + Σ_{j=2}^{n−1} τ*_{h*,j} + Σ_{i=h*}^{m*} τ*_{i,n}.

Carrying for convenience the last summand of the first sum and the first summand of the third sum into the second one, we obtain

T3(π) = Σ_{i=1}^{h*−1} τ*_{i,1} + Σ_{j=1}^{n} τ*_{h*,j} + Σ_{i=h*+1}^{m*} τ*_{i,n}.

The value of the second summand is the total time spent executing the corresponding fragments of all jobs on the h*th computer. Clearly, for the given queue, its value is fixed and does not depend on the choice of schedule. Unlike the second sum, the first and third sums are affected by the choice of schedule, and the optimal schedule is the one characterized by the minimal value of this part of the expression. Its minimal value is achieved if the first job in the schedule requires the minimal time for its (h* − 1) initial fragments executed on the computers of the critical path, and the last job is the one executed most efficiently on the last (m* − h*) computers of the critical path. Since the same job can satisfy both conditions, it can be placed at either of these positions, and then the better of the two resulting schedules has to be chosen. This implies that the schedule formed by Algorithm 3 is optimal for systems of Class 3. Note that, in this algorithm, there is almost no enumeration of schedules either: it suffices to analyze two possible cases, and the optimality of the result is achieved by the correct choice of the first and the last jobs to be executed. The order of the other jobs does not affect the execution time of the schedule.

Example. Let us construct a schedule for a hierarchic system. The times of executing the jobs in this system, in conventional units, are given in Table 1. The structure shown in Fig. 2 corresponds to this system. It is easy to see that this system belongs to Class 3: indeed, Table 1 implies that computers 1, 2, 5, and 8 are maximal in their levels, and they form a computational path. Then, according to Algorithm 3, for each job we have to calculate the two sums

S1 = Σ_{i=1}^{h*−1} τ*_{i,j},  S2 = Σ_{i=h*+1}^{m*} τ*_{i,j}.

Table 1
Problems |                Computers
         |  1   2   3   4   5   6   7   8
   1     | 12  14  11   9   7   5   3   1
   2     | 11  14  12  10   7   6   3   2
   3     | 10  13  11   9   8   5   4   1
   4     |  9  13  12  10   8   6   4   2

The results of the calculations are given in Table 2, from which it can be seen that the first sum attains its minimum on the fourth job J4 and the second one attains its minimum on the first job J1. Consequently, these jobs have to take the first and the last positions in the optimal schedule. Thus, in this case, there are two variants of an optimal schedule: π1 = J4J2J3J1 and π2 = J4J3J2J1.

Table 2. The results of the calculations
Jobs |  1   2   3   4
 S1  | 12  11  10   9
 S2  | 36  40  38  42

4. SUBOPTIMAL SCHEDULING ALGORITHM

Certainly, the algorithms proposed above are optimal only when the system under consideration completely satisfies the requirements of one of the base classes. In practice, these requirements are often not fulfilled, and then, in particular, the statement that the execution time of the schedule is independent of the ordering of the non-extreme jobs no longer holds. On the contrary, it seems likely that the smaller the execution time of the schedule over all non-extreme jobs, the smaller the difference between the execution time of the optimal schedule and that of its variant with only the extreme jobs ordered. Thus, we come to the following recursive algorithm.

Step 1. Find in the system C = {L, J} the computational path pk, called critical, that is characterized by the maximal value of the sum

t(p_k) = Σ_{i=1}^{m_k} τ̄_i

of the upper bounds τ̄_i = max_j τ_{i,j} of the times of executing the jobs. Note that this critical path, in the general case, will not have the properties of the base classes.

Step 2. Using the heuristic classification rule described below, determine which base class (see Section 1) the system C' = {L, J'} is closest to at this step.

Step 3. Using the corresponding algorithm (see Section 3), choose one of the extreme (first or last) jobs of the future schedule (cases 1 and 2) or both extreme jobs (case 3). If the set J' is not empty, go to Step 2; otherwise, the algorithm terminates.

The classification at Step 2 is made by detecting the determining property of the system under consideration: the presence of a decreasing sequence (Class 1), of an increasing sequence (Class 2), or of joined increasing and decreasing sequences (Class 3). To do this, we analyze the interval dependence of the execution times of all jobs on the computers of the critical path as a function of the computer number (Fig. 4). The analysis includes a parabolic approximation of the upper bounds of this dependence; in the figure, the time intervals are shown by bold segments and their upper bounds are marked by black circles. If the vertex of the parabola lies to the left of the interval of computer numbers, the system is assigned to Class 1; if to the right, to Class 2; and if inside the interval, to Class 3.
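Steps 1–3 together with the parabolic classification rule can be sketched as follows. The helper names, the 0-based indices, the hand-rolled least-squares fit, and the clamping of the estimated peak index are our assumptions, and the critical path is taken here as a precomputed input rather than recomputed as in Step 1.

```python
# A hedged sketch of the recursive suboptimal algorithm of this
# section; implementation details are ours, not fixed by the paper.

def classify(tau_bar):
    """Fit a parabola y = a*x^2 + b*x + c to the upper bounds tau_bar
    at x = 1..m and locate its vertex: left of the range of computer
    numbers -> Class 1, right -> Class 2, inside -> Class 3 (with an
    estimated 0-based peak index)."""
    m = len(tau_bar)
    xs = list(range(1, m + 1))
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum((x ** k) * y for x, y in zip(xs, tau_bar)) for k in range(3)]
    A = [[S[4], S[3], S[2]], [S[3], S[2], S[1]], [S[2], S[1], S[0]]]
    r = [T[2], T[1], T[0]]

    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    def solve_col(i):            # Cramer's rule for the normal equations
        M = [row[:] for row in A]
        for row, v in zip(M, r):
            row[i] = v
        return det3(M) / det3(A)

    a, b = solve_col(0), solve_col(1)
    if a >= 0:                   # degenerate fit: fall back on slope sign
        return (1, 0) if b < 0 else (2, m - 1)
    vertex = -b / (2 * a)
    if vertex < 1:
        return 1, 0
    if vertex > m:
        return 2, m - 1
    return 3, max(1, min(m - 2, round(vertex) - 1))

def suboptimal_schedule(tau, critical):
    """tau[i][j] is the time of job j's fragment on computer i;
    critical lists the critical-path computers (Step 1, assumed
    precomputed). Steps 2-3 are repeated until all jobs are placed,
    fixing the first and/or last free position at each pass."""
    m = len(critical)
    jobs = list(range(len(tau[0])))
    front, back = [], []
    cost = lambda j, rng: sum(tau[critical[i]][j] for i in rng)
    while jobs:
        tau_bar = [max(tau[c][j] for j in jobs) for c in critical]
        cls, h = classify(tau_bar)
        if cls == 1:             # fix the last job (Algorithm 1 rule)
            s = min(jobs, key=lambda j: cost(j, range(1, m)))
            back.insert(0, s); jobs.remove(s)
        elif cls == 2:           # fix the first job (Algorithm 2 rule)
            l = min(jobs, key=lambda j: cost(j, range(m - 1)))
            front.append(l); jobs.remove(l)
        else:                    # fix both ends (Algorithm 3 rule)
            s = min(jobs, key=lambda j: cost(j, range(h)))
            front.append(s); jobs.remove(s)
            if jobs:
                p = min(jobs, key=lambda j: cost(j, range(h + 1, m)))
                back.insert(0, p); jobs.remove(p)
    return front + back
```

On the data of Table 1, with the critical path through computers 1, 2, 5, and 8, this sketch reproduces the schedule J4 J3 J2 J1, one of the two optimal variants found in the example of Section 3.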
Fig. 4. An illustration of the classification rule for systems.

5. ESTIMATION OF THE EFFICIENCY OF THE APPROXIMATE ALGORITHM

The efficiency of the proposed algorithm was estimated by computer simulation for the class of hierarchic systems whose structure is described by a balanced binary graph (in such systems, all computational paths have the same length). The approach is based on random generation of examples. The number of computers forming the hierarchy was fixed (m = 15 or 31), the number of jobs n varied from 5 to 10, and, for each value of n in this interval, 100 examples were generated. The times of executing the jobs on each computer were sampled as random variables uniformly distributed in a given interval. For each generated example, schedules were constructed both by the proposed algorithm and by full enumeration of possibilities (an optimal schedule), and the two were compared with respect to the relative difference of their execution times.

The simulation results are given in Figs. 5–7. They show that, for example, for n = 5 and m = 15 (Fig. 5), in 53% of the cases the proposed algorithm constructed schedules that lose to the optimal schedule in duration by not more than 10%, and in 80% of the cases the constructed schedules lose by not more than 30%. The worst case was n = 6, when only for 37% of the examples were schedules obtained within 10% of the optimal one, and for 67% of the examples the difference exceeded 30%. When the algorithm was used for the system of 31 computers, in not less than 75% of the cases a schedule was constructed that loses to the optimal one in duration by not more than 30% (Fig. 7).

Fig. 5. The efficiency of the suboptimal algorithm for hierarchical systems consisting of 15 computers: the fractions of cases with a loss of less than 10%, 20%, and 30%.

Fig. 6. The efficiency of the suboptimal algorithm for pipeline systems: the fractions of cases with a loss of less than 5%, 10%, and 15% for 10 computers, and with a loss of less than 5% for 100 computers.

Fig. 7. The efficiency of the suboptimal algorithm for hierarchical systems consisting of 31 computers: the fractions of cases with a loss of less than 10%, 20%, and 30%.

The simulation results obtained for the pipeline subclass of hierarchic systems (Fig. 6) also seem useful. It is remarkable that, in this case, the performance of the suboptimal algorithm is essentially better: for almost all examples, it lost to the optimal algorithm by not more than 15%. This fact is quite explainable, because the number of assumptions on the properties of the system under consideration increases in the transition from the class of pipeline systems to the class of hierarchic systems. Indeed, in hierarchic systems, besides the presence of the domination relation, the existence of a critical path composed of the computers dominating in their levels is required. Clearly, in practice these assumptions generally do not hold, and then the notions of the properties of the system under consideration, and the proposed algorithm itself, become more and more approximate. The same figure also gives the simulation results for the case when the number of computers is greater by an order of magnitude (m = 100).

CONCLUSION

In this paper, a suboptimal scheduling algorithm is considered for hierarchic computational systems that requires practically no enumeration of possibilities and therefore has low time complexity. It is based on three enumeration-free optimal scheduling algorithms, each oriented to its own rather narrow base class of systems. The study of the efficiency of the algorithm by random sampling of examples has shown, in particular, that, when the algorithm is used for a system of 31 computers, in not less than 75% of the cases a schedule is constructed that loses to the optimal one in duration by not more than 30%.

REFERENCES

1. Scheduling Theory and Computers, Ed. by E. G. Coffman (Nauka, Moscow, 1984) [in Russian].
2. V. V. Toporkov, Models of Distributed Computations (Fizmatlit, Moscow, 2004) [in Russian].
3. W. Stallings, Operating Systems (Vil'yams, Moscow, 2002) [in Russian].
4. R. W. Conway, W. L. Maxwell, and L. W. Miller, Scheduling Theory (Nauka, Moscow, 1975) [in Russian].
5. V. S. Tanaev, Yu. N. Sotskov, and V. A. Strusevich, Scheduling Theory. Multi-Stage Systems (Nauka, Moscow, 1989) [in Russian].
6. V. I. Levin, Structural-Logical Methods of Investigation of Complex Systems with Application of Computers (Nauka, Moscow, 1987) [in Russian].
7. I. Yu. Miretskii, "Synthesis of Suboptimal Schedules for Sequential Systems," Izv. Ross. Akad. Nauk, Teor. Sist. Upr., No. 1 (2002) [Comp. Syst. Sci. 41 (1), 73–81 (2002)].
8. M. Mastrolilli and L. M. Gambardella, "Effective Neighborhood Functions for the Flexible Job Shop Problem," J. Scheduling 3 (2000).
9. M. Krone and K. Steiglitz, "Heuristic Programming Solution of a Flowshop Scheduling Problem," Oper. Res. 22 (3) (1974).
10. V. A. Kostenko and E. X. Gur'yanov, "Algorithm for Constructing Schedules of Exchange through a Bus with Centralized Control and Investigation of Its Efficiency," Programmirovanie, No. 6 (2005) [Program. Comput. Software, No. 6 (2005)].