WUJNS
Wuhan University Journal of Natural Sciences
Vol. 8 No. 1B 2003, 253-258
Article ID: 1007-1202(2003)01B-0253-06
A Parallel Global-Local Mixed Evolutionary Algorithm for Multimodal Function Optimization Based on Domain Decomposition

Wu Zhi-jian, Tang Zhi-long, Kang Li-shan
State Key Laboratory of Software Engineering, Wuhan University, Wuhan 430072, Hubei, China
Abstract: This paper presents a parallel two-level evolutionary algorithm based on domain decomposition for solving function optimization problems that have multiple solutions. The algorithm combines a global search with a local search in each sub-domain: the global search draws individuals closer to each optimum while keeping the diversity of the population, and the local search refines the local optimal solutions, called latent solutions, in each sub-domain. Finally, by selecting the global optimal solutions from the latent solutions of all sub-domains, all the optimal solutions can be found easily and quickly.
Key words: function optimization; GT algorithm; GLME algorithm; evolutionary algorithm; domain decomposition
CLC number: TP 301
Received date: 2002-11-13
Foundation item: Supported by the National Natural Science Foundation of China (60133010, 60073043, 70071042)
Biography: Wu Zhi-jian (1963-), male, Associate professor, research direction: parallel computing, evolutionary computation. E-mail: zjwu@public.wh.hb.cn
0 Introduction
As multi-individual random search methods, evolutionary algorithms [1,2] have been applied in many fields; their global optimization ability, inherent parallelism, and effectiveness make them widely used for function optimization. By imitating the natural evolutionary process, evolutionary algorithms avoid the drawbacks of some conventional numerical methods. Thanks to the growing recognition of the role evolutionary algorithms play in optimization, many variants have come into being. Nonetheless, the vast majority aim at obtaining a single optimal solution. In engineering applications, however, there are many complex problems that fall into the category of multimodal function optimization, which have many optimal solutions. Since an evolutionary algorithm maintains a population of solutions, it is naturally suited to multimodal optimization, but many evolutionary algorithms still converge to only one optimal solution. For solving multimodal optimization problems, niche techniques and subpopulation methods are often used, and population diversity must be considered seriously; several specific algorithms have been proposed [3-6]. In Ref. [7], K. Deb gave a detailed description of multimodal problems and summarized and discussed algorithms for solving them. In Ref. [5], to maintain population diversity, the authors introduced an immune
algorithm. In Ref. [8], a species conservation technique is introduced: the population is divided into several species according to similarity (two individuals are considered similar if the distance between them is less than a given parameter). In this paper, we propose a novel algorithm based on a global-local mixed search and a domain decomposition tactic [9] for solving multimodal function optimization problems.
1 Description of the Algorithm

In general, the global optimization problem for a nonlinear numerical function is formulated as follows:

    min f(X), X ∈ D
where D = {X | l_i ≤ x_i ≤ u_i, i = 1, 2, ..., n}, n is the dimension of the variable space, and D is termed the global search space. To solve this problem, we present a Parallel Global-Local Mixed Evolutionary Algorithm based on Domain Decomposition (PGLMEA_DD). The algorithm pays attention both to the domain decomposition and to the global-space and sub-space searches. We introduce a domain decomposition strategy and divide the global search domain into several sub-domains; the division can also be made according to the number of processors. Then a global evolution is conducted in each sub-domain so that every individual in the population draws closer to one of the optimal solutions of the sub-domain while the diversity of the population is maintained. After the global search, some distinct individuals in the sub-domain are selected to form niches. Two individuals are regarded as distinct according to their Euclidean distance and a preset parameter: if the distance between two individuals (solutions) is less than the parameter, they are viewed as the same. The parameter is variable, so that an appropriate number of solutions can be kept in each sub-domain. Next, a sub-space evolution is carried out in each niche, and the sub-optimal solutions obtained are kept as the latent solutions of the sub-domain. After the two-level evolution has been conducted in all sub-domains, individuals are chosen from the latent solutions to form the set of global optimal solutions. Because the GLME algorithm presented in Ref. [10] is easy to handle and more effective than other similar algorithms, we choose it to carry out the sub-domain evolution.
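To make the decomposition step concrete, the following is a minimal sketch in Python. The even, axis-aligned split and the function name `decompose_domain` are our assumptions; the paper leaves the decomposition scheme open (it may follow the number of processors or the shape of the domain).

```python
import itertools

def decompose_domain(lower, upper, splits):
    """Cut the box [lower, upper] into prod(splits) axis-aligned
    sub-domains: splits[i] equal pieces along coordinate i."""
    n = len(lower)
    sub_domains = []
    for idx in itertools.product(*(range(s) for s in splits)):
        lo = [lower[i] + idx[i] * (upper[i] - lower[i]) / splits[i]
              for i in range(n)]
        hi = [lower[i] + (idx[i] + 1) * (upper[i] - lower[i]) / splits[i]
              for i in range(n)]
        sub_domains.append((lo, hi))
    return sub_domains

# A 3 x 3 cut of the 2-D Shubert domain [-10, 10]^2 yields q = 9
# sub-domains, matching the n = 2 column of Table 1.
assert len(decompose_domain([-10.0, -10.0], [10.0, 10.0], [3, 3])) == 9
```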
In the GLME algorithm, the GT algorithm is selected to conduct the global search. As described in Ref. [11], the GT algorithm is better than traditional evolutionary algorithms in the simplicity of its structure, its robustness, the accuracy of its solutions, and its wide range of application. For the sub-space evolution in GLME, an improved GT algorithm (the IGT algorithm) further improves the precision of the solutions and the speed of convergence. We therefore describe the GT algorithm, the IGT algorithm, and the GLME algorithm in turn. First, the GT algorithm, in the same form as presented in Ref. [11]:

GT algorithm:
Begin
  Step 1: Randomly and uniformly produce N individuals in the search space to form the initial population P(0), and set t = 0;
  Step 2: If the termination condition is satisfied, goto Step 4;
  Step 3: Select M individuals X_1, X_2, ..., X_M randomly from P(t) to form a subspace V = {X | X ∈ D, X = Σ_{i=1}^{M} a_i X_i}, where the a_i satisfy Σ_{i=1}^{M} a_i = 1, -0.5 ≤ a_i ≤ 1.5. Produce a new individual randomly in V. If the new individual is better than the worst one in P(t) (i.e., its fitness is smaller for a minimization problem or bigger for a maximization problem), substitute it for the worst one to form the new population P(t+1); else P(t+1) = P(t). Set t+1 → t and goto Step 2;
  Step 4: Output the best solution.
End

In Step 3 of the GT algorithm, the M individuals X_1, X_2, ..., X_M are selected randomly from P(t). Ref. [10] made an improvement to the GT algorithm (the IGT algorithm): the best individual in P(t) is always included among the M individuals, while the other M-1 individuals are selected randomly from P(t). The purpose is to speed up the convergence of the solutions. The GT and IGT algorithms behave well in finding the global optimal solution; however, faced with a multimodal function optimization problem, they can hardly discover all the optimal solutions. To solve multimodal function optimization problems, the two algorithms above are combined to form a new algorithm, namely the global-local mixed evolutionary (GLME) algorithm [10].
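The following sketch shows one way to implement the GT iteration in Python, assuming minimization. The weight-sampling loop and the out-of-box resampling policy are our assumptions; the paper only requires the child to lie in V (and hence in D).

```python
import random

def gt_evolve(f, lower, upper, N=100, M=10, max_iter=100_000):
    """A sketch of the GT algorithm for minimising f over the box
    [lower, upper]. For the IGT variant, force the current best
    individual into `parents` before sampling the other M - 1."""
    n = len(lower)
    pop = [[random.uniform(lower[i], upper[i]) for i in range(n)]
           for _ in range(N)]
    fit = [f(x) for x in pop]
    for _ in range(max_iter):                  # termination condition
        parents = random.sample(pop, M)        # Step 3: M random parents
        # weights a_1..a_M with sum(a) = 1 and -0.5 <= a_i <= 1.5
        while True:
            a = [random.uniform(-0.5, 1.5) for _ in range(M)]
            s = sum(a)
            if abs(s) > 1e-9:
                a = [ai / s for ai in a]
                if all(-0.5 <= ai <= 1.5 for ai in a):
                    break
        child = [sum(a[k] * parents[k][i] for k in range(M))
                 for i in range(n)]
        if any(not lower[i] <= child[i] <= upper[i] for i in range(n)):
            continue                           # resample if child leaves D
        fc = f(child)
        worst = max(range(N), key=lambda j: fit[j])
        if fc < fit[worst]:                    # replace the worst individual
            pop[worst], fit[worst] = child, fc
    return min(pop, key=f)                     # Step 4: best solution
```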
In the GLME algorithm, as described above, the GT algorithm performs the global evolution to generate the niches, while the IGT algorithm improves the precision of the latent solutions in each niche and speeds up the sub-space evolution. The GLME algorithm is described as follows [10]:

GLME algorithm:
Begin
  Step 1: Global evolution (using the GT algorithm). When the evolutionary generation gen is greater than global_gen, goto Step 2;
  Step 2: Select p different individuals from P(gen):
    Step 2.1: Assign ε an appropriately small positive number and choose a ratio factor α (> 0);
    Step 2.2: Select all individuals from P(gen) such that ||X_i - X_j|| > ε for every pair (i, j), where ||X_i - X_j|| denotes the Euclidean distance;
    Step 2.3: If p > sub_max_num then ε(1+α) → ε, goto Step 2.2; else if p < sub_min_num then ε/(1+α) → ε, goto Step 2.2; else sort X_1, X_2, ..., X_p in descending order by fitness.
  Step 3: Sub-space evolution (using the IGT algorithm):
    For i = 1, 2, ..., p do in parallel
      Step 3.1: Determine the search sub-space D_i = {X | X ∈ D, ||X - X_i|| ≤ ε(1 + (i-1)/(p-1))/2};
      Step 3.2: Produce N_1 individuals randomly in D_i to form a sub-population P_i;
      Step 3.3: If the fitness of the best individual equals that of the worst one in P_i, goto Step 3.5;
      Step 3.4: Select M_1 - 1 individuals X_1, X_2, ..., X_{M_1-1} randomly from P_i, together with X_{M_1} (the best individual in P_i), to form a subspace V_i = {X | X ∈ D_i, X = Σ_{j=1}^{M_1} a_j X_j}, where Σ_{j=1}^{M_1} a_j = 1, -0.5 ≤ a_j ≤ 1.5. Produce a new individual randomly in V_i; if the new individual is better than the worst one in P_i, substitute it for the worst one. Goto Step 3.3;
      Step 3.5: X_best → X_i'.
    End do
  Step 4: Get the multiple solutions to the optimization problem by rejecting the duplicates among X_1', X_2', ..., X_p'.
End

Because the above algorithm is simple to understand, easy to implement, and effective at discovering multiple optimal solutions, we adopt it for the sub-domain evolution. As a result, a novel parallel global-local mixed evolutionary algorithm based on domain decomposition (PGLMEA_DD) comes into being.
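Step 2 of GLME, the adaptive niche-seed selection, can be sketched as follows. The greedy best-first scan is our reading of "select all individuals such that ||X_i - X_j|| > ε", and the sketch assumes the target count [sub_min_num, sub_max_num] is achievable for some ε.

```python
import math

def euclid(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def select_niche_seeds(pop, fit, eps, alpha, sub_min_num, sub_max_num):
    """GLME Step 2: keep individuals that are pairwise more than eps
    apart, adapting eps by the factor (1 + alpha) until the count p
    falls into [sub_min_num, sub_max_num]."""
    order = sorted(range(len(pop)), key=lambda j: fit[j])   # best first
    while True:
        seeds = []
        for j in order:
            if all(euclid(pop[j], pop[k]) > eps for k in seeds):
                seeds.append(j)
        if len(seeds) > sub_max_num:
            eps *= 1 + alpha          # too many niches: coarsen
        elif len(seeds) < sub_min_num:
            eps /= 1 + alpha          # too few niches: refine
        else:
            # seeds are already ordered by fitness, as Step 2.3 requires
            return [pop[j] for j in seeds], eps
```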
In our algorithm, we adopt a domain decomposition (space decomposition) strategy and divide the global search domain into several sub-domains in whatever way is convenient for solving the problem. The GLME algorithm is then conducted in each sub-domain independently, and none of the sub-domains passes any information to the others at any time. That is to say, our algorithm can be implemented in a completely asynchronous parallel fashion. The structure of PGLMEA_DD is as follows:

PGLMEA_DD algorithm:
Begin
  Step 1: Decompose the global search domain (global search space) D into q sub-domains D_1, D_2, ..., D_q.
  Step 2: For i = 1 to q do
      Conduct the GLME algorithm in sub-domain D_i;
    End for.
  Step 3: Get the multiple global optimal solutions by rejecting the non-global optimal solutions and the duplicates generated in Step 2.
  Step 4: Output the solutions to the problem.
End

Obviously, the structure of our algorithm is simple to understand and easy to implement. The parameters (q, N, M, global_gen, sub_gen, N_1, M_1, sub_max_num, sub_min_num, ε, α) can be tuned to a specific problem with the help of a few trial computations; some can also be determined from an understanding of the problem or the algorithm. For instance, the number of sub-domains q can be chosen for convenience in solving the problem: if we intend to parallelize the algorithm, we can set q equal to the number of processors, or we can determine it from the shape of the search domain of the specific problem. Our computational experiments indicate that global_gen in Step 1 of the GLME algorithm plays a key role in maintaining appropriate population diversity. As described in Ref. [10], if the number of global evolution generations is too large, we may fail to maintain the diversity of the solutions; on the other hand, if it is too small, the individuals obtained will be far from the optimal solutions. Either outcome has a negative effect on the subsequent sub-space evolution in the sub-domain. Therefore, finding all sub-domain optimal solutions depends strongly on the number of global evolution generations in each sub-domain. Fortunately, our numerical experiments indicate that a suitable range can be determined through a few computations. The maximum and minimum numbers of distinct individuals kept in a sub-domain, i.e. sub_max_num and sub_min_num, also play a key role in generating as many sub-domain optimal solutions as possible. If sub_max_num is too small, some optimal solutions that may be global optima can be excluded; if it is too large, much time is wasted on sub-space evolution in niches that are not worth maintaining. The same holds for sub_min_num. If we know in advance the number of global optimal solutions of the problem, we can take sub_max_num somewhat larger than that number and sub_min_num equal to it; if we know nothing beforehand, we can assign large values to both parameters and then adjust them by debugging and analyzing the program. As for ε, we assign it a very small positive value which remains variable throughout the GLME algorithm until the number of solutions falls into the range between sub_min_num and sub_max_num. In a word, it is feasible to determine these parameters through an understanding of the problem or through several numerical computations.

Now let us look at the parallelism of our algorithm. Apparently, PGLMEA_DD exploits parallelism much more than the GLME algorithm does, even though GLME has good parallelism in itself. As described previously, the number of sub-domains into which the global domain is divided can be chosen to match the number of processors. Because the GT algorithm has inherent parallelism [12], the GLME algorithm can be carried out in parallel too [10].
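A minimal parallel driver for PGLMEA_DD might look as follows. It assembles the sketches above; `glme()` stands for a hypothetical routine combining the GT global search, niche selection, and IGT refinement, and the tolerances are ours, not fixed by the paper.

```python
from multiprocessing import Pool

def run_glme(task):
    """Worker: run the whole GLME procedure in one sub-domain.
    glme() is assumed to return the latent solutions of the sub-domain."""
    f, lo, hi, params = task
    return glme(f, lo, hi, **params)

def pglmea_dd(f, lower, upper, splits, params, procs=4,
              tol=1e-6, dup_eps=1e-3):
    """Sketch of PGLMEA_DD: decompose D, run GLME in every sub-domain
    with no communication, then keep global optima and drop duplicates."""
    tasks = [(f, lo, hi, params)
             for lo, hi in decompose_domain(lower, upper, splits)]
    with Pool(procs) as pool:                  # independent workers
        latent = [x for sols in pool.map(run_glme, tasks) for x in sols]
    best = min(f(x) for x in latent)
    kept = []
    for x in (x for x in latent if f(x) <= best + tol):   # Step 3
        if all(euclid(x, y) > dup_eps for y in kept):
            kept.append(x)
    return kept                                # Step 4
```

Because the sub-domain runs never communicate, the `Pool` can be replaced by fully asynchronous processes on a parallel machine without changing the result.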
2 Numerical Experiments

Problem 1. Shubert function:

    min f(X) = Π_{i=1}^{n} Σ_{j=1}^{5} j cos[(j+1)x_i + j]

where -10 ≤ x_i ≤ 10, i = 1, 2, ..., n. There are 3, 18, 81, and 324 different global optimal solutions for n = 1, 2, 3, 4, respectively. We carried out 10 numerical experiments for each value of n. Table 1 shows the parameters and Table 2 shows the results of the experiments. Fig. 1(a)-(d) shows the individuals' distribution at different stages for problem 1 when n = 2.

Problem 2.

    min f(x, y) = |Π_{i=0}^{5} (x² - i²)| + |Π_{j=0}^{5} (y² - j²)|

where -10.0 ≤ x, y ≤ 10.0.
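For reference, both test functions are easy to code. Note that the closed form of problem 2 is our reconstruction of the garbled equation; it is consistent with the paper's statements that the problem has 121 global optimal solutions and optimal value 0.

```python
import math

def shubert(x):
    """Problem 1: n-dimensional Shubert function on [-10, 10]^n."""
    prod = 1.0
    for xi in x:
        prod *= sum(j * math.cos((j + 1) * xi + j) for j in range(1, 6))
    return prod

def problem2(x, y):
    """Problem 2 as reconstructed: absolute products with zeros at
    0, +-1, ..., +-5 in each variable, giving 11 * 11 = 121 global
    minima of value 0 on [-10, 10]^2."""
    px = abs(math.prod(x * x - i * i for i in range(6)))
    py = abs(math.prod(y * y - j * j for j in range(6)))
    return px + py

# every integer pair in {0, +-1, ..., +-5}^2 is a global minimiser
assert problem2(3.0, -5.0) == 0.0
```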
Table 1  Parameters adopted in experiments for problem 1

    Parameters     n=1     n=2     n=3      n=4
    q              3       9       27       81
    N              100     100     100      250
    M              10      10      10       10
    global_gen     2 000   2 000   5 000    50 000
    sub_gen        5 000   5 000   20 000   100 000
    sub_max_num    5       10      10       10
    sub_min_num    1       3       4        5
    ε              0.5     0.5     0.5      0.005
    α              0.2     0.2     0.2      0.05
    N_1            100     100     100      150
    M_1            10      10      10       10
Table 2  Results of experiments for problem 1

    Results                                            n=1                  n=2                   n=3                     n=4
    Number of optimal solutions                        3                    18                    81                      324
    Average time for each experiment for GLME/s        0.1                  17.0                  1 909.3                 /
    Average time for each experiment for PGLMEA_DD/s   0.1                  6.6                   200.8                   822.5
    Optimal value                                      -12.870 885 497 726  -186.730 908 831 024  -2 709.093 505 572 829  -39 303.550 054 363 201
[Figure]
Fig. 1  Individuals' distribution at different evolutionary stages for problem 1 (coordinates x_1, x_2 ∈ [-10, 10]): (a) initial population; (b) individuals' distribution after the global search; (c) individuals' distribution after the local evolution; (d) optimal solutions generated by our algorithm
Problem 2 was constructed by the authors. For this problem, there exist 121 global optimal solutions. Ten numerical experiments were conducted; the parameters are shown in Table 3 and the results in Table 4. When using the GLME algorithm alone, we found only about 2/3 of the optimal solutions to problem 2.
Table 3  Parameters adopted in experiments for problem 2

    q    N    M   global_gen  sub_gen  sub_max_num  sub_min_num  ε    α    N_1  M_1
    25   800  10  10 000      100 000  20           10           1.0  0.2  200  10
Fig. 2(a)-(d) shows the individuals' distribution at different stages for problem 2. From Table 2 and Table 4 it can be concluded that the PGLMEA_DD algorithm is more effective and robust than the GLME algorithm.
Notes: 1) All experiments were run on a PC with a PIV 1.5 GHz CPU. If the algorithm were conducted on a parallel computer, the run time for PGLMEA_DD would be reduced greatly. 2) For problem 1, our many numerical experiments lead us to conjecture that the number of global optimal solutions equals n·3ⁿ (indeed, 1·3 = 3, 2·3² = 18, 3·3³ = 81, and 4·3⁴ = 324, matching Table 2).
Table 4  Results of the experiments for problem 2

    Number of optimal solutions                        121
    Optimal value                                      0.000 000 000 000
    Average time for each experiment for PGLMEA_DD/s   178
[Figure]
Fig. 2  Individuals' distribution at different evolutionary stages for problem 2: (a) initial population; (b) individuals' distribution after the global search; (c) individuals' distribution after the local evolution; (d) optimal solutions generated by our algorithm
3) For problem 1 (problem 2), the points in Fig. 1(c) (Fig. 2(c)) are so close to the optimal solutions that they can hardly be distinguished from the points in Fig. 1(d) (Fig. 2(d)); nevertheless, the number of points in Fig. 1(c) (Fig. 2(c)) must be greater than or equal to that in Fig. 1(d) (Fig. 2(d)).
3 Conclusions
Compared with other similar algorithms, the algorithm constructed here proves more effective, reliable, and stable. In addition, it makes it possible to obtain all the global optimal solutions within a fairly short period of time. As in Ref. [8], other distance definitions can also be used, so long as they are distances in the mathematical sense. The algorithm can be parallelized asynchronously. Although the algorithm is very effective for some problems, there remain open problems, such as how to divide the global domain adaptively and how to apply the algorithm to constrained optimization problems. These will be discussed further in other papers.
References
[1] Bäck T, Fogel D, Michalewicz Z. Handbook of Evolutionary Computation. Bristol: Institute of Physics Publishing and New York: Oxford University Press, 1997.
[2] Michalewicz Z. Genetic Algorithms + Data Structures = Evolution Programs. Berlin, Heidelberg, New York: Springer-Verlag, 1996.
[3] Fonseca C M, Fleming P J. Multiobjective Optimization and Multiple Constraint Handling with Evolutionary Algorithms, Part I: A Unified Formulation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 1998, 28(1): 26-37.
[4] Fonseca C M, Fleming P J. Multiobjective Optimization and Multiple Constraint Handling with Evolutionary Algorithms, Part II: Application Example. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 1998, 28(1): 38-47.
[5] Fukuda T, Mori K, Tsukiyama M. Parallel Search for Multi-modal Function Optimization with Diversity and Learning of Immune Algorithms. In: Dasgupta D, ed. Artificial Immune Systems and Their Applications. Berlin, Heidelberg, New York: Springer-Verlag, 1999. 210-220.
[6] Zitzler E, Deb K, Thiele L. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolutionary Computation, 2000, 8(2): 125-148.
[7] Deb K. Multi-Objective Optimization Using Evolutionary Algorithms. Chichester: John Wiley & Sons, Ltd, 2001. 143-159.
[8] Balazs M E, Li Jian-ping. The Effect of Distance Measure in a GA with Species Conservation. Proceedings of the 2001 IEEE Congress on Evolutionary Computation, 2001. 67-74.
[9] Kang Li-shan, Sun Le-lin, Chen Yu-ping. Asynchronous Parallel Algorithms for Solving Mathematical and Physical Problems. Beijing: Science Press, 1985 (Ch).
[10] Wu Zhi-jian, Kang Li-shan, Zou Xiu-fen. A Parallel Global-Local Evolutionary Algorithm for Multimodal Function Optimization. Proceedings of the Fifth International Conference on Algorithms and Architectures for Parallel Processing, Beijing, 2002. 247-250.
[11] Guo Tao, Kang Li-shan, Li Yan. A New Algorithm for Solving Function Optimization Problems with Inequality Constraints. Journal of Wuhan University (Natural Science Edition), 1999, 45(5B): 771-775 (Ch).
[12] Kang Li-shan, Kang Zhuo, Li Yan, et al. Asynchronous Parallelization of Guo's Algorithm for Function Optimization. Proceedings of the 2000 IEEE Congress on Evolutionary Computation, 2000. 783-789.