Soft Comput DOI 10.1007/s00500-015-1951-7
METHODOLOGIES AND APPLICATION
A novel approach for optimization in dynamic environments based on modified cuckoo search algorithm

Nazanin Fouladgar1 · Shahriar Lotfi2
© Springer-Verlag Berlin Heidelberg 2015
Abstract Cuckoo search algorithm is one of the well-known algorithms in the area of swarm intelligence. It has been applied widely to static optimization problems. However, a great number of real-world optimization problems are dynamic, and algorithms designed for static environments cannot solve them. In this paper, a novel multi-swarm algorithm based on a modified cuckoo search algorithm (MCSA) is proposed to find and track the optimum (or optima) of the problem space in dynamic environments. Each swarm performs its optimization process based on MCSA. A deactivation mechanism is also utilized to improve the efficiency of this approach. Finally, the proposed algorithm is tested on the moving peak benchmark, one of the best-known benchmarks of this domain, and compared with several prominent algorithms in the area. The results indicate the superiority of this approach.

Keywords Swarm intelligence · Dynamic optimization problems · Multi-swarm · Cuckoo search · Moving peak benchmark
Communicated by V. Loia.
Nazanin Fouladgar
[email protected]
Shahriar Lotfi
[email protected]
1 Computer Engineering Department, University College of Nabi Akram, Tabriz, Iran
2 Computer Science Department, University of Tabriz, Tabriz, Iran
1 Introduction

A great number of real-world optimization problems are dynamic: the optimum can change over time in the problem space. Many instances demonstrate the importance of this domain, such as developing a model for financial statements in variable market conditions, data mining while databases are updated continuously, scheduling problems with dynamic resources, and vehicle routing problems. To solve such problems, the designed algorithms should not only find the optimum but also be able to track it over time. Accordingly, designing optimization algorithms for dynamic environments raises many challenges, and researchers have proposed various mechanisms and ideas over the past few decades (Nguyen et al. 2012). Employing evolutionary and swarm intelligence algorithms to solve dynamic optimization problems (DOPs) is a hot topic for researchers. Nguyen (2010) defines dynamic optimization problems as follows: "Given a dynamic problem f t , an optimization algorithm G to solve f t , and a given optimization period [t begin ; t end ], f t is called a dynamic optimization problem (DOP) in the period [t begin ; t end ] if during [t begin ; t end ] the underlying fitness landscape that G uses to represent f t changes and G has to react to this change by providing new optimal solutions". DOPs can be categorized from different perspectives according to their behavior. Among the most prominent, and the hardest, are those with an unknown number of peaks. In such environments the height, width and position of the peaks change, so each peak must be considered a potential optimum. Thus, in the best case, the algorithm should cover all peaks to determine the global optimum after each environmental change. Until now, numerous methods have been proposed to solve DOPs. One of the
largest categories of them is based on meta-heuristics, and more precisely on evolutionary computing and swarm intelligence algorithms. Meta-heuristic algorithms developed for optimization in dynamic environments embed mechanisms in their structures enabling them not only to deal with the challenges of these environments but also to find peaks and track them throughout the optimization process. Invalid memory is among the challenges researchers encounter. Its main cause is the invalidity of the fitness values that were saved before a change, so the optimization process cannot simply be continued. Another challenge, one of the most important, is loss of diversity in the population. It arises from the convergent nature of swarm intelligence and evolutionary algorithms: after the agents converge to the target, their search concentrates in the covered space, which reduces their ability to move toward the new position of the target after an environmental change. Meeting the challenge of diversity loss is one of the most important issues in designing algorithms for solving dynamic optimization problems. A further challenge is the existence of several potential optima: algorithms in this domain should find peaks quickly, cover them, and track all of them after each environmental change. Given the possibly large number of peaks in some problems, finding and tracking all peaks is time-consuming and computationally expensive. By improving and modifying the structures of existing algorithms, researchers try to increase their efficiency and overcome these challenges. In this paper, a novel algorithm inspired by the cuckoo search algorithm (CS) is proposed for optimization in dynamic environments.
The cuckoo search algorithm is a swarm intelligence algorithm introduced by Yang and Deb (2009). It is a population-based algorithm that simulates the breeding behavior of cuckoos. It has been used in engineering optimization (Yang and Deb 2010), structural optimization (Gandomi et al. 2013), multi-objective optimization (Yang and Deb 2013), design optimization (Gandomi et al. 2012), discrete problems (Ouaarab et al. 2013), business optimization (Yang et al. 2012) and topology optimization (Yang and Deb 2012). The proposed algorithm is presented in two phases. The first focuses on a novel modified cuckoo search algorithm (MCSA) that meets the major requirements for solving DOPs. The second builds a multi-swarm algorithm on top of the first phase's algorithm. The resulting algorithm contains various mechanisms to meet the challenges and requirements of dynamic environments. Its efficiency has been evaluated on the moving peak benchmark (MPB) (Branke 1999), one of
the most popular benchmarks in this domain, which provides different parameters for constructing various dynamic environments. Comparing the results of the proposed algorithm with several well-known algorithms of this domain shows its superiority. The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 explains the proposed algorithm. Experiments and results are presented in Sect. 4, and Sect. 5 concludes the paper and outlines future work.
2 Related works

Until now, numerous algorithms have been proposed by researchers to solve DOPs; an overview follows. Blackwell and Branke (2006) presented two algorithms based on particle swarm optimization (Kennedy and Eberhart 1995). They are among the first algorithms proposed in this domain and have inspired many researchers. One algorithm uses quantum particles and the other charged particles, alongside normal particles, to increase population diversity. Moreover, both algorithms employ the multi-swarm idea to cover the existing peaks of the problem space: there are several swarms, each with the duty of covering one of the peaks. An exclusion mechanism is also applied to prevent more than one swarm from residing on the same peak. To do so, the Euclidean distance between the best positions of each pair of swarms is calculated, and if it is less than a predefined threshold, the swarm with the worse fitness value is reinitialized. Anti-convergence is another operator used in that paper, applied when the number of swarms is less than the number of peaks. In these circumstances, once all swarms of the population have converged, the swarm residing on the worst peak is reinitialized. So if an uncovered peak becomes the highest peak after an environmental change, the algorithm is still able to discover it. Blackwell et al. (2008) extended the algorithm of Blackwell and Branke (2006) with self-adaptation of the number of swarms. In the previous approach the number of swarms was constant, so the efficiency of the algorithm decreased whenever the number of peaks differed from the number of swarms.
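The exclusion test described above can be sketched as follows. This is a simplified illustration with our own names (swarms represented as lists of positions, minimization assumed), not code from the cited papers:

```python
import math

def dist(a, b):
    # Euclidean distance between two positions
    return math.dist(a, b)

def is_converged(swarm, r_conv):
    """A swarm is converged when its two farthest cuckoos/particles
    are closer than r_conv."""
    return max(dist(x, y) for x in swarm for y in swarm) < r_conv

def apply_exclusion(swarms, fitness, r_excl, reinit):
    """If the best positions of two swarms are closer than r_excl,
    replace the worse swarm with a freshly initialized one."""
    bests = [min(s, key=fitness) for s in swarms]
    for i in range(len(swarms)):
        for j in range(i + 1, len(swarms)):
            if dist(bests[i], bests[j]) < r_excl:
                worse = i if fitness(bests[i]) > fitness(bests[j]) else j
                swarms[worse] = reinit()
                bests[worse] = min(swarms[worse], key=fitness)
    return swarms
```

The same two tests reappear, with the radii r_conv and r_excl, in the proposed MMCSA algorithm of Sect. 3.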
In the self-adaptation approach, the algorithm starts with one swarm, and after it converges, another swarm is created in the problem space. This process continues until all peaks are covered; the number of swarms is thus determined by the number of peaks, which considerably improves the efficiency of the algorithm.
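The spawning rule of this self-adaptive scheme can be sketched as follows (our own minimal formulation, assuming a convergence test and a swarm factory are available):

```python
def manage_swarms(swarms, is_converged, new_swarm):
    """Self-adaptive control of the number of swarms: as long as every
    existing swarm has converged to a peak, add one new free swarm."""
    if all(is_converged(s) for s in swarms):
        swarms.append(new_swarm())
    return swarms
```

Called once per iteration, this keeps exactly one non-converged "free" swarm exploring while the converged swarms sit on peaks.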
In Yazdani et al. (2012), an algorithm based on the artificial fish swarm algorithm (AFSA) (Li et al. 2002) was proposed for optimization in dynamic environments. First, a modified AFSA is designed to increase the convergence rate and to suit optimization in dynamic environments. To find and track the peaks, the authors use the self-adaptive multi-swarm mechanism (Blackwell et al. 2008). To increase diversity after each environmental change, the artificial fishes (AFs) of a swarm are distributed randomly around the swarm's previous best AF position. In Yazdani et al. (2014), the authors extended this algorithm and presented a new version of artificial fish swarms for optimization in dynamic environments. A sleeping–awaking mechanism is one of the most notable changes of the new version: as soon as a swarm residing on a non-optimum peak gets close to its goal, it falls asleep and remains inactive until the next environmental change occurs. This leaves enough time for the best swarm to perform a more accurate local search, saving time and computational resources. Other methods have been presented to control the behavior of multi-swarm procedures in the problem space. In Oppacher and Wineberg (1999), several small subpopulations perform global search and one large subpopulation tracks the changing peaks, all within an algorithm called the shifting balance genetic algorithm (SBGA). In Yazdani et al. (2013b), PSO has been applied with a finder–tracker method to find and track the peaks, a local search method to increase the accuracy of the best swarm, and a sleeping–awaking mechanism to conserve evaluations for better results. Another technique, self-organizing scouts (SOS), was proposed in Branke et al.
(2000); it is contrary to the two previous methods, applying one subpopulation for global search and several small subpopulations for tracking changes. This approach has been combined with meta-heuristics such as the genetic algorithm and particle swarm optimization in Lung and Dumitrescu (2007). In Changhe and Shengxiang (2008) and Kamosi et al. (2010b, a), methods similar to SOS have been presented, called FMSO, mPSO and HmSO. In these papers, one parent swarm explores the problem space to find promising areas, and a number of child swarms perform local search. In Li et al. (2006) and Parrott and Li (2004, 2006), speciation-based PSO approaches have been proposed for optimization in dynamic environments. Furthermore, in Li (2004) species are formed adaptively according to feedback obtained from the multimodal fitness landscape. In Bird and Li (2007), a regression-based method called rSPSO, also built on the speciation technique, was employed to increase the convergence rate. In this method each subpopulation is created in a hypersphere centered on its best particle with a computable radius. In Li and Yang (2009), a clustering-based method was proposed to create subpopulations; it was developed further in Yang and Li (2010) and presented as PSO-CP, where simplifications such as eliminating the learning process and reducing the number of clustering phases from two to one are applied.
3 Proposed algorithm

In this section, a novel algorithm based on the cuckoo search algorithm is presented for optimization in dynamic environments. First, some structural changes are made to CS, yielding MCSA. Then, a multi-swarm algorithm is proposed based on MCSA.

3.1 Modified cuckoo search algorithm

Designing a dynamic algorithm requires a basic optimization algorithm of the kind used in static environments, to which mechanisms providing the capability of optimization in dynamic environments are then added. Here, this basic algorithm is a novel modified version of the cuckoo search algorithm. Given the characteristics of dynamic environments and the multi-swarm model used in this paper, the basic algorithm should have a high convergence rate so that it can find the peaks in the problem space. Moreover, each population of cuckoos should be able to reduce its diversity while approaching the goal, so that it can perform a high-quality local search and track the peaks after an environmental change. The basic algorithm is thus designed to meet the requirements of optimization in dynamic environments; MCSA cannot be expected to be highly efficient on arbitrary static problems. The major criteria guiding the changes to the standard CS algorithm (Yang and Deb 2009) are a higher convergence rate and a strong local-search capability. In MCSA, a population of cuckoos performs the following process in each iteration of the algorithm:

1. Each cuckoo lays a random number of eggs, in contrast to the standard cuckoo search in which each cuckoo lays just one egg. This increases the convergence rate and the local-search capability.

2. Each cuckoo lays its egg(s) in random regions around itself.
This region is determined by a radius called rlaying, which is initialized with the exclusion distance (Blackwell and Branke 2006) to enhance the search performance and is equal for all cuckoos. During the MCSA procedure the rlaying size is not constant but is decreased by a self-adaptive mechanism. Two issues should be noted here. First, the reason for using rlaying as a laying area is to increase the local-search capability; moreover, it limits the search space of each swarm, which is important for covering and tracking a peak in dynamic optimization problems. Indeed, with an inappropriate search space the swarm may leave its covered peak, which is undesirable in dynamic environments. Second, the reason for decreasing the rlaying size is not only to increase the local-search capability but also to match the swarm to the space being searched for better positions. The proposed self-adaptive mechanism utilizes the cuckoos' success rate in the hatching process and is explained after that process below. The following equation (Eq. (1)) determines each egg position within the rlaying radius around the cuckoo:

Egg_Pos_d = X_i,d + rlaying × Rand_d(−1, 1)    (1)

in which the egg position (Egg_Pos) is determined in each of the D dimensions of the problem space, X_i,d is the position of cuckoo i in dimension d, and Rand produces a D-dimensional vector of random numbers in the range (−1, 1) with a uniform distribution.

3. In this step, each cuckoo first lays one egg within the rlaying radius and its fitness value is evaluated. If it is better than the fitness of the laying cuckoo, the next egg is laid around the current egg and the center of rlaying moves to the current egg's position; otherwise, the next egg is laid around the cuckoo's own position. Therefore, in the best case there are as many improved positions as the number of eggs, and in the worst case the previous position remains unchanged. Note that in MCSA only the best position, whether that of the laying cuckoo or of its best egg, survives each laying process; the other eggs are discarded, so the population size remains constant. As mentioned, the size of rlaying is adapted in each egg-laying process. To this end, a self-adaptive mechanism is proposed that uses the success rate of the swarm's cuckoos as the adaptive parameter: the ratio of the number of times the cuckoos of the swarm improve their positions by laid eggs to the total number of eggs laid during the hatching process. The following equation gives the rlaying value in each iteration of the algorithm:

rlaying,Itr+1 = rlaying,Itr × [L_Min + (L_Max − L_Min) × (Σ_{i=1..N} Success_i / Σ_{i=1..N} EggNumber_i)]    (2)

in which Success_i is the number of times cuckoo i laid eggs with better positions in iteration Itr, EggNumber_i is the total number of eggs cuckoo i laid in that iteration, N is the population size, rlaying,Itr is the value of rlaying in iteration Itr, and L_Min and L_Max are the lower and upper limits of the shrink factor. According to Eq. (2), the higher the success rate, the more suitable the current rlaying size is, so it is decreased at a lower rate; conversely, the lower the success rate, the faster rlaying shrinks, to increase the local-search capability.

4. After the eggs are laid, all cuckoos move toward the best cuckoo of the population according to the following equation, which leads to a dramatic increase in the convergence rate:

X_i = X_i + [Rand_D(0, 2) ⊗ (Best_X − X_i)]    (3)

in which X_i is the position of cuckoo i in the D-dimensional space and Best_X is the position of the best cuckoo of the swarm. Rand_D(0, 2) produces a D-dimensional vector of uniformly distributed random numbers in the range (0, 2), and ⊗ denotes entrywise multiplication.

The pseudo-code of MCSA is shown in Fig. 1 and is executed as described above. The structure of this algorithm meets the major goals of optimization in dynamic environments. However, it has some weaknesses, the most important being its inability to escape local optima. Given the requirements of the dynamic environments considered in this paper and the use of a multi-swarm mechanism in the final algorithm, MCSA does not need to solve this problem.

3.2 Multi-swarm modified cuckoo search algorithm based on MCSA (MMCSA)

The previous section described MCSA, a basic algorithm suitable as the foundation of an optimization algorithm for dynamic environments. Due to the specific properties of dynamic environments and their various challenges, some mechanisms must be added to the basic algorithm to achieve acceptable efficiency on dynamic problems. In the dynamic environments considered in this paper, there are a number of peaks whose position, width and height change at discrete time intervals. As such, each peak may turn
MCSA Procedure
1:  for each cuckoo i in [1 .. N]: initialize Xi randomly
2:  repeat
3:    for each cuckoo i in [1 .. N]: determine EggNumberi
4:    for each cuckoo i in [1 .. N]:
5:      for j = 1 to EggNumberi:
6:        lay egg j within rlaying, say EggPositionj, by Eq. (1) and compute its fitness value
7:        if f(EggPositionj) < f(Xi) then Xi = EggPositionj
8:    update rlaying based on Eq. (2)
9:    for each cuckoo i in [1 .. N]: apply Eq. (3)
10: until stopping criterion is met

Fig. 1 Pseudo-code of MCSA
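To make the procedure concrete, the following Python sketch runs MCSA on a toy minimization problem (the sphere function). All names are ours, and details such as how EggNumberi is drawn, the bounds, and the parameter values are illustrative assumptions rather than the paper's exact setup:

```python
import random

def sphere(x):                       # toy fitness, to be minimized
    return sum(v * v for v in x)

def mcsa(f, dim=2, n=5, min_eggs=2, max_eggs=4,
         r_laying=5.0, l_min=0.5, l_max=1.0, iters=200):
    X = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        successes = eggs_total = 0
        for i in range(n):
            for _ in range(random.randint(min_eggs, max_eggs)):
                eggs_total += 1
                # Eq. (1): lay an egg within r_laying around the cuckoo
                egg = [X[i][d] + r_laying * random.uniform(-1, 1)
                       for d in range(dim)]
                if f(egg) < f(X[i]):          # keep only improvements
                    X[i] = egg
                    successes += 1
        # Eq. (2): self-adaptive shrinking of the laying radius
        r_laying *= l_min + (l_max - l_min) * successes / eggs_total
        # Eq. (3): move every cuckoo toward the best one
        best = min(X, key=f)
        X = [[x[d] + random.uniform(0, 2) * (best[d] - x[d])
              for d in range(dim)] for x in X]
    return min(X, key=f)

best = mcsa(sphere)
```

Here the inner loop is Eq. (1), the radius update is Eq. (2), and the final move toward the best cuckoo is Eq. (3); on a single static peak the population contracts quickly around the optimum, which is exactly the high-convergence behavior MCSA is designed for.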
into the global optimum. Therefore, every peak is a potential optimum and the algorithm should track them all. Accordingly, the final proposed algorithm is a multi-swarm algorithm in which each swarm consists of a population of cuckoos performing the MCSA procedure. In the beginning, there is only one swarm in the problem space. After this swarm converges, another swarm is created and performs optimization as described before. In fact, whenever the newly created swarm converges to a peak and covers it, another new swarm is created. This procedure continues until all peaks of the problem space are covered by swarms, while one free swarm always remains in the search space to discover uncovered peaks. This mechanism helps the algorithm in situations where the number of peaks in the environment is unknown. To determine swarm convergence, the Euclidean distances between the cuckoos of the swarm are calculated; if the largest distance is less than the parameter rconv, the swarm is regarded as converged. A further challenge in this domain is the convergence of two swarms to one peak. In this case, one of the two swarms is useless and sometimes harmful: the additional swarm consumes computational resources and fitness evaluations, slowing the algorithm down. To meet this challenge, an exclusion mechanism is utilized: the Euclidean distance between each swarm's best position and those of all other swarms is evaluated, and if the distance between the best positions of two swarms is less than rexcl (Blackwell and Branke 2006), the swarm with the worse fitness value is eliminated and a new swarm is initialized randomly in the problem space instead. Discovering environmental changes is another challenge in dynamic environments, and the available methods depend on the type of change. This paper targets problems with global changes, meaning that the fitness values of all points change after an environmental change. It is therefore possible to evaluate the fitness of a single point, called the "Test Point", and compare it with its previously stored value; any difference means the environment has changed. After a change is discovered, optimization algorithms face two basic challenges: severe diversity reduction and invalid memory. To meet these challenges, first the best cuckoo position of each converged swarm is determined; then all of that swarm's cuckoos are distributed randomly in a region around this position with a special radius. The radius size is in accordance with the shift length of the peaks, which can be calculated from the peaks' positions in two consecutive environments. This eliminates the diversity problem. To solve the invalid-memory problem, the fitness values of the cuckoos are re-evaluated in the new environment. Finally, the rlaying size is set based on the shift severity of the peaks. As mentioned before, there is at least one unconverged, recently created swarm in the problem space, called the "Free Swarm". To adapt this swarm to the new environment, only its cuckoos' fitness values need to be re-evaluated, which solves its invalid-memory problem. The proposed algorithm also utilizes a deactivation mechanism to increase efficiency. Changes in the environment occur on a schedule defined by the number of fitness evaluations; therefore, the more useful evaluations the algorithm can spend before the next environmental change, the better the results. In this paper, the number of swarms is as large as the number of peaks that should
MMCSA Procedure
1:  initialize parameters
2:  initialize first swarm
3:  repeat
4:    for each swarm i: execute MCSA procedure
5:    for each swarm i: calculate distances between its cuckoos and determine convergence
6:    if all existing swarms are converged then create and initialize a new swarm
7:    calculate distances between the best cuckoos of all swarms
8:    for each swarm i:
9:      for each swarm j:
10:       if distance between best cuckoos i and j < rexcl then
11:         if f(best cuckoo i) < f(best cuckoo j) then eliminate swarm j
12:         else eliminate swarm i
13:   reevaluate Test Point; if its new fitness value is not equal to the stored value then
14:     store the new fitness value of Test Point
15:     for each converged swarm i:
16:       randomize all cuckoos around the old best cuckoo based on shift severity
17:       evaluate the fitness value of all cuckoos
18:       set rlaying based on shift severity
19:     reevaluate all cuckoos in all non-converged swarms
20:     deactivate all swarms except non-converged swarms and the best Nacs converged swarms
21: until stopping criterion is met

Fig. 2 Pseudo-code of the main procedure of the proposed algorithm
be covered. All swarms undertake the task of tracking their covered peaks. The important point, however, is that only the swarm residing near the highest peak plays a fundamental role in the algorithm's efficiency in the current environment; it should therefore be given more chance to perform local search, since the better it searches, the better the results. Accordingly, after an environmental change is discovered and the subsequent tasks are completed, the Nacs best swarms, ranked by their best cuckoo positions, are activated and the other swarms are deactivated. Active swarms are those that can perform MCSA and its computations; a deactivated swarm cannot move in the problem space. As a
result, active swarms have more opportunity to obtain better results. The pseudo-code of the proposed algorithm, called multi-swarm MCSA (MMCSA), is shown in Fig. 2.
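The activation rule above can be sketched as follows (our own minimal formulation; swarms are ranked by the fitness of their best cuckoos, with the minimization convention of the pseudo-code):

```python
def select_active(swarms, converged, best_fitness, n_acs=2):
    """Return the indices of swarms allowed to keep running MCSA:
    all non-converged swarms plus the n_acs best converged ones."""
    active = [i for i in range(len(swarms)) if not converged[i]]
    ranked = sorted((i for i in range(len(swarms)) if converged[i]),
                    key=lambda i: best_fitness[i])   # best (lowest) first
    return active + ranked[:n_acs]
```

Swarms outside the returned set are frozen until the next environmental change, so their fitness evaluations are redistributed to the free swarm and the best converged swarms.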
4 Evaluation and experimental results

In this section, the efficiency of the proposed algorithm is tested on MPB, one of the best-known benchmarks for dynamic optimization problems with an unknown number of peaks, using several configurations of the MPB parameters as given in Table 1. Then,
Table 1 Setting of standard parameters for MPB

Parameter | Value
Number of peaks, M | 1, 5, 10, 20, 30, 50, 100, 200
Change frequency | 500, 1000, 2500, 5000, 10,000
Height change | 7.0
Width change | 1.0
Peaks shape | Cone
Basic function | No
Shift length | 1, 2, 3
Number of dimensions, D | 2, 3, 4, 5
Correlation coefficient, λ | 0
Peaks location range | [0, 100]
Peak height | [30.0, 70.0]
Peak width | [1, 12]
Table 2 Value configuration of MCSA parameters

Parameter | Initial value
Swarm population size | 5
Min number of eggs | 2
Max number of eggs | 4
Initial value of rlaying for new swarms | 0.3 × [100 / peak_number^(1/dimension)]
Initial value of rlaying after a change, for converged swarms | 0.5 × shift_severity
rconv (for determining swarm convergence) | 10
rexcl (for exclusion) | 0.5 × [100 / peak_number^(1/dimension)]
L_Min | 0.5
L_Max | 1
Re-diversification radius | 0.5 × shift_severity
Number of activated converged swarms (Nacs) | 2
Termination condition | reaching "change frequency × 100" function evaluations
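Offline error (Branke 1999), the efficiency criterion used in the following comparisons, is the difference between the height of the current global optimum and the best fitness found since the last environmental change, averaged over all fitness evaluations. A minimal sketch of this bookkeeping (class and method names are ours; maximization is assumed, as in MPB):

```python
class OfflineError:
    """Accumulates offline error: call reset() at each environmental
    change and update() once per fitness evaluation."""
    def __init__(self):
        self.total = 0.0
        self.evals = 0
        self.best_since_change = float("-inf")

    def reset(self):
        # the old best is invalid once the landscape has changed
        self.best_since_change = float("-inf")

    def update(self, fitness, current_optimum):
        self.best_since_change = max(self.best_since_change, fitness)
        self.total += current_optimum - self.best_since_change
        self.evals += 1

    def value(self):
        return self.total / self.evals
```

Because every evaluation contributes to the average, an algorithm that wastes evaluations on redundant swarms accumulates error even while its best solution is good, which is what the deactivation mechanism is meant to avoid.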
it is compared with several well-known algorithms in this domain. Offline error (Branke 1999) is measured as the criterion of the algorithms' efficiency. The experiments are executed 50 times with different random seeds, both for the benchmark and for the algorithm, and the average offline error and standard error of the proposed algorithm are reported in the related tables; the results of the other algorithms are taken from their respective papers. Table 2 lists the parameter values of the proposed algorithm, chosen on the basis of a large number of experiments in which they yielded the best results. Tables 3 and 4 present the efficiency of the proposed algorithm and the other algorithms with different numbers of peaks, different change frequencies, and shift length 1 in five-dimensional space. As shown in Tables 3 and 4, the proposed algorithm performs better in all cases than the compared algorithms HmSO (Kamosi et al. 2010b, a), APSO (Rezazadeh et al. 2011), CLPSO (Hashemi and Meybodi 2009), RPSO (Hu and Eberhart 2002), PSO-CP (Liu et al. 2010), ESCA (Lung and Dumitrescu 2010), RVDEA (Woldesenbet and Yen 2009), SFA (Nasiri and Meybodi 2012), PSO-AQ (Yazdani et al. 2013a), CLDE (Noroozi et al. 2011) and DynPopDE (Plessis and Engelbrecht 2012); it is worse than CCPSO (Nickabadi et al. 2012) only when the number of peaks is 5. Generally, as the change frequency decreases, the efficiency of the algorithms worsens, because they have less chance of finding better positions between two environmental changes. Moreover, in environments with few peaks the efficiency of the algorithms is acceptable, but as the number of peaks increases, the efficiency drops in most cases. However, the proposed algorithm's efficiency decreases less as the number of peaks grows, thanks to the deactivation mechanism applied in this paper. In the other algorithms, more peaks mean more fitness evaluations spent, so the swarms have less chance to improve their positions. In the proposed algorithm, by contrast, it does not matter how large the number of swarms becomes (following a large number of peaks): since only Nacs converged swarms and one free swarm are active, the best swarm has more opportunity to improve the algorithm's efficiency and the free swarm to find new peaks. Another noteworthy point is that when the number of peaks increases to 100 and 200, the efficiency of some algorithms improves unexpectedly; this is because the differences between the heights of the peaks shrink, so the error decreases. There are some other unexpected results as well. For instance, when the number of peaks increases from 20 to 30 the efficiency of the proposed algorithm improves, and when the change frequency increases the efficiency does not advance remarkably. A large number of experiments were carried out, and the algorithm's efficiency and the swarms' behavior were investigated closely, to explain these results. Because the peaks' positions are random in every execution, it sometimes happens that at the beginning of a run some peaks (especially the best one) are located on the boundaries of the search space; such peaks are discovered and covered later than peaks away from the boundaries. Accordingly, the average error over the 50 executions can rise and the measured efficiency degrades. Figure 3 illustrates the diagram
123
Table 3 Comparison of offline error (standard error) of six algorithms — mQSO(5,5q), AmQSO, mPSO, HmSO, APSO and MMCSA — on the MPB problem with change frequencies (C–F) 500, 1000, 2500 and 10,000, numbers of peaks from 1 to 200 and shift length = 1 (MMCSA entries only):

Peaks   C–F = 10,000       2500               1000               500
1       0.0051 (0.0022)    0.0189 (0.0048)    0.0226 (0.0080)    0.0675 (0.0169)
5       0.3920 (0.1173)    0.5510 (0.0430)    1.0207 (0.2169)    2.5621 (0.2953)
10      0.6913 (0.1017)    0.9933 (0.1349)    1.4896 (0.1877)    2.7587 (0.1759)
20      0.9658 (0.1068)    1.1906 (0.0609)    1.5956 (0.1746)    3.4831 (0.1823)
30      1.1552 (0.1377)    1.1326 (0.1683)    1.8701 (0.1349)    3.2994 (0.1903)
50      0.8432 (0.2039)    1.4205 (0.0917)    1.9849 (0.1525)    3.4677 (0.1357)
100     1.0983 (0.0772)    1.4131 (0.0716)    1.9818 (0.1164)    3.0921 (0.0767)
200     0.7162 (0.1811)    1.4205 (0.0917)    2.1341 (0.1282)    3.4879 (0.3091)
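For context, the environmental change that these tables measure against can be sketched as an MPB-style peak update; the severities, bounds and update rule below mimic scenario-2-like settings and are assumptions of the sketch, not the exact benchmark code:

```python
import math
import random

def move_peak(position, height, width, shift_length=1.0,
              height_severity=7.0, width_severity=0.01,
              bounds=(0.0, 100.0)):
    """One MPB-style environmental change for a single peak: height and
    width receive Gaussian perturbations, and the position moves by a
    random vector of length shift_length, clamped to the search bounds.
    Severity and bound values here are assumed, scenario-2-like settings."""
    height += height_severity * random.gauss(0.0, 1.0)
    width += width_severity * random.gauss(0.0, 1.0)
    direction = [random.gauss(0.0, 1.0) for _ in position]
    norm = math.sqrt(sum(d * d for d in direction)) or 1.0
    position = [min(max(p + shift_length * d / norm, bounds[0]), bounds[1])
                for p, d in zip(position, direction)]
    return position, height, width

# A five-dimensional peak, as in the experiments above.
pos, h, w = move_peak([50.0] * 5, height=50.0, width=6.0)
print(len(pos), all(0.0 <= p <= 100.0 for p in pos))
```

Every change-frequency evaluations, each peak is updated this way, which is why a larger shift length forces tracker swarms to search a wider area.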
Table 4 Comparison of offline error (standard error) of MMCSA with 22 algorithms — mQSO(5,5q), AmQSO, FMSO, CLPSO, RPSO, PSO-CP, mCPSO, CESO, SPSO, rSPSO, mPSO, ESCA, HmSO, SFA, RVDEA, PSO-AQ, APSO, DynPopDE, CLDE, DMAFSA, CCPSO and mNAFSA — on the MPB problem with different numbers of peaks, change frequency = 5000 and shift length = 1 (MMCSA entries only):

Peaks   MMCSA
1       0.0111 (0.0020)
5       0.4593 (0.1560)
10      0.7396 (0.0670)
20      1.0082 (0.0563)
30      1.1501 (0.1274)
50      1.1374 (0.1915)
100     0.9910 (0.0890)
200     0.9629 (0.0778)
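Each entry in Tables 3 and 4 has the form mean (standard error) over 50 independent runs. A minimal sketch of that summary statistic (the sample values below are invented for illustration):

```python
import math

def mean_and_standard_error(samples):
    """Summarize per-run offline errors as 'mean (standard error)', the
    format used in the tables: SE = sample standard deviation / sqrt(n)."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)  # unbiased
    return mean, math.sqrt(variance) / math.sqrt(n)

m, se = mean_and_standard_error([0.8, 0.7, 0.6, 0.9, 0.7])
print(f"{m:.2f} ({se:.2f})")  # → 0.74 (0.05)
```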
Fig. 3 The average current error of MMCSA on MPB problem with change frequency = 5000, shift length = 1 and a number of peaks = 1, b number of peaks = 10 and c number of peaks = 50
of the average current error at each evaluation over 50 executions of the proposed algorithm, with change frequency 5000, shift length 1 and the number of peaks set to 1, 10 and 50. Likewise, Fig. 4 shows the average offline error at the end of each environment over 50 executions with the same settings. Table 5 presents the efficiency of several algorithms tested on MPB with change frequency 5000, ten peaks in five-dimensional space and shift lengths 1, 2 and 3. As can be seen, when the shift length of the peaks increases, the efficiency
of the algorithms decreases dramatically, and the proposed algorithm is no exception. As the peaks' shift length increases, the swarms residing on peaks must search a more extensive space to find the new peak positions after an environmental change, so reaching the new positions takes longer. The efficiency therefore degrades, given the limited time in which the swarms can perform optimization before the next environmental change. As the results in Table 5 show, the proposed algorithm performs better than the other compared algorithms for shift lengths of 1 and 2, but worse than the PSO-CP algorithm for shift length 3. However, the difference between the offline error of the proposed algorithm and that of PSO-CP is not significant.

Fig. 4 The average obtained error (offline error) of MMCSA on MPB problem at the end of each environment with change frequency = 5000, shift length = 1 and a number of peaks = 1, b number of peaks = 10 and c number of peaks = 50

Table 5 Comparison of offline error (standard error) of seven algorithms on the MPB problem with different shift lengths, change frequency = 5000 and ten peaks

Algorithm      Shift length 1     2                  3
mQSO(5,5q)     1.85 (0.08)        2.40 (0.06)        3.00 (0.06)
AmQSO          1.51 (0.10)        2.09 (0.08)        2.72 (0.09)
mCPSO          2.08 (0.07)        2.80 (0.07)        3.57 (0.08)
SPSO           2.51 (0.09)        3.78 (0.09)        4.96 (0.12)
rSPSO          1.50 (0.08)        1.87 (0.05)        2.40 (0.08)
PSO-CP         1.31 (0.06)        1.98 (0.06)        2.21 (0.06)
MMCSA          0.7396 (0.0670)    1.7547 (0.1816)    2.2493 (0.2827)
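The offline error reported throughout these tables is commonly defined as the error of the best solution found since the last environmental change, averaged over every fitness evaluation of the run. A minimal sketch, with a made-up error trace:

```python
def offline_error(errors_per_eval, change_points):
    """errors_per_eval[t]: error of the solution evaluated at evaluation t
    (difference from the current global optimum's height).
    change_points: evaluation indices at which the environment changes.
    Returns the mean, over all evaluations, of the best error achieved
    since the most recent change (the 'current error')."""
    best_since_change = float("inf")
    current_errors = []
    changes = set(change_points)
    for t, e in enumerate(errors_per_eval):
        if t in changes:                 # environment changed: reset memory
            best_since_change = float("inf")
        best_since_change = min(best_since_change, e)
        current_errors.append(best_since_change)
    return sum(current_errors) / len(current_errors)

# Toy trace: errors improve, the environment changes at evaluation 3,
# and the best-so-far memory is reset.
print(offline_error([4.0, 2.0, 1.0, 5.0, 3.0, 0.5], change_points=[3]))  # ≈ 2.583
```

This is why a lower change frequency hurts every algorithm: fewer evaluations between changes leave less time to drive the current error down before the memory resets.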
In Table 6, the effect of various dimensionalities on the proposed algorithm and several other algorithms is considered. The results are obtained with ten peaks, shift length
Table 6 Comparison of offline error (standard error) of five algorithms on the MPB problem with different dimensions, change frequency = 5000 and ten peaks

Algorithm     Dimensions 2       3                  4                  5
mQSO(5,5q)    1.01 (0.04)        1.49 (0.09)        1.47 (0.08)        1.85 (0.08)
AmQSO         0.71 (0.05)        1.16 (0.10)        1.33 (0.08)        1.51 (0.10)
mPSO          1.24 (0.07)        1.42 (0.10)        1.35 (0.09)        1.61 (0.12)
RPSO          2.62 (0.08)        6.61 (0.33)        10.43 (0.54)       12.98 (0.48)
MMCSA         0.1157 (0.0161)    0.3588 (0.0522)    0.4132 (0.0695)    0.7396 (0.0670)

Table 7 Obtained results from the proposed algorithm with different values of swarm population size on scenario 2 of MPB

Swarm population size    3            4            5            6            7            10
Offline error            1.42 (0.41)  0.98 (0.07)  0.74 (0.07)  0.79 (0.17)  0.88 (0.20)  0.95 (0.17)
1, change frequency 5000 and dimensions 2, 3, 4 and 5. As the table shows, the proposed algorithm's efficiency is better than that of all the other compared algorithms. It is notable that the efficiency decreases as the dimensionality grows, which is normal among optimization problems: it is much harder for an algorithm to find better positions in the problem space when the number of optimization parameters of the fitness function increases.

4.1 Effects of parameter settings

In this section, the effect of various parameter values on the proposed algorithm's efficiency is studied. The experiments of this section have been performed on MPB scenario 2 (Branke 1999). The parameter values used, presented in Table 2, were chosen on the basis of many experiments. In the following, the results of the proposed algorithm under various parameter settings are considered.

4.1.1 Effects of various values of swarm population size

Table 7 presents the results obtained by the proposed algorithm with different swarm population sizes. As can be seen, the algorithm's efficiency is lowest when the swarm population size is 3: with so small a population, the swarms cannot cover their search space properly, so the algorithm loses efficiency, especially in finding the peaks. Increasing the population size to 5 improves the local search capability and thus the efficiency, and the best result is obtained with a population size of 5. When the size grows beyond 5, however, the efficiency degrades because of the higher number of fitness evaluations and the resulting drop in convergence rate: with more evaluations per iteration, the swarms complete fewer iterations before the next
environmental change, which decreases the efficiency.

4.1.2 Effects of various values of the initial rlaying for new swarms

Table 8 illustrates the results obtained by the proposed algorithm with various initial rlaying values for new swarms. This value is determined as a weight α of Dboa (Blackwell and Branke 2006). As observed in Table 8, the best results correspond to using 0.3 of Dboa: this value provides the best conditions for local search and for finding a new peak, while smaller or greater values decrease the algorithm's convergence rate and hence its efficiency.

Table 8 Obtained results from the proposed algorithm with different values of the initial rlaying for new swarms

α              0.1          0.2          0.3          0.4          0.5
Offline error  0.97 (0.09)  0.86 (0.10)  0.74 (0.07)  0.77 (0.09)  0.83 (0.08)

4.1.3 Effects of various values of rlaying for tracker swarms after changes

To increase diversity and the local search capability for finding the new positions of peaks, the rlaying value of the swarms is set after each environmental change as a weight β of the shift severity. Table 9 shows the efficiency of the proposed algorithm with various rlaying values for converged swarms after an environmental change. In fact, β directly affects the swarms that have covered peaks and must track and find their new positions after an environmental change; therefore, inappropriate values of β degrade the algorithm's efficiency. As can be seen, the best results are achieved when β is 0.5.

Table 9 Obtained results from the proposed algorithm with different values of rlaying for tracker swarms after changes

β              0.1          0.2          0.3          0.4          0.5          0.6          0.7          0.8          0.9          1.0
Offline error  1.21 (0.08)  0.86 (0.12)  0.81 (0.09)  0.75 (0.08)  0.74 (0.07)  0.79 (0.09)  0.92 (0.09)  1.02 (0.16)  1.03 (0.20)  1.19 (0.18)

4.1.4 Effects of various values of rconv

In this paper, the proposed algorithm commences the optimization process with only one swarm. When this swarm has converged to a peak, another swarm is created in the problem space. Convergence is determined by the criterion rconv. Table 10 illustrates the results achieved with various rconv values. As observed, the best result is obtained when rconv is 10. A value smaller than 10 makes the convergence test stricter and postpones convergence, so the procedure for finding and covering peaks becomes slower and the algorithm's efficiency decreases. Values larger than 10 cause a swarm to be wrongly considered convergent: although the swarm may not yet have converged to a peak, it is treated as a converged swarm. It therefore consumes a large computational load and remains useless until it is eliminated by the exclusion mechanism.

Table 10 Obtained results from the proposed algorithm with different values of rconv

rconv          5            10           12           15           20
Offline error  0.79 (0.11)  0.74 (0.07)  0.78 (0.09)  0.83 (0.14)  1.22 (0.21)
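The radius parameters examined in Sects. 4.1.2–4.1.4 can be sketched together as follows. The exact convergence criterion (here, the pairwise Euclidean diameter of the swarm) is an assumption of the sketch, not necessarily the authors' implementation; the function names are illustrative:

```python
import math

def initial_r_laying(d_boa, alpha=0.3):
    """Egg-laying radius for a newly created free swarm: a weight alpha
    of D_boa (Blackwell and Branke 2006); alpha = 0.3 performed best."""
    return alpha * d_boa

def r_laying_after_change(shift_severity, beta=0.5):
    """Egg-laying radius for a converged (tracker) swarm right after an
    environmental change: a weight beta of the peaks' shift severity."""
    return beta * shift_severity

def is_converged(positions, r_conv=10.0):
    """Treat a swarm as converged when all its cuckoos lie within r_conv
    of one another, i.e. its diameter has shrunk below r_conv."""
    diameter = max((math.dist(a, b) for a in positions for b in positions),
                   default=0.0)
    return diameter < r_conv

print(initial_r_laying(10.0))                     # → 3.0
print(r_laying_after_change(1.0))                 # → 0.5
print(is_converged([(1.0, 1.0), (2.0, 2.0)]))     # → True
print(is_converged([(0.0, 0.0), (30.0, 40.0)]))   # → False (diameter 50)
```

A convergence event under `is_converged` is what triggers the creation of the next free swarm in the problem space.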
4.1.5 Effects of various values of the number of activated converged swarms (Nacs)

According to the deactivation mechanism, after every environmental change Nacs activated converged swarms remain in the problem space. The efficiency of the algorithm with various values of this parameter is presented in Table 11. When Nacs is 1, only the best swarm is activated. However, it is possible that a swarm other than the best one is residing on the best peak instead of the best swarm; this happens under some special conditions, for instance when the best swarm is far from the new position of its peak or when the peak is narrow. With Nacs equal to 2, it is more likely that one of the two best swarms of the population resides on the best peak. Making Nacs greater than 2, on the other hand, increases the number of fitness evaluations, so the best swarm has less chance to perform a more precise search, which degrades the algorithm's efficiency. Accordingly, the best result is obtained when Nacs is 2.

Table 11 Obtained results from the proposed algorithm with different values of Nacs

Number of activated converged swarms   1            2            3            5            10
Offline error                          0.80 (0.08)  0.74 (0.07)  0.91 (0.08)  1.06 (0.12)  1.72 (0.15)

4.1.6 Effects of various values of L_Min

L_Min is a parameter that controls the reduction of rlaying. Table 12 presents the algorithm's efficiency with various values of this parameter. According to the achieved results, the best value of L_Min is 0.5.

Table 12 Obtained results from the proposed algorithm with different values of L_Min

L_Min          0.1          0.2          0.3          0.4          0.5          0.6          0.7          0.8          0.9          1.0
Offline error  1.09 (0.07)  1.00 (0.09)  0.94 (0.08)  0.82 (0.07)  0.74 (0.07)  0.77 (0.06)  0.85 (0.07)  0.97 (0.06)  1.12 (0.09)  1.20 (0.08)

If the L_Min value is less than 0.5, it leads to too great a reduction of the rlaying value and hence of the local search
Table 13 Obtained results from the proposed algorithm with different values of the minimum and maximum number of eggs each cuckoo can lay in every repetition (Min Egg#, Max Egg#)

(Min, Max)     (1, 4)       (2, 4)       (3, 4)       (4, 4)       (2, 3)       (2, 5)       (2, 10)
Offline error  0.86 (0.11)  0.74 (0.07)  0.80 (0.10)  0.86 (0.12)  0.87 (0.16)  0.85 (0.23)  0.97 (0.12)
capability. Moreover, if the L_Min value is greater than 0.5, rlaying is reduced less than is appropriate, so the algorithm needs much more time to adapt the swarm's local search capability to the search space.

4.1.7 Effects of various ranges of the number of eggs

As previously discussed, in MCSA each cuckoo lays a limited random number of eggs in every repetition. The results for various ranges of the number of eggs are shown in Table 13. As can be observed, the most appropriate range is obtained when each cuckoo lays at least two and at most four eggs in every repetition of the algorithm.

4.2 Statistical analysis

To demonstrate the superiority of the results achieved by the proposed algorithm over those of the other algorithms, a statistical analysis has been applied; in this paper, the t test is used for this purpose. Thus, two hypotheses H0 and H1 have been defined as below:

H0 : μ_j = a*_j,   (4)
H1 : μ_j ≠ a*_j,   (5)
a*_j = arg min_i (μ_{i,j}),  i ∈ all algorithms, j ∈ all scenarios,   (6)

in which a*_j is the average result of the best algorithm on scenario j and μ_j is that of the proposed algorithm on this scenario. Since the number of samples (the number of executions) is more than 30, there is no need to test the samples for normal distribution. A two-tailed t test with 49 degrees of freedom and alpha equal to 0.01 has been performed. Finally, the results obtained from the experiments show that H0 is rejected with a 99 % confidence level in all cases.

5 Conclusion and future works

In this paper, a novel algorithm based on MCSA was presented for dynamic environments. The proposed algorithm employed a multi-swarm approach in which each swarm consisted of a number of cuckoos and performed MCSA. The design of MCSA was based on the requirements of dynamic environments; therefore, its major purpose was to increase the convergence rate in order to find the peaks and to track them after environmental changes as quickly as possible. Moreover, MMCSA was proposed for optimization in dynamic environments by adding mechanisms that increase efficiency and meet the challenges of dynamic environments. The efficiency of the proposed algorithm was evaluated on various configurations of MPB, one of the prominent benchmarks in the area of dynamic environments. Finally, the achieved results were compared with those of several well-known algorithms of this domain and demonstrated the superiority of the proposed algorithm.

It is noteworthy that the proposed algorithm was designed for optimization in environments with an unknown number of peaks, each of which has the potential to be the optimum; discrete environmental change is another characteristic of these environments. However, there are numerous real-world problems in which not all of the peaks are potential optima and some can never become the global optimum. Additionally, in many environments changes can occur continuously over time. Therefore, future work can pursue changing the structure of the algorithm and designing new mechanisms to adapt it to other types of dynamic problems.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.
References

Bird S, Li X (2007) Using regression to improve local convergence. IEEE Cong Evol Comput CEC 2007:592–599
Blackwell T, Branke J (2006) Multiswarms, exclusion, and anticonvergence in dynamic environments. IEEE Trans Evol Comput 10:459–472
Blackwell T, Branke J, Li X (2008) Particle swarms for dynamic optimization problems. In: Blum C, Merkle D (eds) Swarm intelligence. Springer, Berlin, Heidelberg, pp 193–217. doi:10.1007/978-3-540-74089-6_6
Branke J (1999) Memory enhanced evolutionary algorithms for changing optimization problems. In: Proceedings of the 1999 congress on evolutionary computation (CEC 99), vol 3. IEEE, Washington
Branke J, Kaussler H, Schmidt C, Schmeck H (2000) A multipopulation approach to dynamic optimization problems. In: Parmee IC (ed) Evolutionary design and manufacture: selected papers from ACDM ’00. Springer, London, pp 299–307. doi:10.1007/978-1-4471-0519-0_24
Changhe L, Shengxiang Y (2008) Fast multi-swarm optimization for dynamic optimization problems. In: Fourth international conference on natural computation (ICNC’08). doi:10.1109/ICNC.2008.313
Gandomi AH, Talatahari S, Yang XS, Deb S (2012) Design optimization of truss structures using cuckoo search algorithm. Struct Des Tall Spec Build. doi:10.1002/tal.1033
Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput. doi:10.1007/s00366-011-0241-y
Hashemi AB, Meybodi MR (2009) Cellular PSO: a PSO for dynamic environments. In: Cai Z, Li Z, Kang Z, Liu Y (eds) Advances in computation and intelligence. Lecture notes in computer science, vol 5821. Springer, Berlin, Heidelberg, pp 422–433
Hu X, Eberhart RC (2002) Adaptive particle swarm optimization: detection and response to dynamic systems. In: IEEE congress on evolutionary computation (CEC’02), pp 1666–1670
Kamosi M, Hashemi AB, Meybodi MR (2010a) A hibernating multi-swarm optimization algorithm for dynamic environments. In: Proceedings of world congress on nature and biologically inspired computing, pp 370–376
Kamosi M, Hashemi AB, Meybodi MR (2010b) A new particle swarm optimization algorithm for dynamic environment. In: Swarm, evolutionary, and memetic computing (SEMCO). Lecture notes in computer science, vol 6466, pp 129–138
Kennedy J, Eberhart RC (1995) Particle swarm optimization. IEEE Int Conf Neural Netw. doi:10.1109/ICNN.1995.488968
Li X (2004) Adaptively choosing neighborhood bests using species in a particle swarm optimizer for multimodal function optimization. In: Proceedings of genetic and evolutionary computation conference (GECCO’04), pp 105–116
Li C, Yang S (2009) A clustering particle swarm optimizer for dynamic optimization. In: IEEE congress on evolutionary computation (CEC’09), pp 439–446
Li X, Shao Z, Qian J (2002) An optimization method based on autonomous animats: fish swarm algorithm. Syst Eng Theory Pract 22:32–38
Li X, Branke J, Blackwell T (2006) Particle swarm with speciation and adaptation in a dynamic environment. In: Proceedings of the 8th annual conference on genetic and evolutionary computation, pp 51–58
Liu L, Yang S, Wang D (2010) Particle swarm optimization with composite particles in dynamic environments. IEEE Trans Syst Man Cybern Part B Cybern. doi:10.1109/TSMCB.2010.2043527
Lung RI, Dumitrescu D (2007) A new collaborative evolutionary-swarm optimization technique. In: Companion on genetic and evolutionary computation (GECCO), pp 2817–2820
Lung RI, Dumitrescu D (2010) Evolutionary swarm cooperative optimization in dynamic environments. Nat Comput 9:83–94
Nasiri B, Meybodi MR (2012) Speciation based firefly algorithm for optimization in dynamic environments. Int J Artif Intell 8:118–132
Nguyen TT (2010) Continuous dynamic optimisation using evolutionary algorithms. Ph.D. thesis, The University of Birmingham
Nguyen TT, Yang S, Branke J (2012) Evolutionary dynamic optimization: a survey of the state of the art. Swarm Evol Comput. doi:10.1016/j.swevo.2012.05.001
Nickabadi A, Ebadzadeh MM, Safabakhsh R (2012) A competitive clustering particle swarm optimizer for dynamic optimization problems. Swarm Intell. doi:10.1007/s11721-012-0069-0
Noroozi N, Hashemi AB, Meybodi MR (2011) CellularDE: a cellular based differential evolution for dynamic optimization problems. In: Dobnikar A, Lotrič U, Šter B (eds) Adaptive and natural computing algorithms. Lecture notes in computer science, vol 6593, part 1. Springer, Heidelberg, pp 340–349
Oppacher F, Wineberg M (1999) The shifting balance genetic algorithm: improving the GA in a dynamic environment. In: Proceedings of the genetic and evolutionary computation conference, pp 504–510
Ouaarab A, Ahiod B, Yang XS (2013) Discrete cuckoo search algorithm for the travelling salesman problem. Neural Comput Appl. doi:10.1007/s00521-013-1402-2
Parrott D, Li X (2004) A particle swarm model for tracking multiple peaks in a dynamic environment using speciation. In: IEEE congress on evolutionary computation (CEC’04), pp 98–103
Parrott D, Li X (2006) Locating and tracking multiple dynamic optima by a particle swarm model using speciation. IEEE Trans Evol Comput 10:440–458
Plessis MC, Engelbrecht AP (2012) Differential evolution for dynamic environments with unknown numbers of optima. J Glob Optim. doi:10.1007/s10898-012-9864-9
Rezazadeh I, Meybodi MR, Naebi A (2011) Adaptive particle swarm optimization algorithm for dynamic environments. In: Tan Y, Shi Y, Chai Y, Wang G (eds) Advances in swarm intelligence. Lecture notes in computer science, vol 6728. Springer, Berlin, Heidelberg, pp 120–129
Woldesenbet YG, Yen GG (2009) Dynamic evolutionary algorithm with variable relocation. IEEE Trans Evol Comput 13:500–513
Yang XS, Deb S (2009) Cuckoo search via Lévy flights. World Congr Nat Biol Inspired Comput. doi:10.1109/NABIC.2009.5393690
Yang XS, Deb S (2010) Engineering optimisation by cuckoo search. Int J Math Model Numer Optim 1:330–343
Yang XS, Deb S (2012) Cuckoo search for inverse problems and topology optimization. In: Proceedings of international conference on advances in computing. Advances in intelligent systems and computing. doi:10.1007/978-81-322-0740-5_35
Yang XS, Deb S (2013) Multiobjective cuckoo search for design optimization. Comput Oper Res. doi:10.1016/j.cor.2011.09.026
Yang S, Li C (2010) A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments. IEEE Trans Evol Comput 14(6):959–974
Yang XS, Deb S, Karamanoglu M, He X (2012) Cuckoo search for business optimization applications. Natl Conf Comput Commun Syst (NCCCS). doi:10.1109/NCCCS.2012.6412973
Yazdani D, Akbarzadeh-T MR, Nasiri B, Meybodi MR (2012) A new artificial fish swarm algorithm for dynamic optimization problems. In: IEEE congress on evolutionary computation (CEC’12). doi:10.1109/CEC.2012.6256169
Yazdani D, Nasiri B, Azizi R, Sepas-Moghaddam A, Meybodi MR (2013a) Improving multi swarm PSO utilizing adaptive quantum based local search for optimization in dynamic environments. Int J Artif Intell 11:170–192
Yazdani D, Nasiri B, Sepas-Moghaddam A, Meybodi MR (2013b) A novel multi-swarm algorithm for optimization in continuous dynamic environments based on particle swarm optimization. Appl Soft Comput. doi:10.1016/j.asoc.2012.12.020
Yazdani D, Nasiri B, Sepas-Moghaddam A, Meybodi MR, Akbarzadeh-Totonchi MR (2014) mNAFSA: a novel approach for optimization in dynamic environments with global changes. Swarm Evol Comput. doi:10.1016/j.swevo.2014.05.002