Evol. Intel. (2008) 1:171–185 DOI 10.1007/s12065-008-0014-8
RESEARCH PAPER
A parallel immune optimization algorithm for numeric function optimization

Henry Y. K. Lau · Wilburn W. P. Tsang

Received: 29 November 2007 / Revised: 3 July 2008 / Accepted: 8 July 2008 / Published online: 5 August 2008
© Springer-Verlag 2008
Abstract Immune optimization algorithms show good performance in obtaining optimal solutions, especially in dealing with numeric optimization problems where such solutions are often difficult to determine by traditional techniques. This article presents the parallel suppression control algorithm (PSCA), a parallel algorithm for optimization based on artificial immune systems (AIS). PSCA is implemented on a parallel platform where the corresponding population of antibodies is partitioned into subpopulations that are distributed among the processes. Each process executes the immunity-based algorithm for optimizing its subpopulation. In the process of evolving the solutions, the activities of antibodies and the activities of the computation agents are regulated by the general suppression control framework (GSCF), which maintains and controls the interactions between the populations and processes. The proposed algorithm is evaluated with benchmark problems, and its performance is measured and compared with other conventional optimization approaches.

Keywords Artificial immune systems · Immune optimization algorithm · Function optimization · Parallel implementation
1 Introduction Performance is a key attribute in many cooperative systems where different components co-operate to achieve a
H. Y. K. Lau (✉) · W. W. P. Tsang
Department of Industrial and Manufacturing Systems Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
e-mail: [email protected]; [email protected]
common goal. In modern logistics businesses, complex automated material handling systems are crucial in marine and air cargo terminals and logistics hubs. In a typical logistics hub, different components interact to perform various tasks, and optimizing these interactions is becoming increasingly important for attaining high competitiveness. Even a small improvement towards the optimal mix often makes a significant difference to the success of the business. Optimization paradigms have therefore become essential for determining the best strategy in cooperative systems. In fact, many problems in the real world can be formulated as optimization problems with global optima that are buried among local optima. Finding the global optimal solution is often difficult with traditional methods, especially for multi-modal optimization problems. Recently, various aspects of biological processes have been major stimuli in the derivation of computational algorithms, including optimization strategies, and in the development of new engineering models and problem-solving techniques. The immune system is one of the systems that have drawn significant attention in recent years [1–4]. From an engineering point of view, its distributed nature, learning and adaptability, self-organization, and memory capabilities have received great interest across different disciplines [5]. Artificial immune systems (AIS), the engineering analogy of the human immune system, have established themselves as one of the major fields in the study of artificial intelligence. Recent studies in AIS-based optimization show that the contribution of AIS is growing [6–9]. Optimization is one of the major fields of application of AIS-inspired algorithms, with promising results. Research studies have shown that AIS-based optimization algorithms are comparable
against other nature-inspired algorithms in solving optimization problems [10]. By adopting the mechanisms of the immune system, a novel AIS-based optimization algorithm, known as the suppression control algorithm (SCA), is developed in this article to determine the optimal solution, with special attention paid to minimizing the overheads in the evolution process and to speeding up the convergence of the algorithm. The algorithm is then extended to execute on a parallel structure; the distributed version of the algorithm is known as the parallel suppression control algorithm (PSCA), in which the immunity-based optimization process is conducted in a fully distributed and concurrent manner. The remainder of the article is organized as follows: Sect. 2 gives an overview of related research studies. Section 3 describes the methodology and the basis for the design and development of SCA and PSCA. Sections 4 and 5 present the proposed algorithm and its major features. Section 6 describes the experiments conducted on numeric optimization problems with PSCA and compares its performance with other conventional optimization approaches; results are presented and analyzed. Finally, a summary and conclusions are given in Sect. 7.
2 Immune optimization algorithm

This section provides an overview of the related research studies. Section 2.1 presents an overview of the immune system and artificial immune systems. Artificial immune systems and their applications in optimization are then reviewed in Sect. 2.2. Section 2.3 focuses on the review of parallel implementations of artificial immune system-based algorithms.

2.1 Immune system and artificial immune systems

The human immune system is composed of molecules and cells which are able to recognize, defend against and memorize invading pathogens. Generally speaking, the immune system consists of two levels, namely the innate immune system and the acquired immune system. The innate immune system provides the first line of defense against invaders. It consists of elements including the skin, which acts as a natural, physical and omnipresent barrier to a variety of invasions. It provides a nonspecific, general defense response to every invader and alerts the adaptive immune system to the danger posed by an invader. Phagocytes, complement proteins and natural killer cells present in the innate system help to puncture and destroy foreign invaders. In contrast to the innate system, the acquired immune system directs specific responses towards the targeted
invaders. T-lymphocytes (T-cells) and B-lymphocytes (B-cells) perform the major immune responses of the acquired immune system. B-cells circulate through the blood and lymphatic network and recognize antigens via the receptors present on their surface. Humoral immunity is then carried out, in which the B-cells secrete antibodies with specificity against the recognized antigens. T-cells facilitate cell-mediated immunity in the acquired immune system. They help to seek out invading pathogens and activate other components of the immune system against the pathogen. They also assist the B-cells in secreting antibodies [11, 12]. Artificial immune systems are computational or engineering systems inspired by theoretical immunology and observed immune functions, principles and models [13]. Some of the key mechanisms and methods adopted include the immune network theory [14], the negative selection mechanism and the clonal selection principle [15]. Practical implementations of AIS focus on data analysis [16], anomaly detection [17], computer security [18], and robotics and control [19].

2.2 Immunity-based algorithms for optimization

Increasing attention has been paid to the development of artificial immune systems in the field of optimization and meta-heuristics in recent years. According to the underlying principles of existing immunity-based optimization algorithms, they can be broadly classified into three main categories: genetic algorithm-aided, clonal selection-based and immune network-based approaches [3]. The optimization methods in the genetic algorithm-aided category are very similar to the genetic algorithm: they bring together ideas from both natural selection and evolutionary processes. The clonal selection principle describes the central features of an immunological response against antigen invasion. It establishes the idea that only antibodies with high antigen recognition capability are proliferated.
The main features include: (a) cells with affinity higher than a certain threshold are selected for cloning; (b) new cells are clones of their parents, generated by a process similar to asexual reproduction; (c) clones are subject to somatic hypermutation to gain higher antigen recognition capability; and (d) proliferation and differentiation of the population of immune cells specific to an individual antigen. The immune network theory suggests that the immune system maintains a network of interconnected B-cells, which are stimulated and suppressed not only by antigens but also by other interconnected B-cells. In terms of the clonal selection-based approach, de Castro and Von Zuben [6] developed an innovative algorithm based on the clonal selection principle named
CLONALG. CLONALG is one of the most widely used and researched algorithms in the context of AIS for optimization. The algorithm imitates the clonal selection principle and starts with a random set of candidate solutions. The fitness of all candidate solutions is evaluated, and a number of comparatively good candidates are then selected. The algorithm consists of four stages and normally loops for a predefined number of iterations: (a) the selected candidates are cloned according to their fitness values; (b) each clone is subjected to somatic hypermutation with a mutation rate inversely proportional to its fitness value, and the mutants of the same set of clones from the same parent are compared; (c) the best mutants then replace the parent candidates, forming the next population; and (d) within the current population, a number of low-fitness candidates are replaced by randomly generated individuals. It is a relatively simple approach that is easy to understand. However, CLONALG has several shortfalls:

1. Premature convergence: the evolution process is often trapped in a local optimal solution.
2. Redundancy: the number of objective function evaluations is often considerably large. Some parent solutions are over-cloned and mutated, and trivial solutions are not successfully omitted in the searching process, which results in an inefficient use of computational resources.
3. Number of parameters: a large number of parameters need to be refined in the algorithm, which assumes background knowledge of the problem being optimized, and these parameters have a significant effect on the performance of the algorithm.
4. Mutation rate definition: a single mutation rate can hardly represent the whole population. The use of a small mutation probability improves the local penetrating ability but slackens the global search, whereas the use of a larger mutation probability speeds up the global search but hampers the accuracy significantly.
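As a concrete illustration of the four-stage loop, the following Python sketch implements one CLONALG generation for a minimization problem. The function name `clonalg_step` and all parameter values are our own illustrative assumptions, not the reference implementation of de Castro and Von Zuben.

```python
import random

def clonalg_step(pop, fitness, n_select=5, clone_factor=2.0, dim=2, bounds=(-5.0, 5.0)):
    """One CLONALG generation for a minimization problem (illustrative sketch)."""
    lo, hi = bounds
    ranked = sorted(pop, key=fitness)                 # best (lowest objective) first
    selected = ranked[:n_select]
    best_f, worst_f = fitness(ranked[0]), fitness(ranked[-1])
    span = (worst_f - best_f) or 1.0

    next_pop = []
    for rank, cand in enumerate(selected):
        # (a) clone proportionally to fitness rank: better candidates get more clones
        n_clones = max(1, int(clone_factor * (n_select - rank)))
        # (b) hypermutate with a rate inversely related to normalized fitness
        norm = 1.0 - (fitness(cand) - best_f) / span  # 1.0 for the best candidate
        rate = 0.5 * (1.0 - norm) + 0.05
        clones = [[min(hi, max(lo, x + random.gauss(0.0, rate))) for x in cand]
                  for _ in range(n_clones)]
        # (c) the best mutant competes with, and may replace, its parent
        next_pop.append(min(clones + [cand], key=fitness))
    # (d) refill with random individuals to replace low-fitness candidates
    while len(next_pop) < len(pop):
        next_pop.append([random.uniform(lo, hi) for _ in range(dim)])
    return next_pop

# usage: minimize the 2-D sphere function f(x) = sum(x_i^2)
random.seed(0)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(20)]
for _ in range(50):
    pop = clonalg_step(pop, sphere)
best = min(pop, key=sphere)
```

Note how the sketch exhibits shortfalls 3 and 4 above: `n_select`, `clone_factor` and the rate constants all need tuning, and a single rate schedule governs the whole population.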
Considering the drawback of CLONALG that run-time parameters need manual tuning, Garrett [20] re-examined the use of clonal selection and introduced a parameter-free clonal selection algorithm known as adaptive clonal selection (ACS). ACS is very similar to CLONALG. The major difference is that the parameters involved are adjusted adaptively within the algorithm. There is no need to refine the parameters before using the algorithm; it automatically assigns the parameters to fit the problem domain. To address the limitation of premature convergence, aging operators were introduced in opt-IA [8, 21]. The age of each candidate is
predefined by a parameter. The current age of each candidate is increased after each generation. In clonal expansion, each clone carries the same age as its parent, and after the hypermutation phase, a cloned cell which successfully mutates is considered to have a current age equal to zero. Two aging strategies, namely pure static aging and stochastic aging, are adopted in opt-IA. Under the pure static aging strategy, a candidate is allowed to clone and mutate until its death, and is eliminated when the age of death is reached unless it is the current best solution. Under the stochastic aging strategy, candidates may be randomly erased with a negative exponential probability, governed by 1 − e^(−ln 2/τB) with parameter τB. However, the aging operators in opt-IA were only applied to the selection process for the next generation. A more comprehensive adoption of aging operators can be found in the age artificial immune system (AAIS) [22]. This algorithm differs from opt-IA in that the aging concept is brought into the cloning and survival of antibodies. The number of antibodies selected for the next generation, and the number of clones generated from each selected antibody, depend on the ages of the antibodies: these numbers increase initially and then decline with age. Kelsey and Timmis [23] proposed the B-cell algorithm (BCA), which is again inspired by the clonal selection process. The distinctive feature of BCA in comparison with other clonal selection-based approaches is its unique mutation operator, contiguous somatic hypermutation. B-cells are mutated only in contiguous regions, a concept in line with the biological immune system, in which mutation of B-cell receptors focuses only on complementarity-determining regions. The algorithm starts with a random initial population of B-cells. Each B-cell is cloned to produce a clonal pool.
One clone is selected at random from each pool to undergo a random change to bring in diversity. Contiguous somatic hypermutation is then applied to all clones. The clones with the highest affinity within each clonal pool, including the parent, are then selected for the next generation. The algorithm then loops until a certain stopping criterion is met. Concerning the immune network-based approach, opt-aiNet was proposed, which combines CLONALG with the immune network theory. opt-aiNet is an extension of CLONALG with network suppression operators introduced in a discrete timeframe, and a separation of the exploration and exploitation stages, triggered via the stabilization of the average fitness of the population, to avoid premature convergence [7]. Network suppression removes similar cells from the population based on the Euclidean distance, according to a certain predefined threshold. However, the network interactions in opt-aiNet are only suppressive in nature and appear in a discrete timeframe,
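The network suppression step of opt-aiNet can be illustrated as follows; the function name and the threshold value are assumptions for exposition, and only the Euclidean-distance pruning described above is modeled.

```python
import math

def network_suppression(cells, fitness, sigma_s=0.2):
    """Keep only the fittest cell among any group closer than sigma_s (illustrative)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    survivors = []
    # examine cells from fittest to least fit so each survivor dominates its neighborhood
    for cell in sorted(cells, key=fitness, reverse=True):   # maximization
        if all(dist(cell, kept) >= sigma_s for kept in survivors):
            survivors.append(cell)
    return survivors

# usage: two nearly identical cells collapse to the fitter one
f = lambda c: -sum(v * v for v in c)                        # peak at the origin
cells = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0)]
kept = network_suppression(cells, f, sigma_s=0.2)           # (0.05, 0.0) is suppressed
```

With a smaller threshold `sigma_s`, fewer cells fall inside each other's radius and more of the population survives.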
which is in contrast to the immune network theory, where network interactions include both suppression and stimulation between solutions, and such interactions are applied continuously.

2.3 Parallel implementation in artificial immune systems

Parallel computing has been widely applied to many evolutionary algorithms, as these algorithms are highly suited for parallelization. The fact that mutation and fitness evaluation can be performed independently on each individual within the population has facilitated their parallel implementation. Randall and Lewis [24] implemented a parallel framework with ant colony optimization, in which ants construct solutions in parallel and a master ant coordinates the activities of the colony. Liu et al. [25] introduced a parallel evolutionary algorithm based on the island model for solving function optimization problems; the parallel operation was organized as an island model in which each processor executes without waiting for network communication. Venter and Sobieszczanski-Sobieski [26] reviewed and evaluated synchronous and asynchronous parallel versions of the particle swarm optimization algorithm in terms of accuracy and efficiency. He et al. [27] proposed a parallel computing platform for global optimization (PGO) based on the genetic algorithm, which aimed to exploit parallel computation for speeding up the GA computation and was organized as a global master-slave system. The performance of such algorithms is highly promising. In fact, the exploitation of the parallelism inherent in many evolutionary algorithms has already provided definitive gains in efficiency and effectiveness according to the reported results. In terms of immunologically inspired algorithms, Watkins et al. [28] conducted a preliminary investigation of parallelism, focusing on a simple parallelization of CLONALG.
The input data set was randomly divided across the processes, each of which then used its portion as input for the CLONALG operation. Once all the processes had finished, the results were gathered to generate the final result. Watkins and Timmis [16] furthered the study by parallelizing another immune learning algorithm, the artificial immune recognition system (AIRS) [29], with a primary focus on pattern recognition. The parallel structure adopted was similar to that of the previous study. Both studies showed that parallelization techniques can help to improve the performance of immunologically inspired algorithms. Watkins [30] proposed another parallel version of AIRS, distributed AIRS. The algorithm adopted an
alternative approach to employing multiple processes in artificial immune systems, in which each processor evolves individually. However, there is no final gathering and re-evaluation on top of the results from each processor, and the decision is completely decentralized. In the above studies, the authors did not include any interaction between the processors during the search process in the parallel architecture. Cutello et al. [31] proposed a parallel immune algorithm (parIA) for numeric function optimization. parIA runs similarly to opt-IA, with three main immune operators, namely cloning, hypermutation and aging. In terms of the parallel computing platform, parIA adopts a master-slave approach. For each generation, the master partitions the population among the slaves. Each slave executes cloning and hypermutation on the received population and returns the result to the master at the end of the computation. The master then executes the aging operation on the received population and selects a population for the next generation. The parallel implementation primarily aimed to reduce the computational complexity by dividing the time-consuming operations among processors.
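A master-slave generation of the kind parIA uses can be caricatured as follows. This is our own simplified sketch, using threads in place of distributed processes and omitting the aging operator, so the names `master_generation` and `slave_step` and all parameter values are assumptions.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def slave_step(partition, fitness):
    """Slave: clone and hypermutate its partition, returning the improved candidates."""
    out = []
    for cand in partition:
        clones = [[x + random.gauss(0.0, 0.1) for x in cand] for _ in range(3)]
        out.append(min(clones + [cand], key=fitness))   # best mutant or the parent
    return out

def master_generation(pop, fitness, n_slaves=4):
    """Master: partition the population, farm out cloning/hypermutation, then reselect."""
    chunk = (len(pop) + n_slaves - 1) // n_slaves
    parts = [pop[i:i + chunk] for i in range(0, len(pop), chunk)]
    with ThreadPoolExecutor(max_workers=n_slaves) as ex:
        results = list(ex.map(slave_step, parts, [fitness] * len(parts)))
    merged = [c for part in results for c in part]
    # master-side selection for the next generation (aging omitted in this sketch)
    return sorted(merged, key=fitness)[:len(pop)]

# usage: minimize the 2-D sphere function across four logical slaves
random.seed(1)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-3.0, 3.0) for _ in range(2)] for _ in range(16)]
for _ in range(30):
    pop = master_generation(pop, sphere)
```

The structural point is the synchronization barrier: the master cannot select until every slave has returned, which is exactly the coordination overhead the island model in Sect. 5 avoids.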
3 Foundation and methodology

Previous research studies have established the potential of immunity-based optimization algorithms in solving optimization problems. In this article, two novel immunity-based optimization algorithms, known as the suppression control algorithm (SCA) and the parallel suppression control algorithm (PSCA), are developed to improve the performance of immunity-based optimization and to further explore the gains from adopting a distributed and parallel framework in artificial immune systems. In the following sections, the basis and methodology adopted in the development of SCA and PSCA are discussed.

3.1 Basis and methodology

An optimization algorithm known as the suppression control algorithm (SCA) for solving single-objective optimization problems is first designed. It is then extended to work on a parallel and decentralized computation platform, with a focus on improving its performance in solving complex optimization problems. This distributed and parallelized algorithm is called the parallel suppression control algorithm (PSCA). SCA is derived from the clonal selection principle of AIS and centers on the ideas of ranking the population, using multiple mutation rates, and adopting the concept of the immuno-suppression hypothesis. The algorithm is population based, with each individual of the population being a
candidate solution of the problem. The normalized value of the objective function, when evaluated for the individual, is assigned as the affinity of the individual. SCA has the following key features:

1. Individuals of the population are classified into different classes with respect to their fitness.
2. Multiple mutation rates are adopted in somatic hypermutation. The mutation rate depends not only on the inverse of the affinity, but also on the class of the individual.
3. A regulation mechanism is adopted as a collaborative feature.
The idea behind the classification of the candidate population and the multiple mutation rates is to better suit the needs of each individual candidate. Mutation exerts an important influence on the convergence and capability of the algorithm. However, the majority of immunity-based optimization algorithms take on a global mutation formula, where every individual in the population mutates with the same formula. In the human immune system, immunity agents are mutated mainly on the determining regions of the particular agent. In essence, a more focused mutation should be adopted by the more promising candidates to bias the search towards nearby regions. The best candidates are already situated in promising regions, where fine adjustments of the solution vector by mutation enhance their local penetrating ability and let them converge to the local optimum at an improved rate, while other candidates need to make bigger adjustments through mutation to explore other, more promising regions. Therefore, it is crucial to have adjustments appropriate to individual situations during the optimization process; different mutation rates are adopted for different individual candidates to enhance the search accuracy and capability. In this respect, the mutation rate depends not only on the individual's affinity, but also on the class and potential of the individual. The general suppression control framework (GSCF) [32] was developed based on the analogy of the regulation of the immune response by lymphocytes and the immuno-suppression hypothesis. The type of immune response is determined not only by the nature of the antigens, but also by the regulatory T-cells and their cytokine products. T-cells modulate the immune response in a positive sense through helper T-cells. They are also capable of down-regulating the immune response by the production of several cytokines [33].
The presence of inflammatory cytokine molecules in the environment tends to elicit aggressive behaviors, whereas the presence of anti-inflammatory cytokines, like IL-10, tends to suppress the immune response [33]. In brief, the immune response of an activated T-cell is also influenced by the environment, which may contain encouraging or discouraging cytokines that bias the response.
In addition, after an activated T-cell has developed a behavior, it emits signals to convert others to join in, through the production of certain cytokines. The GSCF was originally conceptualized for the control of homogeneous agents and was subsequently applied to the control of distributed homogeneous and heterogeneous autonomous systems [32, 34, 35]. Based on the concept of GSCF, the regulation mechanism is taken as a means to regulate the dynamics of the evolving population in the proposed algorithm. The affinity, the class and the Euclidean distance between candidate solutions form the encouraging or discouraging factors in the local environment that trigger the suppression activities. Candidate solutions with a high affinity act as activated T-cells to stimulate or suppress other candidates according to the encouraging or discouraging factors in the population.

3.2 Parallel realization

By introducing a distributed computation paradigm to the aforementioned suppression control algorithm, a decentralized optimization algorithm known as the parallel suppression control algorithm (PSCA) for solving single-objective optimization problems is developed. PSCA is inspired by the decentralized nature of the immune system. Immunologically inspired algorithms can be naturally parallelized, since the operations on the individuals in a population can readily be undertaken in parallel. In fact, the operation of the human immune system is inherently parallel and distributed, with a network of players who work, coordinate and co-operate simultaneously to get things done. There is no single point of supervisory control in the system, and each individual cell reacts, responds, communicates and coordinates in its local environment. Analogously, the proposed parallel framework consists of a society of logically connected agents that co-operate to perform optimization tasks.
Each agent is an immunity-based optimizer that undergoes the processes of selection, mutation, suppression and recombination as in the SCA algorithm. As the immunity-based optimizers iterate and interact, solutions are merged, suppressed and activated continuously over time. Redundancy is reduced and overlapping is resolved through the regulation mechanism of GSCF, and favorable intermediate solutions are promoted to improve the optimality of the final solution. As such, the immunity-based optimizers can be thought of as self-contained agents that carry information and search the solution space for the best solution. Furthermore, they are capable of making decisions based on the information they carry and the information obtained from their environment.
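The agent view described above can be sketched as follows; `ImmuneOptimizer` and the migration scheme (a simple ring of agents exchanging their best candidates) are illustrative assumptions rather than the full PSCA/GSCF mechanics.

```python
import random

class ImmuneOptimizer:
    """A self-contained immunity-based optimizer agent (illustrative sketch)."""
    def __init__(self, fitness, dim=2, size=10, bounds=(-5.0, 5.0)):
        self.fitness, self.bounds = fitness, bounds
        self.pop = [[random.uniform(*bounds) for _ in range(dim)] for _ in range(size)]

    def step(self):
        """One selection/mutation round: each antibody competes with a mutated clone."""
        lo, hi = self.bounds
        self.pop = [min([cand,
                         [min(hi, max(lo, x + random.gauss(0.0, 0.2))) for x in cand]],
                        key=self.fitness)
                    for cand in self.pop]

    def emigrants(self, k=2):
        """Offer the k best candidates to a connected agent."""
        return sorted(self.pop, key=self.fitness)[:k]

    def receive(self, migrants):
        """Replace the worst candidates with incoming migrants."""
        keep = sorted(self.pop, key=self.fitness)[:len(self.pop) - len(migrants)]
        self.pop = keep + [list(m) for m in migrants]

# usage: three co-operating agents in a ring, minimizing the sphere function
random.seed(2)
sphere = lambda x: sum(v * v for v in x)
agents = [ImmuneOptimizer(sphere) for _ in range(3)]
for _ in range(60):
    for a in agents:
        a.step()
    for i, a in enumerate(agents):                      # ring migration
        agents[(i + 1) % len(agents)].receive(a.emigrants())
best = min((min(a.pop, key=sphere) for a in agents), key=sphere)
```

Each agent decides, mutates and selects on its own knowledge; the only global behavior is the periodic exchange of favorable intermediate solutions.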
4 Suppression control algorithm (SCA)

This section defines the optimization problem considered in this article, which is followed by the presentation of the proposed immunity-based optimization algorithm in Sect. 4.2.

4.1 Optimization problems

An optimization problem is defined as the search for the optimum of a function. An optimization problem thus requires the finding of the vector of decision variables that optimizes the corresponding objective function:

for a maximization case: f(k) ≥ f(x), ∀x ∈ X
for a minimization case: f(k) ≤ f(x), ∀x ∈ X

where x = [x1, x2, …, xn] ∈ X is the n-dimensional vector of decision variables representing the variables of the function f, and k = [k1, k2, …, kn] ∈ X is the vector of decision variables such that the function f is optimized. X ⊆ [l, u] is the feasible region of the decision variables, with l being the lower bound and u being the upper bound. The primary aim of global optimization is to find the vector of decision variables (k) such that the function is optimized.

4.2 Suppression control algorithm (SCA)

In SCA, antibodies undergo somatic hypermutation with different mutation rates. The mutation rate depends not only on the inverse of the affinities, but also on the class of the individuals. Random factors are also introduced to the mutation of antibodies with comparatively low affinity. The immuno-suppression hypothesis is incorporated to manage the direction of the search and the evolution of the population. The regulation mechanism of GSCF is taken as a means to regulate the dynamics of the evolution. The SCA algorithm is described as follows:

Step 1. Initially, a random population of antibodies is created in the feasible region. The affinity of each antibody is equal to the normalized value of the function when evaluated for that particular candidate solution:

Aff_i = (f(c) − f(x_i)) / (f(c) − f(φ))

where Aff_i is the affinity of the ith antibody, f(φ) is the lowest value of the function in the current population and f(c) is the highest value of the function in the current population.

Step 2. The population is then partitioned into three subsets (high-affinity subset, medium-affinity subset and low-affinity subset) based on their affinity.

Step 3. Antibodies in the high-affinity and medium-affinity subsets are cloned to generate a pool of clones. An antibody with a higher affinity produces a relatively large number of clones: the number of clones depends on the affinity, and the higher the affinity, the more clones are generated. The number of clones for an antibody is calculated as follows:

N_cs = d · Aff_i

where N_cs is the clone size of the ith antibody and d is the maximum clone size of an antibody per iteration.

Step 4. The pool of clones then undergoes mutation. The mutation depends on both the inverse of the affinities and the class of the individuals. The clones are mutated as follows:

α = β · exp(−ρ · Aff_i)
c′ = c + α · R · η
η = η · θ

where β is the mutation rate, ρ is a decay constant, R is a random variable between −1 and 1, η is the step factor, which is initially set to a value that depends on the bounds of the variables and is dynamically reduced by a factor θ down to a minimum value of 1, c is the clone and c′ is the mutated clone. β is further divided into β_h and β_m, the mutation rates for the high-affinity subset and the medium-affinity subset, respectively. A simple recombination operation by means of single-point recombination, and a random walk strategy using a much larger mutation rate, are applied to the clones with pre-defined probabilities.

Step 5. The affinity of each clone is calculated according to the formula listed in Step 1. A matured clone with a better result is selected to replace the parent antibody.

Step 6. A suppression operation is triggered by each antibody in the high-affinity subset and effected in two ways: (a) in the medium-affinity subset, or (b) within the high-affinity subset. For each triggering antibody, the suppression operation involves the other antibodies in the medium-affinity subset and the high-affinity subset. Regarding the effect in the medium-affinity subset, an antibody in the medium-affinity subset is suppressed if the Euclidean distance in the solution space, between that antibody and the antibody in the high-affinity subset which triggers the suppression operation, is less than a threshold value. Concerning the effect within the high-affinity subset, antibodies in the high-affinity subset may be suppressed by other antibodies within the subset if the number of antibodies within a certain Euclidean distance is higher than a threshold value. This threshold is set to avoid potential crowding in a particular region of the solution space and should be set to a percentage of the current population (10% is used in this algorithm).

Step 7. The low-affinity subset is discarded and replaced by randomly generated antibodies, or by the recombination operation from the current population, to maintain sufficient diversity during the whole course of the search.

Step 8. All three subsets are combined.

Step 9. Steps 2 to 8 are repeated until the termination criterion is reached.
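Steps 1 to 9 can be gathered into a single Python sketch; the thresholds, rates (with the decay constant ρ taken as 1), and the simplified suppression rule below are our own illustrative assumptions, not the authors' tuned values.

```python
import math
import random

def sca_generation(pop, f, bounds=(-5.0, 5.0), d=6, beta_h=0.2, beta_m=0.8):
    """One SCA generation for a minimization problem (illustrative; parameters assumed)."""
    lo, hi = bounds
    vals = [f(c) for c in pop]
    worst, best_v = max(vals), min(vals)
    span = (worst - best_v) or 1.0
    aff = [(worst - v) / span for v in vals]            # Step 1: normalized affinity
    mean = sum(aff) / len(aff)
    sd = math.sqrt(sum((a - mean) ** 2 for a in aff) / len(aff))
    high = [i for i, a in enumerate(aff) if a >= mean + 1.5 * sd]        # Step 2
    med = [i for i, a in enumerate(aff) if mean - 1.5 * sd < a < mean + 1.5 * sd]
    nxt = []
    for subset, beta in ((high, beta_h), (med, beta_m)):
        for i in subset:
            cand = pop[i]
            n_clones = max(1, int(d * aff[i]))          # Step 3: N_cs = d * Aff_i
            alpha = beta * math.exp(-aff[i])            # Step 4 (rho assumed to be 1)
            clones = [[min(hi, max(lo, x + alpha * random.uniform(-1, 1)))
                       for x in cand] for _ in range(n_clones)]
            nxt.append(min(clones + [cand], key=f))     # Step 5: best mutant survives
    survivors = []                                      # Step 6: simplified suppression
    for c in sorted(nxt, key=f):
        if all(math.dist(c, s) >= 0.05 for s in survivors):
            survivors.append(c)
    while len(survivors) < len(pop):                    # Steps 7-8: refill and combine
        survivors.append([random.uniform(lo, hi) for _ in range(len(pop[0]))])
    return survivors                                    # Step 9: the caller iterates

# usage: minimize the 2-D sphere function
random.seed(3)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(30)]
for _ in range(60):
    pop = sca_generation(pop, sphere)
best = min(pop, key=sphere)
```

The sketch collapses the two suppression cases of Step 6 into one distance-based pruning pass and omits the recombination and random walk operators of Step 4 for brevity.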
Figure 1 shows the flow of SCA with highlights on the salient functionality. In summary, the three characteristics that the algorithm exemplifies are: (a) decomposition of the antibody population (Step 2); (b) multiple mutation rates (Step 4); and (c) the general suppression control process (Step 6).

4.2.1 Decomposition of antibody population and multiple mutation rates

In Step 2, the population is partitioned into three subsets: the high-affinity subset, the medium-affinity subset and the low-affinity subset. The entire population of antibodies is decomposed based on their affinity in each iteration. A statistical significance concept is adopted for the decomposition. The antibodies are classified as follows:

High-affinity subset: Aff_i ≥ Aff̄ + 1.5σ
Medium-affinity subset: Aff̄ − 1.5σ < Aff_i < Aff̄ + 1.5σ
Low-affinity subset: Aff_i ≤ Aff̄ − 1.5σ

where Aff̄ is the mean and σ is the standard deviation of the affinity of the whole population. As stated in Sect. 3.2, mutation depends on both the affinity and the class of the individual in order to bias the search towards nearby region(s) for promising individuals. This is done by adopting different mutation rates (β) for the different classes of the population in Step 4 of the algorithm, with β_h as the mutation rate for the high-affinity subset and β_m for the medium-affinity subset. A comparatively small
value, with β_h < β_m, is used for antibodies in the high-affinity subset so that the mutated clones of these antibodies are targeted on a contiguous region. Besides the adoption of different mutation rates for different classes of individuals, random factors are also introduced to the antibodies to improve the exploration ability of the algorithm. The random walk strategy and the recombination operation are applied to the clones with pre-defined probabilities. The recombination operation is adopted by means of single-point recombination, which combines the clone with one of the antibodies in the current population. The random walk strategy takes on a much larger mutation rate through the feasible region to explore areas farther from the parent antibody.

4.2.2 General suppression control framework (GSCF)

In Step 6, the regulation mechanism of the general suppression control framework (GSCF) is used to regulate the dynamics of the population evolution. Antibodies from the high-affinity subset act as mature T-cells, which are able to suppress and stimulate other antibodies; thus only high-affinity antibodies can suppress other cells. The suppression operation is applied in two cases, namely, (a) on the medium-affinity subset, and (b) within the high-affinity subset. For the suppression action on the medium-affinity subset, the Euclidean distance between each antibody in the medium-affinity subset and the antibodies in the high-affinity subset is first calculated. If the distance is smaller than a specific suppression threshold, the antibody (in the medium-affinity subset) is suppressed. The suppression threshold used here is not a constant for all antibodies in the high-affinity subset but is dynamically calculated for each antibody, with a pre-defined maximum and minimum value. The threshold is equal to the Euclidean distance between the farthest clone after mutation and the best mutant of that particular antibody in the mutation process.
Based on this calculation, the threshold represents the search region for that particular antibody in the last cloning and mutation operation. As a result, the present antibody should be the best antibody within this range. Any other antibodies in this range are not potential candidates that worth further exploration. Thus they are suppressed. For the suppression action within the high-affinity subset, the focus point is on the density of the antibodies in the high-affinity subset. If the density of the high affinity antibodies in a certain search space is higher than a certain value, some of the high affinity antibodies will be suppressed. This could prevent a particular area being overexploited.
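The decomposition of Step 2 and the class-dependent mutation of Step 4 can be sketched as follows. This is a minimal Python/NumPy illustration, not the authors' Matlab implementation: the function names, the Gaussian form of the mutation step and the probabilities `p_walk` and `p_recomb` are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose(affinities, k=1.5):
    """Step 2: split antibody indices into high-, medium- and low-affinity
    subsets using mean +/- k standard deviations (k = 1.5 in SCA)."""
    mean, sd = affinities.mean(), affinities.std()
    high = np.where(affinities >= mean + k * sd)[0]
    low = np.where(affinities <= mean - k * sd)[0]
    medium = np.where((affinities > mean - k * sd) &
                      (affinities < mean + k * sd))[0]
    return high, medium, low

def mutate_clone(parent, population, beta, bounds,
                 p_walk=0.1, p_recomb=0.1, walk_scale=10.0):
    """Step 4: produce one mutated clone of `parent`.

    beta is the class-dependent mutation rate (beta_h for the high-affinity
    subset, beta_m for the medium-affinity subset, with beta_h < beta_m);
    p_walk and p_recomb are assumed probabilities for the random-walk and
    single-point recombination operators."""
    lo, hi = bounds
    clone = parent + rng.normal(0.0, beta * (hi - lo), size=parent.shape)
    if rng.random() < p_recomb:
        # single-point recombination with a random antibody of the population
        mate = population[rng.integers(len(population))]
        point = rng.integers(1, parent.size)
        clone = np.concatenate([clone[:point], mate[point:]])
    if rng.random() < p_walk:
        # random walk: a much larger mutation rate to explore wider areas
        clone = clone + rng.normal(0.0, walk_scale * beta * (hi - lo),
                                   size=clone.shape)
    return np.clip(clone, lo, hi)
```

Because βh < βm, clones of high-affinity antibodies stay close to their parent, while medium-affinity clones (and occasional random walks) keep the search exploratory.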
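The per-antibody suppression threshold and the suppression of the medium-affinity subset in Step 6 can likewise be sketched; again the helper names are hypothetical, and this is only one way to realise the rule described above.

```python
import numpy as np

def dynamic_threshold(clones, best_mutant, t_min, t_max):
    """Suppression radius of one high-affinity antibody: the Euclidean
    distance between its farthest mutated clone and its best mutant,
    clamped to the pre-defined [t_min, t_max] range."""
    farthest = np.linalg.norm(clones - best_mutant, axis=1).max()
    return float(np.clip(farthest, t_min, t_max))

def suppress_medium(medium, high, thresholds):
    """Drop medium-affinity antibodies lying inside the region already
    searched by some high-affinity antibody (they are not candidates
    worth further exploration)."""
    keep = [ab for ab in medium
            if np.all(np.linalg.norm(high - ab, axis=1) >= thresholds)]
    return np.array(keep).reshape(-1, medium.shape[1])
```

A density check over the high-affinity subset itself would follow the same distance-based pattern.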
Fig. 1 The structure of SCA. The flow is: Initial Population → Decomposition (Step 2: the population is partitioned into three subsets based on affinity) → Partitioned Population → Cloning (Step 3) → Cloned Population → Mutation (Step 4: mutation depends on both the affinity and the class of the individual) → Mutated Population → Selection and Suppression (Steps 5 to 8: a matured clone with a better result replaces its parent antibody; suppression is activated by the high-affinity subset and applied to the medium-affinity subset or within the high-affinity subset) → Population
5 Parallel suppression control algorithm (PSCA)

A parallel version of SCA, known as the parallel suppression control algorithm (PSCA), is developed. It is designed to run on a decentralized computation platform that consists of a number of processes inter-connected in a specific configuration for improved computation effectiveness. In contrast with some other parallel algorithms, the parallel computing framework of PSCA is organized as an island model rather than a master-slave configuration. The major difference between the island model and the master-slave approach is that there is no central supervisory system to coordinate the search activities of each process. In the island model, the overall population is partitioned into a number of sub-populations that are assigned to the islands. The sub-population in each island undergoes an individual evolution process, including cloning, mutation and selection, before exchanging migrants with other islands, which also carry out the evolution process. Each process in PSCA represents an island in the island model. Each island carries information, searches the solution space for the best solutions, and makes decisions and selections according to its own knowledge. Each process is responsible for the evolution of its initially assigned sub-population. At predetermined times during the search process (the end of an iteration is used in PSCA), certain individuals in an island migrate towards other connected islands. Islands interact and communicate with one another, sending and receiving migrants to and from other
islands. Individuals are merged, suppressed and activated through the interaction with other islands. Each island continues to iterate until the termination criterion is reached. As the processes interact with each other and cooperate to search for the optimal solution in a distributed manner, communication and information exchange between different processes must be considered. Under the parallel and distributed computation platform, a complete view of the evolution of all the processes is not required, as such communication is expensive and would lead to a very inefficient mechanism. Thus, a ring network configuration is adopted by PSCA, in which each process connects to exactly two other processes, forming a circular pathway for communication and information exchange. This structure has the benefit of controlling the communication overhead in the overall network, where the redundancy in communication that exists in other parallel systems can be greatly minimized. In addition, a ring configuration can speed up the overall communication process without sacrificing the connectivity of the network. Details of PSCA are described in the following: Step 1.
Initially a random population of antibodies is created in the feasible region. The entire population of antibodies is divided into several sub-populations and distributed among different islands. Step 2. The affinities of the antibodies are determined within each island. The affinity is equal to the normalized value of the function when evaluated for that particular candidate solution:

Aff_i = (f(c) − f(x_i)) / (f(c) − f(φ))

where Aff_i is the affinity of the ith antibody, f(φ) is the lowest value of the function in the current population of that particular island and f(c) is the highest value of the function in the current population of that particular island. Step 3.
Each island executes all the steps in the same way as SCA, including cloning, mutation, selection and suppression (Steps 2 to 8 discussed in Sect. 4.2). The evolution process of each island proceeds independently with only local information. In particular, the further separation of antibodies according to affinity in Step 2 of SCA focuses only on the local population within the island; the highest and lowest values of the function used in Step 5 of SCA refer only to the local population within the island; and in Step 6 of SCA, the suppression operation is only triggered by the antibodies in the local high-affinity subset
and applied only to the local population within the island. Step 4. Islands communicate with their connected islands: the antibodies in the high-affinity subset determined in Step 3 migrate to the connected islands. Step 5. Suppression and activation take place in each process, triggered by the migrants from the connected islands. Step 6. Steps 3 to 5 are repeated until appropriate termination criteria are reached.
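A serial sketch of the island mechanics above (the affinity normalization of Step 2, the ring topology and the end-of-iteration migration of Steps 4 and 5) might look as follows. The function names are illustrative assumptions; a real PSCA run executes the islands as separate MatlabMPI processes rather than in a single loop.

```python
import numpy as np

def affinity(fvals):
    """Step 2 of PSCA: normalised affinity within one island,
    Aff_i = (f(c) - f(x_i)) / (f(c) - f(phi)), where f(c) and f(phi) are
    the highest and lowest objective values in the island's current
    population (for minimisation the best antibody gets affinity 1)."""
    c, phi = fvals.max(), fvals.min()
    if c == phi:  # fully converged island; avoid division by zero
        return np.ones_like(fvals)
    return (c - fvals) / (c - phi)

def ring_neighbours(i, n_islands):
    """Ring topology: island i communicates with exactly two islands."""
    return (i - 1) % n_islands, (i + 1) % n_islands

def migrate(subpops, high_subsets):
    """Steps 4-5, serial sketch: each island receives the high-affinity
    migrants of its two ring neighbours; the migrants then drive the
    island's suppression/activation step."""
    n = len(subpops)
    merged = []
    for i in range(n):
        left, right = ring_neighbours(i, n)
        merged.append(np.vstack([subpops[i],
                                 high_subsets[left],
                                 high_subsets[right]]))
    return merged
```

The ring keeps the per-iteration communication of each island constant (two neighbours), which is the overhead-control property discussed above.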
5.1 Communication, suppression and activation across islands

Communication is a very important aspect of distributed processing. It helps to synchronize the operation of each process and provides a channel for sharing knowledge between processes. Exchange of information between islands occurs at the end of each iteration: each island communicates with its connected islands to share the antibodies of its high-affinity subset, and these antibodies are thereby migrated between islands. This sharing is in line with the concept that memory cells flow freely within the immune system. The shared information is used by each island for the general suppression control process.

The exchange of information between islands triggers the general suppression control process. The process is very similar to that of SCA, with the slight difference that the antibodies from the connected islands act as activated T-cells, which are able to suppress and stimulate other antibodies within the process. Similarly, the Euclidean distance between antibodies is used as the decision factor for suppression: if the distance is smaller than a pre-defined threshold value, the antibody in the process is suppressed. When an antibody within an island is suppressed by an antibody shared from a connected island, the shared antibody joins the current population of the island to substitute the suppressed antibody.

6 Numerical experiments

6.1 Performance evaluation of SCA and PSCA

In order to analyze the performance of SCA and PSCA, two different sets of experiments were carried out to evaluate the proposed algorithms. The first set of experiments investigates the effectiveness of the proposed algorithms, while the second set benchmarks the proposed algorithms against other evolutionary-based optimization approaches. The experimental procedures and results are presented and discussed in this section. All experiments were run on an IBM-compatible personal computer with a 3.2 GHz CPU and 512 MB RAM.
To evaluate the convergence characteristics, the effectiveness (measured by the number of evaluations taken) and the gain obtained from adopting a parallel paradigm, SCA and PSCA are applied to five numerical functions with different characteristics and degrees of complexity, as listed in Table 1. The optimization version of CLONALG [6] was used as a control. All algorithms were implemented in Matlab, and CLONALG was implemented according to the code provided by de Castro [36]. Without losing fairness, the termination criterion for the algorithms is set to |f_current − f_global| ≤ μ, where f_current is the current best solution, f_global is the known optimal value and μ is the acceptable deviation; μ is set to 10^−3 and 10^−5, respectively. The maximum number of evaluations was set to 150,000, and each algorithm was repeated 15 times. The mean numbers of evaluations to reach μ = 10^−3 and μ = 10^−5 are shown in Table 2. The mean best result was recorded for every 500 evaluations, and the convergence characteristics of f2 and f3 are shown in Figs. 2 and 3. Table 2 summarizes the mean and standard deviation of the number of evaluations required to successfully reach the global optimum with the required accuracy. The larger the number of evaluations, the more computing resources are used in the search operation; thus the number of evaluations corresponds to the computation cost and the effectiveness of the algorithm. It is clear from the results
Table 1 Numerical benchmark functions used in the experiment

Sphere function: f1 = Σ_{i=1..2} x_i^2; bound [−100, 100]^2; optimal value 0; global optimum (0, 0)
Rastrigin's function: f2 = 20 + Σ_{i=1..2} (x_i^2 − 10 cos(2πx_i)); bound [−5.12, 5.12]^2; optimal value 0; global optimum (0, 0)
Bohachevsky's function: f3 = x_1^2 + 2x_2^2 − 0.3 cos(3πx_1 + 4πx_2) + 0.3; bound [−100, 100]^2; optimal value 0; global optimum (0, 0)
Schwefel's function: f4 = −Σ_{i=1..2} x_i sin(√|x_i|); bound [−500, 500]^2; optimal value −837.9658; global optimum (420.9687, 420.9687)
Rosenbrock's function: f5 = 100(x_2 − x_1^2)^2 + (x_1 − 1)^2; bound [−30, 30]^2; optimal value 0; global optimum (1, 1)
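For reference, the five test functions of Table 1 can be written out directly. This is an illustrative Python/NumPy transcription (the function names are ours, not the paper's); all five are minimisation problems.

```python
import numpy as np

def sphere(x):       # f1: optimum 0 at (0, 0)
    return float(np.sum(x**2))

def rastrigin(x):    # f2: optimum 0 at (0, 0)
    return float(20 + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def bohachevsky(x):  # f3: optimum 0 at (0, 0)
    return float(x[0]**2 + 2 * x[1]**2
                 - 0.3 * np.cos(3 * np.pi * x[0] + 4 * np.pi * x[1]) + 0.3)

def schwefel(x):     # f4: optimum about -837.9658 at (420.9687, 420.9687)
    return float(np.sum(-x * np.sin(np.sqrt(np.abs(x)))))

def rosenbrock(x):   # f5: optimum 0 at (1, 1)
    return float(100 * (x[1] - x[0]**2)**2 + (x[0] - 1)**2)
```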
Table 2 Experimental results for the convergence experiment

                                    PSCA                    SCA                     CLONALG
Function                            μ=10^−3    μ=10^−5      μ=10^−3    μ=10^−5      μ=10^−3    μ=10^−5
F1  Mean number of evaluations      3,288      4,136        3,536      4,712        9,600      12,967
    SD                              795        947          1,078      1,182        2,115      2,629
    Percentage of failed runs       –          –            –          –            –          –
F2  Mean number of evaluations      1,825      3,945        1,677      5,169        16,850     23,167
    SD                              461        2,747        692        2,958        6,339      5,251
    Percentage of failed runs       –          –            –          –            –          –
F3  Mean number of evaluations      5,037      6,350        5,350      7,517        14,500     16,600
    SD                              1,368      1,526        2,194      2,408        4,228      3,372
    Percentage of failed runs       –          –            –          –            –          –
F4  Mean number of evaluations      3,439      4,857        2,541      3,734        57,250     61,500
    SD                              1,409      2,089        547        887          30,774     51,274
    Percentage of failed runs       –          –            –          –            –          33%
F5  Mean number of evaluations      15,733     27,257       22,571     37,776       55,625     –
    SD                              6,949      7,923        12,231     11,874       26,693     –
    Percentage of failed runs       –          –            –          –            87%        100%
Fig. 2 Convergence characteristic of f2 (solution value against number of evaluations for PSCA, SCA and CLONALG)

Fig. 3 Convergence characteristic of f3 (solution value against number of evaluations for PSCA, SCA and CLONALG)
that the solutions obtained by the proposed algorithms converge to higher-quality solutions with a much smaller number of evaluations than CLONALG. The results also show that the proposed algorithms significantly reduce the redundancy in the search process. Comparing PSCA with SCA, PSCA requires fewer evaluations to reach the required accuracy than
SCA for most of the benchmark functions. With a similar population size and similar parameters used in the experiments, parallelization with the GSCF regulation mechanism is able to reduce the redundancy and overlap in the population of antibodies, and unnecessary evaluations are greatly minimized. Therefore PSCA requires fewer evaluations
and is hence more efficient, requiring a smaller computation cost to reach the optimal solution.

6.2 Benchmarking

To evaluate the ability and performance of SCA and PSCA, we performed a set of experiments on several numerical benchmark problems reported in the literature. Again, SCA and PSCA are implemented in Matlab. The distributed computation structure of PSCA is implemented with MatlabMPI, a software tool that provides a subset of the Message Passing Interface for different Matlab programs to communicate on a distributed computing platform [37]. The performance of the proposed algorithms is benchmarked in two respects, namely against immunological-inspired algorithms and against other evolutionary-based optimization algorithms. Five immunological-inspired algorithms, CLONALG [6], Opt-IA [8], BCA [23], Opt-ainet [7] and parIA [31], are chosen for the first comparison. Four evolutionary algorithms, differential evolution (DE), particle swarm optimization (PSO), a simple evolutionary algorithm (SEA) and the attractive and repulsive PSO (arPSO), are used in the second. Table 3 lists the six numerical benchmark problems and their key properties used in the experiment. These problems can be classified into two major categories: unimodal functions (F6–F7) and multimodal functions (F8–F11). In the benchmarking experiment, the number of evaluations is used for the comparison. A larger number of evaluations implies that more computing resource is used
in the search operation, indicating that the corresponding algorithm is less efficient. Hence, the number of evaluations is an objective measure of the computing resources used in the evolution. Opt-IA uses the same mutation rate as CLONALG. Two equations for computing the mutation rate, proposed by de Castro and Von Zuben [6, 7], were adopted separately for the two algorithms in the experiment, and the better results of the two mutation rates are reported. Two versions of CLONALG, namely CLONALG1 and CLONALG2, are used, both in binary coding. In particular:

1. CLONALG1: at the end of each generation, each antibody is substituted by its best mutated clone.
2. CLONALG2: the n best mutated clones from all antibodies form the next generation.
The better results of the two versions of CLONALG are reported. For the comparison with the five immunological-inspired algorithms, each algorithm was tested on the six numerical benchmark problems shown in Table 3. The maximum numbers of evaluations allowed (Tmax) are shown in Table 4. The experiments were repeated 20 times with different random numbers, and the best solution of each optimization run was recorded. The results of CLONALG and Opt_IA were obtained from [38], the results of parIA were obtained from [31], and BCA and Opt-ainet were implemented as in [39]. The mean best value over all 20 runs and the standard deviation are shown in Table 4. The computing accuracy was set to 10^−4. The parameters for the five immunological-inspired algorithms are listed below; details of the parameters are described in [7, 23, 31, 38].
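The experimental protocol (repeated independent runs, reporting the mean best value and its standard deviation) can be sketched as follows; `run_once` is an assumed callable standing in for one complete SCA/PSCA optimization run.

```python
import statistics

def benchmark(run_once, runs=20):
    """Protocol sketch: repeat an optimisation run `runs` times (here
    seeded 0..runs-1) and report the mean and standard deviation of the
    best objective values found, as in Tables 4 and 5."""
    best_values = [run_once(seed) for seed in range(runs)]
    return statistics.mean(best_values), statistics.stdev(best_values)
```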
Table 3 Six numerical benchmark problems

f6 = Σ_{i=1..n} x_i^2; dimension 30; bound [−100, 100]^30; optimal value 0
f7 = Σ_{i=1..n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]; dimension 30; bound [−30, 30]^30; optimal value 0
f8 = [1/500 + Σ_{j=1..25} 1/(j + Σ_{i=1..2} (x_i − a_ij)^6)]^−1; dimension 2; bound [−65.54, 65.54]^2; optimal value ≈ 1
f9 = Σ_{i=1..11} [a_i − x_1(b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2; dimension 4; bound [−5, 5]^4; optimal value 0.0003075
f10 = 4x_1^2 − 2.1x_1^4 + x_1^6/3 + x_1 x_2 − 4x_2^2 + 4x_2^4; dimension 2; bound [−5, 5]^2; optimal value −1.0316
f11 = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1 x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1 x_2 + 27x_2^2)]; dimension 2; bound [−2, 2]^2; optimal value 3
Table 4 Experimental results of SCA, PSCA, CLONALG, Opt_IA, opt-aiNet, BCA and parIA

Function (Tmax)           PSCA          SCA           Opt_IA        CLONALG       opt-aiNet     BCA           parIA
F6 (150,000)  Mean best   4.4 × 10^−3   8.4 × 10^−3   0.00          0.00          1.50          2.7 × 10^−3   0.0
              SD          2.4 × 10^−4   5.3 × 10^−5   0.0           0.0           0.18          4.6 × 10^−3   0.0
F7 (200,000)  Mean best   12.0          20.5          28.4          27.6          140.0         28.9          15.53
              SD          5.8           23.7          0.42          1,034         21.3          1.1           13.53
F8 (10,000)   Mean best   0.998         0.998         1.042         1.002         0.998         1.029         0.998
              SD          0.0           0.0           0.11          2.8 × 10^−2   0.0           7.5 × 10^−2   1.3 × 10^−3
F9 (400,000)  Mean best   6.1 × 10^−4   7.3 × 10^−4   7.1 × 10^−4   1.4 × 10^−3   7.4 × 10^−4   7.4 × 10^−4   4.3 × 10^−4
              SD          1.8 × 10^−4   9.2 × 10^−5   1.3 × 10^−4   5.4 × 10^−4   1.3 × 10^−7   6.6 × 10^−10  2.2 × 10^−4
F10 (10,000)  Mean best   −1.0316       −1.0316       −1.0314       −1.0315       −1.0314       −1.0316       −1.0252
              SD          0.0           0.0           8.7 × 10^−4   1.8 × 10^−4   1.2 × 10^−4   1.1 × 10^−4   8.4 × 10^−3
F11 (10,000)  Mean best   3.00          3.00          3.00          3.00          3.01          3.00          4.50
              SD          0.0           0.0           0.0           0.0           1.5 × 10^−2   0.0           3.55
Table 5 Experimental results of SCA, PSCA, DE, PSO, arPSO and SEA

Function (Tmax)           PSCA          SCA           DE            PSO           arPSO         SEA
F6 (500,000)  Mean best   0.0           0.0           0.0           0.0           0.0           1.8 × 10^−3
              SD          0.0           0.0           0.0           0.0           0.0           2.8 × 10^−4
F7 (500,000)  Mean best   29.13         40.29         0.0           4.03          355           31.32
              SD          24.58         32.27         0.0           4.99          2,150         17.4
F8 (500,000)  Mean best   0.998         0.998         0.998         1.16          0.998         0.998
              SD          0.0           0.0           0.0           3.68 × 10^−1  0.0           0.0
F9 (500,000)  Mean best   5.97 × 10^−4  7.19 × 10^−4  4.17 × 10^−4  1.34 × 10^−3  1.25 × 10^−3  3.70 × 10^−4
              SD          1.79 × 10^−4  2.21 × 10^−4  3.01 × 10^−4  3.94 × 10^−3  3.96 × 10^−3  8.78 × 10^−5
F10 (500,000) Mean best   −1.0316       −1.0316       −1.0316       −1.0316       −1.0316       −1.0316
              SD          0.0           0.0           0.0           0.0           0.0           0.0
F11 (500,000) Mean best   3.0           3.0           3.0           3.0           3.52          3.0
              SD          0.0           0.0           0.0           0.0           3.65          0.0
• CLONALG and Opt_IA: N = n = 50, d = 0, b = 0.1, d = 20, dup = 2, and sB = 20
• Opt-ainet: N = 20, Nc = 5, d = 40%, b = 1 and suppression threshold = 0.2
• BCA: population size = 4, clonal pool = 4, new random elements = 1
• parIA: d = 20, q = 10, d = 20, dup = 2, sB = 15 and number of processes = 4
The parameters for the proposed algorithms, SCA and PSCA, were manually tuned based on preliminary experiments, and the following run-time parameters are employed:
• Population size = 50, d = 5, q = −4, βh = 0.2, βm = 3
• (For PSCA only) Number of processes = 4, population size for each agent = 12
For the comparison with the four evolutionary algorithms, each algorithm was tested again on the six numerical benchmark problems shown in Table 3. The maximum number of evaluations allowed was set to 500,000, and each experiment was repeated 20 times with different random numbers. The results of DE, PSO, SEA and arPSO are obtained from [40]. The mean best value over all 20 individual runs and the standard deviation are shown in Table 5. The computing accuracy was set to 10^−4. The parameters for all algorithms were manually tuned based on preliminary experiments; specific settings for each of these four algorithms are described in [40]. The following run-time parameters are employed for SCA and PSCA:

• Population size = 50, d = 5, q = −4, βh = 0.2, βm = 3
• (For PSCA only) Number of processes = 4, population size for each agent = 12
Table 4 gives the results of the comparison with the five immunological-inspired algorithms. From these results, PSCA achieved the best result on four of the six benchmark problems, SCA and parIA obtained the best results on three functions, and CLONALG, Opt_IA and BCA attained the best results on two functions. However, the proposed algorithms fell behind on benchmark problem f6. These results (except for f6) show that PSCA and SCA are able to obtain solutions nearer the optimum with the same amount of resources, and that they outperformed the other immunological-inspired algorithms used in the experiment in determining the global optimal solutions of the six benchmark functions. Table 5 summarizes the comparison with the four evolutionary algorithms from the literature. In this comparison, DE attains the best performance, achieving the best result on five of the six benchmark problems. The proposed PSCA and SCA, and the evolutionary algorithm SEA, obtained the best results on four functions, while PSO and arPSO attained the best result on three functions. From the results, the proposed algorithms achieve results comparable to the other evolutionary algorithms, and they are capable of reaching the global targets with the required computing accuracy
(10^−4) within the required number of evaluations for most of the benchmark functions, as with other conventional algorithms. As such, the performance of SCA and PSCA is comparable to those evolutionary algorithms for numerical optimization. The standard deviation of the number of evaluations also reflects the stability and the dependence on the initial population: the relatively smaller standard deviations of PSCA and SCA indicate that both algorithms have good consistency in performing the search operation.

7 Conclusion and future work

This work develops an immunity-based optimization algorithm called SCA for solving multi-modal optimization problems, together with its parallel realization, PSCA. The characteristics of the two algorithms are discussed, and their performance is analyzed and evaluated with numerical experiments on several benchmark problems of different complexity. The introduction of immunity-based concepts, including GSCF, clustering of the population and multiple mutation rates, not only delivers the capability of finding optimal solutions but also reduces the number of iterations significantly, which in turn makes the algorithms more efficient and saves computation cost. The findings presented in Sect. 6.1 suggest that PSCA has an advantage in solving optimization problems with reduced computation cost under a distributed computation paradigm. In the context of the parallel structure of PSCA, further research will be conducted to explore the effect of changing the number of islands on both the efficiency and the effectiveness of the proposed algorithm. In essence, the regulation mechanism of GSCF has successfully regulated the dynamics of the evolved population, synchronized the cooperation, and performed the suppression and activation of processes.
It is intended that further research on the extension of the regulation mechanism be undertaken, with the development of more refined suppression and activation techniques to manage the evolving population and mutation. Applications of the developed algorithms to combinatorial optimization, constrained optimization and multi-objective optimization problems are also planned.

Acknowledgments The authors would like to thank the anonymous reviewers for their valuable comments and suggestions that helped to improve the contents of this paper. This work was supported by the General Support Fund project 713707E of the Research Grants Council, Hong Kong SAR.
References

1. Dasgupta D, Zhou J (2003) Artificial immune system (AIS) research in the last five years. In: 2003 congress on evolutionary computation (CEC 2003). IEEE, Canberra, pp 123–130
2. Timmis J, Knight T, de Castro LN, Hart E (2004) An overview of artificial immune systems. In: Paton R, Bolouri H, Holcombe M, Parish JH, Tateson R (eds) Computation in cells and tissues: perspectives and tools of thought. Springer, Heidelberg, pp 51–86
3. Wang X, Gao XZ, Ovaska SJ (2004) Artificial immune optimization methods and applications—a survey. In: IEEE international conference on systems, man and cybernetics, 2004, Hague, The Netherlands, pp 3415–3420
4. Dasgupta D (2006) Advances in artificial immune systems. IEEE Comput Intell Mag 1:40–49
5. Hart E, Timmis J (2008) Application areas of AIS: the past, the present and the future. Appl Soft Comput 8:191–201
6. de Castro LN, Von Zuben FJ (2002) Learning and optimization using the clonal selection principle. IEEE Trans Evol Comput 6:239–251
7. de Castro LN, Von Zuben FJ (2002) An artificial immune network for multimodal function optimization on dynamic environments. In: 2002 congress on evolutionary computation (CEC 2002), Honolulu, USA, pp 289–296
8. Cutello V, Nicosia G, Pavone M (2004) Exploring the capability of immune algorithms: a characterization of hypermutation operators. In: Third international conference on artificial immune systems (ICARIS-2004), Catania, Italy, pp 263–276
9. Luo Y, Li R, Tian F (2004) Application of artificial immune algorithm to multimodal function optimization. In: Fifth world congress on intelligent control and automation, 2004 (WCICA 2004), Hangzhou, China, pp 2248–2252
10. Zhu C, Zhao B, Ye B, Yijia C (2005) An improved immune algorithm and its evaluation of optimization efficiency. In: International conference on natural computation (ICNC 2005), Changsha, China, pp 895–904
11. Playfair JHL, Chain BM (2001) Immunology at a glance. Blackwell, Cornwall
12. Sompayrac L (1999) How the immune system works. Blackwell, Cornwall
13. de Castro LN, Timmis J (2002) Artificial immune systems: a new computational intelligence approach. Springer, Heidelberg
14. Jerne NK (1974) Towards a network theory of the immune system. Annu Immunol 125(C):373–389
15. Burnet FM (1959) The clonal selection theory of acquired immunity. Cambridge University Press, Cambridge
16. Watkins A, Timmis J (2004) Exploiting parallelism inherent in AIRS, an artificial immune classifier. In: Third international conference on artificial immune systems (ICARIS-2004), Catania, Italy, pp 427–438
17. Greensmith J, Aickelin U, Cayzer S (2005) Introducing dendritic cells as a novel immune-inspired algorithm for anomaly detection. In: Fourth international conference on artificial immune systems (ICARIS-2005), Banff, Canada, pp 153–167
18. Kim J, Bentley PJ (1999) The human immune system and network intrusion detection. In: Seventh European congress on intelligent techniques and soft computing (EUFIT'99), Aachen, Germany
19. Lau HYK, Wong VWK (2004) A strategic behavior-based intelligent transport system with artificial immune system. In: IEEE international conference on systems, man and cybernetics, 2004, Hague, The Netherlands, pp 3909–3914
20. Garrett SM (2004) Parameter-free, adaptive clonal selection. In: 2004 congress on evolutionary computation (CEC 2004), Portland, USA, pp 1052–1058
21. Cutello V, Nicosia G (2004) The clonal selection principle for in silico and in vitro computing. In: de Castro LN, Von Zuben FJ (eds) Recent developments in biologically inspired computing. Idea Group Publishing, pp 104–145
22. Mak KL, Lau PSK (2008) Order pickings in an AS/RS with multiple I/O stations using an artificial immune system with aging antibodies. Eng Lett 16:122–130
23. Kelsey J, Timmis J (2003) Immune inspired somatic contiguous hypermutation for function optimisation. In: Genetic and evolutionary computation conference (GECCO 2003), Chicago, USA, pp 207–218
24. Randall M, Lewis A (2002) A parallel implementation of ant colony optimization. Parallel Distrib Comput 62:1421–1432
25. Liu P, Lau F, Lewis MJ, Wang C-L (2002) A new asynchronous parallel evolutionary algorithm for function optimization. In: Seventh international conference on parallel problem solving from nature, Granada, Spain, pp 401–410
26. Venter G, Sobieszczanski-Sobieski J (2006) A parallel particle swarm optimization algorithm accelerated by asynchronous evaluations. J Aerosp Comput Inf Commun 3(3):123–137
27. He K, Zheng L, Dong S, Tang L, Wu J, Zheng C (2006) PGO: a parallel computing platform for global optimization based on genetic algorithm. Comput Geosci 33:357–366
28. Watkins A, Bi X, Phadke A (2003) Parallelizing an immune-inspired algorithm for efficient pattern recognition. Intelligent engineering systems through artificial neural networks: smart engineering system design: neural networks, fuzzy logic, evolutionary programming, complex systems and artificial life 13:225–230
29. Watkins A, Boggess L (2002) A new classifier based on resource limited artificial immune systems. In: 2002 IEEE world congress on computational intelligence, Honolulu, USA, pp 225–230
30. Watkins A (2005) Exploiting immunological metaphors in the development of serial, parallel, and distributed learning algorithms. University of Kent, Canterbury
31. Cutello V, Nicosia G, Pavia E (2006) A parallel immune algorithm for global optimization. Adv Soft Comput 5:467–475
32. Ko AWY, Lau HYK, Lau TL (2004) A general suppression framework for distributed control. In: Tenth IEEE international conference on methods and models in automation and robotics, Miedzyzdroje, Poland
33. Male D, Brostoff J, Roitt I, Roth DB (2006) Immunology, 7th edn. Mosby
34. Ko AWY, Lau HYK, Lau TL (2004) An immuno control framework for decentralized mechatronic control. In: Third international conference on artificial immune systems (ICARIS-2004), Catania, Italy, pp 91–105
35. Ko AWY, Lau HYK, Lau TL (2005) General suppression control framework: application in self-balancing robots. In: Fourth international conference on artificial immune systems (ICARIS-2005), Banff, Canada, pp 375–388
36. de Castro LN (2001) Demo file for Matlab. In: Artificial immune systems. Available via Department of Computer Engineering and Industrial Automation, State University of Campinas. http://www.dca.fee.unicamp.br/~lnunes/immune.html. Accessed 1 April 2007
37. Kepner J (2001) MatlabMPI. In: Parallel programming with MatlabMPI. Available via Lincoln Laboratory, Massachusetts Institute of Technology. http://www.ll.mit.edu/MatlabMPI. Accessed 1 April 2007
38. Cutello V, Narzisi G, Nicosia G, Pavone M (2005) Clonal selection algorithms: a comparative case study using effective mutation potentials. In: Fourth international conference on artificial immune systems (ICARIS-2005), Banff, Canada, pp 13–28
39. Brownlee J (2007) Optimization algorithm toolkit (optalgtoolkit) version 1.4. In: Optimization Algorithm Toolkit (OAT). Available via Sourceforge.net. http://sourceforge.net/projects/optalgtoolkit. Accessed 12 February 2008
40. Vesterstrom J, Thomsen R (2004) A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems. In: 2004 congress on evolutionary computation (CEC 2004), Portland, USA