Structural Optimization 18, 146-155 © Springer-Verlag 1999
On improving multiobjective genetic algorithms for design optimization
S. Narayanan and S. Azarm
Department of Mechanical Engineering, University of Maryland, College Park, MD 20742, USA
[email protected]
Abstract This paper presents some improvements to Multi-Objective Genetic Algorithms (MOGAs). MOGA modifies certain operators within the GA itself to produce a multiobjective optimization technique. The improvements are made to overcome some of the shortcomings in niche formation, stopping criteria and interaction with a design decision-maker. The technique involves filtering, mating restrictions, the idea of objective constraints, and detecting Pareto solutions in the non-convex region of the Pareto set. A step-by-step procedure for an improved MOGA has been developed and demonstrated via two multiobjective engineering design examples: (i) two-bar truss design, and (ii) vibrating platform design. The two-bar truss example has continuous variables while the vibrating platform example has mixed-discrete (combinatorial) variables. Both examples are solved by MOGA with and without the improvements. It is shown that MOGA with the improvements performs better for both examples in terms of the number of function evaluations.
1
Introduction
A general Multi-Objective Optimization Problem (MOOP) with m objective functions is expressed as

minimize f(x) = {f_1(x), ..., f_i(x), ..., f_m(x)},

subject to

x ∈ D,  D = {x : g_j(x) ≤ 0, j = 1, ..., J;  h_k(x) = 0, k = 1, ..., K},   (1)
where x is an n × 1 design vector, f_i(x) is the i-th objective function to be minimized, g_j(x) is the j-th inequality constraint and h_k(x) is the k-th equality constraint. The set of all design vectors that satisfy the constraints constitutes the feasible region, denoted as D. Some of the variables (a subset of x) may be constrained to take on discrete/combinatorial values. In (1), the word "minimize" means that: (i) all objective functions are simultaneously minimized, (ii) the objectives are at least partly conflicting with one another, and (iii) there does not exist a single solution that is optimal with respect to every objective function (Eschenauer et al. 1990, Miettinen 1999). Many optimized solutions may exist in a MOOP, and a final preferred solution is selected from these solutions according to the designer's preference structure (Chankong and Haimes 1983). Given below are a few definitions in order to facilitate presentation of the paper.
Pareto solution. A design solution x* ∈ D is said to be a Pareto solution, or strong Pareto solution, if there does not exist another solution x ∈ D such that f_i(x) ≤ f_i(x*) for all i = 1, ..., m, with strict inequality for at least one i. A collection of such solutions is referred to as a Pareto or noninferior set. If this definition is applied to the individuals in a population, then in this paper the design solution is said to be a (strongly) noninferior solution.

Ideal (good) point. If each of the objective functions in (1) is individually minimized subject to the constraints defining the feasible region D, then the components of an ideal vector (or an ideal point) are obtained. The ideal point is often used as a reference point in MOOPs and is a solution that is unlikely to be achieved. In a minimization problem, as in (1), the ideal point provides a lower bound on the Pareto optimal set. A good point is an estimate of the ideal point.

The generation of a Pareto set by conventional optimization techniques requires that a MOOP be converted into a series of single-objective optimization problems. These problems are then solved one by one with a single-objective optimizer in order to obtain Pareto solutions one at a time. Solution techniques for a MOOP can be classified into three categories: (i) a priori methods, wherein the designer expresses preferences in terms of a single aggregating function prior to optimization; (ii) a posteriori methods, wherein the Pareto set is generated and presented to the designer, who then chooses the final solution from that set; and (iii) interactive methods, wherein the designer arrives at a final solution interactively. For an overview of these methods, the reader may refer to the work by Chankong and Haimes (1983), Miettinen (1999), and Palli et al. (1998), among others.
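The dominance definition above, and the rank-by-number-of-dominators scheme of Fonseca and Fleming discussed later, can be sketched in a few lines of Python. This is an illustrative implementation, not code from the paper; the function names (`dominates`, `noninferior`, `ff_rank`) are assumptions.

```python
def dominates(fa, fb):
    """True if objective vector fa dominates fb: fa is no worse in every
    objective (<=) and strictly better in at least one (<), minimization."""
    return (all(a <= b for a, b in zip(fa, fb)) and
            any(a < b for a, b in zip(fa, fb)))

def noninferior(points):
    """Return the noninferior (nondominated) subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

def ff_rank(points):
    """Fonseca-Fleming ranking: an individual's rank is one plus the number
    of population members dominating it, so all noninferior points get rank 1."""
    return [1 + sum(dominates(q, p) for q in points) for p in points]
```

For example, with `points = [(1, 4), (2, 2), (3, 1), (3, 3)]`, the first three vectors are mutually noninferior, while `(3, 3)` is dominated by both `(2, 2)` and `(3, 1)` and so receives rank 3 under this scheme.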
Genetic algorithms (GAs) have been modified to handle multiple objectives (Fonseca and Fleming 1998) so that the problem need not be converted into a series of single-objective optimization problems. These modified GAs have come to be known as Multi-Objective Genetic Algorithms (MOGAs). MOGAs rely on a fitness assignment strategy different from that of single-objective GAs. In MOGAs, a number of schemes have been proposed for forming a good fitness function (see, e.g. Fonseca and Fleming 1998; Jones et al. 1998). In this paper, some new improvements to the MOGA developed by Fonseca and Fleming (1998) are proposed. Section 2 gives an overview of the literature on MOGAs, wherein the main shortcomings of these techniques are reviewed. Section 3 discusses the new improvements that are proposed in the paper for the MOGA. This is followed by a step-by-step procedure for the MOGA with the new improvements. Section 4 presents two engineering design examples that demonstrate the application of the MOGA with and without the improvements. The concluding remarks are given in Section 5.
2
Previous work on MOGA
Plain aggregation methods in multiobjective optimization use GAs as a single-objective optimizer external to the multiobjective technique (Azarm and Narayanan 1999). These methods do not take advantage of the GA's population-based search strategy to generate the Pareto set in a single run. MOGA, however, shifts the conventional view of multiobjective optimization by treating the objectives separately. Although GAs can still be used with plain aggregation methods, MOGAs generally are either population-based non-Pareto, or Pareto-based methods. The most well-known population-based non-Pareto method is the Vector Evaluated Genetic Algorithm, VEGA (Schaffer 1991). In VEGA, the population is divided into groups in each generation, and each group is evaluated with only a single objective. Individuals from different groups are allowed to mate with the intent that offspring will perform well with respect to the objectives of both their parents. While this approach and several of its variations have been successful in locating the elbow and extreme points of the Pareto set, it is recognized that exploration of the entire Pareto set can be cumbersome. In a non-Pareto-based technique, Hajela and Lin (1992) used a combination of restrictive mating, objective weighting, and fitness sharing. Their variable weighting approach treats the weights on objectives as decision variables that are encoded into the GA chromosome for each individual. Combined with fitness sharing, this approach allows individuals to evolve to represent a large number of different weightings. However, Fleming and Pashkevic (1985) noted that any technique that aggregates objectives linearly, as in VEGA and Hajela and Lin's, is not capable of obtaining a solution in a nonconvex region of the Pareto set. Pareto-based approaches were proposed by Goldberg (1989) and have become a major focus of MOGA research. These techniques explicitly make use of the definition of Pareto optimality.
In Pareto-based schemes, the fitness of an individual is defined in terms of its rank. The method consists of assigning rank 1 to all noninferior individuals, removing them from contention, again finding a set of noninferior individuals and assigning rank 2 to them, and so forth. Fonseca and Fleming (1993) have proposed a slightly different scheme wherein an individual's rank in the current population corresponds to the number of individuals dominating it. Thus all noninferior individuals are assigned rank 1, and so on; selection then proceeds via Baker's (1987) stochastic universal sampling algorithm. For each generation of the GA, the noninferior set of solutions in the population is given the top rank and then removed from that population. This step is repeated, where in each iteration the noninferior set of solutions remaining in the population is given the next rank, until all solutions in the population are assigned a rank. Rankings are then typically used in a tournament selection
to encourage the exploration in the direction of noninferior individuals. Liepins et al. (1988) found the Pareto optimal ranking scheme to be superior to VEGA. An important advantage of Pareto approaches is that they are able to identify solutions in nonconvex areas of the Pareto set. In GAs and MOGAs, the populations tend to converge into many small clusters called niches. This clustering tendency in a population results in gaps and nonuniform sampling of the Pareto set. Pareto-based ranking correctly assigns all noninferior individuals the same fitness, but that does not guarantee a uniform sampling of the Pareto set. When presented with multiple Pareto points, finite populations tend to converge to one of them. This phenomenon, known as genetic drift (Goldberg and Segrest 1987), has been observed in natural as well as artificial evolution. The additional use of fitness sharing was proposed by Goldberg (1989) to prevent genetic drift and to promote the sampling of the entire Pareto set by the population.

2.1
Shortcomings of previous work
A MOGA must be able to sample as large a Pareto set as possible and produce solution points which are uniformly spread across the Pareto set. Also, the smoothness of the Pareto set may depend on how early the MOGA converges. That in turn depends on the stopping criteria used in the algorithm. For single-objective GAs, one of many different stopping criteria can be used depending on the problem. But stopping criteria such as "no improvement in an objective function value for N generations" cannot be used for a MOGA. This calls for a technique to detect when the Pareto set has been obtained. Three main problems are identified, as discussed next.

2.1.1 Problem 1. The first problem is how to guide the population out of closed-form niches and encourage it to sample as large a Pareto set as possible. In order to achieve this, two requirements need to be met. First, the method needs to identify "groups" or "clusters" of individuals in the Pareto set. As will be shown in the examples later on, there are usually several niches that form early during the design evolution. As the algorithm proceeds, these niches tend to spread out, but the process can be hastened considerably by building some sort of rule into the parent selection stage. Second, the method needs to prevent these niches from deepening and growing. This involves a method to prevent parents from the same region or niche from mating to produce offspring. The reviewed MOGAs typically suffer from population drift, where the population migrates to a small portion of the Pareto set (Goldberg and Segrest 1987). To assure a uniform sampling of the Pareto set, recent applications of Pareto-based MOGAs by Fonseca and Fleming (1995), Horn et al. (1994), and Srinivas and Deb (1994) have incorporated niching schemes. In both the Pareto and niched-Pareto methods, there may be a likelihood of mating between individuals that are far apart in the noninferior set.
These solutions are likely to be very different in decision space, and as a result, their children may not meet either objective satisfactorily (Goldberg 1989). The niched-Pareto scheme by Fonseca and Fleming (1993) and
the non-Pareto scheme by Hajela and Lin (1992) deal with this problem by restricting mating to individuals in similar regions in the objective space.
2.1.2 Problem 2. The second problem is how to impose stopping criteria which reliably detect when the Pareto set has been obtained and whether or not the solutions in the set are "uniformly spread". A uniformly spread Pareto set should not have groups of solutions clustered together in some places and gaps in other places. These gaps and clusters are the result of the niche-forming tendencies of populations.

2.1.3 Problem 3. The third problem is how to give the designer the flexibility to choose a particular region of the Pareto set in order to recursively zoom into that region. An advantage of an interactive method like this is that the population size needed at a given stage in the process could be very small. For example, if the designer chooses an a posteriori method, a detailed Pareto set might have to be generated. This would involve using a large population, because it has been noted that the number of points on the Pareto set is almost equal to the population size itself, since the population tends to become noninferior and inferior solutions are weeded out early. On the other hand, an interactive method that involves a small population converges much faster than one with a large population. The disadvantage is that it is more difficult to spread a small population across the entire Pareto set.
3
Proposed approach: an improved MOGA
A four-pronged approach is proposed to resolve the above-mentioned problems. The modifications described next together make improvements over existing practices in MOGAs. The thrust has been in making MOGA interactive with the designer and in incorporating decision-making strategies in the fitness assignment stage. The approach also involves filtering, mating restrictions, and the idea of objective constraints. The stopping criteria introduced help in detecting when the Pareto set has been obtained.
3.1
Stopping criteria modification

Developing stopping criteria for a MOGA is more difficult than for a single-objective GA. To detect improvement in the noninferior set between one generation and the next, one has to maintain two sets of noninferior solutions. A three-step scheme based on an L2 metric (or Euclidean norm) is proposed here to establish convergence. The scheme is as follows. First, for each individual in the noninferior set, the distance (the L2 metric) from the ideal point (or a good point as identified by the designer) is computed, as shown in Fig. 1. Hence, for each generation, one set of L2 metrics is obtained. Second, the mean and standard deviation of the L2 metrics are calculated. Third, as the noninferior set improves across generations, the points on the set get closer and closer to the ideal point. Therefore, the distance of each point on the set from the ideal point decreases as convergence is gradually achieved. This decrease in L2 metrics can be measured by their mean, and the spread can be measured by the increase in standard deviation. If the improvement in the mean is less than some small number (tolerance), the MOGA is assumed to have converged and is stopped.

Fig. 1. L2 metrics for noninferior points

3.2
Filtering and mating restrictions modification

At each generation, once the noninferior set has been obtained, a filtering routine can go through the population and delete some individuals from each niche. The number of individuals to delete depends on how crowded or dense the niche is. This process is similar to the "infighting" observed within natural communities, although there the reason is competition for scarce resources. In a MOGA, the reason is to help the designer focus clearly on the problem and understand it better. The deleted individuals are then replaced by randomly generated ones. The observation that parents in a niche tend to proliferate extensively leads to the conclusion that less mating among close "cousins" might lead to a more uniform sampling of the Pareto set. Hence, a limit using a distance metric in the objective function space can be used to prevent close parents from mating. One way to implement this is to compute the distance (L2 norm) between the two parents selected for reproduction as

L_ij = [ Σ_{k=1}^{m} ( f_k(x_i) − f_k(x_j) )² ]^{1/2} ,   (2)

where m is the number of objective functions, and x_i and x_j are the two solutions. If this distance is found to be less than some limiting distance (tolerance), the parents are not allowed to mate; otherwise, they are. The limiting distance depends on how dense the niches are and what the spread of the population is.
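The L2-based stopping test of Section 3.1 and the mating restriction of (2) can be sketched together in Python. This is an illustrative sketch, not the authors' code; the function names and the tolerance defaults are assumptions.

```python
import statistics

def l2(a, b):
    """Euclidean (L2) distance between two objective vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def converged(noninf_objs, ideal, prev_mean, tol=1e-3):
    """Stopping test of Section 3.1: distances of the noninferior set to the
    ideal (good) point are summarized by their mean; the MOGA is taken as
    converged when the generation-to-generation improvement in that mean
    falls below tol. Returns (converged?, mean, standard deviation)."""
    dists = [l2(f, ideal) for f in noninf_objs]
    mean = statistics.mean(dists)
    return (prev_mean - mean) < tol, mean, statistics.pstdev(dists)

def may_mate(fi, fj, d_min):
    """Mating restriction of (2): parents closer than d_min in objective
    space are not allowed to mate."""
    return l2(fi, fj) >= d_min
```

In use, `converged` would be called once per generation with the previous generation's mean distance, and `may_mate` would be called at the parent-selection stage before each crossover.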
3.3
Objective constraints modification
Once the entire spread of the Pareto set is shown to the designer, (s)he should have the flexibility to zoom into a desired region by imposing objective constraints. This essentially involves (i) pausing the GA, (ii) imposing the necessary constraints on the objectives (hereafter referred to as objective constraints), and (iii) resuming the GA. This process can be carried out every time the GA converges, or as often as the designer wishes. The MOGA can easily be adapted to prevent any individual violating the objective constraints from reproducing.
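Such an objective-constraint filter amounts to a per-objective window test applied before reproduction. The sketch below is an assumption about one reasonable implementation, not code from the paper; the representation of the designer's windows as `(lo, hi)` pairs is likewise assumed.

```python
def passes_objective_constraints(f, bounds):
    """Return True if objective vector f lies inside the designer's zoom
    region, given per-objective (lo, hi) windows. Individuals failing the
    test would be barred from reproducing when the GA resumes."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(f, bounds))
```

For instance, for the two-bar truss example later in the paper, the windows could be `[(0.0, 0.1), (0.0, 1e5)]` for volume and stress.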
3.4
Constraint evaluation modification
Constraints are usually evaluated for all individuals in the population. However, if they are evaluated only for the noninferior individuals, instead of the entire population, then a substantial saving in computational effort is likely to be obtained. Moreover, if any constraint is violated by a noninferior individual, it is undesirable to completely prevent it from reproducing into the next generation. This is because such an individual may contain some genes that might give rise to future Pareto optimal points. For this reason, only noninferior individuals are evaluated for constraint violation, and if any constraint is violated, a mild penalty is imposed in the form of reduced fitness. This reduced fitness is achieved by assigning the rank N to the individual violating the constraint (N is the population size).
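The rank-demotion idea can be sketched as follows; this is an illustrative interpretation, with the function name and argument layout assumed rather than taken from the paper.

```python
def penalize_noninferior(ranks, noninferior_idx, constraint_fns, designs, N):
    """Evaluate constraints only for the noninferior individuals; any
    individual violating a constraint (g_j > 0) is demoted to the worst
    rank N rather than being removed, so its genes can still propagate."""
    for i in noninferior_idx:
        if any(g(designs[i]) > 0 for g in constraint_fns):
            ranks[i] = N
    return ranks
```

Note that inferior individuals are never tested, which is where the computational saving comes from.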
3.5
Step-by-step MOGA with modifications

A step-by-step procedure for implementing MOGA with the improvements, as proposed in this paper, is given next. The procedure follows the GA steps, as summarized in Fig. 2, with the modifications.

Step 1. Coding - Candidate solutions are represented by binary bit strings called chromosomes. Real-valued variables are represented by discretizing them (rescaled as necessary) into equivalent integer variables. Integer variables can easily be converted into binary form. The accuracy with which a solution can be obtained then depends on the encoded bit length of these integers. If there is more than one variable, the strings for all variables are concatenated to form a chromosome.

Step 2. Generation - The initial population is chosen randomly. The size of the population is predetermined and kept constant throughout the algorithm.

Step 3. Evaluation and fitness assignment - The fitness of an individual is a measure of the number of offspring that the individual will produce. Fonseca and Fleming (1998) outline a broad methodology to handle multiple objectives in a fast and efficient manner within GAs. They propose a fitness assignment for individuals as follows:

1. Of all candidate solutions in a population, determine all noninferior ones. (Note: these solutions are not Pareto solutions to the optimization problem, but only noninferior for the current population.)
2. Assign the highest fitness to these noninferior solutions. They reproduce more in the next generation. Over a number of generations, this noninferior set becomes the Pareto set.

Here, the fitness is computed as follows. For each individual in the population, the objective functions are computed. Each individual is compared with every other individual to determine whether it is inferior (in all objectives) with respect to the others. Every individual that is inferior even to one other individual is marked accordingly. In this way, strong noninferiority (or eventually the Pareto optimality) of an individual is guaranteed. Furthermore, such an individual can fall in the nonconvex part of the noninferior (or eventually the Pareto) set. All individuals that are noninferior are assigned rank 1. All other, inferior individuals are assigned an arbitrarily low rank. The fitness for each individual is a function of its rank. The fitness function in the GA used in this paper is based on a linearly scaled ranking of the individuals, i.e.

F = [C_max − (C_max − C_min)(r − 1)/(N − 1)] / N ,   (3)

where F is the fitness, r is the rank, N is the population size, C_max = 1.2, and C_min = 0.8. It can be seen that for the best individual (i.e. r = 1) the fitness is maximum, and for the worst individual (i.e. r = N) the fitness is minimum.

Traditionally, constraint handling in GAs is achieved through interior or exterior penalty function methods, where the value of the objective function is modified according to whether or not any of the constraints is violated. For example, the exterior penalty function is of the form

Φ(x, R) = f(x) + R Σ_{j=1}^{J} ⟨g_j(x)⟩^q ,   (4)

where

⟨g_j(x)⟩ = max(g_j(x), 0) = { g_j(x) if g_j(x) > 0;  0 if g_j(x) ≤ 0 }.

This calls for the evaluation of every constraint for every individual in the population.
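Equations (3) and (4) can be checked with a short sketch; the code below is illustrative only (function names assumed), using the paper's constants C_max = 1.2 and C_min = 0.8 and the penalty parameter R = 50 quoted later for the examples.

```python
def rank_fitness(r, N, c_max=1.2, c_min=0.8):
    """Linearly scaled rank fitness of (3): the best rank r = 1 receives
    c_max/N and the worst rank r = N receives c_min/N."""
    return (c_max - (c_max - c_min) * (r - 1) / (N - 1)) / N

def exterior_penalty(f, g_vals, R=50.0, q=2):
    """Exterior penalty of (4): only violated inequality constraints
    (g_j > 0) contribute to the penalized objective."""
    return f + R * sum(max(gj, 0.0) ** q for gj in g_vals)
```

With N = 100, the fitness ranges linearly from 0.012 (rank 1) down to 0.008 (rank 100), so selection pressure is mild and the population does not collapse onto a single niche.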
Modification: Constraints are evaluated only for noninferior individuals. For each constraint that is violated, the corresponding individual is penalized by lowering its rank. Note that the objective function(s) are not modified. It may be advisable to be lenient with the penalty for constraint violation in the early generations, because all the individuals in the population may be violating some constraints. As the population evolves, penalties for constraint violation can be increased. However, the actual penalties are problem dependent and need to be experimented with for each problem.

Step 4. Selection - Population selection is based on the principle of "survival of the fittest". All noninferior solutions are assigned high fitness values, while inferior solutions are assigned very low fitness values. This ensures that the noninferior solutions are given a higher probability of reproducing into the next generation. It is essentially a way of sieving out unfit individuals. Selected individuals serve as progenitors (breeding stock) for a new generation. Some of the popular selection methods are roulette wheel selection, stochastic universal selection, and tournament selection (Levine 1996). The selection technique used in this paper is based on stochastic universal selection.

Begin
  Code designs into chromosomes
  Generate population, P
  Evaluate and assign fitness to individuals in P
  While (not converged) do
    Crossovers in P to give O
    Mutations in O
    Evaluate and assign fitness to O
    Selection to form next population, P
  end do
end

Fig. 2. Structure of a GA (P = parental population, O = offspring population)

Modification: Before the parents with the high fitness values are selected, the population is filtered as described in Section 3. The selected parents are then tested for their relative locations as part of the mating restrictions. If they lie too close together, they are not allowed to mate, and new parents are selected. This process is carried out until a sufficient number of parents has been selected.

Step 5. Crossover (or recombination) - Crossover is the process of creating offspring from parents. The simplest form of crossover (one-point crossover) proceeds as follows. First, pairs of solutions are chosen to undergo crossover with a random probability. Then, two new offspring are created by exchanging one or more "pieces" of chromosomes between the parents. For example, Fig. 3 shows a one-point crossover where both parents split at a single point in the chromosome and recombine to form new offspring. It can be seen that two new individuals have been introduced into the population that could possibly be better than either parent. Crossover is one of the most important steps in a GA precisely for this reason.

  Parents:   100|10   011|01
  Offspring: 10001    01110

Fig. 3. One-point crossover

Step 6. Mutation - Mutation is the process of randomly changing one or more of the randomly selected bits in a chromosome in order to create a new individual. Once a set of offspring is obtained after crossover, one or more of them are mutated with a low probability. The purpose of the mutation stage is to prevent stagnation due to the loss of genetic information and hence to maintain diversity within the population. For instance, if every solution in the population has 0 (zero) as the value of a particular bit, then no amount of crossover will produce a solution with a 1 (one) there instead. However, if that bit is mutated, there is a possibility of sampling new individuals. The mutation rates are kept low to avoid the possibility of a widely careening population that has difficulty in converging.

Step 7. Constituting the next generation - With the new offspring added to the population, some individuals have to be deleted to keep the size of the population constant. There are many existing methods, the most popular one being to delete as many of the weakest individuals (regardless of whether they are parents or offspring) as offspring are produced.

Step 8 with modification: Repeat Steps 3 through 7 until the designer pauses the algorithm in order to impose/review objective constraints, or until the stopping criterion described in Section 3 is satisfied. This is achieved by questioning the designer every few generations in order to review the objective constraints. Shown in Fig. 4 is the feasible region on the objective function plane for a hypothetical problem. It graphically demonstrates the gradual detection of the Pareto set by the MOGA described in this paper.
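Steps 5 and 6 can be sketched as follows. This is illustrative Python, not the authors' implementation; the function names are assumptions, and the crossover reproduces the one-point scheme of Fig. 3.

```python
import random

def one_point_crossover(p1, p2, rng):
    """Split both parent bit strings at one random point and swap the
    tails, as in Fig. 3 (e.g. 100|10 x 011|01 -> 10001 and 01110)."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(chrom, p_mut, rng):
    """Flip each bit independently with a low probability p_mut to
    maintain diversity in the population."""
    flip = {'0': '1', '1': '0'}
    return ''.join(flip[b] if rng.random() < p_mut else b for b in chrom)
```

A mutation probability around 0.01, as used for the examples in Section 4, flips on average one bit per hundred, which is low enough to avoid a "widely careening" population.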
4
Examples
MOGA has been demonstrated with two examples, a truss design example with continuous variables and a vibrating beam design example with continuous-combinatorial variables. Some of the typical values for the main parameters used in the GA to solve these examples are: population size = 100; crossover type = two point; mutation probability = 0.01; crossover probability = 0.85; selection type is stochastic universal selection (Levine 1996); exterior penalty function with the penalty parameter R = 50; stopping criteria is that 50 generations must pass without any improvement in the best individual; population replacement is 30%.
Both examples are first solved with an implementation of the MOGA as proposed by Fonseca and Fleming (1998), here called MOGA without improvements. They are also solved again with the improvements proposed in Section 3, here called MOGA with improvements. The results are then compared.
4.1
Two-bar truss

The first example involves the design of a two-bar truss, as shown in Fig. 5. The example was originally formulated as a single-objective problem by Kirsch (1981). In the example, point C is to be located vertically, and the cross-sectional areas of links AC and BC are to be selected. Hence, the design variables are x1, x2 and y, all continuous in nature.

Fig. 5. Two-bar truss

The problem has been modified here to a two-objective problem in order to show the Pareto set in the objective space clearly in two dimensions. The constraints are that the stresses in AC and BC should not exceed 100,000 kPa and that the total volume of material should not exceed 0.1 m³. The reason these constraints have been imposed on the two objectives is that the Pareto set is asymptotic and extends from −∞ to ∞. As x1 and x2 go to zero, f_volume goes to zero and f_stress,AC goes to infinity. As x1 and x2 go to infinity, f_volume goes to infinity and f_stress,AC goes to zero. Hence, in order to generate Pareto optimal solutions in a reasonable range, objective constraints are imposed:

minimize f_volume = x1 (16 + y²)^{1/2} + x2 (1 + y²)^{1/2} ,
minimize f_stress,AC = 20 (16 + y²)^{1/2} / (y x1) ,   (5)

subject to

f_volume ≤ 0.1,  f_stress,AC ≤ 100,000,  f_stress,BC ≤ 100,000,  x1, x2 > 0 ,   (6)

where

f_stress,BC = 80 (1 + y²)^{1/2} / (y x2) .

Fig. 4. MOGA detects the Pareto set

The problem has been solved with MOGA without the improvements in Section 4.1.1, and then with MOGA with the improvements in Section 4.1.2.
4.1.1 MOGA without improvements. Figure 6 gives the evolution history of the noninferior set for the two-bar example. As can be seen from Fig. 6, the population starts off with a nonuniform noninferior set. By 300 generations, the values of both objectives have improved considerably. By 600 generations, the Pareto set seems to have been reached, but the population is clustered around a few niches; 6-7 niches are clearly visible in Fig. 6. By 900 generations, the population has spread out in its quest for Pareto solutions. The total number of function evaluations used to obtain the Pareto set is 27,296.
4.1.2 MOGA with improvements. Figure 7 shows the same Pareto set as in Fig. 6, but with the proposed improvements in MOGA. In Fig. 7, the optimization proceeded even after the Pareto set had been generated with reasonable accuracy, pointing to scope for further savings in function evaluations. The Pareto set shows improved uniformity (less clustering) at 239 generations compared to the earlier one in Fig. 6. In addition, the total number of function evaluations used to obtain the Pareto set in this case is 9,523. This number is significantly less than that required by MOGA without improvements (i.e. 27,296). The number, however, may change significantly depending upon the GA parameters. A final solution for the two-bar truss example can be selected by an L2 metric, i.e. the Pareto point that has the minimum distance to an ideal (good) point. Such a point has been obtained: f_volume = 0.0137 m³, f_stress,AC = 17,384.29 kPa. The values of the final design variables are x = [x1, x2, y] = [0.0019, 0.0013, 2.9355].
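The formulation in (5)-(6) is easy to check numerically. The sketch below is illustrative (the function name is an assumption); recomputing the objectives at the reported final design x = [0.0019, 0.0013, 2.9355] reproduces the reported Pareto point to within a few percent, the residual being attributable to rounding of the printed design variables.

```python
from math import sqrt

def truss_objectives(x1, x2, y):
    """Objectives of (5) plus the BC stress of (6) for the two-bar truss:
    total material volume and member stresses (kPa)."""
    f_volume = x1 * sqrt(16.0 + y ** 2) + x2 * sqrt(1.0 + y ** 2)
    f_stress_ac = 20.0 * sqrt(16.0 + y ** 2) / (y * x1)
    f_stress_bc = 80.0 * sqrt(1.0 + y ** 2) / (y * x2)
    return f_volume, f_stress_ac, f_stress_bc
```

At the reported design, all three objective constraints of (6) are satisfied (volume ≤ 0.1 m³, both stresses ≤ 100,000 kPa).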
4.2
Vibrating platform
The second example is a modification of the vibrating platform problem by Messac (1996) who originally formulated it with a single objective being the maximization of fundamental frequency, with an estimated cost being one of the constraints. The problem has been modified here to include cost as the second objective, and also by making the problem combinatorial (both material and geometry of the platform are synthesized). The problem is to design a platform with a motor mounted on it, as shown in Fig. 8. The machine setup is simplified as a pin-pin supported beam carrying a weight. A vibratory disturbance is imparted from the motor onto the beam, which is of length L, width b, and symmetrical about
its mid-plane. Variables d1 and d2, respectively, locate the interface between materials 1 and 2, and between materials 2 and 3. Variable d3 locates the top of the beam. The combinatorial variable Mi refers to the material type of layer i (i = 1, 2, 3) that each of the three layers of the beam can be made of. The mass density (ρ), Young's modulus of elasticity (E), and cost per unit volume (c) for each material type are given in Table 1.

Fig. 6. Evolution of Pareto set (MOGA without improvements) for the two-bar truss (panels: starting iteration, 300, 600, and 900 generations; stress vs. volume)
Table 1. Vibrating platform example -- material properties

Material type Mi    ρ (kg/m^3)    E (N/m^2)      c ($/volume)
1                   2,770         70 x 10^9      1,500
2                   100           1.6 x 10^9     500
3                   7,780         200 x 10^9     800

The objective functions are the fundamental frequency, f1, and the cost of the setup, f2. The complete formulation is as follows:

maximize f1(d1, d2, d3, b, L, Mi) = (π/2L^2)(EI/μ)^(1/2),

EI = (2b/3)[E_M1 d1^3 + E_M2 (d2^3 − d1^3) + E_M3 (d3^3 − d2^3)],
μ = 2b[ρ_M1 d1 + ρ_M2 (d2 − d1) + ρ_M3 (d3 − d2)],

minimize f2(d1, d2, d3, b, Mi) = 2b[c_M1 d1 + c_M2 (d2 − d1) + c_M3 (d3 − d2)], subject to

(7)

g1 = μL − 2800 ≤ 0,
g2 = d2 − d1 − 0.01 ≥ 0,
g3 = d3 − d2 ≥ 0,
g4 = 0.05 ≤ d1 ≤ 0.5,
g5 = 0.2 ≤ d2 ≤ 0.5,
g6 = 0.2 ≤ d3 ≤ 0.6,
g7 = 0.35 ≤ b ≤ 0.5,
g8 = 3 ≤ L ≤ 6,
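As a concreteness check, formulation (7) can be evaluated directly. The sketch below is not the authors' implementation: `platform_objectives` is a hypothetical helper name, and the formulas and Table 1 rows follow the reconstruction above. It evaluates the objectives at the final design reported in Sect. 4.2.2.

```python
import math

# Material properties from Table 1: type -> (rho [kg/m^3], E [N/m^2], c [$/volume])
MATERIALS = {1: (2770.0, 70e9, 1500.0),
             2: (100.0, 1.6e9, 500.0),
             3: (7780.0, 200e9, 800.0)}

def platform_objectives(L, b, d1, d2, d3, M1, M2, M3):
    """Return (f1 fundamental frequency [Hz], f2 cost [$], mu mass/length [kg/m])."""
    (r1, E1, c1) = MATERIALS[M1]
    (r2, E2, c2) = MATERIALS[M2]
    (r3, E3, c3) = MATERIALS[M3]
    # Bending stiffness and mass per unit length of the symmetric sandwich beam.
    EI = (2.0 * b / 3.0) * (E1 * d1**3 + E2 * (d2**3 - d1**3)
                            + E3 * (d3**3 - d2**3))
    mu = 2.0 * b * (r1 * d1 + r2 * (d2 - d1) + r3 * (d3 - d2))
    # Fundamental frequency of the pin-pin beam and total material cost.
    f1 = (math.pi / (2.0 * L**2)) * math.sqrt(EI / mu)
    f2 = 2.0 * b * (c1 * d1 + c2 * (d2 - d1) + c3 * (d3 - d2))
    return f1, f2, mu

# Final design reported in Sect. 4.2.2: x = [L, b, d1, d2, d3, M1, M2, M3]
f1, f2, mu = platform_objectives(3.0, 0.35, 0.094, 0.20, 0.2001, 2, 1, 3)
```

At this design the mass constraint g1 (μL ≤ 2800) is satisfied with a wide margin, and the cost objective evaluates close to the value quoted in Sect. 4.2.2.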
4.2.1 MOGA without improvements. In Fig. 9, the cost has been plotted against the negative of the fundamental frequency in order to present it as a min-min problem. Because the problem is combinatorial in nature, there are only a few Pareto optimal points, in contrast to the truss design problem where there are many solutions. The few solutions also lead to quicker convergence, and therefore, without the improvements, MOGA was found to stagnate around 150 generations. In the initial runs, the population was allowed to evolve up to 1000 generations, but no more solutions emerged after 150 generations. MOGA detected seven Pareto optimal solutions and required 2,546 function evaluations (with a population size of
150). It should be noted from Fig. 9 that MOGA has detected the nonconvex parts of the Pareto set.

In (7), the design vector, x, is defined as x = [L, b, d1, d2, d3, M1, M2, M3]^T, where M2 = 1 (as an example) indicates that the material properties for the middle layer (layer 2) are those given in the first row of Table 1.

Fig. 7. Evolution of the Pareto set (MOGA with improvements) for the two-bar truss (panels: starting iteration, and 100, 150, and 239 generations; stress vs. volume)

Fig. 8. Vibrating platform apparatus
4.2.2 MOGA with improvements. The same problem was solved with the modifications in MOGA, and the results are given in Fig. 10. The convergence was again somewhat faster with the improvements (127 generations as compared to 150 generations without), and the number of function evaluations required is 2,115. In addition, as a result of greater sharing between noninferior solutions and of the mating restrictions, the final number of solutions detected, nine, is also greater than that of MOGA without the improvements, which is seven. The final solution for the example is selected as ffreq = 311.28 Hz, fcost = $144.9. The values of the variables for the final design are: x = [3, 0.35, 0.094, 0.20, 0.2001, 2, 1, 3]^T.
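The filtering of noninferior solutions that both runs rely on can be sketched with a generic dominance filter for a two-objective minimization. This is an illustration of the idea only, not the authors' implementation, and the sample points other than the reported final solution are made up:

```python
def dominates(a, b):
    """True if objective vector a dominates b: a is no worse in every
    objective and strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_filter(points):
    """Keep only the nondominated (noninferior) points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (-frequency, cost) pairs: both objectives minimized, as in Fig. 9.
pts = [(-311.28, 144.9), (-250.0, 120.0), (-250.0, 160.0), (-100.0, 200.0)]
front = pareto_filter(pts)
```

Here the last two points are dominated (each is matched or beaten in both objectives by another point), so only the two trade-off points survive the filter.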
5
Conclusions
Four new improvements are made in MOGA. They include (i) a new stopping criterion, (ii) a metric to ensure uniformity and spread of Pareto solutions, (iii) incorporating an interactive decision-making technique in the fitness assignment stage, and (iv) reducing computation time by eliminating some constraint evaluations. The MOGA presented in this paper can detect the Pareto set reliably and quickly. It can also obtain the Pareto set irrespective of whether or not it is in a nonconvex region. This is especially useful in engineering design problems where nonconvex Pareto sets are encountered. MOGA with the improvements also "spreads" out in search of the Pareto set, thereby detecting a larger range of the Pareto optimal points when compared with MOGA without the improvements. It can be clearly seen that for the vibrating platform problem, there are only a few Pareto solutions. This is not surprising, since in a discrete problem there are a finite number of Pareto points and niche formation deters the detection of all Pareto solutions. On the other hand, for continuous variable problems such as the two-bar truss, the Pareto set is essentially a continuous curve, i.e. it is always possible to
find a Pareto solution which lies in the neighbourhood of any two Pareto points. Therefore, in continuous problems, two close Pareto points can be used as parents to generate another Pareto point in the neighbourhood. That is precisely how MOGA proceeds. MOGA can thus keep proliferating and gradually detect a large part of the Pareto set. As shown, the stopping criterion introduced greatly reduces the number of function evaluations. An important part of this criterion is to assess when the Pareto set has been obtained. The L2-based technique described does detect, in a heuristic sense, the Pareto set. There is, however, scope for further improvement in assessing the quality of the Pareto set in more reliable and quantitative terms. This aspect should be investigated further as part of future research in this area.

Fig. 9. Evolution of the Pareto set (MOGA without improvements) for the vibrating platform (panels: starting iteration, and 50, 100, and 150 generations; cost ($) vs. negative fundamental frequency (Hz))
Acknowledgements The work presented in this paper was supported in part by NSF grant DMI-9700059, Maryland Industrial Partnerships, and ONR contract N000149810842. Such support does not constitute an endorsement by the funding agencies of the opinions expressed in the paper.
References

Azarm, S.; Narayanan, S. 1999: A multiobjective interactive sequential hybrid optimization technique for design decision making. Eng. Opt. (accepted)

Baker, J.E. 1987: Reducing bias and inefficiency in the selection algorithm. In: Grefenstette, J. (ed.) Proc. 2-nd Int. Conf. on Genetic Algorithms and Their Applications, pp. 14-21. New Jersey

Chankong, V.; Haimes, Y.Y. 1983: Multiobjective decision making: theory and methodology. New York: Elsevier Science Publishing Co., Inc.

Eschenauer, H.; Koski, J.; Osyczka, A. (eds.) 1990: Multicriteria design optimization. Berlin, Heidelberg, New York: Springer

Fleming, P.J.; Pashkevich, A.P. 1985: Computer aided control system design using a multiobjective optimization approach. Proc. IEE Control '85 Conf., pp. 174-179

Fonseca, C.M.; Fleming, P.J. 1993: Genetic algorithms for multiobjective optimization: formulation, discussion, and generalization. In: Forrest, S. (ed.) Proc. 5-th Int. Conf. on Genetic Algorithms, pp. 416-423

Fonseca, C.M.; Fleming, P.J. 1995: An overview of evolutionary algorithms in multiobjective optimization. Evolutionary Computation 3, 1-16

Fonseca, C.M.; Fleming, P.J. 1998: Multiobjective optimization and multiple constraint handling with evolutionary algorithms. Part I: a unified formulation. IEEE Trans. Systems, Man, Cyber. 28, 26-37

Goldberg, D.E. 1989: Genetic algorithms in search, optimization, and machine learning. Reading, MA: Addison-Wesley
Fig. 10. Evolution of the Pareto set (MOGA with improvements) for the vibrating platform (panels: starting iteration, and 50, 100, and 127 generations; cost ($) vs. negative fundamental frequency (Hz))

Goldberg, D.E.; Segrest, P. 1987: Finite Markov chain analysis of genetic algorithms. In: Grefenstette, J.J. (ed.) Genetic algorithms and their applications. Proc. 2-nd Int. Conf. on Genetic Algorithms, pp. 1-8
Hajela, P.; Lin, C.-Y. 1992: Genetic search strategies in multicriterion optimal design. Struct. Optim. 4, 99-107

Horn, J.; Nafpliotis, N.; Goldberg, D.E. 1994: A niched Pareto genetic algorithm for multiobjective optimization. Proc. IEEE Conf. on Evolutionary Computation '94, pp. 82-87

Jones, D.F.; Tamiz, M.; Mirrazavi, S.K. 1998: Investigation into the incorporation and use of multiple objectives in genetic algorithms. Proc. MOPGP'98 (held in Quebec City, Canada)

Kirsch, U. 1981: Optimal structural design. New York: McGraw-Hill

Levine, D. 1996: PGAPack parallel genetic algorithm library. Mathematics and Computer Science, Argonne National Lab, Argonne, IL

Liepins, G.E.; Hilliard, M.R.; Richardson, J.; Palmer, M. 1988: Genetic algorithms application to set covering and traveling salesman problems. Operations Research and Artificial Intelligence: The Integration of Problem-Solving Strategies, 117-132

Messac, A. 1996: Physical programming: effective optimization for computational design. AIAA J. 34, 149-158

Miettinen, K.M. 1999: Nonlinear multiobjective optimization. Boston: Kluwer
Received August 27, 1998
Revised manuscript received March 23, 1999
Communicated by J. Sobieski
Palli, N.; Azarm, S.; McCluskey, P.; Sundararajan, R. 1998: An interactive multistage ε-inequality constraint method for multiple objectives decision making. Trans. ASME, J. Mech. Des. 120, 678-686

Schaffer, J.D. 1985: Multiple objective optimization with vector evaluated genetic algorithms. In: Grefenstette, J. (ed.) Proc. 1-st Int. Conf. on Genetic Algorithms, pp. 93-100

Srinivas, N.; Deb, K. 1994: Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation 2, 221-248