Mingxin YUAN, Sun-an WANG, Canyang WU, Kunpeng LI
School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
© Higher Education Press and Springer-Verlag 2009
Abstract Inspired by the mechanism of Jerne's idiotypic network hypothesis, a new adaptive immune network algorithm (AINA) is presented, which exploits the stimulation and suppression between antigen and antibody by taking the environment and the robot behavior as the antigen and the antibody, respectively. A guiding weight is defined based on the artificial potential field (APF) method, and the guiding weight is combined with antibody vitality to construct a new antibody selection operator, which improves the searching efficiency. In addition, an updating operator of antibody vitality is provided based on the Baldwin effect, which results in a positive feedback mechanism of the search and accelerates the convergence of the immune network. The simulation and experimental results show that the proposed algorithm is characterized by high searching speed, good convergence performance and strong planning ability, and solves path planning well in complicated environments.

Keywords artificial potential field, immune network, path planning, Baldwin effect
Received October 9, 2008; accepted December 15, 2008
E-mail: [email protected]

1 Introduction

Path planning refers to searching for an optimal and collision-free path from the starting point to the goal according to certain optimization criteria (e.g., minimal working cost, shortest walking route) and is one of the most important tasks for autonomous mobile robots in navigation. At present, the common path planning methods include APF [1], visibility graphs [2], and so on; however, they have deficiencies in global optimization, robustness, etc. In recent years, with the development of artificial intelligence, the genetic algorithm (GA) [3], the particle swarm optimization algorithm (PSOA) [4], the ant colony algorithm (ACA) [5] and so on have been applied to path planning and have demonstrated robustness and flexibility. However, to some extent, GA and PSOA suffer from complicated coding and heavy computation, and the planning ability and efficiency of ACA are limited by the grid division. The biological immune system (BIS) is a highly evolved, complicated adaptive system which can identify and resist antigenic foreign bodies such as bacteria and viruses and maintain the stability of the in vivo environment. BIS is characterized by immunological memory, immunological tolerance, pattern recognition and so on, and provides new ideas for engineering problems, especially in robotics. In the past several years, the immune system has captured many scholars' attention, and many kinds of immune algorithms and models based on immune mechanisms have been designed. Ishiguro et al. [6] provided a behavior arbitration mechanism based on Farmer's dynamic model [7], and the mechanism was evaluated on a garbage-collecting problem [8]. Li et al. [9] solved the dog-and-sheep problem in a distributed autonomous robotic system. Whitbrook et al. [10] realized motion planning. Farmer's dynamic model demonstrates robustness and flexibility in path planning but lacks global optimization ability. Chen et al. [11] presented a new planning method based on an immune genetic algorithm using operators of crossover, mutation, vaccines and so on, which improved the immune planning ability well; however, the planning ability and efficiency were still limited by the magnitude of the grid. Wang et al.
[12, 13] converted the path planning into space search in the antibody network using the stimulation and suppression between the antigen and antibody, and provided a new simple immune network algorithm (SINA).
Mingxin YUAN, et al. APF-guided adaptive immune network algorithm for robot path planning
Experimental results show that SINA is characterized by quickness and flexibility; however, its searching precision and convergence property need further improvement. In this paper, based on Jerne's idiotypic network hypothesis, a new adaptive immune network algorithm (AINA) is presented. An antibody selection operator taking the planning results of APF and the antibody vitality as variables is constructed, and an updating operator of antibody vitality is designed based on the Baldwin effect, which improves the planning property of the immune network. This paper is arranged as follows: Section 2 gives the planning model based on the immune network. Section 3 presents the adaptive immune network algorithm for path planning. Section 4 reports the simulation and discussion. Section 5 provides an experiment. The conclusions are given in Section 6.
2 Planning model based on immune network

The biological immune system is a highly complicated system. At present, the commonly accepted immune theories are the clonal selection theory and the idiotypic network hypothesis. The latter was proposed by Jerne in 1974 [14, 15]. Jerne's idea is schematically shown in Fig. 1. From the figure, it can be seen that the immune system is formed by a network of B-lymphocytes, which communicate with each other through interactions among their antibodies to finish the task of the immune system. The idiotope Id1 of antibody 1 (Ab1) stimulates B-lymphocyte 2, which attaches Ab2 to its surface, through the paratope P2. Viewed from the standpoint of Ab2, the idiotope Id1 of Ab1 works simultaneously as an antigen. As a result, B-lymphocyte 1 with Ab1 is suppressed by Ab2. On the other hand, Ab3 stimulates Ab1 since the idiotope Id3 of Ab3 works as an antigen viewed from Ab1. The members of the whole network form a large-scale closed chain through mutual
stimulation and suppression, and work as a self and non-self recognizer [6, 10]. The artificial immune system (AIS) is a simulation of the biological immune system. Since the real robot is static before the optimal path is found, a virtual robot is introduced in this paper to carry out the optimization in the immune network. The environment surrounding the virtual robot is regarded as the antigen, and the antigen epitope is the environmental information, including the obstacles and the goal. The virtual robot action responding to the environment is regarded as the antibody; the antibody idiotope is also environmental information, and the antibody paratope is the moving instruction. A match between the antigen and an antibody is called stimulation and a non-match is called suppression, which is evaluated by the antigen-antibody affinity. Inspired by the mechanism of the idiotypic network hypothesis, the planning model based on the immune network is constructed as shown in Fig. 2. The virtual robot sets off from point "S" and searches for point "G" through the stimulation and suppression between antigen and antibody. The moving trail of the virtual robot can be described by a series of antibodies, and the connections among the antibodies are not invariable but depend on the environment and antibody vitality. A large-scale artificial immune system is thus constructed from "S" to "G". The essence of path planning is to find, within the network, the inner figure composed of antibodies that best describes the moving trail of the virtual robot [12].
Fig. 2 Planning model based on immune network
Fig. 1 Jerne's idiotypic network

3 Adaptive immune network algorithm for path planning

3.1 Basic definitions
To simplify the description of AINA, some definitions are given first.
Front. Comput. Sci. China 2009, 3(2): 247–255
Definition 3.1 Let an omni-directional virtual mobile robot (see Fig. 3) be the immune agent. The virtual robot is equipped symmetrically but unevenly with eight virtual sensors. The detection directions of the virtual sensors are numbered 1–8, and the detection distance is divided into three degrees: near, middle and far. The virtual robot can move according to eight moving instructions C = {a, b, c, d, e, f, g, h}, i.e., {forward, left forward, right forward, left move, right move, left back, right back, back}, and the stop instruction is adopted at the goal.
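The agent and its environment coding (Definitions 3.1–3.3 below) can be sketched in a few lines. This is an illustrative sketch only: the instruction-to-step mapping, the distance thresholds and all names here are our own assumptions, not taken from the paper.

```python
# Illustrative sketch of the immune agent and antigen coding.
# Directions are indexed 0-7 here (the paper numbers them 1-8).

# Eight moving instructions C = {a..h} mapped to assumed unit steps (dx, dy).
INSTRUCTIONS = {
    "a": (0, 1),    # forward
    "b": (-1, 1),   # left forward
    "c": (1, 1),    # right forward
    "d": (-1, 0),   # left move
    "e": (1, 0),    # right move
    "f": (-1, -1),  # left back
    "g": (1, -1),   # right back
    "h": (0, -1),   # back
}

def encode_obstacle(distance, near=1.0, mid=2.0, far=3.0):
    """Map a sensed distance to a two-bit degree code:
    "00" no obstacle in range, "01" near, "10" middle, "11" far.
    The numeric thresholds are assumed for illustration."""
    if distance > far:
        return "00"
    if distance <= near:
        return "01"
    if distance <= mid:
        return "10"
    return "11"

def encode_antigen(obstacle_distances, goal_direction):
    """Build the antigen (I_AgO, I_AgG) from eight sensor readings
    and the index of the direction in which the goal is sensed."""
    i_ago = tuple(encode_obstacle(d) for d in obstacle_distances)
    i_agg = tuple(1 if i == goal_direction else 0 for i in range(8))
    return i_ago, i_agg
```

An antibody would then pair the same kind of environment code (its idiotope) with one instruction from `INSTRUCTIONS` (its paratope) and a vitality value.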
Fig. 3 Immune agent

Definition 3.2 Let the antigen (Ag) be the environment information surrounding the virtual robot:

Ag = (I_AgO, I_AgG),  (1)

where I_AgO is the obstacle information, coded binarily according to the detection direction and distance of the sensors: I_AgO = ∪_{i=1}^{8} AgO_i, ∀i = 1, 2, ..., 8, AgO_i ∈ {00, 01, 10, 11}, and the elements of the set denote non-obstacle, near distance, middle distance and far distance at direction i, respectively. I_AgG is the goal information, coded according to the detection direction: I_AgG = ∪_{i=1}^{8} AgG_i, ∀i = 1, 2, ..., 8, AgG_i ∈ {0, 1}, and the elements of the set denote non-goal and goal at direction i, respectively.

Definition 3.3 Let the antibody (Ab) be the virtual robot action aiming at the environment:

Ab = (I_AbO, I_AbG, Com),  (2)

where I_AbO = ∪_{i=1}^{8} AbO_i is the obstacle information, ∀i = 1, 2, ..., 8, AbO_i ∈ {00, 01, 10, 11}; I_AbG = ∪_{i=1}^{8} AbG_i is the goal information, ∀i = 1, 2, ..., 8, AbG_i ∈ {0, 1}. The elements of the above two sets are defined in the same way as in the antigen, and Com ∈ C.

Definition 3.4 The antigen-antibody affinity ζ is used to evaluate the degree of environmental match:

ζ = (I_AgO − I_AbO) · (I_AgG − I_AbG),  (3)

where the signs "·" and "−" represent the "AND" and "NOT" logical operators, respectively.

Definition 3.5 Let the antibody vitality τ denote the correctness of executing instructions for the antigen: the higher the correctness, the stronger the antibody vitality. In this paper, antibody vitality is updated according to the number of times the antibody is activated. The vitality of a new antibody is initialized according to the distance variance [12] after the antibody transition.

3.2 Main operators

3.2.1 Antibody transition operator
The transition probability of the antibody is generally designed based on antibody vitality [12] during the search. To improve the searching efficiency of the proposed algorithm, the APF method is introduced in this paper, and a new antibody transition operator is constructed by combining antibody vitality with the planning results of the APF method.

APF was proposed by Khatib in 1986 [16, 17]. The basic concept of the APF method is: an attractive potential field U_att is constructed according to the goal, a repulsive potential field U_rep is constructed according to the obstacles, and the virtual robot moves to the goal under the total virtual force of the attractive force F_att of U_att and the repulsive force F_rep of U_rep. Let X, X_obs and X_goal denote the positions of the virtual robot, obstacle and goal, respectively. The attractive and repulsive potential fields are generally defined as follows:

U_att(X) = (1/2) k_a ‖X − X_goal‖²,  (4)

U_rep(X) = { (1/2) k_r (1/ρ(X, X_obs) − 1/ρ_0)²,  ρ ≤ ρ_0;
             0,                                    ρ > ρ_0,  (5)

where ρ(X, X_obs) = ‖X − X_obs‖ is the Euclidean distance between the virtual robot and the obstacle, ρ_0 is the influence range of the obstacle, and k_a and k_r are positive scaling factors. The attractive and repulsive forces are generally defined as the negative gradients of the attractive and repulsive potential fields with respect to position, i.e.,

F_att = −∇U_att(X) = −k_a (X − X_goal),  (6)

F_rep = −∇U_rep(X) = { k_r (1/ρ(X, X_obs) − 1/ρ_0) (1/ρ²(X, X_obs)) ∇ρ(X, X_obs),  ρ ≤ ρ_0;
                       0,                                                          ρ > ρ_0.  (7)

The avoidance angle of the virtual robot is defined as follows:

α = ∠(F_att + F_rep).  (8)

Supposing that the angle between the executing direction of the antibody instruction and the x axis is β, the guiding weight of the antibody based on the APF method is defined as follows:

σ(t) = γ · exp(cos(α − β)),  (9)

where γ is the adjusting coefficient. The closer the executing direction of the antibody instruction is to the avoidance direction of the virtual robot, the bigger the guiding weight.
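The APF-based guiding weight of Eqs. (4)–(9) can be sketched in a few lines. The parameter values (k_a, k_r, ρ_0, γ) are illustrative, the repulsive contributions are summed over all obstacles within range (a standard APF practice; the paper writes a single F_rep), and all function names are our own.

```python
import math

# Sketch of the APF-based guiding weight, Eqs. (6)-(9), in 2D.

def attractive_force(x, goal, ka=10.0):
    """F_att = -ka * (X - X_goal), Eq. (6)."""
    return (-ka * (x[0] - goal[0]), -ka * (x[1] - goal[1]))

def repulsive_force(x, obs, kr=2.0, rho0=4.0):
    """Eq. (7): zero outside the influence range rho0."""
    rho = math.hypot(x[0] - obs[0], x[1] - obs[1])
    if rho > rho0 or rho == 0.0:
        return (0.0, 0.0)
    # the gradient of rho points from the obstacle toward the robot
    grad = ((x[0] - obs[0]) / rho, (x[1] - obs[1]) / rho)
    mag = kr * (1.0 / rho - 1.0 / rho0) / rho ** 2
    return (mag * grad[0], mag * grad[1])

def guiding_weight(x, goal, obstacles, beta, gamma=1.0):
    """sigma = gamma * exp(cos(alpha - beta)), Eqs. (8)-(9),
    where beta is the executing direction of the antibody instruction."""
    f = list(attractive_force(x, goal))
    for obs in obstacles:
        fr = repulsive_force(x, obs)
        f[0] += fr[0]
        f[1] += fr[1]
    alpha = math.atan2(f[1], f[0])  # avoidance angle, Eq. (8)
    return gamma * math.exp(math.cos(alpha - beta))
```

With no obstacle in range, an instruction pointing straight at the goal gets the maximal weight γ·e, and an instruction pointing away gets γ/e, which is the bias exploited by the transition probability below.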
To improve the selecting efficiency of the antibody, the transition probability is constructed by combining the antibody vitality τ and the guiding weight σ:

p_i(t) = [τ_i(t)] · [σ_i(t)]² / Σ_{j=1}^{8} [τ_j(t)] · [σ_j(t)]²,  (10)

where τ_i(t) is the vitality of the antibody whose paratope is the ith instruction. In this paper, the selection of the antibody is carried out with the roulette-wheel algorithm. An antibody whose vitality is strong and whose instruction direction is close to the avoidance angle α is selected with higher probability; at the same time, other antibodies with the same idiotope also have the possibility of being selected.

3.2.2 Antibody learning operator

To solve the collision and local wandering during the antibody transition, the learning strategy of antibody vitality is defined as

τ^n(t) = τ^n(t − 1) − χ^(N_n − n + 1) ε,  (11)

where n is the layer number of the network, N_n is the total layer number, χ is the dynamic learning coefficient with 0 < χ < 1, and ε is the static learning coefficient with ε > 0. The purpose of the learning is to decrease the vitality of antibodies that cause collision and local wandering, and to increase the selection probability of other antibodies holding the same idiotope.

3.2.3 Antibody vitality updating operator

Basic medical research indicates that the Baldwin effect [18, 19] exists in the biological immune system. The Baldwin effect states that once an individual learns or achieves some usable property, its offspring will achieve the same property with higher probability. To improve the searching efficiency of AINA, the successful antibodies are further encouraged after the vitality of all activated antibodies is updated, so that a successful antibody is selected again with high probability when the same antigen is met in the next planning task; this forms a positive feedback mechanism and accelerates the recognition of the antigen. To ensure the dynamic property of the antibody database, all antibody vitalities are attenuated with certain coefficients. Accordingly, the updating operators of antibody vitality consist of the following three parts.

Vitality updating operator of the activated antibody:

τ(t + 1) = τ(t) + μ · N_a/N_t,  (12)

where N_a is the number of times the activated antibody is activated in one cycle, N_t is the number of antibodies composing the planning path, and μ is the adjusting coefficient.

Vitality updating operator of the successful antibody:

τ(t + 1) = τ(t) + σ · N_s/N_t,  (13)

where N_s is the number of times the successful antibody is activated in one cycle, and σ is the adjusting coefficient.

Antibody vitality attenuation:

τ(t + 1) = (1 − ρ) · τ(t),  (14)

where ρ is the attenuating coefficient. The attenuating coefficient ρ reflects how much antibody vitality remains as time goes on, which affects the global searching ability and the convergence speed of AINA. If ρ is increased, the vitality of rarely used antibodies tends to 0, which reduces the randomness of AINA and decreases the global searching ability. If ρ is decreased, the randomness and the global searching ability are increased, but the convergence speed is decreased. In this paper, the attenuating coefficient ρ is adaptively changed as follows:

ρ = ρ_min + t (ρ_max − ρ_min)/T_max,  (15)
where ρ_min and ρ_max are the minimal and maximal attenuating coefficients, respectively, and T_max is the maximal number of cycles.

3.3 Flow of AINA

Step 1 Initialize the parameters of AINA: T_max, k_a, k_r, γ, ρ_min, ρ_max, and so on.

Step 2 Initialize the planning task.

Step 3 Antigen recognition: extract the environment information and code the antigen according to Eq. (1).

Step 4 Antibody match: look for a matching antibody in the antibody database according to Eq. (3). If there is none, eight new antibodies with the same idiotope but different paratopes are produced according to Eq. (2); otherwise go to Step 5.

Step 5 Antibody transition according to Eq. (10).

Step 6 Judge whether there is collision or wandering. If not, go to the next step; otherwise, carry out the learning strategy of antibody vitality according to Eq. (11) and go to Step 3.

Step 7 Judge whether the robot has reached the goal. If not, go to Step 3; otherwise, hold the optimal path and update the antibody vitality.

Step 8 Judge whether the algorithm has finished all cycles. If not, go to Step 2; otherwise, output the optimal path.
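Two pieces of the flow above can be sketched directly: the roulette-wheel antibody transition of Step 5 (Eq. (10)) and the adaptive attenuating coefficient of Eq. (15). This is a toy sketch with our own names; the environment handling of Steps 3, 4, 6 and 7 is omitted.

```python
import random

def roulette_select(vitalities, weights, rng):
    """Transition probability of Eq. (10): p_i ~ tau_i * sigma_i**2,
    sampled with the roulette-wheel algorithm over the eight antibodies
    sharing the current idiotope."""
    scores = [t * s * s for t, s in zip(vitalities, weights)]
    total = sum(scores)
    r = rng.random() * total
    acc = 0.0
    for i, sc in enumerate(scores):
        acc += sc
        if r <= acc:
            return i
    return len(scores) - 1  # guard against floating-point round-off

def attenuation(t, t_max, rho_min=0.01, rho_max=0.95):
    """Adaptive attenuating coefficient of Eq. (15): grows linearly
    from rho_min to rho_max over the cycles, giving coarse search
    early and subtle search late."""
    return rho_min + t * (rho_max - rho_min) / t_max
```

Because the guiding weight enters Eq. (10) squared, an antibody pointing along the APF avoidance direction is strongly favored while weaker candidates keep a nonzero selection chance, which preserves the randomness the attenuation operator relies on.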
4 Simulation and discussion

To verify the validity of AINA, simulation experiments were carried out with Matlab 7.0 on an Intel Pentium IV 2.99 GHz computer with 512 MB RAM in the two environments shown in Fig. 4. The antibody transition probability is computed in two modes, namely, by antibody vitality [12] and by Eq. (10); the two modes are denoted AINA (Vitality) and AINA (APF), respectively. The experimental results are compared with those of SINA and ACA [5]. The convergence condition of the four algorithms is that the length of the optimal antibody path does not change for fifteen generations. Considering the randomness, each algorithm is independently tested thirty times in each environment. Because the antibody path is tortuous and not suitable for the robot to move along in local regions, the path is post-processed by straightening and smoothing. All the parameters of the four algorithms are determined experimentally. In AINA, T_max = 50, ρ_0 = 4 m, k_a = 10, k_r = 2, χ = 0.25, ε = 1, μ = 1, σ = 3, ρ_min = 0.01,
Fig. 4 Simulation environments: (a) environment I; (b) environment II
ρ_max = 0.95. The parameters of SINA are the same as those in Ref. [12]. In ACA, the environment is divided into a 20×50 grid, the number of ants is 20, the pheromone and goal heuristic factors are 1 and 2, respectively, and the evaporation factor is 0.1. Table 1 compares the planning properties of the four algorithms. From the four path lengths, it can be seen that the path of SINA is the longest, which shows its weak planning ability. The planning ability of AINA is obviously improved by the new updating operators of antibody vitality and the adaptive attenuating coefficient. Since ACA has global optimization ability, its path is shorter than that of SINA, but longer than that of AINA (APF) and similar to that of AINA (Vitality). Simulations show that the planning length becomes shorter with a finer division of the grid, but the planning efficiency then decreases significantly. From the average convergence generation, it can be seen that the convergence speed of SINA is the fastest owing to its strong space searching and immunological memory; however, it converges only locally. The convergence speed of AINA is quicker than that of ACA. The antibody transition based on APF accelerates the convergence of AINA, so its convergence property is better than that of AINA (Vitality). Figures 5 and 6 show the evolving curves of the immune network for the three immune algorithms in the two environments. From the figures, it can be seen that an immune
Table 1 Comparisons of planning properties among the four algorithms

                                           |           environment I            |           environment II
planning properties                        | SINA   AINA       AINA      ACA    | SINA   AINA       AINA      ACA
                                           |        (Vitality) (APF)            |        (Vitality) (APF)
average path length of antibody (ant) (m)  | 25.28  24.34      23.88     24.26  | 28.48  25.30      24.78     25.58
average smooth path length (m)             | 23.34  22.81      22.34     22.74  | 24.13  23.91      23.76     24.07
optimal path length of antibody (ant) (m)  | 24.80  23.60      23.20     23.48  | 27.20  23.60      23.26     23.81
optimal smooth path length (m)             | 22.19  22.02      21.56     22.15  | 23.04  22.23      22.11     22.79
average convergence generation             | 12.65  14.70      13.95     25.67  |  8.95  13.35      12.60     26.83
network algorithm can make the robot avoid all kinds of obstacles, especially the local minimum environment in environment II, and find the optimal path through self-organization, self-learning and immunological memory, which demonstrates its robustness and flexibility. Comparing panels (a), (b) and (c) of Figs. 5 and 6, it can be further seen that the updating operators of antibody vitality based on the Baldwin effect realize the positive feedback of the immune network, and the adaptive attenuating coefficient realizes coarse search in the early stage and subtle search in the later stage, which makes the convergence property of AINA obviously better than that of SINA. In addition, the antibody transition operator based on the planning results of the APF method accelerates the obstacle avoidance. Although the virtual robot is easily trapped in a local minimum by APF, it gets rid of the obstacles using the learning strategy of antibody vitality, and the corresponding activated and successful antibodies are saved in the antibody database as the immunological memory of AINA. According to the Baldwin effect, the vitality of the successful antibodies that escape the local minimum becomes stronger, and the vitality of the other antibodies becomes weaker. When the same obstacles, including the local minimum, are met in the next task, the successful antibodies are chosen again with higher probability, which accelerates the obstacle avoidance while improving the convergence property of the immune network. Therefore, the convergence property of AINA (APF) is the best.

Figures 7 and 8 show the optimal planning results of the four algorithms in environments I and II, respectively. From the figures, it can be seen that all four algorithms can find an optimal path. Figure 9 shows the average convergence curves of the four algorithms in the two environments. From the figure, it can be seen that the planning property of SINA is weak, that of AINA is obviously improved, that of AINA (Vitality) is similar to that of ACA, and that of AINA (APF) is the best.

Fig. 5 Evolving curves of the immune network in environment I: (a) SINA; (b) AINA (Vitality); (c) AINA (APF)

Fig. 6 Evolving curves of the immune network in environment II: (a) SINA; (b) AINA (Vitality); (c) AINA (APF)
Fig. 7 Optimal planning results of the four algorithms in environment I: (a) optimal antibody (ant) path; (b) optimal smooth path

Fig. 8 Optimal planning results of the four algorithms in environment II: (a) optimal antibody (ant) path; (b) optimal smooth path

Fig. 9 Average convergence curves of the four algorithms in the two environments: (a) environment I; (b) environment II
5 Experiment

To further verify the validity of AINA in a practical environment, an experiment was carried out in a known environment containing four obstacles (see Fig. 10). As shown in Fig. 10, the overall experimental system consists of a tracked mobile robot, a host computer, a wireless network based on Zigbee technology, and some obstacles.

Fig. 10 Experiment environment for path planning

The optimal path for the known environment is obtained on the host computer using AINA, and the corresponding control instructions are sent to the mobile robot through the wireless network. The robot is located wirelessly through Zigbee technology. To construct the wireless network well, eight reference nodes are placed around the planning environment, and the blind node is the robot itself. The experimental result is shown in Fig. 11 with twelve photos, (a)–(l), taken during the experiment. From the figure, it can be seen that the tracked robot can safely finish the navigation from the start point to the goal under the control of the host computer by following the planned optimal path. Figure 11(a) shows the starting position, Figs. 11(e)–(i) show the avoidance of the local minimum before the goal, and Figs. 11(j)–(l) show the robot moving toward the goal. Comparing the optimal path with the real track of the tracked robot, some differences can be seen between them, which were caused by the errors of the wireless location. To further improve the navigation precision, visual location could be used to localize the robot.
Fig. 11 Experimental result in the known environment

6 Conclusions

Inspired by the mechanism of the idiotypic network hypothesis, a new APF-guided adaptive immune network algorithm is presented in this paper. Through theoretical analysis, simulation and experiment, we can draw the following conclusions.

1) The immune network algorithm for path planning is characterized by self-organization and self-learning. Even when the environment is complicated, the virtual robot can achieve a feasible path through self-learning and avoid being trapped in a local minimum environment.

2) The antibody transition operator constructed by combining the planning results of the APF method with antibody vitality can obviously improve the obstacle avoidance efficiency.

3) The updating operators of antibody vitality based on the Baldwin effect realize the positive feedback of AINA during the antibody search, which accelerates the searching speed and improves the convergence property of the immune network.

4) The adaptive adjustment of antibody vitality can improve the searching precision and efficiency of the immune network algorithm.
The simulation results show that, compared with SINA and ACA, the optimal planning ability and convergence property of AINA are obviously improved, and that the planning property of AINA (APF) is the best. The experiment demonstrates the validity of the proposed AINA in mobile robot navigation.
Acknowledgements This work was supported by the National Natural Science Foundation of China (Grant No. 50575177).

References

1. Ge S S, Cui Y J. Dynamic motion planning for mobile robots using potential field method. Autonomous Robots, 2002, 13(3): 207–222
2. Wei C Z, Ma Z, Chang J H. Reduction of visibility graph on global path planning for mobile robot. In: Proceedings of Chinese Control Conference. Piscataway: IEEE, 2006, 1605–1608
3. Liu J, Yang D Y. Path planning based on double-layer genetic algorithm. In: Proceedings of 3rd International Conference on Natural Computation. Piscataway: IEEE, 2007, 357–361
4. Saska M, Macas M, Preucil L. Robot path planning using particle swarm optimization of Ferguson splines. In: Proceedings of 11th IEEE International Conference on Emerging Technologies and Factory Automation. Piscataway: IEEE, 2006, 833–839
5. Liu G Q, Li T J, Li Y P. The ant algorithm for solving robot path planning problem. In: Proceedings of 3rd International Conference on Information Technology and Applications. Piscataway: IEEE, 2005, 25–27
6. Ishiguro A, Shirai Y, Kondo T, et al. Immunoid: an architecture for behavior arbitration based on the immune networks. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 1996, 1730–1736
7. Luh G C, Liu W W. Reactive immune network based mobile robot navigation. In: Proceedings of 3rd International Conference on Artificial Immune Systems. Berlin: Springer, 2004, 119–132
8. Vargas P A, De Castro L N, Michelan R. Implementation of an immuno-genetic network on a real Khepera II robot. In: Proceedings of International Conference on Evolutionary Computation. Piscataway: IEEE, 2003, 420–426
9. Li J H, Wang S A. Model of immune agent and application in path finding of autonomous robots. In: Proceedings of International Conference on Machine Learning and Cybernetics. Piscataway: IEEE, 2003, 1961–1964
10. Whitbrook A M, Aickelin U, Garibaldi J M. Idiotypic immune networks in mobile-robot control. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 2007, 37(6): 1581–1598
11. Chen X, Tan G Z, Jiang B. Real-time optimal path planning for mobile robots based on immune genetic algorithm. Journal of Central South University (Science and Technology), 2008, 39(3): 577–583
12. Wang S A, Zhuang J. An immunity algorithm for path finding and optimizing of the moving robot. Journal of System Simulation, 2002, 14(8): 995–997
13. Zhuang J, Wang S A. Further study of robot path planning algorithm based on artificial immune net theory. Journal of System Simulation, 2004, 16(5): 1017–1019
14. Jerne N K. The immune system. Scientific American, 1973, 229(1): 52–60
15. Cayzer S, Aickelin U. A recommender system based on idiotypic artificial immune networks. Journal of Mathematical Modelling and Algorithms, 2005, 4(2): 181–198
16. Jerne N K. Idiotypic networks and other preconceived ideas. Immunological Reviews, 1984, 79: 5–24
17. Ge S S, Cui Y J. New potential functions for mobile robot path planning. IEEE Transactions on Robotics and Automation, 2000, 16(5): 615–620
18. Khatib O. Real-time obstacle avoidance for manipulators and mobile robots. International Journal of Robotics Research, 1986, 5(1): 90–98
19. Sun Y F, Zhang C K. Baldwin effect based chaotic parallel genetic algorithm with variable-scale learning. In: Proceedings of International Conference on Machine Learning and Cybernetics. Piscataway: IEEE, 2007, 1003–1008