Arab J Sci Eng (2011) 36:795–807 DOI 10.1007/s13369-011-0081-5
RESEARCH ARTICLE - ELECTRICAL ENGINEERING
Serkan Aydin · Ilker Kilic · Hakan Temeltas
Using Linde Buzo Gray Clustering Neural Networks for Solving the Motion Equations of a Mobile Robot
Received: 21 October 2009 / Accepted: 25 May 2010 / Published online: 29 July 2011 © King Fahd University of Petroleum and Minerals 2011
Abstract In this paper, motion equations for the synchro-drive robot Nomad 200 are solved by using Linde Buzo Gray (LBG) clustering neural networks. The trajectories of the Nomad 200 are assumed to be composed of straight line segments and curves. The structure of the curves is determined by only two parameters: the turn angle and the translational velocity in the curve. The curves of the trajectories are found by using artificial neural networks (ANN) and the LBG clustered ANN. In this study a clustering method is used to improve the learning and test performance of the ANN. In general, the LBG algorithm is used in image processing as a quantizer. This is the first publication in which the LBG algorithm is successfully used for clustering ANN data sets. Thus, the best training data set of the ANN is achieved and minimum error values are obtained. It is shown that the LBG-ANN models are better than the classic ANN models.

Keywords Mobile robot · Neural networks · Clustering · Linde Buzo Gray algorithm
S. Aydin (B), TMYO, Industrial Electronics Program, Celal Bayar University, Turgutlu, Manisa, Turkey. E-mail: [email protected]
I. Kilic, Electrical and Electronics Department, Engineering Faculty, Celal Bayar University, Muradiye, Manisa, Turkey. E-mail: [email protected]; [email protected]
H. Temeltas, Electrical and Electronics Faculty, Electric Department, Istanbul Technical University, Maslak, Istanbul, Turkey. E-mail: [email protected]
Abbreviations
MR      Mobile robot
ANN     Artificial neural network
LBG     Linde Buzo Gray
LBG-ANN Linde Buzo Gray clustered artificial neural network
CD      Cell decomposition
VGM     Visibility graphics method
PCA     Principal component analysis
WT      Wavelet transform
FCM     Fuzzy C-means clustering
ECG     Electrocardiogram
RBFN    Radial basis function network
LMN     Local model network
MLPs    Multi-layer perceptrons
BP      Back propagation
E       Error value
x       x axis
y       y axis
θ       Angle of the mobile robot with respect to the origin
ω       Rotational velocity
vt      Translational velocity
at      Maximum translational acceleration
as      Maximum rotational acceleration
wji     Weight of a neuron
Δwji    Weight change at each iteration
1 Introduction

Trajectory planning for mobile robots (MR) has been widely studied in the past two decades. Studies in the literature fall into two categories: (1) closed planning (real-time, local) and (2) open planning (offline, global) [1]. Planning is termed closed when the mobile robot is active and moving. In closed planning the only criterion is reaching the target; in other words, it is not always possible to guarantee the required features of the planned trajectory in closed planning. These requirements can only be achieved in open planning. Generally, in open planning, the trajectory has a three-step hierarchy: planning the path, smoothing the planned path and following the smoothed path. It is also possible to classify open planning into two groups: open-continuous and open-discrete. In open-continuous planning the trajectory is obtained in two steps. In the first step, planning is achieved without discretizing the working space. In the second step the planned trajectory is tracked by a controller. Generally, variational methods are used in this step, and the equation system formed by the Lagrange multipliers method is solved using numerical methods. In open-discrete planning the three steps mentioned above are realized in order. First, the path is planned in the discretized space (splitting into cells, visibility graphics etc.). Then the path, which may have been planned in different ways, is smoothed. In the second
step, different curve families from the literature, such as B-spline curves, cubic spirals, polynomial curves and clothoids, are used [2–5]. In the last step, a controller is required to follow the smoothed path. In another study [6] the path, planned in different ways (splitting into cells, i.e. cell decomposition (CD), the visibility graphics method (VGM) etc.), is smoothed using flat parts (rotational velocity ω = 0) and curved parts for the corners of the path (translational velocity vt = constant). Increasing or decreasing the translational velocity in the flat parts is achieved at the maximum acceleration (at). When increasing or decreasing the rotational velocity in the curved parts at the maximum acceleration (ω̇ = as), the translational velocity is assumed constant. The fundamental motion equations for the synchro-drive robot [7] are solved by using different approximations [8]. Fox's study [8] derives an approximation that models the velocity as a piecewise constant function in time; under this assumption, robot trajectories consist of sequences of many finite circular segments. In this paper, the motion equations are solved by using the Maclaurin series approximation. After that, an artificial neural network (ANN) model is proposed for the curved parts; that is, the ANN model is used instead of a deterministic model of the motion equations [6,9]. The calculation time is reduced to about one eighth by using the ANN models, and in trajectory optimization the minimization of the calculation time is very important [6,9]. In order to obtain better ANN models, one of the most important issues is how the learning data sets are formed. Different studies have been performed to improve the accuracy of ANN models [10–12]. In [10], principal component analysis (PCA), the wavelet transform (WT) and fuzzy c-means (FCM) clustering are used for data reduction.
After that, the data is used to train artificial neural networks for the classification of electrocardiogram (ECG) arrhythmias. The main contribution of [11], on the other hand, is an approach for investigating the relationship between the clustering of the input–output training samples and the mean squared output error in the context of a radial basis function network (RBFN). In [12] a local model network (LMN) represents the mobile robot by a set of locally valid submodels that are multi-layer perceptrons (MLPs), trained with the back propagation (BP) algorithm; the FCM algorithm is proposed to divide the input space into subregions, and an MLP submodel is then identified to represent each particular region. In this study, the LBG clustering algorithm is used to improve the accuracy of the ANN, and more accurate models are achieved in this way.

2 Material and Methods

2.1 Material and Trajectory Planning

The material is the mobile robot Nomad 200 (Fig. 1). The translational acceleration and translational velocity parameters are defined as at = 0 and vt ∈ [0, 1] m/s, with θ ∈ [0, ]°. The curved parts are recorded for the different values. During trajectory tracking of the MR, the coordinates x and y (0.254 cm resolution) and the velocities (vt with 0.254 cm/s and ω with 0.1°/s resolution) are recorded in a file ten times per second. The curved parts are also calculated by using the deterministic models of the Nomad 200. The observable trajectory plan, produced by the VGM or cell decomposition (CD), can be treated as an optimization problem [6]. The paths produced by VGM/CD are smoothed using curved parts (Fig. 2).
2.2 Curved and Flat Part Equations

The curved and flat part equations for the mobile robot (MR) Nomad 200 are investigated here. The wheel structure is shown in Fig. 3. The nonholonomic restriction must be satisfied when the robot moves without sliding:

ẋ sin(θ) − ẏ cos(θ) = 0
(1)
This means that, under the geometrical configuration (x, y, θ), the direction of the translational velocity vector is always parallel to the wheel direction. The translational and rotational velocities and their derivatives are embedded into the geometrical configuration (x, y, θ, vt, ω), named the dynamic configuration [5]. Here, vt is the translational velocity and ω is
Fig. 1 Nomad 200 robot
Fig. 2 Robot trajectories (s1, s2 and s3 are the linear path pieces; α1 and α2 are the distances required for the spins at the translational velocities vt1 and vt2; d1 and d2 are the velocity change distances from 0 to vt1 and from vt1 to vt2, respectively)
the rotational velocity. In this space Eq. 1 can be written as follows (the dots above the parameters denote time derivatives):

ẋ = vt cos(θ) = vx
ẏ = vt sin(θ) = vy
Fig. 3 Wheel structure of the Nomad 200
θ̇ = ω
v̇t = at
ω̇ = as   (2)
where at and as are the translational and rotational accelerations, respectively. Because of the dynamic features of the robot, there are restrictions on the velocity and the acceleration:

|vt| ≤ vtmax,  |ω| ≤ ωmax,  |at| ≤ atmax,  |as| ≤ asmax   (3)
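In an implementation these restrictions are typically enforced by saturating the commanded quantities before they are used. A minimal sketch follows; the limit values below are illustrative, not the actual Nomad 200 parameters.

```python
def clamp(value, limit):
    """Saturate a signed quantity to [-limit, +limit]."""
    return max(-limit, min(limit, value))

# Illustrative limits (not taken from the Nomad 200 datasheet).
VT_MAX, W_MAX = 1.0, 0.785   # m/s, rad/s
AT_MAX, AS_MAX = 0.5, 0.3    # m/s^2, rad/s^2

def saturate_control(vt, w, at, a_s):
    """Apply the velocity and acceleration restrictions of Eq. 3."""
    return (clamp(vt, VT_MAX), clamp(w, W_MAX),
            clamp(at, AT_MAX), clamp(a_s, AS_MAX))

print(saturate_control(1.4, -1.0, 0.2, 0.9))  # → (1.0, -0.785, 0.2, 0.3)
```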
In the applications the control vector is taken as u = [vt ω]ᵀ. When both sides of Eq. 2 are integrated, Eq. 4 is obtained:

x = x0 + ∫_{t0}^{t} vt cos(θ) dt
y = y0 + ∫_{t0}^{t} vt sin(θ) dt
θ = θ0 + ∫_{t0}^{t} ω dt   (4)
vt = vt0 + ∫_{t0}^{t} at dt
ω = ω0 + ∫_{t0}^{t} as dt
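Equation 4 can also be checked by forward-Euler integration of Eq. 2; a minimal sketch (step size, horizon and the straight-run test values are illustrative):

```python
import math

def integrate_motion(x, y, theta, vt, w, at, a_s, T, dt=0.001):
    """Forward-Euler integration of the dynamic configuration (Eqs. 2 and 4)."""
    steps = int(T / dt)
    for _ in range(steps):
        x += vt * math.cos(theta) * dt
        y += vt * math.sin(theta) * dt
        theta += w * dt
        vt += at * dt
        w += a_s * dt
    return x, y, theta, vt, w

# Straight run: constant vt, no rotation -> pure x displacement.
x, y, theta, vt, w = integrate_motion(0, 0, 0.0, 0.5, 0.0, 0.0, 0.0, T=2.0)
print(round(x, 3), round(y, 3))  # → 1.0 0.0
```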
If at and as are set constant to atmax and asmax, respectively, in Eq. 4 (because of the problem definition in the Introduction), then Eq. 5 is obtained:

vt = vt0 + at (t − t0),  at = atmax
ω = ω0 + as (t − t0),  as = asmax   (5)
Using Eq. 5 in the third line of Eq. 4:

θ = θ0 + ∫_{t0}^{t} (ω0 + as (t − t0)) dt = θ0 + ω0 (t − t0) + as (t − t0)²/2,  as = asmax   (6)
If Eqs. 5 and 6 are substituted into the first two lines of Eq. 4, then:

x = x0 + ∫_{t0}^{t} (vt0 + at (t − t0)) cos(θ0 + ω0 (t − t0) + as (t − t0)²/2) dt
y = y0 + ∫_{t0}^{t} (vt0 + at (t − t0)) sin(θ0 + ω0 (t − t0) + as (t − t0)²/2) dt   (7)

with at = atmax and as = asmax.
Since the cosine and sine terms in Eq. 7 contain t² terms, these integrals are Fresnel integrals [13]. Approximate results are obtained by expanding the trigonometric terms into a Maclaurin series around a working point:

cos(θ) = Σ_{n=0}^{∞} (−1)ⁿ θ²ⁿ/(2n)! = 1 − θ²/2! + θ⁴/4! − …
sin(θ) = Σ_{n=0}^{∞} (−1)ⁿ θ²ⁿ⁺¹/(2n+1)! = θ − θ³/3! + θ⁵/5! − …   (8)
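The truncated series of Eq. 8 can be checked numerically; a short sketch (the truncation order n_max = 5 is an illustrative choice):

```python
import math

def maclaurin_cos(theta, n_max=5):
    """cos(theta) via the first n_max + 1 terms of Eq. 8."""
    return sum((-1) ** n * theta ** (2 * n) / math.factorial(2 * n)
               for n in range(n_max + 1))

def maclaurin_sin(theta, n_max=5):
    """sin(theta) via the first n_max + 1 terms of Eq. 8."""
    return sum((-1) ** n * theta ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(n_max + 1))

theta = 0.8  # rad, a typical value inside a curved part
print(abs(maclaurin_cos(theta) - math.cos(theta)))  # ~1e-10 truncation error
```

For the small turn angles occurring on a single curved part, a few terms already give errors far below the 0.254 cm recording resolution.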
Referring to the solution outline given in the Introduction, the trajectories consist of flat and curved parts. To make the calculations clearer, Eq. 7 is restructured with t0 = 0 for the flat and the curved parts. Since as = 0 and ω0 = 0 in the flat parts, Eq. 9 is obtained from Eq. 7:

x = x0 + ∫_0^t (vt0 + at t) cos(θ0) dt = x0 + (vt0 t + at t²/2) cos(θ0),  at = atmax
y = y0 + ∫_0^t (vt0 + at t) sin(θ0) dt = y0 + (vt0 t + at t²/2) sin(θ0),  at = atmax   (9)
Fig. 4 Voronoi regions
Since the translational velocity is constant (vt = vt0) in the curved parts, at = 0 is assumed:

x = x0 + ∫_0^t vt0 cos(θ0 + ω0 t + as t²/2) dt
y = y0 + ∫_0^t vt0 sin(θ0 + ω0 t + as t²/2) dt,  as = asmax   (10)

Equation 10 can be solved by expanding the Maclaurin series as in Eq. 8. The switching times of as for the curved parts are calculated, and the x and y values are obtained by replacing θ0, ω0 and asmax in Eq. 10 according to these switching times [9].
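As a cross-check of the series solution, Eq. 10 can also be evaluated by direct numerical quadrature; a sketch with illustrative parameter values:

```python
import math

def curved_part(x0, y0, theta0, vt0, w0, as_max, t_end, n=2000):
    """Evaluate Eq. 10 by the trapezoidal rule (at = 0, vt = vt0 on the curve)."""
    dt = t_end / n
    x, y = x0, y0
    for k in range(n):
        t0, t1 = k * dt, (k + 1) * dt
        f0 = theta0 + w0 * t0 + as_max * t0 ** 2 / 2
        f1 = theta0 + w0 * t1 + as_max * t1 ** 2 / 2
        x += vt0 * (math.cos(f0) + math.cos(f1)) / 2 * dt
        y += vt0 * (math.sin(f0) + math.sin(f1)) / 2 * dt
    return x, y

# Illustrative curved part: 0.5 m/s with rotational acceleration 0.3 rad/s^2.
x, y = curved_part(0.0, 0.0, 0.0, 0.5, 0.0, 0.3, t_end=2.0)
print(round(x, 4), round(y, 4))
```

Substituting the truncated Maclaurin series of Eq. 8 into the integrand instead makes it polynomial in t, so the integral can be taken term by term in closed form.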
2.3 Linde Buzo Gray Algorithm (LBG)

Image data compression using vector quantization (VQ) has received much attention in the last twenty years because of its simplicity and adaptability [14,15]. In vector quantization an N-dimensional data space is divided into L regions called Voronoi regions. Figure 4 shows an example of such regions for dimension N = 2. Linde, Buzo and Gray first suggested a practical suboptimal clustering algorithm, known as the LBG algorithm [16], which is used in VQ to generate a codebook from a training set. The algorithm only guarantees a codebook that is locally optimal with respect to the source data used (the training set). The LBG algorithm can be defined as a mapping Q of the K-dimensional Euclidean space Rᴷ onto a finite subset Y of Rᴷ:

Q : Rᴷ → Y   (11)

where Y = {x̂i; i = 1, 2, 3, …, N} is the set of reproduction vectors and N is the number of vectors in Y. If a distortion measure d(x, x̂), representing the penalty or cost of reproducing the vector x by x̂, is defined, then the best mapping Q is the one that minimizes the distortion. The LBG algorithm and its variations are based on this minimization, using a training data set as the signal. One simple distortion measure is the squared error:

d(x, x̂) = ‖x − x̂‖² = Σ_{j=0}^{K−1} (xj − x̂j)²   (12)
where K is the dimension of the vectors. The expected distortion is approximated by the expression

D(x, q(x)) = (1/N) Σ_{i=0}^{N−1} d(xi, x̂i)   (13)
The LBG algorithm for a training sequence with known distribution follows these steps:

(1) Initialization: let N be the number of levels and ε ≥ 0 the distortion threshold. Assume an initial N-level reproduction alphabet Â0 and a training sequence {xj; j = 0, 1, 2, …, n − 1}; m is the iteration number.
(2) Given Âm = {yi; i = 1, 2, …, N}, find the minimum distortion partition P(Âm) = {Si; i = 1, …, N} of the training sequence:

xj ∈ Si if d(xj, yi) ≤ d(xj, yl)   (14)

for all l. Compute the average distortion

Dm = D(Âm, P(Âm)) = (1/n) Σ_{j=0}^{n−1} min_{y ∈ Âm} d(xj, y)   (15)

(3) If (Dm−1 − Dm)/Dm ≤ ε, stop the iteration with Âm as the final reproduction alphabet; otherwise continue.
(4) Find the optimal reproduction alphabet x̂(P(Âm)) = {x̂(Si); i = 1, 2, …, N} for P(Âm), where

x̂(Si) = (1/|Si|) Σ_{j: xj ∈ Si} xj   (16)

(5) Set Âm+1 = {x̂(Si)}, increment m to m + 1, and go to step (2).

In the above iterative algorithm, an initial reproduction alphabet Â0 is assumed in order to begin. There are several techniques for obtaining the initial codebook. The simplest is to use widely spaced words from the training data. Linde, Buzo and Gray use a splitting technique in which the centroid of the training sequence is calculated and split into two close vectors; the centroids (reproduction vectors) of the two resulting partitions are then found. Each resulting vector is then split into two vectors and the above procedure is repeated until an N-level initial reproduction alphabet is obtained. The splitting is performed by adding a fixed perturbation vector ε to each vector yi, generating the two vectors yi + ε and yi − ε.

3 Artificial Neural Networks

3.1 Multi-layer Perceptron

An MLP consists of an input layer, an output layer and one or more hidden layers. Input-layer neurons behave like a buffer that distributes the input signals xi to the hidden layer. The jth neuron of the hidden layer sums the products of the input signals xi and the weights wji and produces the output

yj = f(Σ_i wji xi)   (17)

where f is the sigmoid or the hyperbolic tangent function. The output-layer values yj are calculated in a similar way. Adjusting the weights wji by a training algorithm is called training the network. The training algorithms find the wji values that minimize

E = (1/2) Σ_j (ydj − yj)²   (18)
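Equations 17 and 18 can be sketched for a single neuron as follows (a sigmoid activation is chosen here; the weights and inputs are illustrative):

```python
import math

def neuron_output(weights, inputs):
    """Eq. 17: y_j = f(sum_i w_ji * x_i) with a sigmoid activation f."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def network_error(desired, outputs):
    """Eq. 18: E = 1/2 * sum_j (y_dj - y_j)^2."""
    return 0.5 * sum((d - y) ** 2 for d, y in zip(desired, outputs))

y = neuron_output([0.4, -0.7], [1.0, 0.5])  # hidden neuron with 2 inputs
print(round(network_error([1.0], [y]), 4))
```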
At each iteration Δwji is added to the weights wji. The training process is stopped at a pre-determined iteration number or error value E. There are different training algorithms, and Δwji is calculated according to the chosen training algorithm. Four different training algorithms are used in this paper; the following sections give short explanations of these algorithms.
3.1.1 Gradient Descent Algorithm

Gradient descent is one of the algorithms used in the back propagation of the error values. In the kth iteration, the weight change Δwji(k) is calculated by

Δwji(k) = −α ∂E/∂wji(k) + μ Δwji(k − 1)   (19)

where α is the learning coefficient, μ is the momentum coefficient and Δwji(k − 1) is the weight change of the previous iteration. In the multilayer perceptron, the difference between the desired output yd(k) and the network output y(k) propagates from the output layer back to the input layer, and the weights wji(k) are changed.

3.1.2 Resilient Backpropagation Algorithm

The amplitude of the gradient is not used in this algorithm; the weight change Δwji(k) is determined from the local behaviour of the gradient [17]. The update value Aji(k) of the weight wji(k) is determined according to the sign of the partial derivative of the error at successive iterations. The weight changes are

Δwji(k) = −Aji(k)  if  (∂E/∂wji(k − 1)) (∂E/∂wji(k)) > 0
Δwji(k) = +Aji(k)  if  (∂E/∂wji(k − 1)) (∂E/∂wji(k)) < 0   (20)
Δwji(k) = 0   otherwise

3.1.3 Quasi Newton Algorithm

This is a second order algorithm [18]. Second order algorithms like this use the quadratic-model Newton step:

Δwji(k) = −H⁻¹ ∂E/∂wji(k)   (21)
where the Hessian matrix H is the second order derivative of E. If E is quadratic and H is a positive definite matrix, then one step is sufficient to find the minimum (possibly local) error value. The iteration number increases if E is not quadratic. Calculating the inverse of H is computationally expensive; hence, the Quasi-Newton method computes a more efficient matrix G (an approximation of the inverse Hessian H⁻¹) at each iteration. The update formula for this approximation is known as the inverse Broyden–Fletcher–Goldfarb–Shanno (BFGS) update [18]. Sometimes the Newton step with G instead of H⁻¹ leads to an increase of the error E, because the actual shape of the error surface is not approximated very accurately. Thus, Δwji(k) often serves as a search direction, and the step width α in the search direction is determined by a line search procedure; the backtracking algorithm is used (see [18] for details). The new weight change in an iteration of Quasi-Newton is given by

Δwji(k) = −α G ∂E/∂wji(k)   (22)
3.1.4 Scaled Conjugate Gradient Algorithm

In this algorithm, the step width α is computed from the scaled product s of the Hessian H and the search direction P:

α = −(Pᵀ ∂E/∂wji(k)) / (Pᵀ s)   (23)

The new weight vector is finally determined by

wji(k + 1) = wji(k) + α P   (24)
Fig. 5 Modeling the curved parts with the ANN
3.1.5 Levenberg–Marquardt Algorithm

This is also a second order algorithm. Like Quasi-Newton, it uses an approximation instead of the H matrix. The new weight vector is determined by

wji(k + 1) = wji(k) − [Jᵀ J + μI]⁻¹ Jᵀ E   (25)

where, if μ = 0, Eq. 25 becomes a Newton step, and if μ → ∞ it becomes a gradient descent update. Different methods exist for determining μ [18]. This algorithm provides better solutions if the Jᵀ J matrix is ill-conditioned [19].
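One Levenberg–Marquardt step of Eq. 25 can be illustrated on a tiny scalar least-squares problem where the Jacobian is easy to write down (all data values are illustrative):

```python
def lm_step(w, xs, ys, mu):
    """One step of Eq. 25 for the scalar model y = w * x.
    J is the Jacobian of the residuals e_i = y_i - w * x_i, so J_i = -x_i."""
    jtj = sum(x * x for x in xs)                          # J^T J (scalar)
    jte = sum(-x * (y - w * x) for x, y in zip(xs, ys))   # J^T e
    return w - jte / (jtj + mu)                           # w - [J^T J + mu I]^-1 J^T e

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # data with exact fit at w = 2
w = 0.0
for _ in range(20):
    w = lm_step(w, xs, ys, mu=0.1)
print(round(w, 6))  # → 2.0
```

With a small μ the step is nearly a Gauss-Newton step; a large μ shrinks it toward a short gradient descent step, which is what makes the method robust when JᵀJ is ill-conditioned.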
3.2 Modeling Curved Parts Using ANN

In order to decrease the optimization time of the differential evolution (DE) method, the ANN is used in the calculation of the curved parts of the trajectories. The proposed ANN application for modeling the curved parts of the trajectories is shown in Fig. 5. The proposed ANN structures have 2 inputs, 1 or 2 hidden layers of 5 neurons each, and 1 output (2:5:1 and 2:5:5:1). The 2 inputs are the rotation angle (θ ∈ [1°, 150°]) and the translational velocity (vti ∈ [0, 1] m/s). The output is the remaining distance (αi) measured from the point on the linear path part where the rotation of the robot begins (Fig. 2). The mathematical calculation and geometrical shape of α1 are shown in Fig. 6. Using trigonometric relations, α1 can be calculated from the equations below:

b = √(x1² + y1²)

h = √(α1² − (b/2)²) = √(α1² − (x1² + y1²)/4)

tan((π − θ1)/2) = b/(2h) = √(x1² + y1²) / (2 √(α1² − (x1² + y1²)/4))

α1 = (1/2) √(x1² + y1²) √(1 + 1/tan²((π − θ1)/2))   (26)
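Equation 26 can be verified numerically; a sketch with an illustrative corner point (x1, y1) and turn angle θ1:

```python
import math

def alpha1(x1, y1, theta1):
    """Eq. 26: distance required for the spin, from the corner geometry."""
    b = math.hypot(x1, y1)
    return 0.5 * b * math.sqrt(1.0 + 1.0 / math.tan((math.pi - theta1) / 2) ** 2)

# 90-degree turn at the corner point (1, 1):
a = alpha1(1.0, 1.0, math.pi / 2)
print(round(a, 4))  # → 1.0
```

For this symmetric case the result can be checked by hand: b = √2, tan(π/4) = 1, so α1 = (√2/2)·√2 = 1.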
Fig. 6 Mathematical calculation and geometric shape of the α1
The training process uses a multi-layer feed-forward ANN architecture trained with back propagation [6]. The maximum iteration number, the minimum error goal and the minimum gradient are set to 30,000, 10⁻⁹ and 10⁻⁹, respectively.
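The LBG procedure of Sect. 2.3, used here to cluster the (θ, vt) training samples, can be sketched as follows (a minimal splitting-based implementation; the synthetic two-cluster data and all parameter values are illustrative):

```python
import random

def lloyd_iterate(data, codebook, eps):
    """Steps (2)-(5): partition, distortion test, centroid update."""
    d_prev = float('inf')
    while True:
        cells = [[] for _ in codebook]
        dist = 0.0
        for x in data:
            errs = [sum((a - b) ** 2 for a, b in zip(x, y)) for y in codebook]
            i = errs.index(min(errs))
            cells[i].append(x)
            dist += errs[i]
        dist /= len(data)
        if d_prev < float('inf') and (d_prev - dist) / max(dist, 1e-12) <= eps:
            return codebook
        d_prev = dist
        # Eq. 16: replace each reproduction vector by its cell centroid.
        codebook = [[sum(x[d] for x in cell) / len(cell)
                     for d in range(len(codebook[0]))] if cell else y
                    for cell, y in zip(cells, codebook)]

def lbg(data, n_levels, eps=1e-4, perturb=1e-3):
    """Linde-Buzo-Gray clustering with splitting initialization (Sect. 2.3)."""
    dim = len(data[0])
    centroid = [sum(x[d] for x in data) / len(data) for d in range(dim)]
    codebook = [centroid]
    while len(codebook) < n_levels:
        # Split every vector y into y + eps and y - eps, as described above.
        codebook = [[c + s * perturb for c in y] for y in codebook for s in (1, -1)]
        codebook = lloyd_iterate(data, codebook, eps)
    return codebook

random.seed(0)
# Two illustrative sample clusters in the (theta, vt) plane.
data = ([[random.gauss(0.3, 0.02), random.gauss(0.2, 0.02)] for _ in range(50)]
        + [[random.gauss(1.2, 0.02), random.gauss(0.8, 0.02)] for _ in range(50)])
book = lbg(data, n_levels=2)
print(sorted(round(c[0], 1) for c in book))  # two centroids, near theta 0.3 and 1.2
```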
4 Results and Discussion

4.1 Results

The error values of the ANN application are shown in Table 1. The relative error definition used for the comparisons in Table 1 is given in Eq. 27:

E = {εi | εi = (αdi − αri)/αri, i ∈ [1, n]}   (27)

The average of the absolute values of the elements of Eq. 27 is EO:

EO = (100/n) Σ_{i=1}^{n} |εi| %   (28)

EM is the maximum value of the relative error:

EM = max_i |100 εi| %   (29)

E5 represents the elements whose relative error is greater than 5%:

E5 = {εi | |100 εi| > 5, i ∈ [1, n]}   (30)
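The error measures of Eqs. 27-30 can be computed as follows (the sample α values are illustrative):

```python
def error_measures(alpha_desired, alpha_real):
    """EO (Eq. 28), EM (Eq. 29) and the E5 set (Eq. 30) from the errors of Eq. 27."""
    eps = [(d - r) / r for d, r in zip(alpha_desired, alpha_real)]
    eo = 100.0 * sum(abs(e) for e in eps) / len(eps)   # average relative error, %
    em = max(abs(100.0 * e) for e in eps)              # maximum relative error, %
    e5 = [e for e in eps if abs(100.0 * e) > 5.0]      # errors above 5 %
    return eo, em, e5

alpha_d = [1.00, 2.12, 3.30]   # desired (model) outputs
alpha_r = [1.02, 2.00, 3.29]   # recorded (real) outputs
eo, em, e5 = error_measures(alpha_d, alpha_r)
print(round(eo, 3), round(em, 3), len(e5))
```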
The results of the different ANN training algorithms are shown in Table 1. The algorithms whose names begin with LBG use the Linde Buzo Gray clustering method to obtain the training data; the remaining algorithms select the training data linearly from the data space. In all LBG-ANN models the average errors (EO) are smaller than the results of the ANN models that do not use any clustering. The comparison of the trajectory tracked by the Nomad 200 and the LBG-ANN Levenberg–Marquardt trajectory is shown in Fig. 7. Table 1 lists the test phase errors of the different network architectures after 30,000 iterations.
Table 1 Test phase errors for the different network architectures after 30,000 iterations

                                      2:5:1 architecture           2:5:5:1 architecture
                                      EO %    EM %    E5           EO %    EM %    E5
ANN Levenberg–Marquardt               0.65    12.71   276          0.12    1.7     0
LBG-ANN Levenberg–Marquardt           0.13    8.30    31           0.034   5.97    18
ANN Quasi-Newton                      1.32    19.58   374          9       50      15164
LBG-ANN Quasi-Newton                  1.01    23.26   574          0.26    10.34   67
ANN resilient                         2.4     32.2    3252         1.17    16.68   666
LBG-ANN resilient                     2.24    41.34   4250         1.13    23.56   763
ANN scaled conjugate gradient         0.54    9.73    91           0.24    5.53    3
LBG-ANN scaled conjugate gradient     0.426   15.56   268          0.12    7.09    22
Fig. 7 Comparison of the Nomad 200 tracking trajectory and the LBG-ANN Levenberg–Marquardt trajectory
4.2 Discussion

In this study, a new LBG-clustered ANN algorithm with different training methods is proposed for the motion equation calculations of the mobile robot Nomad 200. The proposed LBG-ANN method obtains the solutions for the MR in a very short time, in contrast to the very time-consuming Maclaurin series solutions. The average relative errors (EO) of the LBG-ANN models are smaller than those of the standard ANN models, and the minimum average error is obtained with the 2:5:5:1 LBG-ANN architecture. Compared to previous work, this study makes two major contributions. The first is that different training algorithms are used; the comparison results show that the Levenberg–Marquardt algorithm is superior. The second is the use of the LBG clustering technique for forming the training data, with which clearly more accurate results are achieved.
References

1. Zefran, M.: Continuous methods for motion planning. PhD thesis, University of Pennsylvania, Philadelphia (1996)
2. Zhang, J.; Knoll, A.: An enhanced optimization approach for generating smooth robot trajectories in the presence of obstacles. In: Proceedings of the European Chinese Automation Conference, pp. 263–268. London (1995)
3. Weber, H.: A motion planning and execution system for mobile robots driven by stepping motors. Robotics Auton. Syst. 33, 207–221 (2000)
4. Reuter, J.: Mobile robots trajectories with continuously differentiable curvature: an optimal control approach. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 38–43. Victoria, B.C. (1998)
5. Scheuer, A.; Xie, M.: Continuous-curvature trajectory planning for maneuverable non-holonomic robots. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1675–1680. Kyongju (1999)
6. Aydın, S.: Optimal motion planning with evolutionary methods for mobile robots. PhD thesis, Istanbul Technical University, Istanbul (2003)
7. Borenstein, J.; Koren, Y.: The vector field histogram - fast obstacle avoidance for mobile robots. IEEE Trans. Robotics Autom. 7(3), 278–288 (1991)
8. Fox, D.; Burgard, W.; Thrun, S.: The dynamic window approach to collision avoidance. IEEE Robotics Autom. Mag. 4(1), 23–46 (1997)
9. Aydın, S.; Temeltaş, H.: Planning optimal trajectories for mobile robots using an evolutionary method with fuzzy components. In: Wang, L.; Chen, K.; Ong, Y.S. (eds.) Advances in Natural Computation, pp. 703–712. Springer, LNCS 3612, ISSN 0302-9743 (2005)
10. Ceylan, R.; Ozbay, Y.: Comparison of FCM, PCA and WT techniques for classification of ECG arrhythmias using artificial neural network. Expert Syst. Appl. 33, 286–295 (2007)
11. Uykan, Z.; Güzeliş, C.; Çelebi, M.E.; Koivo, H.N.: Analysis of input–output clustering for determining centers of RBFN. IEEE Trans. Neural Netw. 11(4), 851–858 (2000)
12. Awad, H.A.; Al-Zorkany, M.A.: Mobile robot navigation using local model networks. Int. J. Inf. Technol. 1(2), 58–65 (2005)
13. Kreyszig, E.: Advanced Engineering Mathematics. Wiley, Singapore (1993)
14. Gray, R.M.: Vector quantization. IEEE ASSP Mag. 1, 4–29 (1984)
15. Idris, F.M.; Panchanathan, S.: Spatio-temporal indexing of vector quantized video sequences. IEEE Trans. Circuits Syst. Video Technol. 7, 728–735 (1997)
16. Lin, Y.C.; Tai, S.C.: A fast Linde–Buzo–Gray algorithm in image vector quantization. IEEE Trans. Circuits Syst. II Analog Digit. Signal Process. 45, 432–445 (1998)
17. Sağıroğlu, Ş.; Beşdok, E.; Erler, M.: Control chart pattern recognition using artificial neural networks. Turkish J. Electr. Eng. 8(2), 137–147 (2000)
18. Bajramovic, F.; Gruber, C.; Sick, B.: A comparison of first- and second-order training algorithms for dynamic neural networks. In: IEEE International Joint Conference on Neural Networks, pp. 25–29. Budapest (2004)
19. Jang, J.S.R.; Sun, C.T.; Mizutani, E.: Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence. Prentice Hall, New Jersey (1997)