Neural Computing and Applications https://doi.org/10.1007/s00521-018-3440-2
ORIGINAL ARTICLE
A novel hybrid metaheuristic algorithm for model order reduction in the delta domain: a unified approach

Souvik Ganguli1 · Gagandeep Kaur1 · Prasanta Sarkar2

Received: 12 May 2017 / Accepted: 16 March 2018
© The Natural Computing Applications Forum 2018
Abstract
Delta operator parameterization provides a unified framework for the modeling, analysis and design of discrete-time systems, in which the resulting model converges to its continuous-time counterpart at the high sampling limit. Capitalizing on this unique property of the delta operator, a new hybrid algorithm combining the gray wolf optimizer and the firefly algorithm is proposed for model order reduction of high-dimensional linear discrete-time systems. It is shown that the reduced discrete-time model inherits all the dominant characteristics of the higher-order discrete-time model and, with increasing sampling frequency, converges to the continuous-time reduced model. The effectiveness of the proposed method is illustrated with the help of an example.

Keywords Model order reduction (MOR) · Pseudo random binary sequence (PRBS) · Delta operator modeling · Hybrid gray wolf optimizer (HGWO)

Abbreviations
ABC Artificial bee colony
bGWO Binary gray wolf optimizer
DE Differential evolution
EPDGWO Evolutionary population dynamics gray wolf optimizer
FA Firefly algorithm
GA Genetic algorithm
GWO Gray wolf optimizer
HS Harmony search
HGWO Hybrid gray wolf optimizer
IAE Integral of absolute error
ITAE Integral of time-weighted absolute error
ISE Integral of square error
ITSE Integral of time-weighted square error
IWO Invasive weed optimization
MEMS Micro-electro-mechanical system
MIMO Multi-input multi-output
MOR Model order reduction
PRBS Pseudo random binary sequence
PSO Particle swarm optimization
SISO Single-input single-output
SSE Sum of square error

Correspondence: Souvik Ganguli, [email protected]

1 Department of Electrical and Instrumentation Engineering, Thapar Institute of Engineering and Technology, Patiala, Punjab 147004, India
2 Department of Electrical Engineering, National Institute of Technical Teachers’ Training and Research, Kolkata, West Bengal 700106, India
1 Introduction

Model order reduction (MOR) provides a systematic means for the modeling, analysis, design and implementation of large-scale systems and finds extensive applications in computational physics [1], MEMS devices [2] and chemical processes [3], to mention a few. Reduction of large-scale systems has been explored by researchers across the world for several decades, and numerous methods have been advocated to obtain reduced models in both the time and frequency domains. Further, the reduction techniques developed for SISO systems have also been extended to MIMO systems, and over time many composite methods have matured [4, 5]. In recent times, several evolutionary algorithms and their variants [6–13] have been used, either alone or in conjunction with classical techniques, for order reduction of both continuous- and discrete-time systems. The essence of these evolutionary-based techniques in order reduction is normally the minimization of an error criterion, which
is usually the integral of square error (ISE) between the step responses of the higher-order and reduced-order systems [14, 15]. However, a step input is not a sound benchmark for identification of linear time-invariant systems, since the fast transients are not distinctly captured. Hence, response matching is carried out in this paper using a pseudo random binary sequence (PRBS). PRBS has manifold advantages: it possesses white-noise-like properties, has a low crest factor, and its frequency content can be altered by changing the sampling rate. The sum of square error (SSE) performs equally well as ISE and is adopted in this paper as the criterion for estimating the unknown parameters of the reduced-order model. In addition, hybrid methods have proved their mettle in developing reduced models with better accuracy and faster convergence [16, 17]; unfortunately, neither of those papers reports results obtained with the parent heuristics. The present paper compares the proposed hybrid topology not only with its parent heuristic techniques but also with some standard evolutionary approaches, in addition to the latest reference papers in this field of study. Moreover, the majority of researchers have used classical techniques to obtain the denominator polynomial, to ensure stability of the reduced model, while the coefficients of the numerator polynomial are obtained using metaheuristic approaches. Even Sokolo et al. [18] applied a constraint to assure stability of the reduced-order model using the HS algorithm, while Khademi et al. [19] added an extra matrix-inequality constraint to preserve the minimum phase characteristic of the reduced-order system. No research article has yet considered both constraints together.
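The PRBS excitation advocated above is conventionally generated with a maximal-length linear feedback shift register (LFSR). The sketch below, in Python for illustration, uses a register length and tap positions that are illustrative choices, not taken from the paper:

```python
import numpy as np

def prbs(n_bits=10, taps=(10, 7), seed=1, length=None):
    """Generate a +/-1 PRBS from a maximal-length LFSR.
    Taps (10, 7) correspond to the primitive polynomial x^10 + x^7 + 1,
    giving a period of 2**10 - 1 = 1023 samples."""
    if length is None:
        length = 2 ** n_bits - 1
    state = [(seed >> i) & 1 for i in range(n_bits)]
    out = np.empty(length, dtype=int)
    for k in range(length):
        out[k] = state[-1]                            # output bit
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [feedback] + state[:-1]               # shift register
    return 2 * out - 1                                # map {0,1} -> {-1,+1}

u = prbs()
print(len(u), u[:10])
```

Because |u(k)| = 1 everywhere, the crest factor (peak over RMS) is exactly 1, which is the low-crest-factor property mentioned above; the usable frequency band shifts with the chosen sampling rate.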
Hence, to obtain better reduced-order models that assure both stability and the minimum phase characteristic through a metaheuristic technique, some form of constrained optimization is needed. This paper therefore handles both the minimum phase characteristic and stability as constraints in determining the unknown parameters of the reduced-order model. The need for a hybrid metaheuristic algorithm to solve this hard optimization task thus seems truly justified as well. Further, order reduction through metaheuristic algorithms does not yield a unique solution, and a single test run of such an algorithm is not satisfactory; multiple test runs have to be performed to validate a model. Though a few papers, in particular [20, 21], mention the use of multiple test runs in the MOR problem, they did not carry out the Wilcoxon test [22] to verify whether the results obtained are significant in comparison to
other standard heuristic approaches. So far, no research article has highlighted this issue for MOR problems with metaheuristic approaches; this gap is addressed in the present paper. Sensitivity analysis of the parameter settings is one of the key means of establishing the robustness of a new algorithm; such an analysis is carried out here to study the robustness of the proposed approach, which adds further value to the novel contribution of this paper. In addition, even with a hybrid approach, the continuous-time formulation converges more slowly in determining the unknown parameters of the reduced-order model than its discrete-time counterpart. Moreover, to implement continuous-time models in fast digital hardware for control, it is often enticing to develop a discrete-time model. However, the current practice of using the shift operator and its associated transfer function description does not support high-speed digital data and leads to numerical ill-conditioning. Discrete-time representation using the delta operator is a significant milestone in the systems and control literature: it not only circumvents the above problem but also supports very high-speed digital computation with enhanced numerical stability [23]. Thus, this paper proposes modeling of linear dynamical systems in the delta domain to extract the benefits of superior numerical robustness in computation, good finite-word-length coefficient representation for implementation and improved numerical conditioning at a high sampling frequency. In the process, modeling in the delta domain unifies continuous-time analysis with the discrete-delta domain at a high sampling frequency. This is the most vital contribution of this research work.
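The unification claim rests on the behavior of the δ-operator, δx(t) = (x(t + Δ) − x(t))/Δ, which approaches dx/dt as the sampling period Δ shrinks. A quick numerical check on a sample signal (x(t) = sin t, chosen here purely for illustration):

```python
import numpy as np

def delta_op(x, t, dt):
    """Apply the delta operator to a signal x at time t with sampling period dt."""
    return (x(t + dt) - x(t)) / dt

# as dt shrinks, the delta operator output approaches the true derivative cos(t)
t = 0.5
for dt in (0.1, 0.01, 0.001):
    approx = delta_op(np.sin, t, dt)
    print(f"dt = {dt:>6}: delta = {approx:.6f}, d/dt = {np.cos(t):.6f}")
```

The error shrinks linearly with Δ, illustrating why delta-domain models converge to their continuous-time counterparts at high sampling frequency.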
An upcoming metaheuristic algorithm coined GWO simulates the leadership hierarchy and hunting behavior of gray wolves in nature [24] and has found applications in diverse engineering disciplines [25–27]. In the recent past, several variants such as EPDGWO [28], bGWO [29] and levy-flight-based GWO [30], as well as hybridizations with DE [31], PSO [32] and GA [33], have been developed to improve upon the performance of basic GWO. Hybridization is an inherent feature of high-performing algorithms these days; pure algorithms often cannot deliver an optimal solution within an acceptable time frame and are hence frequently inferior to hybridizations [34]. FA, developed by Yang [35], drew inspiration from the flashing behavior of fireflies found in tropical countries. The algorithm is suitable both for global search [36] and for local search [34] when hybridized with other algorithms. Despite the numerous reduction techniques with metaheuristic algorithms reported in the literature, there is still a need for a global reduction method. Thus,
researchers are constantly making efforts toward a globally optimal reduction method applicable to all types of systems, as well as toward accurate reduced-order models obtained with minimum computational effort. In this paper, a unified framework for model order reduction is developed in the complex delta domain, employing the benefits of GWO and FA in a hybrid framework with stability and minimum phase features treated as constraints. FA is used to initialize the solution vector, while GWO is employed to fine-tune the solutions in the proposed hybrid technique. The rest of the paper is organized as follows. The problem of model order reduction in the delta domain is formulated in Sect. 2. The basics of FA and GWO are introduced in Sect. 3. Section 4 describes the hybrid HGWO algorithm, and Sect. 5 presents the proposed approach for MOR in the delta domain. Simulation results are discussed in Sect. 6, and the paper closes with conclusions and directions for future work.
2 Problem formulation

This section presents an alternative parameterization of discrete-time systems, the so-called δ-operator parameterization. The δ-operator [23] is defined in the time domain as

δ = (q − 1)/Δ    (1)

where Δ denotes the sampling period and q is the forward shift operator. Operating δ on a signal x(t) gives

δx(t) = [x(t + Δ) − x(t)]/Δ    (2)

It is straightforward to see that

lim_{Δ→0} δx(t) = dx(t)/dt    (3)

which indicates the close relationship between the discrete-time δ-operator and the continuous-time differential operator d/dt at a high sampling rate. A similar relation exists in the complex domain as well. The delta transform variable γ is defined as

γ = (z − 1)/Δ    (4)

Let the delta transfer function of a higher-order single-input single-output (SISO) stable discrete-time system be described by

G_δ(γ) = N_{k−1}(γ)/D_k(γ) = ( Σ_{i=0}^{k−1} b_i γ^i ) / ( Σ_{i=0}^{k} a_i γ^i )    (5)

where a_i and b_i are the coefficients of the denominator and numerator polynomials, respectively. It is also assumed that G_δ(γ) is irreducible, i.e., N_{k−1}(γ) and D_k(γ) have no factors in common. The objective is to obtain a reduced system of order r (r < k) that preserves all the necessary features of the higher-order system, represented as

G_Rδ(γ) = N_{r−1}(γ)/D_r(γ) = ( Σ_{i=0}^{r−1} d_i γ^i ) / ( Σ_{i=0}^{r} c_i γ^i )    (6)

3 Background study

In this section, the two parent algorithms, FA and GWO, of which the hybrid approach is constituted, are briefed to set up an appropriate background for the proposed hybrid method, which is then applied to solve the problem of model order reduction in the delta domain.

3.1 Firefly algorithm (FA)

Xin-She Yang developed the firefly algorithm [35] with the following assumptions:

• All fireflies are unisex, so one firefly is attracted to other fireflies regardless of their sex.
• Attractiveness is proportional to brightness; thus, for any two flashing fireflies, the less bright one moves toward the brighter one. Both attractiveness and brightness decrease as the distance between fireflies increases. If no firefly is brighter than a particular one, it moves randomly.
• The brightness or light intensity of a firefly is determined by the landscape of the objective function to be optimized.
There are two issues in FA, viz. the variation of light intensity and the formulation of attractiveness. For simplicity, it is assumed that the attractiveness of a firefly is determined by its brightness or light intensity, which in turn is associated with the encoded objective function. In the simplest case, for maximization problems, the brightness I(x) of a firefly at a particular location x can be chosen as I(x) ∝ F(x). The attractiveness β is relative; it is seen in the eyes of the beholder, i.e., judged by the other fireflies, and so varies with the distance r_ij between fireflies i and j. As light intensity decreases with distance from its source, and light is also absorbed by the medium, the attractiveness should vary with the degree of absorption. The light intensity I(r) varies with the distance r monotonically and exponentially as
I(r) = I₀ e^(−γr)    (7)

where I₀ denotes the original light intensity and γ is the light absorption coefficient. As a firefly's attractiveness is proportional to the light intensity seen by adjacent fireflies, the attractiveness β of a firefly is defined as

β = β₀ e^(−γr²)    (8)

where β₀ is the attractiveness at r = 0. It is worth pointing out that the exponent γr² can be replaced by other functions such as γr^m with m > 0. The distance between any two fireflies i and j at positions x_i and x_j is the Cartesian distance

r_ij = ‖x_i − x_j‖ = √( Σ_{d=1}^{n} (x_{i,d} − x_{j,d})² )    (9)

where x_{i,d} is the dth component of the spatial coordinate x_i of the ith firefly. The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by

x_i = x_i + β₀ e^(−γ r_{ij}²)(x_j − x_i) + α(rand − 0.5)    (10)
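Equations (8)–(10) translate directly into code. The sketch below, in Python for illustration, applies repeated FA sweeps to a toy sphere objective standing in for the paper's SSE criterion; the parameter values follow Table 2, while the objective and bounds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def firefly_sweep(X, fitness, beta0=0.2, gamma=1.0, alpha=0.25):
    """One sweep of FA: every firefly i moves toward each brighter firefly j
    per Eq. (10). Minimization: a lower objective value J means brighter."""
    n = len(X)
    J = np.array([fitness(x) for x in X])
    for i in range(n):
        for j in range(n):
            if J[j] < J[i]:                                  # j is brighter
                r2 = float(np.sum((X[i] - X[j]) ** 2))       # r_ij^2, Eq. (9)
                X[i] = (X[i]
                        + beta0 * np.exp(-gamma * r2) * (X[j] - X[i])  # attraction, Eq. (8)
                        + alpha * (rng.random(X[i].shape) - 0.5))      # random step
                J[i] = fitness(X[i])
    return X, J

# toy sphere objective standing in for the SSE criterion of the paper
X = rng.uniform(-2.0, 2.0, size=(20, 3))
for _ in range(30):
    X, J = firefly_sweep(X, lambda x: float(np.sum(x ** 2)))
print("best J:", J.min())
```

Note that the brightest firefly never satisfies the move condition and therefore anchors the swarm, while the α-term keeps the rest exploring.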
3.2 Gray wolf optimizer (GWO)

GWO is a population-based metaheuristic algorithm mimicking the leadership hierarchy and hunting mechanism of gray wolves found in the remote areas of Eurasia and North America [24]. Gray wolves are apex predators, at the top of the food chain. They live in packs of 5–12 members on average, all of which maintain the strict social hierarchy shown in Fig. 1. Four types of gray wolves, alpha, beta, delta and omega, are employed for simulating the leadership hierarchy. The alpha (α) is the most dominant member; its subordinates are the beta (β) and delta (δ), which help control the majority of wolves in the hierarchy, the omegas (ω). The ω wolves have the lowest rank in the social hierarchy.

Fig. 1 Social hierarchy of gray wolves

In the GWO algorithm, the hunt is guided by α, β and δ, and the ω solutions follow these three wolves. During hunting, the gray wolves encircle the prey. The mathematical model of the encircling behavior is

D = |C · X_p(t) − X(t)|    (11)
X(t + 1) = X_p(t) − A · D    (12)

where t indicates the current iteration, A and C are coefficient vectors, X_p denotes the position vector of the prey, and X represents the position vector of a gray wolf. The vectors A and C are computed using the following equations:

C = 2 · rand₂    (13)
A = 2a · rand₁ − a    (14)

where a is linearly decreased from 2 to 0 over the course of iterations, while rand₁ and rand₂ denote random numbers in the range (0, 1).

The hunting operation is usually guided by the alpha wolf; the beta and delta wolves occasionally participate. Thus, the mathematical model of the hunting behavior assumes that the alpha, beta and delta wolves have better knowledge of the potential location of the prey. Hence, the first three best solutions acquired are saved, and the other search agents are obliged to update their positions according to the locations of these best search agents:

D_α = |C₁ · X_α − X|    (15)
D_β = |C₂ · X_β − X|    (16)
D_δ = |C₃ · X_δ − X|    (17)

Thus

X₁ = X_α − A₁ · D_α    (18)
X₂ = X_β − A₂ · D_β    (19)
X₃ = X_δ − A₃ · D_δ    (20)

and finally

X(t + 1) = (X₁ + X₂ + X₃)/3    (21)
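Equations (11)–(21) can be assembled into a compact loop. The sketch below, a minimal Python illustration on a toy sphere objective (the objective, bounds and dimension are assumptions, not the paper's MOR problem), follows the update rules above:

```python
import numpy as np

rng = np.random.default_rng(1)

def gwo(fitness, dim=3, n_agents=20, max_iter=50, lb=-2.0, ub=2.0):
    """Minimal GWO loop per Eqs. (11)-(21): every agent moves toward the
    alpha, beta and delta wolves while 'a' decays linearly from 2 to 0."""
    X = rng.uniform(lb, ub, size=(n_agents, dim))
    for t in range(max_iter):
        fit = np.array([fitness(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]   # three best wolves (copies)
        a = 2.0 * (1 - t / max_iter)                  # linear decrease 2 -> 0
        for i in range(n_agents):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2.0 * a * rng.random(dim) - a     # Eq. (14)
                C = 2.0 * rng.random(dim)             # Eq. (13)
                D = np.abs(C * leader - X[i])         # Eqs. (15)-(17)
                new += leader - A * D                 # Eqs. (18)-(20)
            X[i] = np.clip(new / 3.0, lb, ub)         # Eq. (21)
    fit = np.array([fitness(x) for x in X])
    return X[np.argmin(fit)], float(fit.min())

# toy sphere objective standing in for the paper's SSE criterion
best_x, best_f = gwo(lambda x: float(np.sum(x ** 2)))
print("best J:", best_f)
```

With |A| shrinking as a decays, early iterations explore (|A| > 1 possible) and late iterations exploit around the three leaders, which is the exploration–exploitation balance the text describes.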
Fig. 2 Pseudocode for model order reduction algorithm using HGWO
Begin:
  Initialize the algorithm parameters:
    Max_iter: maximum number of iterations
    n: number of fireflies
    γ: the light absorption coefficient
    β0: the initial brightness of a given firefly
    D: the search domain
  Define the objective function f(X) where X = (x1, x2, …, xd)^T
  Generate the initial population of fireflies Xi (i = 1, 2, …, n)
  Determine the light intensity Ii of the ith firefly Xi via the fitness function computed using the following steps:
    1. Discretize the higher-order continuous-time model by incorporating sample and hold with the input and obtain the discrete-time model in the complex delta domain
    2. Assume the order of the reduced model and its structure as per Eq. (6)
    3. Excite the higher-order and reduced-order systems by a PRBS sequence
    4. Compute the fitness function as per Eq. (22a) subject to Eq. (22b)
  t = 1
  while t < Max_iter
    for i = 1 : n
      for j = 1 : n
        if (Ij > Ii) then
          Move firefly i towards j
        end if
        Vary attractiveness with distance r
        Evaluate new solutions and update light intensity
      end for j
    end for i
    Rank the fireflies and find the current best
    t = t + 1
  end while
  Consider the best solution obtained by FA as the initial guess for GWO
  Use the GWO algorithm to search around the global best found by FA
  Output the solution obtained from GWO
  Post-process results and visualization
End
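The relay structure of Fig. 2 — FA explores, its best solution seeds GWO, which then exploits — can be sketched as below. To keep the sketch short, crude random-search stand-ins replace the full FA and GWO update rules, so only the hand-off pattern is faithful; the objective is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def fa_phase(fitness, dim, n=20, iters=25, lb=-2.0, ub=2.0):
    """Global exploration stage (crude FA stand-in): best-of-population
    search that drifts the swarm around the incumbent best."""
    X = rng.uniform(lb, ub, size=(n, dim))
    best = min(X, key=fitness)
    for _ in range(iters):
        X = np.clip(best + rng.normal(0.0, 0.5, size=(n, dim)), lb, ub)
        cand = min(X, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best

def gwo_phase(fitness, x0, iters=25):
    """Local exploitation stage (crude GWO stand-in): shrinking random
    search around the FA result, mirroring 'a' decaying from 2 to 0."""
    best = np.array(x0, dtype=float)
    for t in range(iters):
        step = 2.0 * (1 - t / iters) * 0.1
        cand = best + rng.normal(0.0, step, size=best.shape)
        if fitness(cand) < fitness(best):
            best = cand
    return best

f = lambda x: float(np.sum(x ** 2))   # toy objective standing in for SSE
x_fa = fa_phase(f, dim=3)             # relay step 1: FA explores globally
x_best = gwo_phase(f, x_fa)           # relay step 2: GWO refines FA's best
print(f(x_fa), "->", f(x_best))
```

This is the low-level relay-type coupling described in Sect. 4: the two stages run one after another, and the second can only improve on the first's result.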
Table 1 Test system and its delta operator representation

Plant model: Test system

Transfer function in s-domain:
G(s) = (18s⁷ + 514s⁶ + 5982s⁵ + 36380s⁴ + 122664s³ + 222088s² + 185760s + 40320) / (s⁸ + 36s⁷ + 546s⁶ + 4536s⁵ + 22449s⁴ + 67284s³ + 118124s² + 109584s + 40320)

Transfer function in delta domain for sampling time of 0.001 s:
G(γ) = (20γ⁷ + 510γ⁶ + 5930γ⁵ + 35970γ⁴ + 121020γ³ + 218680γ² + 182590γ + 39600) / (γ⁸ + 40γ⁷ + 540γ⁶ + 4500γ⁵ + 22210γ⁴ + 66430γ³ + 116400γ² + 107790γ + 39600)
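A delta-domain model like the one in Table 1 can be obtained by ZOH-discretizing G(s) and substituting z = Δγ + 1 from Eq. (4). The sketch below does this in Python with SciPy for a small illustrative system (not the paper's eighth-order plant):

```python
import numpy as np
from scipy.signal import cont2discrete

def delta_tf(num, den, dt):
    """ZOH-discretize G(s), then substitute z = dt*γ + 1 (Eq. (4)) into the
    z-domain polynomials; coefficients are returned with a monic denominator."""
    num_z, den_z, _ = cont2discrete((num, den), dt, method='zoh')
    z_of_gamma = np.poly1d([dt, 1.0])                        # z = Δγ + 1
    num_g = np.poly1d(np.ravel(num_z))(z_of_gamma).coeffs    # polynomial composition
    den_g = np.poly1d(np.ravel(den_z))(z_of_gamma).coeffs
    return num_g / den_g[0], den_g / den_g[0]

# toy 2nd-order system G(s) = 1/(s^2 + 3s + 2), sampled fast
num_g, den_g = delta_tf([1.0], [1.0, 3.0, 2.0], dt=1e-4)
print("delta-domain denominator:", den_g)   # approaches [1, 3, 2] as dt -> 0
```

The delta-domain denominator tending to the continuous-time coefficients at small Δ is exactly the unification property Table 1 illustrates; in the shift (z) domain the same coefficients cluster near z = 1 and become ill-conditioned instead.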
The gray wolves complete the hunt by attacking the prey once it stops moving. In this phase, the value of a is decreased and thereby the fluctuation range of A is reduced. When A takes random values in the range [−1, 1], a search agent's next position can lie anywhere between its current position and the position of the prey.

4 HGWO algorithm

Talbi formulated a taxonomy for hybrid metaheuristic algorithms according to which two algorithms can be hybridized at high or low level, with relay or co-evolutionary coupling, as homogeneous or heterogeneous [37]. In this paper, GWO is hybridized with FA using a low-level, relay-type, heterogeneous hybrid architecture. The hybrid is low-level in the sense that the functionalities of both algorithms are retained in the hybrid framework; it is relay-type because the two algorithms are executed one after the other; and since two distinct algorithms are combined to yield the desired results, it is a heterogeneous hybrid. The main rationale behind hybridization is to overcome the demerits of the individual component algorithms and accomplish an improved version. It is also required to establish the strength of the proposed algorithm in achieving an optimal solution within an acceptable time frame and, finally, to balance exploration and exploitation efficiently: exploration probes new probable outcomes, while exploitation intensifies the current solution to make it more admirable. A new hybrid algorithm, termed HGWO, is presented in this paper through the fusion of GWO and FA. The balance between exploration and exploitation is realized with FA employed as a global optimizer, while GWO executes local search to manifest the exploitation feature. The search begins with FA by initializing a group of random agents. The computation continues for a certain number of iterations to obtain the global best position in the solution space. The result obtained by FA is then taken as the initial starting point for GWO, and the search switches to GWO to speed up convergence to the global optimum. Thus, the hybrid algorithm locates an optimum more precisely.

5 Proposed method

A hybrid intelligent technique is used to obtain the numerator and denominator coefficients of the reduced-order system by minimizing the following fitness function:

J = Σ_{i=1}^{N} [y_δ(i) − y_Rδ(i)]²    (22a)

subject to

d_i > 0 and c_i > 0    (22b)

where y_δ and y_Rδ are the output samples of the original and reduced-order systems, respectively, subject to a PRBS input in the delta domain, and d_i and c_i denote the coefficients of the numerator and denominator polynomials of the reduced system. The first inequality constraint rules out right-half-plane zeros in the reduced-order model, whereas the second helps the system satisfy the Routh–Hurwitz stability condition [38]. Though the reduced-order system may be of any type/order, for the sake of simplicity the following second-order structure is considered in this paper:

G_Rδ(γ) = (d₀ + d₁γ)/(c₀ + c₁γ + c₂γ²)    (23)

Normally, as per the literature, c₂ = 1 and d₀ = c₀. The three unknown parameters of the reduced-order model of the desired fixed structure are thus calculated by minimizing the performance index J in Eq. (22) with the above-mentioned hybrid optimization technique. In a metaheuristic approach to model order reduction, the primary objective is to simplify the higher-order discrete-time system by determining an optimal reduced-order model through the minimization of the error estimate using PRBS
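Equations (22)–(23) can be prototyped as follows. The delta-domain simulator below uses a controllable-canonical state space with the exact δ-recursion x(k + 1) = x(k) + Δ(Ax + Bu); the "true" model coefficients and the binary excitation are hypothetical round numbers for illustration, not the paper's identified values:

```python
import numpy as np

def simulate_delta(num, den, u, dt):
    """Output of G(γ) = (num[0] + num[1]·γ)/(den[0] + den[1]·γ + γ²) for input u,
    realized in controllable canonical form and stepped with the δ-recursion."""
    c0, c1 = den
    A = np.array([[0.0, 1.0], [-c0, -c1]])
    B = np.array([0.0, 1.0])
    C = np.array(num, dtype=float)
    x = np.zeros(2)
    y = np.empty(len(u))
    for k, uk in enumerate(u):
        y[k] = C @ x
        x = x + dt * (A @ x + B * uk)   # x(k+1) = x(k) + Δ·(Ax + Bu)
    return y

def fitness(params, y_ref, u, dt, penalty=1e6):
    """Eq. (22a) SSE with the Eq. (22b) constraints as a rejection penalty;
    params = (d1, c0, c1), with c2 = 1 and d0 = c0 fixed as in Eq. (23)."""
    d1, c0, c1 = params
    if min(d1, c0, c1) <= 0:             # Eq. (22b): d_i > 0 and c_i > 0
        return penalty
    y = simulate_delta((c0, d1), (c0, c1), u, dt)
    return float(np.sum((y_ref - y) ** 2))

rng = np.random.default_rng(0)
u = rng.choice([-1.0, 1.0], size=500)     # PRBS-like binary excitation
y_ref = simulate_delta((4.0, 18.0), (4.0, 8.0), u, dt=0.001)  # hypothetical "plant"
print(fitness((18.0, 4.0, 8.0), y_ref, u, 0.001))   # zero at the true parameters
print(fitness((18.0, 4.0, 9.0), y_ref, u, 0.001))   # positive when mismatched
```

Any metaheuristic (FA, GWO or the HGWO relay) can minimize this `fitness` directly; the penalty return makes infeasible candidates uncompetitive without extra machinery.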
Table 2 Algorithm parameters

GA: search agents 20; maximum iterations 50; crossover percentage 0.8; mutation percentage 0.2; selection pressure 10
PSO: search agents 20; maximum iterations 50; inertia weight 1 → 0.7; random velocity rule weight lower bound 0, upper bound 2
DE: search agents 20; maximum iterations 50; crossover factor 0.8; scaling factor 0.5
ABC: search agents 20; maximum iterations 50; onlooker bees 10; employed bees 10; scout number 1
HS: search agents 20; maximum iterations 50; harmony memory size 20; HMCR 0.9; pitch adjustment rate 0.1
IWO: search agents 20; maximum iterations 50; minimum number of seeds 0; initial standard deviation 0.5; final standard deviation 0.001
FA: search agents 20; maximum iterations 50; light absorption coefficient γ = 1; initial brightness β₀ = 0.2; randomness α = 0.25
GWO: search agents 20; maximum iterations 50; a: 2 → 0 (linear decrease); A = 2a·rand₁ − a; C = 2·rand₂
HGWO: search agents 20; maximum iterations 50; a: 2 → 0 (linear decrease); A = 2a·rand₁ − a; C = 2·rand₂; γ = 1; β₀ = 0.2; α = 0.25
Table 3 Statistical analysis of objective function (J), test system G(γ)

Methods | Best value | Worst value | Average | Standard deviation
HGWO | 5.2381e-09 | 3.1347e-08 | 8.9835e-09 | 5.7324e-09
FA | 4.4976e-06 | 4.5983e-05 | 2.1390e-05 | 1.1379e-05
GWO | 1.4328e-06 | 8.2030e-05 | 3.2639e-05 | 2.2477e-05
GA | 1.0443e-06 | 3.3641e-05 | 1.2200e-05 | 1.0450e-05
PSO | 9.0286e-06 | 5.3967e-05 | 2.1459e-05 | 1.2897e-05
DE | 1.0592e-05 | 7.9838e-05 | 3.4968e-05 | 2.5596e-05
ABC | 1.0557e-05 | 8.7118e-05 | 3.0582e-05 | 1.9846e-05
HS | 1.9728e-05 | 8.9437e-05 | 4.6308e-05 | 2.7494e-05
IWO | 8.5537e-06 | 8.0793e-05 | 4.0768e-05 | 2.2028e-05

response matching. Thus, the coefficients of the numerator and denominator of the reduced-order model are treated as free parameters for optimization. The coefficients, however, have to be chosen so as to satisfy both the minimum phase characteristic and the Routh–Hurwitz stability criterion. Owing to this, constraints are added to the fitness function so that stability and minimum phase features are achieved. The final coefficients are those for which the SSE value is optimal; the method thus provides inbuilt stability- and minimum-phase-preserving features. Delta operator modeling facilitates unification of continuous- and discrete-time domain results at a high sampling frequency. The steps of a generalized metaheuristic method for model order reduction in the delta domain are thus provided below:

Step 1 Initialize the user-defined algorithm parameters
Step 2 Initialize the population considering the upper and lower bounds
Step 3 Calculate the initial fitness of each search agent by the following steps:
a. Discretize the higher-order continuous model by incorporating sample and hold and obtain the discrete-time model in the complex delta domain (γ)
b. Assume the order of the reduced model and its structure
c. Excite the higher-order and reduced-order systems by a PRBS sequence
d. Compute the fitness function SSE with constraints
Step 4 Compare the new fitness function value with the previously computed value and update the fitness function
Step 5 Repeat Step 4 until the termination condition is met
Step 6 Post-process results and visualization

The hybrid technique is applied to find the reduced-order model parameters. The initial step before applying any metaheuristic algorithm to a typical MOR problem is to choose the upper and lower bounds for each decision variable. As per the literature, the bounds on the parameters are normally taken as the negative and positive values of the highest coefficient of either the numerator or the denominator polynomial of the original higher-order system in the respective domains. However, this choice may not yield satisfactory results every time in reaching a global optimum, for which reason a sensitivity analysis of the parameters is carried out to verify the robustness of the proposed technique. The pseudocode of the proposed algorithm for model order reduction of linear time-invariant systems in the delta domain is elaborated in Fig. 2 for the convenience of the readers.
6 Results and discussions

To illustrate the method of reduced-order modeling in the δ-domain and to observe the behavior of the resulting reduced-order model, a continuous-time system of eighth order is considered from [10]. The transfer function of this system is given in Table 1, where the resulting delta-domain model is also presented for a sampling time of 0.001 s. This choice of sampling time tends to unify the continuous- and discrete-time domain results at a high sampling frequency. To solve the MOR problem for the above test system using the proposed technique, a total of 20 search agents is selected to obtain the global optimum in 50 iterations. The choice is purposeful, as 20 × 50 = 1000 function evaluations are carried out, which can be considered sufficient to identify the three unknown decision variables of the reduced system. The lower and upper bounds of the decision variables are taken as the negative and positive values of the highest coefficients of either the numerator or the denominator polynomial of the original higher-order system in the respective domains. Suitable constraints, as given in Eq. (22b), are applied to preserve stability and the minimum phase characteristic of the reduced-order system. The

Table 4 p values for Wilcoxon signed rank test (test system; proposed method HGWO vs. each competitor)

FA 8.8575e-05 | GWO 7.2156e-05 | GA 4.8161e-05 | PSO 6.7633e-05 | DE 5.7483e-05 | ABC 2.5793e-05 | HS 5.3429e-05 | IWO 4.7182e-05
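The p values of Table 4 come from pairing the 20 per-run fitness values of HGWO with those of each competitor. The sketch below reproduces the procedure with SciPy on synthetic run data whose magnitudes merely mimic Table 3 (the numbers are placeholders, not the paper's runs):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)

# Synthetic per-run SSE values over 20 independent runs (placeholder data
# loosely mimicking the magnitudes reported in Table 3).
j_hgwo = np.abs(rng.normal(9e-9, 5e-9, size=20))
j_fa = np.abs(rng.normal(2e-5, 1e-5, size=20))

stat, p = wilcoxon(j_hgwo, j_fa)   # paired two-sided signed-rank test
print(f"statistic = {stat}, p = {p:.3e}")
# a p value below 0.05 indicates a statistically significant difference
```

The signed-rank test is nonparametric, so it needs no normality assumption on the run-to-run SSE distribution, which is why it suits stochastic metaheuristic comparisons.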
Table 5 Best-fitted reduced systems in continuous and discrete-delta domains with computation times and minimum fitness value (J)

Methods | Reduced model (continuous-time) | Reduced model (discrete-delta) | Minimum fitness value (J) | Average CPU time, continuous-time domain (s) | Average CPU time, discrete-delta domain (s)
HGWO (proposed) | (17.9224s + 3.7921)/(s² + 7.5641s + 3.7921) | (17.8567γ + 3.7778)/(γ² + 7.5393γ + 3.7778) | 5.2381e-09 | 6.3489 | 3.6395
FA | (17.85s + 14.34)/(s² + 7.783s + 14.34) | (17.7878γ + 14.2841)/(γ² + 7.7667γ + 14.2841) | 4.4976e-06 | 6.5488 | 3.7428
GWO | (17.98s + 2.673)/(s² + 7.559s + 2.673) | (17.9132γ + 2.66286)/(γ² + 7.53298γ + 2.66286) | 1.4328e-06 | 6.4272 | 3.7331
GA | (17.99s + 6.695)/(s² + 10.55s + 6.695) | (17.8987γ + 6.65933)/(γ² + 10.5002γ + 6.65933) | 1.0443e-06 | 6.6793 | 3.8571
PSO | (17.89s + 10.98)/(s² + 7.719s + 10.98) | (17.8302γ + 10.94)/(γ² + 7.7γ + 10.94) | 9.0286e-06 | 6.7009 | 3.8699
DE | (17.9s + 10.3)/(s² + 8.047s + 10.3) | (17.833γ + 10.262)/(γ² + 8.02472γ + 10.262) | 1.0592e-05 | 6.6525 | 3.8872
ABC | (23.12s + 10.34)/(s² + 20.61s + 10.34) | (22.8845γ + 10.236)/(γ² + 20.4095γ + 10.236) | 1.0557e-05 | 13.2837 | 7.7244
HS | (17.83s + 1.664)/(s² + 7.529s + 1.664) | (17.7595γ + 1.65741)/(γ² + 7.50223γ + 1.65741) | 1.9728e-05 | 6.7272 | 3.8744
IWO | (17.89s + 10.89)/(s² + 7.727s + 10.89) | (17.8306γ + 10.8472)/(γ² + 7.70763γ + 10.8472) | 8.5537e-06 | 11.2747 | 6.4863
proposed method is compared not only with the parent heuristics FA and GWO but also with standard heuristics such as GA, PSO, DE, ABC, HS and IWO available in the literature. The number of search agents and the maximum number of iterations are the same for all the algorithm-based approaches to model order reduction; the other user-defined parameters for all the algorithms are specified in Table 2. The proposed algorithm is run on a workstation with a Pentium i7 2.4 GHz processor and 32.0 GB RAM and has been coded in MATLAB R2013a. Since heuristic algorithms are stochastic processes, they have to be run more than 10 times to generate meaningful statistics; each algorithm is therefore given 20 independent test runs. The statistical assessments (average and standard deviation) of the fitness function for each algorithm are provided in Table 3, together with the best and worst values; the best results with respect to best value, average and standard deviation are highlighted in bold. It is clear from Table 3 that the proposed HGWO technique outperforms several well-known standard algorithms as well as its constituent algorithms. Further, the nonparametric Wilcoxon signed rank test [22] is carried out to validate the significance of the results obtained, and the calculated p values are quoted in Table 4 as metrics of significance; Table 4 clearly suggests that the results obtained by the proposed technique are meaningful. The best-fitted reduced systems and the average computation times (in seconds) in both the continuous-time and discrete-delta domains are presented in Table 5 for each method, along with the minimum fitness value for each algorithm; the best value in each column of Table 5 is marked in bold. It is evident from Table 5 that the parameters of the reduced system in the delta domain closely resemble those in the continuous-time domain for all the algorithms considered. The computation time in the delta domain is also shorter than its continuous-time counterpart, and the proposed algorithm requires the least computation in both domains. The convergence characteristics of the fitness function value in the delta domain for the best-fitted models of Table 5 are plotted in Fig. 3 for 20, 30 and 50 iterations. It is clearly visible from Fig. 3 that the proposed method converges at a faster rate compared to the rest of the algorithms. The step responses of the original and reduced-order models are shown in Fig. 4. Moreover, sensitivity analysis of the fixed parameters, namely a, β₀ and γ, of the
Fig. 3 Convergence characteristics of the reduced system with different algorithms (fitness value J vs. number of iterations): a for 20 iterations, b for 30 iterations, c for 50 iterations. Compared algorithms: GA, PSO, DE, ABC, FA, GWO, HGWO.
Each of these parameters is perturbed by ±20%. After perturbing the parameters, twenty independent trial runs are again performed to obtain the average, best and worst values of the fitness function. The effects of the perturbations in the parameters are shown in Table 6.
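The ±20% sensitivity procedure can be sketched as below; `run_optimizer` is a stand-in for the actual HGWO run, and the nominal values of a, b and c are placeholders, not the paper's settings:

```python
import random

def perturbations(value, fraction=0.2):
    """Return the -20% and +20% perturbed copies of a nominal parameter."""
    return value * (1 - fraction), value * (1 + fraction)

def sensitivity(params, run_optimizer, trials=20):
    """Re-run the optimizer for each perturbed parameter and record the
    best, average and worst fitness over the independent trials."""
    table = {}
    for name, nominal in params.items():
        for p in perturbations(nominal):
            fits = [run_optimizer({**params, name: p}) for _ in range(trials)]
            table[(name, round(p, 6))] = (min(fits), sum(fits) / len(fits), max(fits))
    return table

nominal = {"a": 2.0, "b": 1.0, "c": 0.5}          # hypothetical nominals
dummy = lambda params: random.uniform(0.01, 0.02)  # stand-in fitness evaluation
report = sensitivity(nominal, dummy, trials=5)
```

With three parameters and two perturbations each, the report has six rows, matching the row structure of Table 6.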
Fig. 4 Step response matching of the test system in the delta domain with the proposed HGWO (amplitude vs. time in seconds for the original system, org_sys, and the reduced system, red_sys).
Table 6 Percentage deviation from fitness function with perturbed algorithm parameters of HGWO

Parameter (-Δ)   Best    Average   Worst     Parameter (+Δ)   Best    Average   Worst
0.8a             0.069   0.208     0.368     1.2a             0.087   0.356     0.408
-0.8a            0.075   0.192     0.353     -1.2a            0.027   0.137     0.326
0.8b             0.147   0.388     0.439     1.2b             0.117   0.224     0.304
-0.8b            0.081   0.126     0.263     -1.2b            0.019   0.232     0.329
0.8c             0.113   0.271     0.459     1.2c             0.046   0.351     0.484
-0.8c            0.225   0.406     0.543     -1.2c            0.079   0.278     0.392
It is noticeable from Fig. 4 that the step response of the reduced-order model obtained by the proposed technique closely matches that of its higher-order system. Further, the step response in the delta domain closely resembles that in the continuous-time domain, revealing the unification of the discrete-time and continuous-time domains at high sampling frequency. In Table 6, the maximum deviations from the fitness function are observed to be 0.225, 0.406 and 0.484%, respectively, for the best, average and worst results. Hence, it can be inferred that the results obtained through the proposed hybrid method are not very sensitive to changes in the parameter values. The time domain parameters, viz. rise time, settling time and maximum peak overshoot, and the frequency domain parameters, gain margin and phase margin, of the original higher-order system and its reduced system are enumerated in Table 7, confirming that the dynamic properties of the original system are preserved in its reduced-order model. The results of some of the latest topologies reported in the literature, as well as those obtained through the standard and parent heuristic algorithms, are also compared with the proposed method. From Table 7, it is seen that the proposed technique produces a nearly exact match in the delta domain except for the settling time. An intelligent controller can be taken up in future work to incorporate an adequate corrective measure. In addition, the most frequently used performance indices, viz. integral of absolute error (IAE), integral of time weighted absolute error (ITAE), integral of square error (ISE), integral of time weighted square error (ITSE) and H∞ norm, of the obtained reduced-order models in the delta domain are compared with those obtained by other methods in Table 8. The best values for each of the test systems are indicated in bold. It is to be noted that the error indices achieved by the proposed method surpass those of the existing methods reported in the literature. The pole–zero locations of the original test system, of the reduced model of the proposed method, and of the reduced systems developed by the other heuristic algorithms and reported in the literature are enumerated in Table 9.
Table 7 Quantitative comparison of time and frequency domain parameters for test system

Methods                          Rise time (s)   Settling time (s)   Overshoot (%)   Gain margin (dB)   Phase margin (deg)
Original model                   0.0569          4.8201              120.3504        Inf                114.3178
Reduced models:
HGWO                             0.0570          7.8623              120.5271        Inf                114.2263
FA                               0.0567          1.9560              100.4958        Inf                113.1031
GWO                              0.0569          11.1679             124.3888        Inf                114.3426
GA                               0.0644          5.5850              63.8405         Inf                124.4703
PSO                              0.0567          2.6442              105.3090        Inf                113.4455
DE                               0.0577          2.9862              99.8189         Inf                114.7141
ABC                              0.0720          3.9695              12.9467         Inf                150.7439
HS                               0.0575          17.8700             126.7996        Inf                114.6547
IWO                              0.0567          2.6730              105.2746        Inf                113.4893
Balanced truncation [4]          0.0106          5.9155              827.1002        Inf                114.1115
Hankel norm approximation [4]    0.0556          5.4781              131.4322        Inf                113.4696
Routh approximation [5]          0.5517          8.7371              57.1965         Inf                119.2033
Padé approximation [5]           0.0660          4.7970              124.9653        Inf                112.0898
[10]                             0.0594          5.1041              122.7736        Inf                112.8395
[11]                             0.0667          4.7990              123.3054        Inf                112.2626
[12]                             0.0457          4.4003              123.5585        Inf                112.5763
[13]                             0.0614          5.3071              123.6928        Inf                112.7340
[16]                             0.0635          4.5717              129.9951        Inf                111.4461
[17]                             0.0598          4.3566              135.5033        Inf                110.8376
Table 8 Comparison of performance indices with different reduction techniques for test system in the delta domain

Methods                          IAE          ITAE         ISE          ITSE         H∞ norm
HGWO                             9.3320e-06   8.0432e-07   9.8615e-10   9.1824e-11   3.1929e-04
FA                               3.7748e-05   2.3263e-06   1.7281e-08   1.0067e-09   9.1981e-04
GWO                              1.0619e-05   8.1545e-07   1.4339e-09   1.3665e-10   3.2494e-04
GA                               0.0020       1.6287e-04   3.9834e-05   3.6035e-06   0.0258
PSO                              2.6730e-05   1.5914e-06   9.0289e-09   5.0625e-10   7.0551e-04
DE                               2.5477e-04   2.0918e-05   6.5458e-07   6.0681e-08   0.0036
ABC                              0.0049       4.2723e-04   2.8666e-04   2.8336e-05   0.0993
HS                               1.3855e-04   8.2397e-06   1.7397e-07   1.0072e-08   0.0020
IWO                              2.6894e-05   1.6909e-06   8.6131e-09   5.1762e-10   6.2494e-04
Balanced truncation [4]          9.2187e-05   6.4352e-06   7.4268e-08   5.5806e-09   0.0013
Hankel norm approximation [4]    5.6979e-04   3.2395e-05   3.2086e-06   1.7726e-07   0.0098
Routh approximation [5]          0.0133       8.0573e-04   0.0016       9.5914e-05   0.1888
Padé approximation [5]           0.0017       9.5837e-05   2.7241e-05   1.5038e-06   0.0272
[10]                             5.0732e-04   2.9681e-05   2.7660e-06   1.6909e-07   0.0113
[11]                             0.0018       1.0165e-04   3.0334e-05   1.6665e-06   0.0277
[12]                             0.0030       1.7154e-04   8.2177e-05   4.5233e-06   0.0454
[13]                             8.6304e-04   4.9389e-05   7.5296e-06   4.2735e-07   0.0162
[16]                             0.0013       7.2677e-05   1.6410e-05   9.5188e-07   0.0250
[17]                             7.2702e-04   4.7553e-05   6.2770e-06   4.8105e-07   0.0200
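The indices compared above are the standard integral error criteria, computed from the error signal e(t) between the original and reduced step responses. A minimal sketch using rectangular integration of a uniformly sampled error signal (the samples below are hypothetical):

```python
def error_indices(e, t):
    """Return (IAE, ITAE, ISE, ITSE) from error samples e at times t,
    using rectangular integration with a uniform sample period."""
    dt = t[1] - t[0]
    iae = sum(abs(ek) for ek in e) * dt                            # integral of |e|
    itae = sum(tk * abs(ek) for tk, ek in zip(t, e)) * dt          # time-weighted |e|
    ise = sum(ek * ek for ek in e) * dt                            # integral of e^2
    itse = sum(tk * ek * ek for tk, ek in zip(t, e)) * dt          # time-weighted e^2
    return iae, itae, ise, itse

t = [0.0, 0.1, 0.2, 0.3]
e = [0.0, 0.02, -0.01, 0.0]   # hypothetical error samples
iae, itae, ise, itse = error_indices(e, t)
```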
Table 9 Comparison of pole–zero locations for test system

System                          Poles                                  Zeros
Original system                 -8.0000, -7.0000, -6.0000, -5.0000,    -7.4292 + 0.0000i, -5.8917 + 0.0000i,
                                -4.0000, -3.0000, -2.0000, -1.0000     -5.0078 + 0.6543i, -5.0078 - 0.6543i,
                                                                       -2.4498 + 0.5278i, -2.4498 - 0.5278i,
                                                                       -0.3195 + 0.0000i
Proposed system                 -7.0242, -0.5399                       -0.2116
FA                              -4.7872, -2.9955                       -0.8033
GWO                             -7.1869, -0.3719                       -0.1487
GA                              -9.8708, -0.6782                       -0.3721
PSO                             -5.8374, -1.8814                       -0.6138
DE                              -6.4491, -1.5976                       -0.5756
ABC                             -20.0956, -0.5146                      -0.0933
HS                              -7.3010, -0.2279                       -0.4474
IWO                             -5.8722, -1.8544                       -0.6085
Balanced truncation [4]         -6.6362, -0.7287                       -0.0609
Hankel norm approximation [4]   -6.2230, -0.8052                       -0.2791
Routh approximation [5]         -0.5870 + 0.2953i, -0.5870 - 0.2953i   -0.2170
Padé approximation [5]          -5.0356, -0.9574                       -0.3193
[10]                            -6.0187, -0.8743                       -0.3101
[11]                            -5.0352, -0.9571                       -0.3219
[12]                            -8, -1                                 -0.3621
[13]                            -5.7862, -0.8408                       -0.2968
[16]                            -5.0196, -1.0110                       -0.3249
[17]                            -5.1329                                -0.3310
Since the constraints of the minimum-phase property and stability are already included in the optimization problem, the reduced-order model obtained using the proposed technique turns out to be both stable and of minimum phase, as is also evident from the results provided in Table 9. Thus, the proposed reduced model is computationally efficient, stable and minimum phase in the delta domain, with a lower fitness value than the existing algorithms.
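The stability and minimum-phase claims can be checked directly from the pole–zero locations of the proposed reduced model reported in Table 9: a continuous-time model is stable when all poles lie in the open left half-plane, and minimum phase when all zeros do as well. A small check:

```python
def in_open_left_half_plane(roots):
    """True when every root has a strictly negative real part."""
    return all(complex(r).real < 0 for r in roots)

poles = [-7.0242, -0.5399]   # proposed reduced model (Table 9)
zeros = [-0.2116]

stable = in_open_left_half_plane(poles)         # stability
minimum_phase = in_open_left_half_plane(zeros)  # minimum-phase property
print(stable and minimum_phase)  # True
```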
7 Conclusions

A novel method based on a hybrid paradigm comprising GWO and FA is investigated for order reduction of large-scale linear time-invariant systems in the delta domain. Using the concept of error minimization, the SSE between the original and reduced-order models with respect to PRBS response matching is minimized to realize the reduced model. Constraints have been added to preserve stability and minimum-phase characteristics. The fitness value of the proposed technique surpasses that obtained by some popular metaheuristic algorithms, as well as by the parent heuristics of which it is constituted, for the test system considered in this paper. The Wilcoxon signed rank test supports the significance of the results obtained by the hybrid approach. The time and frequency domain parameters of the reduced system obtained by the proposed hybrid method also display a close match with those of the original system. The comparison of step responses shows that the reduced-order model provides a good approximation of the original model. It is clearly visible from the response characteristics that, at a high sampling rate, the discrete model converges to the analog model, leading to a unified treatment of continuous-time and discrete-time systems. The proposed technique also outperforms the existing techniques in terms of the error indices commonly used in control. Sensitivity analysis is performed to check the robustness of the algorithm with respect to parameter variations, and confirms that the proposed method remains effective for model order reduction. In addition, the computational algorithm is simple and free of the numerical ill-conditioning that affects shift operator representations. A comparison with z-domain reduction at an appropriate sampling frequency could be carried out to check the deviation of results between the continuous and discrete-delta domains. An intelligent controller applying approximate model matching can also be devised to take care of the settling time issue using the proposed heuristic technique. Modifications to the parent heuristics could be incorporated for further improvement of the hybrid approach. However, a drastic improvement in the results is not expected, as the number of decision variables considered is small. A third-order reduced system could have given a better match to the original system. Classical methods could be combined with the proposed method to yield a new mixed approach for model order reduction. This method may further be extended to reduce higher-order MIMO systems. Order reduction of unstable, non-minimum-phase, fractional-order and interval systems can also be attempted in the future with the proposed method.

References
1. Lucia DJ, Beran PS, Silva WA (2004) Reduced-order modeling: new approaches for computational physics. Prog Aerosp Sci 40(1):51–117
2. Nayfeh AH, Younis MI, Abdel-Rahman EM (2005) Reduced-order models for MEMS applications. Nonlinear Dyn 41(1):211–236
3. Dorneanu B, Bildea CS, Grievink J (2009) On the application of model reduction to plantwide control. Comput Chem Eng 33(3):699–711
4. Schilders WH, Van der Vorst HA, Rommes J (2008) Model order reduction: theory, research aspects and applications. Springer, Berlin. ISBN 978-3-540-78841-6
5. Fortuna L, Nunnari G, Gallo A (2012) Model order reduction techniques with applications in electrical engineering. Springer, Berlin. ISBN 978-1-4471-3198-4
6. Mukherjee S, Mittal RC (2005) Order reduction of linear discrete systems using a genetic algorithm. Appl Math Model 29(6):565–578
7. Desai SR, Prasad R (2013) A novel order diminution of LTI systems using Big Bang Big Crunch optimization and Routh approximation. Appl Math Model 37(16):8016–8028
8. Abu-Al-Nadi DI, Alsmadi OM, Abo-Hammour ZS, Hawa MF, Rahhal JS (2013) Invasive weed optimization for model order reduction of linear MIMO systems. Appl Math Model 37(6):4570–4577
9. Sikander AA, Prasad BR (2015) A novel order reduction method using cuckoo search algorithm. IETE J Res 61(2):83–90
10. Sikander A, Prasad R (2015) Soft computing approach for model order reduction of linear time invariant systems. Circuits Syst Signal Process 34(11):3471–3487
11. Biradar S, Hote YV, Saxena S (2016) Reduced-order modeling of linear time invariant systems using big bang big crunch optimization and time moment matching method. Appl Math Model 40(15):7225–7244
12. Sikander A, Prasad R (2017) New technique for system simplification using Cuckoo search and ESA. Sādhanā 42(9):1453–1458
13. Sikander A, Thakur P (2017) Reduced order modelling of linear time-invariant system using modified cuckoo search algorithm. Soft Comput. https://doi.org/10.1007/s00500-017-2589-4
14. Mishra R, Das KN (2016) Chemo-inspired genetic algorithm and application to model order reduction problem. In: Proceedings of fifth international conference on soft computing for problem solving. Springer, Singapore, pp 31–41. https://doi.org/10.1007/978-981-10-0448-3_3
15. Ganji V, Mangipudi S, Manyala R (2017) A novel model order reduction technique for linear continuous-time systems using PSO-DV algorithm. J Control Autom Electr Syst 28(1):68–77
16. Narwal A, Prasad R (2017) Optimization of LTI systems using modified clustering algorithm. IETE Tech Rev 34(2):201–213
17. Sikander A, Prasad R (2017) A new technique for reduced-order modelling of linear time-invariant system. IETE J Res 63(3):316–324
18. Soloklo HN, Farsangi MM (2013) Multi-objective weighted sum approach model reduction by Routh–Padé approximation using harmony search. Turk J Electr Eng Comput Sci 21(Suppl 2):2283–2293. https://doi.org/10.3906/elk-1112-31
19. Khademi G, Mohammadi H, Dehghani M (2015) Order reduction of linear systems with keeping the minimum phase characteristic of the system: LMI based approach. IJST Trans Electr Eng 39(E2):217–227. https://doi.org/10.22099/ijste.2015.3493
20. Bansal JC, Sharma H (2012) Cognitive learning in differential evolution and its application to model order reduction problem for single-input single-output systems. Memet Comput. https://doi.org/10.1007/s12293-012-0089-8
21. Sharma H, Bansal JC, Arya KV (2012) Fitness based differential evolution. Memet Comput 4(4):303–316
22. Rosner B, Glynn RJ, Lee ML (2006) The Wilcoxon signed rank test for paired comparisons of clustered data. Biometrics 62(1):185–192
23. Middleton RH, Goodwin GC (1990) Digital control and estimation: a unified approach (Prentice Hall information and system sciences series). Prentice Hall, Englewood Cliffs
24. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
25. Mirjalili S (2015) How effective is the Grey Wolf optimizer in training multi-layer perceptrons. Appl Intell 43(1):150–161
26. Kamboj VK, Bath SK, Dhillon JS (2016) Solution of non-convex economic load dispatch problem using grey wolf optimizer. Neural Comput Appl 27(5):1301–1316
27. Khairuzzaman AK, Chaudhury S (2017) Multilevel thresholding using grey wolf optimizer for image segmentation. Expert Syst Appl 86:64–76
28. Saremi S, Mirjalili SZ, Mirjalili SM (2015) Evolutionary population dynamics and grey wolf optimizer. Neural Comput Appl 26(5):1257–1263
29. Emary E, Zawbaa HM, Hassanien AE (2016) Binary grey wolf optimization approaches for feature selection. Neurocomputing 172:371–381
30. Heidari AA, Pahlavani P (2017) An efficient modified grey wolf optimizer with Lévy flight for optimization tasks. Appl Soft Comput 60:115–134
31. Jayabarathi T, Raghunathan T, Adarsh BR, Suganthan PN (2016) Economic dispatch using hybrid grey wolf optimizer. Energy 111:630–641
32. Kamboj VK (2016) A novel hybrid PSO–GWO approach for unit commitment problem. Neural Comput Appl 27(6):1643–1655
33. Lu C, Gao L, Li X, Xiao S (2017) A hybrid multi-objective grey wolf optimizer for dynamic scheduling in a real-world welding industry. Eng Appl Artif Intell 57:61–79
34. Rizk-Allah RM, Zaki EM, El-Sawy AA (2013) Hybridizing ant colony optimization with firefly algorithm for unconstrained optimization problems. Appl Math Comput 224:473–483
35. Yang XS (2010) Firefly algorithm, stochastic test functions and design optimisation. Int J Bio Inspired Comput 2(2):78–84
36. Khajehzadeh M, Taha MR, Eslami M (2013) A new hybrid firefly algorithm for foundation optimization. Natl Acad Sci Lett 36(3):279–288
37. Talbi EG (2002) A taxonomy of hybrid metaheuristics. J Heuristics 8(5):541–564
38. Anagnost JJ, Desoer CA (1991) An elementary proof of the Routh–Hurwitz stability criterion. Circuits Syst Signal Process 10(1):101–114