International Journal of Control, Automation and Systems 15(X) (2017) 1-8 http://dx.doi.org/10.1007/s12555-016-0347-5
ISSN:1598-6446 eISSN:2005-4092 http://www.springer.com/12555
Leader-follower Type Distance-based Formation Control of a Group of Autonomous Agents

Kwang-Kyo Oh* and Hyo-Sung Ahn

Abstract: We study distance-based formation control of a group of mobile agents consisting of several leaders and the remaining followers. The leaders sense their own positions with respect to a global reference frame, whereas the followers sense only the relative positions of their neighbors with respect to their own local reference frames. The relative position sensing topology is given as a directed acyclic graph. The objective of the agents is to reach their desired positions in the global reference frame. By applying a distance-based control law to the followers, we allow the agents to reach their desired positions if the sensing graph satisfies certain graph rigidity conditions. Further, we study the case in which the desired positions are time-varying.

Keywords: Autonomous agents, formation control, graph rigidity, leader-follower.
1. INTRODUCTION
A significant amount of research effort has been focused on formation control. Based on the types of sensed and controlled variables, the majority of the existing results can be categorized into position-, displacement-, and distance-based schemes [1], while there exist other approaches that do not fit into this categorization [2, 3]. The problem setup of displacement-based control is similar to that of consensus [4, 5]. Although position-based control is an effective solution for driving agents to their desired positions, it requires the agents to be equipped with costly position sensors. In contrast, distance-based control is advantageous in terms of sensing requirements, but it is not effective in driving agents to their desired positions.

In practice, it is desirable to drive agents to their desired positions without requiring costly position sensors. Motivated by this, we study formation control of a group of agents consisting of several leaders and the remaining followers. We assume that the leaders are able to sense their own positions with respect to a global reference frame, while the followers are able to sense only the relative positions of their neighbors with respect to their own local reference frames, without any knowledge of the global reference frame. The objective of the agents is to move to their desired positions in the global reference frame. Under this problem setup, we propose a position control law for the leaders and a distance-based control law for the followers,
which allows the agents to reach their desired positions. The proposed strategy allows the followers to reach their desired positions, even though they do not have position sensors, by selecting several agents as leaders. Further, the proposed strategy can reduce the communication burden because the agents can reach their desired positions when only the several leader agents are provided with control signals.

The contributions of this paper can be summarized as follows. First, we propose a formation control strategy for a group of agents consisting of several leaders and the remaining followers, which allows the agents to asymptotically reach their destinations without requiring the followers to have costly position sensors. Note that this has not been achieved under the existing distance-based control schemes [6-9]. Second, we study formation tracking of a group of single- or double-integrator modeled agents in which the majority of the agents are under a distance-based control law. In distance-based formation control, formation shape control has been the primary concern, and formation tracking problems have not been addressed in the literature [6-9]. We show that the position tracking errors remain small under the proposed formation control strategy if the initial position errors are small and the desired positions change slowly, even though the follower agents are under a distance-based control law.

This paper is organized as follows. Mathematical background is reviewed in Section 2. In Section 3, a formation control strategy for single-integrator modeled agents
Manuscript received June 8, 2016; revised August 10, 2016; accepted September 13, 2016. Recommended by Associate Editor Nam H. Jo under the direction of Editor Euntai Kim. This work was conducted within the project "Free Piston Engine Linear Generator for CHP" (No. EO170024) at the Korea Institute of Industrial Technology (KITECH).
Kwang-Kyo Oh is with the Automotive Components and Materials R&D Group, KITECH, Gwangju, Korea (e-mail: [email protected]). Hyo-Sung Ahn is with the School of Mechanical Engineering, Gwangju Institute of Science and Technology, Gwangju, Korea (e-mail: [email protected]).
* Corresponding author.
© ICROS, KIEE and Springer 2017
is proposed. The proposed control strategy is applied to double-integrator modeled agents in Section 4. Simulation results are provided in Section 5. Concluding remarks are then provided in Section 6.

2. PRELIMINARIES
The set of real numbers is denoted by R. For p1, . . . , pN ∈ Rn, we denote [p1T · · · pNT]T ∈ RnN by p if there is no confusion.

A directed graph is defined as a pair G := (V, E), where V is the set of nodes and E is the set of ordered pairs of nodes, called edges. The set of neighbors of node i is defined as Ni := {j ∈ V : (i, j) ∈ E}. An undirected graph is defined as a pair G := (V, E), where V is the set of nodes and E is the set of unordered pairs of nodes, called edges. In this case, the set of neighbors of node i is defined as Ni := {j ∈ V : {i, j} ∈ E}.

Let G = (V, E) be an undirected graph with N nodes and M edges, and let pi ∈ Rn be the position assigned to node i. Then p ∈ RnN is said to be a realization of G in Rn, and the pair (G, p) is said to be a framework of G in Rn. By ordering the edges in E, an edge function gG : RnN → RM associated with (G, p) is defined as

gG(p) := [· · · (1/2)∥pj − pi∥² · · ·]T, {i, j} ∈ E.   (1)

Rigidity of frameworks is defined as follows:

Definition 1 [10]: A framework (G, p) is said to be rigid in Rn if there exists a neighborhood Up of p ∈ RnN such that gG⁻¹(gG(p)) ∩ Up = gK⁻¹(gK(p)) ∩ Up, where K is the complete graph with N nodes.

Note that graph rigidity naturally connects to the essence of distance-based formation control because it allows agents to achieve their desired formation by controlling edge lengths (inter-agent distances). We next relate rigidity to properties of graphs by imposing a regularity condition on realizations. A realization is said to be regular if its coordinates are algebraically independent over the rational numbers. Generic rigidity of graphs is defined as follows:

Definition 2 [11]: A graph G is said to be generically rigid in Rn if (G, p) is rigid in Rn for any regular realization p.

An implication of regularity is non-degeneracy. If a realization is regular in the plane, any node and its neighbor nodes are not located on a straight line. If n = 3, regularity ensures that any node and its neighbor nodes are not located on a plane.

3. SINGLE-INTEGRATOR CASE

3.1. Problem formulation
We consider the following N single-integrators in n-dimensional space:

ṗi = ui, i = 1, . . . , N,   (2)

where pi ∈ Rn and ui ∈ Rn are the position and control input of agent i with respect to a global reference frame. We denote the global reference frame by gΣ. We assume that several agents are leaders and the remaining agents are followers; the leaders are labeled 1, . . . , L and the followers L + 1, . . . , N. We assume that the leaders sense their own positions with respect to gΣ, while the followers sense only the relative positions of their neighbors with respect to their own local reference frames, the orientations of which are not aligned with that of gΣ. Further, we assume that the followers do not have any knowledge of gΣ; specifically, they know neither the origin nor the orientation of gΣ.

Let the relative position sensing topology be modeled by a directed acyclic graph G = (V, E). Detailed requirements on the sensing graph G are discussed in Section 3.2. Let the local reference frame of follower i be denoted by iΣ. Then (i, j) ∈ E means that follower i senses the relative position of agent j with respect to iΣ. Adopting the notation in which the superscript denotes the corresponding reference frame, we assume that follower i senses the following variables:

piji := pij − pii, j ∈ Ni,   (3)

where pii and pij denote the positions of follower i and agent j with respect to iΣ.

Let p∗i ∈ Rn be the desired position of agent i with respect to gΣ. The objective of the agents is to achieve p → p∗. Since leader i senses pi, it is able to achieve pi → p∗i by directly controlling its position. In contrast, follower i senses only piji and thus cannot control pi directly. For this reason, we apply a control law that achieves ∥piji∥ → ∥p∗j − p∗i∥ for follower i below.

The problem for the single-integrator modeled agents (2) is stated as follows:

Problem 1: Consider the single-integrators (2) over a directed acyclic sensing graph G. Assume that leader i ∈ {1, . . . , L} senses pi while follower i ∈ {L + 1, . . . , N} senses pij − pii for j ∈ Ni. Let p∗i be given to leader i and ∥p∗j − p∗i∥ for j ∈ Ni be given to follower i. Design control laws for the leaders and followers such that p∗ is asymptotically stable with respect to (2).
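As an illustration (a minimal numpy sketch, not part of the paper), the edge function (1) and the standard rank test for infinitesimal rigidity, which implies rigidity for regular realizations [10], can be coded as follows; `edge_function` and `rigidity_matrix` are hypothetical helper names.

```python
import numpy as np

def edge_function(p, edges):
    """Edge function g_G(p) of Eq. (1): half squared length of each edge."""
    return np.array([0.5 * np.dot(p[i] - p[j], p[i] - p[j]) for i, j in edges])

def rigidity_matrix(p, edges):
    """Jacobian of g_G at p. For a generic framework in the plane,
    rank 2N - 3 indicates infinitesimal (hence local) rigidity."""
    N, n = p.shape
    R = np.zeros((len(edges), N * n))
    for k, (i, j) in enumerate(edges):
        R[k, n * i:n * (i + 1)] = p[i] - p[j]
        R[k, n * j:n * (j + 1)] = p[j] - p[i]
    return R

# A triangle in the plane is rigid: rank 2*3 - 3 = 3.
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
edges = [(0, 1), (1, 2), (0, 2)]
print(np.linalg.matrix_rank(rigidity_matrix(p, edges)))       # 3
# Removing an edge leaves a flexible two-bar framework: rank drops to 2.
print(np.linalg.matrix_rank(rigidity_matrix(p, edges[:2])))   # 2
```

The same rank test, applied to the undirected graph Ḡ introduced in Section 3.2, is one way to verify the generic rigidity condition numerically for a given regular realization.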
3.2. Requirements on (G, p∗)
In Section 3.3, we will propose a control strategy that allows the followers to actively control the distances to their neighbors while the leaders directly control their positions. In this control strategy, the neighbors of an agent are required to act as beacon nodes for the agent. Conceptually, the following process occurs dynamically under the proposed control strategy:
• The leaders move to their destinations by directly controlling their positions.
• Follower L + 1 has only leaders as its neighbors. This follower moves to its destination by controlling the distances to its neighbors, which act as beacon nodes.
• Any other follower i has some neighbors, each of which is either a leader or a follower. Its leader neighbors move to their destinations by position control, and its follower neighbors move to their destinations by distance control. In this way, the neighbors act as beacon nodes for follower i.
Fig. 1. An example of (G, p∗ ).
To drive the followers to their unique destinations by a distance-based control law, we need the following conditions:
• Every follower has at least n neighbors;
• The position of each follower and the positions of its neighbors are not degenerate in Rn, i.e., the positions are not collinear if n = 2 and not coplanar if n = 3.

To formulate the above conditions, let Ḡ = (V, Ē) be an undirected graph such that Ē satisfies:
• If (i, j) ∈ E, then {i, j} ∈ Ē;
• For i, j ∈ {1, . . . , L} and i ≠ j, {i, j} ∈ Ē.

Then the requirements on G and p∗ can be described as follows:

Assumption 1:
• G is directed acyclic and Ḡ is generically rigid;
• For any i ∈ {L + 1, . . . , N}, the dimension of the affine span of p∗i and p∗j for j ∈ Ni is n.

Generic rigidity of Ḡ ensures that every follower has at least n neighbors [12]. Further, the second condition in Assumption 1 describes a non-degeneracy condition that allows the followers to determine their destinations uniquely. Note that the generic rigidity of Ḡ and the non-degeneracy of p∗ allow the agents to achieve their desired formation by controlling inter-agent distances. Based on this fact, we can show that the desired formation is achieved by proving that the desired inter-agent distances are achieved.

We now provide a procedure for constructing G and p∗ satisfying the conditions in Assumption 1 based on the Henneberg vertex addition sequence [13]:

Procedure 1:
1) Desired positions of leaders: Assign positions to p∗1, . . . , p∗L such that the dimension of the affine span of the positions is n.
2) Edges and positions of followers: For each follower i = L + 1, . . . , N:
(a) Assign a position to p∗i.
(b) If n = 2, select j, k ∈ {1, . . . , i − 1} such that the dimension of the affine span of p∗i, p∗j, and p∗k is 2; if there are no such j and k, go to (a). If n = 3, select j, k, l ∈ {1, . . . , i − 1} such that the dimension of the affine span of p∗i, p∗j, p∗k, and p∗l is 3; if there are no such j, k, and l, go to (a).
(c) Add (i, j) and (i, k) (and (i, l) if n = 3) to E.
Fig. 2. An example of (Ḡ, p∗).
Procedure 1, which is based on the Henneberg vertex addition sequence [13], always ensures generic rigidity of Ḡ and thus ensures that every follower has at least n neighbors [12]. Further, the second condition in Assumption 1 is always satisfied by the procedure. In Figs. 1 and 2, we provide an example of (G, p∗) and (Ḡ, p∗). Note that Ḡ in Fig. 2 can be constructed by the Henneberg vertex addition sequence [13]. Assume that agents 1, 2, and 3 are leaders and the remaining agents are followers. As shown in Fig. 1, which depicts the sensing graph of the agents, every follower has at least two neighbors, and the dimension of the affine span of its position and the positions of its neighbors is two. Thus (Ḡ, p∗) in Fig. 2 satisfies the conditions in Assumption 1.
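The affine-span and neighbor-count conditions of Assumption 1 can be checked mechanically for a given (G, p∗). The following sketch (hypothetical helper names, numpy assumed, n = 2 by default) encodes the second condition and the necessary neighbor count; the generic rigidity of Ḡ would be verified separately, e.g., by a rigidity-matrix rank test.

```python
import numpy as np

def affine_dim(points):
    """Dimension of the affine span of a set of points."""
    P = np.asarray(points, dtype=float)
    return np.linalg.matrix_rank(P[1:] - P[0]) if len(P) > 1 else 0

def check_assumption1(neighbors, p_star, L, n=2):
    """neighbors[i] lists the neighbors of follower i (i >= L, 0-indexed).
    Checks that every follower has at least n neighbors and that the
    follower together with its neighbors affinely spans R^n."""
    for i in range(L, len(p_star)):
        Ni = neighbors[i]
        if len(Ni) < n:
            return False
        if affine_dim([p_star[i]] + [p_star[j] for j in Ni]) != n:
            return False
    return True

# Small hypothetical instance: three leaders and one follower with
# non-collinear neighbors, so the conditions hold.
p_star = [(0.0, 0.0), (-0.5, -1.0), (0.5, -1.0), (0.0, -2.0)]
neighbors = {3: [1, 2]}
print(check_assumption1(neighbors, p_star, L=3))  # True
```

A collinear placement of a follower and its neighbors would make `affine_dim` return 1 instead of 2, correctly rejecting the configuration, which mirrors the degenerate case excluded by Assumption 1.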
3.3. Formation control strategy
For brevity, we introduce the following notation: for 1 ≤ i ≤ j ≤ N,

V[i:j] := {i, . . . , j},  p[i:j] := [piT · · · pjT]T.

Denote pj − pi and p∗j − p∗i by pji and p∗ji, respectively, and denote p∗i − pi by p̃i.

Consider the single-integrator modeled agents (2). Since the leaders sense their own positions and know their

¹ Note that an edge from node i to node j in Fig. 1 means that agent i senses the relative position of agent j, i.e., j ∈ Ni.
destinations with respect to gΣ, a control law is naturally designed as follows:

ui = ṗ∗i + kLp(p∗i − pi),   (4)
where kLp > 0. For the followers, we use a gradient control law. Define a potential function ϕi : Rn(i−1) × Rn → R for follower i as

ϕi(p[1:i−1], pi) := (kFp/4) ∑_{j∈Ni} (∥pji∥² − ∥p∗ji∥²)²,   (5)
where kFp > 0. A control law for agent i ∈ V[L+1:N] can be designed by using the gradient of the potential function ϕi as

ui = −∇pi ϕi(p[1:i−1], pi) = kFp ∑_{j∈Ni} (∥pji∥² − ∥p∗ji∥²) pji,   (6)

where the sign follows from ∂∥pj − pi∥²/∂pi = −2pji.
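As a concrete check, the leader law (4) and the follower gradient law (6) can be simulated by forward-Euler integration of (2). The 4-agent team, gains, and step size below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative setup (not from the paper): agents 0-2 are leaders, agent 3 is a
# follower whose neighbors are agents 1 and 2; p_star is constant (regulation).
p_star = np.array([[0.0, 0.0], [-0.5, -1.0], [0.5, -1.0], [0.0, -2.0]])
neighbors = {3: [1, 2]}
L, kLp, kFp, dt = 3, 1.0, 1.0, 0.01

# Small initial perturbation, since the convergence result is local (Theorem 1).
p = p_star + np.array([[0.1, -0.1], [-0.1, 0.1], [0.1, 0.1], [-0.1, -0.2]])

for _ in range(5000):                      # integrate p_dot = u up to t = 50
    u = np.zeros_like(p)
    u[:L] = kLp * (p_star[:L] - p[:L])     # leader law (4) with p_star constant
    for i, Ni in neighbors.items():        # follower law u_i = -grad phi_i, cf. (6)
        for j in Ni:
            p_ji = p[j] - p[i]
            d2 = np.sum((p_star[j] - p_star[i]) ** 2)
            u[i] += kFp * (p_ji @ p_ji - d2) * p_ji
    p += dt * u

print(np.max(np.abs(p - p_star)))          # near zero: desired formation reached
```

Note that the follower only uses relative positions p[j] − p[i] and the desired inter-agent distances, matching the sensing assumptions of Problem 1.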
Recall that p̃i = p∗i − pi. The agents can be described by the following error dynamics:

ṗ̃i = −kLp p̃i, i ∈ V[1:L],   (7a)
ṗ̃i = fi(p̃i, p̃[1:i−1]) + ṗ∗i, i ∈ V[L+1:N],   (7b)
where

fi(p̃i, p̃[1:i−1]) := −kFp ∑_{j∈Ni} ∥p∗ji − p̃ji∥² (p∗ji − p̃ji) + kFp ∑_{j∈Ni} ∥p∗ji∥² (p∗ji − p̃ji).   (8)

3.4. Formation regulation
Let p∗ be constant. Then follower i can be described by the following cascade system:

ṗ̃i = fi(p̃i, p̃[1:i−1]),   (9a)
ṗ̃[1:i−1] = f[1:i−1](p̃[1:i−1]),   (9b)

where

f[1:i−1](p̃[1:i−1]) := [(−kLp p̃[1:L])T  fL+1(p̃L+1, p̃[1:L])T  · · ·  fi−1(p̃i−1, p̃[1:i−2])T]T.   (10)

Cascade system stability theory allows us to investigate local asymptotic stability of p̃[1:i] = 0 with respect to (9) by checking
• input-to-state stability of (9a) with p̃[1:i−1] as input;
• local asymptotic stability of p̃[1:i−1] = 0 with respect to (9b).
Specifically, if (9a) is locally input-to-state stable with p̃[1:i−1] as input and p̃[1:i−1] = 0 is locally asymptotically stable with respect to (9b), then p̃[1:i] = 0 is locally asymptotically stable with respect to (9) [14, Lemma 5.6].

In the following, we prove that the origin p̃ = 0 is locally asymptotically stable with respect to (7) based on the following mathematical induction:
• First step: The origin p̃[1:L] = 0 is exponentially stable with respect to ṗ̃[1:L] = −kLp p̃[1:L].
• Second step: We show that (9a) is locally input-to-state stable with p̃[1:i−1] as input. It then follows that the origin is locally asymptotically stable with respect to (9) under the assumption that the origin p̃[1:i−1] = 0 is locally asymptotically stable with respect to (9b).
• Third step: Finally, we show that the origin is locally asymptotically stable with respect to the overall error dynamics (7) by mathematical induction.

The first step is obvious. For the second step, we show that, for any i ∈ V[L+1:N], (9a) is locally input-to-state stable with p̃[1:i−1] as input based on Lemma 5.4 in [14]. To apply that lemma, we first show local asymptotic stability of the following unforced dynamics of (9a):

ṗ̃i = fi(p̃i, 0).   (11)

We have the following lemma:

Lemma 1: Let Assumption 1 hold. For i ∈ V[L+1:N], the origin p̃i = 0 is locally asymptotically stable with respect to (11).
Proof: Local asymptotic stability of the origin with respect to (11) is equivalent to local asymptotic stability of p∗i with respect to

ṗi = −∇pi ϕi(p∗[1:i−1], pi).   (12)

To analyze stability properties of (12), we define a function Vi : Rn → R as

Vi(pi) := ϕi(p∗[1:i−1], pi) = (kFp/4) ∑_{j∈Ni} (∥p∗j − pi∥² − ∥p∗ji∥²)²,

which is continuously differentiable. Obviously, Vi(p∗i) = 0. Further, due to Assumption 1, there exists a neighborhood Up∗i of p∗i such that Vi(pi) > 0 for all pi ≠ p∗i in Up∗i; indeed, the conditions in Assumption 1 ensure that at most two distinct positions can satisfy Vi(pi) = 0, and one of them is p∗i. Thus we take Vi as a Lyapunov function candidate for the unforced dynamics (12). The time derivative of Vi along the trajectories of (12) is

V̇i(pi) = (∂Vi(pi)/∂pi) ṗi = −∥∇Vi(pi)∥².
Since Vi is an analytic function, it follows from Łojasiewicz's inequality [15] that there exist a neighborhood U′p∗i of p∗i and constants ki > 0 and ρi ∈ [0, 1) such that ∥∇Vi(pi)∥ ≥ ki|Vi(pi)|^ρi for any pi ∈ U′p∗i. Due to the positive definiteness of Vi in Up∗i, V̇i is negative definite in Up∗i ∩ U′p∗i, which implies that pi = p∗i is locally asymptotically stable with respect to (12). This completes the proof. □

It then follows from Lemma 5.4 in [14] and Lemma 1 that (9a) is locally input-to-state stable with p̃[1:i−1] as input:

Lemma 2: Let Assumption 1 hold. For i ∈ V[L+1:N], (9a) is locally input-to-state stable with p̃[1:i−1] as input.

Proof: It follows from Lemma 1 that the origin p̃i = 0 is locally asymptotically stable with respect to (11). Further, fi defined in (8) is continuously differentiable in p̃i and p̃[1:i−1]. Thus it follows from Lemma 5.4 in [14] that (9a) is locally input-to-state stable with p̃[1:i−1] as input. □

For the third step, we use mathematical induction to show local asymptotic stability of the origin p̃ = 0 with respect to (7):

Theorem 1: Let Assumption 1 hold. The origin p̃ = 0 is locally asymptotically stable with respect to (7).

Proof: Consider the following cascade system:

ṗ̃L+1 = fL+1(p̃L+1, p̃[1:L]),   (13a)
ṗ̃[1:L] = −kLp p̃[1:L].   (13b)
It is obvious that the origin p̃[1:L] = 0 is exponentially stable with respect to (13b). Further, it follows from Lemma 2 that (13a) is locally input-to-state stable with p̃[1:L] as input. It then follows from Lemma 5.6 in [14] that the origin p̃[1:L+1] = 0 is locally asymptotically stable with respect to (13).

Next, suppose that, for some i ∈ V[L+1:N], the origin p̃[1:i−1] = 0 is locally asymptotically stable with respect to (9b). From Lemma 2, (9a) is locally input-to-state stable with p̃[1:i−1] as input. Thus it follows from Lemma 5.6 in [14] that the origin p̃[1:i] = 0 is locally asymptotically stable with respect to (9). By mathematical induction, we conclude that the origin p̃[1:i] = 0 is locally asymptotically stable with respect to (9) for any i ∈ V[L+1:N]. Thus p̃ = 0 is locally asymptotically stable with respect to (7). □

3.5. Formation tracking
It is often the case that the objective of the agents is to move along prescribed trajectories. In such a case, the error dynamics of follower i can be described as

ṗ̃i = fi(p̃i, p̃[1:i−1]) + ṗ∗i,   (14)
where fi is defined in (8). The following theorem confirms that p̃i remains bounded for i ∈ V[L+1:N] when ∥p̃[1:i−1]∥ and ∥ṗ∗i∥ are sufficiently small:

Theorem 2: Let Assumption 1 hold. For i ∈ V[L+1:N], (14) is locally input-to-state stable with p̃[1:i−1] and ṗ∗i as inputs.

Proof: Suppose that p̃[1:i−1] = 0 and ṗ∗i = 0. From (14), we obtain the following unforced dynamics:

ṗ̃i = fi(p̃i, 0).   (15)
Due to Assumption 1, it follows from Lemma 1 that p̃i = 0 is locally asymptotically stable with respect to (15). Further, fi(p̃i, p̃[1:i−1]) + ṗ∗i is continuously differentiable in p̃[1:i−1] and ṗ∗i. Thus it follows from Lemma 5.4 in [14] that (14) is locally input-to-state stable with p̃[1:i−1] and ṗ∗i as inputs. □

Theorem 2 shows that the proposed strategy can be utilized to drive the followers to remote desired positions by using several leaders.

4. DOUBLE-INTEGRATOR CASE

4.1. Problem formulation
Consider the following N double-integrator modeled agents over a directed acyclic graph G:

ṗi = vi,  v̇i = ui,  i = 1, . . . , N,   (16)

where pi ∈ Rn, vi ∈ Rn, and ui ∈ Rn denote the position, velocity, and control input of agent i with respect to gΣ. Let agent i ∈ {1, . . . , L} be a leader and agent i ∈ {L + 1, . . . , N} be a follower. We assume that leader i senses pi and vi while follower i senses piji for j ∈ Ni and vii. Let p∗ ∈ RnN be given with respect to gΣ. Then p∗i is given to leader i while ∥p∗ji∥ for j ∈ Ni is given to follower i. The objective of the agents (16) is to achieve p → p∗. The formation control problem is then stated as follows:

Problem 2: Consider the double-integrator modeled agents (16) over a directed acyclic sensing graph G. Assume that leader i ∈ {1, . . . , L} senses pi and vi while follower i ∈ {L + 1, . . . , N} senses pij − pii for j ∈ Ni and vii. Let p∗i be given to leader i and ∥p∗ji∥ for j ∈ Ni be given to follower i. Design control laws for the leaders and followers such that p∗ is asymptotically stable with respect to (16).
4.2. Formation control strategy
Consider the double-integrator modeled agents (16). For leader i, a control law can be designed as

ui = p̈∗i + kLv(ṗ∗i − vi) + kLp(p∗i − pi),   (17)
where kLv > 0 and kLp > 0. To design a control law for the followers, define a potential function ψi : Rn(i−1) × Rn × Rn → R for follower i as

ψi(p[1:i−1], pi, vi) := (1/2)∥vi∥² + (kFp/4) ∑_{j∈Ni} (∥pji∥² − ∥p∗ji∥²)²,
where kFp > 0. A control law for follower i can be designed as

ui = −kFv ∇vi ψi(p[1:i−1], pi, vi) − ∇pi ψi(p[1:i−1], pi, vi) = −kFv vi + kFp ∑_{j∈Ni} (∥pji∥² − ∥p∗ji∥²) pji,   (18)

where kFv > 0. Note that the control law (18) can be written with respect to iΣ as

uii = −kFv vii + kFp ∑_{j∈Ni} (∥piji∥² − ∥p∗ji∥²) piji,

which shows that it can be implemented in iΣ by using piji for j ∈ Ni and vii.

Let p̃i := p∗i − pi and ṽi := ṗ∗i − vi. Then the error dynamics of the agents can be written as

ṗ̃i = ṽi,  v̇̃i = −kLv ṽi − kLp p̃i,  i ∈ {1, . . . , L},   (19a)
ṗ̃i = ṽi,  v̇̃i = fi(p̃i, p̃[1:i−1]) − kFv ṽi,  i ∈ {L + 1, . . . , N},   (19b)
where fi is defined in (8).

4.3. Formation regulation
Let p∗ be constant. The leaders can be described as

ṗ̃[1:L] = ṽ[1:L],   (20a)
v̇̃[1:L] = −kLp p̃[1:L] − kLv ṽ[1:L].   (20b)
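Before turning to the cascade analysis, the leader law (17) and follower law (18) can be exercised numerically. The following minimal regulation sketch (hypothetical 4-agent team, assumed gains and step size, not taken from the paper) integrates (16) by forward Euler with constant p∗, so the feedforward terms of (17) vanish.

```python
import numpy as np

# Illustrative double-integrator team (not from the paper): leaders 0-2 use (17),
# follower 3 uses (18) with neighbors {1, 2}; p_star is constant.
p_star = np.array([[0.0, 0.0], [-0.5, -1.0], [0.5, -1.0], [0.0, -2.0]])
neighbors = {3: [1, 2]}
L, kLp, kLv, kFp, kFv, dt = 3, 1.0, 2.0, 1.0, 2.0, 0.005

p = p_star + np.array([[0.1, -0.1], [-0.1, 0.1], [0.1, 0.1], [-0.1, -0.2]])
v = np.zeros_like(p)

for _ in range(20000):                                  # integrate (16) up to t = 100
    u = np.zeros_like(p)
    u[:L] = -kLv * v[:L] + kLp * (p_star[:L] - p[:L])   # leader law (17), p_star constant
    for i, Ni in neighbors.items():
        u[i] = -kFv * v[i]                              # velocity damping term of (18)
        for j in Ni:
            p_ji = p[j] - p[i]
            d2 = np.sum((p_star[j] - p_star[i]) ** 2)
            u[i] += kFp * (p_ji @ p_ji - d2) * p_ji     # distance (gradient) term of (18)
    p, v = p + dt * v, v + dt * u

print(np.max(np.abs(p - p_star)))                       # near zero
```

The team settles to p∗ with vanishing velocities, which is consistent with the exponential stability of (20) and the local results established for the followers below.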
Then the origin [p̃[1:L]T ṽ[1:L]T]T = 0 is exponentially stable with respect to (20). Follower i can be described by the following cascade system:

ṗ̃i = ṽi,  v̇̃i = fi(p̃i, p̃[1:i−1]) − kFv ṽi,   (21a)
ṗ̃[1:i−1] = ṽ[1:i−1],  v̇̃[1:i−1] = f[1:i−1](p̃[1:i−1]) − [kLv ṽ[1:L]T  kFv ṽ[L+1:i−1]T]T,   (21b)

where f[1:i−1] is defined in (10). We first show that (21a) is locally input-to-state stable with p̃[1:i−1] as input based on Lemma 5.4 in [14]. To apply that lemma to (21a), we show that the origin [p̃iT ṽiT]T = 0 is locally asymptotically stable with respect to the following unforced dynamics:

ṗ̃i = ṽi,  v̇̃i = fi(p̃i, 0) − kFv ṽi.   (22)
Lemma 3: Let Assumption 1 hold. For i ∈ V[L+1:N], the origin [p̃iT ṽiT]T = 0 is locally asymptotically stable with respect to (22).

Proof: Local asymptotic stability of [p̃iT ṽiT]T = 0 with respect to (22) is equivalent to local asymptotic stability of [piT viT]T = [p∗iT 0T]T with respect to

ṗi = ∇vi ψi(p∗[1:i−1], pi, vi),
v̇i = −∇pi ψi(p∗[1:i−1], pi, vi) − kFv ∇vi ψi(p∗[1:i−1], pi, vi).   (23)

We then consider the following one-parameter family Hλ of dynamical systems:

ṗi = −λ ∇pi ψi(p∗[1:i−1], pi, vi) + (1 − λ) ∇vi ψi(p∗[1:i−1], pi, vi),   (24a)
v̇i = −(1 − λ) ∇pi ψi(p∗[1:i−1], pi, vi) − kFv ∇vi ψi(p∗[1:i−1], pi, vi),   (24b)

where λ ∈ [0, 1]. When λ = 0, (24) is identical to (23). When λ = 1, (24) becomes

ṗi = −∇pi ψi(p∗[1:i−1], pi, vi),   (25a)
v̇i = −kFv ∇vi ψi(p∗[1:i−1], pi, vi).   (25b)
According to [16], for all λ ∈ [0, 1], parameterized systems of the form (24) share an identical equilibrium set, and the set has identical local stability properties. It follows from Lemma 1 that pi = p∗i is locally asymptotically stable with respect to (25a). Further, it is obvious that vi = 0 is exponentially stable with respect to (25b). Thus [piT viT]T = [p∗iT 0T]T is locally asymptotically stable with respect to (25). Then, based on the result in [16], we conclude that [piT viT]T = [p∗iT 0T]T is locally asymptotically stable with respect to (23). Therefore, [p̃iT ṽiT]T = 0 is locally asymptotically stable with respect to (22). □

It then follows that (21a) is locally input-to-state stable with p̃[1:i−1] as input:

Lemma 4: Let Assumption 1 hold. For i ∈ V[L+1:N], (21a) is locally input-to-state stable with p̃[1:i−1] as input.

Proof: It follows from Lemma 3 that [p̃iT ṽiT]T = 0 is locally asymptotically stable with respect to (22). Further, the right-hand side of (21a) is continuously differentiable in p̃[1:i−1]. Thus it follows from Lemma 5.4 in [14] that (21a) is locally input-to-state stable with p̃[1:i−1] as input. □

Based on mathematical induction, we show local asymptotic stability of the origin [p̃T ṽT]T = 0 with respect to

ṗ̃i = ṽi,  v̇̃i = −kLp p̃i − kLv ṽi,  i ∈ V[1:L],   (26a)
ṗ̃i = ṽi,  v̇̃i = fi(p̃i, p̃[1:i−1]) − kFv ṽi,  i ∈ V[L+1:N].   (26b)
Theorem 3: Let Assumption 1 hold. Then [p̃T ṽT]T = 0 is locally asymptotically stable with respect to (26).

Proof: Consider the following cascade system:

ṗ̃L+1 = ṽL+1,  v̇̃L+1 = fL+1(p̃L+1, p̃[1:L]) − kFv ṽL+1,   (27a)
ṗ̃[1:L] = ṽ[1:L],  v̇̃[1:L] = −kLp p̃[1:L] − kLv ṽ[1:L].   (27b)

It is obvious that the origin [p̃[1:L]T ṽ[1:L]T]T = 0 is exponentially stable with respect to (27b). Further, it follows from Lemma 4 that (27a) is locally input-to-state stable with p̃[1:L] as input. It then follows from Lemma 5.6 in [14] that the origin [p̃[1:L+1]T ṽ[1:L+1]T]T = 0 is locally asymptotically stable with respect to (27).

Next, suppose that, for some i ∈ V[L+1:N], the origin of (21b) is locally asymptotically stable. From Lemma 4, (21a) is locally input-to-state stable with p̃[1:i−1] as input. Thus it follows from Lemma 5.6 in [14] that the origin [p̃[1:i]T ṽ[1:i]T]T = 0 is locally asymptotically stable with respect to (21). By mathematical induction, [p̃[1:i]T ṽ[1:i]T]T = 0 is locally asymptotically stable with respect to (21) for any i ∈ V[L+1:N], which completes the proof. □

4.4. Formation tracking
Assume that p∗ is not constant. The error dynamics for agent i ∈ V[L+1:N] can be arranged as

ṗ̃i = ṽi,  v̇̃i = fi(p̃i, p̃[1:i−1]) − kFv ṽi + p̈∗i + kFv ṗ∗i,   (28)

where fi is defined in (8). The following theorem confirms that, for i ∈ V[L+1:N], p̃i is bounded when ∥p̃[1:i−1]∥ and ∥p̈∗i + kFv ṗ∗i∥ are sufficiently small:

Theorem 4: Let Assumption 1 hold. For i ∈ V[L+1:N], (28) is locally input-to-state stable with p̃[1:i−1] and p̈∗i + kFv ṗ∗i as inputs.

Proof: Suppose that p̃[1:i−1] = 0 and p̈∗i + kFv ṗ∗i = 0. From (28), we obtain the following unforced dynamics:

ṗ̃i = ṽi,  v̇̃i = fi(p̃i, 0) − kFv ṽi.   (29)

It then follows from Lemma 3 that [p̃iT ṽiT]T = 0 is locally asymptotically stable with respect to (29). Further, the right-hand side of (28) is continuously differentiable in p̃[1:i−1] and p̈∗i + kFv ṗ∗i. Thus it follows from Lemma 5.4 in [14] that (28) is locally input-to-state stable with p̃[1:i−1] and p̈∗i + kFv ṗ∗i as inputs. □

Fig. 3. Formation tracking: single-integrator case.

Fig. 4. Formation tracking: double-integrator case.

5. SIMULATION RESULTS

In this section, we present formation tracking simulation results of single- and double-integrators on the plane
under the proposed control strategy. The sensing graph for the agents is depicted in Fig. 1. We assume that agents 1, 2, and 3 are leaders and the remaining agents are followers for both the single- and double-integrator cases. In the simulation of formation tracking, we assume that the desired positions are given as follows:
• p∗1(0) = (0, 0), p∗2(0) = (−0.5, −1), p∗3(0) = (0.5, −1), p∗4(0) = (0, −2), p∗5(0) = (−1, −2), p∗6(0) = (1, −2), p∗7(0) = (−0.5, −3), p∗8(0) = (0.5, −3), p∗9(0) = (−1.5, −3), and p∗10(0) = (1.5, −3);
• p∗i(t) = p∗i(0) + (t, t) if 0 < t ≤ 10;
• p∗i(t) = p∗i(0) + (10, 10) if t > 10.
Figs. 3 and 4 show the positions of the single- and double-integrators, respectively, under the proposed control strategy for 0 ≤ t ≤ 20. As shown in Figs. 3 and 4, the position
errors are bounded as expected. Moreover, the position errors asymptotically converge to zero when t > 10.
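The single-integrator tracking simulation described above can be sketched as follows. The sensing graph below is a hypothetical stand-in chosen per Procedure 1 (Fig. 1 itself is not reproduced here), and the gains and step size are assumptions; the double-integrator case is analogous with laws (17) and (18).

```python
import numpy as np

# Sketch of the Section 5 single-integrator tracking simulation. Desired initial
# positions follow the paper; the DAG below is a hypothetical sensing graph
# consistent with Procedure 1, and the gains/step size are assumed values.
p0 = np.array([[0, 0], [-0.5, -1], [0.5, -1], [0, -2], [-1, -2], [1, -2],
               [-0.5, -3], [0.5, -3], [-1.5, -3], [1.5, -3]], dtype=float)
neighbors = {3: [1, 2], 4: [1, 3], 5: [2, 3], 6: [3, 4],   # hypothetical DAG:
             7: [3, 5], 8: [4, 6], 9: [5, 7]}              # followers 3-9 (0-indexed)
L, kLp, kFp, dt = 3, 5.0, 5.0, 0.001

def p_star(t):   # ramp for 0 < t <= 10, then constant offset (10, 10)
    return p0 + min(t, 10.0) * np.array([1.0, 1.0])

p = p0.copy()    # start exactly at p_star(0)
for k in range(20000):                                 # 0 <= t <= 20
    t = k * dt
    u = np.zeros_like(p)
    psd = np.array([1.0, 1.0]) if t < 10.0 else np.zeros(2)
    u[:L] = psd + kLp * (p_star(t)[:L] - p[:L])        # leader law (4) with feedforward
    for i, Ni in neighbors.items():                    # follower law (6): no feedforward,
        for j in Ni:                                   # so followers lag during the ramp
            p_ji = p[j] - p[i]
            d2 = np.sum((p0[j] - p0[i]) ** 2)          # desired distances are constant
            u[i] += kFp * (p_ji @ p_ji - d2) * p_ji
    p += dt * u

print(np.max(np.abs(p - p_star(20.0))))  # near zero: errors vanish once p* stops
```

During the ramp the followers trail their time-varying destinations by a bounded offset, and after t = 10 the errors decay, mirroring the behavior reported in Figs. 3 and 4.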
6. CONCLUSION

We proposed a leader-follower type formation control strategy that allows the majority of agents to reach their destinations without costly position sensors. We applied the proposed control strategy to single- and double-integrators. Further, we studied formation tracking under the proposed control strategy. There are several further research directions. First, an immediate direction is to investigate global stability properties. Second, it would be interesting to study perfect formation tracking control. Finally, it would be interesting to utilize the bearing information contained in the relative position measurements to achieve the desired formation.

REFERENCES

[1] K.-K. Oh, M.-C. Park, and H.-S. Ahn, "A survey of multi-agent formation control," Automatica, vol. 53, no. 3, pp. 424-440, 2015.
[2] Y. Dai and S. G. Lee, "Formation control of mobile robots with obstacle avoidance based on GOACM using onboard sensors," International Journal of Control, Automation and Systems, vol. 12, no. 5, pp. 1077-1089, 2014.
[3] B. S. Park and S. J. Yoo, "Adaptive leader-follower formation control of mobile robots with unknown skidding and slipping effects," International Journal of Control, Automation and Systems, vol. 13, no. 3, pp. 587-594, 2015.
[4] Y. Zheng and L. Wang, "Consensus of switched multi-agent systems," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 63, no. 3, pp. 314-318, 2016.
[5] Y. Zheng, Y. Zhu, and L. Wang, "Consensus of heterogeneous multi-agent systems," IET Control Theory & Applications, vol. 5, no. 16, pp. 1881-1888, 2011.
[6] L. Krick, M. E. Broucke, and B. A. Francis, "Stabilization of infinitesimally rigid formations of multi-robot networks," International Journal of Control, vol. 82, no. 3, pp. 423-439, 2009.
[7] C. Yu, B. D. O. Anderson, S. Dasgupta, and B. Fidan, "Control of minimally persistent formations in the plane," SIAM Journal on Control and Optimization, vol. 48, no. 1, pp. 206-233, 2009.
[8] T. H. Summers, C. Yu, S. Dasgupta, and B. D. O. Anderson, "Control of minimally persistent leader-remote-follower and coleader formations in the plane," IEEE Transactions on Automatic Control, vol. 56, no. 12, pp. 2778-2792, 2011.
[9] M. Cao, C. Yu, and B. D. O. Anderson, "Formation control using range-only measurements," Automatica, vol. 47, no. 4, pp. 776-781, 2011.
[10] L. Asimow and B. Roth, "The rigidity of graphs II," Journal of Mathematical Analysis and Applications, vol. 68, no. 1, pp. 171-190, 1979.
[11] W. Whiteley, "Rigidity and scene analysis," in Handbook of Discrete and Computational Geometry, J. Goodman and J. O'Rourke, Eds., CRC Press, Boca Raton, FL, 2004.
[12] G. Laman, "On graphs and rigidity of plane skeletal structures," Journal of Engineering Mathematics, vol. 4, no. 4, pp. 331-340, 1970.
[13] T. S. Tay and W. Whiteley, "Generating isostatic frameworks," Structural Topology, vol. 11, 1985.
[14] H. Khalil, Nonlinear Systems, 2nd ed., Prentice-Hall, Upper Saddle River, NJ, 1996.
[15] P.-A. Absil and K. Kurdyka, "On the stable equilibrium points of gradient systems," Systems and Control Letters, vol. 55, no. 7, pp. 573-577, 2006.
[16] F. Dörfler and F. Bullo, "Topological equivalence of a structure-preserving power network model and a non-uniform Kuramoto model of coupled oscillators," Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference, 2011, pp. 7099-7104.
Kwang-Kyo Oh received the B.S. degree in mineral and petroleum engineering and the M.S. degree in electrical and computer engineering from Seoul National University, Seoul, Korea, in 1998 and 2001, respectively, and the Ph.D. degree in mechatronics from Gwangju Institute of Science and Technology, Gwangju, Korea, in 2013. He is currently with Korea Institute of Industrial Technology, Gwangju, Korea. His research interests are in the areas of control theory and applications with emphasis on cooperative control of multi-agent systems. Hyo-Sung Ahn is a Professor and Dasan Professor at the School of Mechanical Engineering, Gwangju Institute of Science and Technology (GIST), Gwangju, Korea. He received the B.S. and M.S. degrees in astronomy from Yonsei University, Seoul, Korea, in 1998 and 2000, respectively, the M.S. degree in electrical engineering from the University of North Dakota, Grand Forks, in 2003, and the Ph.D. degree in electrical engineering from Utah State University, Logan, in 2006. Since July 2007, he has been with the School of Mechatronics and School of Mechanical Engineering. Before joining GIST, he was a Senior Researcher with the Electronics and Telecommunications Research Institute, Daejeon, Korea. He is the author of the research monograph Iterative Learning Control: Robustness and Monotonic Convergence for Interval Systems (Springer-Verlag, 2007). His research interests include distributed control, aerospace navigation and control, network localization, and learning control.