J Optim Theory Appl DOI 10.1007/s10957-016-0984-0
Hierarchical Control for the Wave Equation with a Moving Boundary

Isaías Pereira de Jesus
Received: 29 March 2016 / Accepted: 9 July 2016 © Springer Science+Business Media New York 2016
Abstract This paper studies hierarchical control for the one-dimensional wave equation in intervals with a moving boundary. This equation models the motion of a string in which one endpoint is fixed and the other is moving. When the speed of the moving endpoint is less than the characteristic speed, the controllability of this equation is established. We assume that we can act on the dynamics of the system through a hierarchy of controls. According to the formulation given by Stackelberg (Marktform und Gleichgewicht. Springer, Berlin, 1934), there are local controls called followers and global controls called leaders. In fact, one considers situations with two cost (objective) functions. One possible approach is to split the control into two parts, one thought of as "the leader" and the other as "the follower." This situation is studied in the paper, with one of the cost functions being of the controllability type. We present the following results: the existence and uniqueness of a Nash equilibrium, the approximate controllability with respect to the leader control, and the optimality system for the leader control.

Keywords Hierarchical control · Stackelberg strategy · Approximate controllability · Optimality system

Mathematics Subject Classification 35Q10 · 35B37 · 35B40
Communicated by Roland Glowinski.
Isaías Pereira de Jesus
[email protected]

DM, Universidade Federal do Piauí, Teresina, PI 64049-550, Brazil
1 Introduction

In classical control theory, we usually find a state equation or system and one control whose mission is to achieve a predetermined goal. Frequently (but not always), the goal is to minimize a cost functional over a prescribed family of admissible controls. A more interesting situation arises when several (in general, conflicting or contradictory) objectives are considered. This may happen, for example, if the cost function is the sum of several terms and it is not clear how to average them. It is also natural to have more than one control acting on the equation. In these cases, we are led to consider multi-objective control problems. In contrast to the mono-objective case, and depending on the characteristics of the problem, there are many strategies for the choice of good controls. Moreover, these strategies can be cooperative (when the controls mutually cooperate in order to achieve some goals) or noncooperative. There exist several equilibrium concepts for multi-objective problems, with origin in game theory, mainly motivated by economics. Each of them determines a strategy. Let us mention the noncooperative optimization strategy proposed by Nash [1], the Pareto cooperative strategy [2], and the Stackelberg hierarchical-cooperative strategy [3]. In the context of game theory, Stackelberg's strategy is normally applied in games where some players are in a better position than others. The dominant players are called the leaders and the subdominant players the followers. One situation where this concept is very convincing is when the players choose their strategies one after another, and the player who makes the first move has an advantage. This interpretation, however, is less convincing in continuous time, where players choose their strategies at each instant; one could instead think of the situation where one player has a larger information structure. From a mathematical point of view, Stackelberg's concept is successful because it leads to results.
In the context of the control of PDEs, a relevant question is whether one is able to steer the system to a desired state (exactly or approximately) by applying controls that correspond to one of these strategies. This paper was inspired by ideas of Lions [4]; we investigate a similar question of hierarchical control employing the Stackelberg strategy in the case of time-dependent domains. We list below some related works on this subject to date.
• The papers by Lions [5,6], where the author gives some results concerning Pareto and Stackelberg strategies, respectively.
• The paper by Díaz and Lions [7], where the approximate controllability of a system is established following a Stackelberg–Nash strategy, and the extension in Díaz [8], which provides a characterization of the solution by means of Fenchel–Rockafellar duality theory.
• The papers [9,10], where Glowinski, Ramos and Periaux analyze the Nash equilibrium for constraints given by linear parabolic and Burgers equations from the mathematical and numerical viewpoints.
• The Stackelberg–Nash strategy for the Stokes systems has been studied by González et al. in [11].
• In Limaco et al. [12], the authors present the Stackelberg–Nash equilibrium in the context of the linear heat equation in non-cylindrical domains.
• The paper by Araruna et al. [13], where the authors study the Stackelberg–Nash exact controllability for linear and semilinear parabolic equations.
• The paper by Ramos and Roubicek [14], where the existence of a Nash equilibrium is proved for a nonlinear distributed parameter predator–prey system and a conceptual approximation algorithm is proposed and analyzed.

In this paper, we present the following results: the existence and uniqueness of a Nash equilibrium, the approximate controllability with respect to the leader control, and the optimality system for the leader control. The remainder of the paper is organized as follows. In Sect. 2, we present the problem. Section 3 is devoted to establishing the existence and uniqueness of a Nash equilibrium. In Sect. 4, we study the approximate controllability with respect to the leader control. In Sect. 5, we present the optimality system for the leader control. Finally, in Sect. 6 we present the conclusions.
2 Problem Formulation

As in [15], given $T > 0$, we consider the non-cylindrical domain
$$\widehat{Q} := \{(x,t) \in \mathbb{R}^2 :\ 0 < x < \alpha_k(t),\ t \in\,]0,T[\,\},$$
where $\alpha_k(t) := 1 + kt$, $0 < k < 1$. Its lateral boundary is defined by $\widehat{\Sigma} := \widehat{\Sigma}_0 \cup \widehat{\Sigma}_0^{*}$, with
$$\widehat{\Sigma}_0 := \{(0,t) :\ t \in\,]0,T[\,\} \quad\text{and}\quad \widehat{\Sigma}_0^{*} := \widehat{\Sigma} \setminus \widehat{\Sigma}_0 = \{(\alpha_k(t), t) :\ t \in\,]0,T[\,\}.$$
We also denote by $\Omega_t$ and $\Omega_0$ the intervals $]0, \alpha_k(t)[$ and $]0,1[$, respectively. Consider the following wave equation in the non-cylindrical domain $\widehat{Q}$:
$$\begin{cases} u'' - u_{xx} = 0 &\text{in } \widehat{Q},\\ u(x,t) = \widehat{w} \ \text{on } \widehat{\Sigma}_0, \quad u(x,t) = 0 \ \text{on } \widehat{\Sigma}_0^{*},\\ u(x,0) = u^0(x), \quad u'(x,0) = u^1(x) &\text{in } \Omega_0, \end{cases} \tag{1}$$
where $u$ is the state variable, $\widehat{w}$ is the control variable and $(u^0(x), u^1(x)) \in L^2(0,1) \times H^{-1}(0,1)$. By $u' = u'(x,t)$ we represent the derivative $\dfrac{\partial u}{\partial t}$ and by $u_{xx} = u_{xx}(x,t)$ the second-order partial derivative $\dfrac{\partial^2 u}{\partial x^2}$. Equation (1) models the motion of a string
with a fixed endpoint and a moving one. The constant $k$ is called the speed of the moving endpoint. In spite of a vast literature on controllability problems for the wave equation in cylindrical domains, there are only a few works dealing with the non-cylindrical case. We refer to [16–22] for some known results in this direction. In this article, motivated by the arguments contained in the work of Lions [4], we investigate a similar question of hierarchical control for Eq. (1), employing the Stackelberg strategy in the case of time-dependent domains.

Following the work of Lions [4], we divide $\widehat{\Sigma}_0$ into two disjoint parts,
$$\widehat{\Sigma}_0 = \widehat{\Sigma}_1 \cup \widehat{\Sigma}_2, \tag{2}$$
and consider
$$\widehat{w} = \{\widehat{w}_1, \widehat{w}_2\}, \qquad \widehat{w}_i := \text{control function in } L^2(\widehat{\Sigma}_i),\ i = 1,2. \tag{3}$$
Thus, we observe that the system (1) can be rewritten as follows:
$$\begin{cases} u'' - u_{xx} = 0 &\text{in } \widehat{Q},\\ u(x,t) = \begin{cases} \widehat{w}_1 &\text{on } \widehat{\Sigma}_1,\\ \widehat{w}_2 &\text{on } \widehat{\Sigma}_2,\\ 0 &\text{on } \widehat{\Sigma}\setminus\widehat{\Sigma}_0, \end{cases}\\ u(x,0) = u^0(x), \quad u'(x,0) = u^1(x) &\text{in } \Omega_0. \end{cases} \tag{4}$$
In the decomposition (2)–(3), we establish a hierarchy. We think of $\widehat{w}_1$ as the "main" control, the leader, and of $\widehat{w}_2$ as the follower, in Stackelberg terminology. Associated with the solution $u = u(x,t)$ of (4), we will consider the (secondary) functional
$$\widehat{J}_2(\widehat{w}_1, \widehat{w}_2) := \frac{1}{2}\iint_{\widehat{Q}} \big(u(\widehat{w}_1, \widehat{w}_2) - u_2\big)^2\, dx\, dt + \frac{\sigma}{2}\int_{\widehat{\Sigma}_2} \widehat{w}_2^{\,2}\, d\widehat{\Sigma}, \tag{5}$$
and the (main) functional
$$\widehat{J}(\widehat{w}_1) := \frac{1}{2}\int_{\widehat{\Sigma}_1} \widehat{w}_1^{\,2}\, d\widehat{\Sigma}, \tag{6}$$
where $\sigma > 0$ is a constant and $u_2$ is a given function in $L^2(\widehat{Q})$.

Remark 2.1 From the regularity and uniqueness of the solution of (4) (see Remark 2.4), the cost functionals $\widehat{J}_2$ and $\widehat{J}$ are well defined.

The control problem that we will consider is as follows: the follower $\widehat{w}_2$ assumes that the leader $\widehat{w}_1$ has made a choice. Then, it tries to find an equilibrium of the cost $\widehat{J}_2$; that is, it looks for a control $\widehat{w}_2 = F(\widehat{w}_1)$ (depending on $\widehat{w}_1$), satisfying
$$\widehat{J}_2(\widehat{w}_1, \widehat{w}_2) \le \widehat{J}_2(\widehat{w}_1, \widetilde{w}_2), \quad \forall\, \widetilde{w}_2 \in L^2(\widehat{\Sigma}_2). \tag{7}$$
The control $\widehat{w}_2$, solution of (7), is called a Nash equilibrium for the cost $\widehat{J}_2$, and it depends on $\widehat{w}_1$ (cf. Aubin [23]).

Remark 2.2 Put another way, if the leader $\widehat{w}_1$ makes a choice, then the follower $\widehat{w}_2$ also makes a choice, depending on $\widehat{w}_1$, which minimizes the cost $\widehat{J}_2$; that is,
$$\widehat{J}_2(\widehat{w}_1, \widehat{w}_2) = \inf_{\widetilde{w}_2 \in L^2(\widehat{\Sigma}_2)} \widehat{J}_2(\widehat{w}_1, \widetilde{w}_2). \tag{8}$$
This is equivalent to (7). This process is called the Stackelberg–Nash strategy; see Díaz and Lions [7]. After this, we consider the state $u(\widehat{w}_1, F(\widehat{w}_1))$ given by the solution of
$$\begin{cases} u'' - u_{xx} = 0 &\text{in } \widehat{Q},\\ u(x,t) = \begin{cases} \widehat{w}_1 &\text{on } \widehat{\Sigma}_1,\\ F(\widehat{w}_1) &\text{on } \widehat{\Sigma}_2,\\ 0 &\text{on } \widehat{\Sigma}\setminus\widehat{\Sigma}_0, \end{cases}\\ u(x,0) = u^0(x), \quad u'(x,0) = u^1(x) &\text{in } \Omega_0. \end{cases} \tag{9}$$
We will look for an optimal control $\widehat{w}_1$ such that
$$\widehat{J}(\widehat{w}_1, F(\widehat{w}_1)) = \inf_{\widetilde{w}_1 \in L^2(\widehat{\Sigma}_1)} \widehat{J}(\widetilde{w}_1, F(\widetilde{w}_1)), \tag{10}$$
subject to the following restriction of the approximate controllability type:
$$\big(u(x,T; \widehat{w}_1, F(\widehat{w}_1)),\ u'(x,T; \widehat{w}_1, F(\widehat{w}_1))\big) \in B_{L^2(\Omega_T)}(u^0, \rho_0) \times B_{H^{-1}(\Omega_T)}(u^1, \rho_1), \tag{11}$$
where $B_X(C, r)$ denotes the ball in $X$ with center $C$ and radius $r$. To explain this optimal problem, we are going to consider the following subproblems:
• Problem 1 For any fixed leader control $\widehat{w}_1$, find the follower control $\widehat{w}_2 = F(\widehat{w}_1)$ (depending on $\widehat{w}_1$) and the associated state $u$, solution of (4), satisfying the condition (8) (Nash equilibrium) related to $\widehat{J}_2$, defined in (5).
• Problem 2 Assuming that the existence of the Nash equilibrium $\widehat{w}_2$ has been proved, show that, when $\widehat{w}_1$ varies in $L^2(\widehat{\Sigma}_1)$, the solutions $\big(u(x,t; \widehat{w}_1, \widehat{w}_2), u'(x,t; \widehat{w}_1, \widehat{w}_2)\big)$ of the state Eq. (4), evaluated at $t = T$, that is, $\big(u(x,T; \widehat{w}_1, \widehat{w}_2), u'(x,T; \widehat{w}_1, \widehat{w}_2)\big)$, generate a dense subset of $L^2(\Omega_T) \times H^{-1}(\Omega_T)$.

Remark 2.3 By the linearity of system (9), without loss of generality we may assume that $u^0 = 0 = u^1$.

Following the work of Lions [4], we divide $\Sigma_0$ into two disjoint parts,
$$\Sigma_0 = \Sigma_1 \cup \Sigma_2, \tag{12}$$
and consider
$$w = \{w_1, w_2\}, \qquad w_i := \text{control function in } L^2(\Sigma_i),\ i = 1,2. \tag{13}$$
We can also write $w = w_1 + w_2$, with
$$\Sigma_0 = \Sigma_1 = \Sigma_2. \tag{14}$$
Note that when $(x,t)$ varies in $\widehat{Q}$, the point $(y,t)$, with $y = \dfrac{x}{\alpha_k(t)}$, varies in $Q = \Omega\,\times\,]0,T[$, where $\Omega := \,]0,1[$. Then the application
$$\zeta : \widehat{Q} \to Q, \qquad \zeta(x,t) = (y,t),$$
is of class $C^2$, and its inverse $\zeta^{-1}$ is also of class $C^2$. Therefore, the change of variables $u(x,t) = v(y,t)$ transforms the initial-boundary value problem (4) into the equivalent system
$$\begin{cases} v'' + Lv = 0 &\text{in } Q,\\ v(y,t) = \begin{cases} w_1 &\text{on } \Sigma_1,\\ w_2 &\text{on } \Sigma_2,\\ 0 &\text{on } \Sigma\setminus\Sigma_0, \end{cases}\\ v(y,0) = v^0(y), \quad v'(y,0) = v^1(y), & y \in \Omega, \end{cases} \tag{15}$$
where
$$Lv = -\big(\beta_k(y,t)\, v_y\big)_y + \frac{\gamma_k(y)}{\alpha_k(t)}\, v'_y, \qquad \beta_k(y,t) = \frac{1 - k^2 y^2}{\alpha_k^2(t)}, \qquad \gamma_k(y) = -2ky,$$
$$v^0(y) = u^0(x), \qquad v^1(y) = u^1(x) + k y\, u^0_x(x),$$
$$\Sigma = \Sigma_0 \cup \Sigma_0^{*}, \quad \Sigma_0 = \{(0,t) : 0 < t < T\}, \quad \Sigma_0^{*} = \{(1,t) : 0 < t < T\}, \quad \Sigma_0 = \Sigma_1 \cup \Sigma_2.$$
We consider the coefficients of the operator $L$ satisfying the following conditions:
(H1) $\beta_k(y,t) \in C^1(Q)$;
(H2) $\gamma_k(y) \in W^{1,\infty}(\Omega)$.
In this way, it is enough to investigate the control problem for the equivalent problem (15).

• Cost functionals in the cylinder $Q$. Via the diffeomorphism $\zeta$, which transforms $\widehat{Q}$ into $Q$, we transform the cost functionals $\widehat{J}_2, \widehat{J}$ into the cost functionals $J_2, J$ defined by
$$J_2(w_1, w_2) := \frac{1}{2}\int_0^T\!\!\int_\Omega \alpha_k(t)\big[v(w_1, w_2) - v_2(y,t)\big]^2\, dy\, dt + \frac{\sigma}{2}\int_{\Sigma_2} w_2^2\, d\Sigma \tag{16}$$
and
$$J(w_1) := \frac{1}{2}\int_{\Sigma_1} w_1^2\, d\Sigma, \tag{17}$$
where $\sigma > 0$ is a constant and $v_2(y,t)$ is a given function in $L^2(\Omega \times (0,T))$.

Remark 2.4 Using a technique similar to that in [22], we can prove the following: for each $v^0 \in L^2(\Omega)$, $v^1 \in H^{-1}(\Omega)$ and $w_i \in L^2(\Sigma_i)$, $i = 1,2$, there exists exactly one solution $v$ to (15) in the sense of transposition, with $v \in C\big([0,T]; L^2(\Omega)\big) \cap C^1\big([0,T]; H^{-1}(\Omega)\big)$. Thus, in particular, the cost functionals $J_2$ and $J$ are well defined. Using the diffeomorphism $\zeta^{-1}(y,t) = (x,t)$, from $Q$ onto $\widehat{Q}$, we obtain a unique global weak solution $u$ to the problem (4) with the regularity $u \in C\big([0,T]; L^2(\Omega_t)\big) \cap C^1\big([0,T]; H^{-1}(\Omega_t)\big)$.

Associated with the functionals $J_2$ and $J$ defined above, we will consider the following subproblems:
• Problem 3 For any fixed leader control $w_1$, find the follower control $w_2$ (depending on $w_1$) and the associated state $v$, solution of (15), satisfying (Nash equilibrium)
$$J_2(w_1, w_2) = \inf_{\widetilde{w}_2 \in L^2(\Sigma_2)} J_2(w_1, \widetilde{w}_2), \tag{18}$$
related to $J_2$ defined in (16).
• Problem 4 Assuming that the existence of the Nash equilibrium $w_2$ has been proved, show that, when $w_1$ varies in $L^2(\Sigma_1)$, the solutions $\big(v(y,t; w_1, w_2), v'(y,t; w_1, w_2)\big)$ of the state Eq. (15), evaluated at $t = T$, that is, $\big(v(y,T; w_1, w_2), v'(y,T; w_1, w_2)\big)$, generate a dense subset of $L^2(\Omega) \times H^{-1}(\Omega)$.
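The formula for $L$ can be checked directly. The sketch below (using sympy; the profile $v$ is an arbitrary smooth test function chosen only for illustration) pulls a function $v(y,t)$ back to the moving domain via $u(x,t) = v(x/\alpha_k(t), t)$ and confirms that $u'' - u_{xx}$ coincides with $v'' + Lv$ composed with $\zeta$, with $\beta_k$ and $\gamma_k$ as above:

```python
import sympy as sp

x, t, y, k = sp.symbols('x t y k')
a = 1 + k*t                                  # alpha_k(t) = 1 + kt

# An arbitrary smooth test profile v(y, t) in the cylinder (illustration only).
v = sp.sin(3*y)*sp.exp(2*t) + y**3*t

# The operator L of (15): Lv = -(beta_k v_y)_y + (gamma_k/alpha_k) v'_y.
beta = (1 - k**2*y**2)/a**2
gamma = -2*k*y
Lv = -sp.diff(beta*sp.diff(v, y), y) + (gamma/a)*sp.diff(v, y, t)
cyl = sp.diff(v, t, 2) + Lv                  # v'' + Lv, as a function of (y, t)

# Pull back to the moving domain: u(x, t) = v(x/alpha_k(t), t).
u = v.subs(y, x/a)
wave = sp.diff(u, t, 2) - sp.diff(u, x, 2)   # u'' - u_xx

# The residual vanishes identically, confirming the formula for L.
residual = sp.simplify(wave - cyl.subs(y, x/a))
print(residual)
```

The same computation carried out with a symbolic $v$ is exactly the chain-rule derivation behind (15).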
3 Nash Equilibrium

In this section, for any fixed leader control $w_1 \in L^2(\Sigma_1)$, we determine the existence and uniqueness of solutions to the problem
$$\inf_{w_2 \in L^2(\Sigma_2)} J_2(w_1, w_2), \tag{19}$$
and a characterization of this solution in terms of an adjoint system. In fact, this is a classical-type problem in the control of distributed systems (cf. Lions [24]). It admits a unique solution
$$w_2 = F(w_1). \tag{20}$$
The Euler–Lagrange equation for problem (19) is given by
$$\int_0^T\!\!\int_\Omega \alpha_k(t)(v - v_2)\, \widehat{v}\, dy\, dt + \sigma \int_{\Sigma_2} w_2\, \widehat{w}_2\, d\Sigma = 0, \quad \forall\, \widehat{w}_2 \in L^2(\Sigma_2), \tag{21}$$
where $\widehat{v}$ is the solution of the following system:
$$\begin{cases} \widehat{v}'' + L\widehat{v} = 0 &\text{in } Q,\\ \widehat{v} = \begin{cases} 0 &\text{on } \Sigma_1,\\ \widehat{w}_2 &\text{on } \Sigma_2,\\ 0 &\text{on } \Sigma\setminus(\Sigma_1 \cup \Sigma_2), \end{cases}\\ \widehat{v}(y,0) = 0, \quad \widehat{v}'(y,0) = 0, & y \in \Omega. \end{cases} \tag{22}$$
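For completeness, let us record the elementary computation behind (21). By linearity, $v(w_1, w_2 + \lambda \widehat{w}_2) = v(w_1, w_2) + \lambda \widehat{v}$ for every $\lambda \in \mathbb{R}$, with $\widehat{v}$ as in (22), so that
$$J_2(w_1, w_2 + \lambda \widehat{w}_2) = J_2(w_1, w_2) + \lambda \left[ \int_0^T\!\!\int_\Omega \alpha_k(t)(v - v_2)\,\widehat{v}\, dy\, dt + \sigma \int_{\Sigma_2} w_2\, \widehat{w}_2\, d\Sigma \right] + \frac{\lambda^2}{2} \left[ \int_0^T\!\!\int_\Omega \alpha_k(t)\,\widehat{v}^{\,2}\, dy\, dt + \sigma \int_{\Sigma_2} \widehat{w}_2^{\,2}\, d\Sigma \right];$$
the vanishing of the first-order term at the minimizer is precisely (21).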
In order to express (21) in a convenient form, we introduce the adjoint state $p$ defined by
$$\begin{cases} p'' + L^* p = \alpha_k(t)(v - v_2) &\text{in } Q,\\ p = 0 &\text{on } \Sigma,\\ p(T) = p'(T) = 0, & y \in \Omega, \end{cases} \tag{23}$$
where $L^*$ is the formal adjoint of the operator $L$. Multiplying (23) by $\widehat{v}$ and integrating by parts, we find
$$\int_0^T\!\!\int_\Omega \alpha_k(t)(v - v_2)\, \widehat{v}\, dy\, dt + \int_{\Sigma_2} \frac{1}{\alpha_k^2(t)}\, p_y\, \widehat{w}_2\, d\Sigma = 0, \tag{24}$$
so that (21) becomes
$$p_y = \sigma\, \alpha_k^2(t)\, w_2 \quad \text{on } \Sigma_2. \tag{25}$$
We summarize these results in the following theorem:

Theorem 3.1 For each $w_1 \in L^2(\Sigma_1)$, there exists a unique Nash equilibrium $w_2$ in the sense of (18). Moreover, the follower $w_2$ is given by
$$w_2 = F(w_1) = \frac{1}{\sigma\,\alpha_k^2(t)}\, p_y \quad \text{on } \Sigma_2, \tag{26}$$
where $\{v, p\}$ is the unique solution of (the optimality system)
$$\begin{cases} v'' + Lv = 0, \quad p'' + L^* p = \alpha_k(t)(v - v_2) &\text{in } Q,\\ v = \begin{cases} w_1 &\text{on } \Sigma_1,\\ \dfrac{1}{\sigma\,\alpha_k^2(t)}\, p_y &\text{on } \Sigma_2,\\ 0 &\text{on } \Sigma\setminus\Sigma_0, \end{cases} \qquad p = 0 \ \text{on } \Sigma,\\ v(0) = v'(0) = 0, \quad p(T) = p'(T) = 0, & y \in \Omega. \end{cases} \tag{27}$$
Of course, $\{v, p\}$ depends on $w_1$:
$$\{v, p\} = \{v(w_1), p(w_1)\}. \tag{28}$$
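A finite-dimensional analogue may help fix ideas. In the sketch below, the matrices $B_1$, $B_2$ and the data $v_2$, $w_1$, $\sigma$ are hypothetical stand-ins (not objects from the paper): the state depends linearly on the two controls, the follower minimizes a discrete version of $J_2$, and the minimizer is characterized by an adjoint-type relation mirroring (25)–(26):

```python
import numpy as np

# Hypothetical discrete analogue: state v = B1 w1 + B2 w2, follower cost
# J2(w1, w2) = 1/2 |v - v2|^2 + sigma/2 |w2|^2  (compare with (16)).
rng = np.random.default_rng(1)
B1 = rng.standard_normal((4, 2))
B2 = rng.standard_normal((4, 3))
v2 = rng.standard_normal(4)
w1 = rng.standard_normal(2)     # the leader's (fixed) choice
sigma = 0.7

# Stationarity, the analogue of (21): B2^T (v - v2) + sigma w2 = 0, i.e.
# (sigma I + B2^T B2) w2 = B2^T (v2 - B1 w1), a uniquely solvable linear system.
w2 = np.linalg.solve(sigma*np.eye(3) + B2.T @ B2, B2.T @ (v2 - B1 @ w1))

# The "adjoint state" p = v - v2 recovers the follower as in (25)-(26):
p = B1 @ w1 + B2 @ w2 - v2
assert np.allclose(B2.T @ p + sigma*w2, 0.0)
```

Since the discrete $J_2$ is strictly convex in $w_2$ (because $\sigma > 0$), this stationary point is its unique minimizer, which is the finite-dimensional content of Theorem 3.1.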
4 On the Approximate Controllability

Since we have proved the existence, uniqueness, and characterization of the follower $w_2$, the leader $w_1$ now wants the solutions $v$ and $v'$, evaluated at time $t = T$, to be as close as possible to $(v^0, v^1)$. This will be possible if the system (27) is approximately controllable. We are looking for
$$\inf \frac{1}{2}\int_{\Sigma_1} w_1^2\, d\Sigma, \tag{29}$$
where $w_1$ is subject to
$$\big(v(T; w_1), v'(T; w_1)\big) \in B_{L^2(\Omega)}(v^0, \rho_0) \times B_{H^{-1}(\Omega)}(v^1, \rho_1), \tag{30}$$
assuming that $w_1$ exists, $\rho_0$ and $\rho_1$ being arbitrarily small positive numbers and $\{v^0, v^1\} \in L^2(\Omega) \times H^{-1}(\Omega)$. As in [15], we assume that
$$T > T_k^{*} \tag{31}$$
and
$$\int_0^1 \cdots \tag{32}$$
Now, as in the case (14), and using Holmgren's Uniqueness Theorem (cf. [25]; see also [15] for additional discussions), the following approximate controllability result holds:

Theorem 4.1 Assume that (31) and (32) hold. Let us consider $w_1 \in L^2(\Sigma_1)$ and $w_2$ a Nash equilibrium in the sense of (18). Then $\big(v(T), v'(T)\big) = \big(v(\cdot, T, w_1, w_2), v'(\cdot, T, w_1, w_2)\big)$, where $v$ solves the system (15), generates a dense subset of $L^2(\Omega) \times H^{-1}(\Omega)$.

Proof We decompose the solution $(v, p)$ of (27) by setting
$$v = v_0 + g, \qquad p = p_0 + q, \tag{33}$$
where $\{v_0, p_0\}$ is given by
$$\begin{cases} v_0'' + L v_0 = 0 &\text{in } Q,\\ v_0 = \begin{cases} 0 &\text{on } \Sigma_1,\\ \dfrac{1}{\sigma\,\alpha_k^2(t)}\, (p_0)_y &\text{on } \Sigma_2,\\ 0 &\text{on } \Sigma\setminus\Sigma_0, \end{cases}\\ v_0(0) = v_0'(0) = 0, & y \in \Omega, \end{cases} \tag{34}$$
$$\begin{cases} p_0'' + L^* p_0 = \alpha_k(t)(v_0 - v_2) &\text{in } Q,\\ p_0 = 0 &\text{on } \Sigma,\\ p_0(T) = p_0'(T) = 0, & y \in \Omega, \end{cases} \tag{35}$$
and $\{g, q\}$ is given by
$$\begin{cases} g'' + L g = 0 &\text{in } Q,\\ g = \begin{cases} w_1 &\text{on } \Sigma_1,\\ \dfrac{1}{\sigma\,\alpha_k^2(t)}\, q_y &\text{on } \Sigma_2,\\ 0 &\text{on } \Sigma\setminus\Sigma_0, \end{cases}\\ g(0) = g'(0) = 0, & y \in \Omega, \end{cases} \tag{36}$$
$$\begin{cases} q'' + L^* q = \alpha_k(t)\, g &\text{in } Q,\\ q = 0 &\text{on } \Sigma,\\ q(T) = q'(T) = 0, & y \in \Omega. \end{cases} \tag{37}$$
We next set
$$A : L^2(\Sigma_1) \longrightarrow H^{-1}(\Omega) \times L^2(\Omega), \qquad w_1 \longmapsto A w_1 = \big(g'(T; w_1) + \delta g(T; w_1),\ -g(T; w_1)\big), \tag{38}$$
which defines $A \in \mathcal{L}\big(L^2(\Sigma_1); H^{-1}(\Omega) \times L^2(\Omega)\big)$, where $\delta$ is a positive constant. Using (33) and (38), we can rewrite (30) as
$$A w_1 \in \big(-v_0'(T) + \delta g(T) + B_{H^{-1}(\Omega)}(v^1, \rho_1)\big) \times \big(-v_0(T) + B_{L^2(\Omega)}(v^0, \rho_0)\big). \tag{39}$$
We will show that $A w_1$ generates a dense subspace of $H^{-1}(\Omega) \times L^2(\Omega)$. For this, let $\{f^0, f^1\} \in H_0^1(\Omega) \times L^2(\Omega)$ and consider the following systems ("adjoint states"):
$$\begin{cases} \varphi'' + L^* \varphi = \alpha_k(t)\, \psi &\text{in } Q,\\ \varphi = 0 &\text{on } \Sigma,\\ \varphi(T) = f^0, \quad \varphi'(T) = f^1, & y \in \Omega, \end{cases} \tag{40}$$
$$\begin{cases} \psi'' + L \psi = 0 &\text{in } Q,\\ \psi = \begin{cases} 0 &\text{on } \Sigma_1,\\ \dfrac{1}{\sigma\,\alpha_k^2(t)}\, \varphi_y &\text{on } \Sigma_2,\\ 0 &\text{on } \Sigma\setminus\Sigma_0, \end{cases}\\ \psi(0) = \psi'(0) = 0, & y \in \Omega. \end{cases} \tag{41}$$
Multiplying $(41)_1$ by $q$ and $(40)_1$ by $g$, where $q$, $g$ solve (37) and (36), respectively, and integrating over $Q$, we obtain
$$\int_0^T\!\!\int_\Omega \alpha_k(t)\, g\, \psi\, dy\, dt = -\frac{1}{\sigma} \int_{\Sigma_2} \frac{1}{\alpha_k^4(t)}\, q_y\, \varphi_y\, d\Sigma, \tag{42}$$
and
$$\langle g'(T), f^0 \rangle_{H^{-1}(\Omega)\times H_0^1(\Omega)} + \delta\,\big(g(T), f^0\big)_{L^2(\Omega)\times H_0^1(\Omega)} - \big(g(T), f^1\big) = -\int_{\Sigma_1} \frac{1}{\alpha_k^2(t)}\, \varphi_y\, w_1\, d\Sigma. \tag{43}$$
Considering the left-hand side of the above equation as the inner product of $\{g'(T) + \delta g(T), -g(T)\}$ with $\{f^0, f^1\}$ in $H^{-1}(\Omega) \times L^2(\Omega)$ and $H_0^1(\Omega) \times L^2(\Omega)$, we obtain
$$\langle A w_1, f \rangle = -\int_{\Sigma_1} \frac{1}{\alpha_k^2(t)}\, \varphi_y\, w_1\, d\Sigma,$$
where $\langle \cdot\,, \cdot \rangle$ represents the duality pairing between $H^{-1}(\Omega) \times L^2(\Omega)$ and $H_0^1(\Omega) \times L^2(\Omega)$. Therefore, if
$$\langle g'(T), f^0 \rangle_{H^{-1}(\Omega)\times H_0^1(\Omega)} + \delta\,\big(g(T), f^0\big)_{L^2(\Omega)\times H_0^1(\Omega)} - \big(g(T), f^1\big) = 0$$
for all $w_1 \in L^2(\Sigma_1)$, then
$$\varphi_y = 0 \quad \text{on } \Sigma_1. \tag{44}$$
Hence, in case (14),
$$\psi = 0 \ \text{on } \Sigma, \quad \text{so that } \psi \equiv 0. \tag{45}$$
Therefore,
$$\varphi'' + L^* \varphi = 0 \ \text{in } Q, \qquad \varphi = 0 \ \text{on } \Sigma, \tag{46}$$
and $\varphi$ satisfies (44). Therefore, according to Holmgren's Uniqueness Theorem (cf. [25]; see also [15] for additional discussions) and if (31) holds, then $\varphi \equiv 0$, so that $f^0 = 0$, $f^1 = 0$, which completes the proof.
5 Optimality System for the Leader

Thanks to the results obtained in Sect. 3, we can take, for each $w_1$, the Nash equilibrium $w_2$ associated with the solution $v$ of (15). We will show the existence of a leader control $w_1$, solution of the following problem:
$$\inf_{w_1 \in \mathcal{U}_{ad}} J(w_1), \tag{47}$$
where $\mathcal{U}_{ad}$ is the set of admissible controls
$$\mathcal{U}_{ad} := \{w_1 \in L^2(\Sigma_1) :\ v \text{ solution of (15) satisfying (30)}\}. \tag{48}$$
For this, we will use a duality argument due to Fenchel and Rockafellar [26] (cf. also [27,28]). The following result holds:
Theorem 5.1 Assume the hypotheses (H1)–(H2), (14), (31) and (32) are satisfied. Then for $\{f^0, f^1\}$ in $H_0^1(\Omega) \times L^2(\Omega)$ we uniquely define $\{\varphi, \psi, v, p\}$ by
$$\begin{cases} \varphi'' + L^* \varphi = \alpha_k(t)\,\psi, \quad \psi'' + L\psi = 0 &\text{in } Q,\\ v'' + Lv = 0, \quad p'' + L^* p = \alpha_k(t)(v - v_2) &\text{in } Q,\\ \varphi = 0 \ \text{on } \Sigma, \qquad \psi = \begin{cases} 0 &\text{on } \Sigma_1,\\ \dfrac{1}{\sigma\,\alpha_k^2(t)}\,\varphi_y &\text{on } \Sigma_2,\\ 0 &\text{on } \Sigma\setminus\Sigma_0, \end{cases}\\ v = \begin{cases} -\dfrac{1}{\alpha_k^2(t)}\,\varphi_y &\text{on } \Sigma_1,\\ \dfrac{1}{\sigma\,\alpha_k^2(t)}\, p_y &\text{on } \Sigma_2,\\ 0 &\text{on } \Sigma\setminus\Sigma_0, \end{cases} \qquad p = 0 \ \text{on } \Sigma,\\ \varphi(\cdot, T) = f^0, \quad \varphi'(\cdot, T) = f^1 &\text{in } \Omega,\\ v(0) = v'(0) = 0, \quad p(T) = p'(T) = 0 &\text{in } \Omega. \end{cases} \tag{49}$$
We uniquely define $\{f^0, f^1\}$ as the solution of the variational inequality
$$\begin{aligned} &\langle v'(T, f) - v^1, \widehat{f}^{\,0} - f^0 \rangle_{H^{-1}(\Omega)\times H_0^1(\Omega)} - \big(v(T, f) - v^0, \widehat{f}^{\,1} - f^1\big)\\ &\quad + \rho_1\big(\|\widehat{f}^{\,0}\| - \|f^0\|\big) + \rho_0\big(|\widehat{f}^{\,1}| - |f^1|\big) \ge 0, \quad \forall\, \widehat{f} \in H_0^1(\Omega) \times L^2(\Omega). \end{aligned} \tag{50}$$
Then the optimal leader is given by
$$w_1 = -\frac{1}{\alpha_k^2(t)}\,\varphi_y \quad \text{on } \Sigma_1, \tag{51}$$
where $\varphi$ corresponds to the solution of (49).

Proof We introduce two convex proper functions as follows. First,
$$F_1 : L^2(\Sigma_1) \longrightarrow \mathbb{R} \cup \{\infty\}, \qquad F_1(w_1) = \frac{1}{2}\int_{\Sigma_1} w_1^2\, d\Sigma;$$
the second one,
$$F_2 : H^{-1}(\Omega) \times L^2(\Omega) \longrightarrow \mathbb{R} \cup \{\infty\},$$
given by
$$F_2(A w_1) = F_2\big(\{g'(T, w_1) + \delta g(T, w_1),\ -g(T, w_1)\}\big) = \begin{cases} 0, & \text{if } \begin{cases} g'(T) + \delta g(T) \in v^1 - v_0'(T) + \delta g(T) + \rho_1 B_{H^{-1}(\Omega)},\\ -g(T) \in -v^0 + v_0(T, w_1) - \rho_0 B_{L^2(\Omega)}, \end{cases}\\ +\infty, & \text{otherwise.} \end{cases} \tag{52}$$
With these notations, problems (29)–(30) become equivalent to
$$\inf_{w_1 \in L^2(\Sigma_1)} \big[F_1(w_1) + F_2(A w_1)\big], \tag{53}$$
provided we prove that the range of $A$ is dense in $H^{-1}(\Omega) \times L^2(\Omega)$, under conditions (31) and (32). By the Duality Theorem of Fenchel and Rockafellar [26] (see also [27,28]), we have
$$\inf_{w_1 \in L^2(\Sigma_1)} \big[F_1(w_1) + F_2(A w_1)\big] = -\inf_{(f^0, f^1) \in H_0^1(\Omega)\times L^2(\Omega)} \big[F_1^*\big(A^*\{f^0, f^1\}\big) + F_2^*\big(\{-f^0, -f^1\}\big)\big], \tag{54}$$
where $F_i^*$ is the conjugate function of $F_i$ ($i = 1,2$) and $A^*$ is the adjoint of $A$. We have
$$A^* : H_0^1(\Omega) \times L^2(\Omega) \longrightarrow L^2(\Sigma_1), \qquad (f^0, f^1) \longmapsto A^* f = -\frac{1}{\alpha_k^2(t)}\,\varphi_y, \tag{55}$$
where $\varphi$ is given in (40). We see easily that
$$F_1^*(w_1) = F_1(w_1) \tag{56}$$
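Indeed, (56) is the self-conjugacy of the quadratic functional: for $w_1 \in L^2(\Sigma_1)$,
$$F_1^*(w_1) = \sup_{\widehat{w}_1 \in L^2(\Sigma_1)} \left( \int_{\Sigma_1} w_1\, \widehat{w}_1\, d\Sigma - \frac{1}{2}\int_{\Sigma_1} \widehat{w}_1^{\,2}\, d\Sigma \right),$$
and the concave functional inside the supremum is maximized at $\widehat{w}_1 = w_1$, which gives $F_1^*(w_1) = \frac{1}{2}\int_{\Sigma_1} w_1^2\, d\Sigma = F_1(w_1)$.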
and
$$F_2^*\big(\{f^0, f^1\}\big) = \big(v_0(T) - v^0, f^1\big) + \langle v^1 - v_0'(T) + \delta g(T), f^0 \rangle_{H^{-1}(\Omega)\times H_0^1(\Omega)} + \rho_1\|f^0\| + \rho_0|f^1|. \tag{57}$$
Therefore, the (opposite of the) right-hand side of (54) is given by
$$-\inf_{f \in H_0^1(\Omega)\times L^2(\Omega)} \left\{ \frac{1}{2}\int_{\Sigma_1} \frac{1}{\alpha_k^4(t)}\,\varphi_y^2\, d\Sigma + \big(v^0 - v_0(T), f^1\big) - \langle v^1 - v_0'(T) + \delta g(T), f^0 \rangle_{H^{-1}(\Omega)\times H_0^1(\Omega)} + \rho_1\|f^0\| + \rho_0|f^1| \right\}. \tag{58}$$
This is the dual problem of (29)–(30). We now have two ways to derive the optimality system for the leader control: starting from the primal problem or from the dual problem.
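The mechanism of (53)–(54) can be seen in a two-dimensional toy problem (all data below are hypothetical stand-ins): with $A$ the identity, $F_1(w) = \frac{1}{2}|w|^2$ and $F_2$ the indicator of a ball, both the primal and the dual values are available in closed form and coincide.

```python
import numpy as np

# Hypothetical data: target b with |b| > rho, so the constraint is active.
b = np.array([3.0, 4.0])          # |b| = 5
rho = 1.0

# Primal (cf. (29)-(30) with A = I): inf 1/2 |w|^2 over the ball |w - b| <= rho.
# The minimizer is the point of the ball closest to the origin:
w_star = b*(1 - rho/np.linalg.norm(b))
primal = 0.5*w_star @ w_star                 # = 1/2 (|b| - rho)^2

# Dual (cf. (54)): F1* = F1 and F2*(f) = <b, f> + rho |f|, so F2*(-f) flips
# the sign of the linear term:
dual_obj = lambda f: 0.5*f @ f - b @ f + rho*np.linalg.norm(f)
f_star = b*(1 - rho/np.linalg.norm(b))       # stationary point of dual_obj
dual = -dual_obj(f_star)

print(primal, dual)   # both equal 1/2 (|b| - rho)^2 = 8: no duality gap
```

In the paper, $A$ is of course not invertible and the dual variable lives in $H_0^1(\Omega) \times L^2(\Omega)$, but the bookkeeping of conjugate functions in (54) and (58) is the same.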
6 Conclusions

In this paper, we have investigated the question of hierarchical control for the one-dimensional wave equation, employing the Stackelberg strategy in the case of time-dependent domains. The main achievements of this paper are the existence and uniqueness of a Nash equilibrium, the approximate controllability with respect to the leader control, and the optimality system for the leader control. As future work, we are looking for improvements and generalizations of these results to other models. To close this section, we make some comments, briefly discuss some possible extensions of our results, and indicate open issues on the subject.
• It seems natural to expect that the controllability holds whenever the speed of the moving endpoint is positive and less than 1. However, we did not succeed in extending the approach developed in Theorem 4.1 to this case.
• In the problem presented in Sect. 2, the domain grows linearly in time (the function $\alpha_k$). It would be quite interesting to study the controllability for (1) in the non-cylindrical domain $\widehat{Q}$ when the speed of the moving endpoint is greater than 1. However, this seems very difficult and remains to be done. For readers interested in this subject, we cite for instance [15,18], and [19].

Acknowledgments The author wants to express his gratitude to the anonymous reviewers for their questions and comments; they were very helpful in improving this article.
References

1. Nash, J.: Noncooperative games. Ann. Math. 54, 286–295 (1951)
2. Pareto, V.: Cours d'économie politique. Rouge, Lausanne (1896)
3. von Stackelberg, H.: Marktform und Gleichgewicht. Springer, Berlin (1934)
4. Lions, J.-L.: Hierarchic control. Proc. Indian Acad. Sci. Math. Sci. 104, 295–304 (1994)
5. Lions, J.-L.: Contrôle de Pareto de systèmes distribués. Le cas d'évolution. C. R. Acad. Sci. Paris, Sér. I 302(11), 413–417 (1986)
6. Lions, J.-L.: Some remarks on Stackelberg's optimization. Math. Models Methods Appl. Sci. 4, 477–487 (1994)
7. Díaz, J.I., Lions, J.-L.: On the approximate controllability of Stackelberg–Nash strategies. In: Díaz, J.I. (ed.) Ocean Circulation and Pollution Control: Mathematical and Numerical Investigations, pp. 17–27. Springer, Berlin (2005)
8. Díaz, J.I.: On the von Neumann problem and the approximate controllability of Stackelberg–Nash strategies for some environmental problems. Rev. R. Acad. Cienc. Ser. A Mat. 96(3), 343–356 (2002)
9. Glowinski, R., Ramos, A., Periaux, J.: Nash equilibria for the multi-objective control of linear differential equations. J. Optim. Theory Appl. 112(3), 457–498 (2002)
10. Glowinski, R., Ramos, A., Periaux, J.: Pointwise control of the Burgers equation and related Nash equilibrium problems: computational approach. J. Optim. Theory Appl. 112(3), 499–516 (2002)
11. González, G., Marques-Lopes, F., Rojas-Medar, M.: On the approximate controllability of Stackelberg–Nash strategies for Stokes equations. Proc. Am. Math. Soc. 141(5), 1759–1773 (2013)
12. Limaco, J., Clark, H., Medeiros, L.A.: Remarks on hierarchic control. J. Math. Anal. Appl. 359, 368–383 (2009)
13. Araruna, F.D., Fernández-Cara, E., Santos, M.C.: Stackelberg–Nash exact controllability for linear and semilinear parabolic equations. ESAIM Control Optim. Calc. Var. 21(3), 835–856 (2015)
14. Ramos, A.M., Roubicek, T.: Nash equilibria in noncooperative predator–prey games. Appl. Math. Optim. 56(2), 211–241 (2007)
15. Cui, L., Song, L.: Controllability for a wave equation with moving boundary. J. Appl. Math. (2014). doi:10.1155/2014/827698
16. Araruna, F.D., Antunes, G.O., Medeiros, L.A.: Exact controllability for the semilinear string equation in non-cylindrical domains. Control Cybern. 33, 237–257 (2004)
17. Bardos, C., Chen, G.: Control and stabilization for the wave equation. Part III: domain with moving boundary. SIAM J. Control Optim. 19, 123–138 (1981)
18. Cui, L., Liu, X., Gao, H.: Exact controllability for a one-dimensional wave equation in non-cylindrical domains. J. Math. Anal. Appl. 402, 612–625 (2013)
19. Cui, L., Song, L.: Exact controllability for a wave equation with fixed boundary control. Bound. Value Probl. (2014). doi:10.1186/1687-2770-2014-47
20. Jesus, I.: Remarks on hierarchic control for the wave equation in moving domains. Arch. Math. 102, 171–179 (2014)
21. Miranda, M.: Exact controllability for the wave equation in domains with variable boundary. Rev. Mat. Univ. Complut. Madr. 9, 435–457 (1996)
22. Miranda, M.: HUM and the wave equation with variable coefficients. Asymptot. Anal. 11, 317–341 (1995)
23. Aubin, J.-P.: L'analyse non linéaire et ses motivations économiques. Masson, Paris (1984)
24. Lions, J.-L.: Contrôle optimal des systèmes gouvernés par des équations aux dérivées partielles. Dunod, Paris (1968)
25. Hörmander, L.: Linear Partial Differential Operators. Die Grundlehren der mathematischen Wissenschaften, Bd. 116. Academic Press, New York (1963)
26. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1969)
27. Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, Berlin (2010)
28. Ekeland, I., Temam, R.: Analyse convexe et problèmes variationnels. Dunod, Gauthier-Villars, Paris (1974)