Appl Math Optim, DOI 10.1007/s00245-016-9358-0

Optimal Control of High-Order Elliptic Obstacle Problem

Radouen Ghanem · Ibtissam Nouri

© Springer Science+Business Media New York 2016

Abstract  We consider an optimal control problem for an elliptic polyharmonic obstacle problem of order 2m, in which the obstacle function is taken as the control. We use a Moreau–Yosida approximation technique to introduce a family of problems governed by variational equations. We then prove the existence of optimal solutions, derive an approximate optimality system, and obtain convergence results by passing to the limit in this system.

Keywords  Optimal control · Obstacle problem · Polyharmonic variational inequality · Moreau–Yosida approximation

Mathematics Subject Classification  49K20 · 35J35 · 35J87 · 74G10
1 General Introduction

Optimal control of partial differential equations [20] and of variational inequalities [9,18] is an important area of applied mathematics with applications in many domains [15]. Variational inequalities and the related optimal control problems, in which the governing system is an obstacle variational inequality, have been studied extensively for decades; see [4] and the references therein. In recent years many authors [5,26] have studied optimal control problems for obstacle problems (or variational inequalities) in which the obstacle is a known function and the control
[Corresponding author: Radouen Ghanem, [email protected]. Numerical Analysis, Optimization and Statistics Laboratory (LANOS), Badji-Mokhtar Annaba University, P.O. Box 12, 23000 Annaba, Algeria.]
variables appear elsewhere in the variational inequality. In other words, the controls do not change the obstacle; problems of this kind may be called indirect obstacle optimal control problems. Recently, optimal obstacle control problems for variational inequalities have been considered from many different points of view. One of the most important is to take the obstacle itself as the control; such a case may be called a direct (or self) obstacle optimal control problem. In this paper we investigate optimal control problems governed by variational inequalities of obstacle type.

It should be pointed out that, in the optimal control of a variational inequality, the main difficulty comes from the fact that the mapping T between the control and the state (the control-to-state operator) is not Gâteaux differentiable. Different methods have been used to overcome this difficulty: Mignot [25] used a conical derivative, while Ito et al. [17] and Barbu [4] (and the references therein) tackled the optimal control problem by using a Moreau–Yosida approximation technique to reformulate the governing variational inequality as a problem governed by a variational equation. Our approach is based on the penalty method and on Barbu's treatment, letting the penalty parameter approach zero. We then obtain an optimality system for suitable approximations of the original problem which can easily be used from the numerical point of view.

The problem that we are going to study belongs to a wider class of problems, which can be formally described as follows:

\[ \min \big\{ J(y, \chi) : y = T(\chi),\ \chi \in U_{ad} \subset U \big\}, \tag{$P_0$} \]

where T is the operator which associates to χ the solution y of

\[ \langle A(y, \chi), v - y \rangle \ge 0 \quad \text{for all } v \in K(y, \chi), \tag{obs} \]

where K is a multivalued mapping from Y × U to 2^Y, Y is a Banach space, and A is a differential operator from Y to its dual Y′. Let h be a mapping from ℝ × ℝ to ℝ; then the variational inequality that relates the control χ to the state y can be written as

\[ \langle A(y, \chi), v - y \rangle_{Y',Y} + h(\chi, v) - h(\chi, y) \ge (\chi, v - y) \quad \forall v \in Y, \]

a formulation which yields obstacle problems in which the obstacle is the control. Following the previous ideas, we apply a smoothed penalization approach to the problem under consideration. More precisely, the idea is to approximate the obstacle problem by introducing an approximation parameter δ; the approximation method is based on penalization and consists in replacing the obstacle problem by a family of semilinear equations [3].

The first work on the optimal obstacle control problem was that of Adams et al. [2] in 1998. Later, Lou considered the regularity of the obstacle control problem in [23,24]; he generalized earlier regularity results, showing H^{2,p}-regularity of the
optimal pair. Recently, Adams and Lenhart continued the work begun in [3], where a nonzero source term was added to the right-hand side of the state equation. Yuquan and Chen [29] considered a quasilinear optimal control problem of obstacle type without source terms. Bergounioux et al. [7] studied a semilinear obstacle optimal control problem in which the admissible controls are bounded in the Sobolev space H². In [14], Ghanem considered an obstacle optimal control problem in which the set of admissible controls is the whole Sobolev space H², and gave an optimality system more complete than those of [6,7,29] and Lou [23].

Most of the relevant work on optimal control has been done for problems governed by second-order state equations; much less has been done on optimal control problems governed by higher-order state equations [16]. Recently, Adams et al. [1] and Di Donato et al. [12] considered the optimal control of a biharmonic obstacle problem, giving an optimality system by the same techniques as [3,7].

In this paper we consider a variational inequality of obstacle type in which the underlying partial differential operator is polyharmonic of order 2m [10,13]. We study an optimal control problem in which the state is the solution of a unilateral obstacle problem and the control is the obstacle itself. Our goal is twofold. First, as usual in optimal control theory, we look for a first-order necessary optimality system. Second, we generalize some results of [1,14,24], using the techniques of those papers, by considering a polyharmonic obstacle problem.

Let us give the outline of the paper. Section 2 states the precise assumptions and recalls some well-known results. Section 3 is devoted to the formulation of the optimal control problem: we give the assumptions on the state equation and some preliminary results. In Sect. 4 we study the variational inequality, give properties of the control-to-state operator, and prove an existence result for optimal solutions. In Sect. 5, necessary optimality conditions are derived through the study of a related approximate problem. The last section is devoted to the limit optimality system.

2 Preliminaries and Known Results

In this section we give a brief overview of some prerequisites of nonsmooth analysis which are needed in the sequel.

2.1 General Derivative Results

Let V be a reflexive Banach space with dual V′, such that V ⊂ H ⊂ V′ algebraically and topologically, where H is a real Hilbert space identified with its own dual. The norms of V and H will be denoted by ‖·‖ and |·| respectively, and the norm of V′ by ‖·‖_*. If v ∈ V and v′ ∈ V′, then ⟨v′, v⟩ denotes the value of v′ at v.
Next, let ϕ be the locally Lipschitz functional defined by

\[ \varphi(u) = \int_\Omega j(u(x))\,dx, \]

where j : ℝ → ℝ is a locally Lipschitz function. For such a function we can define the generalized directional derivative of j at x in the direction h as

\[ j^0(x; h) = \limsup_{x' \to x,\ \lambda \downarrow 0} \frac{j(x' + \lambda h) - j(x')}{\lambda}. \]

Then we define the set

\[ \partial j_c(x) = \big\{ \xi \in \mathbb{R} : \xi \cdot h \le j^0(x; h) \ \text{for all } h \in \mathbb{R} \big\}, \]

where ∂j_c : ℝ → 2^ℝ \ {∅} is called the (generalized, or Clarke) subdifferential of j at x (see for example [11]). We impose the following conditions:

B1  For ζ_i ∈ ∂j_c(s_i), i = 1, 2, there is a constant c1 ≥ 0 such that ζ1 ≤ ζ2 + c1 (s2 − s1)^{p−1}.

B2  There is a constant c2 ≥ 0 such that |ζ| ≤ c2 (1 + |s|^{p−1}) for all ζ ∈ ∂j_c(s), s ∈ ℝ.

By the growth hypothesis B2, ϕ is locally Lipschitz continuous on L^p(Ω), so that Clarke's generalized gradient ∂ϕ_c : L^p(Ω) → 2^{L^q(Ω)} is well defined. By the Aubin–Clarke theorem [11], for each u ∈ L^p(Ω) we have

\[ \zeta \in \partial\varphi_c(u) \ \Longrightarrow\ \zeta \in L^q(\Omega) \ \text{with } \zeta(x) \in \partial j_c(u(x)) \ \text{for almost every } x \in \Omega. \]

Moreover, if ϕ is also convex, then Clarke's generalized gradient ∂ϕ_c coincides with the usual subdifferential ∂ϕ.

2.2 General Variational Inequality Results

We assume some familiarity with variational inequalities; in this section we collect the known results on elliptic variational inequalities needed in the sequel of this paper (for more details, see [4,21]). Let A ∈ L(V, V′) be a linear continuous operator from V to V′ and let ϕ : V → ℝ̄ be a lower semicontinuous convex function on V whose effective domain

\[ D(\varphi) = \{ v \in V : \varphi(v) < +\infty \} \]

is nonempty. If f is a given element of V′, we consider the following problem:
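To make these definitions concrete, here is a small worked example (our addition, not from the paper): take j(x) = |x|, which is Lipschitz but not differentiable at 0.

```latex
% Generalized directional derivative of j(x) = |x| at x = 0:
%   j^0(0; h) = limsup_{x' -> 0, lambda -> 0+} (|x' + lambda h| - |x'|)/lambda = |h|,
% hence the Clarke subdifferential at 0 is the full interval [-1, 1]:
\[
j^0(0; h) = |h|, \qquad
\partial j_c(0) = \{\xi \in \mathbb{R} : \xi h \le |h| \ \forall h \in \mathbb{R}\} = [-1, 1],
\qquad \partial j_c(x) = \{\operatorname{sign}(x)\} \ \ (x \neq 0).
\]
```

Since j here is convex, this coincides with the usual convex subdifferential, illustrating the last remark above.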
Find y ∈ V such that

\[ (Ay - f, v - y) + \varphi(v) - \varphi(y) \ge 0 \quad \text{for all } v \in V. \tag{1} \]

This is an abstract elliptic variational inequality associated with the operator A and the convex function ϕ, and it can be equivalently expressed as: find y ∈ V such that

\[ Ay + \partial\varphi(y) \ni f, \]

where ∂ϕ : V → 2^{V′} is the subdifferential of ϕ, defined by

\[ \partial\varphi(x) = \big\{ x' \in V' : \langle x', y - x \rangle \le \varphi(y) - \varphi(x) \ \text{for all } y \in V \big\}. \]

Consider the special case where ϕ = I_K is the indicator function of a closed convex subset K of V, given by

\[ I_K(x) = \begin{cases} 0 & \text{if } x \in K, \\ +\infty & \text{otherwise}, \end{cases} \]

which is convex and lower semicontinuous. The variational inequality (1) can then be written as the problem

Find y ∈ V such that

\[ (Ay - f, v - y) + I_K(v) - I_K(y) \ge 0 \quad \text{for all } v \in V, \tag{2} \]

or equivalently

Find y ∈ V such that

\[ Ay + \partial I_K(y) \ni f, \tag{3} \]

where ∂I_K(y) is given by

\[ \partial I_K(y) = \begin{cases} \{0\} & \text{if } y \in \mathring{K}, \\ N_K(y) & \text{if } y \in \partial K, \\ \emptyset & \text{if } y \notin K, \end{cases} \]

where N_K(y) is the normal cone of K at y; thus the subdifferential ∂I_K of I_K is given by

\[ \partial I_K(y) = \big\{ z \in V' : (z, y - v) \ge 0 \ \text{for all } v \in K \big\}. \]

As far as existence of solutions to problem (1) is concerned, we first recall the following results.

Theorem 2.1  Let A : V → V′ be a monotone, demicontinuous operator and let ϕ : V → ℝ̄ be a lower semicontinuous, proper, convex function. Assume that there exists y0 ∈ dom(ϕ) such that

\[ \lim_{\|y\|_V \to \infty} \frac{(Ay, y - y_0) + \varphi(y)}{\|y\|_V} = +\infty. \]

Then problem (1) has at least one solution. Moreover, the set of solutions is bounded, convex, and closed in V. If the operator A is strictly monotone, i.e.

\[ (Ay_2 - Ay_1, y_2 - y_1) = 0 \iff y_2 = y_1, \]

then the solution is unique.
Proof  See [21].

When we consider the special case ϕ = I_K, we have:

Corollary 2.1  Let A : V → V′ be a monotone, demicontinuous operator and let K be a closed convex subset of V. Assume either that there is y0 ∈ K such that

\[ \lim_{\|y\|_V \to \infty} \frac{(Ay, y - y_0)}{\|y\|_V} = +\infty, \]

or that K is bounded. Then problem (2) has at least one solution. Moreover, the set of solutions is bounded, convex, and closed in V. If the operator A is strictly monotone, then the solution is unique.

Proof  See [21].
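As a scalar illustration of the normal-cone formulas above (our addition): take V = ℝ and K = [0, ∞). The characterization of ∂I_K then gives

```latex
\[
\partial I_{[0,\infty)}(y) =
\begin{cases}
\{0\} & \text{if } y > 0 \quad (y \in \mathring{K}),\\
(-\infty, 0] & \text{if } y = 0 \quad (y \in \partial K),\\
\emptyset & \text{if } y < 0 \quad (y \notin K).
\end{cases}
\]
```

This is exactly the maximal monotone graph β0 that reappears in Sect. 5, Eq. (15), when the obstacle constraint is rewritten there via an indicator function.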
Penalization is one of the most common methods used to study variational inequalities [9,22]. The main idea of this method is to replace the inequality under consideration by a sequence of equations. In the sequel of this paper we deal with the penalized version of problem (3): for every δ > 0, we consider

Find y ∈ V such that

\[ Ay + \partial I_K^{\delta}(y) = f, \]

where ∂I_K^δ(y) is the Yosida approximation of the subdifferential ∂I_K(y) of I_K(y), namely

\[ \partial I_K^{\delta}(y) = \frac{y - P_K(y)}{\delta}, \]

where P_K(y) is the projection of y onto K [4].
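To make the penalization idea concrete, here is a minimal numerical sketch (our illustration, not part of the paper) for the simplest model case m = 1, p = 2, A = −d²/dx² on (0, 1): the constraint y ≥ ϕ is replaced by the penalty term (1/δ) min(y − ϕ, 0), and the resulting semilinear equation is solved on a finite-difference grid by an active-set iteration. All function and variable names are ours.

```python
import numpy as np

def penalized_obstacle_1d(phi, f, delta=1e-8, n=199, max_iter=50):
    """Solve -u'' + (1/delta)*min(u - phi, 0) = f on (0,1), u(0)=u(1)=0.

    (1/delta)*min(r, 0) plays the role of the Yosida penalty term
    replacing the constraint u >= phi, as in the text above."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # standard three-point finite-difference Laplacian with Dirichlet conditions
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    p, rhs = phi(x), f(x)
    u = np.zeros(n)
    prev = None
    for _ in range(max_iter):
        active = u < p                      # points where the penalty acts
        if prev is not None and np.array_equal(active, prev):
            break                           # active set stabilized
        prev = active.copy()
        # linearized penalized system: (A + chi/delta) u = f + chi*phi/delta
        M = A + np.diag(active / delta)
        u = np.linalg.solve(M, rhs + np.where(active, p, 0.0) / delta)
    return x, u
```

For small δ the computed u stays within O(δ) of the true constrained solution; with a concave obstacle touching zero at the boundary and f = 0, the solution coincides with the obstacle itself, which gives a simple sanity check.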
3 Statement of the Optimal Control Problem

Let Ω be a bounded open subset of ℝ^N, N ≥ 2, with a regular boundary ∂Ω. For m ∈ ℕ*, p ∈ ]1, +∞[, W_0^{m,p}(Ω) is the closure of D(Ω) in W^{m,p}(Ω) with respect to the norm

\[ u \mapsto \|u\|_{m,p} = \Big( \sum_{|\alpha| \le m} \|D^\alpha u\|^p_{L^p(\Omega)} \Big)^{1/p}, \]

where W^{m,p}(Ω) is defined by

\[ W^{m,p}(\Omega) = \big\{ u \mid D^\alpha u \in L^p(\Omega),\ |\alpha| \le m \big\}, \quad 1 < p < \infty, \]

and its dual space is denoted by W^{-m,q}(Ω), where q = p/(p − 1) is the Hölder conjugate of p. Let θ = (θ0, θ1, ..., θ_{m−1}) be the trace mapping from W^{m,p}(Ω) to ϑ = ∏_{i=0}^{m−1} W^{m−i−1/p, p}(∂Ω), with kernel W_0^{m,p}(Ω).

Let N1 (resp. N2) be the number of derivatives of order up to m − 1 (resp. of order m). For every y ∈ W^{m,p}(Ω), we consider the functions x ↦ Aα(x, η(y), D^m y), defined from Ω × ℝ^{N1} × ℝ^{N2} to ℝ, where η(y) = {η^β : |β| ≤ m − 1} and D^α y denotes the derivative

\[ D^\alpha y = \frac{\partial^{|\alpha|} y}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}, \quad \text{where } |\alpha| = \sum_{i=1}^{n} \alpha_i \le m. \]
Let K be a closed nonempty convex set, given in the general case as

\[ K(\varphi) = \big\{ v \in W_0^{m,p}(\Omega) \mid J(v - \varphi)(x) \in X \ \text{a.e. } x \in \Omega \big\}, \tag{4} \]

where X is a closed convex set such that the following conditions are fulfilled:

C1  X is a closed convex set in V containing at least the origin.
C2  J : W_0^{m,p}(Ω) → V is a continuous linear map.
C3  ϕ belongs to W^{m+1,p}(Ω) ∩ W_0^{m,p}(Ω).

Under assumptions C1–C3, one can verify that K(ϕ) is closed and convex. In addition, K(ϕ) is nonempty, since ϕ belongs to K(ϕ). In the particular case where J = I, the identity operator of W^{m,p}(Ω), V is a closed subspace containing W_0^{m,p}(Ω), and X = { x ∈ W_0^{m,p}(Ω) : x ≥ 0 }, we obtain the set

\[ K(\varphi) = \big\{ v \in W_0^{m,p}(\Omega) \mid v \ge \varphi \ \text{a.e. in } \Omega \big\}. \tag{5} \]
In the following we consider the obstacle problem: find y ∈ K(ϕ) such that

\[ \langle Ay + g(y) - f, v - y \rangle \ge 0 \quad \text{for all } v \in K(\varphi), \tag{6} \]

where g is a nondecreasing, C¹ real-valued function satisfying the following assumption:

G (Growth condition)  There exist γ ∈ ℝ and ν ≥ 0 such that |g(y)| ≤ γ + ν|y|^{p−1} for all y ∈ ℝ.

We denote by the same symbol the real-valued function g and the associated Nemytskii operator, g(y)(x) = g(y(x)). Here A is a quasilinear differential operator of Leray–Lions type, given in divergence form by

\[ Ay = \sum_{|\alpha| \le m} (-1)^{|\alpha|} D^\alpha \big( A_\alpha(\cdot, \eta(y), D^m y) \big), \]

so that

\[ \langle Ay, v \rangle = \sigma(y, v) = \sum_{|\alpha| \le m} \int_\Omega A_\alpha(\cdot, \eta(y), D^m y)\, D^\alpha v\,dx, \quad \forall y, v \in W^{m,p}(\Omega), \]

where the coefficients Aα of the operator A satisfy the following assumptions [27]:
Assumptions A  We impose the following conditions of Leray–Lions type on the coefficient functions Aα(x, η, ζ) : Ω × ℝ^{N1} × ℝ^{N2} → ℝ:

A1 (Carathéodory condition)  For almost all x ∈ Ω, the function (η, ζ) ↦ Aα(x, η, ζ) is continuous on ℝ^{N1} × ℝ^{N2}, and for all (η, ζ) the function x ↦ Aα(x, η, ζ) is measurable.

A2 (Growth condition)  There exist c1 > 0 and s1 ∈ L^q(Ω) such that, for almost all x ∈ Ω, for all (η, ζ) ∈ ℝ^{N1} × ℝ^{N2} and all α ∈ ℕ^N with |α| ≤ m,

\[ |A_\alpha(x, \eta, \zeta)| \le c_1 \big( |\eta|^{p-1} + |\zeta|^{p-1} \big) + s_1(x). \]

It follows that, for all y ∈ W^{m,p}(Ω), the function Aα(x, η(y), D^m y) belongs to L^q(Ω).

A3 (Coercivity condition)  There exist c2 > σ > 0 and s2 ∈ L¹(Ω) such that, for almost all x ∈ Ω and all (η, ζ) ∈ ℝ^{N1} × ℝ^{N2},

\[ \sum_{|\alpha| \le m-1} A_\alpha(x, \eta, \zeta)\, \eta^\alpha \ge -(c_2 - \sigma) \sum_{|\alpha| = m} |\zeta^\alpha|^p - s_2(x). \]

A3′ (Stronger coercivity)  There exists c such that, for almost all x ∈ Ω and all (η, ζ) ∈ ℝ^{N1} × ℝ^{N2},

\[ \sum_{|\alpha| \le m} A_\alpha(x, \eta, \zeta)\, \zeta^\alpha \ge c \sum_{|\alpha| \le m} |\zeta^\alpha|^p. \]

A4 (Monotonicity condition)  For almost all x ∈ Ω, for all η ∈ ℝ^{N1} and ζ, ζ′ ∈ ℝ^{N2},

\[ \sum_{|\alpha| = m} \big( A_\alpha(x, \eta, \zeta) - A_\alpha(x, \eta, \zeta') \big)\big( \zeta^\alpha - \zeta'^\alpha \big) > 0 \quad \text{if } \zeta \ne \zeta'. \]

A5  For almost all x ∈ Ω and all η1, η2 ∈ ℝ^{N1},

\[ \sum_{|\alpha| \le m} \big| A_\alpha(x, \eta_2) - A_\alpha(x, \eta_1) \big| \le C \sum_{|\alpha| \le m} \big| \eta_2^\alpha - \eta_1^\alpha \big|^{p-1}. \]

A6  For almost all x ∈ Ω and all η1, η2 ∈ ℝ^{N1},

\[ C \sum_{|\alpha| \le m} \big| \eta_2^\alpha - \eta_1^\alpha \big|^p \le \sum_{|\alpha| \le m} \big( A_\alpha(x, \eta_2) - A_\alpha(x, \eta_1) \big)\big( \eta_2^\alpha - \eta_1^\alpha \big). \]

A7  For almost all x ∈ Ω, for all η, ξ ∈ ℝ^{N1} and all p ≥ 2:

1. \[ \sum_{i,j=1}^{N} \sum_{|\alpha| \le m} \frac{\partial A_{i\alpha}}{\partial \eta_j}(x, \eta)\, \xi_i \xi_j \ge C \sum_{|\alpha| \le m} \big( k + |\eta^\alpha| \big)^{p-2} |\xi|^2, \quad \text{where } k \in\ ]0, 1]; \]

2. \[ \sum_{i,j=1}^{N} \sum_{|\alpha| \le m} \Big| \frac{\partial A_{i\alpha}}{\partial \eta_j}(x, \eta) \Big| \le C \sum_{|\alpha| \le m} |\eta^\alpha|^{p-2}. \]
From the assumptions on Aα and g, for any ϕ ∈ W_0^{m,p}(Ω) the obstacle problem given by (4) and (6) has a unique solution y ∈ W_0^{m,p}(Ω) (see for example [8]); furthermore, if ϕ belongs to W^{2m,p}(Ω), then the solution y belongs to W^{2m,p}(Ω) ∩ W_0^{m,p}(Ω) (see for example [8,19]).

In the sequel we consider the following optimal control problem (P):

Find ϕ̄ ∈ U_ad such that

\[ J(\bar\varphi) = \inf_{\varphi \in U_{ad}} J(\varphi), \tag{P} \]

where U_ad is a closed and convex subset of W^{m+1,p}(Ω) ∩ W_0^{m,p}(Ω). In what follows, any ϕ̄ ∈ U_ad satisfying (P) is referred to as an optimal control, and the corresponding state ȳ = T(ϕ̄) is called an optimal state.

We now regard ϕ ∈ U_ad as the control variable and y = T(ϕ) ∈ W_0^{m,p}(Ω) as the corresponding state variable, where the control-to-state operator T is defined from U_ad to W_0^{m,p}(Ω). We try to find an optimal obstacle ϕ in U_ad; it is therefore reasonable to introduce the cost functional

\[ J(\varphi) = \int_\Omega F(x, y(\varphi))\,dx + \frac{\nu}{2p} \int_\Omega |L\varphi|^p\,dx, \]

which we seek to minimize, where ν is a given strictly positive constant and L is a self-adjoint linear differential operator from W_0^{m+1,p}(Ω) to L^p(Ω), and where the following assumptions are satisfied:

F1  The function F : Ω × ℝ → ℝ is measurable with respect to the first variable and of class C¹ with respect to the second; that is, F(x, y(ϕ)) satisfies the Carathéodory conditions.
F2  There exist c1 and c2 in ℝ+ such that c1 + c2 y²(x) ≤ F(x, y(ϕ)).
F3  There exist L1(x) ∈ L¹(Ω) and L_{q′}(x) ∈ L^{q′}(Ω), with q′ < q, such that |F(x, y(ϕ))| ≤ L1(x) and |∇F_y(x, y(ϕ))| ≤ L_{q′}(x) for all x ∈ Ω.
F4  There exists a constant C > 0 such that |F(x, y(0))| ≤ C for a.e. x ∈ Ω; that is, F(x, y(ϕ)) satisfies a boundedness condition.
H1 (Semicontinuity)  For all v_k ∈ W_0^{m+1,p}(Ω), if v_k converges weakly to v in W^{m+1,p}(Ω), then

\[ \liminf_{k \to \infty} \int_\Omega |L v_k|^p\,dx \ge \int_\Omega |L v|^p\,dx. \]

H2 (Equivalent norms)  The L^p(Ω) norm of Lv is equivalent to the W^{m+1,p}(Ω) norm of v:

\[ \exists c > 0,\ \forall v \in W^{m+1,p}(\Omega): \quad \|Lv\|^p_{L^p(\Omega)} \ge c \|v\|^p_{W^{m+1,p}(\Omega)}. \]

To derive necessary conditions for problem (P), we need to differentiate the map ϕ ↦ T(ϕ). Since this map is not directly differentiable, an approximate problem with a semilinear partial differential equation is introduced. The approximate map ϕ ↦ T_δ(ϕ) is then differentiable, and approximate necessary optimality conditions can be derived [26].
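Before turning to the existence analysis, it may help to record the simplest concrete instance of this framework (our illustration; the specific choices below are assumptions consistent with, but not spelled out in, the text): m = 1, p = 2, with Aα chosen so that A = −Δ, and L = Δ.

```latex
% Model case m = 1, p = 2: the state solves the classical obstacle problem
\[
y \in K(\varphi) = \{ v \in H_0^1(\Omega) : v \ge \varphi \ \text{a.e.} \}, \qquad
\langle -\Delta y + g(y) - f,\ v - y \rangle \ge 0 \quad \forall v \in K(\varphi),
\]
% with controls in H^2(\Omega) \cap H_0^1(\Omega) and cost (here \nu/(2p) = \nu/4)
\[
J(\varphi) = \int_\Omega F(x, y(\varphi))\,dx + \frac{\nu}{4} \int_\Omega |\Delta \varphi|^2\,dx.
\]
```

This is, up to notation, the setting of [2,3,14]; for this choice H2 holds because ‖Δv‖_{L²(Ω)} is an equivalent norm on H²(Ω) ∩ H¹₀(Ω) for a regular domain, while the higher-order case m = 2, A = Δ² corresponds to the biharmonic obstacle problem studied in [1,12].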
4 Existence of an Optimal Control

Let E ⊂ Ω be any compact subset. We define K_E, a nonempty closed convex subset of W_0^{m,p}(Ω), as the closure of

\[ K_E = \big\{ v \in C_0^\infty(\Omega) : v \ge 1 \ \text{on } E \big\}, \]

and consider the following variational inequality:

Find y ∈ K_E such that

\[ \sigma(y, v - y) \ge 0 \quad \text{for all } v \in K_E. \]

Its unique solution y is called the capacitary potential of E, and one can equivalently define [28]

\[ \mathrm{Cap}_{(m,p)}(E) = \inf_{\xi \in K_E} \Big( \sum_{|\alpha| \le m} \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p}, \]

for any compact E ⊂ Ω.
Lemma 4.1  Let y = T(ϕ) be the solution of problem (6). Then there exists a nonnegative Borel measure μ ∈ W^{-m,q}(Ω) such that

\[ \sum_{|\alpha| \le m} (-1)^{|\alpha|} D^\alpha \big( A_\alpha(\cdot, \eta(y), D^m y) \big) + g(y) - f = \mu \ \text{ in } \Omega, \tag{7} \]

and

\[ y = \frac{\partial^i y}{\partial \eta^i} = 0 \ \text{ on } \partial\Omega, \quad i = 1, \dots, m-1, \]

where (7) is understood in the sense of distributions, ∂/∂η denotes the outward normal derivative at ∂Ω, and

\[ \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(y), D^m y)\, D^\alpha \xi\,dx + \int_\Omega g(y)\, \xi\,dx - \int_\Omega f \xi\,dx = \int_\Omega \xi\,d\mu \quad \text{for all } \xi \in W_0^{m,p}(\Omega). \tag{8} \]

Proof  Let ξ ∈ C_0^∞(Ω) be arbitrary with ξ ≥ 0. For an arbitrary ε > 0, take v = y + εξ ∈ K(ϕ); substituting v into (6), we obtain

\[ \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(y), D^m y)\, D^\alpha \xi\,dx + \int_\Omega g(y)\, \xi\,dx - \int_\Omega f \xi\,dx \ge 0. \]

This means that there exists a nonnegative Borel measure μ ∈ W^{-m,q}(Ω) such that Σ_{|α|≤m} (−1)^{|α|} D^α (Aα(x, η(y), D^m y)) + g(y) − f = μ. Then, by density of C_0^∞(Ω) in W_0^{m,p}(Ω), we conclude that (8) holds for any ξ ∈ W_0^{m,p}(Ω).
Lemma 4.2  Let y = T(ϕ) be the solution of (6) and let μ ∈ W^{-m,q}(Ω) be the Borel measure defined in Lemma 4.1. Then:
1. There exists a positive constant k such that μ(E) ≤ k Cap_{(m,p)}(E) for each compact set E ⊂ Ω whose distance from the boundary ∂Ω is positive.
2. y = ϕ μ-a.e., where y and ϕ are the representatives defined Cap_{(m,p)}-a.e., respectively.

Proof  We use the same techniques as [1], with appropriate modifications. For the proof of the first point, take an arbitrary ξ ∈ C_0^∞(Ω) such that ξ ≥ 0 on Ω and ξ ≥ 1 on E. From Lemma 4.1 we know that Σ_{|α|≤m} (−1)^{|α|} D^α (Aα(x, η(y), D^m y)) + g(y) − f = μ in Ω in the sense of distributions. Then we can write

\[ \mu(E) = \int_\Omega \chi_E\,d\mu \le \int_\Omega \xi\,d\mu, \]

where χ_A is the indicator function of the set A, defined by

\[ \chi_A(x) = \begin{cases} 1 & \text{if } x \in A, \\ 0 & \text{otherwise}. \end{cases} \]
Therefore, by (8) and the Hölder inequality, we get

\[ \int_\Omega \xi\,d\mu \le \sum_{|\alpha| \le m} \int_\Omega \big| A_\alpha(x, \eta(y), D^m y) \big|\, |D^\alpha \xi|\,dx + \Big( \int_\Omega |g(y)|^q\,dx \Big)^{1/q} \Big( \int_\Omega |\xi|^p\,dx \Big)^{1/p} + \Big( \int_\Omega |f|^q\,dx \Big)^{1/q} \Big( \int_\Omega |\xi|^p\,dx \Big)^{1/p}. \]

From assumptions G and A2 and by a Sobolev injection theorem, we deduce that

\[ \int_\Omega \xi\,d\mu \le \sum_{|\alpha| \le m} \int_\Omega \big( c_1 |\eta^\alpha(y)|^{p-1} + s_1(x) \big) |D^\alpha \xi|\,dx + c_2 \Big( \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p} + c_3 \Big( \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p}. \]

By the Hölder inequality, we deduce that

\[ \int_\Omega \xi\,d\mu \le \sum_{|\alpha| \le m} \Big( \int_\Omega \big( c_1 |\eta^\alpha(y)|^{p-1} + s_1(x) \big)^q\,dx \Big)^{1/q} \Big( \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p} + c_2 \Big( \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p} + c_3 \Big( \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p}, \]

and, by the Minkowski inequality, we obtain

\[ \int_\Omega \xi\,d\mu \le \sum_{|\alpha| \le m} \Big[ c_1 \Big( \int_\Omega |\eta^\alpha(y)|^{q(p-1)}\,dx \Big)^{1/q} + \Big( \int_\Omega |s_1(x)|^q\,dx \Big)^{1/q} \Big] \Big( \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p} + c_2 \Big( \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p} + c_3 \Big( \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p}. \]

From (4) and the properties of s1(x), we get

\[ \mu(E) \le k \Big( \sum_{|\alpha| \le m} \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p}. \]

Taking the infimum over all ξ ∈ K_E, we get

\[ \mu(E) \le k \inf_{\xi \in K_E} \Big( \sum_{|\alpha| \le m} \int_\Omega |D^\alpha \xi|^p\,dx \Big)^{1/p} = k\, \mathrm{Cap}_{(m,p)}(E). \]
For the proof of the second point, we take v = ϕ in (6) and get

\[ \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(y), D^m y)\, D^\alpha(\varphi - y)\,dx + \int_\Omega g(y)(\varphi - y)\,dx - \int_\Omega f(\varphi - y)\,dx \ge 0. \]

This implies

\[ \int_\Omega (\varphi - y)\,d\mu \ge 0, \quad \text{and it follows that} \quad \mu\big( \{ x \in \Omega : y > \varphi \ \text{a.e.} \} \big) = 0. \tag{9} \]

We will now show that μ({x ∈ Ω : y > ϕ Cap_{(m,p)}-a.e.}) = 0. To do so, we write

\[ \{ x \in \Omega : y > \varphi \ \mathrm{Cap}_{(m,p)}\text{-a.e.} \} = \{ x \in \Omega : y > \varphi \ \text{a.e.} \} \cup Q, \]

where Cap_{(m,p)}(Q) = 0 and Q ∩ {x ∈ Ω : y > ϕ a.e.} = ∅. Then, using (9) and part 1, we obtain

\[ \mu\big( \{ x \in \Omega : y > \varphi \ \mathrm{Cap}_{(m,p)}\text{-a.e.} \} \big) = \mu\big( \{ x \in \Omega : y > \varphi \ \text{a.e.} \} \big) + \mu(Q) = 0. \]

Theorem 4.1  Let y2 (respectively y1) be the solution of (6) with obstacle ϕ2 (respectively ϕ1). Then there exists a constant C such that

\[ \|y_2 - y_1\|_{W^{m,p}(\Omega)} \le C \|\varphi_2 - \varphi_1\|_{W^{m,p}(\Omega)}. \]

Proof  Since y2 and y1 are the solutions of (6) with obstacles ϕ2 and ϕ1 respectively, it follows from Lemma 4.1 that there exist nonnegative measures μ1 and μ2 in W^{-m,q}(Ω) such that

\[ \sum_{|\alpha| \le m} (-1)^{|\alpha|} D^\alpha\big( A_\alpha(x, \eta(y_1), D^m y_1) \big) + g(y_1) - f = \mu_1 \ \text{in } \Omega, \qquad y_1 = \varphi_1 \ \mu_1\text{-a.e.}, \]

and

\[ \sum_{|\alpha| \le m} (-1)^{|\alpha|} D^\alpha\big( A_\alpha(x, \eta(y_2), D^m y_2) \big) + g(y_2) - f = \mu_2 \ \text{in } \Omega, \qquad y_2 = \varphi_2 \ \mu_2\text{-a.e.} \]

By Lemma 4.2, for i = 1, 2 we have y_i ≥ ϕ_i Cap_{(m,p)}-a.e., so Cap_{(m,p)}({x ∈ Ω : y_i < ϕ_i a.e.}) = 0. Hence, by the first part of Lemma 4.2, we can write μ1({x ∈ Ω : y2 < ϕ2 a.e.}) = 0.
Therefore

\[ -\int_\Omega y_2\,d\mu_1 \le -\int_\Omega \varphi_2\,d\mu_1. \]

Similarly, we obtain

\[ -\int_\Omega y_1\,d\mu_2 \le -\int_\Omega \varphi_1\,d\mu_2. \]

From the above, we get

\[ \sum_{|\alpha| \le m} \int_\Omega \big( A_\alpha(x, \eta(y_2), D^m y_2) - A_\alpha(x, \eta(y_1), D^m y_1) \big) D^\alpha(y_2 - y_1)\,dx + \int_\Omega \big( g(y_2) - g(y_1) \big)(y_2 - y_1)\,dx = \int_\Omega (y_2 - y_1)\,d(\mu_2 - \mu_1). \]

By assumption A6 and the monotonicity of g, we deduce

\[ \|y_2 - y_1\|^p_{W^{m,p}(\Omega)} \le \int_\Omega y_2\,d\mu_2 + \int_\Omega y_1\,d\mu_1 - \int_\Omega y_1\,d\mu_2 - \int_\Omega y_2\,d\mu_1 \]
\[ = \int_\Omega \varphi_2\,d\mu_2 + \int_\Omega \varphi_1\,d\mu_1 - \int_\Omega y_1\,d\mu_2 - \int_\Omega y_2\,d\mu_1 \]
\[ \le \int_\Omega \varphi_2\,d\mu_2 + \int_\Omega \varphi_1\,d\mu_1 - \int_\Omega \varphi_1\,d\mu_2 - \int_\Omega \varphi_2\,d\mu_1 = \int_\Omega (\varphi_2 - \varphi_1)\,d(\mu_2 - \mu_1). \tag{10} \]

For the right-hand side of this inequality, we obtain

\[ \int_\Omega (\varphi_2 - \varphi_1)\,d(\mu_2 - \mu_1) \le \sum_{|\alpha| \le m} \int_\Omega \big( A_\alpha(x, \eta(y_2), D^m y_2) - A_\alpha(x, \eta(y_1), D^m y_1) \big) D^\alpha(\varphi_2 - \varphi_1)\,dx. \]

Then, from assumption A2 and the Hölder inequality,

\[ \int_\Omega (\varphi_2 - \varphi_1)\,d(\mu_2 - \mu_1) \le c \sum_{|\alpha| \le m} \Big( \int_\Omega |D^\alpha y_2 - D^\alpha y_1|^p\,dx \Big)^{(p-1)/p} \Big( \int_\Omega |D^\alpha(\varphi_2 - \varphi_1)|^p\,dx \Big)^{1/p}. \]

Finally, taking (10) into account, we get

\[ \|y_2 - y_1\|_{W^{m,p}(\Omega)} \le C \|\varphi_2 - \varphi_1\|_{W^{m,p}(\Omega)}. \]
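The stability estimate of Theorem 4.1 can be observed numerically. The sketch below (our illustration, not part of the paper) treats the simplest case m = 1, p = 2, A = −d²/dx², g = 0 on (0, 1): it solves the discrete obstacle problem for two obstacles by projected Gauss–Seidel (PSOR) and checks that ‖y₂ − y₁‖∞ ≤ ‖ϕ₂ − ϕ₁‖∞, a sup-norm nonexpansiveness which, for this second-order model problem, follows from the comparison principle and is a stronger variant of the Lipschitz dependence proved above. All names are ours.

```python
import numpy as np

def obstacle_psor(phi, n=120, omega=1.5, tol=1e-10, max_sweeps=30000):
    """Projected SOR for the discrete obstacle problem
       -u'' >= 0, u >= phi, (-u'')(u - phi) = 0 on (0,1), u(0) = u(1) = 0."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    p = phi(x)
    u = np.maximum(p, 0.0)                   # feasible starting guess
    for _ in range(max_sweeps):
        delta = 0.0
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            gs = 0.5 * (left + right)        # unconstrained Gauss-Seidel value
            new = max(p[i], (1.0 - omega) * u[i] + omega * gs)  # project onto u >= phi
            delta = max(delta, abs(new - u[i]))
            u[i] = new
        if delta < tol:
            break
    return x, u
```

Shifting the obstacle down by a constant c shifts the solution by at most c in the sup norm, which is what the test below verifies for two parabolic obstacles.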
Theorem 4.2  Under assumptions H1 and H2, there exists an optimal control for problem (P).

Proof  Let {ϕk}_k ⊂ U_ad ⊂ W^{m+1,p}(Ω) be a minimizing sequence, i.e.

\[ \lim_{k \to \infty} J(\varphi_k) = \inf_{\varphi \in U_{ad}} J(\varphi) = d, \]

and let yk = T(ϕk) denote the solution of (6) corresponding to ϕk, so that

\[ \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(y_k), D^m y_k)\, D^\alpha(v - y_k)\,dx + \int_\Omega g(y_k)(v - y_k)\,dx \ge \int_\Omega f(x)(v - y_k)\,dx \quad \text{for all } v \in K(\varphi_k). \tag{11} \]

By assumption F2 on F(·, T_{ϕk}(·)), we get

\[ |\Omega|\, c_1 + c_2 \|y_k\|^2_{L^2(\Omega)} + \frac{\nu}{2p} \|L\varphi_k\|^p_{L^p(\Omega)} \le \int_\Omega F\big(x, T_{\varphi_k}(x)\big)\,dx + \frac{\nu}{2p} \|L\varphi_k\|^p_{L^p(\Omega)}. \]

From the equivalent-norms assumption H2, we deduce

\[ \frac{c\nu}{2p} \sum_{|\alpha| \le m+1} \int_\Omega |D^\alpha \varphi_k|^p\,dx \le |\Omega|\, c_1 + c_2 \|y_k\|^2_{L^2(\Omega)} + \frac{\nu}{2p} \|L\varphi_k\|^p_{L^p(\Omega)} \le \int_\Omega F\big(x, T_{\varphi_k}(x)\big)\,dx + \frac{\nu}{2p} \|L\varphi_k\|^p_{L^p(\Omega)}. \]

Since {J(ϕk)}_k is bounded, we conclude that

\[ c\, \|\varphi_k\|^p_{W^{m+1,p}(\Omega)} \le d. \tag{12} \]
Since {ϕk}_k is bounded in W^{m+1,p}(Ω), up to a subsequence it converges weakly to some ϕ̄ in W^{m+1,p}(Ω) and strongly in W^{m,p}(Ω); moreover ϕ̄ belongs to U_ad, since U_ad is weakly closed. In the sequel, {yk}_k is the sequence of solutions of (6) corresponding to the controls {ϕk}_k. Taking v = ϕk in (11), we get

\[ \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(y_k), D^m y_k)\, D^\alpha(\varphi_k - y_k)\,dx + \int_\Omega g(y_k)(\varphi_k - y_k)\,dx \ge \int_\Omega f(x)(\varphi_k - y_k)\,dx. \tag{13} \]
Using assumptions A2 and A6, together with (13) and Hölder's inequality, we have

\[ k_1 \sum_{|\alpha| \le m} \int_\Omega |D^\alpha(y_k - \varphi_k)|^p\,dx \le \sum_{|\alpha| \le m} \int_\Omega \big( A_\alpha(x, \eta(y_k), D^m y_k) - A_\alpha(x, \eta(\varphi_k), D^m \varphi_k) \big) D^\alpha(y_k - \varphi_k)\,dx, \]

and

\[ k_1 \|y_k - \varphi_k\|^p_{W^{m,p}(\Omega)} \le \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(y_k), D^m y_k)\, D^\alpha(y_k - \varphi_k)\,dx + \frac{c\varepsilon^q}{q} \Big( c_1 \sum_{|\alpha| < m} \int_\Omega |\eta^\alpha(\varphi_k)|^{q(p-1)}\,dx + c_1 \int_\Omega |D^m \varphi_k|^{q(p-1)}\,dx + \int_\Omega |s_1(x)|^q\,dx \Big) + \frac{c}{\varepsilon^p p} \|y_k - \varphi_k\|^p_{W^{m,p}(\Omega)}. \]

From the properties of s1(x), we deduce that

\[ k_1 \|y_k - \varphi_k\|^p_{W^{m,p}(\Omega)} \le \int_\Omega f(x)(y_k - \varphi_k)\,dx - \int_\Omega g(y_k)(y_k - \varphi_k)\,dx + \frac{c\varepsilon^q}{q} \Big( c_1 \sum_{|\alpha| < m} \int_\Omega |\eta^\alpha(\varphi_k)|^{q(p-1)}\,dx + c_1 \int_\Omega |D^m \varphi_k|^{q(p-1)}\,dx + c \Big) + \frac{c}{\varepsilon^p p} \|y_k - \varphi_k\|^p_{W^{m,p}(\Omega)}. \]

Then, by the monotonicity of g and the Young inequality, we get

\[ k_1 \|y_k - \varphi_k\|^p_{W^{m,p}(\Omega)} \le \frac{\varepsilon^q}{q} \|f\|^q_{L^q(\Omega)} + \frac{1}{\varepsilon^p p} \|y_k - \varphi_k\|^p_{L^p(\Omega)} - \int_\Omega g(\varphi_k)(y_k - \varphi_k)\,dx + \frac{c\varepsilon^q}{q} \Big( c_1 \sum_{|\alpha| < m} \int_\Omega |\eta^\alpha(\varphi_k)|^{q(p-1)}\,dx + c_1 \int_\Omega |D^m \varphi_k|^{q(p-1)}\,dx + c \Big) + \frac{c}{\varepsilon^p p} \|y_k - \varphi_k\|^p_{W^{m,p}(\Omega)}. \]

By a Sobolev injection theorem, we get

\[ k_1 \|y_k - \varphi_k\|^p_{W^{m,p}(\Omega)} \le \frac{\varepsilon^q}{q} \|f\|^q_{L^q(\Omega)} + \frac{c}{\varepsilon^p p} \|y_k - \varphi_k\|^p_{L^p(\Omega)} + \frac{\varepsilon^q \gamma^q |\Omega|}{q} + \max\Big\{ \frac{c\varepsilon^q}{q},\ \frac{c\varepsilon^q \nu^q}{q} \Big\} \|\varphi_k\|^p_{W^{m,p}(\Omega)} + \frac{c\varepsilon^q}{q} \Big( \sum_{|\alpha| < m} \int_\Omega |\eta^\alpha(\varphi_k)|^{q(p-1)}\,dx + c \Big) + \frac{2c}{\varepsilon^p p} \|y_k - \varphi_k\|^p_{W^{m,p}(\Omega)}. \]

Therefore, by a suitable choice of ε, we deduce

\[ c\, \|y_k - \varphi_k\|^p_{W^{m,p}(\Omega)} \le c_1 + c_2 \|\varphi_k\|^p_{W^{m,p}(\Omega)} + c_3 \|f\|^q_{L^q(\Omega)} + c_4 \Big( \sum_{|\alpha| \le m} \int_\Omega |\eta^\alpha(\varphi_k)|^{q(p-1)}\,dx + c \Big). \]

Then, using the properties of f, a Sobolev injection theorem, and (12), we get

\[ \|y_k - \varphi_k\|_{W^{m,p}(\Omega)} \le c_1 \|\varphi_k\|_{W^{m+1,p}(\Omega)} + c_2, \]

and we deduce that ‖yk‖_{W^{m,p}(Ω)} ≤ C, where C is independent of k. Thus, up to a subsequence, yk converges to some ȳ weakly in W_0^{m,p}(Ω) and strongly in L^p(Ω).

Next we prove that ȳ = T(ϕ̄) is the solution of (6). First we show that

\[ \sum_{|\alpha| \le m} \int_\Omega \big( A_\alpha(x, \eta(y_k), D^m y_k) - A_\alpha(x, \eta(\bar y), D^m \bar y) \big) D^\alpha(y_k - \bar y)\,dx \longrightarrow 0 \quad \text{as } k \to \infty. \]

By assumption A2, for all 0 ≤ |α| ≤ m we get

\[ \Big( \int_\Omega \big| A_\alpha(x, \eta(y_k), D^m y_k) \big|^q\,dx \Big)^{1/q} \le c_1 \Big( \int_\Omega |\eta(y_k)|^p\,dx \Big)^{1/q} + c_1 \Big( \int_\Omega |D^m y_k|^p\,dx \Big)^{1/q} + \Big( \int_\Omega |s_1(x)|^q\,dx \Big)^{1/q}. \]

From the properties of s1(x) and the bound above, we deduce that, for all 0 ≤ |α| ≤ m,

\[ \big\| A_\alpha(x, \eta(y_k), D^m y_k) \big\|_{L^q(\Omega)} \le C, \tag{14} \]

so that, up to a subsequence, Aα(x, η(yk), D^m yk) converges weakly in L^q(Ω) for all 0 ≤ |α| ≤ m. Then, from (11) and the convergence results on ϕk and yk, we deduce that ȳ ≥ ϕ̄ almost everywhere in Ω.

Taking v = vk = max{ȳ, ϕk} in (11), we get the strong convergence of vk to ȳ in W_0^{m,p}(Ω), and we have

\[ \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(y_k), D^m y_k)\, D^\alpha(v_k - y_k)\,dx + \int_\Omega g(y_k)(v_k - y_k)\,dx \ge \int_\Omega f(x)(v_k - y_k)\,dx. \]
From assumptions A2 and A6 and by (14), we get

\[ 0 \le \sum_{|\alpha| \le m} \int_\Omega \big( A_\alpha(x, \eta(y_k), D^m y_k) - A_\alpha(x, \eta(\bar y), D^m \bar y) \big)\big( D^\alpha y_k - D^\alpha \bar y \big)\,dx, \]

then

\[ 0 \le \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(y_k), D^m y_k)\, D^\alpha(v_k - \bar y)\,dx - \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(\bar y), D^m \bar y)\big( D^\alpha y_k - D^\alpha \bar y \big)\,dx + \int_\Omega g(y_k)(v_k - y_k)\,dx - \int_\Omega f(x)(v_k - y_k)\,dx. \]

From assumption A2 and the Hölder inequality, we get

\[ 0 \le c \Big[ \Big( \sum_{|\alpha| \le m} \int_\Omega |\eta(y_k)|^{q(p-1)} \Big)^{1/q} + \Big( \sum_{|\alpha| \le m} \int_\Omega |D^m y_k|^{q(p-1)} \Big)^{1/q} + \Big( \int_\Omega |s_1(x)|^q\,dx \Big)^{1/q} \Big] \Big( \sum_{|\alpha| \le m} \int_\Omega |D^\alpha(v_k - \bar y)|^p\,dx \Big)^{1/p} - \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(\bar y), D^m \bar y)\big( D^\alpha y_k - D^\alpha \bar y \big)\,dx - \int_\Omega f(x)(v_k - y_k)\,dx + \int_\Omega g(y_k)(v_k - y_k)\,dx, \]

and

\[ 0 \le c\, \|v_k - \bar y\|_{W^{m,p}(\Omega)} - \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(\bar y), D^m \bar y)\big( D^\alpha y_k - D^\alpha \bar y \big)\,dx - \int_\Omega f(x)(v_k - y_k)\,dx + \int_\Omega g(y_k)(v_k - y_k)\,dx. \]

Then, using the convergence results given above, we conclude that

\[ \sum_{|\alpha| \le m} \int_\Omega \big( A_\alpha(x, \eta(y_k), D^m y_k) - A_\alpha(x, \eta(\bar y), D^m \bar y) \big)\big( D^\alpha y_k - D^\alpha \bar y \big)\,dx \longrightarrow 0 \quad \text{as } k \to \infty. \]
Now we prove that D^α yk converges to D^α ȳ strongly in L^p(Ω) and that Aα(x, η(yk), D^m yk) converges weakly to Aα(x, η(ȳ), D^m ȳ) in L^q(Ω). Thanks to assumptions A2 and A6, we have

\[ k_1 \sum_{|\alpha| \le m} \int_\Omega |D^\alpha(y_k - \bar y)|^p\,dx \le \sum_{|\alpha| \le m} \int_\Omega \big( A_\alpha(x, \eta(y_k), D^m y_k) - A_\alpha(x, \eta(\bar y), D^m \bar y) \big)\big( D^\alpha y_k - D^\alpha \bar y \big)\,dx. \]

So D^α yk converges strongly to D^α ȳ in L^p(Ω) as k → ∞. We deduce that yk converges strongly to ȳ in W_0^{m,p}(Ω) and that, for all α, D^α yk → D^α ȳ almost everywhere in Ω. By assumption A1, Aα(x, η(yk), D^m yk) → Aα(x, η(ȳ), D^m ȳ) almost everywhere in Ω as k → ∞.

Now we prove that ȳ = T(ϕ̄). For any v ∈ K(ϕ̄), max{v, ϕk} belongs to K(ϕk), and by the convergence results on ϕk, max{v, ϕk} converges strongly to v = max{v, ϕ̄} in W_0^{m,p}(Ω). Thus, by (11) and the convergence results on Aα(x, η(yk), D^m yk) and yk, we deduce

\[ \sum_{|\alpha| \le m} \int_\Omega A_\alpha(x, \eta(\bar y), D^m \bar y)\, D^\alpha(v - \bar y)\,dx + \int_\Omega g(\bar y)(v - \bar y)\,dx \ge \int_\Omega f(x)(v - \bar y)\,dx. \]

Finally, with (4), we conclude that ȳ = T(ϕ̄).

From Theorem 4.1 we then deduce that yk = T(ϕk) converges weakly to ȳ = T(ϕ̄) in W_0^{m,p}(Ω), and by the lower semicontinuity property H1 of the operator L, we obtain

\[ \liminf_{k \to \infty} \int_\Omega |L\varphi_k|^p\,dx \ge \int_\Omega |L\bar\varphi|^p\,dx. \]

Hence, using the convergence results on yk and the properties of the minimizing sequence {ϕk}_k, we get

\[ \inf_{\varphi \in U_{ad}} J(\varphi) = \lim_{k \to \infty} J(\varphi_k) = \lim_{k \to \infty} \int_\Omega F\big(x, T_{\varphi_k}(x)\big)\,dx + \lim_{k \to \infty} \frac{\nu}{2p} \int_\Omega |L\varphi_k|^p\,dx \ge \lim_{k \to \infty} \int_\Omega F\big(x, T_{\varphi_k}(x)\big)\,dx + \liminf_{k \to \infty} \frac{\nu}{2p} \int_\Omega |L\varphi_k|^p\,dx \ge \int_\Omega F\big(x, T_{\bar\varphi}(x)\big)\,dx + \frac{\nu}{2p} \int_\Omega |L\bar\varphi|^p\,dx = J(\bar\varphi) \ge \inf_{\varphi \in U_{ad}} J(\varphi), \]

and hence ϕ̄ is optimal.
5 Approximate Problems

In this section we introduce a family of approximate problems related to problem (P), from which an approximate necessary optimality system will be derived. The obstacle problem given by (3) and (5) can be equivalently written as

\[ -\big( Ay + g(y) - f \big) \in \partial I_{K(\varphi)}(y) \ \text{ in } \Omega, \qquad y = \frac{\partial^i y}{\partial \eta^i} = 0 \ \text{ on } \partial\Omega, \ i = 1, \dots, m-1, \]

where ∂I_{K(ϕ)}(y) is the subdifferential at y of the indicator function I_{K(ϕ)}, defined by

\[ \partial I_{K(\varphi)}(y) = \Big\{ v \in L^q(\Omega) \ \Big|\ \int_\Omega v(x)\big( y(x) - z(x) \big)\,dx \ge 0 \ \text{for all } z \in K \Big\}, \]

or, equivalently,

\[ \partial I_{K(\varphi)}(y) = \partial I_{K^+}(y - \varphi) = \big\{ v \in L^q(\Omega) \ \big|\ v \in \beta_0(y - \varphi) \ \text{a.e. in } \Omega \big\}, \]

where ∂I_{K(ϕ)}(y) denotes the normal cone to the set K at the point y, K^+ is the closed convex set

\[ K^+ = \big\{ y \in W_0^{m,p}(\Omega) \ \big|\ y(x) \ge 0 \ \text{a.e. in } \Omega \big\}, \]

and β0(·) is the maximal monotone graph from ℝ to 2^ℝ defined by

\[ \beta_0(r) = \begin{cases} 0 & \text{if } r > 0, \\ \mathbb{R}^- & \text{if } r = 0, \\ \emptyset & \text{if } r < 0. \end{cases} \tag{15} \]
The above problem can then be equivalently written as

\[ Ay + g(y) + \beta_0(y - \varphi) \ni f \ \text{ in } \Omega, \qquad y = \frac{\partial^i y}{\partial \eta^i} = 0 \ \text{ on } \partial\Omega, \ i = 1, \dots, m-1. \]

For δ > 0, we denote by βδ(·) the Yosida approximation of the graph β0(·) given by (15). Then, for ϕ ∈ W_0^{m,p}(Ω), we consider the following quasilinear equation:

\[ \begin{cases} \displaystyle \sum_{|\alpha| \le m} (-1)^{|\alpha|} D^\alpha\big( A_\alpha(x, \eta(y), D^m y) \big) + g(y) + \beta_\delta(y - \varphi) = f(x) & \text{in } \Omega, \\[2mm] \displaystyle y = \frac{\partial^i y}{\partial \eta^i} = 0, \quad i = 1, \dots, m-1, & \text{on } \partial\Omega. \end{cases} \tag{16} \]
Here βδ(r) = (1/δ) β(r), where β(·) is the Moreau–Yosida approximation associated with the graph β0(·) of (15), given by

\[ \beta(r) = \begin{cases} 0 & \text{if } r > 0, \\ -r^2 & \text{if } -\tfrac{1}{2} \le r \le 0, \\ r + \tfrac{1}{4} & \text{if } r < -\tfrac{1}{2}. \end{cases} \]

In the sequel, to simplify the notation, we write βδ(r) = (1/δ) β(r). Since βδ(y − ϕ) belongs to L^q(Ω), problem (16) has a unique solution y in W^{2m,p}(Ω) ∩ W_0^{m,p}(Ω) (see for example [10,30]); thus, from the properties of βδ(·) and g(·), Eq. (16) has a unique solution y^δ ∈ W^{2m,p}(Ω) ∩ W_0^{m,p}(Ω) (see for example [10]). Setting y^δ = T^δ(ϕ), we introduce the approximate cost functional

\[ J^\delta(\varphi) = \int_\Omega F\big(x, T^\delta(\varphi)\big)\,dx + \frac{\nu}{2p} \int_\Omega |L\varphi|^p\,dx, \]

and the approximate control problem (P^δ) is defined as:

Find ϕ^δ ∈ U_ad such that

\[ J^\delta(\varphi^\delta) = \inf_{\varphi \in U_{ad}} J^\delta(\varphi). \tag{$P^\delta$} \]
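As a consistency check (our addition), reading the three branches of the regularization above as β(r) = 0 for r > 0, β(r) = −r² for −1/2 ≤ r ≤ 0, and β(r) = r + 1/4 for r < −1/2, one can verify directly that β is a C¹, nonpositive, nondecreasing function:

```latex
% Matching at r = 0:     -r^2 -> 0   and   d/dr(-r^2) = -2r -> 0;
% Matching at r = -1/2:  -(-1/2)^2 = -1/4 = -1/2 + 1/4,
%                        d/dr(-r^2)|_{r=-1/2} = 1 = d/dr(r + 1/4).
\[
\beta(0^-) = 0 = \beta(0^+), \quad \beta'(0^-) = 0 = \beta'(0^+), \qquad
\beta\!\left(-\tfrac12\right) = -\tfrac14, \quad
\beta'\!\left(-\tfrac12^-\right) = 1 = \beta'\!\left(-\tfrac12^+\right),
\]
```

so the branches match to first order at both breakpoints, and βδ = β/δ is a smooth monotone approximation of the graph β0 of (15) that vanishes exactly where the constraint y ≥ ϕ is satisfied.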
Theorem 5.1  Problem (P^δ) admits an optimal pair (y^δ, ϕ^δ).

For the proof of Theorem 5.1, one uses the same techniques and follows the same steps as in the proof of Theorem 4.2, with suitable modifications. We now establish the following convergence theorem, which is crucial in the next section.

Theorem 5.2  Let (y^δ, ϕ^δ) be an optimal pair for problem (P^δ). Then y^δ converges strongly to ȳ in W_0^{m,p}(Ω), and ϕ^δ converges weakly to ϕ̄ in W^{m+1,p}(Ω) and strongly to ϕ̄ in W_0^{m,p}(Ω), where (ȳ, ϕ̄) is the given optimal pair for problem (P).

Proof  From the properties of the cost functional and assumptions F4 and H2, we obtain

\[ \frac{c\nu}{2p} \|\varphi^\delta\|^p_{W^{m+1,p}(\Omega)} \le J^\delta(\varphi^\delta) \le J^\delta(0) \le \int_\Omega C\,dx, \]

and thus

\[ \|\varphi^\delta\|_{W^{m+1,p}(\Omega)} \le C, \]

where C is independent of δ. Then, up to a subsequence, ϕ^δ converges weakly to some ϕ̃ in W^{m+1,p}(Ω) and strongly to ϕ̃ in W_0^{m,p}(Ω).
Multiplying Eq. (16) by $v = \varphi^\delta - y^\delta$ and integrating both sides over $\Omega$, we get
$$
\sum_{|\alpha|\le m}\int_\Omega A_\alpha\left(x,\eta\left(y^\delta\right),D^m y^\delta\right) D^\alpha\left(\varphi^\delta - y^\delta\right) dx + \int_\Omega g\left(y^\delta\right)\left(\varphi^\delta - y^\delta\right) dx
= -\int_\Omega \beta_\delta\left(y^\delta - \varphi^\delta\right)\left(\varphi^\delta - y^\delta\right) dx + \int_\Omega f(x)\left(\varphi^\delta - y^\delta\right) dx.
$$
From the properties of $\beta(\cdot)$, we note that
$$
-\int_\Omega \beta_\delta\left(y^\delta - \varphi^\delta\right)\left(\varphi^\delta - y^\delta\right) dx \ge 0.
$$
Thus, we have
$$
\sum_{|\alpha|\le m}\int_\Omega A_\alpha\left(x,\eta\left(y^\delta\right),D^m y^\delta\right) D^\alpha\left(\varphi^\delta - y^\delta\right) dx + \int_\Omega g\left(y^\delta\right)\left(\varphi^\delta - y^\delta\right) dx \ge \int_\Omega f(x)\left(\varphi^\delta - y^\delta\right) dx. \tag{17}
$$
From assumptions $A_2$ and $A_6$ and by (17), we get
$$
k_1 \sum_{|\alpha|\le m}\left\|D^\alpha\left(y^\delta - \varphi^\delta\right)\right\|^p_{L^p(\Omega)}
\le \sum_{|\alpha|\le m}\int_\Omega A_\alpha\left(x,\eta\left(y^\delta\right),D^m y^\delta\right) D^\alpha\left(y^\delta - \varphi^\delta\right) dx
+ k_2 \left\|\varphi^\delta\right\|^{p-1}_{W^{m,p}(\Omega)}\left\|y^\delta - \varphi^\delta\right\|_{W^{m,p}(\Omega)},
$$
and
$$
k_1 \sum_{|\alpha|\le m}\left\|D^\alpha\left(y^\delta - \varphi^\delta\right)\right\|^p_{L^p(\Omega)}
\le \int_\Omega f(x)\left(y^\delta - \varphi^\delta\right) dx - \int_\Omega g\left(y^\delta\right)\left(y^\delta - \varphi^\delta\right) dx
+ k_2 \left\|\varphi^\delta\right\|^{p-1}_{W^{m,p}(\Omega)}\left\|y^\delta - \varphi^\delta\right\|_{W^{m,p}(\Omega)}.
$$
By Hölder's inequality, we get
$$
k_1 \left\|y^\delta - \varphi^\delta\right\|^p_{W^{m,p}(\Omega)}
\le \left(\int_\Omega \left|g\left(y^\delta\right)\right|^q dx\right)^{1/q}\left(\int_\Omega \left|y^\delta - \varphi^\delta\right|^p dx\right)^{1/p}
+ \left(\int_\Omega |f(x)|^q dx\right)^{1/q}\left(\int_\Omega \left|y^\delta - \varphi^\delta\right|^p dx\right)^{1/p}
+ k_2 \left\|\varphi^\delta\right\|^{p-1}_{W^{m,p}(\Omega)}\left\|y^\delta - \varphi^\delta\right\|_{W^{m,p}(\Omega)}.
$$
Hence, we get
$$
\left\|y^\delta - \varphi^\delta\right\|^p_{W^{m,p}(\Omega)} \le C,
$$
and
$$
\left\|y^\delta\right\|_{W^{m,p}(\Omega)} \le C. \tag{18}
$$
Then, up to a subsequence, we deduce that $y^\delta$ converges weakly to $\tilde y$ in $W^{m,p}(\Omega)$ and strongly to $\tilde y$ in $L^p(\Omega)$. Multiplying Eq. (16) by $\psi$ in $W_0^{m,p}(\Omega)$ such that $\psi \ge 0$, using assumption $A_2$ and integrating both sides over $\Omega$, we obtain
$$
0 \le \int_\Omega -\beta\left(y^\delta - \varphi^\delta\right)\psi\, dx
\le \delta k_2 \left\|y^\delta\right\|^{p-1}_{W^{m,p}(\Omega)}\|\psi\|_{W^{m,p}(\Omega)} + \delta\int_\Omega g\left(y^\delta\right)\psi\, dx + \delta\int_\Omega f(x)\,\psi\, dx.
$$
The right-hand side goes to $0$ as $\delta$ goes to $0$, and then we get
$$
0 \le \int_\Omega -\beta\left(\tilde y - \tilde\varphi\right)\psi\, dx \le \liminf_{\delta\to 0}\int_\Omega -\beta\left(y^\delta - \varphi^\delta\right)\psi\, dx = 0.
$$
This implies that $\beta\left(\tilde y - \tilde\varphi\right) = 0$, and therefore, by the definition of $\beta(\cdot)$, we deduce that $\tilde y \ge \tilde\varphi$ almost everywhere in $\Omega$. From assumptions $A_2$ and $A_6$ and Eq. (16), we have
$$
k_1 \sum_{|\alpha|\le m}\int_\Omega \left|D^\alpha \tilde y - D^\alpha y^\delta\right|^p dx
\le \sum_{|\alpha|\le m}\int_\Omega \left(A_\alpha\left(x,\eta(\tilde y),D^m \tilde y\right) - A_\alpha\left(x,\eta\left(y^\delta\right),D^m y^\delta\right)\right) D^\alpha\left(\tilde y - y^\delta\right) dx.
$$
By the definition of $\beta(\cdot)$, we can write
$$
k_1 \sum_{|\alpha|\le m}\int_\Omega \left|D^\alpha \tilde y - D^\alpha y^\delta\right|^p dx
\le \sum_{|\alpha|\le m}\int_\Omega A_\alpha\left(x,\eta(\tilde y),D^m \tilde y\right) D^\alpha\left(\tilde y - y^\delta\right) dx
+ \int_{\left\{y^\delta < \varphi^\delta\right\}} \beta_\delta\left(y^\delta - \varphi^\delta\right)\left(\max\left\{\tilde y, \varphi^\delta\right\} - y^\delta\right) dx
- \sum_{|\alpha|\le m}\int_\Omega A_\alpha\left(x,\eta\left(y^\delta\right),D^m y^\delta\right) D^\alpha\left(\tilde y - \max\left\{\tilde y, \varphi^\delta\right\}\right) dx
- \int_\Omega g\left(y^\delta\right)\left(\max\left\{\tilde y, \varphi^\delta\right\} - y^\delta\right) dx
+ \int_\Omega f(x)\left(\max\left\{\tilde y, \varphi^\delta\right\} - y^\delta\right) dx.
$$
Then, we get
$$
k_1 \sum_{|\alpha|\le m}\int_\Omega \left|D^\alpha \tilde y - D^\alpha y^\delta\right|^p dx
\le \|\tilde y\|^{p-1}_{W^{m,p}(\Omega)}\left\|\tilde y - y^\delta\right\|_{W^{m,p}(\Omega)}
+ k_2 \left\|y^\delta\right\|^{p-1}_{W^{m,p}(\Omega)}\left\|y^\delta - \max\left\{\tilde y, \varphi^\delta\right\}\right\|_{W^{m,p}(\Omega)}
+ \left(\int_\Omega |\gamma|^q dx + \nu\left\|y^\delta\right\|^p_{L^p(\Omega)}\right)^{1/q}\left\|\max\left\{\tilde y, \varphi^\delta\right\} - y^\delta\right\|_{L^p(\Omega)}
+ c\left\|y^\delta - \max\left\{\tilde y, \varphi^\delta\right\}\right\|_{L^p(\Omega)}. \tag{19}
$$
From the convergence results given above on $y^\delta$, and from the fact that $\max\left\{\tilde y, \varphi^\delta\right\}$ converges strongly to $\tilde y$ in $W_0^{m,p}(\Omega)$, we deduce that $\sum_{|\alpha|\le m}\int_\Omega \left|D^\alpha \tilde y - D^\alpha y^\delta\right|^p dx$ goes to $0$ as $\delta$ goes to $0$. Thus $y^\delta$ converges strongly to $\tilde y$ in $W_0^{m,p}(\Omega)$ and $D^\alpha y^\delta$ goes to $D^\alpha \tilde y$ almost everywhere in $\Omega$; by assumption $A_1$, we deduce that $A_\alpha\left(x,\eta\left(y^\delta\right),D^m y^\delta\right)$ goes to $A_\alpha\left(x,\eta(\tilde y),D^m \tilde y\right)$ almost everywhere in $\Omega$. To prove
$$
\left\|A_\alpha\left(x,\eta\left(y^\delta\right),D^m y^\delta\right)\right\|_{L^q(\Omega)} \le C, \tag{20}
$$
we use the same method as in the proof of Theorem 4.2. Then we deduce that $A_\alpha\left(x,\eta\left(y^\delta\right),D^m y^\delta\right)$ converges weakly to $A_\alpha\left(x,\eta(\tilde y),D^m \tilde y\right)$ in $L^q(\Omega)$ as $\delta$ goes to $0$.

Now, we prove that $\tilde y = T(\tilde\varphi)$. From (5), we see that $\tilde y$ belongs to $K(\tilde\varphi)$. For any $v$ in $K(\tilde\varphi)$, $\max\left\{v, \varphi^\delta\right\}$ converges strongly to $v$ in $W_0^{m,p}(\Omega)$, and
$$
0 \le -\int_\Omega \beta_\delta\left(y^\delta - \varphi^\delta\right)\left(\max\left\{v, \varphi^\delta\right\} - y^\delta\right) dx
= \sum_{|\alpha|\le m}\int_\Omega A_\alpha\left(x,\eta\left(y^\delta\right),D^m y^\delta\right) D^\alpha\left(\max\left\{v, \varphi^\delta\right\} - y^\delta\right) dx
+ \int_\Omega g\left(y^\delta\right)\left(\max\left\{v, \varphi^\delta\right\} - y^\delta\right) dx
- \int_\Omega f(x)\left(\max\left\{v, \varphi^\delta\right\} - y^\delta\right) dx.
$$
Then, as $\delta$ goes to $0$, the right-hand side of the previous inequality goes to
$$
\sum_{|\alpha|\le m}\int_\Omega A_\alpha\left(x,\eta(\tilde y),D^m \tilde y\right) D^\alpha(v - \tilde y)\, dx
+ \int_\Omega g(\tilde y)(v - \tilde y)\, dx - \int_\Omega f(x)(v - \tilde y)\, dx.
$$
Thus $\tilde y = T(\tilde\varphi)$; that is, $\tilde y$ is the solution of (6) corresponding to $\tilde\varphi$.

To complete the proof, we show that $\tilde\varphi = \varphi$ and $\tilde y = y$. From the convergence results given on $\varphi^\delta$ and $y^\delta$, and by the weak lower semicontinuity of the cost functional, we have
$$
J(\tilde\varphi) = \int_\Omega F\left(x, T(\tilde\varphi)\right) dx + \frac{\nu}{2p}\int_\Omega |L\tilde\varphi|^p dx
\le \liminf_{\delta\to 0}\left(\int_\Omega F\left(x, T^\delta\left(\varphi^\delta\right)\right) dx + \frac{\nu}{2p}\int_\Omega \left|L\varphi^\delta\right|^p dx\right)
= \liminf_{\delta\to 0} J^\delta\left(\varphi^\delta\right)
\le \limsup_{\delta\to 0} J^\delta\left(\varphi^\delta\right)
\le \limsup_{\delta\to 0} J^\delta(\varphi)
\le \int_\Omega F(x, T(\varphi))\, dx + \frac{\nu}{2p}\int_\Omega |L\varphi|^p dx = J(\varphi).
$$
Since $\varphi$ is optimal for problem $(P)$, we also have $J(\varphi) \le J(\tilde\varphi)$. Thus $\tilde\varphi = \varphi$ almost everywhere in $\Omega$, and by the uniqueness of the solution of (6), we get $\tilde y = y$.
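The convergence behaviour established in Theorem 5.2 can be illustrated numerically in the simplest setting $m = 1$, $A = -d^2/dx^2$ on $(0,1)$. The sketch below is our own toy model, not the paper's construction: it uses the standard piecewise-linear penalty $b_\delta(r) = \min(r,0)/\delta$ in place of $\beta_\delta$, takes $f = 0$ and the obstacle $\varphi(x) = x(1-x)$, and solves $-y'' + b_\delta(y - \varphi) = 0$, $y(0) = y(1) = 0$, by a primal-dual active-set (semismooth Newton) iteration. The constraint violation $\max(\varphi - y^\delta)$ is observed to decay like $O(\delta)$:

```python
# 1-D penalized obstacle problem (illustrative only): m = 1, A = -d^2/dx^2.

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system (Thomas algorithm), O(n)."""
    n = len(diag)
    c = [0.0] * n
    d = [0.0] * n
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def solve_penalized(delta, n=80):
    """Semismooth Newton for -y'' + min(y - phi, 0)/delta = 0, y(0) = y(1) = 0,
    with the concave obstacle phi(x) = x(1 - x)."""
    h = 1.0 / (n + 1)
    phi = [(i + 1) * h * (1.0 - (i + 1) * h) for i in range(n)]
    y = [0.0] * n
    for _ in range(60):
        res = []
        for i in range(n):
            yl = y[i - 1] if i > 0 else 0.0
            yr = y[i + 1] if i < n - 1 else 0.0
            res.append((-yl + 2.0 * y[i] - yr) / h ** 2
                       + min(y[i] - phi[i], 0.0) / delta)
        if max(abs(r) for r in res) < 1e-9:
            break
        # generalized derivative: 1/delta on the active set {y < phi}, else 0
        diag = [2.0 / h ** 2 + (1.0 / delta if y[i] < phi[i] else 0.0)
                for i in range(n)]
        off = [-1.0 / h ** 2] * n
        dy = thomas(off, diag, off, [-r for r in res])
        y = [y[i] + dy[i] for i in range(n)]
    viol = max(phi[i] - y[i] for i in range(n))
    return y, phi, viol

_, _, v_coarse = solve_penalized(1e-2)
_, _, v_fine = solve_penalized(1e-3)
# the violation of the constraint y >= phi shrinks roughly linearly in delta
assert 0.0 < v_fine < v_coarse
```

Here the obstacle is concave, so the limit solution of the obstacle problem coincides with $\varphi$ and the penalized state sits roughly $2\delta$ below it; the assertion only checks the qualitative decay.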
6 Characterization of the Optimal Control

Lemma 6.1 The map $y^\delta = T^\delta(\varphi)$ is differentiable in the following sense: for given $\varphi$ and $l$ in $U_{ad}$ there exists $\xi^\delta$ in $H_0^m(\Omega)$ such that
$$
\frac{y^\delta(\varphi + \varepsilon l) - y^\delta(\varphi)}{\varepsilon} \longrightarrow \xi^\delta \quad \text{weakly in } H_0^m(\Omega)
$$
as $\varepsilon$ goes to $0$. Furthermore, $\xi^\delta$ is the solution of the following problem:
$$
\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta(\varphi)\right),D^m y^\delta(\varphi)\right) D^\alpha\xi^\delta\right)
+ g'\left(y^\delta(\varphi)\right)\xi^\delta + \beta'_\delta\left(y^\delta - \varphi\right)\left(\xi^\delta - l\right) = 0 \quad \text{in } \Omega, \tag{21}
$$
with
$$
\frac{\partial^i \xi^\delta}{\partial\eta^i} = 0, \quad i = 0, \dots, m-1, \ \text{on } \partial\Omega.
$$
Proof From the state equation given by (16), we can get
$$
\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha A_\alpha\left(x,\eta\left(y^\delta(\varphi+\varepsilon l)\right),D^m y^\delta(\varphi+\varepsilon l)\right)
- \sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha A_\alpha\left(x,\eta\left(y^\delta(\varphi)\right),D^m y^\delta(\varphi)\right)
+ g\left(y^\delta(\varphi+\varepsilon l)\right) - g\left(y^\delta(\varphi)\right)
= -\left(\beta_\delta\left(y^\delta(\varphi+\varepsilon l) - \varphi - \varepsilon l\right) - \beta_\delta\left(y^\delta(\varphi) - \varphi\right)\right) \quad \text{in } \Omega, \tag{22}
$$
with
$$
\frac{\partial^i}{\partial\eta^i}\left(y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right) = 0, \quad i = 0, \dots, m-1, \ \text{on the boundary } \partial\Omega.
$$
Multiplying (22) by the test function $\psi = y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)$ and integrating both sides $m$ times by parts over $\Omega$, we obtain
$$
\int_\Omega \left(\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha A_\alpha\left(x,\eta\left(y^\delta(\varphi+\varepsilon l)\right),D^m y^\delta(\varphi+\varepsilon l)\right)
- \sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha A_\alpha\left(x,\eta\left(y^\delta(\varphi)\right),D^m y^\delta(\varphi)\right)\right)\left(y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right) dx
+ \int_\Omega \left(g\left(y^\delta(\varphi+\varepsilon l)\right) - g\left(y^\delta(\varphi)\right)\right)\left(y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right) dx
= -\int_\Omega \left(\beta_\delta\left(y^\delta(\varphi+\varepsilon l) - \varphi - \varepsilon l\right) - \beta_\delta\left(y^\delta(\varphi) - \varphi\right)\right)\left(y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right) dx. \tag{23}
$$
From assumption $A_6$ with $p = 2$, we get
$$
C\left\|y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right\|^2_{H^m(\Omega)}
\le \sum_{|\alpha|\le m}\int_\Omega \left(A_\alpha\left(x,\eta\left(y^\delta(\varphi+\varepsilon l)\right),D^m y^\delta(\varphi+\varepsilon l)\right)
- A_\alpha\left(x,\eta\left(y^\delta(\varphi)\right),D^m y^\delta(\varphi)\right)\right) D^\alpha\left(y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right) dx,
$$
and
$$
\beta_\delta\left(y^\delta(\varphi+\varepsilon l) - \varphi - \varepsilon l\right) - \beta_\delta\left(y^\delta(\varphi) - \varphi\right)
= \int_0^1 \beta'_\delta\left(\theta\left(y^\delta(\varphi+\varepsilon l) - \varphi - \varepsilon l\right) + (1-\theta)\left(y^\delta(\varphi) - \varphi\right)\right) d\theta
\times \left(y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi) - \varepsilon l\right); \tag{24}
$$
then from (23), we obtain
$$
C\left\|y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right\|^2_{H^m(\Omega)}
+ \int_\Omega \left(\int_0^1 \beta'_\delta(\cdot)\, d\theta\right)\left(y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right)^2 dx
\le \varepsilon \int_\Omega l \left(\int_0^1 \beta'_\delta(\cdot)\, d\theta\right)\left(y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right) dx.
$$
Therefore, by the properties of $\beta(\cdot)$, we have
$$
C\left\|y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right\|^2_{H^m(\Omega)}
\le \varepsilon\int_\Omega l\left(\int_0^1 \beta'_\delta(\cdot)\, d\theta\right)\left(y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right) dx
\le \frac{\varepsilon}{\delta}\left(\int_\Omega |l|^2 dx\right)^{1/2}\left(\int_\Omega \left|y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right|^2 dx\right)^{1/2}
= \frac{\varepsilon}{\delta}\, \|l\|_{L^2(\Omega)}\left\|y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)\right\|_{L^2(\Omega)};
$$
then we deduce that
$$
\left\|\frac{y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)}{\varepsilon}\right\|_{H^m(\Omega)} \le \frac{c}{\delta}\, \|l\|_{L^2(\Omega)}.
$$
Therefore, $\frac{y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)}{\varepsilon}$ is bounded in $H_0^m(\Omega)$, so there exists $\xi^\delta$ in $H_0^m(\Omega)$ such that $\frac{y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi)}{\varepsilon}$ converges weakly to $\xi^\delta$ in $H_0^m(\Omega)$ as $\varepsilon$ goes to $0$. To show that $\xi^\delta$ satisfies (21), we multiply Eq. (22) by an arbitrary $v$ in $H_0^m(\Omega)$, integrate both sides over $\Omega$, use Eq. (24), and let $\varepsilon$ go to $0$; we obtain
$$
\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta(\varphi)\right),D^m y^\delta(\varphi)\right) D^\alpha\xi^\delta\right)
+ g'\left(y^\delta(\varphi)\right)\xi^\delta + \beta'_\delta\left(y^\delta - \varphi\right)\left(\xi^\delta - l\right) = 0 \quad \text{in } \Omega,
$$
with
$$
\frac{\partial^i \xi^\delta}{\partial\eta^i} = 0, \quad i = 0, \dots, m-1, \ \text{on } \partial\Omega.
$$
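The difference quotient of Lemma 6.1 can also be observed numerically. Staying in a 1-D toy setting ($m = 1$; all choices below are ours, for illustration), we replace $\beta_\delta$ by the smooth penalty $b_\delta(r) = -\min(r,0)^2/\delta$, so that $b'_\delta$ exists everywhere, solve the state equation for $\varphi \pm \varepsilon l$, and compare $\left(y^\delta(\varphi+\varepsilon l) - y^\delta(\varphi-\varepsilon l)\right)/2\varepsilon$ with the solution $\xi^\delta$ of the linearized equation $-\xi'' + b'_\delta(y-\varphi)(\xi - l) = 0$:

```python
import math

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system (Thomas algorithm)."""
    n = len(diag)
    c = [0.0] * n
    d = [0.0] * n
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

N = 60
H = 1.0 / (N + 1)
DELTA = 0.05
XS = [(i + 1) * H for i in range(N)]
PHI = [0.3 * math.sin(math.pi * x) for x in XS]  # obstacle / control
L = [math.sin(math.pi * x) for x in XS]          # perturbation direction

def b(r):   # smooth stand-in penalty, C^1
    return -min(r, 0.0) ** 2 / DELTA

def bp(r):  # its derivative
    return -2.0 * min(r, 0.0) / DELTA

def solve_state(phi):
    """Newton iteration for -y'' + b(y - phi) = 0, y(0) = y(1) = 0."""
    y = [0.0] * N
    for _ in range(100):
        res = []
        for i in range(N):
            yl = y[i - 1] if i > 0 else 0.0
            yr = y[i + 1] if i < N - 1 else 0.0
            res.append((-yl + 2.0 * y[i] - yr) / H ** 2 + b(y[i] - phi[i]))
        if max(abs(r) for r in res) < 1e-11:
            break
        diag = [2.0 / H ** 2 + bp(y[i] - phi[i]) for i in range(N)]
        off = [-1.0 / H ** 2] * N
        dy = thomas(off, diag, off, [-r for r in res])
        y = [y[i] + dy[i] for i in range(N)]
    return y

y0 = solve_state(PHI)
# linearized (sensitivity) equation: (-d^2/dx^2 + b'(y0 - phi)) xi = b'(y0 - phi) l
diag = [2.0 / H ** 2 + bp(y0[i] - PHI[i]) for i in range(N)]
off = [-1.0 / H ** 2] * N
xi = thomas(off, diag, off, [bp(y0[i] - PHI[i]) * L[i] for i in range(N)])

# central difference quotient of the control-to-state map
eps = 1e-5
yp = solve_state([PHI[i] + eps * L[i] for i in range(N)])
ym = solve_state([PHI[i] - eps * L[i] for i in range(N)])
dq = [(yp[i] - ym[i]) / (2.0 * eps) for i in range(N)]

num = math.sqrt(sum((dq[i] - xi[i]) ** 2 for i in range(N)))
den = math.sqrt(sum(v * v for v in xi))
assert den > 0.0 and num / den < 1e-2  # quotient matches xi to about 1%
```

The agreement degrades as $\delta \to 0$ (the bound in the proof scales like $c/\delta$), which is consistent with the lemma holding for each fixed $\delta$.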
We now prove the following theorem, which gives necessary conditions for optimal pairs of problem $(P^\delta)$.

Theorem 6.1 Let $(y^\delta, \varphi^\delta)$ be an optimal solution of problem $(P^\delta)$. Then there exists an adjoint function $p^\delta$ in $H_0^m(\Omega)$ such that, for every $l$ in $U_{ad}$, the triplet $(y^\delta, p^\delta, \varphi^\delta)$ satisfies the following system:
$$
\begin{cases}
\displaystyle\sum_{|\alpha|\le m}(-1)^{|\alpha|}\int_\Omega D^\alpha A_\alpha\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) v\, dx
+ \int_\Omega \beta_\delta\left(y^\delta - \varphi^\delta\right) v\, dx + \int_\Omega g\left(y^\delta\right) v\, dx = \int_\Omega f(x)\, v\, dx, & \text{for all } v \text{ in } W_0^{m,p}(\Omega),\\[6pt]
\displaystyle\sum_{|\alpha|\le m}(-1)^{|\alpha|}\int_\Omega D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha p^\delta\right) w\, dx
+ \int_\Omega \beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta\, w\, dx + \int_\Omega g'\left(y^\delta\right) p^\delta\, w\, dx = \int_\Omega \nabla F_y\left(x, y^\delta\left(\varphi^\delta\right)\right) w\, dx, & \text{for all } w \text{ in } H_0^m(\Omega),\\[6pt]
\displaystyle\int_\Omega \left(\beta'_\delta\left(y^\delta - \varphi^\delta\right) l\, p^\delta + \nu\left|L\varphi^\delta\right|^{p-2}\left\langle L\varphi^\delta, Ll\right\rangle\right) dx \ge 0, & \text{for all } l \text{ in } U_{ad},\\[6pt]
\dfrac{\partial^i y^\delta}{\partial\eta^i} = \dfrac{\partial^i \varphi^\delta}{\partial\eta^i} = \dfrac{\partial^i p^\delta}{\partial\eta^i} = 0, \quad i = 0, \dots, m-1, & \text{on } \partial\Omega.
\end{cases}\tag{25}
$$
Proof Let $(y^\delta, \varphi^\delta)$ be an optimal pair for problem $(P^\delta)$. Taking an arbitrary $l$ in $U_{ad}$ and replacing $\varphi$ by $\varphi^\delta$ in Eq. (21), we get
$$
\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha\xi^\delta\right)
+ \beta'_\delta\left(y^\delta - \varphi^\delta\right)\xi^\delta + g'\left(y^\delta\right)\xi^\delta = \beta'_\delta\left(y^\delta - \varphi^\delta\right) l \quad \text{in } \Omega,
$$
with
$$
\frac{\partial^i \xi^\delta}{\partial\eta^i} = 0, \quad i = 0, \dots, m-1, \ \text{on } \partial\Omega.
$$
To simplify the writing, we set
$$
\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha\xi^\delta\right)
+ \beta'_\delta\left(y^\delta - \varphi^\delta\right)\xi^\delta + g'\left(y^\delta\right)\xi^\delta = \widetilde A\,\xi^\delta, \tag{26}
$$
where, for all $l$ in $U_{ad}$, we can write
$$
\begin{cases}
\widetilde A\,\xi^\delta = \beta'_\delta\left(y^\delta - \varphi^\delta\right) l & \text{in } \Omega,\\[4pt]
\dfrac{\partial^i \xi^\delta}{\partial\eta^i} = 0, \quad i = 0, \dots, m-1, & \text{on } \partial\Omega.
\end{cases}
$$
To determine an adjoint operator of $\widetilde A$, we multiply the first part of the left-hand side of Eq. (26) by $p^\delta$ in $H^m(\Omega)$ and integrate by parts $m$ times over $\Omega$:
$$
\sum_{|\alpha|\le m}\int_\Omega (-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha\xi^\delta\right) p^\delta\, dx
= \sum_{|\alpha|\le m}\int_\Omega \frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha\xi^\delta\, D^\alpha p^\delta\, dx.
$$
Similarly, if we set $\dfrac{\partial^i \xi^\delta}{\partial\eta^i} = 0$ for $i = 0, \dots, m-1$ on $\partial\Omega$, we will have
$$
\sum_{|\alpha|\le m}\int_\Omega (-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha p^\delta\right)\xi^\delta\, dx
= \sum_{|\alpha|\le m}\int_\Omega \frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha p^\delta\, D^\alpha\xi^\delta\, dx.
$$
Consequently, the adjoint function $p^\delta$ will satisfy
$$
\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha p^\delta\right)
+ \beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta + g'\left(y^\delta\right) p^\delta = \nabla F_y\left(x, y^\delta\left(\varphi^\delta\right)\right) \quad \text{in } \Omega, \tag{27}
$$
with
$$
\frac{\partial^i p^\delta}{\partial\eta^i} = 0, \quad i = 0, \dots, m-1, \ \text{on } \partial\Omega.
$$
Since $\varphi^\delta$ is an optimal control for problem $(P^\delta)$, from the cost functional $J^\delta$ we get
$$
\liminf_{\varepsilon\to 0^+}\frac{J^\delta\left(\varphi^\delta + \varepsilon l\right) - J^\delta\left(\varphi^\delta\right)}{\varepsilon}
= \int_\Omega \left(\nabla F_y\left(x, y^\delta\left(\varphi^\delta\right)\right)\xi^\delta + \nu\left|L\varphi^\delta\right|^{p-2}\left\langle L\varphi^\delta, Ll\right\rangle\right) dx \ge 0, \quad \text{for all } l \text{ in } U_{ad}. \tag{28}
$$
Multiplying the adjoint state Eq. (27) by $\xi^\delta$ gives
$$
\int_\Omega \nabla F_y\left(x, y^\delta\left(\varphi^\delta\right)\right)\xi^\delta\, dx
= \int_\Omega \left(\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha p^\delta\right)
+ g'\left(y^\delta\right) p^\delta + \beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta\right)\xi^\delta\, dx; \tag{29}
$$
therefore, by using (28) and (29), we get
$$
\liminf_{\varepsilon\to 0^+}\frac{J^\delta\left(\varphi^\delta + \varepsilon l\right) - J^\delta\left(\varphi^\delta\right)}{\varepsilon}
= \int_\Omega \left(\left(\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha p^\delta\right)
+ g'\left(y^\delta\right) p^\delta + \beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta\right)\xi^\delta
+ \nu\left|L\varphi^\delta\right|^{p-2}\left\langle L\varphi^\delta, Ll\right\rangle\right) dx \ge 0. \tag{30}
$$
Multiplying Eq. (21) by $p^\delta$ yields
$$
\sum_{|\alpha|\le m}\int_\Omega (-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha p^\delta\right)\xi^\delta\, dx
+ \int_\Omega g'\left(y^\delta\right) p^\delta\, \xi^\delta\, dx + \int_\Omega \beta'_\delta\left(y^\delta - \varphi^\delta\right)\xi^\delta\, p^\delta\, dx
= \int_\Omega \beta'_\delta\left(y^\delta - \varphi^\delta\right) l\, p^\delta\, dx.
$$
For all $l$ in $U_{ad}$, with the help of Eqs. (29) and (30), we obtain
$$
\int_\Omega \left(\beta'_\delta\left(y^\delta - \varphi^\delta\right) l\, p^\delta + \nu\left|L\varphi^\delta\right|^{p-2}\left\langle L\varphi^\delta, Ll\right\rangle\right) dx \ge 0, \quad \text{for all } l \text{ in } U_{ad}.
$$
Combining the state equation (16), the adjoint equation (27) and the above inequality, we finally obtain the optimality system (25).

Lemma 6.2 Under the hypotheses of Theorem 6.1, $p^\delta$ converges weakly to $p$ in $H^m(\Omega)$ as $\delta$ goes to $0$, and $-\beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta$ converges weakly-star to $\mu$ in $H^{-m}(\Omega)\cap M(\Omega)$, where $p^\delta$ solves the adjoint equation given in (25) and $M(\Omega)$ is the set of all regular signed measures on $\Omega$.
Proof Taking $w = p^\delta$ in the adjoint equation of the optimality system (25), using assumption $A_7$ and (18), we obtain
$$
C\sum_{|\alpha|\le m}\int_\Omega \left(k + \eta_\alpha\left(y^\delta\right)\right)^{p-2}\left|D^\alpha p^\delta\right|^2 dx
+ \int_\Omega \beta'_\delta\left(y^\delta - \varphi^\delta\right)\left|p^\delta\right|^2 dx
+ \int_\Omega g'\left(y^\delta\right)\left|p^\delta\right|^2 dx \le C\left\|p^\delta\right\|_{L^p(\Omega)},
$$
where $p < 2$ is the dual exponent of $q$. Then, by the properties of $\beta(\cdot)$ and $g(\cdot)$, we get
$$
\left\|p^\delta\right\|_{H^m(\Omega)} \le C. \tag{31}
$$
Thus, we deduce that $p^\delta$ converges weakly to $p$ in $H^m(\Omega)$. Again, from the adjoint equation of the optimality system (25), we get
$$
C \sum_{|\alpha|\le m}\int_\Omega D^\alpha w\, \left(\eta_\alpha\left(y^\delta\right)\right)^{p-2} D^\alpha p^\delta\, dx
\le \sum_{|\alpha|\le m}\left(\int_\Omega \left(\eta_\alpha\left(y^\delta\right)\right)^{(p-2)p_1} dx\right)^{1/p_1}
\left(\sum_{|\alpha|\le m}\int_\Omega \left|D^\alpha w\, D^\alpha p^\delta\right|^{p_2} dx\right)^{1/p_2},
$$
where we can choose $p_1 = \frac{p}{p-2}$ and $p_2 = \frac{p}{2}$; thus we obtain
$$
C \sum_{|\alpha|\le m}\int_\Omega D^\alpha w\, \left(\eta_\alpha\left(y^\delta\right)\right)^{p-2} D^\alpha p^\delta\, dx
\le \sum_{|\alpha|\le m}\left(\int_\Omega \left(\eta_\alpha\left(y^\delta\right)\right)^{p} dx\right)^{\frac{p-2}{p}}
\left(\sum_{|\alpha|\le m}\int_\Omega \left|D^\alpha w\right|^{\frac p2}\left|D^\alpha p^\delta\right|^{\frac p2} dx\right)^{\frac 2p}.
$$
Then, again by using the generalized Hölder inequality, we get
$$
C \sum_{|\alpha|\le m}\int_\Omega D^\alpha w\, \left(\eta_\alpha\left(y^\delta\right)\right)^{p-2} D^\alpha p^\delta\, dx
\le \sum_{|\alpha|\le m}\left(\int_\Omega \left(\eta_\alpha\left(y^\delta\right)\right)^{p} dx\right)^{\frac{p-2}{p}}
\left(\sum_{|\alpha|\le m}\int_\Omega \left|D^\alpha w\right|^{\alpha_1} dx\right)^{\frac{1}{\alpha_1}}
\left(\sum_{|\alpha|\le m}\int_\Omega \left|D^\alpha p^\delta\right|^{\alpha_2} dx\right)^{\frac{1}{\alpha_2}},
$$
such that $\frac{1}{p_2} = \frac{1}{\alpha_1} + \frac{1}{\alpha_2}$; we can then choose $\alpha_1 = \alpha_2 = p$, and thus
$$
C \sum_{|\alpha|\le m}\int_\Omega D^\alpha w\, \left(\eta_\alpha\left(y^\delta\right)\right)^{p-2} D^\alpha p^\delta\, dx
\le \sum_{|\alpha|\le m}\left(\int_\Omega \left(\eta_\alpha\left(y^\delta\right)\right)^{p} dx\right)^{\frac{p-2}{p}}
\left(\sum_{|\alpha|\le m}\int_\Omega \left|D^\alpha w\right|^{p} dx\right)^{\frac 1p}
\left(\sum_{|\alpha|\le m}\int_\Omega \left|D^\alpha p^\delta\right|^{p} dx\right)^{\frac 1p}.
$$
Then, choosing $p = 2$, we deduce
$$
\left|\int_\Omega \beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta\, w\, dx\right|
\le C\left(\|w\|_{L^2(\Omega)} + \left\|p^\delta\right\|_{H^m(\Omega)}\sum_{|\alpha|\le m}\left\|D^\alpha w\right\|_{L^2(\Omega)}\right).
$$
Then, from (31), we deduce that
$$
\left\|\beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta\right\|_{H^{-m}(\Omega)} \le C.
$$
Let $S_\lambda(r) \in C^1(\mathbb{R})$ be a family of smooth approximations of $\operatorname{sign}(r)$ such that $S'_\lambda(r) \ge 0$ for all $r$ in $\mathbb{R}$ and
$$
S_\lambda(r) = \begin{cases} 1 & \text{if } r > \lambda,\\ 0 & \text{if } r = 0,\\ -1 & \text{if } r < -\lambda.\end{cases}
$$
It is easy to see that the adjoint equation, given in the optimality system (25), can be written in the strong formulation as
$$
\begin{cases}
\displaystyle\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta\left(\varphi^\delta\right)\right),D^m y^\delta\left(\varphi^\delta\right)\right) D^\alpha p^\delta\right)
+ \beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta + g'\left(y^\delta\right) p^\delta = \nabla F_y\left(x, y^\delta\left(\varphi^\delta\right)\right) & \text{in } \Omega,\\[4pt]
\dfrac{\partial^i p^\delta}{\partial\eta^i} = 0, \quad i = 0, \dots, m-1, & \text{on } \partial\Omega.
\end{cases}\tag{32}
$$
Multiplying the equation given in (32) by $S_\lambda\left(p^\delta\right)$ and integrating over $\Omega$, thanks to assumption $A_7$ and to (18), we can get
$$
\int_\Omega \beta'\left(y^\delta - \varphi^\delta\right) p^\delta\, S_\lambda\left(p^\delta\right) dx \le C\delta.
$$
Passing to the limit as $\lambda$ goes to $0$, we have
$$
\left\|\beta'\left(y^\delta - \varphi^\delta\right) p^\delta\right\|_{L^1(\Omega)} \le C\delta,
$$
and hence, since $\beta'_\delta = \frac{1}{\delta}\beta'$,
$$
\left\|\beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta\right\|_{L^1(\Omega)} \le C.
$$
Then, we deduce that $\beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta$ converges weakly-star to $\mu$ in $M(\Omega)$. From Eq. (32), together with the properties of $\beta(\cdot)$ and $g(\cdot)$, we deduce that
$$
\sum_{|\alpha|\le m}\int_\Omega D^\alpha p^\delta\, \frac{\partial A_\alpha}{\partial\eta}\left(x,\eta\left(y^\delta(\varphi)\right),D^m y^\delta(\varphi)\right) D^\alpha p^\delta\, dx
\le \int_\Omega \nabla F_y\left(x, y^\delta\left(\varphi^\delta\right)\right) p^\delta\, dx.
$$
Passing to the limit as $\delta$ goes to $0$, we obtain
$$
\sum_{|\alpha|\le m}\int_\Omega D^\alpha p\, \frac{\partial A_\alpha}{\partial\eta}\left(x,\eta(y(\varphi)),D^m y(\varphi)\right) D^\alpha p\, dx
\le \int_\Omega \nabla F_y(x, y(\varphi))\, p\, dx,
$$
and from the inequality above, it is easy to deduce that $\langle\mu, p\rangle \ge 0$.
Lemma 6.3 When $\delta$ goes to $0$, $\left\langle\mu^\delta, \left(y^\delta - \varphi^\delta\right)^+\right\rangle$ converges to $\langle\mu, y - \varphi\rangle_{M,C}$ and we get $\langle\mu, y - \varphi\rangle_{M,C} = 0$.

Proof By the definition of $\beta(\cdot)$, we deduce that $\left\langle\mu^\delta, \left(y^\delta - \varphi^\delta\right)^+\right\rangle = 0$, where $v^+ = \max\{0, v\}$. From the convergence results given on $y^\delta$ and $\varphi^\delta$, we deduce that $y^\delta - \varphi^\delta$ converges weakly to $y - \varphi$ in $W^{m,p}(\Omega)$; then, by the compact injection of $W^{m,p}(\Omega)$ in $C(\overline\Omega)$, we get the convergence of $\left\langle\mu^\delta, \left(y^\delta - \varphi^\delta\right)^+\right\rangle$ to $\langle\mu, y - \varphi\rangle_{M,C}$, and we conclude that $\langle\mu, y - \varphi\rangle_{M,C} = 0$.

Lemma 6.4 When $\delta$ goes to $0$, $\left\langle\xi^\delta, p^\delta\right\rangle$ converges to $\langle\xi, p\rangle_{H^{-m}(\Omega), H^m(\Omega)}$ and we get
$$
\langle\mu, y - \varphi\rangle_{M,C} = \langle\xi, p\rangle_{H^{-m}(\Omega), H^m(\Omega)} = 0.
$$

Proof In the sequel, we set $\xi^\delta = \beta_\delta\left(y^\delta - \varphi^\delta\right)$ and $\mu^\delta = \beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta$. From the definition of $\beta_\delta(\cdot)$, we get
$$
\left\langle\xi^\delta, p^\delta\right\rangle = \int_\Omega \beta_\delta\left(y^\delta - \varphi^\delta\right) p^\delta\, dx
= \frac{1}{\delta}\int_{\left\{y^\delta - \varphi^\delta \le -\frac12\right\}} -\left(y^\delta - \varphi^\delta + \frac14\right) p^\delta\, dx
+ \frac{1}{\delta}\int_{\left\{-\frac12 \le y^\delta - \varphi^\delta \le 0\right\}} \left(y^\delta - \varphi^\delta\right)^2 p^\delta\, dx,
$$
and by the definition of $\beta'_\delta(\cdot)$, we obtain
$$
\left\langle\mu^\delta, y^\delta - \varphi^\delta\right\rangle = \int_\Omega \beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta\left(y^\delta - \varphi^\delta\right) dx
= \frac{1}{\delta}\int_{\left\{y^\delta - \varphi^\delta \le -\frac12\right\}} -\left(y^\delta - \varphi^\delta\right) p^\delta\, dx
+ \frac{2}{\delta}\int_{\left\{-\frac12 \le y^\delta - \varphi^\delta \le 0\right\}} \left(y^\delta - \varphi^\delta\right)^2 p^\delta\, dx.
$$
Then, we get
$$
\left|2\left\langle\xi^\delta, p^\delta\right\rangle - \left\langle\mu^\delta, y^\delta - \varphi^\delta\right\rangle\right|
\le \frac{c}{\delta}\left\|\left(y^\delta - \varphi^\delta + \frac14\right)\chi_{\left\{y^\delta - \varphi^\delta \le -\frac12\right\}}\right\|_{L^q(\Omega)}
\left\|p^\delta\right\|_{H^m(\Omega)}\left(\operatorname{meas}\left\{y^\delta - \varphi^\delta \le -\frac12\right\}\right)^{\frac{2-p}{2p}}.
$$
By the definition of $\beta(\cdot)$, and since $\beta(\cdot)$ is bounded in $L^q(\Omega)$, we have
$$
\int_{\left\{y^\delta - \varphi^\delta \le -\frac12\right\}}\left|y^\delta - \varphi^\delta + \frac14\right|^q dx
\le \int_{\left\{y^\delta - \varphi^\delta \le -\frac12\right\}}\left|y^\delta - \varphi^\delta + \frac14\right|^q dx
+ \int_{\left\{-\frac12 \le y^\delta - \varphi^\delta \le 0\right\}}\left|y^\delta - \varphi^\delta\right|^{2q} dx,
$$
and the right-hand side of the above inequality is equal to $\int_\Omega \left|\beta\left(y^\delta - \varphi^\delta\right)\right|^q dx$, which yields
$$
\left\|\left(y^\delta - \varphi^\delta + \frac14\right)\chi_{\left\{y^\delta - \varphi^\delta \le -\frac12\right\}}\right\|_{L^q(\Omega)}
\le \left\|\beta\left(y^\delta - \varphi^\delta\right)\right\|_{L^q(\Omega)}.
$$
From the state equation (16), it is clear that $\beta_\delta\left(y^\delta - \varphi^\delta\right)$ belongs to $L^q(\Omega)$; thus $\left\|\beta\left(y^\delta - \varphi^\delta\right)\right\|_{L^q(\Omega)} \le \delta C$, and then we get
$$
\left\|\left(y^\delta - \varphi^\delta + \frac14\right)\chi_{\left\{y^\delta - \varphi^\delta \le -\frac12\right\}}\right\|_{L^q(\Omega)}
\le \left\|\beta\left(y^\delta - \varphi^\delta\right)\right\|_{L^q(\Omega)} \le \delta C.
$$
Therefore we deduce that $\operatorname{meas}\left\{y^\delta - \varphi^\delta \le -\frac12\right\} \le \delta C$, so that $\left|2\left\langle\xi^\delta, p^\delta\right\rangle - \left\langle\mu^\delta, y^\delta - \varphi^\delta\right\rangle\right|$ goes to $0$ as $\delta$ goes to $0$. From Lemma 6.3, it is then clear that $\lim_{\delta\to 0}\left\langle\xi^\delta, p^\delta\right\rangle = 0$.
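The objects of Lemmas 6.2–6.4 can be observed in a 1-D toy model (our own construction: $m = 1$, penalty $b_\delta(r) = \min(r,0)/\delta$ standing in for $\beta_\delta$, obstacle $\varphi(x) = x(1-x)$, $f = 0$). The discrete multiplier $\lambda^\delta = -b_\delta(y^\delta - \varphi)$ stays bounded as $\delta \to 0$ and approaches $-\varphi'' = 2$ on the contact set, while the complementarity $\langle\lambda^\delta, (y^\delta - \varphi)^+\rangle = 0$ holds exactly at every grid point:

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system (Thomas algorithm)."""
    n = len(diag)
    c = [0.0] * n
    d = [0.0] * n
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def solve_penalized(delta, n=200):
    """Semismooth Newton for -y'' + min(y - phi, 0)/delta = 0, y(0) = y(1) = 0,
    with obstacle phi(x) = x(1 - x) and f = 0."""
    h = 1.0 / (n + 1)
    phi = [(i + 1) * h * (1.0 - (i + 1) * h) for i in range(n)]
    y = [0.0] * n
    for _ in range(60):
        res = []
        for i in range(n):
            yl = y[i - 1] if i > 0 else 0.0
            yr = y[i + 1] if i < n - 1 else 0.0
            res.append((-yl + 2.0 * y[i] - yr) / h ** 2
                       + min(y[i] - phi[i], 0.0) / delta)
        if max(abs(r) for r in res) < 1e-9:
            break
        diag = [2.0 / h ** 2 + (1.0 / delta if y[i] < phi[i] else 0.0)
                for i in range(n)]
        off = [-1.0 / h ** 2] * n
        dy = thomas(off, diag, off, [-r for r in res])
        y = [y[i] + dy[i] for i in range(n)]
    return y, phi

delta = 1e-4
y, phi = solve_penalized(delta)
lam = [-min(y[i] - phi[i], 0.0) / delta for i in range(len(y))]
mid = len(lam) // 2
# on the contact set the multiplier approaches -phi'' = 2 (density of the limit measure)
assert abs(lam[mid] - 2.0) < 0.2
# exact discrete complementarity: lam vanishes wherever y > phi
assert all(lam[i] * max(y[i] - phi[i], 0.0) == 0.0 for i in range(len(lam)))
```

This is only a finite-dimensional analogue: the limit multiplier $\mu$ of Lemma 6.2 is in general a measure, which the grid values of $\lambda^\delta$ approximate in the weak-star sense.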
Lemma 6.5 When $\delta$ goes to $0$, $\int_\Omega \left(\beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta\, l + \nu\left|L\varphi^\delta\right|^{p-2}\left\langle L\varphi^\delta, Ll\right\rangle\right) dx$ converges to $\int_\Omega \left(\mu\, l + \nu|L\varphi|^{p-2}\langle L\varphi, Ll\rangle\right) dx$ and we get
$$
\int_\Omega \left(\mu\, l + \nu|L\varphi|^{p-2}\langle L\varphi, Ll\rangle\right) dx \ge 0.
$$

Proof From the convergence results given above on $\beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta$ and $\varphi^\delta$, we deduce that $\int_\Omega \left(\beta'_\delta\left(y^\delta - \varphi^\delta\right) p^\delta\, l + \nu\left|L\varphi^\delta\right|^{p-2}\left\langle L\varphi^\delta, Ll\right\rangle\right) dx$ goes to $\int_\Omega \left(\mu\, l + \nu|L\varphi|^{p-2}\langle L\varphi, Ll\rangle\right) dx$, and from the projection inequality in the optimality system (25), we get $\int_\Omega \left(\mu\, l + \nu|L\varphi|^{p-2}\langle L\varphi, Ll\rangle\right) dx \ge 0$.
From the previous lemmas, we obtain the following theorem.

Theorem 6.2 Let $\varphi$ be an optimal solution to the problem given by $(P)$. Then there exist $\mu$ in $H^{-m}(\Omega)$ and $p$ in $H_0^m(\Omega)$ such that the following optimality system is satisfied:
$$
\begin{cases}
\displaystyle\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha\left(A_\alpha\left(x,\eta(y(\varphi)),D^m y(\varphi)\right)\right) + \xi + g(y) = f(x) & \text{in } \Omega,\\[6pt]
\displaystyle\sum_{|\alpha|\le m}(-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta(y(\varphi)),D^m y(\varphi)\right) D^\alpha p\right) + \mu + g'(y)\, p = \nabla F_y(x, y(\varphi)) & \text{in } \Omega,\\[6pt]
\displaystyle\int_\Omega \left(\mu\, l + \nu|L\varphi|^{p-2}\langle L\varphi, Ll\rangle\right) dx \ge 0, & \text{for all } l \text{ in } U_{ad},\\[6pt]
\langle\mu, y - \varphi\rangle_{M,C} = 0, \qquad \langle\xi, p\rangle_{H^{-m}(\Omega), H^m(\Omega)} = 0,\\[6pt]
\displaystyle\sum_{|\alpha|\le m}\int_\Omega (-1)^{|\alpha|} D^\alpha\left(\frac{\partial A_\alpha}{\partial\eta}\left(x,\eta(y(\varphi)),D^m y(\varphi)\right) D^\alpha p\right) p\, dx - \int_\Omega \nabla F_y(x, y(\varphi))\, p\, dx \le 0,\\[6pt]
\langle\xi, \mu\rangle_{H^{-m}(\Omega), H^m(\Omega)} \le 0,\\[6pt]
\dfrac{\partial^i y}{\partial\eta^i} = \dfrac{\partial^i p}{\partial\eta^i} = \dfrac{\partial^i \varphi}{\partial\eta^i} = 0, \quad i = 0, \dots, m-1, & \text{on } \partial\Omega.
\end{cases}
$$
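The regularized optimality system (25), from which the limiting system is obtained, lends itself to a numerical consistency check. In a 1-D toy model (our illustrative choices throughout: $m = 1$, $F(x,y) = \tfrac12(y - y_d)^2$, $L = d/dx$, $p = 2$, smooth penalty $b_\delta(r) = -\min(r,0)^2/\delta$ standing in for $\beta_\delta$), the adjoint-based directional derivative $\int_\Omega \left(b'_\delta(y-\varphi)\, p\, l + \nu\, \varphi' l'\right) dx$ must agree with a finite-difference derivative of the reduced cost $J^\delta$:

```python
import math

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system (Thomas algorithm)."""
    n = len(diag)
    c = [0.0] * n
    d = [0.0] * n
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

N = 60
H = 1.0 / (N + 1)
DELTA = 0.05
NU = 1e-3
XS = [(i + 1) * H for i in range(N)]
PHI = [0.3 * math.sin(math.pi * x) for x in XS]       # control (obstacle)
L = [math.sin(math.pi * x) for x in XS]               # perturbation direction
YD = [0.1 * math.sin(2.0 * math.pi * x) for x in XS]  # target state

def b(r):   # smooth stand-in penalty, C^1
    return -min(r, 0.0) ** 2 / DELTA

def bp(r):  # its derivative
    return -2.0 * min(r, 0.0) / DELTA

def solve_state(phi):
    """Newton iteration for -y'' + b(y - phi) = 0, y(0) = y(1) = 0."""
    y = [0.0] * N
    for _ in range(100):
        res = []
        for i in range(N):
            yl = y[i - 1] if i > 0 else 0.0
            yr = y[i + 1] if i < N - 1 else 0.0
            res.append((-yl + 2.0 * y[i] - yr) / H ** 2 + b(y[i] - phi[i]))
        if max(abs(r) for r in res) < 1e-11:
            break
        diag = [2.0 / H ** 2 + bp(y[i] - phi[i]) for i in range(N)]
        off = [-1.0 / H ** 2] * N
        dy = thomas(off, diag, off, [-r for r in res])
        y = [y[i] + dy[i] for i in range(N)]
    return y

def cost(phi):
    """Discrete J^delta: tracking term + (nu/2)|phi'|^2 (phi = 0 on the boundary)."""
    y = solve_state(phi)
    track = 0.5 * H * sum((y[i] - YD[i]) ** 2 for i in range(N))
    ext = [0.0] + list(phi) + [0.0]
    reg = 0.5 * NU * sum((ext[i + 1] - ext[i]) ** 2 / H for i in range(N + 1))
    return y, track + reg

y0, j0 = cost(PHI)
# adjoint equation: (-d^2/dx^2 + b'(y0 - phi)) p = y0 - y_d (self-adjoint operator)
diag = [2.0 / H ** 2 + bp(y0[i] - PHI[i]) for i in range(N)]
off = [-1.0 / H ** 2] * N
p = thomas(off, diag, off, [y0[i] - YD[i] for i in range(N)])

# adjoint-based directional derivative of J^delta in direction L
extP = [0.0] + PHI + [0.0]
extL = [0.0] + L + [0.0]
dJ_adj = H * sum(bp(y0[i] - PHI[i]) * p[i] * L[i] for i in range(N)) + \
    NU * sum((extP[i + 1] - extP[i]) * (extL[i + 1] - extL[i]) / H
             for i in range(N + 1))

# central finite-difference derivative of the reduced cost
eps = 1e-5
_, jp = cost([PHI[i] + eps * L[i] for i in range(N)])
_, jm = cost([PHI[i] - eps * L[i] for i in range(N)])
dJ_fd = (jp - jm) / (2.0 * eps)
assert abs(dJ_adj - dJ_fd) < 1e-4 * max(1.0, abs(dJ_fd))
```

Because the adjoint is formed for the discretized state equation, the two derivatives agree up to finite-difference truncation and Newton tolerance; at an optimum the projected version of this quantity would be nonnegative for all admissible directions, as in (25).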
References

1. Adams, D.R., Hrynkiv, V., Lenhart, S.: Optimal control of a biharmonic obstacle problem. In: Around the Research of Vladimir Maz'ya III, Int. Math. Ser. (N.Y.), vol. 13, pp. 1–24. Springer, New York (2010)
2. Adams, D.R., Lenhart, S., Yong, J.: Optimal control of the obstacle for an elliptic variational inequality. Appl. Math. Optim. 38(2), 121–140 (1998)
3. Adams, D.R., Lenhart, S.: An obstacle control problem with a source term. Appl. Math. Optim. 47(1), 79–95 (2003)
4. Barbu, V.: Analysis and Control of Nonlinear Infinite-Dimensional Systems. Mathematics in Science and Engineering, vol. 190. Academic Press, Boston (1993)
5. Bergounioux, M.: Optimal control of an obstacle problem. Appl. Math. Optim. 36(2), 147–172 (1997)
6. Bergounioux, M., Lenhart, S.: Optimal control of bilateral obstacle problems. SIAM J. Control Optim. 43(1), 240–255 (2004)
7. Bergounioux, M., Lenhart, S.: Optimal control of the obstacle in semilinear variational inequalities. Positivity 8(3), 229–242 (2004)
8. Bidaut-Véron, M.F.: Variational inequalities of order 2m in unbounded domains. Nonlinear Anal. 6(3), 253–269 (1982)
9. Brezis, H., Stampacchia, G.: Sur la régularité de la solution d'inéquations elliptiques. Bull. Soc. Math. France 96, 153–180 (1968)
10. Browder, F.E.: Nonlinear elliptic boundary value problems and the generalized topological degree. Bull. Am. Math. Soc. 76, 999–1005 (1970)
11. Clarke, F.H.: Optimization and Nonsmooth Analysis. Classics in Applied Mathematics, vol. 5, 2nd edn. SIAM, Philadelphia (1990)
12. Di Donato, D., Mugnai, D.: On a highly nonlinear self-obstacle optimal control problem. Appl. Math. Optim. 72(2), 261–290 (2015)
13. Gazzola, F., Grunau, H.C., Sweers, G.: Polyharmonic Boundary Value Problems: Positivity Preserving and Nonlinear Higher Order Elliptic Equations in Bounded Domains. Lecture Notes in Mathematics, vol. 1991. Springer, Berlin (2010)
14. Ghanem, R.: Optimal control of unilateral obstacle problem with a source term. Positivity 13(2), 321–338 (2009)
15. Glowinski, R., Lions, J.L., Trémolières, R.: Numerical Analysis of Variational Inequalities. Studies in Mathematics and its Applications, vol. 8. North-Holland, Amsterdam (1981). Translated from the French
16. He, Z.X., Moroşanu, G.: Optimal control of biharmonic variational inequalities. An. Ştiinţ. Univ. Al. I. Cuza Iaşi Secţ. I a Mat. 35(2), 153–170 (1989)
17. Ito, K., Kunisch, K.: Optimal control of obstacle problems by H¹-obstacles. Appl. Math. Optim. 56(1), 1–17 (2007)
18. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Classics in Applied Mathematics, vol. 31. SIAM, Philadelphia (2000). Reprint of the 1980 original
19. Liang, J., Santos, L.: On a class of nonlinear high order variational inequality systems. Differ. Integr. Equ. 6(6), 1519–1530 (1993)
20. Lions, J.L.: Optimal Control of Systems Governed by Partial Differential Equations. Die Grundlehren der mathematischen Wissenschaften, Band 170. Springer, New York (1971). Translated from the French by S.K. Mitter
21. Lions, J.L.: Quelques méthodes de résolution des problèmes aux limites non linéaires. Dunod; Gauthier-Villars, Paris (1969)
22. Lions, J.L., Stampacchia, G.: Variational inequalities. Commun. Pure Appl. Math. 20, 493–519 (1967)
23. Lou, H.: On the regularity of an obstacle control problem. J. Math. Anal. Appl. 258(1), 32–51 (2001)
24. Lou, H.: An optimal control problem governed by quasi-linear variational inequalities. SIAM J. Control Optim. 41(4), 1229–1253 (2002)
25. Mignot, F.: Contrôle dans les inéquations variationelles elliptiques. J. Funct. Anal. 22(2), 130–185 (1976)
26. Mignot, F., Puel, J.P.: Optimal control in some variational inequalities. SIAM J. Control Optim. 22(3), 466–476 (1984)
27. Skrypnik, I.V.: Solvability of boundary value problems for nonlinear elliptic equations that are not in the divergence form. Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 84, 243–251, 314, 320 (1979). Boundary value problems of mathematical physics and related questions in the theory of functions, 11
28. Wallin, H.: Riesz potentials, k,p-capacity, and p-modules. Mich. Math. J. 18, 257–263 (1971)
29. Ye, Y., Chen, Q.: Optimal control of the obstacle in a quasilinear elliptic variational inequality. J. Math. Anal. Appl. 294(1), 258–272 (2004)
30. Zgurovskiĭ, M.Z., Mel'nik, V.S.: The penalty method for variational inequalities with multivalued mappings. III. Kibernet. Sistem. Anal. 37(2), 203–213 (2001)