J Optim Theory Appl DOI 10.1007/s10957-015-0763-3
Chain Rules for a Proper ε-Subdifferential of Vector Mappings

César Gutiérrez¹ · Lidia Huerga² · Vicente Novo² · Lionel Thibault³
Received: 14 November 2014 / Accepted: 23 May 2015 © Springer Science+Business Media New York 2015
Abstract In this paper, we derive exact chain rules for a proper epsilon-subdifferential in the sense of Benson of extended vector mappings, recently introduced by ourselves. For this aim, we use a new regularity condition and a new strong epsilon-subdifferential for vector mappings. In particular, we determine chain rules when one of the mappings is linear, obtaining formulations easier to handle in the finite-dimensional case by considering the componentwise order. This Benson proper epsilon-subdifferential generalizes and improves several of the most important proper epsilon-subdifferentials of vector mappings given in the literature and, consequently, the results presented in this work extend known chain rules stated for the last ones. As an application, we derive a characterization of approximate Benson proper solutions of implicitly constrained convex Pareto problems. Moreover, we estimate the distance between the objective values of these approximate proper solutions and the set of nondominated attained values.
Communicated by Boris S. Mordukhovich.
Vicente Novo (corresponding author): [email protected]
César Gutiérrez: [email protected]
Lidia Huerga: [email protected]
Lionel Thibault: [email protected]

1 Universidad de Valladolid, Valladolid, Spain
2 Universidad Nacional de Educación a Distancia, Madrid, Spain
3 Université Montpellier 2, Montpellier, France
Keywords Vector optimization · Proper ε-efficiency · Proper ε-subdifferential · Strong ε-efficiency · Strong ε-subdifferential · Nearly cone-subconvexlikeness · Linear scalarization

Mathematics Subject Classification 90C48 · 90C25 · 90C29 · 49J52 · 49K27
1 Introduction

In [1], we introduced a Benson proper epsilon-subdifferential for extended vector mappings, defined by means of a new concept of approximate proper solution in the sense of Benson of a vector optimization problem, also introduced in [1] (see also [2]). This concept generalizes the notion of approximate proper efficiency given by Gao et al. in [3], and it is based on the concept of approximate efficiency introduced by Gutiérrez et al. in [4,5]. These approximate Benson proper solutions stand out because they can be characterized through approximate solutions of related scalar optimization problems under generalized convexity assumptions, and they have a good limit behavior when the precision error tends to zero, in the sense that the limits of the approximate Benson proper solutions belong to the closure of the efficient set (see [1,6]).

The Benson proper epsilon-subdifferential inherits the good properties of these solutions. In particular, it can be characterized in terms of epsilon-subgradients of associated scalar mappings, under generalized convexity assumptions. Moreover, it generalizes the proper epsilon-subdifferentials given by El Maghri [7] and Tuan [8], respectively, improving them, because it has a better limit behavior when the error tends to zero (see [1]). Its basic properties were studied in [1,6], and existence conditions and Moreau–Rockafellar type theorems were also obtained in these papers.

The aim of this work is to establish, in the setting of locally convex Hausdorff topological vector spaces, exact chain rules for this Benson proper epsilon-subdifferential under generalized convexity assumptions. For this aim, we use a new type of strong epsilon-subdifferential, introduced by ourselves in [6].
This strong subdifferential is defined in terms of a new concept of approximate strong solution of a vector optimization problem, also introduced in [6], and it is characterized through epsilon-subgradients of associated scalar mappings, without any assumption. The obtained results extend those given by El Maghri in [9].

The paper is structured as follows. In Sect. 2, we state the framework, some notations and several necessary previous results. In Sect. 3, we obtain two exact chain rules for the Benson proper epsilon-subdifferential of the composition of vector mappings in locally convex Hausdorff topological vector spaces. The first one is given by using the regularity condition introduced by El Maghri in [7], and the second one is obtained via a weaker regularity condition given in [6], which extends to the Benson proper epsilon-subdifferential the well-known regularity condition due to Raffin [10]. Moreover, we state chain rules when one of the involved mappings is linear, which reduce to easier formulations in the finite-dimensional case with the componentwise order. These particular formulae are important from a practical point of view, since they can be applied, for instance, in scalarization processes. In Sect. 4, we apply some
of the previous results in order to characterize approximate Benson proper solutions of implicitly constrained convex Pareto problems. This characterization is interesting because, as we will prove, the distance between the images of these solutions and the set of nondominated attained values is less than a previously fixed tolerance. This information is very useful for decision makers, in order to satisfy previous requirements or goals. Finally, we state the conclusions.
2 Preliminaries

Let X and Y be real locally convex Hausdorff topological vector spaces, where Y is ordered by a cone D. We consider that D is nontrivial (D ≠ {0}), pointed (D ∩ (−D) = {0}), convex and closed. As usual, the order in Y is given by D as follows: for y_1, y_2 ∈ Y,

y_1 ≤_D y_2 ⟺ y_2 − y_1 ∈ D.

The topological dual spaces of X and Y are denoted by X^* and Y^*, respectively, and the dual pairing in X (respectively, Y) is denoted by ⟨x^*, x⟩, x^* ∈ X^*, x ∈ X (respectively, ⟨y^*, y⟩, y^* ∈ Y^*, y ∈ Y). We consider a locally convex topology T on Y^* compatible with the dual pair, i.e., (Y^*, T)^* = Y. The set of continuous linear mappings from X to Y is denoted by L(X, Y).

For a set F ⊂ Y, we denote by int F, cl F and cone F the topological interior, the closure and the cone generated by F, respectively. We recall that D has a compact base iff there exists a compact and convex set B ⊂ D such that 0 ∉ B and cone B = D. The nonnegative orthant of R^p is denoted by R^p_+, and R_+ := R^1_+.

In Y, we define an element +∞_Y, which is the supremum with respect to ≤_D, and we denote Ȳ := Y ∪ {+∞_Y}. It holds that y ≤_D +∞_Y for all y ∈ Ȳ, and if there exists y ∈ Ȳ such that +∞_Y ≤_D y, then y = +∞_Y. The algebraic operations of Y are extended as follows:

+∞_Y + y = y + (+∞_Y) = +∞_Y, α · (+∞_Y) = +∞_Y, ∀ y ∈ Ȳ, ∀ α > 0.

We assume that 0 · (+∞_Y) = 0. In particular, if Y = R and D = R_+, then R̄ = R ∪ {+∞}, where +∞ := +∞_R.

We denote the positive and the strict positive polar cone of D, respectively, by

D^+ := {μ ∈ Y^*: ⟨μ, d⟩ ≥ 0, ∀ d ∈ D},
D^{s+} := {μ ∈ Y^*: ⟨μ, d⟩ > 0, ∀ d ∈ D\{0}}.

Given an arbitrary mapping f: X → Ȳ, we denote the effective domain and the image of f by dom f and Im f, respectively, i.e.,

dom f := {x ∈ X: f(x) ∈ Y}, Im f := {f(x): x ∈ dom f}.
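To fix intuition in the finite-dimensional Pareto case, the componentwise order ≤_D induced by D = R^p_+ together with the adjoined top element +∞_Y can be sketched as follows. This is a minimal illustration of the definitions above; the sentinel `INF` and the name `leq_D` are our own, not notation from the paper:

```python
# Minimal sketch of the order <=_D on R^p induced by D = R^p_+,
# extended with the adjoined supremum +infinity_Y.
INF = object()  # sentinel standing in for +infinity_Y

def leq_D(y1, y2):
    """y1 <=_D y2 iff y2 - y1 lies in D = R^p_+, i.e. componentwise <=.
    Every element satisfies y <=_D +infinity_Y, and +infinity_Y <=_D y
    forces y = +infinity_Y, as stated in the text."""
    if y2 is INF:
        return True
    if y1 is INF:
        return y1 is y2  # only +infinity_Y itself dominates +infinity_Y
    return all(a <= b for a, b in zip(y1, y2))
```

For instance, `leq_D((1, 2), (1, 3))` holds, `leq_D((2, 1), (1, 3))` fails (the order is only partial), and `leq_D(INF, y)` holds only when `y` is `INF`.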
We say that f is proper iff dom f ≠ ∅. Moreover, for each y^* ∈ Y^*, we set ⟨y^*, f(x)⟩ := +∞ for all x ∈ X, x ∉ dom f. Therefore, y^* ∘ f: X → R̄ and dom(y^* ∘ f) = dom f.

In this work, we consider the following vector optimization problem:

Minimize f(x) subject to x ∈ S,   (P_S)

where f is proper and S ⊂ X is nonempty. We assume that S_0 := S ∩ dom f ≠ ∅ in order to avoid trivial problems. We say that (P_S) is a Pareto problem, denoted by (P̄_S), when X = R^n, Y = R^p and D = R^p_+. In this case, we denote f := (f_1, f_2, ..., f_p) on dom f.

Let us recall that f is D-convex on S iff S is convex and

f(αx_1 + (1−α)x_2) ≤_D αf(x_1) + (1−α)f(x_2), ∀ x_1, x_2 ∈ S, ∀ α ∈ ]0, 1[.

In particular, if f is D-convex on S, then S_0 and f(S_0) + D are convex.

Now, we recall some classical solution notions of problem (P_S). It is said (see, for instance, [11–13]) that a point x_0 ∈ S_0 is a strong efficient solution of problem (P_S), denoted by x_0 ∈ SE(f, S), iff f(x_0) ≤_D f(x) for all x ∈ S. On the other hand, a point x_0 ∈ S_0 is a Benson proper efficient solution of problem (P_S), denoted by x_0 ∈ Be(f, S), iff

cl cone(f(S_0) − f(x_0) + D) ∩ (−D) = {0}.

Clearly, SE(f, S) ⊂ Be(f, S) ⊂ dom f.

For each nonempty set C ⊂ Y, we define the mappings C, C_0: R_+ ⇒ Y as follows:

C(ε) := εC if ε > 0, C(ε) := cone C if ε = 0,   (1)

and C_0(ε) := C(ε)\{0}.

The following notion of approximate efficiency was introduced by Gutiérrez, Jiménez and Novo (see [4,5]) and generalizes and unifies the most important approximate efficiency notions of the literature. In this concept, the error is quantified by a nonempty set C ⊂ Y and a nonnegative scalar ε.

Definition 2.1 Let C ⊂ Y be nonempty and ε ≥ 0. A point x_0 ∈ S_0 is a (C, ε)-efficient solution of problem (P_S), denoted by x_0 ∈ AE(f, S, C, ε), iff

(f(S_0) − f(x_0)) ∩ (−C_0(ε)) = ∅.

Obviously, if cone C = D, then AE(f, S, C, 0) = E(f, S), where E(f, S) denotes the set of exact efficient solutions of (P_S). Let us recall that x_0 ∈ E(f, S) iff x_0 ∈ S and

x ∈ S, f(x) ≤_D f(x_0) ⇒ f(x) = f(x_0).
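In the bi-objective Pareto case, Definition 2.1 can be checked directly on a finite list of attained objective values. The following sketch is our own illustration (not from the paper): we take C = q + R²_+ for a fixed q ∈ int R²_+, so that C(ε) = εq + R²_+ for ε > 0 and the defining condition amounts to "no attained value is componentwise below f(x_0) − εq":

```python
def is_C_eps_efficient(values, i0, eps, q=(1.0, 1.0)):
    """Check (C, eps)-efficiency (Definition 2.1) of values[i0], for eps > 0
    and C = q + R^2_+, so C_0(eps) = eps*q + R^2_+ (which excludes 0).
    The condition (f(S_0) - f(x_0)) n (-C_0(eps)) = empty set is equivalent
    to: no attained value y satisfies y <= f(x_0) - eps*q componentwise."""
    assert eps > 0  # the eps = 0 case uses cone C and is not handled here
    y0 = values[i0]
    threshold = tuple(a - eps * b for a, b in zip(y0, q))
    return not any(all(v <= t for v, t in zip(y, threshold)) for y in values)
```

For `values = [(0, 0), (2, 2), (1, 3), (5, 5)]` and ε = 1, only `(0, 0)` is (C, 1)-efficient; enlarging the tolerance to ε = 3 also makes `(2, 2)` ε-efficient, illustrating how the approximate solution set grows with ε.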
The next concept of proper ε-efficiency was introduced by Gutiérrez et al. in [2] (see also [1]), and it is a proper version in the sense of Benson of the (C, ε)-efficiency notion due to Gutiérrez, Jiménez and Novo stated in Definition 2.1.

Definition 2.2 Let ε ≥ 0 and consider a nonempty set C ⊂ Y such that cl(C(0)) ∩ (−D\{0}) = ∅. A point x_0 ∈ S_0 is a Benson (C, ε)-proper solution of (P_S), denoted by x_0 ∈ Be(f, S, C, ε), iff

cl cone(f(S_0) + C(ε) − f(x_0)) ∩ (−D) = {0}.

Remark 2.1 The notion given in Definition 2.2 extends the concepts of approximate proper efficiency in the sense of Benson introduced by Rong [14] and Gao et al. [3] (see [1, Remark 2.4]). Let us note that in the notion of approximate proper efficiency introduced by Rong [14], the error is quantified by a unique vector q ∈ D\{0}. In practice, this fact can produce sets of approximate proper solutions that are too big, and that can even contain points as far as one wants from the efficient set. In [5, Example 3.10], we illustrate this situation. In order to overcome this drawback, in Definition 2.2 we consider an approximation set C instead of a vector. Indeed, in [5, Theorem 3.7], we prove that for a suitable C, the set of Benson (C, ε)-proper solutions of (P_S) has a good limit behavior when the precision ε tends to zero, in the sense that all its elements are close to the efficient set. On the other hand, the approximate proper efficiency concept introduced by Gao et al. [3] is also based on the (C, ε)-efficiency notion. However, the sufficient conditions that guarantee a good limit behavior of the set of Benson (C, ε)-proper solutions are not satisfied when one considers Gao, Yang and Teo's notion.

The following concept of approximate strong efficient solution of problem (P_S) was introduced in [6]; it is motivated by the (C, ε)-efficiency notion and generalizes the strong q-efficiency notion (see [15]).

Definition 2.3 Let ε ≥ 0 and let C ⊂ D be a nonempty set. A point x_0 ∈ S is a (C, ε)-strong efficient solution of (P_S), denoted by x_0 ∈ SE(f, S, C, ε), iff

f(x_0) − q ≤_D f(x), ∀ q ∈ C(ε), ∀ x ∈ S.

In the next definition, we recall the Brøndsted–Rockafellar ε-subdifferential [16].

Definition 2.4 Let h: X → R̄ be a proper mapping and ε ≥ 0. The ε-subdifferential of h at a point x_0 ∈ dom h is defined as follows:

∂_ε h(x_0) := {x^* ∈ X^*: h(x) ≥ h(x_0) − ε + ⟨x^*, x − x_0⟩, ∀ x ∈ X}.

The elements of ∂_ε h(x_0) are called ε-subgradients of h at x_0.

In the following definition, we recall a generalized convexity notion, introduced by Gutiérrez et al. in [2]. It is an "approximate" version of the well-known nearly cone-subconvexlikeness concept due to Yang et al. [17].
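A scalar example of Definition 2.4, with X = R (our own illustration): for h(x) = x², minimizing x² − x^*x shows that the defining inequality holds iff (x^*/2 − x_0)² ≤ ε, so ∂_ε h(x_0) = [2x_0 − 2√ε, 2x_0 + 2√ε]. The following sketch verifies the defining inequality numerically on a grid; the function name and grid are ours:

```python
import math

def is_eps_subgradient(xstar, x0, eps, h=lambda x: x * x, grid=None):
    """Test the Brondsted-Rockafellar inequality of Definition 2.4,
    h(x) >= h(x0) - eps + xstar*(x - x0), over a finite grid of points."""
    if grid is None:
        grid = [k / 20.0 for k in range(-100, 101)]  # [-5, 5], step 0.05
    return all(h(x) >= h(x0) - eps + xstar * (x - x0) for x in grid)

x0, eps = 1.0, 0.25
# predicted interval [2*x0 - 2*sqrt(eps), 2*x0 + 2*sqrt(eps)] = [1, 3]
assert is_eps_subgradient(2 * x0 - 2 * math.sqrt(eps), x0, eps)
assert is_eps_subgradient(2 * x0 + 2 * math.sqrt(eps), x0, eps)
# a slope just outside the interval violates the inequality near x = xstar/2
assert not is_eps_subgradient(3.1, x0, eps)
```

For ε = 0 the interval collapses to the classical subdifferential {2x_0}, consistent with ∂_0 h = ∂h.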
Definition 2.5 Consider ε ≥ 0, two nonempty sets M ⊂ X and C ⊂ Y, and a proper vector mapping f: X → Ȳ such that M_0 := M ∩ dom f ≠ ∅. It is said that f is nearly (C, ε)-subconvexlike on M iff cl cone(f(M_0) + C(ε)) is convex.

In [1, Theorem 2.7], the reader can find sufficient conditions that imply the nearly (C, ε)-subconvexlikeness of the mapping f.

For a nonempty set C ⊂ Y, we define the mapping τ_C: Y^* → R ∪ {−∞} as follows:

τ_C(y^*) := inf_{y ∈ C} ⟨y^*, y⟩, ∀ y^* ∈ Y^*.

We also denote

C^{τ+} := {y^* ∈ Y^*: τ_C(y^*) ≥ 0},
H_{Y,D} := {C ⊂ Y: C ≠ ∅, cl(C(0)) ∩ (−D\{0}) = ∅},
F_{Y,D} := {C ⊂ Y: C ≠ ∅, D^{s+} ∩ C^{τ+} ≠ ∅}.

Observe that C^{τ+} = (cone C)^+. Given two nonempty sets C_1, C_2 ⊂ Y and α_1, α_2 ∈ R_+, observe that

τ_{C_1(α_1)+C_2(α_2)}(y^*) = α_1 τ_{C_1}(y^*) + α_2 τ_{C_2}(y^*), ∀ y^* ∈ C_1^{τ+} ∩ C_2^{τ+}.

Clearly, F_{Y,D} ⊂ H_{Y,D}, i.e., D^{s+} ∩ C^{τ+} ≠ ∅ ⇒ cl(C(0)) ∩ (−D\{0}) = ∅. Moreover, if C is convex and int D^+ ≠ ∅, then by [18, Proposition 2] the reciprocal implication is also true.

The next concept of Benson proper ε-subdifferential was introduced in [1] and is defined by means of Benson (C, ε)-proper solutions.

Definition 2.6 Let x_0 ∈ dom f, ε ≥ 0 and C ∈ H_{Y,D}. The Benson (C, ε)-proper subdifferential of f at x_0 is the set

∂^{Be}_{C,ε} f(x_0) := {T ∈ L(X, Y): x_0 ∈ Be(f − T, X, C, ε)}.

Remark 2.2 If C ⊂ Y satisfies cl cone C = D, then from [1, Remark 2.4(a)], we have that

∂^{Be}_{C,0} f(x_0) = {T ∈ L(X, Y): x_0 ∈ Be(f − T, X)}.

This exact proper subdifferential will be denoted by ∂^{Be} f(x_0). In the sequel, we denote f_T := f − T, where T ∈ L(X, Y).

The following three results will be needed along the paper. They were given in [1] and provide a characterization of the Benson (C, ε)-proper subdifferential in terms of ε-subgradients of associated scalar mappings under generalized convexity assumptions.
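As a worked illustration of the scalarizing functional τ_C defined above (our own computation, not from [1]): for a translated cone C = q + D, the type of approximation set used repeatedly in Sect. 3 with Q = q + K, one has

```latex
\tau_{q+D}(y^{*})
  = \inf_{d \in D}\,\langle y^{*},\, q+d\rangle
  = \langle y^{*}, q\rangle + \inf_{d \in D}\,\langle y^{*}, d\rangle
  = \begin{cases}
      \langle y^{*}, q\rangle, & y^{*} \in D^{+},\\
      -\infty, & y^{*} \notin D^{+},
    \end{cases}
```

since ⟨y^*, d⟩ ≥ 0 on D with value 0 at d = 0 when y^* ∈ D^+, while if ⟨y^*, d_0⟩ < 0 for some d_0 ∈ D, then td_0 with t → +∞ drives the infimum to −∞. In particular, (q + D)^{τ+} = {y^* ∈ D^+: ⟨y^*, q⟩ ≥ 0}.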
Theorem 2.1 [1, Theorem 4.4] Suppose that int D^+ ≠ ∅. Let x_0 ∈ dom f, ε ≥ 0 and C ∈ H_{Y,D}. If T ∈ ∂^{Be}_{C,ε} f(x_0) and f_T − f_T(x_0) is nearly (C, ε)-subconvexlike on X, then C ∈ F_{Y,D} and there exists μ ∈ D^{s+} ∩ C^{τ+} such that μ ∘ T ∈ ∂_{ετ_C(μ)}(μ ∘ f)(x_0).

For the following result, we use the convention ∪_{i ∈ ∅} A_i = ∅.

Theorem 2.2 [1, Theorem 4.5] Let x_0 ∈ dom f, ε ≥ 0 and C ∈ H_{Y,D}. It follows that

∪_{μ ∈ D^{s+} ∩ C^{τ+}} {T ∈ L(X, Y): μ ∘ T ∈ ∂_{ετ_C(μ)}(μ ∘ f)(x_0)} ⊂ ∂^{Be}_{C,ε} f(x_0).

Corollary 2.1 [1, Corollary 4.6] Let x_0 ∈ dom f, ε ≥ 0 and C ∈ H_{Y,D}. Suppose that int D^+ ≠ ∅ and f_T − f_T(x_0) is nearly (C, ε)-subconvexlike on X, for all T ∈ L(X, Y). Then, C ∈ F_{Y,D} and

∂^{Be}_{C,ε} f(x_0) = ∪_{μ ∈ D^{s+} ∩ C^{τ+}} {T ∈ L(X, Y): μ ∘ T ∈ ∂_{ετ_C(μ)}(μ ∘ f)(x_0)}.
Moreover, in [6] the following strong ε-subdifferential was introduced, through (C, ε)-strong efficient solutions of problem (P_X).

Definition 2.7 Let ε ≥ 0 and let C ⊂ D be a nonempty set. The (C, ε)-strong subdifferential of f at x_0 ∈ dom f is given by

∂^s_{C,ε} f(x_0) := {T ∈ L(X, Y): x_0 ∈ SE(f − T, X, C, ε)}.

The usual exact strong subdifferential of f at x_0 ∈ dom f is denoted by ∂^s f(x_0), i.e. (see, for instance, [19]),

∂^s f(x_0) := {T ∈ L(X, Y): f(x) ≥_D f(x_0) + T(x − x_0), ∀ x ∈ X}
          = {T ∈ L(X, Y): x_0 ∈ SE(f − T, X)}.

Remark 2.3 Definition 2.7 reduces to the strong ε-subdifferential concept given by Kutateladze [15] by considering C = {q}, q ∈ D (see also [7]).

The following result was proved in [6] and gives a characterization of the (C, ε)-strong subdifferential of f at a point x_0 ∈ dom f through ε-subgradients of scalar mappings.

Theorem 2.3 [6, Theorem 4.4] Let x_0 ∈ dom f, ε ≥ 0 and let C ⊂ D be a nonempty set. Then,

∂^s_{C,ε} f(x_0) = ∩_{μ ∈ D^+\{0}} {T ∈ L(X, Y): μ ∘ T ∈ ∂_{ετ_C(μ)}(μ ∘ f)(x_0)}.
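As a sanity check of Theorem 2.3 (our own verification, with Y = R, D = R_+ and C = {1}): the nonzero positive multipliers are the scalars μ > 0, τ_C(μ) = μ, and the scaling rule ∂_{εμ}(μh)(x_0) = μ ∂_ε h(x_0) for μ > 0 gives

```latex
\partial^{s}_{\{1\},\varepsilon} h(x_{0})
 = \bigcap_{\mu>0}\{x^{*} \in X^{*}: \mu x^{*} \in \partial_{\varepsilon\mu}(\mu h)(x_{0})\}
 = \bigcap_{\mu>0} \partial_{\varepsilon} h(x_{0})
 = \partial_{\varepsilon} h(x_{0}),
```

which agrees with Definition 2.7 directly: x_0 ∈ SE(h − T, X, {1}, ε) means h(x_0) − ⟨T, x_0⟩ − ε ≤ h(x) − ⟨T, x⟩ for all x ∈ X, i.e., T ∈ ∂_ε h(x_0).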
Next, we recall the notion of p-regular ε-subdifferentiability given by El Maghri in [7] (see also [9]), which will be used along the paper.

Definition 2.8 Let x_0 ∈ dom f and ε ≥ 0. It is said that f is p-regular ε-subdifferentiable at x_0 iff ε = 0 and

∂(μ ∘ f)(x_0) = μ ∘ ∂^s f(x_0), ∀ μ ∈ D^{s+},   (2)

or ε > 0 and

∂_ε(μ ∘ f)(x_0) = ∪_{q ∈ D, ⟨μ,q⟩ = ε} μ ∘ ∂^s_{{q},1} f(x_0), ∀ μ ∈ D^{s+}.   (3)

It is said that f is p-regular subdifferentiable at x_0 iff f satisfies (2).

In this paper, we also consider the following regularity condition, which was introduced in [6, Definition 5.1]. It is defined in terms of sets C ⊂ D instead of vectors q ∈ D. Given a nonempty set C ⊂ Y, let C^{sτ+} := {y^* ∈ Y^*: τ_C(y^*) > 0}.

Definition 2.9 Let x_0 ∈ dom f, ε ≥ 0 and let C ⊂ Y be a nonempty set. It is said that f is regular (C, ε)-subdifferentiable at x_0 iff ε = 0 and

∂(μ ∘ f)(x_0) = μ ∘ ∂^s f(x_0), ∀ μ ∈ D^{s+},

or ε > 0 and

∂_ε(μ ∘ f)(x_0) = ∪_{C′ ⊂ D, τ_{C′}(μ) = 1} μ ∘ ∂^s_{C′,ε} f(x_0), ∀ μ ∈ D^{s+} ∩ C^{sτ+}.   (4)

Observe that the inclusion "⊃" in (2), (3) and (4) always holds (for more details, see [6, Remark 5.2]), so in the last two definitions we can replace the equalities by "⊂". In [6, Proposition 5.8], it was proved that p-regular ε-subdifferentiability at a point x_0 ∈ dom f implies regular (C, ε)-subdifferentiability at x_0. Moreover, in [6, Proposition 5.9], the reader can find a sufficient condition under which both concepts are equivalent.
3 Chain Rules for Benson (Q, ε)-Proper Subdifferentials

In this section, we derive formulae for the Benson (Q, ε)-proper subdifferential of the composition of a proper vector mapping f_1: X → Ȳ with another nondecreasing proper vector mapping f_2: Ȳ → Z̄, where Z is an ordered locally convex Hausdorff topological vector space, ∅ ≠ Q ⊂ Z, and the order cone K ⊂ Z is assumed to be nontrivial, pointed, closed and convex. Moreover, we set f_2(+∞_Y) = +∞_Z, and we consider that the topology of Z^* is compatible with the dual pairing. As particular cases, we obtain calculus rules for the Benson (Q, ε)-proper subdifferential of A ∘ f_1, for A ∈ L_+(Y, Z), and of f_2 ∘ B, for B ∈ L(X, Y), where L_+(Y, Z) denotes the subset of linear mappings T: Y → Z such that T(D) ⊂ K.

We recall that f_2 is (D, K)-nondecreasing on a nonempty set F ⊂ Y iff

∀ y_1, y_2 ∈ F, y_1 ≤_D y_2 ⇒ f_2(y_1) ≤_K f_2(y_2).

In the same way as for the space Y, we denote

H_{Z,K} := {Q ⊂ Z: Q ≠ ∅, cl(Q(0)) ∩ (−K\{0}) = ∅},
F_{Z,K} := {Q ⊂ Z: Q ≠ ∅, K^{s+} ∩ Q^{τ+} ≠ ∅}.

For ε ≥ 0 and Q ∈ H_{Z,K}, we define (see (1))

S_{Q(ε)} := {(Q_1, Q_2) ∈ H_{Z,K} × 2^K: Q(ε) ⊂ Q_1 + Q_2}.

Theorem 3.1 Let ε ≥ 0, Q ∈ H_{Z,K} and x_0 ∈ dom(f_2 ∘ f_1). Then,

∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0) ⊃ ∪_{(Q_1,Q_2) ∈ S_{Q(ε)}, Q_1+K ⊂ cl Q_1} ∪_{A ∈ ∂^s_{Q_2,1} f_2(f_1(x_0))} ∂^{Be}_{Q_1,1}(A ∘ f_1)(x_0).

Moreover, if K has a compact base, then

∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0) ⊃ ∪_{(Q_1,Q_2) ∈ S_{Q(ε)}} ∪_{A ∈ ∂^s_{Q_2,1} f_2(f_1(x_0))} ∂^{Be}_{Q_1,1}(A ∘ f_1)(x_0).   (5)

Proof Let (Q_1, Q_2) ∈ S_{Q(ε)}, with Q_1 + K ⊂ cl Q_1, A ∈ ∂^s_{Q_2,1} f_2(f_1(x_0)) and B ∈ ∂^{Be}_{Q_1,1}(A ∘ f_1)(x_0). Suppose by contradiction that B ∉ ∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0), i.e., x_0 ∉ Be((f_2 ∘ f_1) − B, X, Q, ε). Thus, there exist k_0 ∈ K\{0} and nets (α_i) ⊂ R_+, (x_i) ⊂ dom(f_2 ∘ f_1) and (q_i) ⊂ Q(ε) such that

α_i((f_2 ∘ f_1)(x_i) − B(x_i) + q_i − (f_2 ∘ f_1)(x_0) + B(x_0)) → −k_0.   (6)

Since Q(ε) ⊂ Q_1 + Q_2, there exist nets (q_i^1) ⊂ Q_1 and (q_i^2) ⊂ Q_2 such that q_i = q_i^1 + q_i^2, for each i. On the other hand, as A ∈ ∂^s_{Q_2,1} f_2(f_1(x_0)), it follows that

(f_2 − A)(y) + u − (f_2 − A)(f_1(x_0)) ∈ K, ∀ y ∈ dom f_2, ∀ u ∈ Q_2.

Hence, in particular, we have that

(f_2 − A)(f_1(x_i)) + q_i^2 − (f_2 − A)(f_1(x_0)) ∈ K, ∀ i,
so there exists (k_i) ⊂ K such that

(f_2 ∘ f_1)(x_i) + q_i^2 − (f_2 ∘ f_1)(x_0) = k_i + (A ∘ f_1)(x_i) − (A ∘ f_1)(x_0).   (7)

Thus, taking into account statement (7), we deduce that (6) is equivalent to

α_i((A ∘ f_1)(x_i) − B(x_i) + q_i^1 + k_i − (A ∘ f_1)(x_0) + B(x_0)) → −k_0,

which implies that x_0 ∉ Be((A ∘ f_1) − B, X, Q_1, 1), since Q_1 + K ⊂ cl Q_1. Therefore, B ∉ ∂^{Be}_{Q_1,1}(A ∘ f_1)(x_0) and we reach a contradiction.

Finally, suppose that K has a compact base. Reasoning in the same way as above and applying [1, Proposition 4.3(e)], it follows that

B ∉ ∂^{Be}_{Q_1+K,1}(A ∘ f_1)(x_0) = ∂^{Be}_{Q_1,1}(A ∘ f_1)(x_0),

obtaining again a contradiction, and (5) is proved. □
Remark 3.1 In [7,9], the author considered a proper ε-subdifferential, defined in terms of proper approximate solutions with respect to a vector in the sense of Henig. Indeed, for q ∈ Y\(−D\{0}), he defined the proper q-subdifferential of f at a point x_0 ∈ dom f as

∂^p_q f(x_0) := {T ∈ L(X, Y): x_0 ∈ E^p_q(f − T, X, D)},

where E^p_q(f, S, D) denotes the set of proper approximate solutions of (P_S) with respect to q in the sense of Henig, i.e., x_0 ∈ E^p_q(f, S, D) iff x_0 ∈ S_0 and there exists a convex cone D′ ⊂ Y, D′ ≠ Y, with D\{0} ⊂ int D′, such that

(f(S) + q − f(x_0)) ∩ (−D′\{0}) = ∅.

Since D\{0} ⊂ int D′, clearly E^p_q(f, S, D) ⊂ Be(f, S, q + D, 1), so ∂^p_q f(x_0) ⊂ ∂^{Be}_{q+D,1} f(x_0). Moreover, if int D^+ ≠ ∅ and f_T − f_T(x_0) is nearly (q + D, 1)-subconvexlike on X, for all T ∈ L(X, Y), then by [1, Remark 4.7(b)] and [7, Theorem 3.2, σ = p], we deduce that ∂^p_q f(x_0) = ∂^{Be}_{q+D,1} f(x_0).

Let q ∈ Z\(−K\{0}), x_0 ∈ dom(f_2 ∘ f_1), and suppose that int K^+ ≠ ∅ and (f_2 ∘ f_1)_T − (f_2 ∘ f_1)_T(x_0) is nearly (q + K, 1)-subconvexlike on X, for all T ∈ L(X, Z). Let q_1 ∈ Z\(−K\{0}) and q_2 ∈ K be such that q_1 + q_2 = q (for example, q_1 = q, q_2 = 0). Thus, taking into account the relations above, if we consider ε = 1 and we define Q := q + K, Q_1 := q_1 + K and Q_2 := {q_2}, then the first part of Theorem 3.1 reduces to the first part of [9, Theorem 3.1, σ = p, F ≡ 0]. Moreover, if we consider Q = Q_1 = K and Q_2 = {0}, then the first part of Theorem 3.1 reduces to the first part of [19, Theorem 4.2, σ = p].

The next result provides a calculus rule for the ε-subdifferential of the post-composition of the mapping f_1 with a scalar mapping h: Y → R̄. Part (a) was stated in [20, Theorem 2.8.10] and part (b) was proved in [21, Theorem 2.2].
Theorem 3.2 Let ε ≥ 0 and let h: Y → R̄ be convex on Y.

(a) Consider x_0 ∈ dom(h ∘ f_1) and suppose that f_1 is D-convex on X, h is (D, R_+)-nondecreasing on Im f_1 + D, and the following constraint qualification holds:

(MRQC1) There exists x̄ ∈ dom(h ∘ f_1) such that h is continuous at f_1(x̄).

Then,

∂_ε(h ∘ f_1)(x_0) = ∪_{ε_1,ε_2 ≥ 0, ε_1+ε_2 = ε} ∪_{y^* ∈ ∂_{ε_2}h(f_1(x_0)) ∩ D^+} ∂_{ε_1}(y^* ∘ f_1)(x_0).

(b) Consider T ∈ L(X, Y), x_0 ∈ dom(h ∘ T), and suppose that (MRQC1) holds for f_1 = T. Then,

∂_ε(h ∘ T)(x_0) = T^* ∂_ε h(T(x_0)),

where T^* denotes the adjoint of T.

The next theorem will be useful in the following, since it gives a sufficient condition for the mapping (f_2 ∘ f_1)_B − (f_2 ∘ f_1)_B(x_0) to be nearly (Q, ε)-subconvexlike on X, for all B ∈ L(X, Z).

Theorem 3.3 Let ε ≥ 0, ∅ ≠ Q ⊂ Z and x_0 ∈ dom(f_2 ∘ f_1). If f_1 is D-convex on X, f_2 is K-convex on Y and (D, K)-nondecreasing on Im f_1 + D, and Q is convex and verifies Q + K = Q, then (f_2 ∘ f_1)_B − (f_2 ∘ f_1)_B(x_0) is nearly (Q, ε)-subconvexlike on X, for all B ∈ L(X, Z).

Proof Let B ∈ L(X, Z). It is well known and easy to check that the mapping φ: X → Z̄, φ = (f_2 ∘ f_1)_B − (f_2 ∘ f_1)_B(x_0), is K-convex on X. Then, φ(X_0) + K is convex, where X_0 = dom(f_2 ∘ f_1). On the other hand, as Q + K = Q, it follows that cl(Q(ε)) = cl(Q(ε) + K), for all ε ≥ 0. Thus,

cl cone(φ(X_0) + Q(ε)) = cl cone(φ(X_0) + cl(Q(ε)))
 = cl cone(φ(X_0) + cl(Q(ε) + K))
 = cl cone(φ(X_0) + K + Q(ε)).

This last set is convex, since φ(X_0) + K and Q(ε) are convex, and the proof finishes. □

The following concept is motivated by [7, Definition 2.1(ii)].

Definition 3.1 We say that f_2 is star K-continuous at a point y_0 ∈ dom f_2 iff λ ∘ f_2 is continuous at y_0, for all λ ∈ K^{s+}.
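Returning to Theorem 3.2(b), here is a concrete one-dimensional check (our own illustration): take X = Y = R, h(y) = y² and T(x) = ax with a > 0, so T^*y^* = ay^*. Then (h ∘ T)(x) = a²x², whose ε-subdifferential at x_0 is the interval [2a²x_0 − 2a√ε, 2a²x_0 + 2a√ε], and T^*∂_ε h(T(x_0)) = a · [2ax_0 − 2√ε, 2ax_0 + 2√ε] is the same interval:

```python
import math

def eps_subdiff_quadratic(c, x0, eps):
    """Endpoints of the eps-subdifferential of x -> c*x**2 at x0 (c > 0):
    the defining inequality holds iff |x* - 2*c*x0| <= 2*sqrt(c*eps)."""
    r = 2.0 * math.sqrt(c * eps)
    return (2.0 * c * x0 - r, 2.0 * c * x0 + r)

a, x0, eps = 3.0, 1.0, 4.0
# left-hand side of Theorem 3.2(b): eps-subdifferential of h o T at x0
lhs = eps_subdiff_quadratic(a * a, x0, eps)
# right-hand side: adjoint image T* of the eps-subdifferential of h at T(x0)
lo, hi = eps_subdiff_quadratic(1.0, a * x0, eps)
rhs = (a * lo, a * hi)
assert all(abs(u - v) < 1e-9 for u, v in zip(lhs, rhs))
```

Both sides give the interval [6, 30] for a = 3, x_0 = 1, ε = 4, in agreement with the exact chain rule.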
For λ ∈ K^{s+}, we define the set L_D(λ) ⊂ L(Y, Z) by

L_D(λ) := {A ∈ L(Y, Z): λ ∘ A ∈ D^+}.

Clearly, ∪_{λ ∈ K^{s+}} L_D(λ) ⊂ L_D, where

L_D := {A ∈ L(Y, Z): A(D) ∩ (−K\{0}) = ∅}.

Theorem 3.4 Suppose that int K^+ ≠ ∅. Consider ε ≥ 0, Q ∈ F_{Z,K} and x_0 ∈ dom(f_2 ∘ f_1). Suppose that (f_2 ∘ f_1)_B − (f_2 ∘ f_1)_B(x_0) is nearly (Q, ε)-subconvexlike on X for all B ∈ L(X, Z), f_1 is D-convex on X, f_2 is K-convex on Y, (D, K)-nondecreasing on Im f_1 + D and p-regular ε̄-subdifferentiable at f_1(x_0), for all ε̄ ≥ 0, and the following constraint qualification holds:

(MRQC2) There exists x̄ ∈ dom(f_2 ∘ f_1) such that f_2 is star K-continuous at f_1(x̄).

Then,

∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0) ⊂ ∪_{λ ∈ K^{s+} ∩ Q^{τ+}} ∪_{(q,Q̂) ∈ K × F_{Z,K}, Q(ε) = Q̂ + {q}, τ_{Q̂}(λ) ≥ 0} ∪_{A ∈ ∂^s_{{q},1} f_2(f_1(x_0)) ∩ L_D(λ)} ∂^{Be}_{Q̂,1}(A ∘ f_1)(x_0).

Moreover, if either Q + K = Q or K has a compact base, then

∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0) = ∪_{(q,Q̂) ∈ K × F_{Z,K}, Q(ε) = Q̂ + {q}} ∪_{A ∈ ∂^s_{{q},1} f_2(f_1(x_0)) ∩ L_D} ∂^{Be}_{Q̂,1}(A ∘ f_1)(x_0).   (8)

Proof Let B ∈ ∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0). By Theorem 2.1, there exists λ ∈ K^{s+} ∩ Q^{τ+} such that

λ ∘ B ∈ ∂_{ετ_Q(λ)}((λ ∘ f_2) ∘ f_1)(x_0).

By the hypotheses, clearly λ ∘ f_2 is convex on Y, (D, R_+)-nondecreasing on Im f_1 + D, and there exists a point x̄ ∈ dom(λ ∘ f_2 ∘ f_1) such that λ ∘ f_2 is continuous at f_1(x̄). Hence, applying Theorem 3.2(a) for h = λ ∘ f_2, there exist δ_1, δ_2 ≥ 0, with δ_1 + δ_2 = ετ_Q(λ), and y^* ∈ ∂_{δ_2}(λ ∘ f_2)(f_1(x_0)) ∩ D^+ such that λ ∘ B ∈ ∂_{δ_1}(y^* ∘ f_1)(x_0).

If δ_2 > 0, since f_2 is p-regular δ_2-subdifferentiable at f_1(x_0), then there exist q ∈ K, with ⟨λ, q⟩ = δ_2, and A ∈ ∂^s_{{q},1} f_2(f_1(x_0)) such that y^* = λ ∘ A. Thus, clearly
A ∈ L_D(λ). Consider the set Q̂ := Q(ε) − {q}. It follows that τ_{Q̂}(λ) = ετ_Q(λ) − δ_2 = δ_1. Hence, we obtain that

λ ∘ B ∈ ∂_{δ_1}(λ ∘ (A ∘ f_1))(x_0) = ∂_{τ_{Q̂}(λ)}(λ ∘ (A ∘ f_1))(x_0),

and by Theorem 2.2, we have that B ∈ ∂^{Be}_{Q̂,1}(A ∘ f_1)(x_0), as we want to prove.

If δ_2 = 0, then there exists A ∈ ∂^s f_2(f_1(x_0)) = ∂^s_{{0},1} f_2(f_1(x_0)) such that y^* = λ ∘ A, and considering q = 0 and Q̂ = Q(ε), the result follows.

Suppose additionally that Q + K = Q, and let (q, Q̂) ∈ K × F_{Z,K} be such that Q(ε) = Q̂ + {q}. Then, Q̂ + K = Q(ε) + K − {q} ⊂ cl Q(ε) − {q} = cl Q̂, and by Theorem 3.1 we have that

∪_{(q,Q̂) ∈ K × F_{Z,K}, Q(ε) = Q̂ + {q}} ∪_{A ∈ ∂^s_{{q},1} f_2(f_1(x_0))} ∂^{Be}_{Q̂,1}(A ∘ f_1)(x_0) ⊂ ∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0).

Moreover, it follows that

∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0) ⊂ ∪_{λ ∈ K^{s+} ∩ Q^{τ+}} ∪_{(q,Q̂) ∈ K × F_{Z,K}, Q(ε) = Q̂ + {q}, τ_{Q̂}(λ) ≥ 0} ∪_{A ∈ ∂^s_{{q},1} f_2(f_1(x_0)) ∩ L_D(λ)} ∂^{Be}_{Q̂,1}(A ∘ f_1)(x_0)
 ⊂ ∪_{(q,Q̂) ∈ K × F_{Z,K}, Q(ε) = Q̂ + {q}} ∪_{A ∈ ∂^s_{{q},1} f_2(f_1(x_0)) ∩ L_D} ∂^{Be}_{Q̂,1}(A ∘ f_1)(x_0),

so (8) is proved. The same conclusion is obtained applying Theorem 3.1 when K has a compact base. □

Remark 3.2 (a) Observe that, if we suppose that Q is convex and Q + K = Q in Theorem 3.4, then by Theorem 3.3 we can avoid the hypothesis of nearly (Q, ε)-subconvexlikeness of (f_2 ∘ f_1)_B − (f_2 ∘ f_1)_B(x_0) on X for all B ∈ L(X, Z).

(b) Theorem 3.4 reduces to [9, Theorem 3.1, σ = p, F ≡ 0] considering Q = q + K, for q ∉ −K\{0}, and reduces to [19, Theorem 4.2] when Q = K.

Now, we are going to prove another chain rule for ∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0), by assuming the regular (Q, ε̄)-subdifferentiability of the mapping f_2 at f_1(x_0). Given ε ≥ 0, Q ∈ F_{Z,K}, a proper mapping h: X → Z̄ and x_0 ∈ dom h, we define the set-valued mapping Γ_{Q,ε}h(x_0): K^{s+} ∩ Q^{τ+} ⇒ L(X, Z) as

Γ_{Q,ε}h(x_0)(λ) := {T ∈ L(X, Z): λ ∘ T ∈ ∂_{ετ_Q(λ)}(λ ∘ h)(x_0)},

and we denote Γh(x_0)(λ) := Γ_{Q,0}h(x_0)(λ) whenever cl cone Q = K.
Observe that, under the hypotheses of Corollary 2.1 referred to the set Q and the mapping h, we have that

∂^{Be}_{Q,ε}h(x_0) = ∪_{λ ∈ K^{s+} ∩ Q^{τ+}} Γ_{Q,ε}h(x_0)(λ).

Moreover, we define the sets Λ_K, Λ_Q(λ) ⊂ 2^K, with λ ∈ K^{s+} ∩ Q^{τ+}, by

Λ_K := {Q′ ⊂ K: Q′ ≠ ∅, Q′ = Q′ + K, Q′ is convex},
Λ_Q(λ) := {Q′ ∈ Λ_K: τ_{Q′}(λ) = τ_Q(λ)},

and the set S_Q(λ) ⊂ F_{Z,K} × 2^K by

S_Q(λ) := {(Q_1, Q_2) ∈ F_{Z,K} × 2^K: τ_{Q_1}(λ) ≥ 0, τ_Q(λ) ≥ τ_{Q_1+Q_2}(λ)}.

The next result is obtained without requiring any additional assumption.

Theorem 3.5 Let ε ≥ 0, Q ∈ F_{Z,K} and x_0 ∈ dom(f_2 ∘ f_1). It follows that

∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0) ⊃ ∪_{λ ∈ K^{s+} ∩ Q^{τ+}} ∪_{ε_1,ε_2 ≥ 0, (Q_1,Q_2) ∈ F_{Z,K} × 2^K, (Q_1(ε_1),Q_2(ε_2)) ∈ S_{Q(ε)}(λ)} ∪_{A ∈ ∂^s_{Q_2,ε_2} f_2(f_1(x_0))} Γ_{Q_1,ε_1}(A ∘ f_1)(x_0)(λ).

Proof Consider λ ∈ K^{s+} ∩ Q^{τ+}, ε_1, ε_2 ≥ 0 and (Q_1, Q_2) ∈ F_{Z,K} × 2^K with (Q_1(ε_1), Q_2(ε_2)) ∈ S_{Q(ε)}(λ), A ∈ ∂^s_{Q_2,ε_2} f_2(f_1(x_0)) and B ∈ Γ_{Q_1,ε_1}(A ∘ f_1)(x_0)(λ). Then, λ ∘ B ∈ ∂_{ε_1τ_{Q_1}(λ)}(λ ∘ (A ∘ f_1))(x_0), i.e.,

(λ ∘ A ∘ f_1)(x) ≥ (λ ∘ A ∘ f_1)(x_0) − ε_1τ_{Q_1}(λ) + ⟨λ ∘ B, x − x_0⟩, ∀ x ∈ dom f_1.   (9)

On the other hand, as A ∈ ∂^s_{Q_2,ε_2} f_2(f_1(x_0)), by Theorem 2.3 it follows that λ ∘ A ∈ ∂_{ε_2τ_{Q_2}(λ)}(λ ∘ f_2)(f_1(x_0)), i.e.,

(λ ∘ f_2)(y) ≥ (λ ∘ f_2)(f_1(x_0)) − ε_2τ_{Q_2}(λ) + ⟨λ ∘ A, y − f_1(x_0)⟩, ∀ y ∈ dom f_2.
In particular, from the inequality above, we have for all x ∈ dom(f_2 ∘ f_1) that

(λ ∘ f_2 ∘ f_1)(x) ≥ (λ ∘ f_2 ∘ f_1)(x_0) − ε_2τ_{Q_2}(λ) + ⟨λ ∘ A, f_1(x) − f_1(x_0)⟩.   (10)

Thus, adding (9) and (10) and taking into account that

ε_1τ_{Q_1}(λ) + ε_2τ_{Q_2}(λ) = τ_{Q_1(ε_1)+Q_2(ε_2)}(λ) ≤ ετ_Q(λ),

we obtain for all x ∈ dom(f_2 ∘ f_1) that

λ ∘ (f_2 ∘ f_1)(x) ≥ λ ∘ (f_2 ∘ f_1)(x_0) − ε_1τ_{Q_1}(λ) − ε_2τ_{Q_2}(λ) + ⟨λ ∘ B, x − x_0⟩
 ≥ λ ∘ (f_2 ∘ f_1)(x_0) − ετ_Q(λ) + ⟨λ ∘ B, x − x_0⟩.

Hence, it follows that λ ∘ B ∈ ∂_{ετ_Q(λ)}(λ ∘ (f_2 ∘ f_1))(x_0), and by Theorem 2.2 we conclude that B ∈ ∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0), as we want to prove. □

Theorem 3.6 Suppose that int K^+ ≠ ∅. Consider ε ≥ 0, Q ∈ F_{Z,K} and x_0 ∈ dom(f_2 ∘ f_1). Assume that (f_2 ∘ f_1)_B − (f_2 ∘ f_1)_B(x_0) is nearly (Q, ε)-subconvexlike on X for all B ∈ L(X, Z), f_1 is D-convex on X, f_2 is K-convex on Y, (D, K)-nondecreasing on Im f_1 + D and regular (Q, ε̄)-subdifferentiable at f_1(x_0), for all ε̄ ≥ 0, and (MRQC2) holds. Then,

∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0) ⊂ ∪_{λ ∈ K^{s+} ∩ Q^{τ+}} ∪_{ε_1,ε_2 ≥ 0, ε_1+ε_2 = ε} ∪_{Q′ ∈ Λ_Q(λ)} ∪_{A ∈ ∂^s_{Q′,ε_2} f_2(f_1(x_0)) ∩ L_D(λ)} Γ_{Q,ε_1}(A ∘ f_1)(x_0)(λ).
Proof Let B ∈ ∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0). By Theorem 2.1, there exists λ ∈ K^{s+} ∩ Q^{τ+} such that

λ ∘ B ∈ ∂_{ετ_Q(λ)}((λ ∘ f_2) ∘ f_1)(x_0).

From the hypotheses, we can apply Theorem 3.2(a) for h = λ ∘ f_2, from which there exist δ_1, δ_2 ≥ 0, with δ_1 + δ_2 = ετ_Q(λ), and y^* ∈ ∂_{δ_2}(λ ∘ f_2)(f_1(x_0)) ∩ D^+ such that λ ∘ B ∈ ∂_{δ_1}(y^* ∘ f_1)(x_0). We analyze the following cases.

If τ_Q(λ) > 0 and δ_2 > 0, then by the regular (Q, δ_2)-subdifferentiability of f_2 at f_1(x_0) and [6, Theorem 5.6(c)], it follows that

∂_{δ_2}(λ ∘ f_2)(f_1(x_0)) = ∪_{Q′ ∈ Λ_Q(λ)} λ ∘ ∂^s_{Q′, δ_2/τ_{Q′}(λ)} f_2(f_1(x_0)),

so there exist Q′ ∈ Λ_Q(λ) and A ∈ ∂^s_{Q′, δ_2/τ_{Q′}(λ)} f_2(f_1(x_0)) such that y^* = λ ∘ A. Thus, A ∈ L_D(λ) and B ∈ Γ_{Q, δ_1/τ_{Q′}(λ)}(A ∘ f_1)(x_0)(λ). By denoting ε_1 := δ_1/τ_{Q′}(λ) and ε_2 := δ_2/τ_{Q′}(λ), we have that ε_1 + ε_2 = ε, A ∈ ∂^s_{Q′,ε_2} f_2(f_1(x_0)) ∩ L_D(λ) and B ∈ Γ_{Q,ε_1}(A ∘ f_1)(x_0)(λ), as we want to prove.

If τ_Q(λ) > 0 and δ_2 = 0, since f_2 is regular (Q, 0)-subdifferentiable at f_1(x_0), then it follows that ∂_{δ_2}(λ ∘ f_2)(f_1(x_0)) = λ ∘ ∂^s f_2(f_1(x_0)). By [6, Proposition 4.3(b)], we deduce that ∂^s f_2(f_1(x_0)) = ∂^s_{Q′,0} f_2(f_1(x_0)), where Q′ = q + K, with q ∈ K such that ⟨λ, q⟩ = τ_Q(λ). Clearly, Q′ ∈ Λ_Q(λ), and reasoning in the same way as above, we have that A ∈ ∂^s_{Q′,ε_2} f_2(f_1(x_0)) ∩ L_D(λ) and B ∈ Γ_{Q,ε_1}(A ∘ f_1)(x_0)(λ), with ε_1 := ε and ε_2 := 0.

Finally, if τ_Q(λ) = 0, then δ_1 = δ_2 = 0 and, again by the regular (Q, 0)-subdifferentiability of f_2 at f_1(x_0), we see that A ∈ ∂^s_{K,ε_2} f_2(f_1(x_0)) ∩ L_D(λ) and B ∈ Γ_{Q,ε_1}(A ∘ f_1)(x_0)(λ), with ε_2 := 0 and ε_1 := ε. □
As a direct consequence of Theorems 3.5 and 3.6, we obtain the following chain rule.

Corollary 3.1 With the same hypotheses as Theorem 3.6, we have that

∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0) = ∪_{λ ∈ K^{s+} ∩ Q^{τ+}} ∪_{ε_1,ε_2 ≥ 0, ε_1+ε_2 = ε} ∪_{Q′ ∈ Λ_Q(λ)} ∪_{A ∈ ∂^s_{Q′,ε_2} f_2(f_1(x_0)) ∩ L_D(λ)} Γ_{Q,ε_1}(A ∘ f_1)(x_0)(λ).

From Theorem 3.3 and Corollary 3.1, we obtain the following result, in which we replace the assumption of nearly (Q, ε)-subconvexlikeness on X of the mapping (f_2 ∘ f_1)_B − (f_2 ∘ f_1)_B(x_0), for all B ∈ L(X, Z), by the hypotheses that Q is convex and Q + K = Q, which are easier to check.

Theorem 3.7 Suppose that int K^+ ≠ ∅. Let x_0 ∈ dom(f_2 ∘ f_1) and suppose that f_1 is D-convex on X, f_2 is K-convex on Y and (D, K)-nondecreasing on Im f_1 + D, and (MRQC2) holds.

(a) Let ε ≥ 0 and let Q ∈ F_{Z,K} be convex and such that Q + K = Q. If f_2 is regular (Q, ε̄)-subdifferentiable at f_1(x_0), for all ε̄ ≥ 0, then

∂^{Be}_{Q,ε}(f_2 ∘ f_1)(x_0) = ∪_{λ ∈ K^{s+} ∩ Q^{τ+}} ∪_{ε_1,ε_2 ≥ 0, ε_1+ε_2 = ε} ∪_{Q′ ∈ Λ_Q(λ)} ∪_{A ∈ ∂^s_{Q′,ε_2} f_2(f_1(x_0)) ∩ L_D(λ)} Γ_{Q,ε_1}(A ∘ f_1)(x_0)(λ).

(b) If f_2 is p-regular subdifferentiable at f_1(x_0), then

∂^{Be}(f_2 ∘ f_1)(x_0) = ∪_{λ ∈ K^{s+}} ∪_{A ∈ ∂^s f_2(f_1(x_0)) ∩ L_D(λ)} Γ(A ∘ f_1)(x_0)(λ).
In the following result, we obtain formulae for computing the Benson $(Q,\varepsilon)$-proper subdifferential of the mapping $f_2\circ T$, for $T\in L(X,Y)$.

For $\lambda\in K^{s+}$, we define the set $\mathcal{K}(\lambda)\subset L(X,Z)$ as $\mathcal{K}(\lambda):=\{B\in L(X,Z)\colon\lambda\circ B=0\}$.

Theorem 3.8 Suppose that $\operatorname{int}K^+\neq\emptyset$. Let $\varepsilon\ge 0$, $Q\in\mathcal{F}_{Z,K}$, $T\in L(X,Y)$ and $x_0\in\operatorname{dom}(f_2\circ T)$. Suppose that $(f_2\circ T)_B-(f_2\circ T)_B(x_0)$ is nearly $(Q,\varepsilon)$-subconvexlike on $X$ for all $B\in L(X,Z)$, $f_2$ is $K$-convex on $Y$ and (MRQC2) holds for $f_1=T$.

(a) If $f_2$ is regular $(Q,\bar\varepsilon)$-subdifferentiable at $T(x_0)$, for all $\bar\varepsilon\ge 0$, then

$$\partial^{Be}_{Q,\varepsilon}(f_2\circ T)(x_0)=\bigcup_{\lambda\in K^{s+}\cap Q^{\tau+}}\ \bigcup_{Q'\in\Lambda_Q(\lambda)}\Big(\partial^{s}_{Q',\varepsilon} f_2(T(x_0))\circ T+\mathcal{K}(\lambda)\Big).$$

(b) If $f_2$ is p-regular $\bar\varepsilon$-subdifferentiable at $T(x_0)$, for all $\bar\varepsilon\ge 0$, then

$$\partial^{Be}_{Q,\varepsilon}(f_2\circ T)(x_0)=\bigcup_{\lambda\in K^{s+}\cap Q^{\tau+}}\ \bigcup_{\substack{q\in K\\ \langle\lambda,q\rangle=\varepsilon\tau_Q(\lambda)}}\Big(\partial^{s}_{\{q\},1} f_2(T(x_0))\circ T+\mathcal{K}(\lambda)\Big).$$
Proof (a) Inclusion "⊃": Let $\lambda\in K^{s+}\cap Q^{\tau+}$, $Q'\in\Lambda_Q(\lambda)$, $A\in\partial^{s}_{Q',\varepsilon} f_2(T(x_0))$, $\bar B\in\mathcal{K}(\lambda)$ and let $B=(A\circ T)+\bar B$. Then, it follows that $\lambda\circ(B-(A\circ T))=0$. On the other hand, since $\lambda\circ A\circ T\in X^*$, it follows for all $\varepsilon\ge 0$ that

$$\Gamma_{Q,\varepsilon}(A\circ T)(x_0)(\lambda)=\{\widetilde T\in L(X,Z)\colon\lambda\circ\widetilde T\in\partial_{\varepsilon\tau_Q(\lambda)}(\lambda\circ A\circ T)(x_0)\}=\{\widetilde T\in L(X,Z)\colon\lambda\circ\widetilde T=\lambda\circ A\circ T\}=\{\widetilde T\in L(X,Z)\colon\lambda\circ(\widetilde T-(A\circ T))=0\}.\tag{11}$$

Thus, we have in particular that $B\in\Gamma_{Q,0}(A\circ T)(x_0)(\lambda)$, and applying Theorem 3.5, we deduce that $B\in\partial^{Be}_{Q,\varepsilon}(f_2\circ T)(x_0)$.

Inclusion "⊂": Let $B\in\partial^{Be}_{Q,\varepsilon}(f_2\circ T)(x_0)$. By Theorems 2.1 and 3.2(b), there exists $\lambda\in K^{s+}\cap Q^{\tau+}$ such that $\lambda\circ B\in\partial_{\varepsilon\tau_Q(\lambda)}((\lambda\circ f_2)\circ T)(x_0)=T^*\partial_{\varepsilon\tau_Q(\lambda)}(\lambda\circ f_2)(T(x_0))$. Hence, there exists $y^*\in\partial_{\varepsilon\tau_Q(\lambda)}(\lambda\circ f_2)(T(x_0))$ such that $\lambda\circ B=y^*\circ T$.

If $\tau_Q(\lambda)>0$ and $\varepsilon>0$, then by the regular $(Q,\varepsilon\tau_Q(\lambda))$-subdifferentiability of $f_2$ at $T(x_0)$ and [6, Theorem 5.6(c)], it follows that

$$\partial_{\varepsilon\tau_Q(\lambda)}(\lambda\circ f_2)(T(x_0))=\bigcup_{Q'\in\Lambda_Q(\lambda)}\lambda\circ\partial^{s}_{Q',\varepsilon} f_2(T(x_0)),$$

so there exist $Q'\in\Lambda_Q(\lambda)$ and $A\in\partial^{s}_{Q',\varepsilon} f_2(T(x_0))$ such that $y^*=\lambda\circ A$. Therefore, $\lambda\circ B=\lambda\circ A\circ T$, which implies that $B-(A\circ T)\in\mathcal{K}(\lambda)$ and then

$$B\in(A\circ T)+\mathcal{K}(\lambda)\subset\partial^{s}_{Q',\varepsilon} f_2(T(x_0))\circ T+\mathcal{K}(\lambda).$$

If $\tau_Q(\lambda)=0$ or $\varepsilon=0$, since $f_2$ is regular $(Q,0)$-subdifferentiable at $T(x_0)$, we have that $\partial(\lambda\circ f_2)(T(x_0))=\lambda\circ\partial^{s}_{Q',0} f_2(T(x_0))$, where $Q'=q+K$, $q\in K$ and $\langle\lambda,q\rangle=\tau_Q(\lambda)$, and following an analogous reasoning as above, we obtain the result.

(b) Inclusion "⊃": Let $\lambda\in K^{s+}\cap Q^{\tau+}$, $q\in K$ such that $\langle\lambda,q\rangle=\varepsilon\tau_Q(\lambda)$, $A\in\partial^{s}_{\{q\},1} f_2(T(x_0))$, $\bar B\in\mathcal{K}(\lambda)$ and let $B=A\circ T+\bar B$. Consider the set $\hat Q:=Q(\varepsilon)-\{q\}$. Since $\tau_{\hat Q}(\lambda)=\varepsilon\tau_Q(\lambda)-\langle\lambda,q\rangle=0$, it follows that $\lambda\in\hat Q^{\tau+}$. Thus, $\hat Q\in\mathcal{F}_{Z,K}$, and by (11) and Theorem 3.5, we deduce that

$$B\in\Gamma_{\hat Q,1}(A\circ T)(x_0)(\lambda)\subset\partial^{Be}_{Q,\varepsilon}(f_2\circ T)(x_0).$$

Inclusion "⊂" follows by reasoning in a similar way as in part (a), but taking into account the p-regular $\varepsilon\tau_Q(\lambda)$-subdifferentiability of $f_2$ at $T(x_0)$.
In the particular case when $Y=\mathbb{R}^n$, $D=\mathbb{R}^n_+$, $Z=\mathbb{R}^p$ and $K=\mathbb{R}^p_+$, we obtain the following result, which facilitates the computation of $\partial^{Be}_{Q,\varepsilon}(f_2\circ T)(x_0)$.

Theorem 3.9 Let $\varepsilon\ge 0$, $Q\in\mathcal{F}_{\mathbb{R}^p,\mathbb{R}^p_+}$, $T\in L(X,\mathbb{R}^n)$ and $x_0\in\operatorname{dom}(f_2\circ T)$. Suppose that $(f_2\circ T)_B-(f_2\circ T)_B(x_0)$ is nearly $(Q,\varepsilon)$-subconvexlike on $X$ for all $B\in L(X,\mathbb{R}^p)$, and that $f_2$ is $\mathbb{R}^p_+$-convex on $\mathbb{R}^n$ and p-regular $\bar\varepsilon$-subdifferentiable at $T(x_0)$, for all $\bar\varepsilon\ge 0$. Then,

$$\partial^{Be}_{Q,\varepsilon}(f_2\circ T)(x_0)=\bigcup_{\lambda\in\operatorname{int}\mathbb{R}^p_+\cap Q^{\tau+}}\ \bigcup_{\substack{q\in\mathbb{R}^p_+\\ \langle\lambda,q\rangle=\varepsilon\tau_Q(\lambda)}}\left\{\prod_{j=1}^{p}T^*\partial_{q_j}(f_2)_j(T(x_0))+\mathcal{K}(\lambda)\right\}.$$

Proof The proof follows directly from Theorem 3.8 and [6, Theorem 4.6].
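Each factor $\partial_{q_j}(f_2)_j(T(x_0))$ in the formula above is a classical scalar $\varepsilon$-subdifferential in the sense of Brøndsted–Rockafellar [16]. As an illustrative aside (ours, not part of the paper): for the convex function $f(x)=x^2$ on $\mathbb{R}$ one has $\partial_\varepsilon f(x_0)=[2x_0-2\sqrt{\varepsilon},\,2x_0+2\sqrt{\varepsilon}]$, and the defining inequality $f(x)\ge f(x_0)+s(x-x_0)-\varepsilon$ can be checked numerically on a grid:

```python
import math

def in_eps_subdiff(s, x0, eps, f=lambda x: x * x, xs=None):
    """Check numerically whether s lies in the eps-subdifferential of f at x0,
    i.e. f(x) >= f(x0) + s*(x - x0) - eps for all sampled x."""
    if xs is None:
        xs = [x / 100.0 for x in range(-1000, 1001)]  # grid on [-10, 10]
    return all(f(x) >= f(x0) + s * (x - x0) - eps - 1e-12 for x in xs)

# For f(x) = x^2, the eps-subdifferential at x0 is [2*x0 - 2*sqrt(eps), 2*x0 + 2*sqrt(eps)].
x0, eps = 1.0, 0.25
lo, hi = 2 * x0 - 2 * math.sqrt(eps), 2 * x0 + 2 * math.sqrt(eps)  # interval [1.0, 3.0]
assert in_eps_subdiff(lo, x0, eps) and in_eps_subdiff(hi, x0, eps)
assert in_eps_subdiff(2 * x0, x0, eps)        # the exact gradient always belongs
assert not in_eps_subdiff(hi + 0.2, x0, eps)  # slopes outside the interval fail
```

The grid check is only a sanity test of the closed-form interval; the exact membership test would quantify over all of $\mathbb{R}$.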
In the following result, we state chain rules for the Benson $(Q,\varepsilon)$-proper subdifferential of the map $T\circ f_1$, where in this case $T\in L_+(Y,Z)$. Let us denote $L^0_+(Y,Z):=\{T\in L_+(Y,Z)\colon T(D\setminus\{0\})\subset K\setminus\{0\}\}$.

Theorem 3.10 Suppose that $\operatorname{int}K^+\neq\emptyset$. Let $\varepsilon\ge 0$, $Q\in\mathcal{F}_{Z,K}$, $T\in L^0_+(Y,Z)$ and $x_0\in\operatorname{dom}f_1$. Suppose that $(T\circ f_1)_B-(T\circ f_1)_B(x_0)$ is nearly $(Q,\varepsilon)$-subconvexlike on $X$, for all $B\in L(X,Z)$.

(a) If $f_1$ is regular $(C,\bar\varepsilon)$-subdifferentiable at $x_0$, for a nonempty set $C\subset D$ such that $D^{s+}\subset C^{s\tau+}$, and for all $\bar\varepsilon\ge 0$, then

$$\partial^{Be}_{Q,\varepsilon}(T\circ f_1)(x_0)=\bigcup_{\lambda\in K^{s+}\cap Q^{\tau+}}\ \bigcup_{\substack{C'\in\Lambda_D\\ \tau_{C'}(\lambda\circ T)=1}}\Big(T\circ\partial^{s}_{C',\varepsilon\tau_Q(\lambda)} f_1(x_0)+\mathcal{K}(\lambda)\Big).\tag{12}$$

(b) If $f_1$ is p-regular $\bar\varepsilon$-subdifferentiable at $x_0$, for all $\bar\varepsilon\ge 0$, it follows that

$$\partial^{Be}_{Q,\varepsilon}(T\circ f_1)(x_0)=\bigcup_{\lambda\in K^{s+}\cap Q^{\tau+}}\ \bigcup_{\substack{q\in D\\ \langle\lambda\circ T,q\rangle=\varepsilon\tau_Q(\lambda)}}\Big(T\circ\partial^{s}_{\{q\},1} f_1(x_0)+\mathcal{K}(\lambda)\Big).\tag{13}$$

Proof (a) Inclusion "⊃": Let $\lambda\in K^{s+}\cap Q^{\tau+}$, $C'\in\Lambda_D$ with $\tau_{C'}(\lambda\circ T)=1$, $A\in\partial^{s}_{C',\varepsilon\tau_Q(\lambda)} f_1(x_0)$, $\bar B\in\mathcal{K}(\lambda)$ and let $B=T\circ A+\bar B$. Then, it follows that $\lambda\circ B=(\lambda\circ T)\circ A$. Since $\lambda\circ T\in D^+\setminus\{0\}$, by Theorem 2.3 we have that

$$\lambda\circ B=(\lambda\circ T)\circ A\in\partial_{\varepsilon\tau_Q(\lambda)\cdot\tau_{C'}(\lambda\circ T)}(\lambda\circ T\circ f_1)(x_0)=\partial_{\varepsilon\tau_Q(\lambda)}(\lambda\circ(T\circ f_1))(x_0),$$

and by Theorem 2.2, we deduce that $B\in\partial^{Be}_{Q,\varepsilon}(T\circ f_1)(x_0)$.

Inclusion "⊂": Let $B\in\partial^{Be}_{Q,\varepsilon}(T\circ f_1)(x_0)$. By Theorem 2.1, there exists $\lambda\in K^{s+}\cap Q^{\tau+}$ such that $\lambda\circ B\in\partial_{\varepsilon\tau_Q(\lambda)}((\lambda\circ T)\circ f_1)(x_0)$. Since $\lambda\in K^{s+}$ and $T\in L^0_+(Y,Z)$, it follows that $\lambda\circ T\in D^{s+}$. Thus, $\lambda\circ T\in D^{s+}\cap C^{s\tau+}$. If $\varepsilon\tau_Q(\lambda)>0$, then by the regular $(C,\varepsilon\tau_Q(\lambda))$-subdifferentiability of $f_1$ at $x_0$ and [6, Theorem 5.6(b)], there exist $C'\in\Lambda_D$ satisfying $\tau_{C'}(\lambda\circ T)=1$ and $A\in\partial^{s}_{C',\varepsilon\tau_Q(\lambda)} f_1(x_0)$ such that $\lambda\circ B=\lambda\circ T\circ A$, which implies that $B-T\circ A\in\mathcal{K}(\lambda)$. Therefore, $B\in T\circ\partial^{s}_{C',\varepsilon\tau_Q(\lambda)} f_1(x_0)+\mathcal{K}(\lambda)$, and (12) is proved. On the other hand, if $\varepsilon\tau_Q(\lambda)=0$, then, applying a similar reasoning as above and taking into account that $f_1$ is regular $(C,0)$-subdifferentiable at $x_0$, we have that $B\in T\circ\partial^{s} f_1(x_0)+\mathcal{K}(\lambda)$. Thus, considering $q\in D\setminus\{0\}$ such that $\langle\lambda\circ T,q\rangle=1$ and $C':=q+D\in\Lambda_D$, by [6, Proposition 4.3(b)] it follows that $\partial^{s} f_1(x_0)=\partial^{s}_{C',0} f_1(x_0)$, and part (a) is proved.

(b) It follows by reasoning as in part (a), but considering the assumption of p-regular $\varepsilon\tau_Q(\lambda)$-subdifferentiability of $f_1$ at $x_0$ instead of the regular $(C,\varepsilon\tau_Q(\lambda))$-subdifferentiability of $f_1$ at $x_0$.
Remark 3.3 (a) If $C\subset D$, then the assumption $D^{s+}\subset C^{s\tau+}$ is satisfied if $0\notin\operatorname{cl}C$ and one of conditions (a)–(e) of [6, Remark 5.5] holds. In this sense, let us note that condition (b) of [6, Remark 5.5] is not correct. However, it becomes true by adding the assumption $0\in\operatorname{cl}Q$.
(b) If we consider $Q=\bar q+K$, for $\bar q\in Z\setminus(-K\setminus\{0\})$ and $\varepsilon=1$, then equality (13) reduces to

$$\partial^{Be}_{Q,1}(T\circ f_1)(x_0)=\bigcup_{\substack{q\in D,\ \lambda\in K^{s+}\\ \langle\lambda,\bar q\rangle\ge 0\\ \langle\lambda\circ T,q\rangle=\langle\lambda,\bar q\rangle}}\big\{T\circ\partial^{s}_{\{q\},1} f_1(x_0)+\mathcal{K}(\lambda)\big\},\tag{14}$$

which does not coincide with [9, Proposition 3.2]. In the following example, we show that [9, Proposition 3.2] is not correct.

Example 3.1 Consider $X=\mathbb{R}$, $Y=Z=\mathbb{R}^2$, $D=K=\mathbb{R}^2_+$, and define the mapping $f\colon\mathbb{R}\to\mathbb{R}^2$ by $f(x):=(0,x)$, for all $x\in\mathbb{R}$. Let $T\colon\mathbb{R}^2\to\mathbb{R}^2$ be the identity mapping on $\mathbb{R}^2$, $\bar q=(1,-1)$ and $x_0=0$. Clearly, $f$ is $\mathbb{R}^2_+$-convex on $\mathbb{R}$ and then, in particular, $(T\circ f)_B-(T\circ f)_B(0)$ is nearly $(\bar q+K,1)$-subconvexlike on $\mathbb{R}$ for all $B\in L(\mathbb{R},\mathbb{R}^2)$ (see Theorem 3.3). Moreover, $f$ is p-regular $\varepsilon$-subdifferentiable at $0$, for all $\varepsilon\ge 0$. By [9, Proposition 3.2], we have that

$$\partial^{Be}_{\bar q+K,1} f(0)=\bigcup_{\substack{q\in\mathbb{R}^2_+\\ \bar q\notin q+\mathbb{R}^2_+\setminus\{(0,0)\}\\ q\notin\bar q+\mathbb{R}^2_+\setminus\{(0,0)\}}}\partial^{s}_{\{q\},1} f(0)+Z_p(\mathbb{R},\mathbb{R}^2),\tag{15}$$

where $Z_p(\mathbb{R},\mathbb{R}^2)=\{A\in L(\mathbb{R},\mathbb{R}^2)\colon\exists\lambda\in\operatorname{int}\mathbb{R}^2_+\colon\lambda\circ A=0\}$.

Applying [6, Theorem 4.6], it follows that $\partial^{s}_{\{q\},1} f(0)=\{(0,1)\}$, for all $q\in\mathbb{R}^2_+$, and it is easy to check that $Z_p(\mathbb{R},\mathbb{R}^2)=\{(a_1,a_2)\in\mathbb{R}^2\colon a_1a_2<0\}\cup\{(0,0)\}$. Thus, by formula (15), we have that

$$\partial^{Be}_{\bar q+K,1} f(0)=\{(a_1,a_2)\in\mathbb{R}^2\colon a_1(a_2-1)<0\}\cup\{(0,1)\}.$$

Next, we derive $\partial^{Be}_{\bar q+K,1} f(0)$ by applying formula (14). It follows that

$$\mathcal{K}(\lambda)=\{(a_1,a_2)\in\mathbb{R}^2\colon\lambda_1a_1+\lambda_2a_2=0\},\quad\forall\lambda\in\operatorname{int}\mathbb{R}^2_+,$$

and therefore, by (14), we have that

$$\partial^{Be}_{\bar q+K,1} f(0)=\bigcup_{\substack{\lambda_1,\lambda_2>0,\ \lambda_1\ge\lambda_2\\ q\in\mathbb{R}^2_+,\ \langle\lambda\circ T,q\rangle=\langle\lambda,\bar q\rangle}}\{(a_1,a_2)\in\mathbb{R}^2\colon\lambda_1a_1+\lambda_2(a_2-1)=0\}$$
$$=\bigcup_{\substack{\lambda_1,\lambda_2>0\\ \lambda_1\ge\lambda_2}}\{(a_1,a_2)\in\mathbb{R}^2\colon\lambda_1a_1+\lambda_2(a_2-1)=0\}$$
$$=\{(a_1,a_2)\in\mathbb{R}^2\colon a_1<0,\ a_1+a_2\ge 1\}\cup\{(a_1,a_2)\in\mathbb{R}^2\colon a_1>0,\ a_1+a_2\le 1\}\cup\{(0,1)\}.\tag{16}$$

Moreover, if we derive $\partial^{Be}_{\bar q+K,1} f(0)$ by applying Corollary 2.1, then we obtain the same result as in (16), which shows that [9, Proposition 3.2] is not correct.
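The discrepancy in Example 3.1 can also be checked pointwise. The sketch below (ours, not part of the paper) encodes the two explicit set descriptions, the set produced by [9, Proposition 3.2] via (15) and the set in (16), and exhibits a point, $(-1, 1.5)$, belonging to the former but not to the latter:

```python
def in_set_15(a1, a2):
    """Set computed from [9, Proposition 3.2] via formula (15):
    {(a1, a2): a1*(a2 - 1) < 0} ∪ {(0, 1)}."""
    return a1 * (a2 - 1) < 0 or (a1, a2) == (0, 1)

def in_set_16(a1, a2):
    """Set computed from formula (14), i.e. equality (16):
    {a1 < 0, a1 + a2 >= 1} ∪ {a1 > 0, a1 + a2 <= 1} ∪ {(0, 1)}."""
    return ((a1 < 0 and a1 + a2 >= 1) or (a1 > 0 and a1 + a2 <= 1)
            or (a1, a2) == (0, 1))

# (-1, 1.5) lies in the set predicted by [9, Proposition 3.2] ...
assert in_set_15(-1, 1.5)
# ... but not in the set (16), so the two sets indeed differ.
assert not in_set_16(-1, 1.5)
# Sanity check with a point belonging to both descriptions.
assert in_set_15(-1, 2) and in_set_16(-1, 2)
```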
In the particular case when $Y=\mathbb{R}^n$, $D=\mathbb{R}^n_+$, $Z=\mathbb{R}^p$ and $K=\mathbb{R}^p_+$, we obtain the following result, which is an immediate consequence of Theorem 3.10(b) and [6, Theorem 4.6].

Theorem 3.11 Let $\varepsilon\ge 0$, $Q\in\mathcal{F}_{\mathbb{R}^p,\mathbb{R}^p_+}$, $T\in L^0_+(\mathbb{R}^n,\mathbb{R}^p)$ and $x_0\in\operatorname{dom}f_1$. Suppose that $(T\circ f_1)_B-(T\circ f_1)_B(x_0)$ is nearly $(Q,\varepsilon)$-subconvexlike on $X$, for all $B\in L(X,\mathbb{R}^p)$, and $f_1$ is p-regular $\bar\varepsilon$-subdifferentiable at $x_0$, for all $\bar\varepsilon\ge 0$. Then,

$$\partial^{Be}_{Q,\varepsilon}(T\circ f_1)(x_0)=\bigcup_{\lambda\in\operatorname{int}\mathbb{R}^p_+\cap Q^{\tau+}}\ \bigcup_{\substack{q\in\mathbb{R}^n_+\\ \langle\lambda\circ T,q\rangle=\varepsilon\tau_Q(\lambda)}}\left\{T\circ\left(\prod_{j=1}^{n}\partial_{q_j}(f_1)_j(x_0)\right)+\mathcal{K}(\lambda)\right\}.$$
4 Application to Pareto Problems

In this section, we apply the calculus rule (13) to derive a characterization of Benson $(C,\varepsilon)$-proper solutions of implicitly constrained convex Pareto problems. Moreover, we obtain an estimate of the distance between the objective value of a Benson $(C,\varepsilon)$-proper solution of the problem and the efficient Pareto boundary. Thus, the characterization proved in this section could be useful for deriving multiple criteria decision-making techniques that provide feasible points whose objective values are close to being nondominated.

In the sequel, we consider the Pareto problem $(P_S)$. Moreover, we assume that $S$ is convex and closed and that $f_i$ is convex, for all $i\in\{1,2,\ldots,p\}$. For each $\varepsilon\ge 0$, we denote the $\varepsilon$-normal set of $S$ at a point $x_0\in S$ by $N_\varepsilon(S,x_0)$, i.e.,

$$N_\varepsilon(S,x_0):=\{v\in\mathbb{R}^n\colon\langle v,x-x_0\rangle\le\varepsilon,\ \forall x\in S\}.$$

On the other hand, we denote the set of all real matrices with $r$ rows and $s$ columns by $M_{r\times s}$, the zero element of $\mathbb{R}^p$ by $0_p$ and the transpose of $y\in\mathbb{R}^p$ by $y^t$. We assume that the elements of $\mathbb{R}^p$ are row vectors, and we consider the extended mapping $\bar f_S\colon\mathbb{R}^n\to\overline{\mathbb{R}}^{2p}$, given by $\bar f_S(x):=(f(x),0_p)$ for all $x\in S$ and $\bar f_S(x):=+\infty_{\mathbb{R}^{2p}}$ otherwise.

The characterization of Benson $(C,\varepsilon)$-proper solutions of convex Pareto problems is a consequence of the following three lemmas.
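Before the lemmas, a quick illustration of the $\varepsilon$-normal set may help (our toy example, not from the paper): for $S=[0,+\infty)\subset\mathbb{R}$ and $x_0=1$, the condition $v(x-1)\le\varepsilon$ for all $x\ge 0$ forces $v\le 0$ (letting $x\to\infty$) and $v\ge-\varepsilon$ (taking $x=0$), so $N_\varepsilon(S,1)=[-\varepsilon,0]$. A sampled membership test:

```python
def in_eps_normal(v, x0, eps, sample):
    """Check v ∈ N_eps(S, x0) over a finite sample of S:
    <v, x - x0> <= eps for every sampled x."""
    return all(v * (x - x0) <= eps + 1e-12 for x in sample)

# S = [0, +inf) sampled on a grid; for x0 = 1, eps = 0.5 we expect N_eps(S, 1) = [-0.5, 0].
S = [x / 10.0 for x in range(0, 1001)]  # grid on [0, 100]
x0, eps = 1.0, 0.5
assert in_eps_normal(-0.5, x0, eps, S)      # left endpoint belongs
assert in_eps_normal(0.0, x0, eps, S)       # right endpoint belongs
assert not in_eps_normal(-0.6, x0, eps, S)  # v < -eps fails at x = 0
assert not in_eps_normal(0.1, x0, eps, S)   # v > 0 fails for large x
```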
Lemma 4.1 Let $x_0\in S$, $\varepsilon\ge 0$ and $\mu=(\mu_1,\mu_2)\in\mathbb{R}^p_+\times\mathbb{R}^p_+$. Then,

$$\partial_\varepsilon\langle\mu,\bar f_S\rangle(x_0)=\bigcup_{\substack{\varepsilon_0,\varepsilon_1,\ldots,\varepsilon_p\ge 0\\ \varepsilon_0+\varepsilon_1+\cdots+\varepsilon_p=\varepsilon}}\left\{\sum_{i=1}^{p}\partial_{\varepsilon_i}((\mu_1)_i f_i)(x_0)+N_{\varepsilon_0}(S,x_0)\right\}.$$

Proof Clearly,

$$\langle\mu,\bar f_S\rangle(x)=\sum_{i=1}^{p}(\mu_1)_i f_i(x)+I^1_S(x),\quad\forall x\in\mathbb{R}^n,$$

where $I^1_S\colon\mathbb{R}^n\to\overline{\mathbb{R}}$ is the scalar indicator function of the set $S$. Then, the result follows by applying [20, Theorem 2.8.7].

In the sequel, we denote by $\left(\begin{smallmatrix}M_1\\ M_2\end{smallmatrix}\right)$ the row-wise block matrix with $2p$ rows and $n$ columns defined by $M_1,M_2\in M_{p\times n}$. Moreover, we assume that the order cone of $\mathbb{R}^{2p}$ is $\mathbb{R}^{2p}_+$.

Lemma 4.2 Let $x_0\in S$, $q=(q_1,q_2)\in\mathbb{R}^p_+\times\mathbb{R}^p_+$ and $M=\left(\begin{smallmatrix}M_1\\ M_2\end{smallmatrix}\right)\in M_{2p\times n}$. It follows that $M\in\partial^{s}_{\{q\},1}\bar f_S(x_0)$ if and only if the following condition is satisfied: for all $(\mu_1,\mu_2)\in(\mathbb{R}^p_+\times\mathbb{R}^p_+)\setminus\{(0_p,0_p)\}$ there exist $\varepsilon_0,\varepsilon_1,\ldots,\varepsilon_p\ge 0$ such that

$$\sum_{i=0}^{p}\varepsilon_i=\langle\mu_1,q_1\rangle+\langle\mu_2,q_2\rangle,\qquad \mu_1M_1+\mu_2M_2\in\sum_{i=1}^{p}\partial_{\varepsilon_i}((\mu_1)_i f_i)(x_0)+N_{\varepsilon_0}(S,x_0).\tag{17}$$

Proof By Theorem 2.3, we have that $M\in\partial^{s}_{\{q\},1}\bar f_S(x_0)$ if and only if

$$\mu_1M_1+\mu_2M_2\in\partial_{\langle\mu,q\rangle}\langle\mu,\bar f_S\rangle(x_0),\quad\forall\mu=(\mu_1,\mu_2)\in(\mathbb{R}^p_+\times\mathbb{R}^p_+)\setminus\{(0_p,0_p)\}.$$

Then, the result follows by applying Lemma 4.1 with $\varepsilon=\langle\mu_1,q_1\rangle+\langle\mu_2,q_2\rangle$.

Lemma 4.3 Let $x_0\in S$. The function $\bar f_S$ is p-regular $\varepsilon$-subdifferentiable at $x_0$, for all $\varepsilon\ge 0$.

Proof Let $\mu=(\mu_1,\mu_2)\in\operatorname{int}\mathbb{R}^p_+\times\operatorname{int}\mathbb{R}^p_+$, $\varepsilon\ge 0$ and $\lambda\in\partial_\varepsilon\langle\mu,\bar f_S\rangle(x_0)$. Clearly, $\langle\mu,\bar f_S\rangle=\langle\mu_1,f\rangle+I^1_S$. Then, by applying [20, Theorem 2.8.7], we deduce that there exist $\varepsilon_1,\varepsilon_2\ge 0$ with $\varepsilon_1+\varepsilon_2=\varepsilon$, $\lambda_1\in\partial_{\varepsilon_1}\langle\mu_1,f\rangle(x_0)$ and $\lambda_2\in N_{\varepsilon_2}(S,x_0)$ such that $\lambda=\lambda_1+\lambda_2$.
By [7, Remark 3.3], we see that $f$ is p-regular $\varepsilon_1$-subdifferentiable at $x_0$, and so there exist $q_1\in\mathbb{R}^p_+$ with $\langle\mu_1,q_1\rangle=\varepsilon_1$ and $M_1\in\partial^{s}_{\{q_1\},1} f(x_0)$ such that $\lambda_1=\mu_1M_1$. On the other hand, by [7, Lemma 5.3], there exist $q_2\in\mathbb{R}^p_+$ with $\langle\mu_2,q_2\rangle=\varepsilon_2$ and $M_2\in M_{p\times n}$ such that $\lambda_2=\mu_2M_2$ and

$$M_2(x-x_0)^t\le_{\mathbb{R}^p_+}q_2,\quad\forall x\in S.\tag{18}$$

Consider $M=\left(\begin{smallmatrix}M_1\\ M_2\end{smallmatrix}\right)\in M_{2p\times n}$ and $q=(q_1,q_2)\in\mathbb{R}^{2p}_+$. Clearly, $\lambda=\mu M$ and $\langle\mu,q\rangle=\varepsilon$. Moreover, $M\in\partial^{s}_{\{q\},1}\bar f_S(x_0)$. Indeed, consider an arbitrary vector $(v_1,v_2)\in(\mathbb{R}^p_+\times\mathbb{R}^p_+)\setminus\{(0_p,0_p)\}$ and let us prove that (17) is satisfied. As $M_1\in\partial^{s}_{\{q_1\},1} f(x_0)$, by Theorem 2.3 and [20, Theorem 2.8.7] there exist $\delta_1,\ldots,\delta_p\ge 0$ such that $\delta_1+\delta_2+\cdots+\delta_p=\langle v_1,q_1\rangle$ and

$$v_1M_1\in\sum_{i=1}^{p}\partial_{\delta_i}((v_1)_i f_i)(x_0).$$

By (18), we have that $v_2M_2\in N_{\delta_0}(S,x_0)$, with $\delta_0=\langle v_2,q_2\rangle$. Therefore,

$$\sum_{i=0}^{p}\delta_i=\langle v_1,q_1\rangle+\langle v_2,q_2\rangle,\qquad v_1M_1+v_2M_2\in\sum_{i=1}^{p}\partial_{\delta_i}((v_1)_i f_i)(x_0)+N_{\delta_0}(S,x_0),$$

and the proof finishes.
Theorem 4.1 Consider $\varepsilon\ge 0$ and let $C\subset\mathbb{R}^p_+$ be a convex set such that $C+\mathbb{R}^p_+=C$ and $\operatorname{cone}C=\mathbb{R}^p_+$. A point $x_0\in S$ is a Benson $(C,\varepsilon)$-proper solution of problem $(P_S)$ if and only if there exist $\lambda\in\operatorname{int}\mathbb{R}^p_+$, $q_1,q_2\in\mathbb{R}^p_+$ and matrices $M_1,M_2\in M_{p\times n}$ such that $\langle\lambda,q_1+q_2\rangle=\varepsilon\tau_C(\lambda)$, $\lambda(M_1+M_2)=0_n$ and condition (17) is satisfied.

Proof Define $T\colon\mathbb{R}^p\times\mathbb{R}^p\to\mathbb{R}^p$ by $T(z_1,z_2):=z_1+z_2$, for all $z_1,z_2\in\mathbb{R}^p$. Clearly, $\operatorname{Be}(f,S,C,\varepsilon)=\operatorname{Be}(T\circ\bar f_S,X,C,\varepsilon)$. Then, $x_0\in S$ is a Benson $(C,\varepsilon)$-proper solution of problem $(P_S)$ if and only if $0\in\partial^{Be}_{C,\varepsilon}(T\circ\bar f_S)(x_0)$.

It is not hard to check that the assumptions of Theorem 3.10 are satisfied by considering $X=\mathbb{R}^n$, $Y=\mathbb{R}^{2p}$, $Z=\mathbb{R}^p$, $D=\mathbb{R}^{2p}_+$, $K=\mathbb{R}^p_+$, $Q=C$ and $f_1=\bar f_S$. In particular, the nearly $(Q,\varepsilon)$-subconvexlikeness of the mapping $(T\circ f_1)_B-(T\circ f_1)_B(x_0)$ on $X$ for all $B\in L(X,Z)$ is a consequence of Theorem 3.3, and the p-regular $\bar\varepsilon$-subdifferentiability of $\bar f_S$ at $x_0$ for all $\bar\varepsilon\ge 0$ was stated in Lemma 4.3. Then, $0\in\partial^{Be}_{C,\varepsilon}(T\circ\bar f_S)(x_0)$ if and only if there exist $\lambda\in\operatorname{int}\mathbb{R}^p_+$, $q_1,q_2\in\mathbb{R}^p_+$ with $\langle\lambda,q_1+q_2\rangle=\varepsilon\tau_C(\lambda)$ and $\left(\begin{smallmatrix}M_1\\ M_2\end{smallmatrix}\right)\in M_{2p\times n}$ satisfying (17), such that $\lambda(M_1+M_2)=0_n$.
Let us denote by $\|\cdot\|_1$ the $\ell_1$ norm of $\mathbb{R}^p$, by $B_1$ the closed unit ball of $\mathbb{R}^p$ given by the norm $\|\cdot\|_1$, and by $B^c_1$ the complement of $B_1$. In the following result, we estimate the distance between the objective value of a Benson $(C,\varepsilon)$-proper solution of problem $(P_S)$ and the Pareto boundary.

Theorem 4.2 Consider problem $(P_S)$. Let $x_0\in S$, $\varepsilon\ge 0$ and $C=B^c_1\cap\mathbb{R}^p_+$. Suppose that $E(f,S)$ is nonempty and bounded. If $x_0\in\operatorname{Be}(f,S,C,\varepsilon)$, then

$$\inf_{x\in E(f,S)}\|f(x)-f(x_0)\|_1\le\varepsilon.$$

Proof By [1, Theorem 3.7(a)], we deduce that $x_0\in\operatorname{AE}(f,S,C,\varepsilon)$, because $C+\mathbb{R}^p_+\setminus\{0\}=C$. On the other hand, by [22, Theorem 4.2], we obtain that $f(S)\subset f(E(f,S))+\mathbb{R}^p_+$. Thus, there exists $u\in E(f,S)$ such that $f(u)\le_{\mathbb{R}^p_+}f(x_0)$, and as $x_0\in\operatorname{AE}(f,S,C,\varepsilon)$, we have that $f(u)-f(x_0)\in\varepsilon B_1$, which finishes the proof.

As a direct consequence of Theorems 4.1 and 4.2, we obtain the following result.
Corollary 4.1 Consider problem $(P_S)$. Let $x_0\in S$, $\varepsilon\ge 0$ and $C=B^c_1\cap\mathbb{R}^p_+$. Suppose that $E(f,S)$ is nonempty and bounded. If there exist $\lambda\in\operatorname{int}\mathbb{R}^p_+$, $q_1,q_2\in\mathbb{R}^p_+$ with $\langle\lambda,q_1+q_2\rangle=\varepsilon\tau_C(\lambda)$ and $\left(\begin{smallmatrix}M_1\\ M_2\end{smallmatrix}\right)\in M_{2p\times n}$ satisfying (17), such that $\lambda(M_1+M_2)=0_n$, then

$$\inf_{x\in E(f,S)}\|f(x)-f(x_0)\|_1\le\varepsilon.$$
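To make the estimate concrete, here is a toy instance (ours, not from the paper): take $f=\operatorname{id}$ on $S=\{x\in\mathbb{R}^2_+\colon x_1+x_2\ge 1\}$, whose efficient set is the segment $\{x\in\mathbb{R}^2_+\colon x_1+x_2=1\}$. For a feasible $x_0$, the infimum of $\|f(x)-f(x_0)\|_1$ over $E(f,S)$ equals the slack $x_{0,1}+x_{0,2}-1$, which is exactly the quantity the corollary bounds by $\varepsilon$:

```python
def l1_dist_to_efficient(x0, n=100000):
    """Brute-force inf of ||f(x) - f(x0)||_1 over the efficient segment
    {(t, 1 - t): 0 <= t <= 1} of the toy problem, with f the identity."""
    return min(abs(t / n - x0[0]) + abs(1 - t / n - x0[1]) for t in range(n + 1))

x0 = (0.7, 0.6)              # feasible: 0.7 + 0.6 >= 1, slack = 0.3
d = l1_dist_to_efficient(x0)
assert abs(d - 0.3) < 1e-3   # the l1-distance equals the slack x0_1 + x0_2 - 1
```

So, in this instance, Corollary 4.1 would certify near-nondominance of $x_0$ whenever the slack is at most the prescribed precision $\varepsilon$.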
5 Conclusions

In this paper, we have dealt with a Benson proper epsilon-subdifferential of a vector mapping introduced by ourselves in [1]. This proper epsilon-subdifferential has a better limit behavior, when the precision error tends to zero, than other proper epsilon-subdifferentials in the literature (see, for instance, [7,8] and [1, Section 3]). Thus, it is important to provide calculus rules for computing it. In particular, this work focuses on calculus rules for the composition in the setting of locally convex Hausdorff topological vector spaces (in [6], the reader can find several sum rules).

Specifically, we have stated two chain rules. The first one is obtained by assuming a regularity condition introduced by El Maghri in [7], and it extends the formulae given by this author for the proper epsilon-subdifferential he introduced. The second one is derived by assuming a recent regularity condition given by ourselves in [6]. This regularity condition is weaker than the one due to El Maghri, and it is defined in terms of a new strong epsilon-subdifferential, also introduced in [6]. Moreover, we have stated chain rules in the particular case when one of the two involved mappings is linear. These chain rules reduce to simple formulae in the Pareto case.

Finally, as an application, we have used several results of this work to state a multiplier rule for approximate Benson proper solutions of implicitly constrained convex Pareto problems. Furthermore, by considering a particular case of this multiplier rule, we have derived a sufficient optimality condition for this type of solutions, in such a way that the distance between their objective values and the Pareto boundary is less than a previously fixed precision.

Acknowledgments This work was partially supported by Ministerio de Economía y Competitividad (Spain) under project MTM2012-30942. The authors are very grateful to the anonymous referee for his/her helpful comments and suggestions.
References

1. Gutiérrez, C., Huerga, L., Jiménez, B., Novo, V.: Proper approximate solutions and ε-subdifferentials in vector optimization: basic properties and limit behaviour. Nonlinear Anal. 79, 52–67 (2013)
2. Gutiérrez, C., Huerga, L., Novo, V.: Scalarization and saddle points of approximate proper solutions in nearly subconvexlike vector optimization problems. J. Math. Anal. Appl. 389, 1046–1058 (2012)
3. Gao, Y., Yang, X., Teo, K.L.: Optimality conditions for approximate solutions of vector optimization problems. J. Ind. Manag. Optim. 7, 483–496 (2011)
4. Gutiérrez, C., Jiménez, B., Novo, V.: On approximate efficiency in multiobjective programming. Math. Methods Oper. Res. 64, 165–185 (2006)
5. Gutiérrez, C., Jiménez, B., Novo, V.: A unified approach and optimality conditions for approximate solutions of vector optimization problems. SIAM J. Optim. 17, 688–710 (2006)
6. Gutiérrez, C., Huerga, L., Jiménez, B., Novo, V.: Proper approximate solutions and ε-subdifferentials in vector optimization: Moreau–Rockafellar type theorems. J. Convex Anal. 21, 857–886 (2014)
7. El Maghri, M.: Pareto–Fenchel ε-subdifferential sum rule and ε-efficiency. Optim. Lett. 6, 763–781 (2012)
8. Tuan, L.A.: ε-optimality conditions for vector optimization problems with set-valued maps. Numer. Funct. Anal. Optim. 31, 78–95 (2010)
9. El Maghri, M.: Pareto–Fenchel ε-subdifferential composition rule and ε-efficiency. Numer. Funct. Anal. Optim. 35, 1–19 (2014)
10. Raffin, C.: Contribution à l'Étude des Programmes Convexes Définis dans des Espaces Vectoriels Topologiques. Thèse, Université Pierre et Marie Curie, Paris (1969)
11. Göpfert, A., Riahi, H., Tammer, C., Zălinescu, C.: Variational Methods in Partially Ordered Spaces. Springer, New York (2003)
12. Jahn, J.: Vector Optimization. Theory, Applications, and Extensions. Springer, Berlin (2011)
13. Sawaragi, Y., Nakayama, H., Tanino, T.: Theory of Multiobjective Optimization. Academic Press, Orlando (1985)
14. Rong, W.: Proper ε-efficiency in vector optimization problems with cone-subconvexlikeness. Acta Sci. Natur. Univ. NeiMongol 28, 609–613 (1997)
15. Kutateladze, S.S.: Convex ε-programming. Soviet Math. Dokl. 20, 391–393 (1979)
16. Brøndsted, A., Rockafellar, R.T.: On the subdifferentiability of convex functions. Proc. Am. Math. Soc. 16, 605–611 (1965)
17. Yang, X.M., Li, D., Wang, S.Y.: Near-subconvexlikeness in vector optimization with set-valued functions. J. Optim. Theory Appl. 110, 413–427 (2001)
18. Borwein, J.: Proper efficient points for maximizations with respect to cones. SIAM J. Control Optim. 15, 57–63 (1977)
19. El Maghri, M., Laghdir, M.: Pareto subdifferential calculus for convex vector mappings and applications to vector optimization. SIAM J. Optim. 19, 1970–1994 (2009)
20. Zălinescu, C.: Convex Analysis in General Vector Spaces. World Scientific, Singapore (2002)
21. Hiriart-Urruty, J.-B.: ε-subdifferential calculus. Res. Notes Math. 57, 43–92 (1982)
22. Gutiérrez, C., López, R., Novo, V.: Existence and boundedness of solutions in infinite-dimensional vector optimization problems. J. Optim. Theory Appl. 162, 515–547 (2014)