J Anal — Original Research Paper
https://doi.org/10.1007/s41478-018-0098-1
Infinite horizon optimal control of mean‑field delay system with semi‑Markov modulated jump‑diffusion processes R. Deepa1 · P. Muthukumar1
Received: 16 November 2017 / Accepted: 18 May 2018 © Forum D’Analystes, Chennai 2018
Abstract This paper studies the infinite horizon optimal control of a stochastic delay differential equation with semi-Markov modulated jump-diffusion processes, in which the control domain is not convex. In addition, the drift, diffusion and jump kernel coefficients and the cost functional are modulated by a semi-Markov process and by the expectation of the state process. Since the control domain is non-convex, the existence of an optimal control is not guaranteed. The system is therefore transformed into a relaxed control model, in which the set of all relaxed controls forms a convex set, which yields the existence of an optimal control. Further, a stochastic maximum principle and a necessary condition for optimality are established for the relaxed model via a convex perturbation technique. Finally, an application of the theoretical study is shown by an example of a portfolio optimization problem in a financial market.

Keywords Infinite horizon · Mean-field optimal control · Relaxed control · Semi-Markov modulated jump-diffusion processes · Stochastic maximum principle

Mathematics Subject Classification 35B50 · 60H10 · 93E20
This work was supported by the Science and Engineering Research Board (SERB), DST, Govt. of India under YSS Project F.No: YSS/2014/000447 dated 20.11.2015. The first author is thankful to UGC, New Delhi for providing a BSR fellowship for the year 2015.

* R. Deepa, [email protected]
P. Muthukumar, [email protected]

1 Department of Mathematics, The Gandhigram Rural Institute (Deemed to be University), Gandhigram, Tamilnadu 624 302, India
1 Introduction

A stochastic process whose future state depends only on the present state, independently of the past, is known as a Markov process. A semi-Markov process extends a Markov process by allowing the waiting time in each state to follow an arbitrary distribution rather than only an exponential one. Semi-Markov processes have wide application in queueing theory, finance, computer science and engineering (see [5, 9, 13]). In particular, Ghosh and Saha in [14] studied discounted optimal control problems for a class of Markov processes with age-dependent transition rates by constructing an equivalent semi-Markov decision process. Further, a stochastic maximum principle for the optimal control of semi-Markov modulated jump-diffusion processes was studied by Deshpande in [10]. Meanwhile, mean-field theory is widely used in interacting many-body systems such as game theory, financial markets and computer networks (see [6, 11]). In a mean-field type stochastic optimal control problem, the coefficients of the state equation and the cost functional depend on the expectation of the state process. The presence of the mean-field term makes the control problem time-inconsistent, so the dynamic programming principle cannot be applied; this motivates establishing a stochastic maximum principle to solve this type of optimal control problem. In particular, Ma and Liu in [20] studied the stochastic maximum principle for optimal control of a risk-sensitive mean-field type stochastic system. Recently, Hafayed et al. in [17] investigated a mean-field system with singular control under partial information and established necessary and sufficient conditions for optimality by convex variation methods and a duality technique. However, some natural phenomena have memory: their present dynamics also depend on past history, and such models are described by delay differential equations (see [19, 22]).
For example, delays occur naturally in population dynamics models and in financial markets, where they represent memory or inertia. Agram and Rose in [3] studied the optimal control of a forward–backward stochastic delay system using a duality method. Moreover, financial markets exhibit jump-type behavior in many situations (see [27, 28] and the references therein). To combine delay and jumps with mean-field type stochastic systems, Shen et al. in [23, 25] discussed the optimal control of mean-field stochastic delay systems with jump processes; they derived a necessary condition and a stochastic maximum principle under the assumption that the control domain is convex. The above studies treat the finite horizon case only. Optimal control problems are not always posed on a finite time horizon; they may also be posed on an infinite one (see [2, 16, 21, 24, 26]). In economics, the control of an industry is usually considered under the implicit assumption that the controlled object will exist forever, or at least indefinitely long, and the optimal admissible control aims at maximizing the entire future profit. This leads to the study of infinite horizon stochastic optimal control problems, in which the initial state is fixed, the terminal state is free, and the utility functional to be maximized is defined as an improper integral. Dmitruk in [12] considered a broad
class of problems on the infinite horizon, including economic dynamical problems, and proposed sufficient conditions that guarantee the existence of solutions. Motivated by the above, in this manuscript the authors study the optimal control of an infinite horizon mean-field delay system with semi-Markov modulated jump-diffusion processes. The main aim is to establish an infinite horizon version of the stochastic maximum principle for the proposed mean-field system under a non-convex control domain. There has been significant research attention on such problems under a convex control domain (see [2, 21, 23–25]), in which case an optimal control exists; for a non-convex control domain, however, there is no guarantee that an optimal control exists, which makes the stochastic optimal control problem challenging. To find an optimal control in this case, the set of strict controls is embedded into a wider class of measure-valued controls (see [4, 7, 29]); this new control set is convex, which yields the existence of an optimal control. This motivates the formulation of a mean-field optimal control system driven by semi-Markov modulated jump-diffusion processes under a non-convex control domain. In the present study the authors consider the following infinite horizon optimal control of a stochastic delay differential equation driven by semi-Markov modulated jump-diffusion processes:
$$
\begin{aligned}
dX^u(t) ={}& b(t, X^u(t), X^u(t-\delta), E[X^u(t)], E[X^u(t-\delta)], \theta(t), u(t))\,dt \\
&+ \sigma(t, X^u(t), X^u(t-\delta), E[X^u(t)], E[X^u(t-\delta)], \theta(t), u(t))\,dW(t) \\
&+ \int_\Gamma g(t, X^u(t), X^u(t-\delta), E[X^u(t)], E[X^u(t-\delta)], \theta(t), u(t), \gamma)\,\tilde N(dt, d\gamma), \\
X^u(t) ={}& x_0 \in \mathbb{R}, \quad t \in [-\delta, 0], \text{ where } \delta > 0,
\end{aligned}
\tag{1}
$$

where $X^u(t) = X^u(t,\omega)$, $t \ge 0$, $\omega \in \Omega$, is the state process and $X^u(t-\delta)$ the delayed state, defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a right-continuous filtration $\{\mathcal{F}_t\}_{t \ge 0}$ such that $\mathcal{F}_0$ contains all $P$-null sets. $X^u(t)$, $X^u(t-\delta)$, $E[X^u(t)]$, $E[X^u(t-\delta)]$ are real valued on $[0,\infty)$, and $E$ denotes expectation. $W(t) = W(t,\omega)$ is a one-dimensional Brownian motion and $\tilde N(dt, d\gamma)$ a compensated Poisson measure, where $\gamma \in \Gamma := \mathbb{R} \setminus \{0\}$, and $\theta(t)$ is a semi-Markov process taking values in $\mathcal{S} := \{1, \ldots, M\}$ for a positive integer $M$. Also, $u(t) = u(t,\omega)$ is a strict (classical) control, an $\mathcal{F}_t$-adapted process with values in the non-convex set $U \subset \mathbb{R}$. The set of all strict controls $u(t)$ satisfying $E\big[\sup_{t \in [0,\infty)} |u(t)|^2\big] < \infty$ is denoted by $\mathcal{U}$. The coefficients $b, \sigma$ are real-valued functions on $[0,\infty) \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathcal{S} \times U \times \Omega$, and $g$ is a real-valued function on $[0,\infty) \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathcal{S} \times U \times \Gamma \times \Omega$. A solution $X^u(t)$ of the system (1) exists and satisfies $E\big[\int_0^\infty |X^u(t)|^2\,dt\big] < \infty$; see [2]. The cost functional corresponding to the controlled system (1), to be minimized, is

$$
J(u) = E\left[\int_0^\infty f(t, X^u(t), X^u(t-\delta), E[X^u(t)], E[X^u(t-\delta)], \theta(t), u(t), Y(t))\,dt\right],
\tag{2}
$$
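The analysis of (1) in this paper is purely theoretical, but the structure of the equation can be made concrete numerically. A standard approach to a mean-field delay jump-diffusion is an Euler–Maruyama particle scheme in which $E[X(t)]$ and $E[X(t-\delta)]$ are replaced by empirical means over simulated particles. The following sketch is illustrative only: the particular coefficients $b$, $\sigma$, the jump amplitude and the Poisson rate are assumptions of this example, not taken from the paper.

```python
import numpy as np

def simulate_mean_field_delay(T=1.0, dt=0.01, delta=0.1, n_particles=2000,
                              lam=1.0, x0=1.0, seed=0):
    """Euler-Maruyama particle scheme for a mean-field delay jump-diffusion.

    Illustrative coefficients (assumptions, not from the paper):
      drift b = -0.5*x + 0.1*E[X(t)] - 0.2*x(t-delta)
      diffusion sigma = 0.2*x
      compensated Poisson jumps of size 0.1*x with rate lam.
    E[X(t)] and E[X(t-delta)] are approximated by empirical means.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    lag = int(round(delta / dt))                  # delay expressed in steps
    X = np.full((n_steps + 1, n_particles), x0)   # X[k] ~ X(k*dt); X = x0 on [-delta, 0]
    for k in range(n_steps):
        x = X[k]
        x_d = X[k - lag] if k - lag >= 0 else np.full(n_particles, x0)
        m = x.mean()                              # empirical E[X(t)]
        drift = -0.5 * x + 0.1 * m - 0.2 * x_d
        dW = rng.normal(0.0, np.sqrt(dt), n_particles)
        dN = rng.poisson(lam * dt, n_particles)   # Poisson increments
        # compensated jump term: jump size times (dN - lam*dt)
        X[k + 1] = x + drift * dt + 0.2 * x * dW + 0.1 * x * (dN - lam * dt)
    return X

X = simulate_mean_field_delay()
```

With the mean-reverting illustrative drift, the empirical mean decays toward zero, while the compensated jump term contributes no systematic trend.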
where $Y(t)$ represents the time the process $\theta(t)$ has spent in its current state since the last jump. All the coefficients defined in (1) and (2) are continuously differentiable with respect to their arguments and have bounded derivatives. Also, $f$ is a real-valued function on $[0,\infty) \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathcal{S} \times U \times [0,\infty) \times \Omega$ satisfying
$$
E\left[\int_0^\infty \big|f(t, X^u(t), X^u(t-\delta), E[X^u(t)], E[X^u(t-\delta)], \theta(t), u(t), Y(t))\big|\,dt\right] < \infty.
$$
Note that the control domain $U$ is not convex, so the existence of an optimal control for the system (1)–(2) is not guaranteed. To overcome this limitation, the set of strict controls is embedded into a wider class of measure-valued controls: the $U$-valued process $u(t)$ is replaced by a $P(U)$-valued process $q(t)$, where $P(U)$ is the space of probability measures on $U$. The set of all $P(U)$-valued processes $q(t)$ satisfying $E\big[\sup_{t \in [0,\infty)} |q(t)|^2\big] < \infty$ is denoted by $\mathcal{R}$; it forms a convex set, which yields the existence of an optimal control. In particular, relaxed optimal control of mean-field type stochastic differential systems was studied in [8], without considering any jump processes. To the best of the authors' knowledge, this is the first attempt to consider an infinite horizon relaxed control model of a mean-field stochastic delay system with semi-Markov modulated jump-diffusion processes. Thus the system (1) is transformed into the relaxed control model

$$
\begin{aligned}
dX^q(t) ={}& \int_U b(t, X^q(t), X^q(t-\delta), E[X^q(t)], E[X^q(t-\delta)], \theta(t), a)\,q(t)(da)\,dt \\
&+ \int_U \sigma(t, X^q(t), X^q(t-\delta), E[X^q(t)], E[X^q(t-\delta)], \theta(t), a)\,q(t)(da)\,dW(t) \\
&+ \int_\Gamma \int_U g(t, X^q(t), X^q(t-\delta), E[X^q(t)], E[X^q(t-\delta)], \theta(t), \gamma, a)\,q(t)(da)\,\tilde N(dt, d\gamma), \\
X^q(t) ={}& x_0 \in \mathbb{R}, \quad t \in [-\delta, 0], \text{ where } \delta > 0,
\end{aligned}
\tag{3}
$$
where $q(t)(da)$ is a progressively measurable process with values in the set of probability measures $P(U)$ and $a \in U$. The cost functional (2) is rewritten in terms of the relaxed control $q(t)$ as

$$
J(q) = E\left[\int_0^\infty \int_U f\big(t, X^q(t), X^q(t-\delta), E[X^q(t)], E[X^q(t-\delta)], \theta(t), Y(t), a\big)\,q(t)(da)\,dt\right].
\tag{4}
$$
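The embedding of strict controls into relaxed ones can be made concrete with discrete measures: a strict control value $u$ is identified with the Dirac measure $\delta_u$, and any convex combination of two relaxed controls is again a relaxed control, which is exactly the convexity that the transformation (3)–(4) exploits. A small sketch on a finite control grid (the two-point set $U$ and the drift $b$ are illustrative assumptions):

```python
import numpy as np

# Finite grid representing the (possibly non-convex) control set U.
U_grid = np.array([-1.0, 1.0])           # e.g. U = {-1, 1}: non-convex in R

def dirac(u, grid):
    """Strict control value u embedded as a Dirac measure on the grid."""
    w = np.zeros(len(grid))
    w[np.argmin(np.abs(grid - u))] = 1.0
    return w

def relaxed_drift(b, x, weights, grid):
    """int_U b(x, a) q(da) for a discrete relaxed control q with given weights."""
    return np.sum(weights * b(x, grid))

b = lambda x, a: -x + a                  # illustrative drift coefficient

# A strict control a = 1 and its relaxed (Dirac) counterpart agree:
assert np.isclose(relaxed_drift(b, 0.5, dirac(1.0, U_grid), U_grid), b(0.5, 1.0))

# Convex combinations of relaxed controls are again relaxed controls,
# even though U itself is not convex:
q_mix = 0.5 * dirac(-1.0, U_grid) + 0.5 * dirac(1.0, U_grid)
assert np.isclose(q_mix.sum(), 1.0) and np.all(q_mix >= 0)
d = relaxed_drift(b, 0.5, q_mix, U_grid)  # drift under the mixed control
```

The mixed drift `d` lies between the two strict-control drifts, reflecting the linearity of (3) in the measure $q$.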
The transformed system (3)–(4) generalizes the system (1)–(2). This paper is organized as follows: Sect. 2 recalls the basic definitions and notation for semi-Markov processes and relaxed controls, which are used to prove the main results. The infinite horizon version of the stochastic maximum principle and the necessary conditions for optimality of the proposed problem are established under the
convex perturbation technique in Sects. 3 and 4 respectively. In Sect. 5, an example is given to demonstrate the theoretical study.
2 Preliminaries

The one-dimensional Brownian motion $W(t)$ and the semi-Markov process $\{\theta(t)\}_{t \ge 0}$ are independent. Let $(p_{ij})$ denote the transition probabilities and $F(t|i)$ the conditional holding-time distribution of $\{\theta(t)\}_{t \ge 0}$. If $0 \le t_0 \le t_1 \le \cdots$ are the jump times of $\theta$, then
$$
P(\theta(t_{n+1}) = j,\ t_{n+1} - t_n \le t \mid \theta(t_n) = i) = p_{ij}\,F(t|i),
$$
where the transition matrix $(p_{ij})$ is irreducible. For each $i$ and $t \ge 0$, $F(t|i) < 1$; moreover, this distribution has a continuously differentiable and bounded density $f(\cdot|i)$. For a fixed $t$, the Markov renewal counting process is defined as $n(t) := \max\{n \in \mathbb{N} : t_n \le t\}$. Let $Y(t) := t - t_{n(t)}$ denote the time the process $\theta(t)$ has spent in its current state since the last jump; the process $\{(\theta(t), Y(t))\}$ is Markov, and its differential generator is
$$
\mathcal{A}\phi(i, y) = \frac{d}{dy}\phi(i, y) + \frac{f(y|i)}{1 - F(y|i)} \sum_{j \in \mathcal{S},\, j \neq i} p_{ij}\,[\phi(j, 0) - \phi(i, y)],
$$
where $\phi$ is a real-valued $C^1$ function on $\mathcal{S} \times [0, \infty)$ (see [15]). Let us define
$$
\lambda_{ij}(y) = p_{ij}\,\frac{f(y|i)}{1 - F(y|i)} \ge 0, \quad \forall\, i \neq j; \qquad
\lambda_{ii}(y) = -\sum_{j \in \mathcal{S},\, j \neq i} \lambda_{ij}(y), \quad \forall\, i \in \mathcal{S}.
$$
For $i \neq j \in \mathcal{S}$ and $y \in [0,\infty)$, let $\Lambda_{ij}(y)$ be consecutive left-closed, right-open intervals of the real line, each of length $\lambda_{ij}(y)$. Define $\bar h : \mathcal{S} \times [0,\infty) \times \Gamma \to \mathbb{R}$ and $\bar g : \mathcal{S} \times [0,\infty) \times \Gamma \to [0,\infty)$ by
$$
\bar h(i, y, \gamma) = \begin{cases} j - i & \text{if } \gamma \in \Lambda_{ij}(y), \\ 0 & \text{otherwise}, \end{cases}
\qquad
\bar g(i, y, \gamma) = \begin{cases} y & \text{if } \gamma \in \Lambda_{ij}(y),\ j \neq i, \\ 0 & \text{otherwise}. \end{cases}
$$
Let $\mathcal{M}([0,\infty) \times \mathbb{R})$ be the set of all nonnegative integer-valued $\sigma$-finite measures on the Borel $\sigma$-field of $[0,\infty) \times \mathbb{R}$. The process $\{\tilde\theta(t), Y(t)\}$ is defined by the following stochastic integrals with respect to a Poisson random measure:
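The kernel $p_{ij} F(t|i)$ translates directly into a simulation recipe: at each jump, draw the next state from the row $(p_{ij})$ and a holding time from $F(\cdot|i)$. The sketch below uses Weibull holding times as an illustrative non-exponential choice of $F(\cdot|i)$ and a two-state deterministic switching kernel; both are assumptions for the demonstration. It also reproduces $Y(t) = t - t_{n(t)}$.

```python
import numpy as np

def simulate_semi_markov(P, scales, shapes, T, theta0=0, seed=0):
    """Simulate a semi-Markov chain theta(t) on [0, T].

    P              : transition matrix (p_ij), zero diagonal, rows sum to 1.
    scales, shapes : Weibull holding-time parameters per state -- an
                     illustrative non-exponential choice of F(.|i).
    Returns the jump times t_0 = 0 < t_1 < ... and the visited states.
    """
    rng = np.random.default_rng(seed)
    times, states = [0.0], [theta0]
    t, i = 0.0, theta0
    while True:
        hold = scales[i] * rng.weibull(shapes[i])  # holding time from F(.|i)
        j = rng.choice(len(P), p=P[i])             # next state from row (p_ij)
        t += hold
        if t > T:
            break
        times.append(t)
        states.append(j)
        i = j
    return np.array(times), np.array(states)

def Y_process(times, t):
    """Y(t) = t - t_{n(t)}: time elapsed since the last jump before t."""
    return t - times[times <= t].max()

P = np.array([[0.0, 1.0], [1.0, 0.0]])   # two states, deterministic switching
times, states = simulate_semi_markov(P, scales=[1.0, 0.5],
                                     shapes=[1.5, 0.8], T=20.0)
```

With this kernel the chain alternates between the two states, while the holding times are non-exponential, which is exactly what distinguishes a semi-Markov process from a Markov chain in continuous time.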
$$
\tilde\theta(t) = \tilde\theta(0) + \int_0^t \int_\Gamma \bar h(\tilde\theta(s-), Y(s-), \gamma)\,N(ds, d\gamma),
\qquad
Y(t) = t - \int_0^t \int_\Gamma \bar g(\tilde\theta(s-), Y(s-), \gamma)\,N(ds, d\gamma),
$$

where $N(ds, d\gamma)$ is an $\mathcal{M}([0,\infty) \times \mathbb{R})$-valued Poisson random measure with intensity $\nu(d\gamma)\,dt$, independent of the $\mathcal{S}$-valued random variable $\tilde\theta(0)$, and $\nu(\cdot)$ is the Lebesgue measure on $\mathbb{R}$. By definition, $Y(t)$ represents the time the process $\tilde\theta(t)$ has spent in its current state since the last jump, and the corresponding compensated one-dimensional Poisson measure is $\tilde N(ds, d\gamma) = N(ds, d\gamma) - \nu(d\gamma)\,dt$.

Remark 1 From the definition of $\theta(t)$ in Eq. (1) and Theorem 2.1 of [13], we have $\tilde\theta(t) = \theta(t)$ for $t \ge 0$.

To establish the optimality conditions for the system (3) with the associated cost functional (4), an adjoint equation is developed through the following Hamiltonian. The Hamiltonian is a real-valued function on $[0,\infty) \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathcal{S} \times \mathcal{R} \times [0,\infty) \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}$, defined for the system (3)–(4) by

$$
\begin{aligned}
&\mathcal{H}(t, X^q(t), X^q(t-\delta), E[X^q(t)], E[X^q(t-\delta)], i, q(t), Y(t), p^q(t), k^q(t), r^q(t,\gamma)) \\
&\quad = \int_U f(t, X^q(t), X^q(t-\delta), E[X^q(t)], E[X^q(t-\delta)], i, Y(t), a)\,q(t)(da) \\
&\qquad + \int_U p^q(t)\,b(t, X^q(t), X^q(t-\delta), E[X^q(t)], E[X^q(t-\delta)], i, a)\,q(t)(da) \\
&\qquad - \int_\Gamma \int_U p^q(t)\,g(t, X^q(t), X^q(t-\delta), E[X^q(t)], E[X^q(t-\delta)], i, \gamma, a)\,q(t)(da)\,\nu(d\gamma) \\
&\qquad + \int_U k^q(t)\,\sigma(t, X^q(t), X^q(t-\delta), E[X^q(t)], E[X^q(t-\delta)], i, a)\,q(t)(da) \\
&\qquad + \int_\Gamma \int_U r^q(t,\gamma)\,g(t, X^q(t), X^q(t-\delta), E[X^q(t)], E[X^q(t-\delta)], i, \gamma, a)\,q(t)(da)\,\nu(d\gamma),
\end{aligned}
$$
which can be written compactly as
$$
\begin{aligned}
\mathcal{H}^q(t) ={}& \int_U f^q(t,a)\,q(t)(da) + \int_U p^q(t)\,b^q(t,a)\,q(t)(da) - \int_\Gamma \int_U p^q(t)\,g^q(t,\gamma,a)\,q(t)(da)\,\nu(d\gamma) \\
&+ \int_U k^q(t)\,\sigma^q(t,a)\,q(t)(da) + \int_\Gamma \int_U r^q(t,\gamma)\,g^q(t,\gamma,a)\,q(t)(da)\,\nu(d\gamma),
\end{aligned}
\tag{5}
$$
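For a relaxed control supported on finitely many points, the Hamiltonian (5) becomes a weighted sum of strict-control Hamiltonians, and it is linear (in particular convex) in the measure $q$. The sketch below checks this linearity numerically; all coefficient functions, the frozen adjoint values and the discretization of $\nu$ are illustrative assumptions.

```python
import numpy as np

def hamiltonian(f, b, sigma, g, p, k, r, nu_w, gamma_grid, weights, U_grid):
    """Discrete version of (5):
       H = sum_a w_a [ f(a) + p*b(a) + k*sigma(a)
                       - p * sum_g nu_g * g(gamma_g, a)
                       + sum_g r(gamma_g) * nu_g * g(gamma_g, a) ],
    where (gamma_grid, nu_w) discretize the jump measure nu on Gamma."""
    H = 0.0
    for w, a in zip(weights, U_grid):
        jump = sum(nw * g(gm, a) for nw, gm in zip(nu_w, gamma_grid))
        jump_r = sum(r(gm) * nw * g(gm, a) for nw, gm in zip(nu_w, gamma_grid))
        H += w * (f(a) + p * b(a) + k * sigma(a) - p * jump + jump_r)
    return H

# Illustrative data (assumptions): frozen state and adjoint values.
f = lambda a: 0.5 * a**2
b = lambda a: a
sigma = lambda a: 0.1 * a
g = lambda gm, a: 0.05 * gm * a
r = lambda gm: 0.2
p, k = 1.0, 0.5
U_grid = np.array([-1.0, 0.0, 1.0])
gamma_grid, nu_w = np.array([1.0]), np.array([1.0])

H_dirac = hamiltonian(f, b, sigma, g, p, k, r, nu_w, gamma_grid,
                      np.array([0.0, 0.0, 1.0]), U_grid)  # q = delta_1
```

Because (5) is linear in the measure, the Hamiltonian of a mixture of two relaxed controls equals the same mixture of their Hamiltonians, which is what makes the relaxed problem convex in $q$.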
where $p^q(t)$, $k^q(t)$, $r^q(t,\gamma)$ are the adjoint processes. The adjoint stochastic differential equation corresponding to the system (3) and (4) is

$$
\begin{aligned}
dp^q(t) ={}& -\big(\mathcal{H}^q_x(t) + \mathcal{H}^q_{x_\delta}(t) + \mathcal{H}^q_{\tilde x}(t) + \mathcal{H}^q_{\tilde x_\delta}(t)\big)\,dt + \int_U k^q(t)\,q(t)(da)\,dW(t) \\
&+ \int_U \int_\Gamma r^q(t,\gamma)\,q(t)(da)\,\tilde N(dt, d\gamma), \qquad p^q(0) = 0.
\end{aligned}
\tag{6}
$$

Here $\mathcal{H}^q_x(t)$, $\mathcal{H}^q_{x_\delta}(t)$, $\mathcal{H}^q_{\tilde x}(t)$ and $\mathcal{H}^q_{\tilde x_\delta}(t)$ denote the partial derivatives of the Hamiltonian with respect to $X^q(t)$, $X^q(t-\delta)$, $E[X^q(t)]$ and $E[X^q(t-\delta)]$, respectively. The existence of the solutions $p^q(t)$, $k^q(t)$, $r^q(t,\gamma)$ of (6) can be proved along the lines of Theorem 5.1 in [1].
3 Stochastic maximum principle for optimality

In this section, an infinite horizon version of the stochastic maximum principle for the mean-field optimal control of a stochastic delay differential equation driven by semi-Markov processes is derived.

Definition 1 An admissible relaxed control $\mu(t) \in \mathcal{R}$ is called an optimal relaxed control for the system (3) and (4) if
$$
J(\mu) = \inf_{q(t) \in \mathcal{R}} J(q).
$$
To prove the stochastic maximum principle for optimality of the system (3) with its cost functional (4), the following assumptions are needed:

(A1) The Hamiltonian function in (5) is convex.

(A2) Conditional minimum principle:
$$
\begin{aligned}
&E[\mathcal{H}(t, X^\mu(t), X^\mu(t-\delta), E[X^\mu(t)], E[X^\mu(t-\delta)], i, \mu(t), Y(t), p^\mu(t), k^\mu(t), r^\mu(t,\gamma))] \\
&\quad = \min_{q \in \mathcal{R}} E[\mathcal{H}(t, X^\mu(t), X^\mu(t-\delta), E[X^\mu(t)], E[X^\mu(t-\delta)], i, q(t), Y(t), p^\mu(t), k^\mu(t), r^\mu(t,\gamma))].
\end{aligned}
$$

(A3) Transversality condition:
$$
\lim_{T \to \infty} E[p^\mu(T)(X^q(T) - X^\mu(T))] \le 0.
$$
Theorem 1 Let $X^\mu(t)$, $E[X^\mu(t)]$ be the state process of the system (3)–(4) and its mean, corresponding to the admissible relaxed control $\mu(t) \in \mathcal{R}$. Suppose the adjoint processes $p^\mu(t)$, $k^\mu(t)$, $r^\mu(t,\gamma)$ solve the adjoint stochastic differential equation (6) and the assumptions (A1)–(A3) hold. Then $\mu(t)$ is an optimal relaxed control for the system (3) and (4).
Proof To show that $\mu(t) \in \mathcal{R}$ is an optimal relaxed control, it suffices to prove that $J(q) - J(\mu) \ge 0$ for an arbitrary relaxed control $q(t) \in \mathcal{R}$. Indeed,
$$
J(q) - J(\mu) = E \int_0^\infty \left[\int_U f^q(t,a)\,q(t)(da) - \int_U f^\mu(t,a)\,\mu(t)(da)\right] dt.
$$
By the definition of the Hamiltonian in (5),
$$
\begin{aligned}
J(q) - J(\mu) = E \int_0^\infty \Big[& \big(\mathcal{H}^q(t) - \mathcal{H}^\mu(t)\big) - p^\mu(t)\Big(\int_U b^q(t,a)\,q(t)(da) - \int_U b^\mu(t,a)\,\mu(t)(da)\Big) \\
&+ p^\mu(t)\Big(\int_\Gamma \int_U g^q(t,\gamma,a)\,q(t)(da)\,\nu(d\gamma) - \int_\Gamma \int_U g^\mu(t,\gamma,a)\,\mu(t)(da)\,\nu(d\gamma)\Big) \\
&- k^\mu(t)\Big(\int_U \sigma^q(t,a)\,q(t)(da) - \int_U \sigma^\mu(t,a)\,\mu(t)(da)\Big) \\
&- \int_\Gamma r^\mu(t,\gamma)\Big(\int_U g^q(t,\gamma,a)\,q(t)(da) - \int_U g^\mu(t,\gamma,a)\,\mu(t)(da)\Big)\nu(d\gamma)\Big]\,dt.
\end{aligned}
$$
Since the Hamiltonian is convex by assumption (A1),
$$
\begin{aligned}
\mathcal{H}^q(t) - \mathcal{H}^\mu(t) \ge{}& \mathcal{H}^\mu_x(t)\big(X^q(t) - X^\mu(t)\big) + \mathcal{H}^\mu_{x_\delta}(t)\big(X^q(t-\delta) - X^\mu(t-\delta)\big) \\
&+ \mathcal{H}^\mu_{\tilde x}(t)\big(E[X^q(t)] - E[X^\mu(t)]\big) + \mathcal{H}^\mu_{\tilde x_\delta}(t)\big(E[X^q(t-\delta)] - E[X^\mu(t-\delta)]\big).
\end{aligned}
$$
Now,
$$
\begin{aligned}
J(q) - J(\mu) \ge E \int_0^\infty \Big[& \mathcal{H}^\mu_x(t)\big(X^q(t) - X^\mu(t)\big) + \mathcal{H}^\mu_{x_\delta}(t)\big(X^q(t-\delta) - X^\mu(t-\delta)\big) \\
&+ \mathcal{H}^\mu_{\tilde x}(t)\big(E[X^q(t)] - E[X^\mu(t)]\big) + \mathcal{H}^\mu_{\tilde x_\delta}(t)\big(E[X^q(t-\delta)] - E[X^\mu(t-\delta)]\big) \\
&- p^\mu(t)\Big(\int_U b^q(t,a)\,q(t)(da) - \int_U b^\mu(t,a)\,\mu(t)(da)\Big) \\
&+ p^\mu(t)\Big(\int_\Gamma \int_U g^q(t,\gamma,a)\,q(t)(da)\,\nu(d\gamma) - \int_\Gamma \int_U g^\mu(t,\gamma,a)\,\mu(t)(da)\,\nu(d\gamma)\Big) \\
&- k^\mu(t)\Big(\int_U \sigma^q(t,a)\,q(t)(da) - \int_U \sigma^\mu(t,a)\,\mu(t)(da)\Big) \\
&- \int_\Gamma r^\mu(t,\gamma)\Big(\int_U g^q(t,\gamma,a)\,q(t)(da) - \int_U g^\mu(t,\gamma,a)\,\mu(t)(da)\Big)\nu(d\gamma)\Big]\,dt.
\end{aligned}
\tag{7}
$$
Applying the Itô formula to the process $p^\mu(t)(X^q(t) - X^\mu(t))$,
$$
\begin{aligned}
&E[p^\mu(T)(X^q(T) - X^\mu(T))] - E[p^\mu(0)(X^q(0) - X^\mu(0))] \\
&\quad = E\Big[\int_0^T \Big\{p^\mu(t)\Big(\int_U b^q(t,a)\,q(t)(da) - \int_U b^\mu(t,a)\,\mu(t)(da)\Big) \\
&\qquad - p^\mu(t)\Big(\int_\Gamma \int_U g^q(t,\gamma,a)\,q(t)(da)\,\nu(d\gamma) - \int_\Gamma \int_U g^\mu(t,\gamma,a)\,\mu(t)(da)\,\nu(d\gamma)\Big)\Big\}\,dt \\
&\qquad + \int_0^T (X^q(t) - X^\mu(t))\,dp^\mu(t) + \int_0^T k^\mu(t)\Big(\int_U \sigma^q(t,a)\,q(t)(da) - \int_U \sigma^\mu(t,a)\,\mu(t)(da)\Big)\,dt \\
&\qquad + \int_0^T \int_\Gamma r^\mu(t,\gamma)\Big(\int_U g^q(t,\gamma,a)\,q(t)(da) - \int_U g^\mu(t,\gamma,a)\,\mu(t)(da)\Big)\nu(d\gamma)\,dt\Big].
\end{aligned}
\tag{8}
$$
Here
$$
\begin{aligned}
\int_0^T (X^q(t) - X^\mu(t))\,dp^\mu(t) = -\int_0^T \Big\{& \mathcal{H}^\mu_x(t)\big(X^q(t) - X^\mu(t)\big) + \mathcal{H}^\mu_{x_\delta}(t)\big(X^q(t-\delta) - X^\mu(t-\delta)\big) \\
&+ \mathcal{H}^\mu_{\tilde x}(t)\big(E[X^q(t)] - E[X^\mu(t)]\big) + \mathcal{H}^\mu_{\tilde x_\delta}(t)\big(E[X^q(t-\delta)] - E[X^\mu(t-\delta)]\big)\Big\}\,dt.
\end{aligned}
\tag{9}
$$
Substituting (9) in (8) and letting $T \to \infty$ gives
$$
\begin{aligned}
&\lim_{T \to \infty} E[p^\mu(T)(X^q(T) - X^\mu(T))] \\
&\quad = E \int_0^\infty \Big\{p^\mu(t)\Big(\int_U b^q(t,a)\,q(t)(da) - \int_U b^\mu(t,a)\,\mu(t)(da)\Big) \\
&\qquad - p^\mu(t)\Big(\int_\Gamma \int_U g^q(t,\gamma,a)\,q(t)(da)\,\nu(d\gamma) - \int_\Gamma \int_U g^\mu(t,\gamma,a)\,\mu(t)(da)\,\nu(d\gamma)\Big) \\
&\qquad - \mathcal{H}^\mu_x(t)\big(X^q(t) - X^\mu(t)\big) - \mathcal{H}^\mu_{x_\delta}(t)\big(X^q(t-\delta) - X^\mu(t-\delta)\big) \\
&\qquad - \mathcal{H}^\mu_{\tilde x}(t)\big(E[X^q(t)] - E[X^\mu(t)]\big) - \mathcal{H}^\mu_{\tilde x_\delta}(t)\big(E[X^q(t-\delta)] - E[X^\mu(t-\delta)]\big) \\
&\qquad + k^\mu(t)\Big(\int_U \sigma^q(t,a)\,q(t)(da) - \int_U \sigma^\mu(t,a)\,\mu(t)(da)\Big) \\
&\qquad + \int_\Gamma r^\mu(t,\gamma)\Big(\int_U g^q(t,\gamma,a)\,q(t)(da) - \int_U g^\mu(t,\gamma,a)\,\mu(t)(da)\Big)\nu(d\gamma)\Big\}\,dt.
\end{aligned}
\tag{10}
$$
Substituting (7) into (10) gives
$$
\lim_{T \to \infty} E[p^\mu(T)(X^q(T) - X^\mu(T))] \ge -\big(J(q) - J(\mu)\big).
$$
By assumption (A3),
$$
J(q) - J(\mu) \ge 0.
$$
Hence the proof. □
4 Necessary condition for optimality

The main contribution of this work is to establish a necessary condition for optimality using the convex perturbation technique for the relaxed control system (3) and (4). Let $\mu(t) \in \mathcal{R}$ be an optimal relaxed control, let $q(t)$ be an arbitrary element of $\mathcal{R}$, and let $\epsilon > 0$ be sufficiently small. For each $t \ge 0$, the perturbed control is defined by
$$
\mu^\epsilon(t) = \mu(t) + \epsilon\,[q(t) - \mu(t)].
\tag{11}
$$
Before proving the theorem on the necessary condition for optimality of the system (3) and (4), the following three lemmas are needed.
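Written as measures, the perturbation (11) is the mixture $\mu^\epsilon = (1-\epsilon)\mu + \epsilon q$, which remains a probability measure for every $\epsilon \in [0, 1]$; this is why the convex perturbation is admissible even though $U$ itself is non-convex. A sketch with discrete measures on a three-point grid (the weight vectors are illustrative):

```python
import numpy as np

def perturb(mu, q, eps):
    """mu_eps = mu + eps*(q - mu) = (1 - eps)*mu + eps*q, as in Eq. (11)."""
    return mu + eps * (q - mu)

mu = np.array([0.7, 0.3, 0.0])   # relaxed control mu(t) on a 3-point grid
q  = np.array([0.0, 0.2, 0.8])   # arbitrary relaxed control q(t)

for eps in (0.0, 0.1, 0.5, 1.0):
    m = perturb(mu, q, eps)
    # The perturbed object is still a probability measure:
    assert np.isclose(m.sum(), 1.0) and np.all(m >= 0)

m01 = perturb(mu, q, 0.1)
```

Had the perturbation been applied to strict $U$-valued controls with a non-convex $U$, the intermediate value $\mu + \epsilon(q - \mu)$ could fall outside $U$; on measures, no such obstruction arises.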
Lemma 1 Let $\mu(t) \in \mathcal{R}$ be a relaxed control and $X^\mu(t)$ the corresponding state trajectory. Then
$$
\lim_{\epsilon \to 0} E\Big[\sup_{0 \le t < \infty} \big|X^\epsilon(t) - X^\mu(t)\big|^2\Big] = 0,
$$
where $X^\epsilon(t)$ is the solution of Eq. (3) associated with the control $\mu^\epsilon$ defined in (11).
Proof For any $\mu^\epsilon, \mu \in \mathcal{R}$, the integral form of (3) gives
$$
\begin{aligned}
X^\epsilon(t) - X^\mu(t) ={}& \int_0^t \Big(\int_U b^\epsilon(s,a)\,\mu^\epsilon(s)(da) - \int_U b^\mu(s,a)\,\mu(s)(da)\Big)\,ds \\
&+ \int_0^t \Big(\int_U \sigma^\epsilon(s,a)\,\mu^\epsilon(s)(da) - \int_U \sigma^\mu(s,a)\,\mu(s)(da)\Big)\,dW(s) \\
&+ \int_0^t \int_\Gamma \Big(\int_U g^\epsilon(s,\gamma,a)\,\mu^\epsilon(s)(da) - \int_U g^\mu(s,\gamma,a)\,\mu(s)(da)\Big)\,\tilde N(ds, d\gamma).
\end{aligned}
$$
Substituting the definition of $\mu^\epsilon(s)$ from (11) gives
$$
\begin{aligned}
X^\epsilon(t) - X^\mu(t) ={}& \int_0^t \Big\{\Big(\int_U b^\epsilon(s,a)\,\mu(s)(da) - \int_U b^\mu(s,a)\,\mu(s)(da)\Big) \\
&\qquad + \epsilon\Big(\int_U b^\epsilon(s,a)\,q(s)(da) - \int_U b^\epsilon(s,a)\,\mu(s)(da)\Big)\Big\}\,ds \\
&+ \int_0^t \Big\{\Big(\int_U \sigma^\epsilon(s,a)\,\mu(s)(da) - \int_U \sigma^\mu(s,a)\,\mu(s)(da)\Big) \\
&\qquad + \epsilon\Big(\int_U \sigma^\epsilon(s,a)\,q(s)(da) - \int_U \sigma^\epsilon(s,a)\,\mu(s)(da)\Big)\Big\}\,dW(s) \\
&+ \int_0^t \int_\Gamma \Big\{\Big(\int_U g^\epsilon(s,\gamma,a)\,\mu(s)(da) - \int_U g^\mu(s,\gamma,a)\,\mu(s)(da)\Big) \\
&\qquad + \epsilon\Big(\int_U g^\epsilon(s,\gamma,a)\,q(s)(da) - \int_U g^\epsilon(s,\gamma,a)\,\mu(s)(da)\Big)\Big\}\,\tilde N(ds, d\gamma).
\end{aligned}
$$
Now
$$
\begin{aligned}
E\big|X^\epsilon(t) - X^\mu(t)\big|^2 \le{}& C\,E\int_0^t \Big|\Big(\int_U b^\epsilon(s,a)\,\mu(s)(da) - \int_U b^\mu(s,a)\,\mu(s)(da)\Big) \\
&\quad - \int_\Gamma \Big(\int_U g^\epsilon(s,\gamma,a)\,\mu(s)(da) - \int_U g^\mu(s,\gamma,a)\,\mu(s)(da)\Big)\nu(d\gamma)\Big|^2\,ds \\
&+ C\epsilon^2\,E\int_0^t \Big|\Big(\int_U b^\epsilon(s,a)\,q(s)(da) - \int_U b^\epsilon(s,a)\,\mu(s)(da)\Big) \\
&\quad - \int_\Gamma \Big(\int_U g^\epsilon(s,\gamma,a)\,q(s)(da) - \int_U g^\epsilon(s,\gamma,a)\,\mu(s)(da)\Big)\nu(d\gamma)\Big|^2\,ds \\
&+ C\,E\int_0^t \Big|\int_U \sigma^\epsilon(s,a)\,\mu(s)(da) - \int_U \sigma^\mu(s,a)\,\mu(s)(da)\Big|^2\,ds \\
&+ C\epsilon^2\,E\int_0^t \Big|\int_U \sigma^\epsilon(s,a)\,q(s)(da) - \int_U \sigma^\epsilon(s,a)\,\mu(s)(da)\Big|^2\,ds \\
&+ C\,E\int_0^t \int_\Gamma \Big|\int_U g^\epsilon(s,\gamma,a)\,\mu(s)(da) - \int_U g^\mu(s,\gamma,a)\,\mu(s)(da)\Big|^2\,\nu(d\gamma)\,ds \\
&+ C\epsilon^2\,E\int_0^t \int_\Gamma \Big|\int_U g^\epsilon(s,\gamma,a)\,q(s)(da) - \int_U g^\epsilon(s,\gamma,a)\,\mu(s)(da)\Big|^2\,\nu(d\gamma)\,ds,
\end{aligned}
$$
where $C$ is a constant. Since, as mentioned before, $b$, $\sigma$, $g$ are continuously differentiable with respect to their arguments with bounded derivatives, this yields
$$
E\big|X^\epsilon(t) - X^\mu(t)\big|^2 \le C\,E\int_0^t \big|X^\epsilon(s) - X^\mu(s)\big|^2\,ds + C\epsilon^2.
$$
Applying Gronwall's lemma together with the Burkholder–Davis–Gundy inequality gives the desired result. □

Lemma 2 Let $Z(t)$ be the solution of the following variational equation:
$$
\begin{aligned}
dZ(t) ={}& \Lambda(t) + \int_U b^\mu(t,a)\big(q(t)(da) - \mu(t)(da)\big)\,dt + \int_U \sigma^\mu(t,a)\big(q(t)(da) - \mu(t)(da)\big)\,dW(t) \\
&+ \int_\Gamma \int_U g^\mu(t,\gamma,a)\big(q(t)(da) - \mu(t)(da)\big)\,\tilde N(dt, d\gamma), \qquad Z(0) = 0,
\end{aligned}
\tag{12}
$$
where
$$
\begin{aligned}
\Lambda(t) ={}& \Big[\int_U b^\mu_x(t,a)Z(t)\,\mu(t)(da) + \int_U b^\mu_{x_\delta}(t,a)Z(t-\delta)\,\mu(t)(da) \\
&\quad + \int_U b^\mu_{\tilde x}(t,a)E[Z(t)]\,\mu(t)(da) + \int_U b^\mu_{\tilde x_\delta}(t,a)E[Z(t-\delta)]\,\mu(t)(da)\Big]\,dt \\
&+ \Big[\int_U \sigma^\mu_x(t,a)Z(t)\,\mu(t)(da) + \int_U \sigma^\mu_{x_\delta}(t,a)Z(t-\delta)\,\mu(t)(da) \\
&\quad + \int_U \sigma^\mu_{\tilde x}(t,a)E[Z(t)]\,\mu(t)(da) + \int_U \sigma^\mu_{\tilde x_\delta}(t,a)E[Z(t-\delta)]\,\mu(t)(da)\Big]\,dW(t) \\
&+ \int_\Gamma \Big[\int_U g^\mu_x(t,\gamma,a)Z(t)\,\mu(t)(da) + \int_U g^\mu_{x_\delta}(t,\gamma,a)Z(t-\delta)\,\mu(t)(da) \\
&\quad + \int_U g^\mu_{\tilde x}(t,\gamma,a)E[Z(t)]\,\mu(t)(da) + \int_U g^\mu_{\tilde x_\delta}(t,\gamma,a)E[Z(t-\delta)]\,\mu(t)(da)\Big]\,\tilde N(dt, d\gamma).
\end{aligned}
$$
Then
$$
\lim_{\epsilon \to 0} E\Big|\frac{X^\epsilon(t) - X^\mu(t)}{\epsilon} - Z(t)\Big|^2 = 0.
$$
Proof For any $\mu^\epsilon, \mu \in \mathcal{R}$, set
$$
\eta(t) = \frac{X^\epsilon(t) - X^\mu(t)}{\epsilon} - Z(t).
$$
Using Eqs. (3) and (12) and the definition of $\mu^\epsilon(s)$ in (11),
$$
\begin{aligned}
\eta(t) ={}& \frac{1}{\epsilon}\Big\{\int_0^t \Big(\int_U b^\epsilon(s,a)\,\mu(s)(da) - \int_U b^\mu(s,a)\,\mu(s)(da)\Big)\,ds \\
&\quad + \int_0^t \Big(\int_U \sigma^\epsilon(s,a)\,\mu(s)(da) - \int_U \sigma^\mu(s,a)\,\mu(s)(da)\Big)\,dW(s) \\
&\quad + \int_0^t \int_\Gamma \Big(\int_U g^\epsilon(s,\gamma,a)\,\mu(s)(da) - \int_U g^\mu(s,\gamma,a)\,\mu(s)(da)\Big)\,\tilde N(ds, d\gamma)\Big\} \\
&+ \int_0^t \Big(\int_U b^\epsilon(s,a)\big(q(s)(da) - \mu(s)(da)\big) - \int_U b^\mu(s,a)\big(q(s)(da) - \mu(s)(da)\big)\Big)\,ds \\
&+ \int_0^t \Big(\int_U \sigma^\epsilon(s,a)\big(q(s)(da) - \mu(s)(da)\big) - \int_U \sigma^\mu(s,a)\big(q(s)(da) - \mu(s)(da)\big)\Big)\,dW(s) \\
&+ \int_0^t \int_\Gamma \Big(\int_U g^\epsilon(s,\gamma,a)\big(q(s)(da) - \mu(s)(da)\big) - \int_U g^\mu(s,\gamma,a)\big(q(s)(da) - \mu(s)(da)\big)\Big)\,\tilde N(ds, d\gamma) \\
&- \Lambda(t).
\end{aligned}
\tag{13}
$$
Applying the conditions of [18] to $b$, $\sigma$, $g$ in Eq. (13) and then using Lemma 1,
$$
E|\eta(t)|^2 \le C\,E\int_0^t |\eta(s)|^2\,ds + C\,E|\alpha^\epsilon(t)|^2,
\tag{14}
$$
and
$$
\lim_{\epsilon \to 0} E|\alpha^\epsilon(t)|^2 = 0.
$$
Applying Gronwall's lemma to Eq. (14) concludes the result. □
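Both lemmas above are closed by Gronwall's lemma: an estimate of the form $\varphi(t) \le \alpha + C \int_0^t \varphi(s)\,ds$ forces $\varphi(t) \le \alpha e^{Ct}$, so $\varphi$ vanishes as the forcing term $\alpha$ (here proportional to $\epsilon^2$) tends to zero. A discrete-time check of this implication, with illustrative constants:

```python
import numpy as np

def gronwall_bound(alpha, C, t_grid):
    """Continuous Gronwall: phi <= alpha + C*int_0^t phi  =>  phi <= alpha*exp(C*t)."""
    return alpha * np.exp(C * t_grid)

# Worst case saturating the integral inequality on a grid:
# explicit Euler on phi' = C*phi with phi(0) = alpha (alpha ~ C*eps^2).
C, eps = 2.0, 1e-3
alpha = C * eps**2
dt = 1e-3
t_grid = np.arange(0.0, 1.0 + dt, dt)
phi = np.empty_like(t_grid)
phi[0] = alpha
for n in range(len(t_grid) - 1):
    phi[n + 1] = phi[n] * (1.0 + C * dt)   # phi_{n+1} = phi_n + C*phi_n*dt

bound = gronwall_bound(alpha, C, t_grid)
```

Since $(1 + C\,dt)^n \le e^{C n\,dt}$, the saturating solution stays below the exponential Gronwall bound, and both shrink to zero with $\alpha \propto \epsilon^2$.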
Lemma 3 Let $\mu$ be the optimal relaxed control minimizing the cost over $\mathcal{R}$ and $X^\mu(t)$ the associated optimal trajectory. Then for any $q(t) \in \mathcal{R}$ we have
$$
\begin{aligned}
0 \le E\Big[\int_0^\infty \Big\{\int_U \big(f^\mu_x(t,a)Z(t) &+ f^\mu_{x_\delta}(t,a)Z(t-\delta) + f^\mu_{\tilde x}(t,a)E[Z(t)] \\
&+ f^\mu_{\tilde x_\delta}(t,a)E[Z(t-\delta)]\big)\,\mu(t)(da) + \int_U f^\mu(t,a)\big(q(t)(da) - \mu(t)(da)\big)\Big\}\,dt\Big].
\end{aligned}
$$

Proof The proof is similar to that of Lemma 2, with the help of Eq. (11). □

Theorem 2 Let $\mu(t)$ be an optimal relaxed control minimizing the cost functional (4) over $\mathcal{R}$, and let $X^\mu(t)$ and $E[X^\mu(t)]$ be the corresponding state process of the system (3) and its mean. If there exist unique adjoint processes $p^\mu(t)$, $k^\mu(t)$, $r^\mu(t,\gamma)$ solving the adjoint stochastic differential equation (6) and satisfying the transversality condition $\lim_{T \to \infty} p^\mu(T)Z(T) = 0$, then
$$
\begin{aligned}
&\mathcal{H}(s, X^\mu(s), X^\mu(s-\delta), E[X^\mu(s)], E[X^\mu(s-\delta)], i, \mu(s), Y(s), p^\mu(s), k^\mu(s), r^\mu(s,\gamma)) \\
&\quad \le \mathcal{H}(s, X^\mu(s), X^\mu(s-\delta), E[X^\mu(s)], E[X^\mu(s-\delta)], i, q(s), Y(s), p^\mu(s), k^\mu(s), r^\mu(s,\gamma)).
\end{aligned}
$$
Proof By Lemma 3 we have
$$
\begin{aligned}
0 \le E\Big[\int_0^\infty \Big\{\int_U \big(f^\mu_x(t,a)Z(t) &+ f^\mu_{x_\delta}(t,a)Z(t-\delta) + f^\mu_{\tilde x}(t,a)E[Z(t)] \\
&+ f^\mu_{\tilde x_\delta}(t,a)E[Z(t-\delta)]\big)\,\mu(t)(da) + \int_U f^\mu(t,a)\big(q(t)(da) - \mu(t)(da)\big)\Big\}\,dt\Big].
\end{aligned}
\tag{15}
$$
Now apply the Itô formula to the process $p^\mu(t)Z(t)$:
$$
\begin{aligned}
&p^\mu(T)Z(T) - p^\mu(0)Z(0) \\
&\quad = \int_0^T p^\mu(t)\Big\{\int_U b^\mu_x(t,a)Z(t)\,\mu(t)(da) + \int_U b^\mu_{x_\delta}(t,a)Z(t-\delta)\,\mu(t)(da) \\
&\qquad + \int_U b^\mu_{\tilde x}(t,a)E[Z(t)]\,\mu(t)(da) + \int_U b^\mu_{\tilde x_\delta}(t,a)E[Z(t-\delta)]\,\mu(t)(da) \\
&\qquad - \int_\Gamma \Big(\int_U g^\mu_x(t,\gamma,a)Z(t)\,\mu(t)(da) + \int_U g^\mu_{x_\delta}(t,\gamma,a)Z(t-\delta)\,\mu(t)(da) \\
&\qquad\quad + \int_U g^\mu_{\tilde x}(t,\gamma,a)E[Z(t)]\,\mu(t)(da) + \int_U g^\mu_{\tilde x_\delta}(t,\gamma,a)E[Z(t-\delta)]\,\mu(t)(da)\Big)\nu(d\gamma)\Big\}\,dt \\
&\qquad + \int_0^T p^\mu(t)\Big\{\int_U b^\mu(t,a)\big(q(t)(da) - \mu(t)(da)\big) - \int_\Gamma \int_U g^\mu(t,\gamma,a)\big(q(t)(da) - \mu(t)(da)\big)\nu(d\gamma)\Big\}\,dt \\
&\qquad - \int_0^T Z(t)\big(\mathcal{H}^\mu_x(t) + \mathcal{H}^\mu_{x_\delta}(t) + \mathcal{H}^\mu_{\tilde x}(t) + \mathcal{H}^\mu_{\tilde x_\delta}(t)\big)\,dt \\
&\qquad + \int_0^T k^\mu(t)\Big\{\int_U \sigma^\mu_x(t,a)Z(t)\,\mu(t)(da) + \int_U \sigma^\mu_{x_\delta}(t,a)Z(t-\delta)\,\mu(t)(da) \\
&\qquad\quad + \int_U \sigma^\mu_{\tilde x}(t,a)E[Z(t)]\,\mu(t)(da) + \int_U \sigma^\mu_{\tilde x_\delta}(t,a)E[Z(t-\delta)]\,\mu(t)(da) \\
&\qquad\quad + \int_U \sigma^\mu(t,a)\big(q(t)(da) - \mu(t)(da)\big)\Big\}\,dt \\
&\qquad + \int_0^T \int_\Gamma r^\mu(t,\gamma)\Big\{\int_U g^\mu_x(t,\gamma,a)Z(t)\,\mu(t)(da) + \int_U g^\mu_{x_\delta}(t,\gamma,a)Z(t-\delta)\,\mu(t)(da) \\
&\qquad\quad + \int_U g^\mu_{\tilde x}(t,\gamma,a)E[Z(t)]\,\mu(t)(da) + \int_U g^\mu_{\tilde x_\delta}(t,\gamma,a)E[Z(t-\delta)]\,\mu(t)(da) \\
&\qquad\quad + \int_U g^\mu(t,\gamma,a)\big(q(t)(da) - \mu(t)(da)\big)\Big\}\nu(d\gamma)\,dt.
\end{aligned}
$$
Letting $T \to \infty$ and applying the transversality condition $\lim_{T \to \infty} p^\mu(T)Z(T) = 0$ to the above equation, together with (5) and (15), gives
$$
\begin{aligned}
0 \le E\int_0^\infty \big[&\mathcal{H}(t, X^\mu(t), X^\mu(t-\delta), E[X^\mu(t)], E[X^\mu(t-\delta)], i, q(t), Y(t), p^\mu(t), k^\mu(t), r^\mu(t,\gamma)) \\
&- \mathcal{H}(t, X^\mu(t), X^\mu(t-\delta), E[X^\mu(t)], E[X^\mu(t-\delta)], i, \mu(t), Y(t), p^\mu(t), k^\mu(t), r^\mu(t,\gamma))\big]\,dt.
\end{aligned}
\tag{16}
$$
Let $t \in [0,\infty)$ and $\vartheta > 0$, and define the relaxed control
$$
q^\vartheta(s) = \begin{cases} q(s) & \text{on } [t, t+\vartheta], \\ \mu(s) & \text{otherwise}. \end{cases}
$$
Clearly $q^\vartheta \in \mathcal{R}$. For brevity, write
$$
\mathcal{H}(s; q) := \mathcal{H}(s, X^\mu(s), X^\mu(s-\delta), E[X^\mu(s)], E[X^\mu(s-\delta)], i, q(s), Y(s), p^\mu(s), k^\mu(s), r^\mu(s,\gamma)).
$$
Applying $q^\vartheta$ in (16) and dividing by $\vartheta$, we get
$$
0 \le \frac{1}{\vartheta}\,E \int_t^{t+\vartheta} \big[\mathcal{H}(s; q) - \mathcal{H}(s; \mu)\big]\,ds.
$$
Letting $\vartheta \to 0$, we obtain
$$
0 \le E\big[\mathcal{H}(s; q) - \mathcal{H}(s; \mu)\big].
$$
Let $F$ be an arbitrary element of the $\sigma$-algebra $\mathcal{F}_t$ and set $\beta(s) = q(s)\,\mathbf{I}_F + \mu(s)\,\mathbf{I}_{\Omega \setminus F}$, where $\mathbf{I}$ is the indicator function. Clearly $\beta \in \mathcal{R}$. Applying the above inequality with $\beta$ gives
$$
0 \le E\big[\mathbf{I}_F\big(\mathcal{H}(s; q) - \mathcal{H}(s; \mu)\big)\big]
$$
for all $F \in \mathcal{F}_t$, which implies
$$
0 \le E\big[\mathcal{H}(s; q) - \mathcal{H}(s; \mu) \,\big|\, \mathcal{F}_t\big].
$$
Since the quantity inside the conditional expectation is $\mathcal{F}_t$-measurable, the desired result follows. □
Remark 2 Sections 3 and 4 develop the infinite horizon version of the stochastic maximum principle and the necessary conditions for optimality of the proposed mean-field delay semi-Markov model. The transversality condition (A3) at infinity plays an important role in applying the stochastic maximum principle; it characterizes the behavior of the adjoint variables at infinity and is also called a zero limit condition.
5 Example

In this section a portfolio optimization problem in a financial market is studied, in which the dynamics are modulated by a semi-Markov process and a mean-field term. Let $X^q(t)$ be the price process and let $r_i(t, \theta(t))$, $i = 1, 2, 3$, be riskless interest rates at time $t$, modulated by the semi-Markov process. Moreover, the stock price process is driven by a one-dimensional Brownian motion and a Poisson jump process. The dynamical system is

$$
\begin{aligned}
dX^q(t) ={}& \int_U \big[\big(A + r_1(t, \theta(t))\big)X^q(t) + \bar A\,E[X^q(t)] + Ba\big]\,q(t)(da)\,dt \\
&+ \int_U \big[\big(C + r_2(t, \theta(t))\big)X^q(t) + \bar C\,E[X^q(t)] + Da\big]\,q(t)(da)\,dW(t) \\
&+ \int_\Gamma \int_U \big[\big(Q + r_3(t, \theta(t))\big)X^q(t) + \bar Q\,E[X^q(t)] + Ra\big]\,q(t)(da)\,\tilde N(dt, d\gamma), \\
X^q(0) ={}& x_0,
\end{aligned}
\tag{17}
$$
for any $q \in \mathcal{R}$, where $E[X^q(t)]$ denotes the average behavior of the price process and $A$, $\bar A$, $B$, $C$, $\bar C$, $D$, $Q$, $\bar Q$, $R$ and $x_0$ are constants. The cost functional corresponding to the dynamical system (17) is
$$
J(q) = \frac{1}{2}\,E \int_0^\infty \int_U \big(S(X^q(t))^2 + \bar S(E[X^q(t)])^2 + a^2\big)\,q(t)(da)\,dt,
\tag{18}
$$
where $S$, $\bar S$ are constants. The Hamiltonian corresponding to the system (17) and (18) is
$$
\begin{aligned}
\mathcal{H}^q(t) ={}& \frac{1}{2}\int_U \big(S(X^q(t))^2 + \bar S(E[X^q(t)])^2 + a^2\big)\,q(t)(da) \\
&+ \int_U p^q(t)\big[\big(A + r_1(t, \theta(t))\big)X^q(t) + \bar A\,E[X^q(t)] + Ba\big]\,q(t)(da) \\
&+ \int_U k^q(t)\big[\big(C + r_2(t, \theta(t))\big)X^q(t) + \bar C\,E[X^q(t)] + Da\big]\,q(t)(da) \\
&+ \int_\Gamma \int_U r^q(t,\gamma)\big[\big(Q + r_3(t, \theta(t))\big)X^q(t) + \bar Q\,E[X^q(t)] + Ra\big]\,q(t)(da)\,\nu(d\gamma),
\end{aligned}
\tag{19}
$$
where $\mathcal{H}^q(t)$ is convex and $p^q(t)$, $k^q(t)$, $r^q(t,\gamma)$ are the adjoint processes of the following adjoint differential equation corresponding to the system (17) and (18):
$$
\begin{aligned}
dp^q(t) ={}& -\int_U \Big[S X^q(t) + \bar S\,E[X^q(t)] + p^q(t)\big(A + r_1(t, \theta(t)) + \bar A\big) + k^q(t)\big(C + r_2(t, \theta(t)) + \bar C\big) \\
&\quad + r^q(t,\gamma)\big(Q + r_3(t, \theta(t)) + \bar Q\big)\Big]\,q(t)(da)\,dt \\
&+ \int_U k^q(t)\,q(t)(da)\,dW(t) + \int_U \int_\Gamma r^q(t,\gamma)\,q(t)(da)\,\tilde N(dt, d\gamma).
\end{aligned}
\tag{20}
$$
Here the adjoint variable $p^q(t)$ satisfies the transversality condition given in (A3), and $\mathcal{H}^q(t)$ satisfies assumptions (A1) and (A2). Hence, by Theorem 1, there exists an optimal control for the system (17) and (18). To find the optimal control $\mu(t)$, differentiate (19) with respect to the control variable $a$ and equate to zero, which gives
$$
\mu(t) = -\big(p^q(t)B + k^q(t)D + r^q(t,\gamma)R\big),
$$
where $\mu(t)$ is the required optimal control for the system (17) and (18). The adjoint processes $p^q(t)$, $k^q(t)$, $r^q(t,\gamma)$ can be found from the adjoint differential equation (20).
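The Hamiltonian (19) is quadratic in $a$, so the first-order condition $a + p^q(t)B + k^q(t)D + r^q(t,\gamma)R = 0$ gives the stated feedback. The following sketch confirms numerically that this stationary point minimizes the $a$-dependent part of (19); the numeric values of $B$, $D$, $R$ and of the frozen adjoint processes are illustrative assumptions.

```python
import numpy as np

def hamiltonian_in_a(a, p, k, r, B, D, R, const=0.0):
    """The a-dependent part of (19): 0.5*a**2 + (p*B + k*D + r*R)*a + const."""
    return 0.5 * a**2 + (p * B + k * D + r * R) * a + const

# Illustrative frozen values of the adjoints and model constants (assumptions).
p, k, r = 0.8, -0.3, 0.1
B, D, R = 1.0, 0.5, 2.0

a_star = -(p * B + k * D + r * R)   # the feedback mu(t) derived in the text

# Check that a_star minimizes the quadratic Hamiltonian over a fine grid.
grid = np.linspace(-5, 5, 10001)
vals = hamiltonian_in_a(grid, p, k, r, B, D, R)
a_grid_min = grid[np.argmin(vals)]
```

Because the coefficient of $a^2$ in (18) is $\tfrac12 > 0$, the stationary point is indeed the global minimizer, consistent with the convexity assumption (A1) for this example.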
6 Conclusion

In this paper, the optimal control of a mean-field delay system with semi-Markov modulated jump-diffusion processes is formulated and discussed over an infinite time horizon, with the control domain assumed to be non-convex. An infinite horizon version of the stochastic maximum principle is established, and a necessary condition for optimality is proved using the convex perturbation technique. As an illustration of the theoretical results, a portfolio optimization problem in a financial market is discussed. For future work, the authors will develop differential game problems and near-optimal control problems.

Acknowledgements The authors would like to thank the editor, the associate editor, and the anonymous referees for their constructive corrections and valuable suggestions that improved the manuscript.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.
References

1. Agram, N., S. Haadem, B. Øksendal, and F. Proske. 2013. A maximum principle for infinite horizon delay equations. SIAM Journal on Mathematical Analysis 45 (4): 2499–2522.
2. Agram, N., and B. Øksendal. 2014. Infinite horizon optimal control of forward–backward stochastic differential equations with delay. Journal of Computational and Applied Mathematics 259: 336–349.
3. Agram, N., and E.E. Rose. 2018. Optimal control of forward–backward mean-field stochastic delayed systems. Afrika Matematika 29 (1–2): 149–174.
4. Bahlali, K., M. Mezerdi, and B. Mezerdi. 2017. On the relaxed mean-field stochastic control problem. Stochastics and Dynamics 18 (3): 1850024.
5. Balasubramaniam, P., and P. Tamilalagan. 2017. The solvability and optimal controls for impulsive fractional stochastic integro-differential equations via resolvent operators. Journal of Optimization Theory and Applications 174 (1): 139–155.
6. Bensoussan, A., J. Frehse, and P. Yam. 2013. Mean field games and mean field type control theory. New York: Springer.
7. Chala, A., and S. Bahlali. 2014. Stochastic controls of relaxed-singular problems. Random Operators and Stochastic Equations 22 (1): 31–41.
8. Chala, A. 2014. The relaxed optimal control problem for mean-field SDEs systems and application. Automatica 50 (3): 924–930.
9. D'Amico, G., J. Janssen, and R. Manca. 2006. Homogeneous semi-Markov reliability models for credit risk management. Decisions in Economics and Finance 28 (2): 79–93.
10. Deshpande, A. 2014. Sufficient stochastic maximum principle for the optimal control of semi-Markov modulated jump-diffusion with application to financial optimization. Stochastic Analysis and Applications 32 (6): 911–933.
11. Djehiche, B., H. Tembine, and R. Tempone. 2015. A stochastic maximum principle for risk-sensitive mean-field type control. IEEE Transactions on Automatic Control 60 (10): 2640–2649.
12. Dmitruk, A.V., and N.V. Kuz'kina. 2005. Existence theorem in the optimal control problem on an infinite time interval. Mathematical Notes 78: 466–480.
13. Ghosh, M.K., and A. Goswami. 2009. Risk minimizing option pricing in a semi-Markov modulated market. SIAM Journal on Control and Optimization 48 (3): 1519–1541.
14. Ghosh, M.K., and S. Saha. 2012. Optimal control of Markov processes with age-dependent transition rates. Applied Mathematics and Optimization 66 (2): 257–271.
15. Gikhman, I.I., and A.V. Skorokhod. 1983. The theory of stochastic processes II. Berlin: Springer.
16. Haadem, S., B. Øksendal, and F. Proske. 2013. Maximum principles for jump-diffusion processes with infinite horizon. Automatica 49 (7): 2267–2275.
17. Hafayed, M., S. Meherrem, D.H. Gucoglu, and S. Eren. 2017. Variational principle for stochastic singular control of mean-field Lévy forward–backward system driven by orthogonal Teugels martingales with application. International Journal of Modelling, Identification and Control 28 (2): 97–113.
18. Li, J. 2012. Stochastic maximum principle in the mean-field controls. Automatica 48 (2): 366–373.
19. Lv, S., R. Tao, and Z. Wu. 2016. Maximum principle for optimal control of anticipated forward–backward stochastic differential delayed systems with regime switching. Optimal Control Applications and Methods 37 (1): 154–175.
20. Ma, H., and B. Liu. 2016. Maximum principle for partially observed risk-sensitive optimal control problems of mean-field type. European Journal of Control 32: 16–23.
21. Ma, H., and B. Liu. 2017. Infinite horizon optimal control problem of mean-field backward stochastic delay differential equation under partial information. European Journal of Control 36: 43–50.
22. Martelli, M., and B. Stavros. 1991. Delay differential equations and dynamical systems. Berlin: Springer.
23. Meng, Q., and Y. Shen. 2015. Optimal control of mean-field jump-diffusion systems with delay: A stochastic maximum principle approach. Journal of Computational and Applied Mathematics 279: 13–30.
24. Muthukumar, P., and R. Deepa. 2017. Infinite horizon optimal control of forward–backward stochastic system driven by Teugels martingales with Lévy processes. Stochastics and Dynamics 17 (3): 1750020.
25. Shen, Y., Q.X. Meng, and P. Shi. 2014. Maximum principle for mean-field jump-diffusion stochastic delay differential equations and its application to finance. Automatica 50 (6): 1565–1579.
26. Socgnia, V.K., and O. Menoukeu-Pamen. 2015. An infinite horizon stochastic maximum principle for discounted control problem with Lipschitz coefficients. Journal of Mathematical Analysis and Applications 422 (1): 684–711.
27. Tamilalagan, P., and P. Balasubramaniam. 2018. The solvability and optimal controls for fractional stochastic differential equations driven by Poisson jumps via resolvent operators. Applied Mathematics and Optimization 77 (3): 443–462.
28. Tankov, P. 2003. Financial modelling with jump processes. Boca Raton: CRC Press.
29. Zhang, F. 2013. Stochastic maximum principle for mixed regular-singular control problems of forward–backward systems. Journal of Systems Science and Complexity 26 (6): 886–901.