Math Meth Oper Res https://doi.org/10.1007/s00186-017-0609-x ORIGINAL ARTICLE
Non-linear filtering and optimal investment under partial information for stochastic volatility models Dalia Ibrahim1 · Frédéric Abergel1
Received: 13 October 2016 © Springer-Verlag GmbH Germany 2018
Abstract This paper studies the question of filtering and maximizing terminal wealth from expected utility in partial information stochastic volatility models. The special feature is that the only information available to the investor is the one generated by the asset prices, and the unobservable processes will be modeled by stochastic differential equations. Using the change of measure techniques, the partial observation context can be transformed into a full information context such that coefficients depend only on past history of observed prices (filter processes). Adapting the stochastic nonlinear filtering, we show that under some assumptions on the model coefficients, the estimation of the filters depend on a priori models for the trend and the stochastic volatility. Moreover, these filters satisfy a stochastic partial differential equations named “Kushner–Stratonovich equations”. Using the martingale duality approach in this partially observed incomplete model, we can characterize the value function and the optimal portfolio. The main result here is that, for power and logarithmic utility, the dual value function associated to the martingale approach can be expressed, via the dynamic programming approach, in terms of the solution to a semilinear partial differential equation which depends on the filters estimate and the volatility. We illustrate our results with some examples of stochastic volatility models popular in the financial literature. Keywords Partial information · Stochastic volatility · Utility maximization · Martingale duality method · Non-linear filtering · Kushner–Stratonovich equations · Semilinear partial differential equation
B 1
Dalia Ibrahim
[email protected] Laboratoire MICS-Mathématiques et Informatique pour la Complexité et les Systèmes, CentraleSupélec, Plateau de Moulon, 9 rue Joliot-Curie, 91192 Gif-sur-Yvette, France
123
D. Ibrahim, F. Abergel
1 Introduction The basic problem of mathematical finance is the problem of an economic agent who invests in a financial market so as to maximize the expected utility of his terminal wealth. In the framework of continuous time model, the utility maximization problem has been studied by the first time in a Black–Scholes environment (full information) via the Hamilton–Jacobi–Bellman equation and dynamic programming. As in financial market models, we do not have in general a complete knowledge of all the parameters, which may be driven by unobserved random factors. So, we are in the situation of the utility maximization problem with partial observation, which has been studied extensively in the literature by Detemple (1986), Dothan and Feldman (1986), Lakner (1995, 1998) etc. There are many generalizations of Merton’s setting. The natural generalizations was to model the volatility by a stochastic process. In this paper, we consider a financial market where the price process of risky asset follows a stochastic volatility model and we require that investors observe just the stock price. So we are in the framework of partially observed incomplete market, where our aim is to solve the utility maximization problem in this context. In order to solve this problem with partial observation, the common way is to use the stochastic non-linear filtering and change of measure techniques, so as the partial observation context can be transformed into a full information context. Then it is possible to solve this problem either with the martingale approach or via dynamic programming approach. Models with incomplete information have been investigated by Dothan and Feldman (1986) using dynamic programming methods in a linear Gaussian filtering, Lakner (1995, 1998) has solved the partial optimization problem via martingale approach and worked out the special case of the linear Gaussian filtering. Pham and Quenez (2001) treated the case of partial information stochastic volatility model where they have combined stochastic filtering techniques and a martingale duality approach to characterize the value function and the optimal portfolio of the utility maximization problem. They have studied two cases: the case where the risks of the model are assumed to be independent Gaussian processes and the Bayesian case studied by Karatzas and Zhao (2001). In this paper, we are in the same framework studied by Pham and Quenez (2001), but here we assume that the unobservable processes are modeled by stochastic differential equations. More precisely, the unobservable drift of the stock and that of the stochastic volatility are modeled by stochastic differential equations. The main result in this case, is that the filters estimate of the risks depend on a priori models for the trend and the stochastic volatility. There are two reasons for this result, firstly we need to choose the models of the trend and the stochastic volatility such that the risks dynamics can be described only in terms of them. Secondly, we need to choose these models such that the coefficients of the risks dynamics satisfy some regularity assumptions, like globally Lipschitz conditions and some finite order moments will be imposed. We show that the filters estimate of the risks satisfy a system of stochastic partial differential equations named “Kushner–Stratonovich equations”. But these equations are valued in an infinite dimensional space and cannot be solved explicitly, so numerical approximations can be used to resolve them. Also, we study the case of
123
Non-linear filtering and optimal investment under partial...
finite dimensional filters like Kalman–Bucy filter. We illustrate our results with several popular examples of stochastic volatility models. After replacing the original partial information problem by a full information one which depends only on the past history of observed prices, it is then possible to use the classical theory for stochastic control problem. Here we will be interested by the martingale approach to resolve our utility optimization problem. As the reduced market is incomplete, we complement the martingale approach by using the theory of stochastic control to solve the related dual optimization problem. In Pham and Quenez (2001), they have also used the martingale approach, but they have studied the case where the dual optimizer vanishes. The main result in this paper is that the solution of the related dual problem, for power and logarithmic utility, can be expressed in terms of the solution to a semilinear partial differential equation which depends also on the filters estimates and the volatility. The paper is organized as follows. In Sect. 2, we describe the model and formulate the optmization problem. In Sect. 3, we use the non-linear filtering techniques and the change of measure techniques in order to transform the partial observation context into a full information context such that coefficients depend only on past history of observed prices (filter processes). In Sect. 4, we show that the filters estimate depend on a priori models for the trend and the stochastic volatility. We illustrate our results with examples of stochastic volatility models popular in the financial literature. Finally, in Sect. 5, we use the martingale duality approach to resolve our maximization problem, where we show that for power and logarithmic utility functions, the dual value function and the dual optimizer can be expressed in terms of the solution to a semilinear partial differential equation. By consequence, the primal value function, the optimal wealth and the optimal portfolio depend also on this solution.
2 Formulation of the problem Let (, F, P) be a complete probability space equipped with a filtration F = {Ft , 0 ≤ t ≤ T } satisfying the usual conditions, where T > 0 is a fixed time horizon. The financial market consists of one risky asset and a bank account (bound). The price of the bound is assumed for simplicity to be 1 over the entire continuous time-horizon [0, T ] and the risky asset has dynamics: d St = μt dt + g(Vt )dWt1 , S0 = s0 , St d Vt = f (βt , Vt )dt + k(Vt )(ρdWt1 + dμt = ζ (μt )dt
+ ϑ(μt )dWt3 ,
(2.1)
1 − ρ 2 dWt2 ), V0 = v0 ,
μ0 = m 0 .
(2.2) (2.3)
where s0 , v0 and μ0 = m 0 are constants. The processes W 1 and W 2 are two independent Brownian motions defined on (, F, P) and −1 ≤ ρ ≤ 1 is the correlation coefficient. W 3 is a standard Brownian motion independent of W 1 and W 2 . The drift μ = {μt , 0 ≤ t ≤ T } is not observable and follows a continuous stochastic differential
123
D. Ibrahim, F. Abergel
equation. The process βt can be taken as a function in terms of μt or another continuous unobservable process, which is solution to a stochastic differential equation. We assume that the functions g, f , k, ζ and ϑ ensure existence and uniqueness for solutions to the above stochastic differential equations. A Lipschitz conditions are sufficient, but we do not impose these on the parameters at this stage, as we do not wish to exclude some well-known stochastic volatility models from the outset. Also, we can assume that the drift μt can be replaced by μt g(Vt ), that is we have a factor model. Moreover, we assume that g(x), k(x) > 0 and the solution of (2.2) does not explode, that is, the solution does not touch 0 or ∞ in finite time. The last condition can be verified form Feller’s test for explosions given in Karatzas and Shreve (1991, p. 348). In the sequel, we denote by F S = {FtS , 0 ≤ t ≤ T } (resp. FV = {FtV , 0 ≤ t ≤ T }) the filtration generated by the price process S (resp. by the stochastic volatility V ). Also we denote by G = {Gt , 0 ≤ t ≤ T } the natural P-augmentation of the market filtration generated by the price process S. 2.1 The optimization problem Let πt be the fraction of the wealth that the trader decides to invest in the risky asset at time t, and 1 − πt is the fraction of wealth invested in the bound. We assume that the trading strategy is self-financing, then the wealth process corresponding to a portfolio π is defined by R0π = x and satisfies the following stochastic differential equation d Rtπ = Rtπ πt μt dt + πt g(Vt )dWt1 . A function U : R → R is called a utility function if it is strictly increasing, strictly concave of class C 2 . We assume that the investor wants to maximize the expected utility of his terminal wealth. The optimization problem thus reads as J (x) = sup E[U (RTπ )], π ∈A
x > 0,
(2.4)
where A denotes the set of the admissible controls (πt , 0 ≤ t ≤ T ) which are F S adapted, and satisfy the integrability condition 0
T
g 2 (Vs )πs2 ds < ∞
P − a.s.
(2.5)
We are in a context when an investor wants to maximize the expected utility from terminal wealth, where the only information available to the investor is the one generated by the asset prices, therefore leading to a utility maximization problem in partially observed incomplete model. In order to solve it, we aim to reduce it to a maximization problem with full information. For that, it becomes important to exploit all the information coming from the market itself in order to continuously update the knowledge of the not fully known quantities and this is where stochastic filtering becomes useful.
123
Non-linear filtering and optimal investment under partial...
3 Reduction to a full observation context Let us consider the following processes μt , g(Vt ) −1 β˜t := 1 − ρ 2 k(Vt ) ( f (βt , Vt ) − ρk(Vt )μ˜ t ) ,
μ˜ t :=
(3.1) (3.2)
we assume that they verify the integrability condition
T
|μ˜ t |2 + |β˜t |2 dt < ∞ a.s.
0
Here μ˜ t and β˜t are the unobservable processes that account for the market price of risk. The first is related to the asset’s Brownian component and the second is related to the stochastic volatility’s Brownian motion. Also we introduce the following process Lt = 1 − 0
t
L s μ˜ s dWs1 + β˜s dWs2 ,
(3.3)
and we shall make the usual standing assumption of filtering theory. Assumption 1 The process L is a martingale, that is, E[L T ] = 1. Under this assumption, we can now define a new probability measure P˜ equivalent to P on (, F) characterized by d P˜ |Ft = L t , dP
0 ≤ t ≤ T.
(3.4)
Then Girsanov’s transformation ensures that t ˜ F)-Brownian motion, μ˜ s ds is a (P, W˜ t1 = Wt1 + 0 t 2 2 ˜ F)-Brownian motion. ˜ β˜s ds is a (P, Wt = Wt +
(3.5) (3.6)
0
Also, we have that (μ˜ t , β˜t ) is independent of the Brownian motion W˜ t1 , W˜ t2 . Therefore, the dynamics of (S, V ) under P˜ become d St = g(Vt )d W˜ t1 , S0 = s0 . St d Vt = ρ k(Vt )d W˜ t1 + 1 − ρ 2 k(Vt )d W˜ t2 , V0 = v0 .
(3.7) (3.8)
123
D. Ibrahim, F. Abergel
We now state a lemma which will highly relevant in the following. The proof of this lemma is similar to Lemma 3.1 in Pham and Quenez (2001). For that, we need to make the following assumption: Assumption 2 We assume that the function g 2 (v) is continuous in v and its inverse function denoted by κ is also continuous with respect to v. Lemma 3.1 Under Assumptions 1 and 2, the filtration G is the augmented filtration of (W˜ 1 , W˜ 2 ). Proof The sketch of the proof is summarized by two steps: Firstly, we show that the filtration G is equal to the enlarged progressive filtration F S ∨ FV . The first inclusion is obvious. The other inclusion F S ∨ FV ⊂ G is deduced from the fact that g 2 (Vt ) is G-adapted (which is deduced from the quadratic variation of log(St )) and Assumption 2. Secondly, from (3.7), (3.8) and the fact that g(v), k(v) > 0, ˜ 1 W˜ 2 F is equal to F S ∨ FV . This ends the proof. we have that FW Let us then make the following assumption on the risk processes μ, ˜ β˜ . ∀t ∈ [0, T ],
E|μ˜ t | + E|β˜t | < ∞.
(3.9)
Under this assumption, we can introduce the conditional law of μ, ˜ β˜ defined by: μt := E[μ˜ t |Gt ], β t := E[β˜t |Gt ].
(3.10) (3.11)
˜ F) martingale defined as Ht = 1 , and we aim to construct We denote by H the (P, Lt the restriction of P equivalent to P˜ on (, G). First, let us consider the conditional version of Bayes’ formula: for any P integrable random variable X (X ∈ L 1 (P)), we have E˜ [X Ht |Gt ] E [X |Gt ] = . (3.12) ˜ [Ht |Gt ] E Then by taking X = L t , we get L˜ t := E [L t |Gt ] =
1 . ˜E[Ht |Gt ]
(3.13) ˜
Therefore, from (3.4) and (3.13), we have the following restriction to G: ddPP |Gt = L˜ t . Finally, from Proposition 2.30 in Bain and Crisan (2009) and Proposition 2.2.7 in Pardoux (1991), we have the following result. 1
2
Proposition 3.2 The following processes W and W are independent (P, G)Brownian motions. t t
1 1 1 ˜ μ˜ s − μs ds := Wt − W t = Wt + μs ds, 0
123
0
Non-linear filtering and optimal investment under partial... 2
W t = Wt2 +
t t β˜s − β s ds := W˜ t2 − β s ds. 0
0
These processes are called the innovation processes in filtering theory. They include the distances between the true values of μ, ˜ β˜ and their estimates. Then, by means of the innovation processes, we can describe the dynamics of (S, V, R) within the framework of full observation model: ⎧ 1 d St ⎪ = g(Vt )μt dt + g(Vt )dW t , S0 = s0 , ⎪ S ⎪ t ⎪ ⎪ ⎨ d Vt = ρ k(Vt )μ + 1 − ρ 2 k(Vt )β dt + ρk(Vt )dW 1 t t t (Q) = ⎪ + 1 − ρ 2 k(Vt )dW 2 , V0 = v0 , ⎪ t ⎪ ⎪ 1 ⎪ π ⎩ d Rt = Rtπ πt g(Vt ) μt dt + g(Vt )dW t , R0 = r0 .
4 Filtering We have shown that conditioning arguments can be used to replace the initial partial information problem by a full information problem one which depends only on the past history of observed prices. But the reduction procedure involves the filters estimate μt and β t . Our filtering problem can be summarized as follows. From Lemma 3.1, we have ˜1 ˜2 G = FW ∨ FW , then the vector (W˜ 1 , W˜ 2 ) corresponds to the observation process. On the other hand, our signal process is given by (μ˜t , β˜t ). So the filtering problem is to characterize the conditional distribution of (μ˜t , β˜t ), given the observation data ˜ 1 W˜ 2 F . G = FW We show in this section how the filters estimate depend on the models of the drift and the stochastic volatility. Using the non-linear filtering theory (presented in “Appendix A”), we can deduce that the filters estimate satisfy a system of stochastic partial differential equations, called “Kushner–Stratonovich equations”. Generally these equations are infinite-dimensional and thus very hard to solve them explicitly. So, in order to simplify the situation and in order to obtain a closed form for the optimal portfolio, we will be interested in some cases of models, when we can deduce a finite dimensional filters. 4.1 General case Let us assume that the processes μ˜ t and β˜t are solutions of the following stochastic differential equations d
μ˜ t β˜t
=
3 1 b1 b2 a Wt Wt g1 g2 d + d dt + a g1 g2 Wt4 Wt2 b1 b2
(4.1)
where we denote for simplification the functions a := a(μ˜ t , β˜t ), a := a(μ˜ t , β˜t ),…b2 = b2 (μ˜ t , β˜t ), and the Brownian motion (Wt3 , Wt4 ) is independent of (Wt1 , Wt2 ).
123
D. Ibrahim, F. Abergel
On the other hand, the dynamics of the observation process (W˜ 1 , W˜ 2 ) is given by d
W˜ t1 W˜ t2
=d
Wt1 Wt2
+
μ˜ t β˜t
dt
(4.2)
Notations 1 Let us denote by b1 b2 a g1 g2 W˜ t1 , A = , B = , G = a g1 g2 b1 b2 W˜ t2 3 1 1 h1 Wt Wt Wt = , h= K = (B B T + GG T ). (4.3) Mt = h2 Wt4 Wt2 2
Xt =
μ˜ t β˜t
, t =
where for x = (m, b), h 1 (x) = m, h 2 (x) = b and T denotes the transposition operator. With these notations, the signal-observation processes (X t , t ) satisfy d X t = A(X t )dt + G(X t )d Mt + B(X t )dWt
(4.4)
d t = h(X t )dt + dWt .
(4.5) ˜1
Remark 4.1 To avoid confusion in the sequel, we have G = FW
˜2
FW = F .
4.1.1 Estimate μt and β t The main result of this section will be presented in Proposition 4.3, which consist in showing that the estimation of μt and β t depends essentially on an a priori models for the trend and the stochastic volatility. Also, we need to make some assumptions on the coefficients of μ˜ t and β˜t , that is, on the coefficients of the dynamics of μ˜ t and β˜t , in order to deduce the estimation of μt and β t . To more understand this result, the idea is based essentially on the following two steps: First step Describe the dynamics of (μ˜ t , β˜t ) as in (4.1). We show that this description depends essentially on the model of Vt . In fact, if we apply Itô’s formula on μ˜ t and β˜t in order to describe their dynamics, we have that Vt still appear, for that we need to describe Vt only in terms of μ˜ t and β˜t in order to disappear it from their dynamics. This can be done from the definition of β˜t and taking into account the choice of the variable βt or more precisely the choice of f (βt , Vt ). We will clarify this point with an examples in paragraph 4.1.3. Second step Verification of some regularity assumptions. When we describe the dynamics of (μ˜ t , β˜t ) as in (4.1), we must need to check if the coefficients of the dynamics verify some regularity assumptions, in order to use the results of nonlinear filtering theory presented in “Appendix A”. We still assume that our signal-observation system (X t , t ) satisfies (4.4) and (4.5) and we now state some assumptions and lemma which will be important to prove our result in Proposition 4.3.
123
Non-linear filtering and optimal investment under partial...
Assumptions I.i) The functions A, G and B are globally Lipschitz. I.ii) X 0 has finite second moment. I.iii) X 0 has finite third moment. Lemma 4.2 Let (X, ) be the solution of (4.4) and (4.5) and assume that h has linear growth condition. If assumptions I.i) and I.ii) are satisfied, then (A.3) is satisfied. Moreover, if assumption I.iii) is satisfied, then (A.5) is satisfied. Proof The proof is given in Bensoussan (1992) (see, Lemmas 4.1.1 and 4.1.5).
As we have mentioned, the following result shows that we need to introduce an a priori models for the trend and the stochastic volatility in order to describe the dynamics of (μ˜ t , β˜t ) as in (4.1), and therefore we can deduce the dynamics of the filters estimate (μt , β t ) from Proposition 5.19 in “Appendix A”. More precisely, we show that these estimates depend essentially on the model of the volatility Vt , that is, we need to choose the dynamics of Vt such that the above two steps will be verified. Proposition 4.3 If there exists a function ϒ : R2 → R such that Vt = ϒ(μ˜ t , β˜t ) and such as with this function, the dynamics of X t = (μ˜ t , β˜t ) can be described as in (4.4) and assumptions I.i), I.ii) and I.iii) hold, then the conditional distribution αt (φ) := E[φ(X t )|Ft ] satisfies the following Kushner–Stratonovich equation 1 dαt (φ) = αt (Aφ)dt + αt h 1 + B 1 φ − αt (h 1 )αt (φ) dW t 2 + αt h 2 + B 2 φ − αt (h 2 )αt (φ) dW t .
(4.6)
For every φ ∈ D(A)∩D(B), where the operators A, B 1 and B 2 are given in “Appendix A”. Moreover the dynamics of (μt , β t ) satisfy the following stochastic differential equations 1 2 dμt = αt (a)dt + [αt h 21 + b1 − μ2t ]dW t + [αt (h 2 h 1 + b2 ) − β t μt ]dW t , dβ t = αt (a)dt
1 + [αt h 1 h 2 + b1 − μt β t ]dW t
+ [αt
h 22
+ b2
(4.7) 2 2 − β t ]dW t .
(4.8)
Proof From the definition of μ˜ t and β˜t and depending on the models of μt and βt , we have from Itô’s formula that Vt still appear in the dynamics of μ˜ t and β˜t . As ˜ β˜t ) Vt = ϒ(μ˜ t , β˜t ), then we can describe the dynamics of the signal process X t = (μ, as in (4.4). On the other hand, from the definition of the observation process given by (4.5), we have that the sensor function h = (h 1 , h 2 ) has a linear growth condition. Since assumptions I.i), I.ii) and I.iii) are verified, then we can deduce from Lemma 4.2 that conditions (A.3) and (A.5) are proved. Therefore the dynamics of αt given in (4.6) is deduced from Proposition 5.19 given in “Appendix A”. Then μt (resp.β t ) can be deduce from (4.6) by replacing φ by h 1 (resp. by h 2 ). Here, we must be careful because Kushner–Stratonovich equation (4.6) holds for any
123
D. Ibrahim, F. Abergel
φ ∈ D(A) ∩ D(B). But neither h 1 nor h 2 belongs to D(A) ∩ D(B) (because h 1 and h 2 are not bounded). We proceed by truncating of h 1 (resp.h 2 ) at a fixed level which we let tend to infinity. For this, let us introduce the functions (ψ k )k>0 defined as ψ k (x) = ψ(x/k),
x ∈ R2 ,
where ⎧ ⎪ ⎨1 2 |x| −1 ψ(x) = exp |x| 2 −4 ⎪ ⎩ 2
if |x| ≤ 1 if 1 < |x| < 2 if |x| ≥ 2.
Obviously, for all k, ψ k ∈ Cb∞ (R2 ). Also, all the derivatives of ψ k tend uniformly to 0. In particular lim ||Aψ k ||∞ = 0,
k→∞
lim ||∂i ψ k ||∞ = 0, i = 1, 2.
k→∞
Now using the following relations lim h 1 ψ k (x) = h 1 (x), |h 1 (x)ψ k (x)| ≤ |h 1 (x)| and lim As (h 1 ψ k )(x) = As h 1 (x),
k→∞
k→∞
and from the definition of ψ k (x), we have that h 1 (x)ψ k (x) (resp.h 2 (x)ψ k (x)) belongs to D(A) ∩ D(B), then replacing in Eq. (4.6) φ by h 1 ψ k (resp.φ by h 2 ψ k ) and from dominated convergence theorem, we may pass to the limit as k → ∞, then we can deduce the dynamics (4.7) and (4.8) for μt := αt (h 1 ) and β t := αt (h 2 ). Remark 4.4 Notice that the characterization of the filter estimate as the unique strong solution to the Kushner–Stratonovich equation (4.6) holds also under local Lipschitz and growth conditions on the coefficients A, G and B. For more details, see Ceci and Colaneri (2014) and Kurtz and Ocone (1988). 4.1.2 Existence and uniqueness of the solution to Eq. (4.6) We now take sufficient assumption on the coefficients of the signal-observation system in order to show that Eq. (4.6) has a unique solution, see Bain and Crisan (2009, chap. 4). We define in the following the space within with we prove the uniqueness. Let us define the space of measure-valued stochastic processes within which we prove uniqueness of the solution to Eq. (4.6). This space has to be chosen so that it contains only measures with respect to which the integral of any function with linear growth is finite. The reason of this choice is that we want to allow to the coefficients of the signal and observation processes to be unbounded. Let ψ : R2 → R be the function ψ(x) = 1+|x|, for any x ∈ R2 . We define C l (R2 ) as the space of continuous functions φ such that φ/ψ ∈ Cb (R2 ), where Cb (R2 ) is the space of bounded continuous functions.
123
Non-linear filtering and optimal investment under partial...
Let us denote by Ml (R2 ) the space of finite measure μ such that μ(ψ) < ∞. In particular, this implies that μ(φ) < ∞ for all φ ∈ C l (R2 ). Moreover, we endow Ml (R2 ) with the corresponding weak topology: A sequence (μn ) of measures in Ml (R2 ) converges to μ ∈ Ml (R2 ) if and only if limn→∞ μn (φ) = μ(φ), for all φ ∈ C l (R2 ). Definition 4.5 (1) The Class U is the space of all t -adapted Ml (R2 )-valued stochastic processes (μ)t0 with càdlàg paths such that, for all t 0, we have ˜ E
t
(μs (ψ)) ds < ∞. 2
0
˜ is the space of all t -adapted Ml (R2 )-valued stochastic processes (2) The Class U (μ)t0 with càdlàg paths such that the process m μ μ belongs to the class U, where the process m μ is defined as μ
m t = exp
t
μs (h T )d s −
0
1 2
t
μs (h T )μs (h)ds .
0
Now we state the uniqueness result of the solution to Eq. (4.6). Proposition 4.6 Assuming that the functions A and K defined in (4.3) have twice continuously differentiable components and all their derivatives of first and second ˜ order are bounded. Then Eq. (4.6) has a unique solution in the class U. Proof The proof can be found in Bain and Crisan (2009, chap. 4, Theorem 4.19). Remark 4.7 The filter Eq. (4.6) describes an infinite dimensional stochastic differential equation driven by the innovations process. For that, we need to use some numerical methods adapting to infinite dimensional filtering problem. For example, the so-called particular Monte Carlo method, see for instance Crisan and Lyons (1999). Also, we can use the approximation scheme used by Gobet et al. (2006) which consist in discretizing the Zakai equation, which is linear, and then deduce the approximation of the conditional distribution αt from Kllianpur–Striebel formula (A.4). 4.1.3 Application We will present in this section two stochastic volatility models for which Proposition 4.3 can be applied in order to deduce the filters estimate. Example 1 Let us consider the following Garch-model d St = Vt μt dt + dWt1 , St
d Vt = βt (θ − Vt ) dt + σV Vt ρdWt1 + 1 − ρ 2 dWt2 ,
dμt = λμ θμ − μt dt + σμ dWt3 ,
123
D. Ibrahim, F. Abergel
dβt = λβ βt dt + σβ βt dWt4 . where W 1 , W 2 , W 3 and W 4 are independent. Here the risks of the model are given by μ˜ t = μt
and
βt (θ − Vt ) ρ β˜t = − μ˜ t . 2 1 − ρ Vt 1 − ρ2
In order to compute the filters estimate in this case of Garch model, we will be interested by using Proposition 4.3. For that, we need to take θ = 0. Because, if we apply Itô’s formula on μ˜ t and β˜t in the case where θ = 0, we get a dynamics such that its coefficients don’t satisfy assumption I.i) and therefore Proposition 4.3 can’t be applied. Let θ = 0, then from Itô’s formula, we obtain that d
μ˜ t β˜t
=A
μ˜ t β˜t
dt + G
μ˜ t β˜t
d Mt .
where the functions A and G are given by ⎞ λμ (θμ − m) ρ(λβ + λμ ) ρλμ θμ ⎠ , A =⎝ λβ b + m− ρ ρ σ 0 μ m G = − ρσμ σ (b + ρ m) . b β ρ ρ
m b
⎛
and ρ = 1 − ρ 2 . So the dynamics of (μ˜ t , β˜t ) is described as in (4.4), where the function B is null. So we are in the case where the signal process X t := (μ˜ t , β˜t ) and the observation processes t := (W˜ t1 , W˜ t2 ) are independent. This implies that the operator B 1 and B 2 will disappear in the Zakai and Kushner–Stratonovich equations. As for this model, the assumptions of Proposition 4.3 are satisfied, then for any φ, the conditional distribution αt is given by 1
2
dαt (φ) = αt (Aφ)dt +[αt (h 1 φ)−αt (h 1 )αt (φ)] dW t +[αt (h 2 φ)−αt (h 2 )αt (φ)] dW t . 1 GG T . 2 Therefore, the filters estimate μt and β t satisfy the following dynamics Here the operator A is given by (A.6), where K =
1 2 dμt = λμ (θμ − μt )dt + αt h 21 − μ2t dW t + αt (h 2 h 1 ) − βt μt dW t ,
ρ(λβ + λμ ) ρλμ θμ 1 dβ t = λβ β t + dt + αt (h 1 h 2 ) − μt β t dW t μt − ρ ρ
123
Non-linear filtering and optimal investment under partial...
2 2 + αt h 22 − βt dW t . Example 2 Let us consider the following Log Ornstein–Uhlenbeck model d St = e Vt μt dt + dWt1 St
(4.9)
d Vt = λV (θ − Vt ) dt + σV ρdWt1 + σV 1 − ρ 2 dWt2
dμt = λμ θμ − μt dt + σμ dWt3 . Here the risks of the model are given by μ˜ t = μt
and β˜t =
ρ μ˜ t . 1 − ρ2
(4.10) (4.11) λV (θ − Vt ) − σV 1 − ρ 2
σV ρ σV 1 − ρ 2 β˜t and therefore the risks μ˜ t and β˜t have the μ˜ t − Then Vt = θ − λV λV following dynamics
μ˜ t β˜t
d
=
3 1 μ˜ t Wt Wt A ˜ + b dt + G d + B d . Wt4 Wt2 βt
Where ⎛
⎞ −λμ 0 λμ θμ σμ 0 ρ ρ ⎝ ⎠ A = ρ[λμ − λV ] , b= − λ θ , G= − σ 0 , −λV μ μ μ ρ ρ ρ 0 0 and ρ = 1 − ρ 2 . B = − ρ λ −λ V V ρ Notice that all the assumptions made in Proposition 4.3 are satisfied, then we can deduce that the filters estimate μt and β t satisfy Eqs. (4.7) and (5.8). But, as the functions A, B and G are deterministic, so we are in the framework of a classical Kalman-Bucy filter with correlation between the signal and the observation processes, see Pardoux (1991, Chap. 6) and Kallianpur (1980, Theorem 10.5.1). Therefore using Theorem 10.5.1 in Kallianpur (1980), we can deduce the following reduced forms for the stochastic differential equations of the filters d
μt βt
1 μt Wt = A + b dt + (B + (t)) d 2 . βt W
(4.12)
t
Where (t) = (i j (t))1≤i, j≤≤2 is the conditional covariance matrix (2 × 2) of the signal satisfies the following deterministic matrix Riccati equation d(t) = A(t) + (t)A T + GG T − (t)(t) − (t)B T − B(t). dt
(4.13)
123
D. Ibrahim, F. Abergel
We can also consider the case where the mean θ of the stochastic volatility Vt is a linear function of μt . For example, assume the above dynamics of (St , Vt , μt ) with θ = μt . Therefore, (μt , β t ) verifies (4.12) with G and B still the same matrix given above, but A and b become ⎞ ⎛ ⎞ −λμ 0 λμ θμ ⎠ , b = ⎝ λV ⎠. ρ A = ⎝ ρ[λμ − λV ] λμ λV − λμ θμ − −λV σV ρ ρ ρ σV ρ ⎛
Remark 4.8 The special feature of the second model is that the signal processes (μ˜ t , β˜t ) satisfies (4.4) with A, B and G are deterministic. So we are in the framework of a classical Kalman-Bucy filter. The advantage of this filter is that it is a finite dimensional filter. Also notice that Kalman filter can be deduced theoretically from the general Kushner–Stratonovich equation (4.6).
5 Application to portfolio optimization Before presenting our results, let us recall that the trader’s objective is to solve the following optimization problem: J (x) = sup E[U (RTπ )] π ∈A
x > 0,
(5.1)
where the dynamics of Rtπ in the full information context is given by 1 d Rtπ = Rtπ πt g(Vt ) μt dt + g(Vt )dW t , R0π = x. Here A is the set of admissible controls πt which are F S -adapted processes, take their values in a compact U ⊂ R, and satisfy the integrability condition 0
T
g 2 (Vs )πs2 ds < ∞
P a.s.
(5.2)
We have shown that the partial observation portfolio optimization problem is transformed into a full observation portfolio optimization problem where the filter term μt is appeared in the dynamic of the wealth. Therefore, one may apply the martingale or PDE approach in order to resolve our optimization problem. Here we will be interested by the martingale approach. The motivation to use martingale approach instead of PDE approach is that we don’t need to impose any constraint on the admissible control (see Remark 5.16). As the reduced market model is not complete, due to the stochastic factor V , we have to solve the related dual optimization problem. For that, we complement the martingale approach by using the PDE approach in order to resolve explicitly the dual problem. For power and logarithmic utility functions, we show by verification result that under some assumptions on the market coefficients, the dual value function
123
Non-linear filtering and optimal investment under partial...
and the dual optimizer are related to the solution of a semilinear partial differential equation. Therefore the primal value function, the optimal wealth and the optimal portfolio depend also on this solution, see Proposition 5.14. 5.1 Martingale dual approach Before presenting our result concerning the solution of the dual problem, let us begin by reminding some general results about the martingale approach. The martingale approach in incomplete markets is based on a dual formulation of the optimization problem in terms of a suitable family of (P, G)-local martingales. The important result for the dual formulation is the martingale representation theorem given in Pham and Quenez (2001) for (P, G)-local martingales with respect to the 1 2 innovation processes W and W . Lemma 5.1 (Martingale representation theorem) Let A be any (P, G)-local martingale. Then, there exist a G-adapted processes φ and ψ, P a.s. square-integrable and such that t t 1 2 At = φs dW s + ψs dW s . (5.3) 0
0
Now, we aim to describe the dual formulation of the optimization problem. We now make the following assumption which will be useful in the sequel
T 0
μ2t dt < ∞,
P − a.s.
(5.4)
T
For any G-adapted process ν = {νt , 0 ≤ t ≤ T }, which satisfies introduce the (P, G)-local martingale strictly positive
0
νt2 dt < ∞, we
t t 1 t 2 1 t 2 1 2 Z tν = exp − μs dW s − νs dW s − μs ds − νs ds 2 0 2 0 0 0
(5.5)
When, E Z Tν = 1, the process Z ν is a martingale and then there exists a probability measure Q equivalent to P with dQ |G = Z Tν . dP t Here μ is the risk related to the asset’s Brownian motion W 1 , which is chosen such that Q is an equivalent martingale measure, that is, the process Z ν R is a (P, G)local martingale. On the other hand, ν is the risk related to the stochastic volatility’s Brownian motion and this risk will be determined as the optimal solution of the dual problem defined below.
123
D. Ibrahim, F. Abergel
Consequently, from Itô’s formula, the process Z ν satisfies 1 2 d Z tν = −Z tν μt dW t + νt dW t .
(5.6)
As shown by Karatzas et al. (1991), the solution of the primal problem (5.1) relying upon solving the dual optimization problem dQ ˜ ) := inf E U˜ (z Z Tν ) , Jdual (z) = inf E U (z dP Q∈Q ν∈K
z>0
(5.7)
where, • Q is the set of equivalent martingale measures given by Q = {Q ∼ P| R is a local (Q, G) − martingale}.
(5.8)
• U˜ is the convex dual of U given by U˜ (y) = sup [U (m) − ym] ,
m > 0.
(5.9)
m>0
• K is the Hilbert space of G-adapted process ν, satisfies the integrability condition sup E[exp(νt2 )] < ∞, for some > 0.
t∈[0,T ]
(5.10)
We henceforth impose the following assumptions on the utility functions in order to guarantee that the dual problem admits a solution ν ∗ ∈ K. Assumption 3
• For some p ∈ (0, 1), γ ∈ (1, ∞), we have pU (x) ≥ U (γ x)
∀x ∈ (0, ∞).
• x → xU (x) is nondecreasing on (0, ∞). • For every z ∈ (0, ∞), there exists ν ∈ K such that Jdual (z) < ∞. In the sequel, we denote by I :]0, ∞[→]0, ∞[ the inverse function of U on ]0, ∞[. It’s a decreasing function and verifies lim x→0 I (x) = ∞ and lim x→∞ I (x) = 0. Now from Karatzas, Lehoczky, Shreve and XU (1991) and Owen (2002), we have the following result about the solution of the primal utility maximization problem (2.4). Theorem 5.2 Under Assumption 3, for all z > 0, the dual problem (5.7) admits a solution ν ∗ (z) ∈ K. Moreover, The optimal wealth for the utility maximization problem (2.4) is given by Rt∗ = E
123
Z Tν Z tν
∗ ∗
ν∗
I (z x Z T )|Gt
Non-linear filtering and optimal investment under partial...
∗ ∗ where ν ∗ = ν ∗ (z x ) and z x is the Lagrange multiplier such that E Z Tν (z x ) I (z x Z Tν (z x ) ) = x. Also the optimal portfolio π ∗ is implicitly determined by the equation d Rt∗ = Rt∗ πt∗ g(Vt )d W˜ t1 .
(5.11)
∗ ν (z ) Remark 5.3 Cvitani´c and Karatzas (1992) proved that the constraint E Z T x I (z x ν ∗ (z ) Z T x ) = x is done by choosing z x ∈ argmin z>0 {Jdual (z) + x z}. Now we begin by presenting our results about the solution of the dual problem. 5.1.1 Solution of the dual problem (5.7) We remark from Theorem 5.2 that optimal wealth depends on the optimal choice of ν. So we are interested in the following by finding the optimal risk ν which is solution of (5.7). Here we present two cases. Firstly, we show that in the case when the filter estimate ˜1 of the price risk μt ∈ FtW , the infimum for the dual problem is reached for ν ∗ = 0. Secondly, for the general case, the idea is to derive a Hamilton–Jacobi–Bellman equation for dual problem, which involves the volatility risk ν as control process. ˜1
Lemma 5.4 Assume that μt ∈ FtW , then the infimum for the dual problem is reached for ν ∗ = 0, that is dQ ˜ Jdual (z) = inf E U (z ) = E U˜ z Z T0 . dP Q∈Q
(5.12)
Proof The proof is similar to that in Proposition 5.1 of Pham and Quenez (2001). Generally, the filter estimate of the price risk doesn’t satisfy Lemma 5.4 and therefore it’s a difficult problem to derive an explicit characterization for the solution of the dual problem and therefore for the optimal wealth and portfolio. For that, we need to present the dual problem as a stochastic control problem with controlled process Z tν and control process ν. Firstly, from the underlying dynamics of Z tν , we notice that our optimization problem 5.7 has three state variables which will be taken in account to describe the associated Hamilton–Jacobi–Bellman equation: the dynamic (5.6) of Z tν , the dynamic of the stochastic volatility Vt which is given in system (Q) and the dynamic of the filter estimate of the price risk μt . Remark 5.5 We have shown in filtering section, that the filter estimate μt satisfies a stochastic differential equation which in general is infinite dimensional and is not a Markov process. Therefore, we can’t use it to describe the HJB. On the other hand, we have also shown that for some models of stochastic volatility models, we can obtain a finite dimensional stochastic differential equation for μt which is also a Markov process. So in the sequel, we will assume that the filter μt is Markov.
123
D. Ibrahim, F. Abergel
On the other hand, we need in general to take in account the dynamics of μt and βt . But for simplification, we will consider βt as a linear function of μt or a constant. We shall make in the sequel the following assumption on the filter estimate. Assumption 4 We assume that μt is Markov. So for initial time t ∈ [0, T ] and for fixed z, the dual value function is defined by the following stochastic control problem Jdual (z, t, z, v, m) := inf E U˜ (z Z Tν )|Z tν = z, Vt = v, μt = m . ν∈K
(5.13)
Where the dynamics of (Z tν , Vt , μt ) are given as follows: 1
2
d Z tν = −Z tν μt dW t − Z tν νt dW t , Z 0ν = 1, 1 2 d Vt = f (μt , Vt )dt + ρk(Vt )dW t + 1 − ρ 2 k(Vt )dW t , V0 = v0 , 1
2
dμt = τ (μt )dt + ϑ(μt )dW t + ϒ(μt )dW t , μ0 = m 0 , where m 0 is a constant and f is a linear function. Remark 5.6 The dual value function in (5.7) is simply deduced from Jdual (z) = Jdual (z, 0, z, v, m). If we assume that Yt = (Vt , μt ) be a bi-dimensional process, then the controlled process (Z tν , Yt ) satisfies the following dynamics 1
1
2
d Z tν = −Z tν ψ(Yt )dW t − Z tν νt dW t
(5.14)
dYt = (Yt )dt + (Yt )dWt
(5.15)
2
where Wt = (W t , W t ) is a bi-dimensional Brownian motion, and for y = (v, m), we have f (m, v) ψ(y) = m, (y) = τ (m) and (y) =
ρk(v) ϑ(m)
1 − ρ 2 k(v) . ϒ(m)
Then we have the new reformulation of the above stochastic problem (5.13) and its HJB equation as follows Jdual (z, t, z, y) := inf E U˜ (z Z Tν )|Z tν = z, Yt = y . ν∈K
123
(5.16)
Non-linear filtering and optimal investment under partial...
Now assuming that U˜ satisfies the following property U˜ (λx) = g1 (λ)U˜ (x) + g2 (λ),
(5.17)
for λ > 0 and for any functions g1 and g2 . The special advantage of this assumption is: we can solve the dual problem (5.16) independently of z. In general, a solution to the dual problem (5.16) depends on z, but for this kind of U˜ , this dependence vanishes. Then (5.16) reads as follows Jdual (z, t, z, y) = g1 (z) inf E U˜ (Z Tν )|Z tν = z, Yt = y + g2 (z) ν∈K
= g1 (z) J˜(t, z, y) + g2 (z). Where
J˜(t, z, y) = inf E U˜ (Z Tν )|Z tν = z, Yt = y . ν∈K
(5.18)
In the sequel, we will be interested in the stochastic control problem (5.18) in order to deduce the dual value function Jdual (z, t, z, y). Also, we will call J˜ as the dual value function. Formally, the Hamilton–Jacobi–Bellman equation associated to the above stochastic control problem (5.18) is the following nonlinear partial differential equation ∂ J˜ 1 + T r (y) T (y)D 2y J˜ + T (y)D y J˜ ∂t 2 1 2 ˜ (ψ(y)2 + ν 2 )z 2 Dz2 J˜ − z[ψ(y)K 1T (y) + ν K 2T (y)]Dz,y + inf J = 0, ν∈K 2 (5.19) with the boundary condition
J˜(T, z, y) = U˜ (z),
and the associated optimal dual optimizer νt∗ =
ν∗
(5.20)
is given by
2 J˜ K 2T (y) Dz,y
z Dz2 J˜
.
Here D y and D 2y denote the gradient and the Hessian operators with respect to the 2 is the second derivative vector with respect to the variables z and y variable y. Dz,y ρk(v) 1 − ρ 2 k(v) . and for y = (v, m), K 1 (y) = and K 2 (y) = ϑ(m) ϒ(m) The above HJB is nonlinear, but if we consider the case of power and logarithmic utility functions and via a suitable transformation, we can make this equation semilinear and then characterize the dual value function J˜ through the classical solution of this semilinear equation which is more simpler than the usual fully nonlinear HJB equation.
123
D. Ibrahim, F. Abergel
5.2 Special cases for utility function Let us consider the two more standard utility functions: logarithmic and power, defined by
U (x) =
U˜ (z) =
⎧ ln(x) ⎪ ⎪ ⎨
x ∈ R+ and
xp ⎪ ⎪ x ∈ R+ , p ∈ (0, 1) ⎩ p ⎧ −(1 + ln(z)) z ∈ R+ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩
−
zq q
z ∈ R, q =
p p−1
These utility functions are of particular interests. Firstly, they satisfy property (5.17) and secondly, due to the homogeneity of the convex dual functions together with the fact that the process Z tν and the control ν appear linearly, we can suggest a suitable transformation, for which we can characterize the dual value functions J˜ through a classical solution of a semilinear semilinear partial differential equations which will be described below. Let us now make the following assumption which will be useful for proving our verification results. Assumption H: (i) and are Lipschitz and C 1 with bounded derivatives. (ii) T satisfies the uniform parabolicity assumption, that is, there exists c > 0 such that for y, ξ ∈ R2 2 ( T (y))i j ξi ξ j ≥ c|ξ |2 . i, j=1
(iii) is bounded or is a constant matrix. (iv) There exists a positive constant such that supt∈[0,T ] E[exp(ψ 2 (Yt )] < ∞. Notice that the Lipschitz assumption on and ensure the existence and uniqueness of the solution of (5.15). Moreover, we have E[ sup |Ys |2 ] < ∞.
(5.21)
0≤s≤t
Remark 5.7 Under the Lipschitz assumption H (i) on and H (iii), we obtain from (5.15) and Gronwall’s lemma, that there exists some positive constant C such that t |Yt | ≤ C 1 + |Wu |du + |Wt | . 0
Therefore we deduce that there exists some > 0 such that supt∈[0,T ] E[exp(|Yt |2 )] < ∞.
123
Non-linear filtering and optimal investment under partial...
5.2.1 Logarithmic utility For the logarithmic utility case, we can look for a candidate solution of (5.19) and (5.20) in the form (5.22) J˜(t, z, y) = −(1 + ln(z)) − (t, y) Then direct substitution of (5.22) in (5.19) and (5.20) gives us the following semilinear partial differential equation for −
∂ 1 − T r (y) T (y)D 2y + H (y, D y ) = 0, ∂t 2
(5.23)
with the boundary condition (T, y) = 0.
(5.24)
Here the Hamiltonian H is defined by H (y, Q) = − (y)Q + inf T
ν
1 2 2 (ψ (y) + ν ) . 2
We now state a verification result for the logarithmic case, which relates the solution of the above semilinear (5.23) and (5.24) to the stochastic control problem (5.18). Theorem 5.8 (Verification theorem) Let assumption H (i) holds. Suppose that there exists a solution ∈ C 1,2 ([0, T )×R2 )∩C 0 ([0, T ]×R2 ) to the semilinear (5.23) with the terminal condition (5.24). Also we assume that satisfies a polynomial growth condition, i.e: |(t, y)| ≤ C(1 + |y|k ) for some k ∈ N. Then the dual value function of (5.18) is given by J˜(t, z, y) = −1 − ln(z) − (t, y), and the optimal risk ν ∗ is given by ν ∗ = 0. Proof Let J˜ν (t, z, y) = E U˜ (Z Tν )|Z tν = z, Yt = y . From (5.18) and U˜ (z) = −1 − ln(z), we have the following expression for J˜ν T 1 2 2 ˜ (ψ (Ys ) + νs )ds . Jν (t, z, y) = −1 − ln(z) + E 2 t
(5.25)
Let ν be an arbitrary control process, Y the associated process with Yt = y and define the stopping time θn := T ∧ inf{s > t : |Ys − y| ≥ n}. Now, let be a C 1,2 solution to (5.23). Then, by Itô’s formula, we have
123
D. Ibrahim, F. Abergel
(θn , Yθn ) = (t, y) + +
θn
θn
t
∂ 1 + T r T D 2y + T D y (s, Ys )ds ∂t 2
(D y )T (s, Ys )dW s
t
1 ≤ (t, y) + 2
θn
(ψ
t
2
(Ys ) + νs2 )ds
θn
+
((D y )T )(s, Ys )dW s
t
(5.26) From the definition of θn , the integrand in the stochastic integral is bounded on [t, θn ], a consequence of the continuity of D y and assumption H (i). Then, by taking expectation, one obtains θn 1 2 2 E[(θn , Yθn )] ≤ (t, y) + E (ψ (Ys ) + νs )ds . 2 t We now take the limit as n increases to infinity, then θn → T a.s. From the growth condition satisfied by and (5.21), we can deduce the uniform integrability of ((θn , Yθn ))n . Therefore, it follows from the dominated convergence theorem and the boundary condition (5.24) that for all ν ∈ K −(t, y) ≤ E
T 1 (ψ 2 (Ys ) + νs2 )ds . 2 t
Then from (5.25), we have J˜ν (t, z, y) ≥ −1 − ln(z) − (t, y). Now by repeating the above argument by replacing ν by ν ∗ = 0 which is the optimal risk, we can finally deduce that J˜ν ∗ (t, z, y) = −1 − ln(z) − (t, y), which ends the proof since J˜(t, z, y) = inf ν∈K J˜ν (t, z, y). Let us now study the regularity of the solution to the semilinear (5.23) with terminal condition (5.24). Proposition 5.9 Under assumptions H (i) and (ii), there exists a solution ∈ C 1,2 ([0, T ) × R2 ) ∩ C 0 ([0, T ] × R2 ) with polynomial growth condition in y, to the semilinear (5.23) with terminal condition (5.24). Proof Under assumptions H (i) and (ii) and the fact that the Hamiltonian H satisfies a global Lipschitz condition on D y , we can deduce from Theorem 4.3 in Fleming and Soner (2006, p. 163) the existence and uniqueness of a classical solution to the semilinear Eq. (5.23). 5.2.2 Power utility As the above reasons given in the logarithmic case, we can suggest that the dual value function must be of the form q
z J˜(t, z, y) = − exp(−(t, y)). q
123
(5.27)
Non-linear filtering and optimal investment under partial...
Then if we substitute the above form in (5.19) and (5.20), we can deduce the following semilinear PDE for ∂ 1 − T r (y) T (y)D 2y + H (y, D y ) = 0, ∂t 2 (T, y) = 0.
−
(5.28) (5.29)
The Hamiltonian H is defined by 1 T Q (y) T (y)Q − Q T (y) 2 1 2 2 T T q(q − 1)(ψ (y) + ν ) + q ψ(y)K 1 (y) + ν K 2 (y) Q + inf ν∈K 2 (5.30) 1 = Q T (y) T (y) − G(y) Q − Q T F(y) + (y). (5.31) 2
H (y, Q) =
where for y := (v, m) q K 2 (y)K 2T (y), F(y) = (y) − qψ(y)K 1 (y), and q −1 1 (y) = q(q − 1)ψ 2 (y). 2 G(y) =
We now state a verification result for the power case, which relates the solution of the above semilinear Eq. (5.28), with boundary condition (5.29), to the stochastic control problem (5.18). Theorem 5.10 (Verification theorem) Let assumptions H (i), (iii) and (iv) hold. Suppose that there exists a solution ∈ C 1,2 ([0, T ) × R2 ) ∩ C 0 ([0, T ] × R2 ) with linear growth condition on the derivation D y , to the semilinear (5.28) with terminal condition (5.29). Then the dual value function is given by q
z J˜(t, z, y) = − exp(−(t, y)). q and the associated optimal ν ∗ is given by the Markov control {ν˜t = ν ∗ (t, Yt )} with νt∗ = −
1 K T (Yt )D y (t, Yt ). q −1 2
(5.32)
Proof Let us introduce the new probability Qν as follows t t 1 t 2 2 dQν 1 2 = exp − qψ(Yu )dW u − qνu dW u − q (ψ (Yu ) + νu2 )du , dP 2 0 0 0
123
D. Ibrahim, F. Abergel
From assumption (iv) and (5.10), the probability measure Qν with the density process dQν dP is well defined,see Liptser and Shiryaev (Liptser and Shiryaev 2001, P. 233). Let J˜ν (t, z, y) = E U˜ (Z ν )|Z tν = z, Yt = y . T
q
z From (5.18) and U˜ (z) = − , we have from Itô’s formula the following expression q for J˜ν T zq 1 q(q − 1)(ψ 2 (Yu ) + νu2 )du |Yt = y . (5.33) J˜ν (t, z, y) = − Eν exp q t 2 Also by Girsanov’s theorem, the dynamics of Y under Qν , is given by dYt = ((Yt ) − qψ(Yt )K 1 (Yt ) − qνt K 2 (Yt )) dt + (Yt )dWtν ,
(5.34)
where W ν is a bi-dimensional Brownian motion under Qν . Now, let be a C 1,2 solution to (5.28), then by Itô’s formula applied to (t, Yt ) under Qν , one obtains (T, YT ) = (t, y) T ∂ 1 + + ( − qψ K 1 − qν K 2 )T D y + T r ( T D 2y ) (u, Yu )du ∂t 2 t T + (D yT )(u, Yu )dWuν t
Since is solution of (5.28), then one obtains
T
(T, YT ) = (t, y)+
H (y, D y )+(−qψ K 1 − qνu K 2 )T D y (u, Yu )du
t
T
+ t
(D yT )(u, Yu )dWuν
(5.35)
T
1 q(q − 1)(ψ 2 (Yu ) + νu2 )du 2 t D yT T D y (u, Yu )du
≤ (t, y) +
1 T + 2 t T (D yT )(u, Yu )dWuν , + t
where the inequality comes from the representation (5.30) of the Hamiltonian. Therefore, we have 1 T T exp(−(t, y))Eν exp − D y T D y (u, Yu )du 2 t T − (D yT )(u, Yu )dWuν t
123
(5.36)
Non-linear filtering and optimal investment under partial...
≤ Eν exp
T
1 q(q − 1)(ψ 2 (Yu ) + νu2 )du 2
t
.
Let us now consider the exponential Q ν -local martingales t 1 t T tπ = exp − (D yT )(u, Yu )dWuν − D y T D y (u, Yu )du . 2 0 0 From the Lipschitz condition assumed in H (i) and from H (iii), we can deduce from Gronwall’s lemma that there exists a positive constant C such that t |Yt | ≤ C 1 + |Wuν |du + |Wtν | 0
Then we deduce that there exists some > 0 such that sup Eν [exp(|Yt |2 )] < ∞.
(5.37)
t∈[0,T ]
Therefore from (5.37) and the fact that D y satisfies a linear growth condition in y, we can deduce that π is a martingale under Q ν , therefore we have ν
exp(−(t, y)) ≤ E
T
exp t
1 q(q − 1)(ψ 2 (Yu ) + νu2 )du 2
.
The above inequality is proved for all ν ∈ K, therefore we can deduce from (5.33) that q
z J˜ν (t, z, y) ≥ − exp(−(t, y)). q Now by repeating the above argument and observing that the control ν ∗ given by (5.32), achieves equality in (5.36), we can finally deduce that J˜ν (t, z, y) = q − zq exp(−(t, y)), which ends the proof since J˜(t, z, y) = inf ν∈K J˜ν (t, z, y). It remains to prove that ν ∗ satisfies the integrability condition (5.10). In fact, as assumption H (iii) holds (that is K 2 is bounded) and D y satisfies a linear growth condition, then from Remark 5.7, ν ∗ satisfies (5.10). We now study the existence of a classical solution to (5.28)–(5.29). In fact, the existence of a classical solution to (5.28)–(5.29) cannot be found directly in the literature since Q → H (y, Q) is not globally Lipschitz on Q but satisfies a quadratic growth condition on Q. For that we can use the approach taken in Fleming and Soner (2006) by considering a certain sequence of approximating PDEs which are the HJB-equations of certain stochastic control problems for which the existence of smooth solution is well-known. Let us make some assumptions which will be useful to prove the regularity for the solution of (5.28).
123
D. Ibrahim, F. Abergel
Assumption H’ Let us consider either one of the following conditions H’.I: If is a deterministic matrix, in this case we need this assumption (i) and ψ are Lipschitz and C 1 with bounded derivatives. H’.II: If is not a deterministic matrix, in this case we need these assumptions (ii) and ψ.K 1 are Lipschitz and C 1 . (iii) ψ 2 , K 2 K 2T are C 1 with bounded derivatives. q (iv) T − K 2 K 2T satisfies the uniform parabolicity assumption. q −1 By similar arguments used by Pham (2002) and from the standard theorem about the existence of a solution to a parabolic partial differential equations proved by Fleming and Soner (2006, Theorem 4.3 P. 163), we can deduce our regularity result for the case where the Hamiltonian is not globally Lipschitz but satisfies a quadratic growth condition. Theorem 5.11 Under one of the assumptions H’, there exists a solution ∈ C 1,2 ([0, T ) × R2 ) ∩ C 0 ([0, T ] × R2 ) with linear growth condition on the derivation D y , to the semilinear (5.28) with terminal condition (5.29). Finally, we are interested by the solution of our primal control problem (2.4), that is we aim to give an explicit solution for the value function J , the optimal wealth and the optimal portfolio. Once we solved the dual problem and found the dual optimizer ν ∗ , we can deduced from Theorem 5.2, the optimal wealth, the optimal portfolio and the value function. We study in next section the solution of our primal control problem in the case of logarithmic and power utility. 5.3 Solution to the primal problem for special utility functions Before presenting our result concerning the optimal wealth and the optimal portfolio, and in order to avoid any confusion, let us describe the dynamics of the wealth Rt in terms of the process Yt := (Vt , μt ) as follows 1
d Rt = Rt πt (ψ(Yt )δ(Yt )dt + δ(Yt )dW t )
(5.38)
where ψ(Yt ) = μt and δ(Yt ) = g(Vt ). Also let us recall from Theorem 5.2 the relation between the optimal wealth, the optimal portfolio and the optimal optimizer ν ∗ Rt∗ = E
Z Tν Z tν
∗ ∗
ν∗
I (z x Z T )|Gt ,
(5.39)
and the optimal portfolio π ∗ is implicitly determined by d Rt∗ = Rt∗ πt∗ g(Vt )d W˜ t1 , ∗ dual maximizer and z x is the Lagrange multiplier such that where ∗ν is the∗ optimal ν ν E Z T I (z x Z T ) = x. Logarithmic utility: U (x) = ln(x)
123
Non-linear filtering and optimal investment under partial...
Proposition 5.12 We suppose that the assumptions of Theorems 5.8 and 5.9 hold. x Then the optimal wealth process is given by Rt∗ = 0 . Also the optimal portfolio π ∗ Zt and the primal value function are given by πt∗ =
μt ψ(Yt ) = δ(Yt ) g(Vt )
and J (x) = ln(x) − (0, Y0 ).
(5.40)
where is the solution of the semilinear equation (5.23) with boundary condition (5.24). 1 and from Theorem 5.8, the dual optimizer x 1 ν ∗ = 0. Moreover, the Lagrange multiplier z x = . Therefore from (5.39), the x optimal wealth is given by x Rt∗ = 0 . (5.41) Zt
Proof In this case we have I (x) =
By applying Itô’s formula to (5.41) and from Proposition 3.2, we obtain that d Rt∗ = Rt∗ ψ(Yt )d W˜ t1 . On the other hand, we have from (5.38) that d Rt∗ = Rt∗ πt∗ δ(Yt )d W˜ t1 . Therefore comparing these two expressions for Rt∗ , we obtain that the optimal portfolio π ∗ is given by (5.40). Finally from the definition of the primal value function and (5.41), we have that J (x) = ln(x) − E[ln(Z T0 )] = ln(x) + 1 + J˜(0, 1, Y0 ) = ln(x) − (0, Y0 ). The last equality comes from Theorem 5.8. Power utility: U (x) = x p / p 0 < p < 1 Proposition 5.13 We suppose the assumptions of Theorems 5.10 and 5.11 hold. Then the optimal wealth is given by Rt∗ =
x ∗ (Z tν )q−1 exp (−(t, Yt )) . ∗ ν q E[(Z T ) ]
where the associated optimal portfolio is given by the Markov control {πt∗ = π ∗ (t, Yt )} with K T (Yt ) 1 ψ(Yt ) − 1 D y (t, Yt ) (5.42) πt∗ = 1 − p δ(Yt ) δ(Yt ) and the primal value function is given by J (x) =
xp exp(−(1 − p)(0, Y0 )). p
p , ν ∗ is given by (5.32) and is a solution of the semilinear equation p−1 (5.28) with boundary condition (5.29). where q =
123
D. Ibrahim, F. Abergel
Proof In this case we have I (x) = x 1/( p−1) and from Theorem 5.10, the dual optimizer p−1 x ν ∗ is given by (5.32). The Lagrange multiplier z x = . Therefore ∗ E[(Z Tν ) p/ p−1 ] from (5.39), the optimal wealth is given by Rt∗
=E =
Z Tν
∗
ν∗ ∗ I (z x Z T )|Gt ν Zt
=E
1 ν∗ q x ∗ ∗ E (Z T ) |Gt ν E[(Z T )q ] Z tν
Z Tν
Z tν
∗ ∗
(z x )
1/( p−1)
∗ (Z Tν )1/( p−1) |Gt
Therefore from Theorem 5.10, we deduce that Rt∗ =
x ∗ (Z tν )q−1 exp (−(t, Yt )) . ∗ ν q E[(Z T ) ]
(5.43)
Now, as in the logarithmic case, by writing d Rt∗ = Rt∗ πt δ(Vt )d W˜ t1 and applying ∗ Itô’s formula to (Z tν )q−1 exp (−(t, Yt )), then we deduce after comparing the two expressions for Rt∗ that: πt∗ =
K T (Yt ) 1 ψ(Yt ) − 1 D y (t, Yt ). 1 − p δ(Yt ) δ(Yt )
Finally, from (5.43) and the boundary condition (T, Yt ) = 0, we have J (x) =
xp xp ∗ E[(Z Tν )q ]1− p = exp(−(1 − p)(0, Y0 )). p p
where the last equality comes from Theorem 5.10 and function is given in (5.28) and (5.29) . From the above formula for the optimal portfolio in the case of power utility function, we can deduce the following relation between the primal and the dual control function. Corollary 5.14 The optimal portfolio π ∗ is given by πt∗ =
1 K 1T (Yt )(K 2T (Yt ))−1 ∗ 1 ψ(Yt ) − νt . 1 − p δ(Yt ) 1− p δ(Yt )
(5.44)
ρk(Vt ) ϑ(μt )
where for Yt = (Vt , μt ), ψ(Yt ) = μt , δ(Yt ) = g(Vt ), K 1 (Yt ) = 1 − ρ 2 k(Vt ) . The functions g, ϑ, k and ϒ are given in (2.1), and K 2 (Yt ) = ϒ(μt ) (2.2) and (2.3). Proof The proof can be deduced easily from Theorem 5.10 and Proposition 5.13.
123
Non-linear filtering and optimal investment under partial...
Remark 5.15 For the logarithmic utility, we notice that in the partial information case the optimal portfolio can formally be derived from the full information case by replacing the unobservable risk premium $\tilde\mu_t$ by its filter estimate $\mu_t$. For the power utility, on the other hand, this property does not hold: the optimal strategy cannot be derived from the full information case by replacing the risk $\tilde\mu_t$ by its best estimate $\mu_t$, because of the additional term which depends on the filter. This property corresponds to the so-called separation principle. Kuwana (1995) proved that certainty equivalence holds if and only if the utility function is logarithmic.

Remark 5.16 The advantage of using the martingale approach instead of the dynamic programming (PDE) approach is that we do not need to impose any constraint on the admissible portfolio controls, whereas such a constraint is essential for the PDE approach. Indeed, with the PDE approach, we need to require that the admissible portfolio controls satisfy
$$\sup_{t\in[0,T]} E\big[\exp\big(c\,|\delta(Y_t)\,\pi_t|\big)\big] < \infty \quad \text{for some } c > 0. \tag{5.45}$$
This constraint is indispensable in order to prove a verification theorem in the case of the power utility function.
5.4 Application

We present here an example of a stochastic volatility model which satisfies all the assumptions given above and for which we can obtain a closed form for the primal value function and the optimal portfolio. We consider the Log Ornstein–Uhlenbeck model studied in Example 2, defined by (4.9), (4.10) and (4.11). We deduce from (4.12) that the filter estimate $\mu_t$ is finite dimensional, thus the Markovianity Assumption 4 on the filter estimate $\mu_t$ is satisfied. From Sect. 3, we have the following dynamics of $(R_t^\pi, V_t, \mu_t)$ in the full observation framework:
$$dR_t^\pi = R_t^\pi\,\pi_t\big(\mu_t e^{V_t}\,dt + e^{V_t}\,d\widetilde W_t^1\big), \qquad R_0^\pi = x, \tag{5.46}$$
$$dV_t = \lambda_V(\theta - V_t)\,dt + \sigma_V\rho\,d\widetilde W_t^1 + \sigma_V\sqrt{1-\rho^2}\,d\widetilde W_t^2, \qquad V_0 = v_0, \tag{5.47}$$
$$d\mu_t = \big(-\lambda_\mu\mu_t + \lambda_\mu\theta_\mu\big)\,dt + \Sigma_{11}(t)\,d\widetilde W_t^1 + \Sigma_{12}(t)\,d\widetilde W_t^2, \qquad \mu_0 = m_0, \tag{5.48}$$
where the last dynamics is deduced from (4.12), and $\Sigma_{11}(t)$ and $\Sigma_{12}(t)$ are solutions of the Riccati equation (4.13). In the sequel, we write $\Sigma_{11}$ (resp. $\Sigma_{12}$) instead of $\Sigma_{11}(t)$ (resp. $\Sigma_{12}(t)$) in order to simplify the expressions.
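For illustration only, the following minimal Python sketch simulates the dynamics (5.46)–(5.48) with an Euler–Maruyama scheme. It assumes a constant portfolio $\pi$ and, for simplicity, freezes the filter gains $\Sigma_{11}$, $\Sigma_{12}$ at constant values instead of solving the Riccati equation (4.13); all numerical parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_log_ou(T=1.0, n=2_000, x=1.0, v0=-2.0, m0=0.05, pi=0.5,
                    lam_V=2.0, theta=-2.0, sigma_V=0.3, rho=-0.5,
                    lam_mu=1.0, theta_mu=0.05, S11=0.02, S12=0.01, seed=0):
    """Euler-Maruyama scheme for the full-observation dynamics (5.46)-(5.48)
    of the Log Ornstein-Uhlenbeck example, with constant filter gains S11, S12
    (an illustrative simplification of the Riccati solution (4.13))."""
    rng = np.random.default_rng(seed)
    dt = T / n
    R = np.empty(n + 1); V = np.empty(n + 1); mu = np.empty(n + 1)
    R[0], V[0], mu[0] = x, v0, m0
    for i in range(n):
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)   # innovation increments
        vol = np.exp(V[i])                                 # delta(Y_t) = e^{V_t}
        R[i + 1] = R[i] + R[i] * pi * (mu[i] * vol * dt + vol * dW1)
        V[i + 1] = (V[i] + lam_V * (theta - V[i]) * dt
                    + sigma_V * (rho * dW1 + np.sqrt(1 - rho**2) * dW2))
        mu[i + 1] = mu[i] + lam_mu * (theta_mu - mu[i]) * dt + S11 * dW1 + S12 * dW2
    return R, V, mu

R, V, mu = simulate_log_ou()
print(f"terminal wealth {R[-1]:.4f}, terminal filter estimate {mu[-1]:.4f}")
```

Such a simulation is only meant to make the state dynamics concrete; the optimal control itself is characterized analytically in Proposition 5.17 below.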
Proposition 5.17 We consider the power utility function $U(x) = \frac{x^p}{p}$, $0 < p < 1$. Then the optimal portfolio is given by
$$\tilde\pi_t = \frac{1}{1-p}\,\frac{\mu_t}{e^{V_t}} - \frac{\rho\sigma_V}{e^{V_t}}\big[\tilde A(t) - (T-t)\big] + \frac{\Sigma_{11}}{e^{V_t}}\big[2A(t)\mu_t + B(t)\big],$$
and the primal value function is given by
$$J(x) = \frac{x^p}{p}\exp\Big(-(1-p)\big[\tilde A(0)V_0 + \tilde B(0) - V_0\,T - A(0)\mu_0^2 - B(0)\mu_0 - C(0)\big]\Big),$$
where
$$\tilde A(t) = -\int_t^T\big(\lambda_V(T-s)+1\big)\,e^{-\lambda_V(s-t)}\,ds,$$
$$\tilde B(t) = \int_t^T\Big[-\frac12\Big(\sigma_V^2 - \frac{q}{q-1}(1-\rho^2)\sigma_V^2\Big)\tilde A^2(s) + \Big(\Big(\sigma_V^2 - \frac{q}{q-1}(1-\rho^2)\sigma_V^2\Big)(T-s) + \lambda_V\theta\Big)\tilde A(s) - \frac12\Big(\sigma_V^2 - \frac{q}{q-1}(1-\rho^2)\sigma_V^2\Big)(T-s)^2 - \lambda_V\theta(T-s)\Big]\,ds,$$
$A$ is the solution of the following Riccati equation
$$A'(t) = -2\Big(\Sigma_{11}^2 + \Sigma_{12}^2 - \frac{q}{q-1}\Sigma_{12}^2\Big)A^2(t) + 2\big(\lambda_\mu + q\Sigma_{11}\big)A(t) - \frac12\,q(q-1),$$
and
$$B(t) = \int_t^T B_1(s)\,A(s)\,\exp\Big(-(\lambda_\mu + q\Sigma_{11})(s-t) + 2\Big(\Sigma_{11}^2+\Sigma_{12}^2 - \frac{q}{q-1}\Sigma_{12}^2\Big)\int_t^s A(u)\,du\Big)\,ds,$$
$$C(t) = \int_t^T\Big[\big(\Sigma_{11}^2+\Sigma_{12}^2\big)A(s) + \frac12\Big(\Sigma_{11}^2+\Sigma_{12}^2 - \frac{q}{q-1}\Sigma_{12}^2\Big)B^2(s) - B_1(s)\,B(s)\Big]\,ds,$$
where
$$B_1(s) = 2\Big(\rho\sigma_V\Sigma_{11} + \sqrt{1-\rho^2}\,\sigma_V\Sigma_{12} - \frac{q}{q-1}\sqrt{1-\rho^2}\,\sigma_V\Sigma_{12}\Big)\big(\tilde A(s) - (T-s)\big) - \lambda_\mu\theta_\mu,$$
and with terminal conditions $\tilde A(T) = \tilde B(T) = A(T) = B(T) = C(T) = 0$.
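As a purely numerical illustration (not from the paper), the sketch below evaluates the explicit coefficient $\tilde A(t)$ of Proposition 5.17 by quadrature and assembles the optimal portfolio $\tilde\pi_t$; it assumes that the remaining coefficients $A(t)$ and $B(t)$ have already been computed (they are passed in as callables) and that the filter gain $\Sigma_{11}$ is approximated by a constant. All parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def A_tilde(t, T, lam_V):
    """Coefficient of Proposition 5.17:
    A_tilde(t) = -int_t^T (lam_V*(T - s) + 1) * exp(-lam_V*(s - t)) ds."""
    val, _ = quad(lambda s: (lam_V * (T - s) + 1.0) * np.exp(-lam_V * (s - t)), t, T)
    return -val

def optimal_portfolio(t, V_t, mu_t, A_fun, B_fun, *, T, p, lam_V, sigma_V, rho, S11):
    """Optimal Markov control of Proposition 5.17.  A_fun and B_fun are assumed
    to be the (already computed) solutions of the Riccati system; S11 stands in
    for the filter gain Sigma_11, taken constant here for simplicity."""
    vol = np.exp(V_t)                                   # delta(Y_t) = e^{V_t}
    merton = mu_t / ((1.0 - p) * vol)                   # myopic Merton term
    hedge_v = -(rho * sigma_V / vol) * (A_tilde(t, T, lam_V) - (T - t))
    hedge_m = (S11 / vol) * (2.0 * A_fun(t) * mu_t + B_fun(t))
    return merton + hedge_v + hedge_m

# illustrative usage with dummy (zero) coefficients A and B
pi_t = optimal_portfolio(0.5, -2.0, 0.05, lambda t: 0.0, lambda t: 0.0,
                         T=1.0, p=0.5, lam_V=2.0, sigma_V=0.3, rho=-0.5, S11=0.02)
print(f"optimal portfolio weight: {pi_t:.4f}")
```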
Proof From (5.46), (5.47) and (5.48), we have, for $Y_t = (V_t,\mu_t)$, that $\psi(Y_t) = \mu_t$, the drift of $Y_t$ is $\begin{pmatrix}\lambda_V(\theta - V_t)\\ \lambda_\mu(\theta_\mu - \mu_t)\end{pmatrix}$, and
$$K_1(y) = \begin{pmatrix}\rho\sigma_V\\ \Sigma_{11}\end{pmatrix}, \qquad K_2(y) = \begin{pmatrix}\sqrt{1-\rho^2}\,\sigma_V\\ \Sigma_{12}\end{pmatrix}.$$
Notice that, with this Log Ornstein–Uhlenbeck model, Assumptions H and H'.I hold; see Remark 5.18. Therefore, from Proposition 5.13, we have
$$\tilde\pi_t = \frac{1}{1-p}\,\frac{\mu_t}{e^{V_t}} - \frac{K_1^{T}(Y_t)}{e^{V_t}}\,D_y\Phi(t,Y_t),$$
where $\Phi$ is a solution of the semilinear equation (5.28) with terminal condition (5.29). In general, Eq. (5.28) does not admit a closed-form solution, but for this model a closed form for $\Phi$ can be deduced by using the following separation transformation: for $y = (v,m)$,
$$\Phi(t,y) = \tilde\Phi(t,v) - \tilde f(t,v,m).$$
The general idea of this separation transformation has been used by several authors, such as Pham (2002) and Rishel (1999), in order to express the value function in terms of the solution to a semilinear parabolic equation. Now, substituting the above form of $\Phi$ into (5.28) gives
$$\begin{aligned}
&\frac{\partial\tilde\Phi}{\partial t} - \frac{\partial\tilde f}{\partial t} + \frac12\sigma_V^2\Big(\frac{\partial^2\tilde\Phi}{\partial v^2} - \frac{\partial^2\tilde f}{\partial v^2}\Big) - \big(\rho\sigma_V\Sigma_{11} + \sqrt{1-\rho^2}\,\sigma_V\Sigma_{12}\big)\frac{\partial^2\tilde f}{\partial v\,\partial m} - \frac12\big(\Sigma_{11}^2+\Sigma_{12}^2\big)\frac{\partial^2\tilde f}{\partial m^2}\\
&\quad - \frac12\Big(\sigma_V^2 - \frac{q}{q-1}(1-\rho^2)\sigma_V^2\Big)\Big[\Big(\frac{\partial\tilde\Phi}{\partial v}\Big)^2 - 2\,\frac{\partial\tilde\Phi}{\partial v}\,\frac{\partial\tilde f}{\partial v} + \Big(\frac{\partial\tilde f}{\partial v}\Big)^2\Big]\\
&\quad + \big(\lambda_V(\theta - v) - qm\rho\sigma_V\big)\Big(\frac{\partial\tilde\Phi}{\partial v} - \frac{\partial\tilde f}{\partial v}\Big) + \big(-\lambda_\mu m + \lambda_\mu\theta_\mu - qm\Sigma_{11}\big)\frac{\partial\tilde f}{\partial m}\\
&\quad + \frac12\Big(\Sigma_{11}^2 + \Sigma_{12}^2 - \frac{q}{q-1}\Sigma_{12}^2\Big)\Big(\frac{\partial\tilde f}{\partial m}\Big)^2 + \frac12\,q(q-1)\,m^2\\
&\quad - \Big(\rho\sigma_V\Sigma_{11} + \sqrt{1-\rho^2}\,\sigma_V\Sigma_{12} - \frac{q}{q-1}\sqrt{1-\rho^2}\,\sigma_V\Sigma_{12}\Big)\frac{\partial\tilde f}{\partial m}\Big(\frac{\partial\tilde\Phi}{\partial v} - \frac{\partial\tilde f}{\partial v}\Big) = 0.
\end{aligned}$$
Thus we obtain a coupled system of PDEs for which we are not able to find a solution in general. The key is to separate the considered PDE into a PDE in $\tilde\Phi$ and another in $\tilde f$, using the facts that $\tilde\Phi(T,v) = 0$ and $\tilde f(T,v,m) = 0$; these two conditions come from the boundary condition (5.29).
The difficulty is how to obtain two PDEs for which explicit forms can be derived. In fact, the difficulty comes from the terms $\frac{\partial^2\tilde f}{\partial v\,\partial m}$, $\frac{\partial\tilde\Phi}{\partial v}\frac{\partial\tilde f}{\partial v}$ and $\frac{\partial\tilde f}{\partial v}\frac{\partial\tilde f}{\partial m}$. For that, we need to use a new separation transformation for $\tilde f$ as follows: $\tilde f(t,v,m) = v(T-t) + f(t,m)$, with $f(T,m) = 0$. With this new transformation, we can obtain a PDE for $\tilde\Phi$ which depends only on $v$ and another PDE for $f$ which depends only on $m$, and therefore we can obtain closed forms for $\tilde\Phi$ and $f$.

Here we want to make clear the advantage of the Log Ornstein–Uhlenbeck model: it is not only the new transformation that allows us to find a PDE for $f$ depending only on $m$, but also the constant diffusion coefficient of the stochastic volatility, which is modeled as an Ornstein–Uhlenbeck process.

Finally, we have the following PDEs for $\tilde\Phi$ and $f$, for which explicit forms can be deduced:
$$\frac{\partial\tilde\Phi}{\partial t} + \frac12\sigma_V^2\,\frac{\partial^2\tilde\Phi}{\partial v^2} - \frac12\Big(\sigma_V^2 - \frac{q}{q-1}(1-\rho^2)\sigma_V^2\Big)\Big(\frac{\partial\tilde\Phi}{\partial v}\Big)^2 - \big(\lambda_V(T-t)+1\big)v + \lambda_V\theta(T-t) + \frac12\Big(\sigma_V^2 - \frac{q}{q-1}(1-\rho^2)\sigma_V^2\Big)(T-t)^2 + \Big[\lambda_V(\theta-v) - \Big(\sigma_V^2 - \frac{q}{q-1}(1-\rho^2)\sigma_V^2\Big)(T-t)\Big]\frac{\partial\tilde\Phi}{\partial v} = 0, \tag{5.49}$$
and
$$\frac{\partial f}{\partial t} + \frac12\big(\Sigma_{11}^2+\Sigma_{12}^2\big)\frac{\partial^2 f}{\partial m^2} + \frac12\Big(\Sigma_{11}^2+\Sigma_{12}^2 - \frac{q}{q-1}\Sigma_{12}^2\Big)\Big(\frac{\partial f}{\partial m}\Big)^2 + \frac12\,q(q-1)\,m^2 + qm\rho\sigma_V\,\frac{\partial\tilde\Phi}{\partial v} + \Big[\Big(-\rho\sigma_V\Sigma_{11} - \sqrt{1-\rho^2}\,\sigma_V\Sigma_{12} + \frac{q}{q-1}\sqrt{1-\rho^2}\,\sigma_V\Sigma_{12}\Big)\Big(\frac{\partial\tilde\Phi}{\partial v} - (T-t)\Big) - \lambda_\mu m + \lambda_\mu\theta_\mu - qm\Sigma_{11}\Big]\frac{\partial f}{\partial m} - qm\rho\sigma_V(T-t) = 0. \tag{5.50}$$
Notice that the PDE for $f$ depends on $\frac{\partial\tilde\Phi}{\partial v}$, but we show below that the solution of the PDE satisfied by $\tilde\Phi$ is a polynomial of degree 1 in $v$; differentiating it therefore yields a term which does not depend on $v$. So we obtain a PDE for $f$ which depends only on $m$, and therefore an explicit form can be deduced.
As in Rishel (1999), the solution of (5.49) with boundary condition $\tilde\Phi(T,v) = 0$ is given by
$$\tilde\Phi(t,v) = \tilde A(t)\,v + \tilde B(t),$$
where $\tilde A(t)$ and $\tilde B(t)$ are respectively solutions of the following differential equations:
$$\tilde A'(t) = \lambda_V\,\tilde A(t) - \big(\lambda_V(T-t)+1\big), \qquad \tilde A(T) = 0,$$
$$\tilde B'(t) = \frac12\Big(\sigma_V^2 - \frac{q}{q-1}(1-\rho^2)\sigma_V^2\Big)\tilde A^2(t) - \Big[\Big(\sigma_V^2 - \frac{q}{q-1}(1-\rho^2)\sigma_V^2\Big)(T-t) + \lambda_V\theta\Big]\tilde A(t) + \frac12\Big(\sigma_V^2 - \frac{q}{q-1}(1-\rho^2)\sigma_V^2\Big)(T-t)^2 + \lambda_V\theta(T-t), \qquad \tilde B(T) = 0.$$
One easily verifies that $\tilde A(t)$ and $\tilde B(t)$ given in Proposition 5.17 are solutions of the above differential equations. Also, as in Rishel (1999), the solution of (5.50) with the boundary condition $f(T,m) = 0$ is given by
$$f(t,m) = A(t)\,m^2 + B(t)\,m + C(t),$$
where
$$A'(t) = -2\Big(\Sigma_{11}^2+\Sigma_{12}^2 - \frac{q}{q-1}\Sigma_{12}^2\Big)A^2(t) + 2\big(\lambda_\mu + q\Sigma_{11}\big)A(t) - \frac12\,q(q-1),$$
$$B'(t) = \Big[-2\Big(\Sigma_{11}^2+\Sigma_{12}^2 - \frac{q}{q-1}\Sigma_{12}^2\Big)A(t) + \big(\lambda_\mu + q\Sigma_{11}\big)\Big]B(t) - q\rho\sigma_V\big(\tilde A(t) - (T-t)\big) + \Big[2\Big(\rho\sigma_V\Sigma_{11} + \sqrt{1-\rho^2}\,\sigma_V\Sigma_{12} - \frac{q}{q-1}\sqrt{1-\rho^2}\,\sigma_V\Sigma_{12}\Big)\big(\tilde A(t) - (T-t)\big) - \lambda_\mu\theta_\mu\Big]A(t),$$
$$C'(t) = -\big(\Sigma_{11}^2+\Sigma_{12}^2\big)A(t) - \frac12\Big(\Sigma_{11}^2+\Sigma_{12}^2 - \frac{q}{q-1}\Sigma_{12}^2\Big)B^2(t) + \Big[\Big(\rho\sigma_V\Sigma_{11} + \sqrt{1-\rho^2}\,\sigma_V\Sigma_{12} - \frac{q}{q-1}\sqrt{1-\rho^2}\,\sigma_V\Sigma_{12}\Big)\big(\tilde A(t) - (T-t)\big) - \lambda_\mu\theta_\mu\Big]B(t),$$
with terminal conditions $A(T) = B(T) = C(T) = 0$. The solution of the Riccati equation satisfied by $A(t)$ can be deduced from Rishel (1999). For $B(t)$ and $C(t)$, one easily verifies that the expressions given in Proposition 5.17 are solutions of the above differential equations. Finally, from Proposition 5.13 and the above solutions for $\tilde\Phi$ and $f$, we can deduce the explicit form of the value function given in Proposition 5.17.
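As a complement to the closed-form solution available from Rishel (1999), the Riccati equation for $A$ can also be integrated numerically. The following minimal Python sketch (illustrative parameters, filter gains frozen at constant values) integrates it backward from the terminal condition $A(T) = 0$ with scipy's solve_ivp via the time reversal $\tau = T - t$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_A(T, p, lam_mu, S11, S12, n_eval=101):
    """Backward integration of the Riccati equation of Proposition 5.17,
    A'(t) = -2*(S11^2 + S12^2 - q/(q-1)*S12^2)*A(t)^2
            + 2*(lam_mu + q*S11)*A(t) - q*(q-1)/2,   A(T) = 0,
    via the time reversal tau = T - t (constant gains S11, S12 assumed)."""
    q = p / (p - 1.0)
    a2 = S11**2 + S12**2 - q / (q - 1.0) * S12**2
    def rhs(tau, A):                      # dA/dtau = -A'(T - tau)
        return 2.0 * a2 * A**2 - 2.0 * (lam_mu + q * S11) * A + 0.5 * q * (q - 1.0)
    taus = np.linspace(0.0, T, n_eval)
    sol = solve_ivp(rhs, (0.0, T), [0.0], t_eval=taus, rtol=1e-8, atol=1e-10)
    t_grid = T - sol.t[::-1]              # map back to forward time
    return t_grid, sol.y[0][::-1]

t_grid, A_vals = solve_A(T=1.0, p=0.5, lam_mu=1.0, S11=0.02, S12=0.01)
print(f"A(0) = {A_vals[0]:.6f}, A(T) = {A_vals[-1]:.6f}")   # A(T) should be 0
```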
Remark 5.18 Assumptions H (i), (ii), (iii) are easy to verify. It remains to clarify H (iv). Indeed, since $\psi(Y_t) = \mu_t$ and since $\Sigma_{11}$ and $\Sigma_{12}$ are bounded, from (5.48) and by the same arguments as in Remark 5.7 we can deduce that there exists some $\epsilon > 0$ such that
$$\sup_{t\in[0,T]} E\big[\exp\big(\epsilon\,\psi^2(Y_t)\big)\big] < \infty.$$
A Appendix

Let us consider the following partially observed system:
$$dX_t = A(X_t)\,dt + G(X_t)\,dM_t + B(X_t)\,dW_t, \tag{A.1}$$
$$d\Xi_t = dW_t + h(X_t)\,dt. \tag{A.2}$$
Here $X$ is the two-dimensional signal process and $\Xi$ is the two-dimensional observation process; $A$ is a $2\times 1$ matrix, $G$ and $B$ are $2\times 2$ matrices, and $h$ is a $2\times 1$ matrix. $W$ and $M$ are two-dimensional independent Brownian motions. We assume that $A$, $B$ and $G$ satisfy the global Lipschitz condition. Using the change of measure $\tilde P$ given in (3.4), we need to discuss some conditions under which the process $L$ is a martingale. Novikov's condition is usually quite difficult to verify directly, so we use alternative conditions under which $L$ is a martingale. From Lemma 3.9 in Bain and Crisan (2009), we can deduce that $L$ is a martingale if the following conditions are satisfied:
$$E\Big[\int_0^t |h(X_s)|^2\,ds\Big] < \infty, \qquad E\Big[\int_0^t L_s\,|h(X_s)|^2\,ds\Big] < \infty, \qquad \text{for all } t > 0. \tag{A.3}$$
Let us now denote by $\Lambda_t$ the $(\tilde P, \mathcal F_t)$-martingale given by $\Lambda_t = \frac{1}{L_t}$. Therefore, from the Kallianpur–Striebel formula, we have, for every $\phi \in B(\mathbb R^2)$, the following representation:
$$\alpha_t(\phi) := E\big[\phi(X_t)\,|\,\mathcal F_t\big] = \frac{\tilde E\big[\phi(X_t)\,\Lambda_t\,|\,\mathcal F_t\big]}{\tilde E\big[\Lambda_t\,|\,\mathcal F_t\big]} =: \frac{\psi_t(\phi)}{\psi_t(1)}, \tag{A.4}$$
where $B(\mathbb R^2)$ is the space of bounded measurable functions from $\mathbb R^2$ to $\mathbb R$. In the following, we assume that, for all $t \ge 0$,
$$\tilde P\Big(\int_0^t\big[\psi_s(|h|)\big]^2\,ds < \infty\Big) = 1. \tag{A.5}$$
Let us now introduce the following notations, which will be useful in the sequel.
Notations Let $K = \frac12\big(BB^{T} + GG^{T}\big)$, and let $A$ and $B = (B^k)_{k=1}^{2}$ be the following differential operators:
$$A\phi = \sum_{i,j=1}^{2} K_{ij}\,\partial^{2}_{x_i x_j}\phi + \sum_{i=1}^{2} A_i\,\partial_{x_i}\phi, \qquad B^k\phi = \sum_{i=1}^{2} B_{ik}\,\partial_{x_i}\phi, \qquad \text{for } \phi \in D(A)\cap D(B), \tag{A.6}$$
where $D(A) = C_b^2(\mathbb R^2)$ is the domain of the generator $A$ and $D(B) = \cap_{i=1,2}\,D(B^i)$, with $D(B^i) = C_b^1(\mathbb R^2)$ the domain of the operator $B^i$.

We now need to impose the following fundamental condition in order to derive the Kushner–Stratonovich equation:
$$P\Big(\int_0^t |\alpha_s(h)|^2\,ds < \infty\Big) = 1, \qquad \text{for all } t \ge 0. \tag{A.7}$$
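For concreteness, the operators $A\phi$ and $B^k\phi$ of (A.6) can be assembled symbolically. The short Python/sympy sketch below does so for a generic test function $\phi$ and for placeholder coefficient matrices; the specific drift and diffusion coefficients are illustrative assumptions, not those of the paper.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
X = sp.Matrix([x1, x2])

# illustrative coefficients of the signal (A.1): drift A, diffusions G and B
A_drift = sp.Matrix([-x1, -x2])                      # placeholder drift A(x)
G = sp.Matrix([[sp.Rational(1, 2), 0], [0, 1]])      # placeholder diffusion G
B = sp.Matrix([[1, 0], [sp.Rational(1, 10), 1]])     # placeholder diffusion B

K = sp.Rational(1, 2) * (B * B.T + G * G.T)          # K = (B B^T + G G^T)/2

phi = sp.Function("phi")(x1, x2)                     # test function in C_b^2

# generator  A phi = sum_ij K_ij d^2 phi/dx_i dx_j + sum_i A_i d phi/dx_i
A_phi = sum(K[i, j] * sp.diff(phi, X[i], X[j]) for i in range(2) for j in range(2))
A_phi += sum(A_drift[i] * sp.diff(phi, X[i]) for i in range(2))

# first-order operators  B^k phi = sum_i B_ik d phi/dx_i,  k = 1, 2
B_ops = [sum(B[i, k] * sp.diff(phi, X[i]) for i in range(2)) for k in range(2)]

print(sp.simplify(A_phi))
print(B_ops[0], B_ops[1])
```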
The following result is due to Bain and Crisan (2009) and Pardoux (1991).

Proposition 5.19 Assume that the signal and observation processes satisfy (A.1) and (A.2). If conditions (A.3) and (A.5) are satisfied, then the conditional distribution $\alpha_t$ satisfies the following Kushner–Stratonovich equation: for $\phi \in D(A)\cap D(B)$,
$$d\alpha_t(\phi) = \alpha_t(A\phi)\,dt + \big[\alpha_t\big(h^1\phi + B^1\phi\big) - \alpha_t(h^1)\,\alpha_t(\phi)\big]\,d\overline W_t^1 + \big[\alpha_t\big(h^2\phi + B^2\phi\big) - \alpha_t(h^2)\,\alpha_t(\phi)\big]\,d\overline W_t^2. \tag{A.8}$$
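Since (A.8) is an equation on an infinite-dimensional space, in practice it is approximated numerically, for instance by particle methods as in Crisan and Lyons (1999). The following minimal Python sketch implements a bootstrap particle approximation of $\alpha_t(\phi)$ for a system of the form (A.1)–(A.2); to keep it simple it takes $B \equiv 0$ (no signal–observation noise correlation, so the $B^k\phi$ correction terms of (A.8) vanish) and uses an Euler discretisation. All coefficient functions and parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative coefficients for (A.1)-(A.2); B = 0 is assumed, so a plain
# bootstrap particle filter is a consistent approximation of the filter.
A_drift = lambda x: -x                        # placeholder drift A(x)
G = np.array([[0.3, 0.0], [0.0, 0.2]])        # placeholder diffusion G(x) = const
h = lambda x: np.tanh(x)                      # placeholder sensor function h(x)

T, n, N = 1.0, 400, 2000                      # horizon, time steps, particles
dt = T / n

# simulate one path of the signal X and the observation increments d(Xi)
x = np.array([0.5, -0.5])
d_xi = np.empty((n, 2))
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), 2)
    d_xi[i] = h(x) * dt + dW                  # observation increment (A.2)
    x = x + A_drift(x) * dt + G @ rng.normal(0.0, np.sqrt(dt), 2)

# bootstrap particle approximation of alpha_t(phi) = E[phi(X_t) | F_t]
particles = rng.normal([0.5, -0.5], 0.1, size=(N, 2))
for i in range(n):
    dM = rng.normal(0.0, np.sqrt(dt), (N, 2))
    particles = particles + A_drift(particles) * dt + dM @ G.T
    resid = d_xi[i] - h(particles) * dt       # innovation of each particle
    logw = -np.sum(resid**2, axis=1) / (2.0 * dt)
    w = np.exp(logw - logw.max()); w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]   # multinomial resampling

print(f"filter estimate of X_T: {particles.mean(axis=0)}  (true signal {x})")
```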
References

Bain A, Crisan D (2009) Fundamentals of stochastic filtering. Stochastic modelling and applied probability, vol 60. Springer, New York

Bensoussan A (1992) Stochastic control of partially observable systems. Cambridge University Press, Cambridge

Ceci C, Colaneri K (2014) The Zakai equation of nonlinear filtering for jump-diffusion observations: existence and uniqueness. Appl Math Optim 69(1):47–82

Crisan D, Lyons T (1999) A particle approximation of the solution of the Kushner–Stratonovitch equation. Probab Theory Relat Fields 115(4):549–578

Cvitanić J, Karatzas I (1992) Convex duality in constrained portfolio optimization. Ann Appl Probab 2(4):767–818

Detemple J (1986) Asset pricing in a production economy with incomplete information. J Finance 41(2):383–391

Dothan MU, Feldman D (1986) Equilibrium interest rates and multiperiod bonds in a partially observable economy. J Finance 41(2):369–382

Fleming WH, Soner HM (2006) Controlled Markov processes and viscosity solutions. Stochastic modelling and applied probability, vol 25, 2nd edn. Springer, New York
Gobet E, Pages G, Pham H, Printemps J (2006) Discretization and simulation of the Zakai equation. SIAM J Numer Anal 44(6):2505–2538

Kallianpur G (1980) Stochastic filtering theory. Stochastic modelling and applied probability, vol 13. Springer, New York

Karatzas I, Lehoczky JP, Shreve SE, Xu GL (1991) Martingale and duality methods for utility maximization in an incomplete market. SIAM J Control Optim 29(3):702–730

Karatzas I, Shreve SE (1991) Brownian motion and stochastic calculus. Graduate texts in mathematics, vol 113. Springer, New York

Karatzas I, Zhao X (2001) Bayesian adaptive portfolio optimization. In: Handbooks in mathematical finance: option pricing, interest rates and risk management, 1st edn. Cambridge University Press, Cambridge, pp 632–669

Kurtz TG, Ocone DL (1988) Unique characterization of conditional distributions in nonlinear filtering. Ann Probab 16:80–107

Kuwana Y (1995) Certainty equivalence and logarithmic utilities in consumption/investment problems. Math Finance 5(4):297–309

Lakner P (1995) Utility maximization with partial information. Stoch Process Appl 56(2):247–273

Lakner P (1998) Optimal trading strategy for an investor: the case of partial information. Stoch Process Appl 76(1):77–97

Liptser RS, Shiryaev AN (2001) Statistics of random processes. I. General theory. Applications of mathematics, vol 5, expanded edn. Springer, Berlin (translated from the 1974 Russian original by A. B. Aries)

Owen MP (2002) Utility based optimal hedging in incomplete markets. Ann Appl Probab 12(2):691–709

Pardoux E (1991) Filtrage non linéaire et équations aux dérivées partielles stochastiques associées. In: Hennequin PL (ed) École d'Été de Probabilités de Saint-Flour XIX–1989. Lecture Notes in Mathematics, vol 1464. Springer, Berlin, pp 67–163

Pham H (2002) Smooth solutions to optimal investment models with stochastic volatilities and portfolio constraints. Appl Math Optim 46(1):55–78

Pham H, Quenez MC (2001) Optimal portfolio in partially observed stochastic volatility models. Ann Appl Probab 11(1):210–238

Rishel R (1999) Optimal portfolio management with partial observations and power utility function. In: McEneaney WM, Yin GG, Zhang Q (eds) Stochastic analysis, control, optimization and applications. Systems & Control: Foundations & Applications. Birkhäuser, Boston, pp 605–619