Probab. Theory Relat. Fields DOI 10.1007/s00440-015-0640-x
Inverting Ray-Knight identity. Christophe Sabot · Pierre Tarrès
Received: 26 November 2013 / Revised: 4 June 2015 © Springer-Verlag Berlin Heidelberg 2015
Abstract We provide a short proof of the generalized second Ray-Knight theorem, using a martingale which can be seen (on the positive quadrant) as the Radon–Nikodym derivative of the reversed vertex-reinforced jump process measure with respect to the Markov jump process with the same conductances. Next we show that a variant of this process provides an inversion of that Ray-Knight identity. We give a similar result for the generalized first Ray-Knight theorem.

Mathematics Subject Classification Primary 60J27 · 60J55; Secondary 60K35 · 81T25 · 81T60
1 Introduction

Let $G=(V,E,\sim)$ be a nonoriented connected finite graph without loops, with conductances $(W_e)_{e\in E}$; define, for all $x,y\in V$, $W_{x,y}=W_{\{x,y\}}\mathbf{1}_{x\sim y}$.
This work was partly supported by the ANR project MEMEMO2, and the LABEX MILYON. The first author is grateful to DMA, ENS, for his hospitality and financial support while part of this work was done.
B
Pierre Tarres
[email protected] Christophe Sabot
[email protected]
1
Université de Lyon, Université Lyon 1, Institut Camille Jordan, CNRS UMR 5208, 43, Boulevard du 11 novembre 1918, 69622 Villeurbanne Cedex, France
2
CNRS and Université Paris-Dauphine, Ceremade (UMR 7534), Place Lattre de Tassigny, 75775 Paris Cedex 16, France
123
C. Sabot, P. Tarres
Let $L$ and $\mathcal{E}$ be respectively the associated Markov generator and Dirichlet form, defined for all $f\in\mathbb{R}^V$ by
$$Lf(x)=\sum_{y\in V}W_{x,y}\big(f(y)-f(x)\big),\qquad \mathcal{E}(f,f)=\frac12\sum_{x,y\in V}W_{x,y}\big(f(x)-f(y)\big)^2.$$
Let $x_0\in V$ be a special point that will be fixed throughout the text. Let $U=V\setminus\{x_0\}$, and let $P^{G,U}$ be the unique probability on $\mathbb{R}^V$ under which $(\varphi_x)_{x\in V}$ is the centered Gaussian field with covariance $E^{G,U}[\varphi_x\varphi_y]=g_U(x,y)$, where $g_U$ is the Green function killed outside $U$; in other words,
$$P^{G,U}=\frac{1}{(2\pi)^{|U|/2}\sqrt{\det G_U}}\exp\Big(-\frac12\mathcal{E}(\varphi,\varphi)\Big)\,\delta_0(\varphi_{x_0})\prod_{x\in U}d\varphi_x,$$
where $G_U:=g_U(\cdot,\cdot)$, and $\delta_0$ is the Dirac mass at $0$, so that the integration is on $(\varphi_x)_{x\in U}$ with $\varphi_{x_0}=0$.

Let $P_{z_0}$ be the law under which $(X_t)_{t\ge0}$ is a Markov jump process with conductances $(W_e)_{e\in E}$ (i.e. jump rate $W_{ij}$ from $i$ to $j\in V$), starting at $z_0$ at time $0$, with right-continuous paths and local times at $x\in V$, $t\ge0$,
$$\ell_x(t)=\int_0^t\mathbf{1}_{\{X_u=x\}}\,du. \qquad (1.1)$$
Let $\tau_\cdot$ be the right-continuous inverse of $t\mapsto\ell_{x_0}(t)$:
$$\tau_u=\inf\{t\ge0;\ \ell_{x_0}(t)>u\},\quad u\ge0.$$
Our first aim is to provide a short proof of the generalized second Ray-Knight theorem.

Theorem 1 (Generalized second Ray-Knight theorem, [10]) For any $u>0$,
$$\Big(\ell_x(\tau_u)+\frac12\varphi_x^2\Big)_{x\in V}\ \text{under } P_{x_0}\otimes P^{G,U}\ \text{has the same law as}\ \Big(\frac12\big(\varphi_x+\sqrt{2u}\big)^2\Big)_{x\in V}\ \text{under } P^{G,U}.$$
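To fix ideas, here is a minimal simulation sketch of the objects entering Theorem 1: the Markov jump process, its local times (1.1), and the inverse local time $\tau_u$. The triangle graph, unit conductances, starting point and the value $u=2$ are illustrative choices, not taken from the paper.

```python
import random

def simulate_mjp(W, z0, x0, u, rng):
    """Simulate the Markov jump process with jump rates W[i][j] until the
    local time at x0 exceeds u; return the local times at time tau_u."""
    ell = {i: 0.0 for i in W}
    x = z0
    while True:
        rate = sum(W[x].values())           # total jump rate out of x
        hold = rng.expovariate(rate)        # exponential holding time
        if x == x0 and ell[x0] + hold > u:  # tau_u falls inside this sojourn
            ell[x0] = u                     # stop exactly at local time u
            return ell
        ell[x] += hold
        # choose the next site proportionally to W[x][j]
        r, acc = rng.random() * rate, 0.0
        for j, w in W[x].items():
            acc += w
            if r <= acc:
                x = j
                break

# toy graph: a triangle with unit conductances (illustrative choice)
W = {0: {1: 1.0, 2: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 1.0, 1: 1.0}}
ell = simulate_mjp(W, z0=0, x0=0, u=2.0, rng=random.Random(1))
print(ell[0])  # the local time at x0 equals u at time tau_u
```

By definition $\tau_u=\inf\{t:\ \ell_{x_0}(t)>u\}$, so $\ell_{x_0}(\tau_u)=u$ exactly, which the sketch reproduces.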
This theorem is due to Eisenbaum, Kaspi, Marcus, Rosen and Shi [10] and is closely related to Dynkin's isomorphism; see [8] for a first relation between the Ray-Knight theorem and Dynkin's isomorphism, [17] for an overview of the subject, and [12,18] for the original papers of Knight and Ray. See [14–16] for related work on the link between Markov loops and the Gaussian free field. Note that this result plays a crucial rôle in recent work of Ding, Lee and Peres [6] on cover times of discrete Markov processes, and in the study of random interlacements, see for instance [20–22].
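The Green function $g_U$ killed outside $U$, which defines the covariance of $P^{G,U}$, can be computed concretely as the inverse of the restriction to $U$ of the matrix $\mathrm{diag}(W_x)-W$. A small sketch on the triangle graph with unit conductances (an illustrative choice):

```python
# Green function killed outside U on the triangle with unit conductances:
# g_U = A^{-1}, where A is the restriction to U = V \ {x0} of diag(W_x) - W
W = [[0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0]]
x0 = 0
U = [1, 2]
A = [[(sum(W[x]) if x == y else 0.0) - W[x][y] for y in U] for x in U]
# invert the 2x2 matrix A by hand
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
g = [[A[1][1] / det, -A[0][1] / det],
     [-A[1][0] / det, A[0][0] / det]]
print(g)  # [[2/3, 1/3], [1/3, 2/3]]
```

Here $A=\begin{pmatrix}2&-1\\-1&2\end{pmatrix}$, so $g_U=\frac13\begin{pmatrix}2&1\\1&2\end{pmatrix}$: a symmetric, positive-definite covariance, as required of a Gaussian field.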
In Sect. 2, we give our short proof of Theorem 1, independent of any reference to the VRJP. Similar results would hold for the Dynkin and Eisenbaum isomorphism theorems (see Sect. 5, and [17,22]). We note that there is also a non-symmetric version of Dynkin's isomorphism [13] (see also [9]), which our technique cannot provide as it is.

In Sect. 3, we explain how the martingale appearing in this proof is related to the vertex-reinforced jump process (VRJP). However, note that the proof does not need any reference to the VRJP.

This short proof in fact yields an identity that corresponds to an inversion of the Ray-Knight identity, proved in Sect. 4. Indeed, Theorem 1 gives an identity in law, but fails to give any information on the law of $(\ell_x(\tau_u),\varphi_x)_{x\in V}$ conditioned on $\big(\ell_x(\tau_u)+\frac12\varphi_x^2\big)_{x\in V}$. We provide below a process that describes this conditional law. Finally, Sect. 5 yields the equivalent inversion for the generalized first Ray-Knight theorem.

Let $(\Lambda_x)_{x\in V}$ be positive reals. As before, we fix the special point $x_0\in V$. We consider the continuous-time process $(\check Y_s)_{s\ge0}$ with state space $V$ defined as follows. We set
$$\check L_i(s)=\Lambda_i-\int_0^s\mathbf{1}_{\{\check Y_u=i\}}\,du.$$
At time $s$, we consider the Ising model $(\sigma_x)_{x\in V}$ on $G$ with interaction
$$J_{i,j}(s)=W_{i,j}\,\check L_i(s)\,\check L_j(s)$$
and with boundary condition $\sigma_{x_0}=+1$. We denote by
$$F(s)=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}e^{\sum_{\{i,j\}\in E}J_{i,j}(s)\sigma_i\sigma_j}$$
its partition function and by $\langle\cdot\rangle_s$ its associated expectation, so that for example
$$\langle\sigma_x\rangle_s=\frac{1}{F(s)}\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\sigma_x\,e^{\sum_{\{i,j\}\in E}J_{i,j}(s)\sigma_i\sigma_j}>0.$$
The process $\check Y$ is then defined as the jump process which, conditioned on the past at time $s$, if $\check Y_s=i$, jumps from $i$ to $j$ at rate
$$W_{i,j}\,\check L_j(s)\,\frac{\langle\sigma_j\rangle_s}{\langle\sigma_i\rangle_s},$$
and is stopped at the time
$$S=\sup\{s\ge0,\ \check L_i(s)>0\ \text{for all}\ i\}. \qquad (1.2)$$
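The Ising quantities $F(s)$ and $\langle\sigma_x\rangle_s$ above can be evaluated by brute force on a small graph; this also illustrates the positivity $\langle\sigma_x\rangle_s>0$ used below. The triangle graph and the interaction values are illustrative choices:

```python
from itertools import product
from math import exp

def ising_expectations(V, J, x0):
    """Brute-force magnetizations <sigma_x> of the Ising model on V with
    ferromagnetic pair interactions J[(i, j)] and boundary sigma_{x0} = +1."""
    free = [v for v in V if v != x0]
    Z, mag = 0.0, {v: 0.0 for v in V}
    for signs in product([-1, 1], repeat=len(free)):
        sigma = dict(zip(free, signs))
        sigma[x0] = 1
        w = exp(sum(c * sigma[i] * sigma[j] for (i, j), c in J.items()))
        Z += w                       # partition function F(s)
        for v in V:
            mag[v] += sigma[v] * w
    return {v: mag[v] / Z for v in V}

# toy triangle; the values J_{i,j} = W_{i,j} Lcheck_i(s) Lcheck_j(s) at some
# fixed time s are illustrative numbers, not taken from the paper
V = [0, 1, 2]
J = {(0, 1): 0.3, (0, 2): 0.2, (1, 2): 0.5}
m = ising_expectations(V, J, x0=0)
print(m)  # m[0] == 1.0 and every magnetization is strictly positive
```

With positive interactions and the pinned boundary spin, all magnetizations come out strictly positive, so the jump rates of $\check Y$ are well defined.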
Note, using the positivity of the Ising model (see for instance [23], Proposition 7.1), that $\langle\sigma_x\rangle_s>0$ for all $s<S$. Hence the process $\check Y$ is defined up to time $S$. We denote by $P^{\check Y}_{\Lambda,z}$ the law of $\check Y$ starting from $z$ with initial condition $\Lambda$, stopped at time $S$. (Note that this law also depends on the choice of the "special point" $x_0$.)

Lemma 1 Starting from any point $z\in V$, the process $\check Y$ ends at $x_0$, i.e. $S<\infty$ and $\check Y_S=x_0$, $P^{\check Y}_{\Lambda,z}$-a.s.

This process provides an inversion of the second Ray-Knight identity, as stated in the following theorem.

Theorem 2 Let $\ell$, $\varphi$, $\tau_u$ be as in Theorem 1 and set $\Lambda_x=\sqrt{\varphi_x^2+2\ell_x(\tau_u)}$. Under $P_{x_0}\otimes P^{G,U}$, we have
$$\mathcal{L}\big(\varphi\,\big|\,\Lambda\big)\stackrel{law}{=}\big(\sigma\check L(S)\big),$$
where $\check L(S)$ is distributed under $P^{\check Y}_{\Lambda,x_0}$ and, conditionally on $\check L(S)$, $\sigma$ is distributed according to the Ising model with interaction $J_{i,j}(S)=W_{i,j}\check L_i(S)\check L_j(S)$ and boundary condition $\sigma_{x_0}=+1$.

Remark 1 Once $\varphi$ is known, then obviously $\ell(\tau_u)=(\Lambda^2-\varphi^2)/2$ is also known: in other words, Theorem 2 is equivalent to the more precise identity
$$\mathcal{L}\big((\ell(\tau_u),\varphi)\,\big|\,\Lambda\big)\stackrel{law}{=}\Big(\frac12\big(\Lambda^2-\check L^2(S)\big),\ \sigma\check L(S)\Big),$$
where $\check L(S)$ and $\sigma$ are distributed as in the statement of Theorem 2.

The proof of Theorem 2 is given in Sect. 4. Theorem 2 is a consequence of a more precise statement, cf. Theorem 5, which gives the law of $(X_s)_{s\le\tau_u}$ conditionally on $\Lambda$.
2 A new proof of Theorem 1

Let $G$ be a positive measurable test function. Letting $d\varphi:=\delta_0(\varphi_{x_0})\prod_{x\in U}d\varphi_x$ and
$$C=(2\pi)^{-|U|/2}(\det G_U)^{-1/2}$$
be the normalizing constant of the Gaussian free field, we get
$$E_{x_0}\otimes P^{G,U}\Big[G\Big(\big(\ell_x(\tau_u)+\tfrac12\varphi_x^2\big)_{x\in V}\Big)\Big]=C\,E_{x_0}\Big[\int_{\mathbb{R}^U}G\Big(\big(\ell_x(\tau_u)+\tfrac12\varphi_x^2\big)\Big)\exp\Big(-\tfrac12\mathcal{E}(\varphi,\varphi)\Big)\,d\varphi\Big]$$
$$=C\,E_{x_0}\Big[\sum_{\sigma\in\{+1,-1\}^V,\ \sigma_{x_0}=+1}\int_{\mathbb{R}_+^U}G\Big(\big(\ell_x(\tau_u)+\tfrac12\varphi_x^2\big)\Big)\exp\Big(-\tfrac12\mathcal{E}(\sigma\varphi,\sigma\varphi)\Big)\,d\varphi\Big], \qquad (2.1)$$
where in the last equality we decompose the integral according to the possible signs of $\varphi$, so that the sum is over $\sigma\in\{+1,-1\}^U$ with $\sigma_{x_0}=+1$. In the following we simply write $\sum_\sigma$ for this sum.

The strategy is now to make the change of variables $\Lambda^2=2\ell(\tau_u)+\varphi^2$. Given $\ell=(\ell_i(t))_{i\in V,t\in\mathbb{R}_+}$, let
$$D_u:=\Big\{\Lambda\in\mathbb{R}_+^V:\ \Lambda_{x_0}=\sqrt{2u},\ \Lambda_x^2/2\ge\ell_x(\tau_u)\ \text{for all}\ x\in V\setminus\{x_0\}\Big\}.$$
We first make the change of variables
$$T_u:\ \mathbb{R}_+^V\cap\{\varphi_{x_0}=0\}\to D_u,\qquad \varphi\mapsto\Lambda=T_u(\varphi)=\Big(\sqrt{2\ell_i(\tau_u)+\varphi_i^2}\Big)_{i\in V},$$
which can be inverted by $\varphi=\sqrt{\Lambda^2-2\ell(\tau_u)}$. This yields, letting $d\Lambda:=\delta_{\sqrt{2u}}(\Lambda_{x_0})\prod_{x\in U}d\Lambda_x$,
$$E_{x_0}\otimes P^{G,U}\Big[G\Big(\big(\ell_x(\tau_u)+\tfrac12\varphi_x^2\big)_{x\in V}\Big)\Big]=C\,E_{x_0}\Big[\sum_\sigma\int_{\mathbb{R}_+^U}G\Big(\frac{\Lambda^2}{2}\Big)\exp\Big(-\tfrac12\mathcal{E}(\sigma\varphi,\sigma\varphi)\Big)\,\big|\mathrm{Jac}(T_u^{-1})(\Lambda)\big|\,\mathbf{1}_{\Lambda\in D_u}\,d\Lambda\Big]$$
$$=C\,E_{x_0}\Big[\sum_\sigma\int_{\mathbb{R}_+^U}\exp\Big(-\tfrac12\mathcal{E}(\sigma\varphi,\sigma\varphi)\Big)\,G\Big(\frac{\Lambda^2}{2}\Big)\Big(\prod_{x\in U}\frac{\Lambda_x}{\varphi_x}\Big)\mathbf{1}_{\Lambda\in D_u}\,d\Lambda\Big]. \qquad (2.2)$$
Note that the Jacobian is taken over all coordinates but $x_0$ ($\Lambda_{x_0}=\sqrt{2u}$); if $\Lambda\in D_u$, then
$$\big|\mathrm{Jac}(T_u^{-1})(\Lambda)\big|=\big|\mathrm{Jac}(\Lambda\mapsto\varphi)(\Lambda)\big|=\prod_{x\in U}\frac{\Lambda_x}{\varphi_x}.$$
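Since $T_u$ acts coordinatewise, the Jacobian determinant of its inverse factorizes over the coordinates in $U$. A quick numerical sanity check of the product formula against finite differences, with illustrative values of $\ell$ and $\Lambda$:

```python
from math import sqrt

# illustrative local times and Lambda values, with Lambda_x^2/2 >= ell_x
ell = {1: 0.4, 2: 0.1}          # coordinates of U = V \ {x0}
Lam = {1: 1.5, 2: 0.9}

def phi(lam_val, ell_val):
    """Inverse change of variables: phi_x = sqrt(Lambda_x^2 - 2 ell_x)."""
    return sqrt(lam_val**2 - 2 * ell_val)

# the map Lambda -> phi is diagonal, so its Jacobian determinant is the
# product of the partials d(phi_x)/d(Lambda_x) = Lambda_x / phi_x
h = 1e-6
num = 1.0
for x in ell:
    num *= (phi(Lam[x] + h, ell[x]) - phi(Lam[x] - h, ell[x])) / (2 * h)
formula = 1.0
for x in ell:
    formula *= Lam[x] / phi(Lam[x], ell[x])
print(num, formula)  # the two determinants agree
```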
Given $\Lambda\in\mathbb{R}_+^V$ such that $\Lambda_{x_0}=\sqrt{2u}$, we define, for all $i\in V$,
$$T=\inf\Big\{t\ge0;\ \ell_i(t)=\frac12\Lambda_i^2\ \text{for some}\ i\in V\Big\},\qquad \Lambda_i(t)=\sqrt{\Lambda_i^2-2\ell_i(t)},\quad t\le T, \qquad (2.3)$$
so that in (2.2) we have $\varphi=\Lambda(\tau_u)$. An important remark is that
$$\Lambda\in D_u\ \Longleftrightarrow\ X_T=x_0\ \Longleftrightarrow\ T=\tau_u. \qquad (2.4)$$
Finally, we define, for a configuration of signs $\sigma\in\{+1,-1\}^V$ with $\sigma_{x_0}=+1$,
$$M_t^\sigma=\exp\Big(-\frac12\mathcal{E}\big(\sigma\Lambda(t),\sigma\Lambda(t)\big)\Big)\,\frac{\prod_{j\ne x_0}\sigma_j\Lambda_j(0)}{\prod_{j\ne X_t}\sigma_j\Lambda_j(t)}. \qquad (2.5)$$
From (2.2)–(2.4), we deduce
$$E_{x_0}\otimes P^{G,U}\Big[G\Big(\ell(\tau_u)+\frac12\varphi^2\Big)\Big]=C\int_{\mathbb{R}_+^U}G\Big(\frac{\Lambda^2}{2}\Big)\,E_{x_0}\Big[\sum_\sigma M_T^\sigma\,\mathbf{1}_{\{X_T=x_0\}}\Big]\,d\Lambda. \qquad (2.6)$$
Lemma 2 For any $\Lambda\in\mathbb{R}^V$ and $\sigma\in\{-1,1\}^V$, the process $(M^\sigma_{t\wedge T})_{t\ge0}$ is a uniformly integrable martingale.

Proof Consider the Markov process $(\ell(t),X(t))$, which obviously has generator $\tilde L(g)(\ell,x)=\big(\frac{\partial}{\partial\ell_x}+L\big)g(\ell,x)$. Let $f$ be the function defined by
$$f(\ell,x)=\prod_{y\ne x}\Big(\sigma_y\sqrt{\Lambda_y^2-2\ell_y}\Big)^{-1}.$$
Note that, if $t<T$,
$$\frac{d}{dt}\mathcal{E}\big(\sigma\Lambda(t),\sigma\Lambda(t)\big)=\frac{2}{(\sigma\Lambda)_{X_t}(t)}\,L\big(\sigma\Lambda(t)\big)(X_t)=2\,\frac{Lf}{f}\big(\ell(t),X_t\big)=2\,\frac{\tilde Lf}{f}\big(\ell(t),X_t\big),$$
since $f(\ell,x)$ does not depend on $\ell_x$. Therefore, for $t<T$,
$$\frac{M_t^\sigma}{M_0^\sigma}=\frac{f(\ell(t),X(t))}{f(0,x_0)}\,e^{-\int_0^t\frac{\tilde Lf}{f}(\ell(s),X(s))\,ds},$$
which implies that $M^\sigma_{t\wedge T}$ is a martingale, using for instance Lemma 3.2 in [11], pp. 174–175. The condition that $f$ is bounded in that result is not satisfied, but the proof remains true, noting that the following integrability conditions hold (see Problem 22 of Chapter 2, p. 92 in [11]). First, using that $(\tilde Lf/f)(\ell(t),X(t))\ge-\mathrm{Cst}(W)$ and $(|\tilde Lf|/f)(\ell(t),X(t))\le\mathrm{Cst}(W)/\Lambda_{X_t}(t)$, we deduce
$$\int_0^t\frac{|\tilde Lf|}{f}\big(\ell(s),X_s\big)\exp\Big(-\int_0^s\frac{\tilde Lf}{f}\big(\ell(u),X_u\big)\,du\Big)\,ds\ \le\ \mathrm{Cst}(W,\Lambda)\sum_{i\in V}\int_0^t\frac{d\ell_i(s)}{\sqrt{\Lambda_i^2-2\ell_i(s)}}\ \le\ \mathrm{Cst}(\Lambda,W,|V|).$$
Second, $f(\ell(t),X(t))$ can be upper bounded by an integrable random variable, uniformly in $t$. Indeed, let us consider the extension of the process $(X_t)$ to $t\in\mathbb{R}$, with the convention that the local times at all sites are $0$ at time $0$. For all $j\in V$, let $s_j$ be the (possibly negative) local time at $j$ at the last jump from that site before reaching local time $\Lambda_j^2/2$ at that site, and let $m_j=\Lambda_j^2/2-s_j$. Then the random variables $m_j$, $j\in V$, are independent exponential random variables with parameters $W_j=\sum_{k\sim j}W_{jk}$, since the sequence of local times of jumps from $j$ is a Poisson point process with intensity $W_j$. Now, for all $t\in\mathbb{R}_+$ and $j\ne X_t$, $\Lambda_j^2(t\wedge T)\ge 2m_j$, so that $f(\ell(t),X(t))\le\mathrm{Cst}(\Lambda)\prod_{j\in V}m_j^{-1/2}$. This enables us to conclude, since $\prod_{j\in V}m_j^{-1/2}$ is integrable; it also implies the uniform integrability of $M_{t\wedge T}$, using that $|M_{t\wedge T}|\le\mathrm{Cst}(W,\Lambda,t)\,f(\ell(t),X(t))$. $\square$

Let us now consider the process
$$N_t=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}M_t^\sigma=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\exp\Big(-\frac12\mathcal{E}\big(\sigma\Lambda(t),\sigma\Lambda(t)\big)\Big)\,\frac{\prod_{j\ne x_0}\sigma_j\Lambda_j(0)}{\prod_{j\ne X_t}\sigma_j\Lambda_j(t)}. \qquad (2.7)$$
Lemma 3 For all $x\ne x_0$, we have
$$N_T\,\mathbf{1}_{\{X_T=x\}}=0. \qquad (2.8)$$

Proof Let $\sigma^x$ be the spin-flip of $\sigma$ at $x$: $\sigma^x=\kappa^x\sigma$ with $\kappa^x_y=-1$ if $y=x$ and $\kappa^x_y=1$ if $y\ne x$. If $x\ne x_0$ and $X_T=x$, then
$$M_T^{\sigma^x}=-M_T^\sigma.$$
Indeed, since $\Lambda_x(T)=0$, we have $\sigma^x\Lambda(T)=\sigma\Lambda(T)$, and the minus sign comes from the numerator of the product term in (2.5). By symmetry, the left-hand side of (2.8) is equal to
$$\frac12\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\Big(M_T^\sigma+M_T^{\sigma^x}\Big)\mathbf{1}_{\{X_T=x\}}=0. \qquad\square$$

It follows from Lemmas 2 and 3, by the optional stopping theorem (using the uniform integrability of $M_{t\wedge T}$), that
$$E_{x_0}\otimes P^{G,U}\Big[G\Big(\big(\ell_x(\tau_u)+\tfrac12\varphi_x^2\big)_{x\in V}\Big)\Big]=C\int_{\mathbb{R}_+^U}G\Big(\frac{\Lambda^2}{2}\Big)\,E_{x_0}\big[N_T\,\mathbf{1}_{\{X_T=x_0\}}\big]\,d\Lambda$$
$$=C\int_{\mathbb{R}_+^U}G\Big(\frac{\Lambda^2}{2}\Big)\,E_{x_0}[N_T]\,d\Lambda=C\int_{\mathbb{R}_+^U}G\Big(\frac{\Lambda^2}{2}\Big)\,N_0\,d\Lambda$$
$$=C\int_{\mathbb{R}_+^U}G\Big(\frac{\Lambda^2}{2}\Big)\sum_\sigma\exp\Big(-\frac12\mathcal{E}(\sigma\Lambda,\sigma\Lambda)\Big)\,d\Lambda=C\int_{\mathbb{R}^U}G\Big(\frac{\Lambda^2}{2}\Big)\exp\Big(-\frac12\mathcal{E}(\Lambda,\Lambda)\Big)\,d\Lambda,$$
which concludes the proof of Theorem 1.
3 Link with the vertex-reinforced jump process

The aim of this section is to point out a link between the Ray-Knight identity and a reversed version of the vertex-reinforced jump process (VRJP). It is organized as follows. In Sect. 3.1 we compute the Radon–Nikodym derivative of the VRJP, which is similar to the martingale $M$ in Sect. 2: the computation can be used in particular to provide a direct proof of exchangeability of the VRJP. In Sect. 3.2 we introduce a time-reversed version of the VRJP, i.e. where the process subtracts rather than adds local time at the site where it stays (see Definition 2). Then we show in Theorem 4 that its Radon–Nikodym derivative is the martingale $M^\sigma$ of Sect. 2 with positive spins $\sigma\equiv+1$. Note that the "magnetized" reversed VRJP, defined in Sect. 1 and related to the inversion of Ray-Knight in Theorem 2, involves instead the sum $N$, cf. (2.7), of all the martingales $M^\sigma$.

3.1 The vertex-reinforced jump process and its Radon–Nikodym derivative

Definition 1 Given positive conductances $(W_e)_{e\in E}$ on the edges of the graph and initial positive local times $(\varphi_i)_{i\in V}$, the vertex-reinforced jump process (VRJP) is a continuous-time process $(Y_t)_{t\ge0}$ on $V$, starting at time $0$ at some vertex $z\in V$ and such that, if $Y$ is at a vertex $i\in V$ at time $t$, then, conditionally on $(Y_s,\ s\le t)$, the process jumps to a neighbour $j$ of $i$ at rate $W_{i,j}L_j(t)$, where
$$L_j(t):=\varphi_j+\int_0^t\mathbf{1}_{\{Y_s=j\}}\,ds.$$
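A minimal simulation sketch of Definition 1. Note that, while the process sits at $i$, the local times $L_j$ for $j\ne i$ are frozen, so the holding time at $i$ is exactly exponential. The graph, conductances, initial local times and time horizon below are illustrative choices:

```python
import random

def simulate_vrjp(W, phi, z, t_max, rng):
    """Simulate the VRJP: from site i, jump to a neighbour j at rate
    W[i][j] * L[j], where L[j] is the current local time at j."""
    L = dict(phi)                # local times, started at phi
    x, t = z, 0.0
    while t < t_max:
        rates = {j: w * L[j] for j, w in W[x].items()}
        total = sum(rates.values())
        hold = rng.expovariate(total)
        dt = min(hold, t_max - t)
        L[x] += dt               # the VRJP *adds* local time where it sits
        t += dt
        if t >= t_max:
            break
        r, acc = rng.random() * total, 0.0
        for j, rate in rates.items():
            acc += rate
            if r <= acc:
                x = j
                break
    return L

W = {0: {1: 1.0, 2: 2.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 2.0, 1: 1.0}}
phi = {0: 1.0, 1: 1.0, 2: 1.0}
L = simulate_vrjp(W, phi, z=0, t_max=5.0, rng=random.Random(7))
print(L)  # total local time mass grows by exactly the elapsed time
```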
The vertex-reinforced jump process was initially proposed by Werner in 2000, first studied by Davis and Volkov [4,5], then Collevecchio [2,3], Basdevant and Singh [1], and Sabot and Tarrès [19]. Let $D$ be the increasing functional
$$D(s)=\frac12\sum_{i\in V}\big(L_i^2(s)-\varphi_i^2\big),$$
define the time-changed VRJP $Z_t=Y_{D^{-1}(t)}$, and let, for all $i\in V$ and $t\ge0$, $\ell_i^Z(t)$ be the local time of $Z$ at time $t$.

Lemma 4 The inverse functional $D^{-1}$ is given by
$$D^{-1}(t)=\sum_{i\in V}\Big(\sqrt{\varphi_i^2+2\ell_i^Z(t)}-\varphi_i\Big).$$
Conditionally on the past at time $t$, the process $Z$ jumps from $Z_t=i$ to a neighbour $j$ at rate
$$W_{i,j}\sqrt{\frac{\varphi_j^2+2\ell_j^Z(t)}{\varphi_i^2+2\ell_i^Z(t)}}.$$

Proof The proof is elementary and already in [19] [Section 4.3, proof of Theorem 2 ii)] in a slightly modified version, but we include it here for completeness. First note that, for all $i\in V$,
$$\ell_i^Z(D(s))=\big(L_i^2(s)-\varphi_i^2\big)/2, \qquad (3.1)$$
since $\big(\ell_i^Z(D(s))\big)'=D'(s)\mathbf{1}_{\{Z_{D(s)}=i\}}=L_{Y_s}(s)\mathbf{1}_{\{Y_s=i\}}$. Hence
$$(D^{-1})'(t)=\frac{1}{D'(D^{-1}(t))}=\frac{1}{L_{Z_t}(D^{-1}(t))}=\frac{1}{\sqrt{\varphi_{Z_t}^2+2\ell_{Z_t}^Z(t)}},$$
which yields the expression for $D^{-1}$. It remains to prove the last assertion:
$$P(Z_{t+dt}=j\,|\,\mathcal{F}_t)=P(Y_{D^{-1}(t+dt)}=j\,|\,\mathcal{F}_t)=W_{Z_t,j}\,(D^{-1})'(t)\,L_j(D^{-1}(t))\,dt=W_{Z_t,j}\sqrt{\frac{\varphi_j^2+2\ell_j^Z(t)}{\varphi_{Z_t}^2+2\ell_{Z_t}^Z(t)}}\,dt. \qquad\square$$

Let $P_{x_0,t}$ (resp. $P^Z_{\varphi,x_0,t}$) be the distribution, starting from $x_0$ and on the time interval $[0,t]$, of the Markov jump process with conductances $(W_e)_{e\in E}$ (resp. of the time-changed VRJP $(Z_s)_{s\in[0,t]}$ with conductances $(W_e)_{e\in E}$ and initial positive local times $(\varphi_i)_{i\in V}$).
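The formula for $D^{-1}$ in Lemma 4 can be checked on a fixed, hand-chosen trajectory of $Y$; the data below are illustrative:

```python
from math import sqrt

# fixed illustrative trajectory of Y on sites {0, 1}, observed up to time s
phi = {0: 1.0, 1: 2.0}
s = 2.5
occupation = {0: 1.0, 1: 1.5}   # time Y spent at each site up to s
L = {i: phi[i] + occupation[i] for i in phi}

# D(s) = (1/2) * sum_i (L_i(s)^2 - phi_i^2)
D = 0.5 * sum(L[i]**2 - phi[i]**2 for i in phi)

# local times of the time-changed process Z at time t = D(s), from (3.1)
ellZ = {i: (L[i]**2 - phi[i]**2) / 2 for i in phi}

# Lemma 4: D^{-1}(D(s)) = sum_i (sqrt(phi_i^2 + 2 ellZ_i) - phi_i) = s
Dinv = sum(sqrt(phi[i]**2 + 2 * ellZ[i]) - phi[i] for i in phi)
print(Dinv)  # equals s = 2.5
```

Indeed $\sqrt{\varphi_i^2+2\ell_i^Z}=L_i(s)$, so the sum telescopes back to the total elapsed $Y$-time.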
Theorem 3 The law of the time-changed VRJP $Z$ on the interval $[0,t]$ is absolutely continuous with respect to the law of the MJP $X$ with rates $W_{i,j}$, with Radon–Nikodym derivative given by
$$\frac{dP^Z_{\varphi,x_0,t}}{dP_{x_0,t}}=e^{\frac12\big(\mathcal{E}\big(\sqrt{\varphi^2+2\ell(t)},\,\sqrt{\varphi^2+2\ell(t)}\big)-\mathcal{E}(\varphi,\varphi)\big)}\ \frac{\prod_{j\ne x_0}\varphi_j}{\prod_{j\ne X_t}\sqrt{\varphi_j^2+2\ell_j(t)}},$$
where $\ell_j(t)$ is the local time of $X$ at time $t$ and site $j$, defined in (1.1).

Proof In the proof, we write $\ell$ for the local time of both $Z$ and $X$, since we consider $Z$ and $X$ on the canonical space with different probabilities. Let, for all $\psi\in\mathbb{R}^V$, $i\in V$, $t\ge0$,
$$F(\psi)=\sum_{\{i,j\}\in E}W_{ij}\psi_i\psi_j,\qquad G_i(t)=\prod_{j\ne i}\big(\varphi_j^2+2\ell_j(t)\big)^{-1/2}.$$
First note that the probability, for the time-changed VRJP $Z$, of holding at a site $v\in V$ on a time interval $[t_1,t_2]$ is
$$\exp\Big(-\int_{t_1}^{t_2}\sum_{j\sim Z_t}W_{Z_t,j}\sqrt{\frac{\varphi_j^2+2\ell_j(t)}{\varphi_{Z_t}^2+2\ell_{Z_t}(t)}}\,dt\Big)=\exp\Big(-\int_{t_1}^{t_2}\frac{d}{dt}F\Big(\sqrt{\varphi^2+2\ell(t)}\Big)\,dt\Big).$$
Second, conditionally on $(Z_u,\ u\le t)$, the probability that $Z$ jumps from $Z_t=i$ to $j$ in the time interval $[t,t+dt]$ is
$$W_{ij}\sqrt{\frac{\varphi_j^2+2\ell_j(t)}{\varphi_i^2+2\ell_i(t)}}\,dt=W_{ij}\,\frac{G_j(t)}{G_i(t)}\,dt.$$
Therefore the probability that, at time $t$, $Z$ has followed a path $Z_0=x_0,x_1,\ldots,Z_t=x_n$ with jump times respectively in $[t_i,t_i+dt_i]$, $i=1\ldots n$, where $t_0=0<t_1<\cdots<t_n<t=t_{n+1}$, is
$$\exp\Big(F(\varphi)-F\Big(\sqrt{\varphi^2+2\ell(t)}\Big)\Big)\prod_{i=1}^n W_{x_{i-1}x_i}\,\frac{G_{x_i}(t_i)}{G_{x_{i-1}}(t_i)}\,dt_i=\exp\Big(F(\varphi)-F\Big(\sqrt{\varphi^2+2\ell(t)}\Big)\Big)\,\frac{G_{X_t}(t)}{G_{x_0}(0)}\prod_{i=1}^n W_{x_{i-1}x_i}\,dt_i,$$
where we use that $G_{x_i}(t_i)=G_{x_i}(t_{i+1})$, since $Z$ stays at site $x_i$ on the time interval $[t_i,t_{i+1}]$. On the other hand, the probability that, at time $t$, $X$ has followed the same path with jump times in the same intervals is
$$\exp\Big(-\sum_{i,j:\,j\sim i}W_{ij}\,\ell_i(t)\Big)\prod_{i=1}^n W_{x_{i-1}x_i}\,dt_i,$$
which concludes the proof. $\square$
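Matching the path-density ratio just computed with the closed form in Theorem 3 reduces to the algebraic identity $\mathcal{E}(\psi,\psi)=\sum_x W_x\psi_x^2-2F(\psi)$. A numerical check of that reduction, with illustrative conductances, initial local times $\varphi$ and local times $\ell$:

```python
from math import sqrt, exp

# illustrative data on a triangle graph
W = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 0.5}   # conductances
phi = {0: 1.0, 1: 0.7, 2: 1.3}                # initial local times
ell = {0: 0.4, 1: 0.9, 2: 0.2}                # local times of X at time t
x0, Xt = 0, 1                                 # start and current position
V = [0, 1, 2]

def Wtot(i):
    return sum(w for e, w in W.items() if i in e)

def F(psi):   # F(psi) = sum over edges of W_ij psi_i psi_j
    return sum(w * psi[i] * psi[j] for (i, j), w in W.items())

def En(psi):  # Dirichlet form E(psi, psi) = sum_i W_i psi_i^2 - 2 F(psi)
    return sum(Wtot(i) * psi[i]**2 for i in V) - 2 * F(psi)

psi = {i: sqrt(phi[i]**2 + 2 * ell[i]) for i in V}

prodG = 1.0   # prod_{j != x0} phi_j / prod_{j != Xt} psi_j
for j in V:
    if j != x0:
        prodG *= phi[j]
    if j != Xt:
        prodG /= psi[j]

# ratio of path densities assembled in the proof of Theorem 3 ...
lhs = exp(F(phi) - F(psi) + sum(Wtot(i) * ell[i] for i in V)) * prodG
# ... versus the closed form of the Radon-Nikodym derivative
rhs = exp(0.5 * (En(psi) - En(phi))) * prodG
print(lhs, rhs)  # the two expressions coincide
```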
Note that Theorem 3 can be used to show exchangeability of the VRJP, and provides a martingale for the Markov jump process similar to $M_t$ in (2.5). Recall that it is shown in [19] that the time-changed VRJP $(Z_t)_{t\ge0}$ is a mixture of Markov jump processes, i.e. that there exist random variables $(U_i)_{i\in V}\in H_0:=\{u\in\mathbb{R}^V:\ \sum_{i\in V}u_i=0\}$, distributed according to a supersymmetric hyperbolic sigma model with parameters $(W_{ij}\varphi_i\varphi_j)_{\{i,j\}\in E}$ (see Section 6 of [19] and [7]), such that, conditionally on $(U_i)_{i\in V}$, $Z_t$ is a Markov jump process starting from $z$, with jump rate from $i$ to $j$ equal to $W_{i,j}e^{U_j-U_i}$. In particular, the discrete-time process corresponding to the VRJP observed at jump times is exchangeable, and is a mixture of reversible Markov chains with conductances $W_{i,j}e^{U_i+U_j}$.

3.2 The reversed VRJP and its Radon–Nikodym derivative

Definition 2 Given positive conductances $(W_e)_{e\in E}$ on the edges of the graph and initial positive local times $(\Lambda_i)_{i\in V}$, the reversed vertex-reinforced jump process (RVRJP) is a continuous-time process $(\tilde Y_t)_{0\le t\le\tilde S}$, starting at time $0$ at some vertex $i_0\in V$, such that, if $\tilde Y$ is at a vertex $i\in V$ at time $t$, then, conditionally on $(\tilde Y_s,\ s\le t)$, the process jumps to a neighbour $j$ of $i$ at rate $W_{i,j}\tilde L_j(t)$, where
$$\tilde L_j(t):=\Lambda_j-\int_0^t\mathbf{1}_{\{\tilde Y_s=j\}}\,ds,$$
defined up until the stopping time $\tilde S$ at which one of the local times hits $0$, i.e.
$$\tilde S=\inf\{t\ge0:\ \tilde L_j(t)=0\ \text{for some}\ j\}.$$
Similarly as for $Y$, let us define the increasing functional
$$\tilde D(s)=\frac12\sum_{i\in V}\big(\Lambda_i^2-\tilde L_i^2(s)\big),$$
define the time-changed process $\tilde Z_t=\tilde Y_{\tilde D^{-1}(t)}$, and let, for all $i\in V$ and $t\ge0$, $\ell_i^{\tilde Z}(t)$ be the local time of $\tilde Z$ at time $t$. Then, similarly as in Lemma 4, conditionally on the past at time $t$, $\tilde Z$ jumps from $\tilde Z_t=i$ to a neighbour $j$ at rate
$$W_{i,j}\sqrt{\frac{\Lambda_j^2-2\ell_j^{\tilde Z}(t)}{\Lambda_i^2-2\ell_i^{\tilde Z}(t)}},$$
and $\tilde Z$ stops at time
$$\tilde T=\tilde D(\tilde S)=\inf\Big\{t\ge0;\ \ell_i^{\tilde Z}(t)=\frac12\Lambda_i^2\ \text{for some}\ i\in V\Big\}.$$
Let $P^{\tilde Z}_{\Lambda,x_0,t}$ be the distribution of $(\tilde Z_s)$ on the time interval $[0,t\wedge\tilde T]$, starting from $x_0$ with initial condition $\Lambda$. An easy adaptation of the proof of Theorem 3 shows

Theorem 4 The law of the time-reversed VRJP $\tilde Z$ on the interval $[0,t\wedge\tilde T]$ is absolutely continuous with respect to the law of the MJP $X$ with rates $W_{i,j}$, with Radon–Nikodym derivative given by
$$\frac{dP^{\tilde Z}_{\Lambda,x_0,t}}{dP_{x_0,t}}=e^{-\frac12\big(\mathcal{E}\big(\sqrt{\Lambda^2-2\ell(t\wedge T)},\,\sqrt{\Lambda^2-2\ell(t\wedge T)}\big)-\mathcal{E}(\Lambda,\Lambda)\big)}\ \frac{\prod_{j\ne x_0}\Lambda_j}{\prod_{j\ne X_{t\wedge T}}\sqrt{\Lambda_j^2-2\ell_j(t\wedge T)}},$$
where $\ell_j(t)$ (resp. $T$) is the local time of $X$ at time $t$ and site $j$ (resp. the stopping time) defined in (1.1) (resp. in (2.3)).

Hence, the Radon–Nikodym derivative of the time-reversed VRJP with respect to the MJP is the martingale that appears in the proof of Theorem 1; more precisely,
$$\frac{dP^{\tilde Z}_{\Lambda,x_0,t}}{dP_{x_0,t}}=\frac{M_{t\wedge T}}{M_0},$$
with the notations of Sect. 2, where this Radon–Nikodym derivative involves the martingale $M^\sigma$ with positive spins $\sigma\equiv+1$. The "magnetized" reversed VRJP, defined in Sect. 1 and related to the inversion of Ray-Knight in Theorem 2, involves the sum of all the martingales $M^\sigma$: this is the purpose of the next section.
4 Proof of Lemma 1 and Theorem 2

The proofs of Lemma 1 and Theorem 2 rely on a time change of the process $\check Y$, which is in fact the same time change as the one appearing in Sect. 3 for $\tilde Y$: let us define
$$\check D(s)=\frac12\sum_{i\in V}\big(\Lambda_i^2-\check L_i^2(s)\big),$$
define the time-changed process $\check Z_t=\check Y_{\check D^{-1}(t)}$, and let, for all $i\in V$ and $t\ge0$, $\ell_i^{\check Z}(t)$ be the local time of $\check Z$ at time $t$. Then, similarly to Lemma 4, conditionally on the past at time $t$, $\check Z$ jumps from $\check Z_t=i$ to a neighbour $j$ at rate
$$W_{i,j}\sqrt{\frac{\Lambda_j^2-2\ell_j^{\check Z}(t)}{\Lambda_i^2-2\ell_i^{\check Z}(t)}}\ \frac{\langle\sigma_j\rangle(t)}{\langle\sigma_i\rangle(t)},$$
where we write $\langle\cdot\rangle(t)$ for $\langle\cdot\rangle_{\check D^{-1}(t)}$, according to the notation of Sect. 1: more precisely, $\langle\cdot\rangle(t)$ is the expectation for the Ising model with interaction
$$J_{i,j}\big(\check D^{-1}(t)\big)=W_{i,j}\sqrt{\Lambda_i^2-2\ell_i^{\check Z}(t)}\,\sqrt{\Lambda_j^2-2\ell_j^{\check Z}(t)},$$
since the vectors of local times $\ell^{\check Z}$ and $\check L$ are related by the formula
$$\ell^{\check Z}(t)=\frac12\Big(\Lambda^2-\check L^2\big(\check D^{-1}(t)\big)\Big). \qquad (4.1)$$
Clearly, this process is well defined up to time
$$\check T=\check D(\check S)=\inf\Big\{t\ge0;\ \ell_i^{\check Z}(t)=\frac12\Lambda_i^2\ \text{for some}\ i\in V\Big\}.$$
Lemma 1 tells us that $\check Z_{\check T}=x_0$.

We denote by $P^{\check Z}_{\Lambda,z}$ the law of the process $\check Z$ starting from the initial condition $\Lambda$ and initial state $z$, up to the time $\check T$ (as for $\check Y$, this law depends on the choice of $x_0$). We now prove a more precise version of Theorem 2, giving a description of the conditional law of the full process.

Theorem 5 With the notations of Theorem 2, under $P_{x_0}(\cdot\,|\,\Lambda)$, $\big((X_t)_{t\in[0,\tau_u]},\varphi\big)$ has the law of $\big((\check Z_t)_{t\in[0,\check T]},\sigma\Lambda(\check T)\big)$, where $\check Z$ is distributed under $P^{\check Z}_{\Lambda,x_0}$ and $\sigma$ is distributed according to the Ising model with interaction $W_{i,j}\Lambda_i(\check T)\Lambda_j(\check T)$ and boundary condition $\sigma_{x_0}=+1$.
We will adopt the following notation:
$$\Lambda_i(t)=\sqrt{\Lambda_i^2-2\ell_i^{\check Z}(t)}=\check L_i\big(\check D^{-1}(t)\big). \qquad (4.2)$$
Recall that $M^\sigma_t$, $N_t$ and $T$ are the processes (starting with the initial condition $\Lambda$) and stopping time defined respectively in (2.5), (2.7) and (2.3), as functions of the path of the Markov process $X$ up to time $t$. The proof of Theorem 5 is based on the following lemma.

Lemma 5 We have:
(i) For all $t\le T$,
$$N_t=e^{\sum_{i\in V}W_i\big(\ell_i(t)-\frac12\Lambda_i^2\big)}\,F\big(\check D^{-1}(t)\big)\,\langle\sigma_{X_t}\rangle(t)\,\frac{\prod_{j\ne x_0}\Lambda_j(0)}{\prod_{j\ne X_t}\Lambda_j(t)},$$
where $F(\check D^{-1}(t))$ (resp. $\langle\cdot\rangle(t)$) corresponds to the partition function (resp. distribution) of the Ising model with interaction $J_{i,j}(\check D^{-1}(t))=W_{i,j}\Lambda_i(t)\Lambda_j(t)$, and $W_i=\sum_{j\sim i}W_{i,j}$.
(ii) $N_T=0$ if $X_T\ne x_0$.
(iii) Under the law of the MJP $(X_t)$ started at $z$, $N_{t\wedge T}$ is a positive martingale; more precisely, $N_{t\wedge T}/N_0$ is the Radon–Nikodym derivative of the measure $P^{\check Z}_{\Lambda,z}$ with respect to the law of the MJP $X$ starting from $z$ and stopped at time $T$.

Proof of Lemma 5 (i) We expand the squares in the energy term, which yields
$$\frac12\mathcal{E}\big(\sigma\Lambda(t),\sigma\Lambda(t)\big)=-\sum_{\{i,j\}\in E}W_{i,j}\Lambda_i(t)\Lambda_j(t)\sigma_i\sigma_j+\frac12\sum_{i\in V}W_i\big(\Lambda_i^2-2\ell_i(t)\big),$$
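The expansion of the energy term used in the proof of (i) can be verified numerically; the conductances, initial $\Lambda$, local times and spin configuration below are illustrative:

```python
from math import sqrt

# illustrative data on a triangle graph, with 2 * ell_i < Lam0_i^2
W = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 0.5}
Lam0 = {0: 1.2, 1: 0.8, 2: 1.5}
ell = {0: 0.3, 1: 0.1, 2: 0.6}
sigma = {0: 1, 1: -1, 2: 1}
V = [0, 1, 2]

Lam = {i: sqrt(Lam0[i]**2 - 2 * ell[i]) for i in V}   # Lambda_i(t)
f = {i: sigma[i] * Lam[i] for i in V}                 # sigma Lambda(t)

# left side: (1/2) E(sigma Lambda(t), sigma Lambda(t)), sum over edges
lhs = 0.5 * sum(w * (f[i] - f[j])**2 for (i, j), w in W.items())

# right side: the expanded form from the proof of Lemma 5 (i)
Wtot = {i: sum(w for e, w in W.items() if i in e) for i in V}
rhs = (-sum(w * Lam[i] * Lam[j] * sigma[i] * sigma[j]
            for (i, j), w in W.items())
       + 0.5 * sum(Wtot[i] * (Lam0[i]**2 - 2 * ell[i]) for i in V))
print(lhs, rhs)  # the two sides agree
```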
and the statement follows easily.

(ii) Same argument as in Lemma 3. This can also be seen from the expression in (i), since in this case all the interactions between $x$ and its neighbours vanish: indeed, $J_{x,y}(T)=0$, using $\Lambda_x(T)=0$. This implies that the pinning $\sigma_{x_0}=+1$ has no effect on the spin $\sigma_x$, and therefore, by symmetry, that $\langle\sigma_x\rangle(T)=0$ if $X_T=x\ne x_0$.

(iii) The fact that $N_t$ is a martingale follows directly from the martingale property of the $M_t^\sigma$, cf. Lemma 2. It is also a consequence of the Radon–Nikodym property proved below. The fact that $N_t$ is positive follows from the positive correlations of the Ising model: $\langle\sigma_x\rangle(t)=\langle\sigma_{x_0}\sigma_x\rangle(t)\ge0$, see for instance [23].

The beginning of the proof follows the same line of ideas as the proof of Theorem 3. Similarly, we set
$$\check G_i(t)=\prod_{j\ne i}\frac{1}{\Lambda_j(t)}=\prod_{j\ne i}\frac{1}{\sqrt{\Lambda_j^2-2\ell_j^{\check Z}(t)}},$$
so that
$$\frac{\Lambda_j(t)}{\Lambda_i(t)}=\frac{\check G_j(t)}{\check G_i(t)}.$$
First note that the probability, for the time-changed process $\check Z$, of holding at a site $v\in V$ on a time interval $[t_1,t_2]$ is
$$\exp\Big(-\int_{t_1}^{t_2}\sum_{j\sim\check Z_u}W_{\check Z_u,j}\,\frac{\Lambda_j(u)\,\langle\sigma_j\rangle(u)}{\Lambda_{\check Z_u}(u)\,\langle\sigma_{\check Z_u}\rangle(u)}\,du\Big).$$
Second, conditionally on $(\check Z_u,\ u\le t)$, the probability that $\check Z$ jumps from $\check Z_t=i$ to $j$ in the time interval $[t,t+dt]$ is
$$W_{ij}\,\frac{\Lambda_j(t)\,\langle\sigma_j\rangle(t)}{\Lambda_i(t)\,\langle\sigma_i\rangle(t)}\,dt.$$
Therefore the probability that, at time $t$, $\check Z$ has followed a path $\check Z_0=x_0,x_1,\ldots,\check Z_t=x_n$ with jump times respectively in $[t_i,t_i+dt_i]$, $i=1\ldots n$, where $t_0=0<t_1<\cdots<t_n<t=t_{n+1}$, with $t\le\check T$, is
$$\exp\Big(-\int_0^t\sum_{j\sim\check Z_u}W_{\check Z_u,j}\frac{\Lambda_j(u)\langle\sigma_j\rangle(u)}{\Lambda_{\check Z_u}(u)\langle\sigma_{\check Z_u}\rangle(u)}\,du\Big)\prod_{i=1}^n W_{x_{i-1}x_i}\,\frac{\Lambda_{x_i}(t_i)\,\langle\sigma_{x_i}\rangle(t_i)}{\Lambda_{x_{i-1}}(t_i)\,\langle\sigma_{x_{i-1}}\rangle(t_i)}\,dt_i$$
$$=\exp\Big(-\int_0^t\sum_{j\sim\check Z_u}W_{\check Z_u,j}\frac{\Lambda_j(u)\langle\sigma_j\rangle(u)}{\Lambda_{\check Z_u}(u)\langle\sigma_{\check Z_u}\rangle(u)}\,du\Big)\prod_{i=1}^n W_{x_{i-1}x_i}\,\frac{\check G_{x_i}(t_i)}{\check G_{x_{i-1}}(t_i)}\,\frac{\langle\sigma_{x_i}\rangle(t_i)}{\langle\sigma_{x_{i-1}}\rangle(t_i)}\,dt_i$$
$$=\exp\Big(-\int_0^t\sum_{j\sim\check Z_u}W_{\check Z_u,j}\frac{\Lambda_j(u)\langle\sigma_j\rangle(u)}{\Lambda_{\check Z_u}(u)\langle\sigma_{\check Z_u}\rangle(u)}\,du\Big)\,\frac{\check G_{\check Z_t}(t)}{\check G_{x_0}(0)}\prod_{i=1}^n W_{x_{i-1}x_i}\,\frac{\langle\sigma_{x_i}\rangle(t_i)}{\langle\sigma_{x_{i-1}}\rangle(t_i)}\,dt_i,$$
where we use that $\check G_{x_{i-1}}(t_{i-1})=\check G_{x_{i-1}}(t_i)$, since $\check Z$ stays at site $x_{i-1}$ on the time interval $[t_{i-1},t_i]$. We now use that
$$\prod_{i=1}^n\frac{\langle\sigma_{x_i}\rangle(t_i)}{\langle\sigma_{x_{i-1}}\rangle(t_i)}=\frac{\langle\sigma_{\check Z_t}\rangle(t)}{\langle\sigma_{x_0}\rangle(0)}\prod_{i=1}^{n+1}\frac{\langle\sigma_{x_{i-1}}\rangle(t_{i-1})}{\langle\sigma_{x_{i-1}}\rangle(t_i)}=\langle\sigma_{\check Z_t}\rangle(t)\,\exp\Big(-\int_0^t\frac{\partial_u\langle\sigma_{\check Z_u}\rangle(u)}{\langle\sigma_{\check Z_u}\rangle(u)}\,du\Big),$$
since $\langle\sigma_{x_0}\rangle(0)=1$.
Finally, set
$$H(t)=F\big(\check D^{-1}(t)\big)=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\exp\Big(\sum_{\{i,j\}\in E}W_{i,j}\Lambda_i(t)\Lambda_j(t)\sigma_i\sigma_j\Big)$$
and
$$K(t)=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\exp\Big(\sum_{\{i,j\}\in E}W_{i,j}\Lambda_i(t)\Lambda_j(t)\sigma_i\sigma_j\Big)\,\sigma_{\check Z_t}.$$
We have $\langle\sigma_{\check Z_t}\rangle(t)=K(t)/H(t)$, so that
$$\frac{\partial_u\langle\sigma_{\check Z_u}\rangle(u)}{\langle\sigma_{\check Z_u}\rangle(u)}=\frac{\partial_u K(u)}{K(u)}-\frac{\partial_u H(u)}{H(u)}.$$
Now, since
$$\frac{\partial}{\partial u}\Big(\sum_{\{i,j\}\in E}W_{i,j}\Lambda_i(u)\Lambda_j(u)\sigma_i\sigma_j\Big)=-\sum_{j\sim\check Z_u}W_{\check Z_u,j}\,\frac{\Lambda_j(u)}{\Lambda_{\check Z_u}(u)}\,\sigma_{\check Z_u}\sigma_j,$$
we have that
$$\frac{\partial_u K(u)}{K(u)}=-\sum_{j\sim\check Z_u}W_{\check Z_u,j}\,\frac{\Lambda_j(u)}{\Lambda_{\check Z_u}(u)}\,\frac{\langle\sigma_j\rangle(u)}{\langle\sigma_{\check Z_u}\rangle(u)}.$$
These identities imply that the probability that, at time $t$, $\check Z$ has followed a path $\check Z_0=x_0,x_1,\ldots,\check Z_t=x_n$ with jump times respectively in $[t_i,t_i+dt_i]$, $i=1\ldots n$, where $t_0=0<t_1<\cdots<t_n<t=t_{n+1}$, with $t\le\check T$, is
$$\frac{H(t)}{H(0)}\,\langle\sigma_{\check Z_t}\rangle(t)\,\frac{\check G_{\check Z_t}(t)}{\check G_{x_0}(0)}\prod_{i=1}^n W_{x_{i-1}x_i}\,dt_i=\frac{N(t)}{N(0)}\,\exp\Big(-\sum_{i\in V}W_i\ell_i(t)\Big)\prod_{i=1}^n W_{x_{i-1}x_i}\,dt_i,$$
where in the last equality we used Lemma 5 (i). Finally, the probability that, at time $t$, the Markov jump process $X$ has followed the same path with jump times in the same intervals is
$$\exp\Big(-\sum_{i\in V}W_i\ell_i(t)\Big)\prod_{i=1}^n W_{x_{i-1}x_i}\,dt_i.$$
This exactly says that the Radon–Nikodym derivative of $\check Z_{t\wedge\check T}$ under $P^{\check Z}$ with respect to the law $P$ of the Markov jump process is $N(t\wedge\check T)/N(0)$. $\square$
Proof of Lemma 1 By (ii) and (iii) of Lemma 5 and the optional stopping theorem, we have
$$P^{\check Y}_{\Lambda,x_0}\big(\check Y_{\check S}\ne x_0\big)=P^{\check Z}_{\Lambda,x_0}\big(\check Z_{\check T}\ne x_0\big)=E_{x_0}\big[N_T\,\mathbf{1}_{\{X_T\ne x_0\}}\big]/N_0=0,$$
since $N_T\mathbf{1}_{\{X_T\ne x_0\}}=0$ by Lemma 5 (ii). $\square$
Proof of Theorem 5 Let $\psi\big((X_t)_{t\in[0,\tau_u]},\varphi\big)=\psi(X,\varphi)$ and $G(\Lambda)$ be test functions. We are interested in the expectation
$$E_{x_0}\otimes P^{G,U}\big(\psi(X,\varphi)G(\Lambda)\big)=E_{x_0}\Big[\int_{\mathbb{R}^{V\setminus\{x_0\}}}\psi(X,\varphi)\,G(\Lambda)\,C\,e^{-\frac12\mathcal{E}(\varphi,\varphi)}\,d\varphi\Big], \qquad (4.3)$$
where, as in the proof of Theorem 1, $C$ is the normalizing constant of the Gaussian free field. Recall that $\Lambda=\sqrt{\varphi^2+2\ell(\tau_u)}$, and set $\sigma=\mathrm{sign}(\varphi)$. As in the proof of Theorem 1, we change to the variables $\Lambda$. Following the computation at the beginning of the proof of Theorem 1 up to Eq. (2.6), we deduce that (4.3) is equal to
$$C\int_{\mathbb{R}_+^{V\setminus\{x_0\}}}G(\Lambda)\,E_{x_0}\Big[\sum_\sigma\psi\big(X,\sigma\Lambda(T)\big)\,M_T^\sigma\,\mathbf{1}_{\{X_T=x_0\}}\Big]\,d\Lambda. \qquad (4.4)$$
If $X_T=x_0$ then, using that $\sigma_{X_T}=\sigma_{x_0}=1$ and the expansion in the proof of Lemma 5 (i), we deduce that
$$M_T^\sigma=N_T\,\frac{e^{\sum_{\{i,j\}\in E}W_{i,j}\Lambda_i(T)\Lambda_j(T)\sigma_i\sigma_j}}{F\big(\check D^{-1}(T)\big)}$$
and, therefore,
$$\sum_\sigma\psi\big(X,\sigma\Lambda(T)\big)M_T^\sigma=N_T\times\frac{1}{F\big(\check D^{-1}(T)\big)}\sum_\sigma\psi\big(X,\sigma\Lambda(T)\big)\,e^{\sum_{\{i,j\}\in E}W_{i,j}\Lambda_i(T)\Lambda_j(T)\sigma_i\sigma_j}=N_T\,\big\langle\psi\big(X,\sigma\Lambda(T)\big)\big\rangle(T).$$
This implies that
$$(4.4)=C\int_{\mathbb{R}_+^{V\setminus\{x_0\}}}G(\Lambda)\,E_{x_0}\Big[\big\langle\psi\big(X,\sigma\Lambda(T)\big)\big\rangle(T)\,N_T\,\mathbf{1}_{\{X_T=x_0\}}\Big]\,d\Lambda=C\int_{\mathbb{R}_+^{V\setminus\{x_0\}}}G(\Lambda)\,N_0\,E^{\check Z}_{\Lambda,x_0}\Big[\big\langle\psi\big(\check Z,\sigma\Lambda(T)\big)\big\rangle(T)\Big]\,d\Lambda,$$
where in the last equality we used Lemma 5 (ii)–(iii). Since
$$N_0=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\exp\Big(-\frac12\mathcal{E}(\sigma\Lambda,\sigma\Lambda)\Big),$$
it follows that $CN_0$ is the density of $\Lambda$, since by Theorem 1 we have $\Lambda\stackrel{law}{=}|\varphi+\sqrt{2u}|$, where $\varphi$ has the law of the Gaussian free field $P^{G,U}$. This exactly means that
$$E_{x_0}\otimes P^{G,U}\big(\psi(X,\varphi)\,\big|\,\Lambda\big)=E^{\check Z}_{\Lambda,x_0}\Big[\big\langle\psi\big(\check Z,\sigma\Lambda(T)\big)\big\rangle(T)\Big]. \qquad\square$$

Proof of Theorem 2 From Theorem 5, we know that, conditionally on $\Lambda$, $(\ell,\varphi)$ has the law of $\big(\ell^{\check Z}(T),\,\sigma\Lambda(T)\big)$, where $\ell^{\check Z}(T)$ is the local time of $\check Z$ under $P^{\check Z}_{\Lambda,x_0}$. If we change back to the process $\check Y$, we have, using (4.1),
$$\check L(S)=\sqrt{\Lambda^2-2\ell^{\check Z}(T)};$$
hence $\mathcal{L}\big((\ell,\varphi)\,\big|\,\Lambda\big)$ is the law of $\big(\tfrac12\big(\Lambda^2-\check L^2(S)\big),\,\sigma\check L(S)\big)$ for initial conditions $(\Lambda,x_0)$. $\square$
5 Inversion of the generalized first Ray-Knight theorem

We use the same notation as in the first section. The generalized first Ray-Knight theorem concerns the local time of the Markov jump process starting at a point $z_0\ne x_0$, stopped at its first hitting time of $x_0$. Denote by
$$H_{x_0}=\inf\{t\ge0,\ X_t=x_0\}$$
the first hitting time of $x_0$.

Theorem 6 For any $z_0\in V$ and any $s>0$, $\big(\ell_x(H_{x_0})+\frac12(\varphi_x+s)^2\big)_{x\in V}$ under $P_{z_0}\otimes P^{G,U}$ has the same "law" as $\big(\frac12(\varphi_x+s)^2\big)_{x\in V}$ under $\big(1+\frac{\varphi_{z_0}}{s}\big)P^{G,U}$.

Remark 2 This theorem is in general stated for $s=0$, but obviously we do not lose generality by restricting to $s>0$.
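Theorem 6 involves the signed measure $(1+\varphi_{z_0}/s)P^{G,U}$; since $\varphi_{z_0}$ is centered, this measure has total mass $1$. A quadrature check in the simplest case of a single free vertex, where $P^{G,U}$ reduces to a one-dimensional centered Gaussian (the variance and the value of $s$ are illustrative):

```python
from math import exp, pi, sqrt

# one free vertex: P^{G,U} is N(0, var); check that the signed measure
# (1 + phi/s) P^{G,U} has total mass 1, because phi is centered
var, s = 2.0 / 3.0, 0.5
h = 1e-3
mass = 0.0
for k in range(-12000, 12001):   # Riemann sum over [-12, 12]
    x = k * h
    dens = exp(-x * x / (2 * var)) / sqrt(2 * pi * var)
    mass += (1 + x / s) * dens * h
print(mass)  # close to 1
```

The odd part $(\varphi/s)\,dP^{G,U}$ integrates to zero over the symmetric grid, leaving the Gaussian mass $1$.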
This formally means that, for any test function $g$,
$$\int g\Big(\big(\ell_x(H_{x_0})+\tfrac12(\varphi_x+s)^2\big)_{x\in V}\Big)\,dP_{z_0}\otimes P^{G,U}=\int g\Big(\big(\tfrac12(\varphi_x+s)^2\big)_{x\in V}\Big)\Big(1+\frac{\varphi_{z_0}}{s}\Big)\,dP^{G,U}. \qquad (5.1)$$
Remark that the measure $(1+\frac{\varphi_{z_0}}{s})P^{G,U}$ has mass $1$ (since $\varphi_{z_0}$ is centered) but is not positive. In fact, since the integrand depends only on $|\varphi_x+s|$, $x\in V$, everything can be written in terms of a positive measure. Indeed, if $\sigma_x=\mathrm{sign}(\varphi_x+s)$, then, conditionally on $(|\varphi_x+s|)_{x\in V}$, $\sigma$ has the law of an Ising model with interaction $J_{i,j}=W_{i,j}|\varphi_i+s|\,|\varphi_j+s|$ and boundary condition $\sigma_{x_0}=+1$. This implies that the right-hand side of (5.1) can be written equivalently as
$$\int g\Big(\big(\tfrac12(\varphi_x+s)^2\big)_{x\in V}\Big)\,\frac{\langle\sigma_{z_0}\rangle}{s}\,|s+\varphi_{z_0}|\,dP^{G,U},$$
where $\langle\sigma_{z_0}\rangle$ denotes the expectation of $\sigma_{z_0}$ with respect to the Ising model described above. Since $\sigma_{x_0}=+1$, we have $\frac{\langle\sigma_{z_0}\rangle}{s}\ge0$, and $\frac{\langle\sigma_{z_0}\rangle}{s}|s+\varphi_{z_0}|\,dP^{G,U}$ is a probability measure.

We now give a counterpart of Theorem 2 for the generalized first Ray-Knight theorem. Consider the process $\check Y$ defined in Sect. 1, starting from a point $z_0$. Denote by $\check H_{x_0}$ the first hitting time of $x_0$ by the process $\check Y$. Obviously, Lemma 1 implies the following Lemma 6.

Lemma 6 Almost surely $\check H_{x_0}\le S$, where $S$ is defined in (1.2).

Theorem 7 With the notation of Theorem 6, let
$$\Lambda_z=\sqrt{2\ell_z(H_{x_0})+(\varphi_z+s)^2}.$$
Under $P_{z_0}\otimes P^{G,U}$, we have
$$\mathcal{L}\big(\varphi+s\,\big|\,\Lambda\big)\stackrel{law}{=}\big(\sigma\check L(\check H_{x_0})\big),$$
where $\check L(\check H_{x_0})$ is distributed under $P^{\check Y}_{\Lambda,z_0}$ and, conditionally on $\check L(\check H_{x_0})$, $\sigma$ is distributed according to the Ising model with interaction $J_{i,j}(\check H_{x_0})=W_{i,j}\check L_i(\check H_{x_0})\check L_j(\check H_{x_0})$ and boundary condition $\sigma_{x_0}=+1$.

Similarly as for the generalized second Ray-Knight theorem, Theorem 7 is a consequence of the following more precise result. Let us consider, as in Sect. 4, the time-changed version $\check Z$ of the process $\check Y$.

Theorem 8 With the notation of Theorem 7, under $P_{z_0}(\cdot\,|\,\Lambda)$, $\big((X_t)_{t\in[0,H_{x_0}]},\varphi+s\big)$ has the law of $\big((\check Z_t)_{t\in[0,\check H_{x_0}]},\sigma\Lambda(\check H_{x_0})\big)$, where $\check Z$ is distributed under $P^{\check Z}_{\Lambda,z_0}$, $\check H_{x_0}$ is
the first hitting time of $x_0$ by $\check Z$, and $\sigma$ is distributed according to the Ising model with interaction $W_{i,j}\Lambda_i(\check H_{x_0})\Lambda_j(\check H_{x_0})$ and boundary condition $\sigma_{x_0}=+1$.

Proof We only sketch the proof, since it is very similar to the proof of Theorem 5. Let $\psi\big((X_t)_{t\in[0,H_{x_0}]},\varphi+s\big)=\psi(X,\varphi+s)$ and $G(\Lambda)$ be positive test functions. We are interested in the expectation
$$E_{z_0}\otimes P^{G,U}\big(\psi(X,\varphi+s)G(\Lambda)\big)=E_{z_0}\Big[\int_{\mathbb{R}^{V\setminus\{x_0\}}}\psi(X,\varphi+s)\,G(\Lambda)\,C\,e^{-\frac12\mathcal{E}(\varphi,\varphi)}\,d\varphi\Big], \qquad (5.2)$$
where, as in the proof of Theorem 1, $C$ is the normalizing constant of the Gaussian free field. Recall that $\Lambda=\sqrt{(\varphi+s)^2+2\ell(H_{x_0})}$, set $\sigma=\mathrm{sign}(\varphi+s)$ and define $T=S\wedge H_{x_0}$. As in the proof of Theorem 1, we change to the variables $\Lambda$. An easy adaptation of the computation in the proof of Theorem 1 up to Eq. (2.6) yields that (5.2) is equal to
$$C\int_{\mathbb{R}_+^{V\setminus\{x_0\}}}G(\Lambda)\,E_{z_0}\Big[\sum_\sigma\psi\big(X,\sigma\Lambda(T)\big)\,M_T^\sigma\,\mathbf{1}_{\{X_T=x_0\}}\Big]\,d\Lambda. \qquad (5.3)$$
As in the proof of Theorem 5 we have that, if $X_T=x_0$, then
$$\sum_\sigma\psi\big(X,\sigma\Lambda(T)\big)M_T^\sigma=N_T\times\frac{1}{F\big(\check D^{-1}(T)\big)}\sum_\sigma\psi\big(X,\sigma\Lambda(T)\big)\,e^{\sum_{\{i,j\}\in E}W_{i,j}\Lambda_i(T)\Lambda_j(T)\sigma_i\sigma_j}=N_T\,\big\langle\psi\big(X,\sigma\Lambda(T)\big)\big\rangle(T).$$
This implies that
$$(5.3)=C\int_{\mathbb{R}_+^{V\setminus\{x_0\}}}G(\Lambda)\,E_{z_0}\Big[\big\langle\psi\big(X,\sigma\Lambda(T)\big)\big\rangle(T)\,N_T\,\mathbf{1}_{\{X_T=x_0\}}\Big]\,d\Lambda=C\int_{\mathbb{R}_+^{V\setminus\{x_0\}}}G(\Lambda)\,N_0\,E^{\check Z}_{\Lambda,z_0}\Big[\big\langle\psi\big(\check Z,\sigma\Lambda(T)\big)\big\rangle(T)\Big]\,d\Lambda,$$
using in the last equality an easy adaptation of Lemma 5 (ii)–(iii) for the time $T$. Now
$$N_0=\sum_{\sigma\in\{-1,+1\}^{V\setminus\{x_0\}}}\frac{\sigma_{z_0}\Lambda_{z_0}}{s}\exp\Big(-\frac12\mathcal{E}(\sigma\Lambda,\sigma\Lambda)\Big),$$
which implies that $CN_0$ is the density of $\Lambda$, since by Theorem 6 we have $\Lambda\stackrel{law}{=}|\varphi+s|$ under $\big(1+\frac{\varphi_{z_0}}{s}\big)P^{G,U}$. This exactly means that
$$E_{z_0}\otimes P^{G,U}\big(\psi(X,\varphi+s)\,\big|\,\Lambda\big)=E^{\check Z}_{\Lambda,z_0}\Big[\big\langle\psi\big(\check Z,\sigma\Lambda(T)\big)\big\rangle(T)\Big]. \qquad\square$$

Acknowledgments We are grateful to Alain-Sol Sznitman and Jay Rosen for several useful comments on a first version of the manuscript. We also thank Yuval Peres for interesting discussions.
References

1. Basdevant, A.-L., Singh, A.: Continuous time vertex reinforced jump processes on Galton–Watson trees. Preprint (2010). http://arxiv.org/abs/1005.3607
2. Collevecchio, A.: On the transience of processes defined on Galton–Watson trees. Ann. Probab. 34(3), 870–878 (2006)
3. Collevecchio, A.: Limit theorems for vertex-reinforced jump processes on regular trees. Electron. J. Probab. 14(66), 1936–1962 (2009)
4. Davis, B., Volkov, S.: Continuous time vertex-reinforced jump processes. Probab. Theory Relat. Fields 123(2), 281–300 (2002)
5. Davis, B., Volkov, S.: Vertex-reinforced jump processes on trees and finite graphs. Probab. Theory Relat. Fields 128(1), 42–62 (2004)
6. Ding, J., Lee, J.R., Peres, Y.: Cover times, blanket times, and majorizing measures. Ann. of Math. (2) 175(3), 1409–1471 (2012)
7. Disertori, M., Spencer, T., Zirnbauer, M.R.: Quasi-diffusion in a 3D supersymmetric hyperbolic sigma model. Commun. Math. Phys. 300(2), 435–486 (2010)
8. Eisenbaum, N.: Dynkin's isomorphism theorem and the Ray-Knight theorems. Probab. Theory Relat. Fields 99(2), 321–335 (1994)
9. Eisenbaum, N., Kaspi, H.: On permanental processes. Stoch. Process. Appl. 119(5), 1401–1415 (2009)
10. Eisenbaum, N., Kaspi, H., Marcus, M.B., Rosen, J., Shi, Z.: A Ray-Knight theorem for symmetric Markov processes. Ann. Probab. 28(4), 1781–1796 (2000)
11. Ethier, S.N., Kurtz, T.G.: Markov Processes: Characterization and Convergence. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, New York (1986)
12. Knight, F.B.: Random walks and a sojourn density process of Brownian motion. Trans. Am. Math. Soc. 109, 56–86 (1963)
13. Le Jan, Y.: Dynkin's isomorphism without symmetry. Preprint (2006). http://arxiv.org/pdf/math/0610571.pdf
14. Le Jan, Y.: Markov loops and renormalization. Ann. Probab. 38(3), 1280–1319 (2010)
15. Le Jan, Y.: Markov Paths, Loops and Fields. Lecture Notes in Mathematics, vol. 2026. Springer, Heidelberg (2011). Lectures from the 38th Probability Summer School held in Saint-Flour, 2008
16. Lupu, T.: From loop clusters and random interlacement to the free field. Preprint (2014). http://arxiv.org/abs/1402.0298
17. Marcus, M.B., Rosen, J.: Markov Processes, Gaussian Processes, and Local Times. Cambridge Studies in Advanced Mathematics, vol. 100. Cambridge University Press, Cambridge (2006)
18. Ray, D.: Sojourn times of diffusion processes. Ill. J. Math. 7, 615–630 (1963)
19. Sabot, C., Tarrès, P.: Edge-reinforced random walk, vertex-reinforced jump process and the supersymmetric hyperbolic sigma model. Preprint (2012). http://arxiv.org/abs/1111.3991
20. Sznitman, A.-S.: An isomorphism theorem for random interlacements. Electron. Commun. Probab. 17(9), 9 (2012)
21. Sznitman, A.-S.: Random interlacements and the Gaussian free field. Ann. Probab. 40(6), 2400–2438 (2012)
22. Sznitman, A.-S.: Topics in Occupation Times and Gaussian Free Fields. Zurich Lectures in Advanced Mathematics. Eur. Math. Soc. (EMS), Zürich (2012)
23. Werner, W.: Percolation et modèle d'Ising. Cours Spécialisés, vol. 16. Société Mathématique de France, Paris (2009)