
Matthew Varble

mvarble@math.ucsb.edu
Department of Mathematics
University of California, Santa Barbara

Overview

  1. Large deviations
  2. Affine processes
  3. Large deviation principle for affine processes
  4. Large deviation rate functions

1. Large deviations

Exponential objects...

... require exponential asymptotics

$$f(\epsilon) \approx r\epsilon \quad\leadsto\quad f(\epsilon) = r\epsilon + o(\epsilon) \quad\Longleftrightarrow\quad \lim_{\epsilon\rightarrow0} \frac{f(\epsilon)}{\epsilon} = r$$

$$f(\epsilon) \approx r\epsilon^k \quad\leadsto\quad f(\epsilon) = r\epsilon^k + o(\epsilon^k) \quad\Longleftrightarrow\quad \lim_{\epsilon\rightarrow0} \frac{f(\epsilon)}{\epsilon^k} = r$$

$$f(\epsilon) \approx \exp(-r/\epsilon) \quad\leadsto\quad f(\epsilon) = \exp\big(-r/\epsilon + o(1/\epsilon) \big) \quad\Longleftrightarrow\quad \lim_{\epsilon\rightarrow0} \epsilon \log f(\epsilon) = -r$$
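This last regime can be checked numerically. The sketch below is my own illustration (not from the talk), using a hypothetical $f(\epsilon) = \exp(-r/\epsilon + 1/\sqrt\epsilon)$, whose correction term is $o(1/\epsilon)$:

```python
import math

# Hypothetical test function: f(eps) = exp(-r/eps + 1/sqrt(eps)).
# The correction 1/sqrt(eps) is o(1/eps), so eps * log f(eps) -> -r.
r = 2.0

def f(eps):
    return math.exp(-r / eps + 1.0 / math.sqrt(eps))

for eps in [0.1, 0.05, 0.01, 0.005]:
    # eps * log f(eps) = -r + sqrt(eps), which tends to -r = -2
    print(eps, eps * math.log(f(eps)))
```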

Large deviation principle

A family of measures $(p^\epsilon)_{\epsilon>0}$ satisfies a large deviation principle on a space $\bbX$ if there exists a lower semicontinuous $I: \bbX \rightarrow [0,\infty]$ such that:

$$-\inf_{\xi \in \Gamma^\circ} I(\xi) \leq \liminf_{\epsilon\rightarrow0} \epsilon \log p^\epsilon(\Gamma) \leq \limsup_{\epsilon\rightarrow0} \epsilon \log p^\epsilon(\Gamma) \leq -\inf_{\xi \in \overline\Gamma} I(\xi)$$

$$\lim_{\delta\rightarrow0}\lim_{\epsilon\rightarrow0} \epsilon\log\Prb^\epsilon\big(Y^\epsilon \in B(\xi,\delta)\big) = -I(\xi)$$

$$\Prb^\epsilon\big(Y^\epsilon \in B(\xi,\delta)\big) \approx \exp\big(-I(\xi)/\epsilon \big)$$
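In the simplest (Cramér) setting, this heuristic can be checked exactly. The sketch below is my own (not from the talk), taking $\epsilon = 1/n$ and $Y^\epsilon$ the empirical mean of $n$ fair coin flips, for which the rate function at $a$ is the relative entropy $I(a) = a\log(2a) + (1-a)\log(2(1-a))$:

```python
import math

# eps = 1/n, Y^eps = S_n / n with S_n ~ Binomial(n, 1/2).
# Cramer: (1/n) log P(S_n/n >= a) -> -I(a),
#   I(a) = a log(2a) + (1-a) log(2(1-a)).
a = 0.8
I = a * math.log(2 * a) + (1 - a) * math.log(2 * (1 - a))

def log_tail(n):
    # exact log P(S_n >= a*n), computed from binomial coefficients
    tail = sum(math.comb(n, k) for k in range(math.ceil(a * n), n + 1))
    return math.log(tail) - n * math.log(2)

for n in [50, 200, 800]:
    print(n, log_tail(n) / n, -I)  # first column approaches -I(a)
```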

Measure-change argument

$$\Qrb^{\epsilon,\theta}(\rmd\omega) \defeq Z^{\epsilon,\theta}(\omega) \cdot \Prb^\epsilon(\rmd\omega), \qquad Z^{\epsilon,\theta} \defeq \exp\bigg( \frac1\epsilon \Big( \langle Y^\epsilon, \theta \rangle - \Lambda_\epsilon(Y^\epsilon, \theta) \Big) \bigg)$$
Using $\rmd\Prb^\epsilon = \big(Z^{\epsilon,\theta}\big)^{-1}\,\rmd\Qrb^{\epsilon,\theta}$:

$$\begin{aligned} \epsilon\log\Prb^\epsilon\big(Y^\epsilon \in U(\xi)\big) &= \epsilon\log\Exp_{\Qrb^{\epsilon,\theta}}\Big( \big(Z^{\epsilon,\theta}\big)^{-1}\, 1_{U(\xi)}(Y^\epsilon) \Big) \\ &= \epsilon\log\Exp_{\Qrb^{\epsilon,\theta}}\Big( \exp\Big( -\frac1\epsilon \big( \langle Y^\epsilon , \theta \rangle - \Lambda_\epsilon(Y^\epsilon, \theta) \big) \Big)\, 1_{U(\xi)}(Y^\epsilon) \Big) \\ &\approx \epsilon\log\Exp_{\Qrb^{\epsilon,\theta}}\Big( \exp\Big( -\frac1\epsilon \big( \langle \xi, \theta \rangle - \Lambda_\epsilon(\xi, \theta) \big) \Big)\, 1_{U(\xi)}(Y^\epsilon) \Big) \\ &= -\big( \langle \xi, \theta \rangle - \Lambda_\epsilon(\xi, \theta) \big) + \epsilon\log\Qrb^{\epsilon,\theta}\big( Y^\epsilon \in U(\xi) \big) \end{aligned}$$

need $\Lambda_\epsilon \rightarrow \Lambda_0$

need $\theta$ to produce sub-exponential deviations of $Y^\epsilon$ under $\Qrb^{\epsilon,\theta}$

what about sample-path deviations?

  • (1966) Schilder gives LDP of scaled Brownian motion using Girsanov's theorem.
  • (1970-1979) Freidlin and Wentzell give LDP of small-noise diffusions (uniformly Lipschitz coefficients) via measure changes
  • (1976) Donsker and Varadhan develop contraction principles
  • (1976) Mogulskii gives LDP of processes with independent increments
  • (1987) Dawson and Gärtner describe LDP for projective limits
  • (1994) Puhalskii develops weak-convergence analogies

Puhalskii's analogy

convergence in distribution $Y^\epsilon_{\underline t} \rightarrow Y_{\underline t}$ + tightness $\xrightarrow{\text{Prokhorov}}$ convergence in distribution $Y^\epsilon \rightarrow Y$

LDP of $(Y^\epsilon_{\underline t})_{\epsilon>0}$ with rate functions $I_{\underline t}$ + exponential tightness $\xrightarrow{\text{Puhalskii}}$ LDP of $(Y^\epsilon)_{\epsilon>0}$ with rate function $I = \sup_{\underline t} I_{\underline t}$

Dawson-Gärtner

$$Z^{\epsilon, \underline t, \underline u} = \exp\Big( \frac1\epsilon \big( \langle \underline u, Y^\epsilon_{\underline t} \rangle - \Psi_\epsilon(\underline t, \underline u) \big) \Big)$$

If everything is nice... these produce a large deviation principle, and the rate function is:

$$I(\xi) = \sup_{\underline t, \underline u} \Big( \big\langle \underline u, \xi(\underline t) \big\rangle - \Psi_0(\underline t, \underline u) \Big) = \sup_{\underline t} \Psi^*_0\big(\underline t, \xi(\underline t)\big)$$

Exponential martingale method

$$Z^{\epsilon, h} = \exp\bigg( \frac1\epsilon \Big( \int_0^\tau h(t)\,\rmd Y^\epsilon_t - \int_0^\tau \Lambda_\epsilon\big(h(t), Y^\epsilon_t\big)\,\rmd t \Big) \bigg)$$

If everything is nice... these produce a large deviation principle, and the rate function is:

$$I(\xi) = \int_0^\tau \sup_\theta \Big( \big\langle \theta , \dot\xi(t) \big\rangle - \Lambda\big(\theta, \xi(t)\big) \Big)\,\rmd t = \int_0^\tau \Lambda^*\big(\dot\xi(t), \xi(t)\big)\,\rmd t$$

for absolutely continuous $\xi$, and $I(\xi) = \infty$ otherwise.

Pros/cons

  • Dawson-Gärtner
    • more generality
    • abstract, less interpretable
  • Exponential martingales
    • more interpretable
    • technically challenging to generalize

2. Affine processes

Definition

Fix a finite-dimensional real inner-product space $\bbV$ and a convex state space $\bbX \subseteq \bbV$. A stochastically continuous, time-homogeneous Markov process $X$ on $\bbX$ is affine if we have the following.

affine transform formula
$$\begin{aligned} \Exp_{\Prb_x}\exp\langle u, X_t \rangle &= \exp\Psi(t, u, x) \\ \Psi(t, u, x) &= \psi_0(t, u) + \big\langle \psi(t, u), x \big\rangle \end{aligned} \qquad t \geq 0, ~ u \in \rmi \bbV, ~ x \in \bbX$$

Semimartingales

Cuchiero (2011). Any affine process $X$ is a semimartingale with the following $\chi$-characteristics $(B^\chi, A, \hat q^X)$.

$$B^\chi_t = \int_0^t \beta^\chi(X_s)\,\rmd s, \qquad A_t = \int_0^t \alpha(X_s)\,\rmd s, \qquad \hat q^X(\rmd s, \rmd v) = \mu(X_s, \rmd v)\,\rmd s$$

Differentiability of $B^\chi$, $A$, and $\hat q^X$ means $X$ has a generator $\calL$.

$$\calL f(x) = \Der f(x)\,\beta^\chi(x) + \frac12 \tr\big( \Hess f(x)\,\alpha(x) \big) + \int_\bbV \big( f(x + v) - f(x) - \Der f(x)\,\chi(v) \big)\,\mu(x, \rmd v)$$

Real-moments

Keller-Ressel and Mayerhofer (2015). An equivalence for $u \in \bbV$.

$$\Lambda(u, x) = \big\langle u, \beta^\chi(x) \big\rangle + \frac12 \big\langle u, \alpha(x) u \big\rangle + \int_\bbV \Big( e^{\langle u, v \rangle} - 1 - \big\langle u, \chi(v) \big\rangle \Big)\,\mu(x, \rmd v)$$

generalized Riccati system.
$$\forall x \in \bbX, \quad \left\{\begin{array}{ll} \dot\Psi(t, u, x) = \Lambda\big(\psi(t, u), x\big) & t \in [0,\tau] \\ \Psi(0, u, x) = \langle u, x \rangle & \end{array}\right.$$

affine transform formula.
$$\forall t \in [0,\tau],~x \in \bbX, \quad \Exp_{\Prb_x}\exp\langle u, X_t \rangle = \exp\Psi(t, u, x) < \infty$$
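As a minimal illustration of the Riccati/transform pair (mine, not from the talk), take standard Brownian motion: $\beta = 0$, $\alpha = 1$, $\mu = 0$, so $\Lambda(u,x) = \tfrac12 u^2$ is state-independent and $\psi(t,u) = u$. The system integrates to $\Psi(t,u,x) = \langle u, x \rangle + \tfrac12 u^2 t$, recovering $\Exp\exp(u W_t) = \exp(u^2 t/2)$:

```python
import math

# Generalized Riccati system for standard Brownian motion:
# Lambda(u, x) = u^2 / 2 and psi(t, u) = u.  Forward-Euler integration of
#   d/dt Psi = Lambda(psi(t, u), x),  Psi(0, u, x) = u*x
# recovers the affine transform formula E exp(u W_t) = exp(u*x + u^2 t / 2).
u, x, tau, n = 0.7, 0.0, 1.0, 100_000
dt = tau / n
Psi = u * x
for _ in range(n):
    Psi += dt * 0.5 * u * u   # Lambda(psi(t, u), x) with psi(t, u) = u
print(math.exp(Psi), math.exp(u * x + 0.5 * u * u * tau))
```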

3. Large deviation principle for affine processes

Previous results

  • Kang and Kang (2014). Large deviations for families $(Y^\epsilon)_{\epsilon>0}$ of affine diffusions.

    $$Y^\epsilon_t = y + \int_0^t \beta(Y^\epsilon_s)\,\rmd s + \int_0^t \sqrt\epsilon\,\sigma(Y^\epsilon_s)\,\rmd W_s$$

    i.e. $\beta$ and $\alpha = \sigma\sigma^*$ as general as we want, but $\mu(\cdot, \rmd v) = 0$.

    • Established the LDP using both Dawson-Gärtner and exponential-martingale methods.
    • Unable to derive a nice rate function from Dawson-Gärtner.
    • No indication of how to regularize jumps.

Previous results

  • Feng and Kurtz (1996), Dupuis and Ellis (1997), Wentzell (2000). Large deviations for families $(Y^\epsilon)_{\epsilon>0}$ of Markov processes:

    $$\calL f(x) = \Der f(x)\,\beta(x) + \frac{\epsilon}{2} \tr\big( \Hess f(x)\,\alpha(x) \big) + \frac1\epsilon\int_\bbV \big( f(x + \epsilon v) - f(x) - \Der f(x)\,\epsilon v \big)\,\mu(x, \rmd v)$$
    • Bounded $\beta$, bounded $\alpha$, compactly supported and bounded $\mu(\cdot, \rmd v)$.

$\beta_\epsilon = \beta$, $\alpha_\epsilon = \epsilon\alpha$, and $\mu_\epsilon(x, \rmd v) = \frac1\epsilon \tau^\epsilon_\#\mu(x, \rmd v)$?

What we do

  1. Intuitively formulate asymptotics
  2. Establish an equivalence between Dawson-Gärtner and exponential martingale methods
  3. Establish a large deviation principle for our asymptotic family

Addressing big jumps

Proposition. If $0 \in \bbV$ has a neighborhood of points $u$ with

$$\int_{|v| > 1} e^{\langle u, v \rangle}\,\mu(x, \rmd v) < \infty, \quad x \in \bbX$$

then XXX is a special semimartingale.

$$\calL f(x) = \Der f(x)\,\beta(x) + \frac12 \tr\big( \Hess f(x)\,\alpha(x) \big) + \int_\bbV \big( f(x+v) - f(x) - \Der f(x)\,v \big)\,\mu(x, \rmd v)$$
$$\beta(x) = \beta^\chi(x) + \int_\bbV \big( v - \chi(v) \big)\,\mu(x, \rmd v)$$

equivalent to the Riccati/affine-transform $u$'s!

Asymptotic family

Process $X^\epsilon$ with affine drift $\beta_\epsilon$, diffusion $\alpha_\epsilon$, and jump kernel $\mu_\epsilon$.

$$\beta_\epsilon(x) = \frac1\epsilon \beta(\epsilon x), \qquad \alpha_\epsilon(x) = \frac1\epsilon \alpha(\epsilon x), \qquad \mu_\epsilon(x, \rmd v) = \frac1\epsilon \mu(\epsilon x, \rmd v)$$

Note. This is the same as selecting the familiar expressions for $\epsilon X^\epsilon$:

$$\beta(x), \qquad \epsilon\,\alpha(x), \qquad \frac1\epsilon\, \tau^\epsilon_\#\mu(x, \rmd v)$$

Stochastic differential equations

Proposition. Each $\epsilon X^\epsilon$ is weakly driven by a Brownian-Poisson pair $(W, p)$.

fluid-limit
$$W^\epsilon_t = W_{t/\epsilon}, \qquad p^\epsilon([0,t] \times \Gamma) = p([0,t/\epsilon] \times \Gamma)$$
$$\begin{aligned} \epsilon X^\epsilon_t &= x + \int_0^t \beta(\epsilon X^\epsilon_s)\,\rmd s + \int_0^t \epsilon\sigma(\epsilon X^\epsilon_s)\,\rmd W^\epsilon_s \\ &\qquad + \int_{[0,t] \times \bbV} \epsilon c\big(\epsilon X^\epsilon_{s-}, v\big)\,\tilde p^\epsilon(\rmd s, \rmd v) \end{aligned}$$
small-noise
$$\begin{aligned} \epsilon X^\epsilon_t &= x + \int_0^t \beta(\epsilon X^\epsilon_s)\,\rmd s + \int_0^t \sqrt{\epsilon}\,\sigma(\epsilon X^\epsilon_s)\,\rmd W_s \\ &\qquad + \int_{[0,t] \times \bbV} \epsilon c\big(\epsilon X^\epsilon_{s-}, \sqrt[d]{\epsilon} \cdot v\big)\,\tilde p(\rmd s, \rmd v) \end{aligned}$$

What we do

  1. Intuitively formulate asymptotics
  2. Establish an equivalence between Dawson-Gärtner and exponential martingale methods
  3. Establish a large deviation principle for our asymptotic family

Dawson-Gärtner, for us

$$\underline Z^{\epsilon, \underline t, \underline u} = \exp\Big( \langle \underline u, X^\epsilon_{\underline t} \rangle - \Psi_\epsilon(\underline t, \underline u, x) \Big), \qquad \Psi_\epsilon(\underline t, \underline u, x) = \log\Exp_{\Prb_x}\exp\langle \underline u, X^\epsilon_{\underline t} \rangle$$
$$\Psi_\epsilon(\underline t, \underline u, x) = \frac1\epsilon\Psi(\underline t, \underline u, \epsilon x), \qquad \varphi(t, \theta, x) = \log\Exp_{\Prb_x}\big( \exp\langle \theta, X_{\tau+t} - X_\tau \rangle \,\big|\, X_\tau = x \big)$$
$$\big\langle \underline u, \underline x \big\rangle - \Psi(\underline t, \underline u, \underline x) = \sum_{k=1}^{|\underline t|} \Big( \langle \theta_k, x_k - x_{k-1} \rangle - \varphi(\Delta t_k, \theta_k, x_{k-1}) \Big)$$
$$I(\xi) = \sup_{\underline t} \sum_{k=1}^{|\underline t|} \sup_{\theta_k} \Big( \big\langle \theta_k, \xi(t_k) - \xi(t_{k-1}) \big\rangle - \varphi\big(\Delta t_k, \theta_k, \xi(t_{k-1}) \big) \Big)$$

Calibrating twists.
$$\text{solve } \theta_k: \quad \xi(t_k) - \xi(t_{k-1}) = \nabla_\theta \varphi\big(\Delta t_k, \theta, \xi(t_{k-1}) \big) \Big|_{\theta = \theta_k}$$

Exponential martingales, for us

$$\Lambda(u, x) = \langle u, \beta(x) \rangle + \frac12 \langle u, \alpha(x) u \rangle + \int_\bbV \Big( e^{\langle u, v \rangle} - 1 - \big\langle u, v \big\rangle \Big)\,\mu(x, \rmd v)$$
$$\Lambda^\epsilon(u, x) = \frac1\epsilon\Lambda(u, \epsilon x)$$
$$\epsilon X^\epsilon_t = x + \int_0^t \Big( \beta\big(\epsilon X^\epsilon_s\big) + \nabla_\theta\Lambda\big(\theta, \epsilon X^\epsilon_s\big) \Big|_{\theta = h(s)} \Big)\,\rmd s + M_t$$
$$I(\xi) = \int_0^\tau \sup_\theta \Big( \big\langle \theta, \dot\xi(t) \big\rangle - \Lambda\big( \theta, \xi(t) \big) \Big)\,\rmd t = \int_0^\tau \Lambda^*\big(\dot\xi(t), \xi(t)\big)\,\rmd t$$

Calibrating control.
$$\text{solve } h(t): \quad \dot\xi(t) = \nabla_\theta \Lambda\big(\theta, \xi(t) \big) \Big|_{\theta = h(t)}$$
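Concretely (an illustration of mine, not from the talk): for a Poisson-type cumulant $\Lambda(\theta) = e^\theta - 1$, the first-order condition $\dot\xi(t) = \nabla_\theta\Lambda(\theta)\,\big|_{\theta = h(t)} = e^{h(t)}$ has the closed form $h(t) = \log\dot\xi(t)$, and a Newton iteration recovers it:

```python
import math

# Calibrate the control for Lambda(theta) = e^theta - 1:
# solve xdot = Lambda'(theta) = e^theta via Newton's method on
# g(theta) = e^theta - xdot.  Closed form for comparison: theta = log(xdot).
def calibrate(xdot, theta=0.0, tol=1e-12):
    for _ in range(100):
        step = (math.exp(theta) - xdot) / math.exp(theta)  # g / g'
        theta -= step
        if abs(step) < tol:
            break
    return theta

print(calibrate(3.0), math.log(3.0))
```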

Equivalence

Theorem. For each Dawson-Gärtner measure change $\Prb^{\epsilon,\underline t, \underline\theta}$ there exists an exponential-martingale measure change $\Prb^{\epsilon,h}$ such that

$$\Prb^{\epsilon, \underline t, \underline\theta} = \Prb^{\epsilon, h}$$
$$\sum_{k=1}^{|\underline t|} \Big( \big\langle \theta_k, X_{t_k} - X_{t_{k-1}} \big\rangle - \varphi\big(\Delta t_k, \theta_k, X_{t_{k-1}} \big) \Big) = \int_0^\tau h(t)\,\rmd X_t - \int_0^\tau \Lambda\big(h(t), X_t\big)\,\rmd t$$
$$h(t, \underline t, \underline\theta) = \sum_{k=1}^{|\underline t|} \psi(t_k - t, \theta_k)\, 1_{[t_{k-1},t_k)}(t)$$

What we do

  1. Intuitively formulate asymptotics
  2. Establish an equivalence between Dawson-Gärtner and exponential martingale methods
  3. Establish a large deviation principle for our asymptotic family

Simple summary

  1. Establish the LDP with Dawson-Gärtner.
  2. Tighten to the Skorokhod topology by showing exponential tightness.
  3. Use the equivalence theorem to identify the rate function.
$$h(t, \underline t, \underline\theta) \xrightarrow{\text{refine } \underline t,\ \text{choose } \underline\theta} h$$

Main result

Theorem. For each $x \in \bbX^\circ$, we have a large deviation principle for $(\Prb^\epsilon_x)_{\epsilon>0}$ with rate function $I_x: \bbD([0,\tau], \bbX) \rightarrow [0,\infty]$.

$$I_x(\xi) = \left\{\begin{array}{ll} \displaystyle \int_0^\tau \Lambda^*\big(\dot\xi(t), \xi(t)\big)\,\rmd t, & \xi(0) = x, ~ \xi \text{ absolutely continuous,} \\[1em] \infty, & \text{otherwise} \end{array}\right.$$
$$\Prb^\epsilon_x\Big( \epsilon X^\epsilon \in B(\xi, \delta) \Big) \approx \exp\big(-I_x(\xi)/\epsilon\big)$$

4. Representation of rate function

Example: Brownian motion

Scaled Brownian motions $(\sqrt\epsilon\, W)_{\epsilon>0}$.

Our principle: $\beta(x) = 0$, $\alpha(x) = \operatorname{id}_\bbV$, $\mu(x, \rmd v) = 0$.

$$\Lambda^*(\dot x) = \sup_{u \in \bbV} \Big( \langle u, \dot x \rangle - \frac12 \langle u, u \rangle \Big) = \frac12 |\dot x|^2$$
$$I(\xi) = \int_0^\tau \frac12 \big|\dot\xi(t)\big|^2\,\rmd t$$
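A grid search (my own one-dimensional check, not part of the talk) confirms the Legendre transform:

```python
import numpy as np

# sup_u ( u * xdot - u^2 / 2 ) over a grid of u should match xdot^2 / 2.
u = np.linspace(-10.0, 10.0, 200_001)

def legendre(xdot):
    return float(np.max(u * xdot - 0.5 * u**2))

for xdot in [-2.0, 0.5, 3.0]:
    print(xdot, legendre(xdot), 0.5 * xdot**2)
```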

Example: Poisson process

For a Poisson process $N$, consider $(\epsilon N_{\cdot/\epsilon})_{\epsilon>0}$.

Our principle: $\beta(x) = 1$, $\alpha(x) = 0$, $\mu(x, \rmd v) = \delta_1(\rmd v)$.

$$\Lambda^*(\dot x) = \sup_{u \in \bbV} \Big( \langle u, \dot x \rangle - (e^u - 1) \Big) = \dot x \log \dot x - \dot x + 1$$
$$I(\xi) = \int_0^\tau \Big( \dot\xi(t) \log\big(\dot\xi(t)\big) - \dot\xi(t) + 1 \Big)\,\rmd t$$
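The same kind of grid search (again my own check, not from the talk) confirms the Poisson Legendre transform, whose maximizer is $u = \log\dot x$:

```python
import numpy as np

# sup_u ( u * xdot - (e^u - 1) ) over a grid of u should match
# xdot * log(xdot) - xdot + 1 for xdot > 0.
u = np.linspace(-10.0, 10.0, 200_001)

def legendre(xdot):
    return float(np.max(u * xdot - (np.exp(u) - 1.0)))

for xdot in [0.5, 1.0, 3.0]:
    print(xdot, legendre(xdot), xdot * np.log(xdot) - xdot + 1.0)
```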

Example: Diffusion

$$\epsilon X^\epsilon_t = x + \int_0^t \beta(\epsilon X^\epsilon_s)\,\rmd s + \int_0^t \sqrt\epsilon\,\sigma(\epsilon X^\epsilon_s)\,\rmd W_s$$

Our principle: $\beta(x)$, $\alpha(x) = \sigma\sigma^*(x)$, $\mu(x, \rmd v) = 0$.

$$\begin{aligned} \Lambda^*(\dot x, x) &= \sup_{u\in\bbV} \Big( \langle u, \dot x \rangle - \langle u, \beta(x) \rangle - \frac12\langle u, \alpha(x)u \rangle \Big) \\ &= \frac12 \Big\langle \dot x - \beta(x), \alpha(x)^\dagger \big(\dot x - \beta(x) \big) \Big\rangle \end{aligned}$$
$$I(\xi) = \int_0^\tau \frac12\Big\langle \dot\xi(t) - \beta\big(\xi(t)\big), \alpha\big(\xi(t)\big)^\dagger \Big( \dot\xi(t) - \beta\big(\xi(t)\big) \Big) \Big\rangle\,\rmd t$$
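The pseudoinverse formula can be sanity-checked in a hypothetical two-dimensional example (mine, not from the talk): the closed form dominates the objective $\langle u, \dot x - \beta\rangle - \tfrac12\langle u, \alpha u\rangle$ for every $u$, with equality at $u^* = \alpha^\dagger(\dot x - \beta)$:

```python
import numpy as np

# Hypothetical 2-d data: alpha = sigma sigma^* and displacement d = xdot - beta(x).
sigma = np.array([[1.0, 0.0], [0.5, 2.0]])
alpha = sigma @ sigma.T
d = np.array([1.5, -0.7])
alpha_dagger = np.linalg.pinv(alpha)          # Moore-Penrose pseudoinverse
closed = 0.5 * d @ alpha_dagger @ d           # (1/2) <d, alpha^+ d>

def objective(u):
    return u @ d - 0.5 * u @ alpha @ u

u_star = alpha_dagger @ d                     # the maximizer
rng = np.random.default_rng(0)
print(closed, objective(u_star))
print(all(objective(rng.normal(size=2)) <= closed + 1e-9 for _ in range(1000)))
```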

Example: Compound-Poisson

$$X^\epsilon_t = \sum_{k=1}^{N_{t/\epsilon}} V_k, \qquad V_k \sim \kappa$$

Regime for $(\epsilon X^\epsilon, \epsilon N_{\cdot/\epsilon})_{\epsilon>0}$.

Our principle: $\beta(\hat x) = (\overline\kappa, 1)$, $\alpha(\hat x) = 0$, $\mu(\hat x, \rmd\hat v) = \kappa(\rmd v_1)\,\delta_1(\rmd v_2)$.

$$\begin{aligned} \Lambda^*(\dot x, x) &= \sup_{u\in\bbV} \Big( \langle u_1, \dot x_1 \rangle + u_2\dot x_2 - \exp\big(\Lambda_\kappa(u_1) + u_2\big) + 1 \Big) \\ &= \dot x_2 \log \dot x_2 - \dot x_2 + 1 + \dot x_2\, \Lambda_\kappa^*\Big(\frac{\dot x_1}{\dot x_2}\Big) \end{aligned}$$
$$I(\xi, \eta) = \int_0^\tau \Big( \dot\eta(t) \log \dot\eta(t) - \dot\eta(t) + 1 \Big)\,\rmd t + \int_0^\tau \dot\eta(t)\,\Lambda_\kappa^*\Big(\frac{\dot\xi(t)}{\dot\eta(t)} \Big)\,\rmd t$$
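With a hypothetical jump law $\kappa = \mathrm{Exp}(1)$ (so $\Lambda_\kappa(u_1) = -\log(1-u_1)$ for $u_1 < 1$ and $\Lambda^*_\kappa(y) = y - 1 - \log y$), a two-dimensional grid search (my own check, not from the talk) matches the closed form:

```python
import numpy as np

# Objective: <u1, xd1> + u2*xd2 - exp(Lambda_kappa(u1) + u2) + 1
# with Lambda_kappa(u1) = -log(1 - u1) for kappa = Exp(1).
u1 = np.linspace(-6.0, 0.995, 1501)[:, None]
u2 = np.linspace(-6.0, 4.0, 1501)[None, :]
xd1, xd2 = 2.0, 1.0

vals = u1 * xd1 + u2 * xd2 - np.exp(-np.log(1.0 - u1) + u2) + 1.0
numeric = float(vals.max())

# xd2 log xd2 - xd2 + 1 + xd2 * Lambda_kappa^*(xd1/xd2),
# with Lambda_kappa^*(y) = y - 1 - log(y)
y = xd1 / xd2
closed = xd2 * np.log(xd2) - xd2 + 1.0 + xd2 * (y - 1.0 - np.log(y))
print(numeric, closed)
```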

General closed form

Theorem. Suppose $\mu(\cdot, \bbV)$ is finite, so we may factor it into an intensity and a jump distribution.

$$\mu(x, \rmd v) = \lambda(x)\,\kappa(x, \rmd v)$$

Otherwise, as general as we want: $\beta(x)$, $\alpha(x)$, $\mu(x, \rmd v) = \lambda(x)\kappa(x, \rmd v)$.

We can define a coupling $\hat X = (X, X^\rmc, X^\rmd, N^X)$ and an asymptotic family $\hat X^\epsilon$ for which the rate function has a semi-closed form.

...said closed form

$$\begin{aligned} I(\xi, \omega, \gamma, \eta) &= \int_0^\tau \bigg( \dot\eta(t) \log\Big(\frac{\dot\eta(t)}{\lambda(\xi(t))}\Big) - \dot\eta(t) + \lambda\big(\xi(t)\big) \bigg)\,\rmd t \\ &\quad + \int_0^\tau \frac12 \Big\langle \dot\omega(t), \alpha\big(\xi(t)\big)\,\dot\omega(t) \Big\rangle\,\rmd t \\ &\quad + \int_0^\tau \dot\eta(t)\, \Lambda^*_{\kappa(\xi(t),\cdot)}\bigg(\frac{\dot\xi(t) - \beta\big(\xi(t)\big) - \dot\gamma(t) + \lambda\big(\xi(t)\big)\,\overline{\nu(\xi(t), \cdot)}}{\dot\eta(t)}\bigg)\,\rmd t \end{aligned}$$
where $\dot\xi(t) = \beta\big(\xi(t)\big) + \dot\omega(t) + \dot\gamma(t)$.
thank you.