# The optimal control of a Lévy process

Table of Contents

Acknowledgments
Abstract

Chapter 1. Introduction and Preliminaries
    1.1 Lévy Processes
        1.1.1 Definition and Fundamental Results
        1.1.2 The Infinitesimal Generator
        1.1.3 Basic Examples of Lévy Processes
        1.1.4 Stable Processes
    1.2 Existing Work
        1.2.1 Stochastic Optimal Control of Lévy Processes
        1.2.2 The Lévy Density, Probability Density, and Score Function
        1.2.3 Numerical Methods

Chapter 2. Control Problem and Main Result
    2.1 Setting
        2.1.1 Problem statement and dynamic programming
        2.1.2 Regularity assumptions
    2.2 Main Result
    2.3 A proof of Theorem 2.2.1
        2.3.1 The Functional Ψ
        2.3.2 A fixed-point argument
        2.3.3 A Feynman-Kac argument
        2.3.4 The function v_R^*

Chapter 3. Integrability of the Score Function
    3.1 Overview
        3.1.1 Connection with Information Theory
    3.2 Assumption 3.1.1
    3.3 Assumption 3.1.2, Basic Examples
        3.3.1 Brownian Motion
        3.3.2 Cauchy Process
        3.3.3 Stable Processes
    3.4 Assumption 3.1.2, Mixture of a Brownian Motion and α-Stable Process
        3.4.1 Supporting lemmas
        3.4.2 Proof of the theorem

Chapter 4. Numerical Implementation
    4.1 Overview
        4.1.1 Experimental Mathematics
        4.1.2 Object-Oriented Paradigm
        4.1.3 Finite Difference Method
        4.1.4 Reward functions and specific processes
        4.1.5 Testing the Code
    4.2 Three Implementations
        4.2.1 Iteration
        4.2.2 Dynamic Programming
        4.2.3 Iteration with Fourier Transform
    4.3 Comparison of Implementations
    4.4 Figures

Appendix

Appendix 1. Source Code Documentation
    1.1 DiscreteOperator Class Reference
        1.1.1 Detailed Description
        1.1.2 Constructor & Destructor Documentation
        1.1.3 Member Function Documentation
    1.2 Measure Class Reference
        1.2.1 Detailed Description
        1.2.2 Member Function Documentation
    1.3 MySpaceCells Class Reference
        1.3.1 Detailed Description
    1.4 Operator Class Reference
        1.4.1 Detailed Description
        1.4.2 Constructor & Destructor Documentation
    1.5 ProblemInfo Class Reference
        1.5.1 Detailed Description
    1.6 Results Class Reference
        1.6.1 Detailed Description
        1.6.2 Member Function Documentation

Bibliography
Index
Vita

Chapter 1

Introduction and Preliminaries

In this thesis, we consider a class of optimal stochastic control problems involving Lévy processes.

Generally speaking, one would like to obtain an optimal stochastic control result that is as strong as possible and, at the same time, pertains to as broad a class of processes as possible. Of course, this is a truism, but the point is that there is a balance to be struck. When controlling more general processes (Feller or Markov processes), less can be said regarding the value functions of control problems, and even less without a notion of weak solutions. Certainly more can be said when working with a Brownian motion, owing to the abundance of analytical techniques available. The balance here settles on Lévy processes, which are a substantial generalization of Brownian motion but still possess many tractable analytic qualities.

The purpose of this introductory chapter is to define and characterize Lévy processes and briefly survey previous work on their optimal stochastic control.

In Chapter 2 we present the main result: for a broad class of Lévy processes, the value function associated to the control of the drift of the process is smooth, and optimal controls exist in feedback form. In Chapter 3 we establish that selected families of Lévy processes satisfy the assumptions of the main result. Finally, in Chapter 4, we present the results of three numerical implementations of the control problem.

1.1 Lévy Processes

There are many excellent references on Lévy processes, but so that this dissertation is self-contained we present definitions and basic results here. The presentation follows the definitions and terminology of Sato [34].

For convenience, the definitions and theorems are given only in the generality required by our control problem. In particular, we work with processes taking values in R (as opposed to R^d), and we restrict our definitions and theorem statements accordingly. Also, many results about Lévy processes remain true of their generalizations, for example Lévy processes in law or additive processes (which are defined by relaxing certain of the properties in Definition 1.1.1).

1.1.1 Definition and Fundamental Results

Definition 1.1.1 ([34] p. 3). A stochastic process {X_t : t ≥ 0} on R is a Lévy process if it has the following properties:

1. Independent increments: for any choice of n ≥ 1 and 0 ≤ t_0 < t_1 < ... < t_n, the random variables X_{t_0}, X_{t_1} − X_{t_0}, X_{t_2} − X_{t_1}, ..., X_{t_n} − X_{t_{n−1}} are independent.

2. Initial normalization: X_0 = 0 almost surely.

3. Stationary increments: the distribution of X_{s+t} − X_t does not depend on t.

4. Stochastic continuity: for every t ≥ 0 and ε > 0, lim_{s→t} P[|X_s − X_t| > ε] = 0.

The Lévy-Khintchine formula, stated below, is the first of the two fundamental theorems that characterize the distributions and paths of Lévy processes.

Definition 1.1.2. The characteristic function \hat{µ} : R → C of a probability measure µ on R is given by

    \hat{µ}(u) = ∫_R e^{iux} µ(dx),  u ∈ R.

Theorem 1.1.1 (Lévy-Khintchine formula, [34] pp. 37-8).

1. Let (X_t)_{t≥0} be a Lévy process, and let µ be the distribution of X_1. Then the characteristic function \hat{µ} of µ is given by \hat{µ}(u) = exp ψ(u), where

    ψ(u) = −(1/2) σ u^2 + iγu + ∫_R (e^{iux} − 1 − iux 1_{|x|≤1}) ν(dx),  u ∈ R,   (1.1.1)

where σ, γ ∈ R, σ ≥ 0, and ν is a measure on R satisfying ν({0}) = 0 and ∫_R (|x|^2 ∧ 1) ν(dx) < ∞.

2. The representation of \hat{µ} by σ, ν, and γ is unique.

3. Conversely, given a triple (σ, ν, γ) satisfying the above, there exists a Lévy process whose distribution at time 1 has characteristic function exp(ψ(u)), where ψ is defined in (1.1.1).

The measure ν is perhaps the most elusive aspect of the above characterization. We give intuition for it below, after presenting the path-wise structure of Lévy processes and defining compound Poisson processes. The Lévy-Khintchine formula characterizes the analytic structure of Lévy processes, giving the generating triplet (σ, ν, γ). Even though Theorem 1.1.1 refers only to the distribution of a Lévy process at time 1, the temporal homogeneity property, as presented in Corollary 1.1.2 below, allows us to characterize the distribution of X_t for all t ≥ 0.

Corollary 1.1.2 ([34] p. 38). If µ has generating triplet (σ, ν, γ), and µ_t is the distribution of X_t, then µ_t has generating triplet (tσ, tν, tγ).

This corollary gives us the useful relation that the characteristic function of a Lévy process at time t > 0 can be written \hat{µ}_t(z) = exp(tψ(z)), where ψ is given in (1.1.1).

Definition 1.1.3. A probability measure µ on R is infinitely divisible if, for any positive integer n, there exists a probability measure µ_n on R such that µ is the n-fold convolution of µ_n.

For a Lévy process X_t on R, the distribution at every time t is infinitely divisible. Indeed,

    X_t = X_{t/n} + (X_{2t/n} − X_{t/n}) + ··· + (X_t − X_{(n−1)t/n}).

Conversely, given an infinitely divisible distribution µ, one can construct a Lévy process X with µ_{X_1} = µ from the time-t characteristic function exp(tψ(u)), where ψ is the exponent (1.1.1) associated to the Lévy-Khintchine representation of \hat{µ}.

The second fundamental result, the Lévy-Itô decomposition, expresses a Lévy process as a sum of independent Lévy processes, each of which corresponds to a part of the generating triplet. To describe the sample paths we first define Poisson random measures.

Definition 1.1.4 ([34] p. 119). Let (Θ, B, ρ) be a σ-finite measure space. A family of random variables {N(B) : B ∈ B} taking values in {0, 1, 2, ...} ∪ {∞} is called a Poisson random measure on Θ with intensity measure ρ if the following hold:

1. for every B ∈ B, N(B) has Poisson distribution with mean ρ(B);

2. if B_1, B_2, ..., B_n are disjoint, then N(B_1), N(B_2), ..., N(B_n) are independent;

3. for every ω, N(·, ω) is a measure on Θ.

For the remainder of this section, let {X_t : t ≥ 0} be a Lévy process on R, defined on a probability space (Ω, F, P), with generating triplet (σ, ν, γ). Define H = (0, ∞) × (R\{0}) with B(H) its Borel σ-algebra, and let ν̃ = λ ⊗ ν be the product of the Lebesgue measure λ on (0, ∞) and the restriction of ν to R\{0}; in particular, ν̃((0, t) × B) = tν(B) for B ∈ B(R).

According to [34] p. 3, a Lévy process has a càdlàg version, so there exists Ω_0 ∈ F with P(Ω_0) = 1 on which X_t is càdlàg. Without loss of generality, we assume that X_t is in fact càdlàg on all of Ω. Additionally, we introduce notation for the left limit of the path of a process at time t > 0:

    X_{t−}(ω) = lim_{s↑t} X_s(ω).

Theorem 1.1.3 (Lévy-Itô decomposition, [34] p. 120, [11] pp. 79-80). For B ∈ B(H) define

    J(B, ω) = card{s : (s, X_s(ω) − X_{s−}(ω)) ∈ B},

where card(A) denotes the cardinality of the set A. Then the following hold.

1. J is a Poisson random measure on H with intensity ν̃.

2. There exists Ω_1 ∈ F with P[Ω_1] = 1 such that, for any ω ∈ Ω_1,

    X^1_t(ω) = lim_{ε↓0} ∫_{ε≤|x|<1, s∈(0,t]} {x J(ds, dx, ω) − x ν̃(ds, dx)} + ∫_{|x|≥1, s∈(0,t]} x J(ds, dx, ω)   (1.1.2)

is defined for all t ∈ [0, ∞), and the convergence is uniform in t on any bounded interval. The process {X^1_t} is a Lévy process on R with generating triplet (0, ν, 0).

3. Define X^2_t(ω) = X_t(ω) − X^1_t(ω) for ω ∈ Ω_1, t ≥ 0. There is Ω_2 ∈ F with P[Ω_2] = 1 such that, for any ω ∈ Ω_2, X^2_t(ω) is continuous in t. The process X^2_t is a Lévy process on R with generating triplet (σ, 0, γ).

4. X^1_t and X^2_t are independent.

The Lévy-Itô decomposition allows us to view the sample paths of a Lévy process as the sum of three independent Lévy processes, each of which is characterized by a part of the generating triplet. The continuous part, X^2_t above, is a scaled Brownian motion, with triplet (σ, 0, 0), plus a linear drift term, with triplet (0, 0, γ). The term X^1_t is a pure-jump process, with triplet (0, ν, 0), that can nearly be expressed as the sum of all jumps up to a given time t. More precisely, since there are generally infinitely many jumps of size approaching 0, the sum of small jumps has to be compensated.
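When the Lévy measure is finite, no compensation is necessary and the pure-jump part can be simulated jump by jump; this is the compound Poisson case of Example 1.1.10 below. The following sketch is purely illustrative (it is not part of the numerical code of Chapter 4, and the parameters λ = 3 and Gaussian jump sizes are arbitrary choices); it also checks the elementary identity E[Y_1] = λ E[X_1].

```python
import random

# Simulate a compound Poisson path Y_t = sum_{n <= N_t} X_n on [0, T]:
# exponential inter-arrival times of mean 1/lam, i.i.d. jump sizes X_n.
def compound_poisson_path(lam, jump_dist, T, rng):
    """Return (jump_times, path_values), both starting at (0, 0)."""
    t, y = 0.0, 0.0
    times, values = [0.0], [0.0]
    while True:
        t += rng.expovariate(lam)   # next arrival; expovariate has mean 1/lam
        if t > T:
            break
        y += jump_dist(rng)
        times.append(t)
        values.append(y)
    return times, values

rng = random.Random(0)
# Illustrative parameters: lam = 3 and X_n ~ N(1, 0.5^2), so E[Y_1] = 3.
samples = [compound_poisson_path(3.0, lambda r: r.gauss(1.0, 0.5), 1.0, rng)[1][-1]
           for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean)  # close to lam * E[X_1] = 3.0
```

With the truncation ε of (1.1.2) in mind, the same routine simulates the "jumps of size at least ε" approximation of a general pure-jump process, provided ν restricted to {|x| ≥ ε} is finite.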

1.1.2 The Infinitesimal Generator

As a Markov process, a Lévy process possesses an associated family of probability transition functions that form a contraction semigroup. The infinitesimal generator of this contraction semigroup will be of great use to us, and for Lévy processes its form is closely related to the generating triplet.

For a Lévy process X with distribution µ_t = µ_{X_t}, the probability transition function P_t : R × B(R) → R is defined as

    P_t(x, B) = µ_t(B − x),  t ≥ 0, x ∈ R, B ∈ B(R).

A semigroup of bounded linear operators on C_0(R) (the space of continuous functions that vanish at infinity) can be associated to these transition functions ([34] p. 207):

    (P_t f)(x) = ∫_R P_t(x, dy) f(y) = ∫_R µ_t(dy) f(x + y) = E[f(x + X_t)],  f ∈ C_0(R).

The infinitesimal generator of the above strongly continuous contraction semigroup {P_t} is defined by

    (Lf)(x) = lim_{t→0} (P_t f(x) − f(x)) / t,   (1.1.3)

where the limit is understood in the sense of uniform convergence of functions. The domain D(L) is defined to be the set of f for which the limit exists.

Theorem 1.1.4 ([34] p. 208). Let the family of operators {P_t : t ≥ 0} be defined as above from a Lévy process {X_t}_{t≥0} on R. Then (P_t)_{t≥0} is a strongly continuous semigroup on C_0(R) whose infinitesimal generator L satisfies

    Lf(x) = (1/2) σ f''(x) + γ f'(x) + ∫_R (f(x+y) − f(x) − y f'(x) 1_{|y|≤1}) ν(dy)   (1.1.4)

for any f ∈ C_0^∞ ⊂ D(L), where (σ, ν, γ) is the generating triplet of {X_t}.

The following result, Dynkin's formula, will be used in the derivation of the Hamilton-Jacobi-Bellman partial integro-differential equation associated with a stochastic control problem.

Theorem 1.1.5 (Dynkin's formula, [34] p. 330). Let {X_t} be a Lévy process on R with X_0 = 0, let x ∈ R, and let L be its infinitesimal generator. Define the filtration F^X = (F^X_t)_{t∈[0,T]}, where F^X_t = σ{X_s, 0 ≤ s ≤ t} ∨ N(P). If τ is an F^X stopping time with E[τ] < ∞, then

    E[∫_0^τ Lf(x + X_t) dt] = E[f(x + X_τ)] − f(x),  for f ∈ D(L).

A less standard, inhomogeneous version of Dynkin's formula will be more suited to our needs, so we state it as well.

Theorem 1.1.6 (Dynkin's formula, [15] p. 128). Let {X_t} be a Lévy process with generator L. Let f : [0, ∞) × R → R be such that f(t, x) is differentiable in t and, viewed as a function of x, lies in the domain of L. For each t, let f_t and Lf be continuous. Let the filtration F^X be defined as in Theorem 1.1.5. If τ is an F^X stopping time with E[τ] < ∞, then

    E^x[∫_t^τ (f_t(u, X_u) + Lf(u, X_u)) du] = E^x[f(τ, X_τ)] − f(t, x).

1.1.3 Basic Examples of Lévy Processes

Both for intuition and because they comprise the essential processes to which our control problem applies, we list the canonical examples of Lévy processes.

Example 1.1.7. The Lévy process X_t = γt with generating triplet (0, 0, γ) is the (deterministic) linear drift.

Example 1.1.8. For a standard Brownian motion (W_t)_{t≥0}, the process X_t = σW_t is a Lévy process with generating triplet (σ^2, 0, 0). For each t ≥ 0, X_t has a normal distribution with mean 0 and variance σ^2 t.

As seen from the Lévy-Itô decomposition, the only continuous Lévy processes are sums of the above two examples. We now look at the most basic processes with discontinuous paths.

Example 1.1.9. The Poisson process {N_t : t ≥ 0}, taking values in N_0, with parameter λ has, at each t ≥ 0, the Poisson distribution with mean λt:

    P[N_t = k] = e^{−λt} (λt)^k / k!.

The Lévy measure of N_t is ν(B) = λδ_1(B), B ∈ B(R), and its generating triplet is (0, λδ_1, 0).
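Example 1.1.9 is easy to verify empirically: sampling arrival times with exponential inter-arrival gaps of mean 1/λ and counting arrivals in [0, 1] should reproduce the Poisson(λ) distribution. A quick check, with illustrative λ and sample size:

```python
import math
import random

# Count arrivals in [0, T] when inter-arrival times are Exp(lam).
def poisson_count(lam, T, rng):
    n, t = 0, rng.expovariate(lam)
    while t <= T:
        n += 1
        t += rng.expovariate(lam)
    return n

rng = random.Random(42)
lam, samples = 2.0, 20000
counts = [poisson_count(lam, 1.0, rng) for _ in range(samples)]

# Compare empirical frequencies with P[N_1 = k] = exp(-lam) lam^k / k!.
for k in range(6):
    empirical = counts.count(k) / samples
    exact = math.exp(-lam) * lam ** k / math.factorial(k)
    print(k, round(empirical, 3), round(exact, 3))
```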

The Poisson process makes jumps of size 1 at times whose increments are exponentially distributed with mean 1/λ. If the jump sizes are instead independent and identically distributed random variables, we obtain a compound Poisson process.

Example 1.1.10. A compound Poisson process is a Lévy process of the form

    Y_t = Σ_{n=1}^{N_t} X_n,

where the X_n, n ∈ N, are independent and identically distributed random variables on R with P[X_n = 0] = 0, and N_t is a Poisson process with mean λ, independent of {X_n}_n. The generating triplet of Y_t is (0, λµ_{X_1}, 0), where µ_{X_1} is the distribution of X_1.

The compound Poisson process is essential to an understanding of the jump structure of a Lévy process, and we discuss the intuitive relation here. The Lévy measure ν of a pure-jump process X_t is a finite measure on R\{0} if and only if X_t is a compound Poisson process. This is the only type of discontinuous Lévy process with almost surely finitely many jumps over bounded time intervals. In fact, all discontinuous Lévy processes can be thought of as limits of compound Poisson processes. This can be visualized by considering the limit term in the Lévy-Itô decomposition (1.1.2): for any fixed ε > 0, the process represented by the remaining sum is a compound Poisson process.

In a unit time interval (recall that this is precisely what the Lévy measure describes) a Lévy process has only finitely many jumps of size bounded away from 0; otherwise the path could not be càdlàg. There can, however, be an accumulation of infinitely many jumps with jump sizes converging to 0. Of course, this is not possible for a compound Poisson process, because the number of jumps is a Poisson random variable of fixed intensity, which is finite almost surely.

1.1.4 Stable Processes

Just as the distribution of a Brownian motion W_t at various times satisfies the scaling relation

    W_{at} =^d a^{1/2} W_t,  a > 0,

there is a class of Lévy processes that generalizes this selfsimilarity relation. We now introduce this important class of selfsimilar, or stable, Lévy processes. The motivation for presenting these is that we later prove that certain selfsimilar processes satisfy the assumptions of our main control result.

Definition 1.1.5. A probability measure µ on R is called stable if, for any a > 0, there are b > 0 and c ∈ R such that

    \hat{µ}(z)^a = \hat{µ}(bz) e^{icz}.

It is called strictly stable if, for any a > 0, there is b > 0 such that

    \hat{µ}(z)^a = \hat{µ}(bz).

A Lévy process is called a stable or strictly stable process if its distribution at time t = 1 is stable or strictly stable, respectively.

Definition 1.1.6. A stochastic process {X_t : t ≥ 0} on R is called selfsimilar if, for any a > 0, there is b > 0 such that for all t ≥ 0,

    X_{at} =^d b X_t.   (1.1.5)

It is called broad-sense selfsimilar if, for any a > 0, there are b > 0 and a function c : [0, ∞) → R such that

    X_{at} =^d b X_t + c(t).

A Lévy process is selfsimilar or broad-sense selfsimilar if and only if it is, respectively, strictly stable or stable ([34] p. 71). Broad-sense selfsimilarity is selfsimilarity after translation by a deterministic function of time. Here and in our proofs below we focus on selfsimilarity, as the broad-sense case is usually the same argument with more cumbersome notation.

Stable Lévy processes have many tractable analytic properties we can exploit. We survey the essential properties here, and will refer to more as needed in the proofs below. The authoritative reference on stable distributions is Zolotarev [39].

Theorem 1.1.11 ([34] pp. 73-76). Let X_t be a nontrivial selfsimilar Lévy process on R. Then there exists α ∈ (0, 2] such that, for a, b as in (1.1.5), we have b = a^{1/α}. We then say X_t is α-stable, with index α.

Notable cases of α include:

1. The symmetric strictly stable distribution with α = 1 is the Cauchy distribution, with the corresponding Cauchy process.

2. The symmetric strictly stable distribution with α = 2 is Gaussian, with Brownian motion as the corresponding strictly 2-stable process.

3. The strictly stable distribution of index α = 1/2 is known as the Lévy distribution when the support of the distribution is non-negative.

The Lévy measure ν of an α-stable process on R is absolutely continuous with respect to Lebesgue measure and has density

    ν(dx) = c_+ x^{−α−1} dx   if x > 0,
            c_− |x|^{−α−1} dx  if x < 0,   (1.1.6)

where c_+ and c_− are two non-negative real numbers. If c_+ = c_−, the process is symmetric. One can interpret the pair (c_+, c_−) as a measure on the unit sphere {−1, +1} of R. In higher dimensions such a measure is more interesting: one can show that the Lévy measure of a stable process in higher dimensions is the product of a measure on the unit sphere and a 'radial' measure ([34] p. 79).

Theorem 1.1.12 ([34] p. 86, p. 89). If µ is a strictly α-stable distribution on R with 0 < α ≤ 2, then

    \hat{µ}(z) = exp(−c_1 |z|^α e^{−i(π/2)θ sgn z}),

for some c_1 > 0 and θ ∈ R with |θ| ≤ min((2−α)/α, 1). These parameters are uniquely determined by µ.
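The Cauchy process (case 1 above) is one of the few cases where everything can be checked against closed forms: its density is p(t, x) = t/(π(t² + x²)), its characteristic function is e^{−t|z|} (Theorem 1.1.12 with α = 1, θ = 0, c_1 = t), and selfsimilarity with index α = 1 gives p(t, x) = t^{−1} p(1, x/t). A small numerical verification of both facts; the quadrature cutoff and step count are illustrative choices:

```python
import math

# Standard Cauchy process density, known in closed form.
def p(t, x):
    return t / (math.pi * (t * t + x * x))

# Scaling relation p(t, x) = t^{-1} p(1, x/t) (the alpha = 1 case of the
# selfsimilar scaling property), checked pointwise.
for t in (0.5, 2.0, 7.0):
    for x in (-3.0, 0.1, 4.0):
        assert abs(p(t, x) - p(1.0, x / t) / t) < 1e-12

# Characteristic function by symmetry: 2 * int_0^L cos(z x) p(t, x) dx,
# via the trapezoidal rule; the truncation error is O(t / L).
def char_fn(t, z, L=2000.0, n=400000):
    h = L / n
    total = 0.5 * (p(t, 0.0) + p(t, L) * math.cos(z * L))
    for k in range(1, n):
        x = k * h
        total += p(t, x) * math.cos(z * x)
    return 2.0 * h * total

# Should be small: the quadrature reproduces exp(-t |z|).
print(abs(char_fn(1.0, 1.3) - math.exp(-1.3)))
```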

If µ is also symmetric and nontrivial, then the characteristic function simplifies to

    \hat{µ}(z) = e^{−c|z|^α},  c > 0.

The following proposition relates integrability of \hat{µ} to the existence and smoothness of a probability density function.

Proposition 1.1.13 ([34] p. 190). If a probability measure µ on R satisfies

    ∫_R |\hat{µ}(z)| |z|^n dz < ∞

for some n ∈ N_+, then µ has a density p(x) of class C^n, and the derivatives of p(x) of orders 0, ..., n tend to 0 as |x| → ∞.

For a stable process X and every t > 0, there is a constant K > 0 such that |\hat{µ}_t(z)| < exp(−Kt|z|^α). Proposition 1.1.13 then implies that X_t has a smooth density p(t, x).

The existence of a density function, coupled with the selfsimilarity relation, gives a scaling property so useful for our purposes that we formalize it here and give a brief derivation.

Proposition 1.1.14. Let X_t be a strictly α-stable Lévy process whose distribution µ_t at time t admits a density p(t, x), x ∈ R. Then

    p(t, x) = t^{−1/α} p(1, x t^{−1/α}).   (1.1.7)

Proof. For F(t, x) = P[X_t ≤ x], the equality {X_{at}} =^d {a^{1/α} X_t} implies that F(at, y) = F(t, y a^{−1/α}). Letting a = 1/t, we have F(1, y) = F(t, y a^{−1/α}). Changing variables y = x a^{1/α} and taking derivatives of the distribution functions to get the density functions gives the desired relation.

Outside of the cases (α, (c_+ − c_−)/(c_+ + c_−)) ∈ {(1/2, ±1), (1, 0), (2, 0)}, the probability density of an α-stable process is not known in closed form. Various asymptotic expansions exist for different cases, depending on α and other parameters relating to the symmetry and drift of the process. Non-Gaussian stable processes are often popular in modelling due to the polynomial (rather than exponential) decay of their density functions: compared with a Brownian motion, large jumps or movements are far more likely (the densities have "fat tails").

It is primarily an imposition on the probability density of a Lévy process that determines whether a given process satisfies the assumptions of our control theorem. At this point we defer the more detailed discussion of the densities of α-stable processes and stable/Brownian mixtures to Chapter 3, where we prove that the assumptions hold.

1.2 Existing Work

1.2.1 Stochastic Optimal Control of Lévy Processes

The problem analyzed here, controlling the drift of a Lévy process, sits somewhere between theory and example. On the one hand, we provide existence of a classical solution for a broad set of Lévy processes and for a diverse (but somewhat restricted) class of reward functions. On the other, our controlled process possesses constant coefficients in both the Brownian and jump components; this reduces generality but strengthens the conclusions. We briefly review the types of results available in the control literature.

A starting point for the stochastic optimal control of Markov processes with continuous paths is Fleming and Soner [15], which gives both verification theorems for classical solutions and a detailed primer on the viscosity solution approach. There has been considerable work of late in the stochastic control of jump processes; a notable collection of such control results can be found in Øksendal and Sulem [29].

The more significant recent activity focuses on viscosity solutions (see [4], [6], [21], [16]). For regularity results on general nonlinear integral equations of the kind associated with control of Lévy processes, particularly the fractional Laplacian corresponding to symmetric stable processes, see the recent work of Caffarelli and Silvestre ([10], [9], [36]).

However, as our result provides a classical solution, we survey what is available in this regard. A general remark on the type of results available for obtaining classical solutions to control problems driven by jump processes (e.g. in [29]) is that one finds verification theorems. The nature of a verification theorem is to take, as given, a classical solution to a partial integro-differential equation (PIDE) corresponding to the formal Hamilton-Jacobi-Bellman (HJB) equation. The existence of such a solution, in addition to many assumptions and estimates on the process and the solution itself, allows the conclusion that the solution of the PIDE coincides with the value function of the control problem. The difficulty here is that the existence of a classical solution to the HJB equation, and the set of assumptions one has to verify, is (in our experience) only the beginning of the problem.

Fixed-point arguments yielding classical solutions, such as the one we make to prove the main result in Chapter 2, are a staple of nonlinear analysis and nonlinear partial differential equations. Examples of such arguments in stochastic control, applied to continuous processes driven by Brownian motion, can be found in Bensoussan ([5], Theorems II.2.3 and IV.2.1).

1.2.2 The Lévy Density, Probability Density, and Score Function

The third chapter of this work attempts to classify which Lévy processes have a probability density function p(t, x) satisfying the condition

    sup_{t∈[0,T]} E[∫_t^T |g(u, X_u − X_t)|^q du] < ∞

for some q > 1, where

    g(t, x) = p_x(t, x) / p(t, x).

For the purposes of the control result, all that is needed above is finiteness when t = 0. We handle this in a fairly straightforward way by exploiting the relation of g with the score function from information theory, and by generalizing some basic results on the score function and the Fisher information (see [35], [23]).

For stable processes, classical work on density functions was done by Blumenthal and Getoor [8] and McKean [28], while [34] and [39] provide a summary.
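The Brownian case, treated in detail in §3.3.1, already shows how the condition constrains the exponent q; the following one-line computation, taking t = 0 for simplicity, is a condensed sketch of that calculation.

```latex
% Brownian motion: p(u,x) is the N(0,u) density, so the score is linear in x.
p(u,x) = \frac{1}{\sqrt{2\pi u}}\, e^{-x^{2}/(2u)}
\quad\Longrightarrow\quad
g(u,x) = \frac{p_{x}(u,x)}{p(u,x)} = -\frac{x}{u}.
% With X_u \overset{d}{=} u^{1/2} Z for Z standard normal:
\mathbb{E}\int_{0}^{T} \bigl| g(u,X_{u}) \bigr|^{q}\, du
  = \int_{0}^{T} u^{-q}\, \mathbb{E}|X_{u}|^{q}\, du
  = \mathbb{E}|Z|^{q} \int_{0}^{T} u^{-q/2}\, du,
% which is finite exactly when q/2 < 1.
```

So for Brownian motion the condition holds precisely for q ∈ (1, 2): the singularity of the score at small times, rather than the tails, is what limits q.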

For the density of stable/Gaussian mixtures, tight bounds are established in Song and Vondraček [37].

An interesting question is whether one can formulate the above condition on the probability density in terms of conditions on the Lévy density. Singularity in the integral term above can arise either at small times (when the process diffuses from (0, 0)) or in the tails in x. For singular Lévy measures, Picard developed a method for determining existence and regularity of density functions [31], [32]. Asymptotic behavior of the probability density near time 0 is studied in [33], [18], [19], [20]. Tail behavior of the survival function of infinitely divisible distributions is studied in [13] and [30]; the latter also contains an overview of other work in this direction.

1.2.3 Numerical Methods

Numerical methods for stochastic control problems are quite diverse, both in terms of details of implementation and in terms of applications. We present three distinct implementations of the Lévy control problem: an iterative fixed-point method, dynamic programming, and an iterative technique using Fourier transform methods. Each of these is quite traditional.

The framework of dynamic programming originated with the work of Bellman [3]. Fourier techniques are ubiquitous, of course. We make use of the industry-standard fast Fourier transform package fftw [17]. Though the idea of transform techniques is standard, we were influenced by the use of Fourier techniques in computing expectations of functionals of Lévy processes by Jaimungal [22].

We do not prove convergence, but we briefly discuss the two standard approaches for doing so. Convergence of finite difference schemes for value functions that are viscosity solutions was developed by Barles and Souganidis in [2]. A full exposition is given in [15], but it applies only to problems driven by Brownian motions.

The alternative method for showing convergence was developed by Kushner [25], [26]. The idea behind this approach is to approximate the controlled diffusion by a Markov chain endowed with properties that make it "locally consistent" with the diffusion process, so that the numerical solution is an exact solution of an approximating control problem. The next step is to show that the approximating problem converges to the controlled diffusion problem (as opposed to showing that the numerical implementation converges to the solution). The seminal reference on this is [27]. Results are given there for jump processes, but only of the compound Poisson type.