Stochastic Differential Equations, Sixth Edition Solution of Exercise Problems Yan Zeng July 16, 2006 This is a solution manual for the SDE book by Øksendal, Stochastic Differential Equations, Sixth Edition. It is complementary to the books own solution, and can be downloaded at www.math.fsu.edu/z˜eng. If you have any comments or find any typos/errors, please email me at yz44@cornell.edu. This version omits the problems from the chapters on applications, namely, Chapter 6, 10, 11 and 12. I hope I will find time at some point to work out these problems. 2.8. b) Proof. E[eiuBt ] = ∞∑ k=0 ik k! E[Bkt ]u k = ∞∑ k=0 1 k! (− t 2 )ku2k. So E[B2kt ] = 1 k! (− t2 )k (−1)k (2k)! = (2k)! k! · 2k t k. d) Proof. Ex[|Bt −Bs|4] = n∑ i=1 Ex[(B(i)t −B(i)s )4] + ∑ i6=j Ex[(B(i)t −B(i)s )2(B(j)t −B(j)s )2] = n · 4! 2! · 4 · (t− s) 2 + n(n− 1)(t− s)2 = n(n+ 2)(t− s)2. 2.11. Proof. Prove that the increments are independent and stationary, with Gaussian distribution. Note for Gaussian random variables, uncorrelatedness=independence. 2.15. Proof. Since Bt −Bs ⊥ Fs := σ(Bu : u ≤ s), U(Bt −Bs) ⊥ Fs. Note U(Bt −Bs) d= N(0, t− s). 3.2. 1 Proof. WLOG, we assume t = 1, then B31 = n∑ j=1 (B3j/n −B3(j−1)/n) = n∑ j=1 [(Bj/n −B(j−1)/n)3 + 3B(j−1)/nBj/n(Bj/n −B(j−1)/n)] = n∑ j=1 (Bj/n −B(j−1)/n)3 + n∑ j=1 3B2(j−1)/n(Bj/n −B(j−1)/n) + n∑ j=1 3B(j−1)/n(Bj/n −B(j−1)/n)2 := I + II + III By Problem EP1-1 and the continuity of Brownian motion. I ≤ [ n∑ j=1 (Bj/n −B(j−1)/n)2] max 1≤j≤n |Bj/n −B(j−1)/n| → 0 a.s. To argue II → 3 ∫ 1 0 B2t dBt as n → ∞, it suffices to show E[ ∫ 1 0 (B2t − B(n)t )2dt] → 0, where B(n)t =∑n j=1B 2 (j−1)/n1{(j−1)/n s, then E [ Mt Ms |Fs ] = E [ eσ(Bt−Bs)− 1 2σ 2(t−s)|Fs ] = E[eσBt−s ] e 1 2σ 2(t−s) = 1 The second equality is due to the fact Bt −Bs is independent of Fs. 4.4. Proof. For part a), set g(t, x) = ex and use Theorem 4.12. For part b), it comes from the fundamental property of Itoˆ integral, i.e. Itoˆ integral preserves martingale property for integrands in V. 
Comments: The power of Itô's formula is that it gives martingales, which vanish under expectation.

4.5.

Proof.
$$B_t^k = \int_0^t kB_s^{k-1}\,dB_s + \frac{1}{2}k(k-1)\int_0^t B_s^{k-2}\,ds.$$
Therefore, with $\beta_k(t) = E[B_t^k]$,
$$\beta_k(t) = \frac{k(k-1)}{2}\int_0^t \beta_{k-2}(s)\,ds.$$
This gives $E[B_t^4]$ and $E[B_t^6]$. For part b), prove by induction.

4.6. (b)

Proof. Apply Theorem 4.1.2 with $g(t,x) = e^x$ and $X_t = ct + \sum_{j=1}^{n}\alpha_jB_j$. Note $\sum_{j=1}^{n}\alpha_jB_j$ is a BM, up to a constant coefficient.

4.7. (a)

Proof. $v \equiv I_{n\times n}$.

(b)

Proof. Using the integration-by-parts formula (Exercise 4.3), we have
$$X_t^2 = X_0^2 + 2\int_0^t X_s\,dX_s + \int_0^t |v_s|^2\,ds = X_0^2 + 2\int_0^t X_sv_s\,dB_s + \int_0^t |v_s|^2\,ds.$$
So $M_t = X_0^2 + 2\int_0^t X_sv_s\,dB_s$. Let $C$ be a bound for $|v|$; then
$$E\Big[\int_0^t|X_sv_s|^2\,ds\Big] \le C^2E\Big[\int_0^t|X_s|^2\,ds\Big] = C^2\int_0^t E\Big[\Big|\int_0^s v_u\,dB_u\Big|^2\Big]ds = C^2\int_0^t E\Big[\int_0^s|v_u|^2\,du\Big]ds \le \frac{C^4t^2}{2}.$$
So $M_t$ is a martingale.

4.12.

Proof. Let $Y_t = \int_0^t u(s,\omega)\,ds$. Then $Y$ is a continuous $\{\mathcal{F}_t^{(n)}\}$-martingale with finite variation. On one hand,
$$\langle Y\rangle_t = \lim_{\Delta t_k\to 0}\sum_{t_k\le t}|Y_{t_{k+1}}-Y_{t_k}|^2 \le \lim_{\Delta t_k\to 0}\big(\text{total variation of } Y \text{ on } [0,t]\big)\cdot\max_{t_k}|Y_{t_{k+1}}-Y_{t_k}| = 0.$$
On the other hand, the integration-by-parts formula yields
$$Y_t^2 = 2\int_0^t Y_s\,dY_s + \langle Y\rangle_t.$$
So $Y_t^2$ is a local martingale. If $(T_n)_n$ is a localizing sequence of stopping times, then by Fatou's lemma,
$$E[Y_t^2] \le \liminf_{n}E[Y_{t\wedge T_n}^2] = E[Y_0^2] = 0.$$
So $Y_\cdot \equiv 0$. Taking derivatives, we conclude $u = 0$.

4.16. (a)

Proof. Use Jensen's inequality for conditional expectations.

(b)

Proof. (i) $Y = T + 2\int_0^T B_s\,dB_s$. So $M_t = T + 2\int_0^t B_s\,dB_s$.

(ii)
$$B_T^3 = \int_0^T 3B_s^2\,dB_s + 3\int_0^T B_s\,ds = 3\int_0^T B_s^2\,dB_s + 3\Big(B_TT - \int_0^T s\,dB_s\Big).$$
So
$$M_t = 3\int_0^t B_s^2\,dB_s + 3TB_t - 3\int_0^t s\,dB_s = \int_0^t 3\big(B_s^2 + (T-s)\big)\,dB_s.$$

(iii) $M_t = E[\exp(\sigma B_T)|\mathcal{F}_t] = E[\exp(\sigma B_T - \frac{1}{2}\sigma^2T)|\mathcal{F}_t]\exp(\frac{1}{2}\sigma^2T) = Z_t\exp(\frac{1}{2}\sigma^2T)$, where $Z_t = \exp(\sigma B_t - \frac{1}{2}\sigma^2t)$. Since $Z$ solves the SDE $dZ_t = Z_t\sigma\,dB_t$, we have
$$M_t = \Big(1+\int_0^t Z_s\sigma\,dB_s\Big)\exp\Big(\frac{1}{2}\sigma^2T\Big) = \exp\Big(\frac{1}{2}\sigma^2T\Big) + \int_0^t \sigma\exp\Big(\sigma B_s + \frac{1}{2}\sigma^2(T-s)\Big)\,dB_s.$$

5.1. (ii)

Proof.
Set $f(t,x) = \frac{x}{1+t}$; then by Itô's formula,
$$dX_t = df(t,B_t) = -\frac{B_t}{(1+t)^2}\,dt + \frac{dB_t}{1+t} = -\frac{X_t}{1+t}\,dt + \frac{dB_t}{1+t}.$$

(iii)

Proof. By Itô's formula, $dX_t = \cos B_t\,dB_t - \frac{1}{2}\sin B_t\,dt$. So $X_t = \int_0^t \cos B_s\,dB_s - \frac{1}{2}\int_0^t X_s\,ds$. Let $\tau = \inf\{s>0: B_s\notin[-\frac{\pi}{2},\frac{\pi}{2}]\}$. Then
$$X_{t\wedge\tau} = \int_0^{t\wedge\tau}\cos B_s\,dB_s - \frac{1}{2}\int_0^{t\wedge\tau}X_s\,ds = \int_0^t \cos B_s\,1_{\{s\le\tau\}}\,dB_s - \frac{1}{2}\int_0^{t\wedge\tau}X_s\,ds$$
$$= \int_0^t\sqrt{1-\sin^2B_s}\,1_{\{s\le\tau\}}\,dB_s - \frac{1}{2}\int_0^{t\wedge\tau}X_s\,ds = \int_0^{t\wedge\tau}\sqrt{1-X_s^2}\,dB_s - \frac{1}{2}\int_0^{t\wedge\tau}X_s\,ds.$$
So for $t<\tau$,
$$X_t = \int_0^t\sqrt{1-X_s^2}\,dB_s - \frac{1}{2}\int_0^t X_s\,ds.$$

(iv)

Proof. $dX_t^1 = dt$ is obvious. Set $f(t,x) = e^tx$; then
$$dX_t^2 = df(t,B_t) = e^tB_t\,dt + e^t\,dB_t = X_t^2\,dt + e^t\,dB_t.$$

5.3.

Proof. Apply Itô's formula to $e^{-rt}X_t$.

5.5. (a)

Proof. $d(e^{-\mu t}X_t) = -\mu e^{-\mu t}X_t\,dt + e^{-\mu t}\,dX_t = \sigma e^{-\mu t}\,dB_t$. So
$$X_t = e^{\mu t}X_0 + \int_0^t\sigma e^{\mu(t-s)}\,dB_s.$$

(b)

Proof. $E[X_t] = e^{\mu t}E[X_0]$ and
$$X_t^2 = e^{2\mu t}X_0^2 + \sigma^2e^{2\mu t}\Big(\int_0^t e^{-\mu s}\,dB_s\Big)^2 + 2\sigma e^{2\mu t}X_0\int_0^t e^{-\mu s}\,dB_s.$$
So, since $\int_0^t e^{-\mu s}\,dB_s$ is a martingale vanishing at time 0,
$$E[X_t^2] = e^{2\mu t}E[X_0^2] + \sigma^2e^{2\mu t}\int_0^t e^{-2\mu s}\,ds = e^{2\mu t}E[X_0^2] + \sigma^2e^{2\mu t}\,\frac{e^{-2\mu t}-1}{-2\mu} = e^{2\mu t}E[X_0^2] + \sigma^2\,\frac{e^{2\mu t}-1}{2\mu}.$$
So
$$\mathrm{Var}[X_t] = E[X_t^2] - (E[X_t])^2 = e^{2\mu t}\mathrm{Var}[X_0] + \sigma^2\,\frac{e^{2\mu t}-1}{2\mu}.$$

5.6.

Proof. We find the integrating factor $F_t$ as follows. Suppose $F_t$ satisfies the SDE $dF_t = \theta_t\,dt + \gamma_t\,dB_t$. Then
$$d(F_tY_t) = F_t\,dY_t + Y_t\,dF_t + dY_t\,dF_t = F_t(r\,dt + \alpha Y_t\,dB_t) + Y_t(\theta_t\,dt + \gamma_t\,dB_t) + \alpha\gamma_tY_t\,dt = (rF_t + \theta_tY_t + \alpha\gamma_tY_t)\,dt + (\alpha F_tY_t + \gamma_tY_t)\,dB_t. \quad (1)$$
Solving the system
$$\begin{cases}\theta_t + \alpha\gamma_t = 0\\ \alpha F_t + \gamma_t = 0,\end{cases}$$
we get $\gamma_t = -\alpha F_t$ and $\theta_t = \alpha^2F_t$. So $dF_t = \alpha^2F_t\,dt - \alpha F_t\,dB_t$. To find $F_t$, set $Z_t = e^{-\alpha^2t}F_t$; then
$$dZ_t = -\alpha^2e^{-\alpha^2t}F_t\,dt + e^{-\alpha^2t}\,dF_t = e^{-\alpha^2t}(-\alpha)F_t\,dB_t = -\alpha Z_t\,dB_t.$$
Hence $Z_t = Z_0\exp(-\alpha B_t - \alpha^2t/2)$, and so $F_t = e^{\alpha^2t}F_0e^{-\alpha B_t - \frac{1}{2}\alpha^2t} = F_0e^{-\alpha B_t + \frac{1}{2}\alpha^2t}$. Choose $F_0 = 1$ and plug it back into equation (1): $d(F_tY_t) = rF_t\,dt$. So
$$Y_t = F_t^{-1}\Big(F_0Y_0 + r\int_0^t F_s\,ds\Big) = Y_0e^{\alpha B_t - \frac{1}{2}\alpha^2t} + r\int_0^t e^{\alpha(B_t-B_s) - \frac{1}{2}\alpha^2(t-s)}\,ds.$$

5.7. (a)

Proof. $d(e^tX_t) = e^t(X_t\,dt + dX_t) = e^t(m\,dt + \sigma\,dB_t)$. So
$$X_t = e^{-t}X_0 + m(1-e^{-t}) + \sigma e^{-t}\int_0^t e^s\,dB_s.$$

(b)

Proof.
$E[X_t] = e^{-t}E[X_0] + m(1-e^{-t})$ and
$$E[X_t^2] = E[(e^{-t}X_0 + m(1-e^{-t}))^2] + \sigma^2e^{-2t}E\Big[\int_0^t e^{2s}\,ds\Big] = e^{-2t}E[X_0^2] + 2m(1-e^{-t})e^{-t}E[X_0] + m^2(1-e^{-t})^2 + \frac{1}{2}\sigma^2(1-e^{-2t}).$$
Hence
$$\mathrm{Var}[X_t] = E[X_t^2] - (E[X_t])^2 = e^{-2t}\mathrm{Var}[X_0] + \frac{1}{2}\sigma^2(1-e^{-2t}).$$

5.9.

Proof. Let $b(t,x) = \log(1+x^2)$ and $\sigma(t,x) = 1_{\{x>0\}}x$; then
$$|b(t,x)| + |\sigma(t,x)| \le \log(1+x^2) + |x|.$$
Note $\log(1+x^2)/|x|$ is continuous on $\mathbb{R}\setminus\{0\}$ and has limit 0 both as $x\to0$ and as $x\to\infty$, so it is bounded on $\mathbb{R}$. Therefore there exists a constant $C$ such that
$$|b(t,x)| + |\sigma(t,x)| \le C(1+|x|).$$
Also, by the mean value theorem,
$$|b(t,x)-b(t,y)| + |\sigma(t,x)-\sigma(t,y)| \le \frac{2|\xi|}{1+\xi^2}|x-y| + |1_{\{x>0\}}x - 1_{\{y>0\}}y|$$
for some $\xi$ between $x$ and $y$. So
$$|b(t,x)-b(t,y)| + |\sigma(t,x)-\sigma(t,y)| \le |x-y| + |x-y|.$$
The conditions of Theorem 5.2.1 are satisfied, and we have existence and uniqueness of a strong solution.

5.10.

Proof. $X_t = Z + \int_0^t b(s,X_s)\,ds + \int_0^t\sigma(s,X_s)\,dB_s$. Since Jensen's inequality implies $(a_1+\cdots+a_n)^p \le n^{p-1}(a_1^p+\cdots+a_n^p)$ ($p\ge1$, $a_1,\ldots,a_n\ge0$), we have
$$E[|X_t|^2] \le 3\Big(E[|Z|^2] + E\Big[\Big|\int_0^t b(s,X_s)\,ds\Big|^2\Big] + E\Big[\Big|\int_0^t\sigma(s,X_s)\,dB_s\Big|^2\Big]\Big)$$
$$\le 3\Big(E[|Z|^2] + E\Big[\int_0^t|b(s,X_s)|^2\,ds\Big] + E\Big[\int_0^t|\sigma(s,X_s)|^2\,ds\Big]\Big)$$
$$\le 3\Big(E[|Z|^2] + C^2E\Big[\int_0^t(1+|X_s|)^2\,ds\Big] + C^2E\Big[\int_0^t(1+|X_s|)^2\,ds\Big]\Big) = 3\Big(E[|Z|^2] + 2C^2E\Big[\int_0^t(1+|X_s|)^2\,ds\Big]\Big)$$
$$\le 3\Big(E[|Z|^2] + 4C^2E\Big[\int_0^t(1+|X_s|^2)\,ds\Big]\Big) \le 3E[|Z|^2] + 12C^2T + 12C^2\int_0^t E[|X_s|^2]\,ds = K_1 + K_2\int_0^t E[|X_s|^2]\,ds,$$
where $K_1 = 3E[|Z|^2] + 12C^2T$ and $K_2 = 12C^2$. By Gronwall's inequality, $E[|X_t|^2] \le K_1e^{K_2t}$.

5.11.

Proof. First, we check by the integration-by-parts formula that
$$dY_t = \Big(-a + b - \int_0^t\frac{dB_s}{1-s}\Big)dt + (1-t)\,\frac{dB_t}{1-t} = \frac{b-Y_t}{1-t}\,dt + dB_t.$$
Set $X_t = (1-t)\int_0^t\frac{dB_s}{1-s}$; then $X_t$ is centered Gaussian, with variance
$$E[X_t^2] = (1-t)^2\int_0^t\frac{ds}{(1-s)^2} = (1-t) - (1-t)^2.$$
So $X_t$ converges in $L^2$ to 0 as $t\to1$. Since $X_t$ is continuous a.s. for $t\in[0,1)$, we conclude 0 is the unique a.s. limit of $X_t$ as $t\to1$.

5.14. (i)

Proof.
$$dZ_t = d\big(u(B_1(t),B_2(t)) + iv(B_1(t),B_2(t))\big) = \nabla u\cdot(dB_1(t),dB_2(t)) + \frac{1}{2}\Delta u\,dt + i\,\nabla v\cdot(dB_1(t),dB_2(t)) + \frac{i}{2}\Delta v\,dt$$
$$= (\nabla u + i\,\nabla v)\cdot(dB_1(t),dB_2(t)) \qquad (u, v \text{ are harmonic})$$
$$= \frac{\partial u}{\partial x}(B(t))\,dB_1(t) - \frac{\partial v}{\partial x}(B(t))\,dB_2(t) + i\Big(\frac{\partial v}{\partial x}(B(t))\,dB_1(t) + \frac{\partial u}{\partial x}(B(t))\,dB_2(t)\Big) \qquad \text{(Cauchy–Riemann equations)}$$
$$= \Big(\frac{\partial u}{\partial x}(B(t)) + i\,\frac{\partial v}{\partial x}(B(t))\Big)dB_1(t) + i\Big(\frac{\partial u}{\partial x}(B(t)) + i\,\frac{\partial v}{\partial x}(B(t))\Big)dB_2(t) = F'(B(t))\,dB(t).$$

(ii)

Proof. By the result of (i), $de^{\alpha B(t)} = \alpha e^{\alpha B(t)}\,dB(t)$. So $Z_t = Z_0e^{\alpha B(t)}$ solves the complex SDE $dZ_t = \alpha Z_t\,dB(t)$.

5.15.

Proof. The deterministic analog of this SDE is a Bernoulli equation $\frac{dy_t}{dt} = rKy_t - ry_t^2$. The correct substitution is to multiply both sides by $-y_t^{-2}$ and set $z_t = y_t^{-1}$; then we get a linear equation $\frac{dz_t}{dt} = -rKz_t + r$. Similarly, we multiply both sides of the SDE by $-X_t^{-2}$ and set $Z_t = X_t^{-1}$. Then
$$-\frac{dX_t}{X_t^2} = -\frac{rK\,dt}{X_t} + r\,dt - \beta\frac{dB_t}{X_t}$$
and
$$dZ_t = -\frac{dX_t}{X_t^2} + \frac{dX_t\cdot dX_t}{X_t^3} = -rKZ_t\,dt + r\,dt - \beta Z_t\,dB_t + \frac{1}{X_t^3}\beta^2X_t^2\,dt = r\,dt - rKZ_t\,dt + \beta^2Z_t\,dt - \beta Z_t\,dB_t.$$
Define $Y_t = e^{(rK-\beta^2)t}Z_t$; then
$$dY_t = e^{(rK-\beta^2)t}\big(dZ_t + (rK-\beta^2)Z_t\,dt\big) = e^{(rK-\beta^2)t}(r\,dt - \beta Z_t\,dB_t) = re^{(rK-\beta^2)t}\,dt - \beta Y_t\,dB_t.$$
Now we imitate the solution of Exercise 5.6. Consider an integrating factor $N_t$ such that $dN_t = \theta_t\,dt + \gamma_t\,dB_t$ and
$$d(Y_tN_t) = N_t\,dY_t + Y_t\,dN_t + dN_t\cdot dY_t = N_tre^{(rK-\beta^2)t}\,dt - \beta N_tY_t\,dB_t + Y_t\theta_t\,dt + Y_t\gamma_t\,dB_t - \beta\gamma_tY_t\,dt.$$
Solving the system
$$\begin{cases}\theta_t = \beta\gamma_t\\ \gamma_t = \beta N_t,\end{cases}$$
we get $dN_t = \beta^2N_t\,dt + \beta N_t\,dB_t$. So $N_t = N_0e^{\beta B_t + \frac{1}{2}\beta^2t}$ and
$$d(Y_tN_t) = N_tre^{(rK-\beta^2)t}\,dt = N_0re^{(rK-\frac{1}{2}\beta^2)t + \beta B_t}\,dt.$$
Choosing $N_0 = 1$, we have
$$N_tY_t = Y_0 + \int_0^t re^{(rK-\frac{\beta^2}{2})s + \beta B_s}\,ds$$
with $Y_0 = Z_0 = X_0^{-1}$. So
$$X_t = Z_t^{-1} = e^{(rK-\beta^2)t}Y_t^{-1} = \frac{e^{(rK-\beta^2)t}N_t}{Y_0 + \int_0^t re^{(rK-\frac{1}{2}\beta^2)s+\beta B_s}\,ds} = \frac{e^{(rK-\frac{1}{2}\beta^2)t + \beta B_t}}{x^{-1} + \int_0^t re^{(rK-\frac{1}{2}\beta^2)s + \beta B_s}\,ds}.$$

5.15. (Another solution)

Proof. We can also use the method of Exercise 5.16. Here $f(t,x) = rKx - rx^2$ and $c(t)\equiv\beta$. So $F_t = e^{-\beta B_t + \frac{1}{2}\beta^2t}$ and $Y_t$ satisfies
$$dY_t = F_t\big(rKF_t^{-1}Y_t - rF_t^{-2}Y_t^2\big)\,dt.$$
Dividing both sides by $-Y_t^2$, we have
$$-\frac{dY_t}{Y_t^2} = \Big(-\frac{rK}{Y_t} + rF_t^{-1}\Big)dt.$$
So $dY_t^{-1} = -Y_t^{-2}\,dY_t = (-rKY_t^{-1} + rF_t^{-1})\,dt$, and
$$d(e^{rKt}Y_t^{-1}) = e^{rKt}(rKY_t^{-1}\,dt + dY_t^{-1}) = e^{rKt}rF_t^{-1}\,dt.$$
Hence
$$e^{rKt}Y_t^{-1} = Y_0^{-1} + r\int_0^t e^{rKs}e^{\beta B_s - \frac{1}{2}\beta^2s}\,ds$$
and
$$X_t = F_t^{-1}Y_t = \frac{e^{\beta B_t - \frac{1}{2}\beta^2t}e^{rKt}}{Y_0^{-1} + r\int_0^t e^{\beta B_s + (rK-\frac{1}{2}\beta^2)s}\,ds} = \frac{e^{(rK-\frac{1}{2}\beta^2)t + \beta B_t}}{x^{-1} + r\int_0^t e^{(rK-\frac{1}{2}\beta^2)s + \beta B_s}\,ds}.$$

5.16. (a) and (b)

Proof. Suppose $F_t$ is a process satisfying the SDE $dF_t = \theta_t\,dt + \gamma_t\,dB_t$; then
$$d(F_tX_t) = F_t\big(f(t,X_t)\,dt + c(t)X_t\,dB_t\big) + X_t\theta_t\,dt + X_t\gamma_t\,dB_t + c(t)\gamma_tX_t\,dt = \big(F_tf(t,X_t) + c(t)\gamma_tX_t + X_t\theta_t\big)\,dt + \big(c(t)F_tX_t + \gamma_tX_t\big)\,dB_t.$$
Solving the system
$$\begin{cases}c(t)\gamma_t + \theta_t = 0\\ c(t)F_t + \gamma_t = 0,\end{cases}$$
we have $\gamma_t = -c(t)F_t$ and $\theta_t = c^2(t)F_t$. So $dF_t = c^2(t)F_t\,dt - c(t)F_t\,dB_t$, hence
$$F_t = F_0e^{\frac{1}{2}\int_0^t c^2(s)\,ds - \int_0^t c(s)\,dB_s}.$$
Choosing $F_0 = 1$, we get the desired integrating factor, and $d(F_tX_t) = F_tf(t,X_t)\,dt$.

(c)

Proof. In this case, $f(t,x) = \frac{1}{x}$ and $c(t)\equiv\alpha$. So $F_t = e^{-\alpha B_t + \frac{1}{2}\alpha^2t}$ and $Y_t$ satisfies
$$dY_t = F_t\cdot\frac{1}{F_t^{-1}Y_t}\,dt = F_t^2Y_t^{-1}\,dt.$$
Since $dY_t^2 = 2Y_t\,dY_t + dY_t\cdot dY_t = 2F_t^2\,dt = 2e^{-2\alpha B_t + \alpha^2t}\,dt$, we have
$$Y_t^2 = Y_0^2 + 2\int_0^t e^{-2\alpha B_s + \alpha^2s}\,ds,$$
where $Y_0 = F_0X_0 = X_0 = x$. So
$$X_t = e^{\alpha B_t - \frac{1}{2}\alpha^2t}\sqrt{x^2 + 2\int_0^t e^{-2\alpha B_s + \alpha^2s}\,ds}.$$

(d)

Proof. $f(t,x) = x^\gamma$ and $c(t)\equiv\alpha$. So $F_t = e^{-\alpha B_t + \frac{1}{2}\alpha^2t}$ and $Y_t$ satisfies the SDE
$$dY_t = F_t(F_t^{-1}Y_t)^\gamma\,dt = F_t^{1-\gamma}Y_t^\gamma\,dt.$$
Noting $dY_t^{1-\gamma} = (1-\gamma)Y_t^{-\gamma}\,dY_t = (1-\gamma)F_t^{1-\gamma}\,dt$, we conclude $Y_t^{1-\gamma} = Y_0^{1-\gamma} + (1-\gamma)\int_0^t F_s^{1-\gamma}\,ds$ with $Y_0 = F_0X_0 = X_0 = x$. So
$$X_t = e^{\alpha B_t - \frac{1}{2}\alpha^2t}\Big(x^{1-\gamma} + (1-\gamma)\int_0^t e^{-\alpha(1-\gamma)B_s + \frac{\alpha^2(1-\gamma)}{2}s}\,ds\Big)^{\frac{1}{1-\gamma}}.$$

5.17.

Proof. Assume $A\neq0$ and define $\omega(t) = \int_0^t v(s)\,ds$; then $\omega'(t) \le C + A\omega(t)$ and
$$\frac{d}{dt}\big(e^{-At}\omega(t)\big) = e^{-At}\big(\omega'(t) - A\omega(t)\big) \le Ce^{-At}.$$
So $e^{-At}\omega(t) - \omega(0) \le \frac{C}{A}(1-e^{-At})$, i.e. $\omega(t) \le \frac{C}{A}(e^{At}-1)$. So
$$v(t) = \omega'(t) \le C + A\cdot\frac{C}{A}(e^{At}-1) = Ce^{At}.$$

5.18. (a)

Proof. Let $Y_t = \log X_t$; then
$$dY_t = \frac{dX_t}{X_t} - \frac{(dX_t)^2}{2X_t^2} = \kappa(\alpha - Y_t)\,dt + \sigma\,dB_t - \frac{\sigma^2X_t^2\,dt}{2X_t^2} = \Big(\kappa\alpha - \frac{1}{2}\sigma^2\Big)dt - \kappa Y_t\,dt + \sigma\,dB_t.$$
So $d(e^{\kappa t}Y_t) = \kappa Y_te^{\kappa t}\,dt + e^{\kappa t}\,dY_t = e^{\kappa t}[(\kappa\alpha - \frac{1}{2}\sigma^2)\,dt + \sigma\,dB_t]$ and
$$e^{\kappa t}Y_t - Y_0 = \Big(\kappa\alpha - \frac{1}{2}\sigma^2\Big)\frac{e^{\kappa t}-1}{\kappa} + \sigma\int_0^t e^{\kappa s}\,dB_s.$$
Therefore
$$X_t = \exp\Big\{e^{-\kappa t}\log x + \Big(\alpha - \frac{\sigma^2}{2\kappa}\Big)(1-e^{-\kappa t}) + \sigma e^{-\kappa t}\int_0^t e^{\kappa s}\,dB_s\Big\}.$$

(b)

Proof. $E[X_t] = \exp\{e^{-\kappa t}\log x + (\alpha - \frac{\sigma^2}{2\kappa})(1-e^{-\kappa t})\}\,E[\exp\{\sigma e^{-\kappa t}\int_0^t e^{\kappa s}\,dB_s\}]$.
Note $\int_0^t e^{\kappa s}\,dB_s \sim N\big(0, \frac{e^{2\kappa t}-1}{2\kappa}\big)$, so
$$E\Big[\exp\Big\{\sigma e^{-\kappa t}\int_0^t e^{\kappa s}\,dB_s\Big\}\Big] = \exp\Big\{\frac{1}{2}\sigma^2e^{-2\kappa t}\cdot\frac{e^{2\kappa t}-1}{2\kappa}\Big\} = \exp\Big\{\frac{\sigma^2(1-e^{-2\kappa t})}{4\kappa}\Big\}.$$

5.19.

Proof. We follow the hint.
$$P\Big[\int_0^T\big|b(s,Y_s^{(K)}) - b(s,Y_s^{(K-1)})\big|\,ds > 2^{-K-1}\Big] \le P\Big[\int_0^T D\big|Y_s^{(K)} - Y_s^{(K-1)}\big|\,ds > 2^{-K-1}\Big]$$
$$\le 2^{2K+2}E\Big[\Big(\int_0^T D\big|Y_s^{(K)} - Y_s^{(K-1)}\big|\,ds\Big)^2\Big] \le 2^{2K+2}E\Big[D^2T\int_0^T\big|Y_s^{(K)} - Y_s^{(K-1)}\big|^2\,ds\Big]$$
$$\le 2^{2K+2}D^2T\int_0^T\frac{A_2^Ks^K}{K!}\,ds = \frac{2^{2K+2}D^2TA_2^K}{(K+1)!}T^{K+1}.$$
Similarly,
$$P\Big[\sup_{0\le t\le T}\Big|\int_0^t\big(\sigma(s,Y_s^{(K)}) - \sigma(s,Y_s^{(K-1)})\big)\,dB_s\Big| > 2^{-K-1}\Big] \le 2^{2K+2}E\Big[\Big|\int_0^T\big(\sigma(s,Y_s^{(K)}) - \sigma(s,Y_s^{(K-1)})\big)\,dB_s\Big|^2\Big]$$
$$\le 2^{2K+2}E\Big[\int_0^T\big(\sigma(s,Y_s^{(K)}) - \sigma(s,Y_s^{(K-1)})\big)^2\,ds\Big] \le 2^{2K+2}D^2E\Big[\int_0^T\big|Y_s^{(K)} - Y_s^{(K-1)}\big|^2\,ds\Big] \le \frac{2^{2K+2}D^2A_2^K}{(K+1)!}T^{K+1}.$$
So
$$P\Big[\sup_{0\le t\le T}\big|Y_t^{(K+1)} - Y_t^{(K)}\big| > 2^{-K}\Big] \le \frac{2^{2K+2}D^2TA_2^K}{(K+1)!}T^{K+1} + \frac{2^{2K+2}D^2A_2^K}{(K+1)!}T^{K+1} \le \frac{(A_3T)^{K+1}}{(K+1)!},$$
where $A_3 = 4(A_2+1)(D^2+1)(T+1)$.

7.2.

Remark: When an Itô diffusion is explicitly given, it is usually straightforward to find its infinitesimal generator, by Theorem 7.3.3. The converse is not so trivial, as we face two difficulties. First, the desired $n$-dimensional Itô diffusion $dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t$ involves an $m$-dimensional BM $B_t$, where $m$ is unknown a priori. Second, even if $m$ can be determined, we only know $\sigma\sigma^T$, which is the product of an $n\times m$ and an $m\times n$ matrix; in general it is hard to recover $\sigma$ from $\sigma\sigma^T$. This suggests there may be more than one diffusion with the given generator. Indeed, when restricted to $C_0^2(\mathbb{R}_+)$, BM, BM killed at 0, and reflected BM all have the Laplacian operator as generator. What differentiates them is the domain of the generator: the domain is part of the definition of a generator! With the above theoretical background, it should be OK if we find more than one Itô diffusion process with the given generator. A basic way to find an Itô diffusion with a given generator is trial and error.
To tackle the first problem, we try $m = 1, 2, \ldots$. To tackle the second problem, note $\sigma\sigma^T$ is symmetric, so we can write $\sigma\sigma^T = AMA^T$, where $M$ is the diagonalization of $\sigma\sigma^T$, and then set $\sigma = AM^{1/2}$. To work directly with $\sigma\sigma^T$ instead of $\sigma$, we should use the martingale problem approach of Stroock and Varadhan; see the preface of their classical book for details.

a)

Proof. $dX_t = dt + \sqrt{2}\,dB_t$.

b)

Proof.
$$d\begin{pmatrix}X_1(t)\\ X_2(t)\end{pmatrix} = \begin{pmatrix}1\\ cX_2(t)\end{pmatrix}dt + \begin{pmatrix}0\\ \alpha X_2(t)\end{pmatrix}dB_t.$$

c)

Proof. $\sigma\sigma^T = \begin{pmatrix}1+x_1^2 & x_1\\ x_1 & 1\end{pmatrix}$. If
$$d\begin{pmatrix}X_1(t)\\ X_2(t)\end{pmatrix} = \begin{pmatrix}2X_2(t)\\ \log(1+X_1^2(t)+X_2^2(t))\end{pmatrix}dt + \begin{pmatrix}a\\ b\end{pmatrix}dB_t,$$
then $\sigma\sigma^T$ has the form $\begin{pmatrix}a^2 & ab\\ ab & b^2\end{pmatrix}$, which is impossible since $x_1^2 \neq (1+x_1^2)\cdot1$. So we try a 2-dimensional BM as the driving process. Linear algebra yields
$$\sigma\sigma^T = \begin{pmatrix}1 & x_1\\ 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0\\ x_1 & 1\end{pmatrix}.$$
So we can choose
$$dX_t = \begin{pmatrix}2X_2(t)\\ \log(1+X_1^2(t)+X_2^2(t))\end{pmatrix}dt + \begin{pmatrix}1 & X_1(t)\\ 0 & 1\end{pmatrix}\begin{pmatrix}dB_1(t)\\ dB_2(t)\end{pmatrix}.$$

7.3.

Proof. Set $\mathcal{F}_t^X = \sigma(X_s: s\le t)$ and $\mathcal{F}_t^B = \sigma(B_s: s\le t)$. Since $\sigma(X_t) = \sigma(B_t)$, we have, for any bounded Borel function $f(x)$,
$$E[f(X_{t+s})|\mathcal{F}_t^X] = E[f(xe^{c(t+s)+\alpha B_{t+s}})|\mathcal{F}_t^B] = E^{B_t}[f(xe^{c(t+s)+\alpha B_s})] \in \sigma(B_t) = \sigma(X_t).$$
So $E[f(X_{t+s})|\mathcal{F}_t^X] = E[f(X_{t+s})|X_t]$.

7.4. a)

Proof. Choose $b\in\mathbb{R}_+$ so that $0 < x < b$. Define $\tau_0 = \inf\{t>0: B_t = 0\}$, $\tau_b = \inf\{t>0: B_t = b\}$ and $\tau_{0b} = \tau_0\wedge\tau_b$. Clearly $\lim_{b\to\infty}\tau_b = \infty$ a.s. by the continuity of Brownian motion. Consequently, $\{\tau_0<\tau_b\}\uparrow\{\tau_0<\infty\}$ as $b\uparrow\infty$. Since $(B_t^2-t)_{t\ge0}$ is a martingale, by Doob's optional stopping theorem we have $E^x[B_{t\wedge\tau_{0b}}^2] = E^x[t\wedge\tau_{0b}]$. Applying the bounded convergence theorem to the LHS and the monotone convergence theorem to the RHS, we get $E^x[\tau_{0b}] = E^x[B_{\tau_{0b}}^2] < \infty$. In particular, $\tau_{0b}<\infty$ a.s. Moreover, by considering the martingale $(B_t)_{t\ge0}$ and a similar argument, we have $E^x[B_{\tau_{0b}}] = E^x[B_0] = x$. This leads to the equations
$$\begin{cases}P^x(\tau_0<\tau_b)\cdot0 + P^x(\tau_0>\tau_b)\cdot b = x\\ P^x(\tau_0<\tau_b) + P^x(\tau_0>\tau_b) = 1.\end{cases}$$
Solving them gives $P^x(\tau_0<\tau_b) = 1-\frac{x}{b}$. So $P^x(\tau_0<\infty) = \lim_{b\to\infty}P^x(\tau_0<\tau_b) = 1$.

b)

Proof. $E^x[\tau] = \lim_{b\to\infty}E^x[\tau_{0b}] = \lim_{b\to\infty}E^x[B_{\tau_{0b}}^2] = \lim_{b\to\infty}b^2\cdot\frac{x}{b} = \infty$.
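The factorization step in 7.2 c) — recovering a matrix $\sigma$ from a given $\sigma\sigma^T$ — is easy to sanity-check numerically. The sketch below is not part of the original manual; it assumes NumPy is available, verifies the proposed $2\times2$ choice, and illustrates why the 1-dimensional ansatz fails.

```python
import numpy as np

# Exercise 7.2 c): the candidate sigma = [[1, x1], [0, 1]] should satisfy
#   sigma @ sigma.T == [[1 + x1^2, x1], [x1, 1]].
def sigma_of(x1):
    return np.array([[1.0, x1], [0.0, 1.0]])

def target_of(x1):
    return np.array([[1.0 + x1 ** 2, x1], [x1, 1.0]])

for x1 in (-2.0, 0.0, 0.7, 3.0):
    assert np.allclose(sigma_of(x1) @ sigma_of(x1).T, target_of(x1))

# By contrast, a 1-dimensional driving BM would need sigma = (a, b)^T with
#   a^2 = 1 + x1^2,  b^2 = 1,  a*b = x1,
# forcing x1^2 = (1 + x1^2) * 1, which is impossible; e.g. at x1 = 0.7:
x1 = 0.7
a, b = np.sqrt(1.0 + x1 ** 2), 1.0
assert not np.isclose(a * b, x1)
```

The check only confirms the algebra for sample values of $x_1$; the impossibility argument itself is, of course, the symbolic one given in the solution.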
Remark: (1) Another easy proof is based on the following result, which can be proved independently and by elementary methods: let $W = (W_t)_{t\ge0}$ be a Wiener process and $T$ a stopping time such that $E[T]<\infty$; then $E[W_T] = 0$ and $E[W_T^2] = E[T]$ ([6]). (2) The solution in the book is not quite right, since Dynkin's formula assumes $E^x[\tau_K]<\infty$, which needs proof in this problem.

7.5.

Proof. The hint is detailed enough. But if we want to be really rigorous, note that Theorem 7.4.1 (Dynkin's formula) studies Itô diffusions, not Itô processes, to which standard semigroup theory (in particular, the notion of generator) does not apply. So we start from scratch and re-derive Dynkin's formula for Itô processes. First of all, we note that $b(t,x)$ and $\sigma(t,x)$ are bounded for $x$ in a bounded domain, uniformly in $t$. This suffices to give us martingales, not just local martingales. Indeed, Itô's formula says
$$|X(t)|^2 = |X(0)|^2 + \int_0^t\sum_i 2X_i(s)\,dX_i(s) + \int_0^t\sum_i d\langle X_i\rangle(s)$$
$$= |X(0)|^2 + 2\sum_i\int_0^t X_i(s)b_i(s,X(s))\,ds + 2\sum_{ij}\int_0^t X_i(s)\sigma_{ij}(s,X(s))\,dB_j(s) + \sum_{ij}\int_0^t \sigma_{ij}^2(s,X_s)\,ds.$$
Let $\tau = t\wedge\tau_R$, where $\tau_R = \inf\{t>0: |X_t|\ge R\}$. Then by the previous remark on the boundedness of $\sigma$ and $b$, $\int_0^{t\wedge\tau_R}X_i(s)\sigma_{ij}(s,X(s))\,dB_j(s)$ is a martingale. Taking expectations, we get $E[|X(\tau)|$ …