1. Trang chủ
  2. » Giáo Dục - Đào Tạo

Almost sure central limit theorem

42 29 0

Đang tải... (xem toàn văn)

Tài liệu hạn chế xem trước, để xem đầy đủ mời bạn chọn Tải xuống

THÔNG TIN TÀI LIỆU

Nội dung

MINISTRY OF EDUCATION AND TRAINING HANOI PEDAGOGICAL UNIVERSITY DEPARTMENT OF MATHEMATICS PHAM NGOC QUYNH HUONG ALMOST SURE CENTRAL LIMIT THEOREM BACHELOR THESIS Hanoi, 2019 MINISTRY OF EDUCATION AND TRAINING HANOI PEDAGOGICAL UNIVERSITY DEPARTMENT OF MATHEMATICS PHAM NGOC QUYNH HUONG ALMOST SURE CENTRAL LIMIT THEOREM Speciality: Applied Mathmatics BACHELOR THESIS Supervisor: PhD Pham Viet Hung Hanoi, 2019 Confirmation This dissertation has been written on the basis of my research project carried at Hanoi Pedagogical University 2, under the supervision of PhD Pham Viet Hung The manuscript has never been published by others The author Pham Ngoc Quynh Huong Acknowledgment First and foremost, my heartfelt goes to my admirable supervisor, Mr Pham Viet Hung (Institute of Mathematics, Vietnam Academy of Science and Technology), for his continuous supports when I met obstacles during the journey The completion of this study would not have been possible without his expert advice, close attention and unswerving guidance Secondly, I am keen on expressing my deep gratitude for my family for encouraging me to continue this thesis I owe my special thanks to my parents for their emotional and material sacrifices as well as their understanding and unconditional support Finally, I own my thanks to many people who helped me and encouraged me during my work My special thanks to Mr Nguyen Phuong Dong (Hanoi Pedagogical University No.2) for his guidance on drawing up my work I am specially thankful to all my best friends at university for endless incentive Contents Preliminaries 1.1 Probability Space 1.2 Random Variables 1.2.1 Definition 1.2.2 Distribution functions 1.2.3 Expectation 1.2.4 Variance 1.2.5 Examples 1.2.6 Some inequalities 4 13 random variables 14 14 14 15 1.4 Convergence of Random Variables 1.4.1 Definition 1.4.2 Relations among kinds of convergences 1.4.3 Some proofs 1.4.4 Some other theorems 1.4.5 Strong law of large numbers 16 16 16 17 19 21 1.5 Central Limit Theorem 1.5.1 Characteristic function 1.5.2 Central Limit Theorem 1.5.3 Berry-Esseen Theorem 22 22 24 25 Almost sure central limit theorem 26 2.1 Introduction 26 2.2 Almost Sure Central Limit Theorem 31 2.3 A universal result in Almost Sure Central Limit Theorem 36 References 37 1.3 Random Vectors 1.3.1 Definition 1.3.2 Independent 1.3.3 Covariance Introduction The Central Limit Theorem has been described as one of the most remarkable results, a beautiful pearl in a lot aspects of mathematics, especially in the world of probability and statistics It is one of the oldest results in probability theory, occupies a unique position at the heart of probabilistic limit theory and plays a central role in the theory of statistical inference Much of its importance stems from its proven adaptability and utility in many areas of mathematics, while it accounts hugely for the importance of the normal distribution in theoretical investigations The Central limit theorem has been developed in some directions to construct Kolmogorov distance, Cramer’s theorem (Large deviations) and so on In this thesis, a new direction to develop the theorem which was introduced in 1980s is presented: Almost Sure Central Limit Theorem The theorem is synthesized from the interest whether assertions are possible for almost every realization of the random variables Xn The Almost sure central limit theorem states in its simplest form that a sequence of independent, identically distributed random variables (Xn )n≥1 , with E(Xi ) = and E(Xi2 ) = 1, obeys P lim N →∞ log N N I n=1 Sn √ ≤x n = Φ(x) = 1, for each value x ∈ R I{.} here denotes the indicator function of events, Φ denotes the distribution function of the standard normal distribution and Sn is the nth partial sum of the above mentioned sequence of random variables This thesis mainly discuss about how the theorem was established, its statement and proof Besides, an universal result of this generalization is also under consideration Chapter Preliminaries 1.1 Probability Space Let Ω be a non-empty set without any special structure and 2Ω be the set of all subsets of Ω, including the empty set ∅ Definition 1.1.1 Let A be a subset of 2Ω Then A is a σ-algebra if it satisfies following properties: ∅ ∈ A and Ω ∈ A If A ∈ A then Ac := Ω \ A ∈ A A is closed under countable unions and countable intersections, i.e, if Ai ∈ A ∞ then ∞ i=1 Ai ∈ A i=1 Ai ∈ A and Definition 1.1.2 A probability measure defined on a σ-algebra A of Ω is a function P : A → [0, 1] which satisfies two following properties: P(Ω) = P possesses countable additivity, that is, for every countable sequence (An )n≥1 of elements of A, pairwise disjoint, one gets ∞ ∞ An P n=1 = P(An ) n=1 Then, we have the definition of a probability space as follows Definition 1.1.3 The probability space (Ω, A, P) is constructed by three elements: the sample space Ω, σ-algebra A, probability measure P defined above Proposition 1.1.1 Let (Ω, A, P) be a probability space, then it has following properties: (i) P(∅) = (ii) P(Ac ) = − P(A) (iii) P is additive (iv) If A, B ∈ A and A ⊆ B then P(A) ≤ P(B) BACHELOR THESIS PHAM NGOC QUYNH HUONG 1.2 Random Variables 1.2.1 Definition Definition 1.2.1 Let (Ω, A) be a measurable space and B(R) be the Borel σ-algebra on R A map X : Ω → R is said to be A-measurable if X −1 (B) := {ω : X(ω) ∈ B} ∈ A, ∀B ∈ B(R) Then, the A-measurable function X is called a random variable Note that there are types of random variables: discrete random variables and continuous random variables A discrete random variable has a finite or countable range whereas a continuous function takes on an uncountably infinite number of possible outcomes Remark 1.2.1 X is a random variable if and only if for all a ∈ R, {ω : X(ω) < a} ∈ A Remark 1.2.2 Let ϕ : R → R be a measurable function Then ϕ(X) is also a random variable 1.2.2 Distribution functions Definition 1.2.2 The function FX : R → R that satisfies FX (x) = P[X < x], x∈R is called the distribution function of X Besides, one can check that FX has the following properties: FX is a non-decreasing function; F is left-continuous and has right limit at any point in R; lim FX (x) = and lim FX (x) = x→−∞ x→∞ Definition 1.2.3 Let X be a continuous random variable If there exists a function fX satisfying a FX (a) = P[X < a] = fX (x)dx ∀ a ∈ R −∞ then one says fX to be the density function of X In addition, the density function f = fX has properties as follows: +∞ f (x) is non-negative for all x ∈ R and f (x)dx = −∞ BACHELOR THESIS PHAM NGOC QUYNH HUONG b f (x)dx for all x ∈ R and for any a, b ∈ R such that a < b P[a < X < b] = a Moreover, for any A ∈ B(R), it is true that P[X ∈ A] = f (x)dx A Here I present some celebrated kinds of distributions for discrete random variables and continuous random variables: Example 1.2.1 (For discrete random variables) Poisson distribution X is said to have Poisson distribution with parameter λ > 0, denoted by X ∼ P oi(λ) if X(Ω) = {0, 1, 2, } and e−λ λk , k = 0, 1, k! P[X = k] = Bernoulli distribution X has Bernoulli distribution with parameter p ∈ [0, 1] if it takes only and as its range and P[X = 1] = − P[X = 0] = p X represents a kind of experiments with only two outcomes: ”success” (X = 1) and failure (X = 0) Binomial distribution X has Binomial distribution with parameter p ∈ [0, 1] and n ∈ N, denoted by X ∼ B(n, p), if X takes values in {0, 1, , n} and n k p (1 − p)n−k , k P[X = k] = where k = 0, 1, , n Example 1.2.2 (For continuous random variables) Uniform distribution The function f (x) =   b−a 0 if a ≤ x ≤ b otherwise is called the Uniform distribution on [a, b] and denoted by U [a, b].The distribution function corresponding to f is   if x < a  0 F (x) = x−a if a ≤ x ≤ b b−a    if x > b Normal distribution The Normal distribution with the mean a and variance σ has the form (x−a)2 f (x) = √ e− 2σ2 , x ∈ R, 2πσ If a = and σ = 1, N (0, 1) is the standard normal distribution BACHELOR THESIS PHAM NGOC QUYNH HUONG 1.2.3 Expectation The expectation of a random variable X, which is denoted by E(X), can be thought of as the mean value of X The value can be represent by the two following formulas   xi P(X = xi ) if X is discrete,   i∈N +∞ E(X) =    xf (x)dx if X is continuous −∞ Proposition 1.2.1 Let X, Y be two arbitrary random variables Then E(c) = c, where c ∈ R; E[cX] = cE[X], where c is a constant; E[aX + bY ] = aE[X] + bE[Y ], for all a, b ∈ R; If X ≤ Y then E[X] ≤ E[Y ] Definition 1.2.4 A random variable X is said to be integrable if E(|X|) < ∞ We denote L1 as the set of integrable random variables Notice that an event A happens almost surely if P(A) = and X = Y a.s if P(X = Y ) = Theorem 1.2.2 Let X, Y be integrable random variables If X = Y a.s then E(X) = E(Y ) Proof Firstly, we consider the case X, Y are non-negative random variables Let A = {w : X(w) = Y (w)}, we have P(A) = In addition, E(Y ) = E(Y IA + Y IAc ) = E(Y IA ) + E(Y IAc ) = E(Y IA ) + E(XIAc ) Suppose that (Yn ) is a sequence of simple random variables which increases to Y Therefore, (Yn IA ) is also a sequence of simple random variables and (Yn IA ) increases to (Y IA ) Suppose that for each n ≥ 1, Yn is bounded by Mn , so ≤ E(Yn IA ) ≤ E(Mn IA ) = Mn P(A) = for each n Hence, E(Y IA ) = and similarly, E(XIA ) = Drop the indicator function, we obtain E(X) = E(Y ) BACHELOR THESIS PHAM NGOC QUYNH HUONG In general, if X ∼ N (a, σ ), then ϕX (t) = √ By the use in change of variable y = ∞ 2πσ x−a , σ eitx e− (x−a)2 2σ dx −∞ we obtain: ∞ eita ϕX (t) = √ eitσy e−y /2 dy 2π −∞ 2 = eita−t σ /2 Theorem 1.5.3 Two random vectors have the same distribution if their characteristic functions coincide Theorem 1.5.4 Let (Fn )n≥1 be a sequence of distribution functions whose charactereitx dFn (x) The following statements istic functions (ϕn )n≥1 are defined by ϕn (t) = R hold: w If Fn − → F for some distribution F then (ϕn ) converges point-wise to the characteristic funtion ϕ of F If ϕn (t) → ϕ(t), t ∈ R then the following statements are equivalent: w (i) ϕ(t) is a characteristic function and Fn − → F where F is a distribution function whose characteristic function is ϕ; (ii) ϕ is continuous at t = 1.5.2 Central Limit Theorem Theorem 1.5.5 (Central Limit Theorem) Let (Xn )n≥1 be a sequence of independent and identically distributed random variables with E(Xn ) = µ, DXn = σ < ∞ Denote Sn = X1 + · · · + Xn Then Yn = Sn − nµ w √ − → N (0, 1) σ n Proof Let ϕ be the characteristic function of Xj −µ where j = 1, , n Since (Xn )n≥1 is identically distributed, ϕ does not depend on j In addition, because Xn ’s are independent, one gets: n ϕYn (t) = E exp it j=1 Xj − µ √ σ n n = E exp it j=1 Xj − µ √ σ n = ϕn t √ σ n It is obvious that E(Xj − µ) = and E((Xj − µ)2 ) = σ < ∞ So, from the Theorem 1.5.2, ϕ has continuous 2nd derivative and the following expansion ϕ(t) = − t2 σ + t2 α(t) 24 BACHELOR THESIS PHAM NGOC QUYNH HUONG where α(t) → as t → Recall that ϕYn (t) = ϕn t √ σ n t2 t2 = exp n ln − + 2α 2n nσ t √ = exp n ln ϕ σ n Now by using the expansion ln(1 + x) = x + o(x) where t √ σ n o(x) → as x → 0, we have x −t2 t2 −t2 t2 t t √ √ + 2α +o + 2α 2n nσ 2n nσ σ n σ n 2 2 t −t t −t t t √ √ + 2α +o + 2α = 2n nσ 2n nσ σ n σ n ln ϕYn (t) = n It is clear that when n → ∞, ϕYn (t) → − −t2 Thus, 2 /2 lim ϕYn (t) = e−t n→∞ Applying the Theorem 1.5.4, we have the result as desired 1.5.3 Berry-Esseen Theorem Here we state without proof the Berry–Esseen inequality: Theorem 1.5.6 Let X1 , X2 , be i.i.d random variables that have E(Xn ) = 0, E(Xn )2 = X1 + · · · + Xn σ < ∞ and E|Xn |3 = ρ < ∞ Denote Sn = Then there exists a posin tive constant C such that for all x and n |Fn (x) − Φ(x)| ≤ where Fn (x) = P √ Sn n ≤x σ 25 Cρ √ , σ3 n Chapter Almost sure central limit theorem 2.1 Introduction Firstly, we recall the classic Central limit theorem (CLT): Central limit theorem Let (Xn )n≥1 be a sequence of i.i.d random variables with E(Xn ) = 0, D(Xn ) = for all n ≥ Denote Sn = X1 + · · · + Xn Then S w √n − → N (0, 1) n Or we can write ∀x ∈ R, lim P n→∞ S √n ≤ x n = Φ(x) = √ 2π x /2 e−u du −∞ By Cesaro limit theorem: Given a convergent sequence {an }n≥1 with lim an = a, then n→∞ a1 + · · · + an = a; n→∞ n lim so we can conclude that lim N →∞ N N S √n ≤ x n P n=1 = Φ(x) As a natural question, one could ask that for a fixed value x ∈ R, whether one could have lim N →∞ N N I n=1 Sn (w) √ ≤x n = Φ(x) a.s i.e N →∞ N N P w ∈ Ω : lim I n=1 Sn (w) √ ≤x n = Φ(x) = However, the statement above is not true due to the following theorem 26 BACHELOR THESIS PHAM NGOC QUYNH HUONG Theorem 2.1.1 Suppose that E|X|3 < ∞ Then P w ∈ Ω : lim sup N →∞ x∈R N N I n=1 Sn √ ≤x n − Φ(x) = = Proof We denote A := w ∈ Ω : lim sup N →∞ x∈R N N I n=1 Sn √ ≤x n − Φ(x) = , then we can claim that A is an event In the proof, we need the following lemma so-called Hewitt-Savage lemma Before stating the lemma, let us define a permutable event Consider a sequence of i.i.d random variables (Xi )i≥1 , and an event A that is defined on this sequence A is called to be permutable if A is unchanged by finite permutation of indices of the sequence Lemma 2.1.2 (Hewitt-Savage Lemma) Given an infinite sequence of i.i.d random variables Then a permutable event A has either probability zero or probability one Proof of the lemma We need to show that any such event A is independent of itself Firstly, let σ(X1 , , Xn ) be the σ-algebra generated by random variables X1 , , Xn Because σ(X1 , , Xn ) is an algebra that generates F which contains A, there exist n≥1 a sequence of events An ∈ σ(X1 , , Xn ),i.e, An = A|Fn such that P(An A) → 0, (2.1) as n → ∞ Notice that the approximating events have the form An = {w : (w1 , , wn ) ∈ Bn } where wi = Xi (w) and Bn ∈ Rn Let πn be the finite permutation defined by    j + n if ≤ j ≤ n πn = j − n if n + ≤ j ≤ 2n    j if j ≥ 2n + Due to the fact that joint distribution of the coordinates is permutation ally invariant, one gets P(w : w ∈ An Besides, we have {w : πn (w) ∈ An A) = P(w : πn (w) ∈ An A) A} while {w : πn (w) ∈ An } = {w : (wn+1 , , w2n ∈ Bn )} 27 (2.2) BACHELOR THESIS PHAM NGOC QUYNH HUONG Letting An denote the last event above, we have: P(w : πn (w) ∈ An A) = P(w : w ∈ An A) (2.3) It implies from (2.2) and (2.3) that A) = P(An P(An Since An C ⊂ (An A)(A A) (2.4) C), it follows form (2.1) and (2.4) that An ) ≤ P(An P(An An ) → A) + P(A whence ≤ P(An ) − P(An An ) ≤ P(An ∪ An ) − P(An ∩ An ) An ) → = P(An and P(An ∩ An ) → P(A) However, An and An are independent Therefore, we have P(An ∩ An ) = P(An )P(An ) → P2 (A), which show that P(A) = P2 (A) as desired It is easy to see that the considering event A is permutable Then applying the above lemma, P(A) = or By contradiction, assume that P(A) > 0, hence A must have probability one N Consider the measure µN = N n=1 δ Sn√(w) , where δ is the Dirac measure Denote f (y) = n I(−∞,x) (y), then for all x ∈ R f (y)dµN (y) = N R D It means that µ − → N I n=1 Sn (w) √ ≤x n e−y /2 → Φ(x) = f (y) √ dy 2π R /2 e−y √ dy 2π Hence, g(y)dµN (y) → R e−y /2 g(y) √ dy, 2π R which holds for all bounded and continuous function g Put g(y) = eity , for fixed value t ∈ R we get eity dµN (y) = R e−y /2 eity √ dy 2π R i.e N N √ exp itSn (w)/ n → exp(−t2 /2), n=1 28 (2.5) BACHELOR THESIS PHAM NGOC QUYNH HUONG as N → ∞, which holds for all w ∈ A We introduce √ Yn = Yn (t) = Yn (t, w) = exp(itSn (w)/ n); Yn (t, w)P(dw) EYn = EYn (t) = Ω Then EYn → exp(−t2 /2) in the observation of the central limit theorem and one can write (2.5) in the form ZN (t) = N N a.s (Yn − EYn ) −→ n=1 as N → ∞ Moreover, we see that |Yn | ≤ and |EYn | ≤ 1, so |ZN | ≤ It follows that E|ZN |2 → (2.6) as N → ∞ by Dominated convergence theorem We find that |ZN |2 = Zn Z n = N2 N N E|ZN |2 = N2 N N (Yn − EYn )(Y m − EY m ), n=1 m=1 (EYn Y m − EYn EY m ) n=1 m=1 Now, in special case, suppose that Xj ∼ N (0, 1), then we mark all the corresponding ∗ Sn quantities by an asterisk * In other words, Yn∗ = exp it √ For m ≤ n, we have n S∗ S∗ ∗ EYn∗ Y m = E exp it √n exp −it √m n m ∗ ∗ ∗ X + · · · + Xn X ∗ + · · · + Xm = E exp it √ exp it √ n m ∗ S − S∗ ∗ √ −√ exp it n √ m = E exp itSm n m n ∗ 1 S − S∗ ∗ √ −√ = E exp itSm E exp it n √ m , n m n ∗ ∗ where in the last line, we use the fact that two random variables Sm and Sn∗ − Sm are ∗ independent Notice that Sm ∼ N (0, m), we have ∗ E exp itSm = ϕN (0,m) = exp −t2 1 √ −√ n m 1 t √ −√ n m 1 √ −√ n m 29 m BACHELOR THESIS PHAM NGOC QUYNH HUONG ∗ Similarly, with Sn∗ − Sm ∼ N (0, n − m), E exp it ∗ Sn∗ − Sm √ n n−m = exp − t2 n Therefore, m t n ∗ EYn∗ Y m = exp −t2 + ∗ ∗ ∗ ∗ Now, it is remarkable that EYn∗ Y m is a real number, so EYn∗ Y m = EYn∗ Y m = EY n Ym∗ Consequently, E|ZN∗ |2 = N N n−1 m t n − exp(−t2 ) + N {1 − exp(−t2 )} m t n − + N (exp(t2 ) − 1) exp −t2 + n=1 m=1 N exp(−t ) = N2 n−1 exp n=1 m=1 Applying the fact ex ≥ + x, we get E|ZN∗ |2 exp(−t2 ) N2 = n−1 Now, using the inequalities N √ N n=1 m=1 n=1 m=1 n−1 m +N n N n=1 m=1 N N m ≥ m=1 n, ∀n ≥ 2, it implies that exp(−t2 ) N2 N exp(−t2 ) ≥ N2 √ m t + N t2 n m +N n xdx = exp(−t2 ) ≥ N2 ≥ exp(−t2 ) N2 = exp(−t2 ) N2 = exp(−t2 ) N2 2(n − 1)3/2 and 2(n − 1) ≥ N n=1 N 2(n − 1)3/2 √ +N n 4(n − 1)3/2 √ +N n−1 n=1 N −1 √ 2 (n − 1) + N n=1 √ N (N − 1) + N t2 exp(−t2 ) for sufficiently large N Now we consider the case Xn are subject to a general distribution Because E|X|3 < ∞, in the spirit of the Berry-Esseen theorem, we see that E|Zn |2 − E|Zn∗ |2 ≤ √CN → as N → ∞ This together with (2.6) contradicts the last inequality and so proves the theorem ≥ 30 BACHELOR THESIS PHAM NGOC QUYNH HUONG Remark 2.1.1 The statement of Theorem 2.1.1 is still true in general case without the assumption that E|X|3 < ∞ The proof stays the same, except for the fact that E|Zn |2 −E|Zn∗ |2 → will be deduced from Cesaro limit theorem instead of Berry-Esseen theorem 2.2 Almost Sure Central Limit Theorem Theorem 2.1.1 implies that arithmetic means are not suitable for the sequence I √Sn ≤x As a result, we shall test a stronger method, namely, logarithm means With n the new method, the Almost Sure Central Theorem is stated as follows Theorem 2.2.1 (Almost sure central limit theorem) Suppose that E|Xn |3 < ∞, one has P lim N →∞ log N N n=1 I n Sn √ ≤x n = Φ(x) =1 (2.7) for all x ∈ R Proof To prove the theorem, we consider the integrated characteristic functions ∞ u g(u) = ϕX (t)dt = −∞ eiux − dF (x) ix We introduce the following lemma Lemma 2.2.2 Let g(u), gn (u) be the integrated characteristic functions of the distribution F (x), Fn (x), respectively Assume that F (x) has no defect, i.e, the characteristic function of F (x) is continuous at for all x ∈ (−∞, ∞) and that gn (u) → g(u) for all rational u Then Fn (x) → F (x) in all x ∈ CF (x) Proof of the Lemma Let Fn (x) be a subsequence of Fn (x) such that Fn (x) → F ∗ (x) eiux − Then gn (u) → g ∗ (u) even if F ∗ (x) has a defect because → as |x| → ∞ ix ∗ ∗ This implies that g (u) = g(u), ∀u ∈ Q and because g (u) and g(u) are continuous, we conclude g ∗ (u) = g(u), ∀u ∈ R But then, f ∗ (t) = f (t), and for t = it follows that F ∗ (x) can not have a defect Thus, F ∗ (x) = F (x) and Fn (x) → F (x) for every convergent subsequence Fn (x), which constructs the lemma Back to the proof of the theorem, we introduce WN (t) = log N N n=1 (Yn (t) − EYn (t)) n u VN (u) = WN (t)dt 31 BACHELOR THESIS PHAM NGOC QUYNH HUONG for finite u ∈ Q We consider u u u u E|VN (u)|2 = |EWN (t)W N (s)|dtds EWN (t)W N (s)dtds ≤ 0 0 For m ≤ m, we have EWN (t)W N (s) = log2 N N N N N (EYn (t)Y m (s) − EYn (t)EY m (s)); nm n=1 m=1 Now we calculate ∗ |EWN∗ (t)W N (s)| = log2 N ∗ ∗ EYn∗ (t)Y m (s) − EYn∗ (t)EY m (s) nm n=1 m=1 We have S∗ S∗ ∗ EYn∗ (t)Y m (s) = E exp it √n exp −is √m n m ∗ ∗ ∗ X + · · · + Xn X ∗ + · · · + Xm = E exp it √ exp −is √ n m s t t ∗ ∗ = E exp i √ − √ Sm exp i √ (Sn∗ − Sm ) n m n s t t ∗ ∗ Sm ) = E exp i √ − √ E exp i √ (Sn∗ − Sm n m n ∗ ∗ due to the fact that Sm and S ∗ n − Sm are independent ∗ Notice that Sm ∼ N (0, m), we have t s √ −√ n m E exp i ∗ Sm = ϕN (0,m) s t √ −√ n m = exp − t s √ −√ n m = exp −s2 − t2 exp 2 m m st n ∗ Similarly, with Sn∗ − Sm ∼ N (0, n − m), t ∗ i √ (Sn∗ − Sm ) n E exp −t2 (n − m) 2n = exp Thus, we obtain ∗ EYn∗ (t)Y m (s) = exp −s2 − t2 exp m st − n Besides, we have ∗ EYn∗ (t)EY m (s) = exp 32 −t2 exp −s2 BACHELOR THESIS PHAM NGOC QUYNH HUONG = exp −t2 − s2 Therefore, we get ∗ |EWN∗ (t)W N (s)| exp(−(t2 + s2 )/2) = log2 N exp(−(t2 + s2 )/2) = log2 N N N m st − n exp nm n=1 m=1 N N n−1 m st − + | exp(st) − 1| n n2 n=1 exp nm n=1 m=1 ∗ To evaluate the value of |EWN∗ (t)W N (s)|, we introduce an inequality: |eax − 1| ≤ a|x|e|x| , for all real number a ∈ (0, 1) and for all x ∈ R Proof of the inequality Firstly, suppose that x ≥ Put f (x) = eax − − axex We have f (x) = aeax − aex − axex ≤ since a ∈ (0, 1) It means that f (x) is decreasing on [0, ∞) Moreover, f (0) = Therefore, f (x) ≤ on [0, ∞) Now, we assume that x < Put f (x) = − eax + axe−x Calculating the first derivatives of f , we obtain f (x) = −aeax + ae−x − axe−x ≥ since a ∈ (0, 1), i.e, f (x) is increasing on (−∞, 0) In this case, f (0) = again Thus, f (x) ≤ on (−∞, 0) Thus, we get the inequality as desired Applying the above inequality, we have ∗ |EWN∗ (t)W N (s)| = exp(−(t2 + s2 )/2) log2 N N n−1 exp nm n=1 m=1 N n √ n3 n N m st − + | exp(st) − 1| n n n=1 exp(−(t2 + s2 )/2) √ ≤ 2|st| exp(|st|) log N mn3 n=1 m=1 k Now, using the inequality k t=1 N n=1 xi dx, we have ti ≤ √ n3 n m m=1 N ≤ n=1 33 √ x BACHELOR THESIS PHAM NGOC QUYNH HUONG N = n=1 N √ √ n n3 =2 n=1 n N ≤2 1+ 1 dx x = 2(1 + log N ) ≤ log N Moreover, exp −(t2 + s2 ) exp |st| ≤ since the sum −(t 2+s ) + |st| ≤ Finally, we have ∗ |EWN∗ (t)W N (s)| ≤ 5|st| 2|st|2 log N ≤ log N log N if N is large enough By the hypothesis E|Xn |3 < ∞ and in the spirit of Berry-Esseen theorem, we have C1 ∗ |EYn (t)Y m (s) − EYn∗ (t)Y m (s)| ≤ √ , m C2 ∗ |EYm (s)EY n (t) − EYm∗ (s)EY n (t)| ≤ √ m for m ≤ n and consequently ∗ |EWN (t)W N (s)| ≤ |EWN (t)WN (s) − EWn∗ (t)WN∗ (s)| + |EWN∗ (t)W N (s)| ≤ log2 N n N n 1 ∗ √ + |EWN∗ (t)W N (s)| mn m n=1 m=1 N C = log2 N ≤ N ∗ ∗ |EYn (t)Y m (s) − EYn∗ (t)Y m (s)| + |EWN∗ (t)W N (s)| mn n=1 m=1 C ≤ log2 N C1 N ∗ ∗ |EYn (t)Y m (s) − EYn∗ (t)Y m (s)| + |EWN∗ (t)W N (s)| mn n=1 m=1 ≤ log2 N n N n=1 n ∗ m−3/2 + |EWN∗ (t)W N (s)| m m=1 m−3/2 N m=1 log N n=1 5|st| + n log N 34 BACHELOR THESIS PHAM NGOC QUYNH HUONG n C1 N m=1 ≤ log N n C1 = m−3/2 5|st| dx + x log N m−3/2 m=1 + log N C3 ≤ log N 5|st| log N for sufficiently large N This implies E|VN (u)|2 ≤ C log N where C = u2 C3 Now we consider the subsequence Nk = [ek ], ∞ ∞ and a nondecreasing positive sequence (cn ) satisfying cn+1 = O(1) (cn → ∞) cn Put dk = log ck+1 and Dn = k≤n dk Then we have ck N →∞ Dn Ifk (X1 , ,Xk )

Ngày đăng: 22/08/2019, 18:35

TỪ KHÓA LIÊN QUAN

w