Vietnam Journal of Mathematics 33:1 (2005) 33–41

On the Almost Sure Convergence of Weighted Sums of I.I.D. Random Variables

Dao Quang Tuyen
Institute of Mathematics, 18 Hoang Quoc Viet Road, 10307 Hanoi, Vietnam

Received July 17, 2003. Revised February 20, 2004.

Abstract. We generalize some theorems of Chow and Lai [2] to general weighted sums of i.i.d. random variables. A characterization of moment conditions like $Ee^{\alpha|X|^\beta}|X|^\gamma<\infty$ or $E|X|^\alpha(\log^+|X|)^\beta<\infty$ is also given.

1. Introduction

Let $X_1,X_2,\dots$ be independent identically distributed random variables with zero means. Let $(a_{nk})$, $n,k=1,2,\dots$, be any array of real numbers and $(m_n)$ be any sequence of positive integers such that $m_n\to\infty$. The problem is to find best conditions for almost sure convergence to zero of $S_n=\sum_{k=1}^{m_n}a_{nk}X_k$. Some convergence theorems for $S_n$ have been obtained by Chow [1], Chow and Lai [2], Hanson and Koopman [4], Pruitt [5] and Stout [6]. In [2] Chow and Lai proved strong theorems for the case $a_{nk}=f(n)c_{nk}$, where $f(n)\downarrow 0$ and $(c_{nk})$ satisfies summability conditions like $\limsup_n\sum_k c_{nk}^2<\infty$ or $\limsup_n\sum_k|c_{nk}|<\infty$. In this paper we generalize some of these results to more general arrays $(a_{nk})$. In addition, we give a characterization of general moment conditions like $Ef(|X_1|)<\infty$ by almost sure convergence to zero of $X_na_n(f)$. For example, one such known result ([2], Theorem 1) states that, for any $\alpha\ge 1$, $E|X_1|^\alpha<\infty$ if and only if $n^{-1/\alpha}X_n\to 0$ a.s.

2. Results

We shall use the following definition. An array $(a_{nk})$ is said to converge to a sequence $(a_n)$ almost uniformly as $k\to\infty$ if for every $\varepsilon>0$ there exists $K(\varepsilon)$ such that $|a_{nk}-a_n|<\varepsilon$ for all $n$ and all $k$, except for at most $K(\varepsilon)$ values of $k$ for each $n$. It is obvious that if $(a_{nk})\to_k(a_n)$ almost uniformly, then $a_{nk}\to_k a_n$ for all $n$. Note that, for arrays, uniform convergence implies almost uniform convergence, but the converse is not true; the array in the proof of Corollary 2 is an example.

Theorem 1. Let $X_1,X_2,\dots$ be i.i.d. mean-zero random variables. Then $Ee^{t|X_1|}<\infty$ for all $t>0$ if and only if $S_n=\sum_{k=1}^\infty a_{nk}X_k\to 0$ a.s. for every array of real numbers $(a_{nk})$ satisfying
(a) $A_n:=\sum_{k=1}^\infty a_{nk}^2<\infty$ for all $n$;
(b) $a_{nk}^2/A_n\to_k 0$ almost uniformly;
(c) $\sum_{n=1}^\infty e^{-a/\sqrt{A_n}}<\infty$ for some $a>0$.

This theorem improves Theorem 2 in [2], which deals with $a_{nk}=c_{n-k}/\log n$, where $\sum_{n=1}^\infty c_n^2<\infty$. This array clearly satisfies (a), (b) and (c) of Theorem 1.

Theorem 2. Let $(X_n)$ be any sequence of i.i.d. mean-zero random variables, $(a_{nk})$ be any array of real numbers and $(m_n)$ be any sequence of positive integers such that $m_n\to\infty$. Then $\sum_{k=1}^{m_n}a_{nk}X_k\to 0$ a.s. if there exists a sequence of positive numbers $(c_n)$ satisfying
(a) $\sum_{k=1}^{m_n}|a_{nk}|\le c_{m_n}$ for all $n\ge 1$;
(b) $c_n$ is monotone non-increasing and tends to zero;
(c) $c_nX_n\to 0$ a.s.

Remark. We say that $S_n=\sum_{k=1}^{m_n}a_{nk}X_k$ converges to zero almost surely and absolutely if $\bar S_n:=\sum_{k=1}^{m_n}|a_{nk}|\,|X_k|\to 0$ a.s. Because the proof of Theorem 2 still holds when $S_n$ is replaced by $\bar S_n$, Theorem 2 also gives the convergence of $S_n$ in this absolute sense. Consequently, under the conditions of Theorem 2, $\sum_{k=1}^{m'_n}a_{nk}X_k\to 0$ a.s. for every sequence $(m'_n)$ such that $m'_n\to\infty$ and $m'_n\le m_n$ for all $n$.
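The following numerical sketch is illustrative only; the standard normal distribution, the sequence $c_n=n^{-1/2}$ and the weights $a_{nk}=c_n/n$ for $k\le n$ (with $m_n=n$) are choices made purely for the example and are not taken from the paper. With them, condition (a) holds with equality, (b) is clear, and (c) holds because $EX_1^2<\infty$ implies $n^{-1/2}X_n\to 0$ a.s.; the simulated tails of $|S_n|$ shrink accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (not from the paper): standard normal X_k, m_n = n,
# c_n = n^{-1/2} and a_{nk} = c_n / n for k <= n, so that
# sum_{k<=n} |a_{nk}| = c_n (condition (a)) and c_n is non-increasing to 0 (condition (b)).
N = 20000
X = rng.standard_normal(N)            # i.i.d. mean-zero X_1, ..., X_N

n = np.arange(1, N + 1)
c = n ** -0.5                         # c_n X_n -> 0 a.s. since E X_1^2 < infinity (condition (c))
S = (c / n) * np.cumsum(X)            # S_n = sum_{k<=n} a_{nk} X_k

for m in (100, 1000, 10000, N):
    tail_max = np.abs(S[m - 1:]).max()
    print(f"sup of |S_n| over n >= {m}: {tail_max:.4f}")
```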
We shall give below some corollaries of this theorem. We shall use the following notation from [2]. Let $e_1(x)=e^x$, $e_2(x)=e_1(e^x)$, etc., and let $\log_2 x=\log\log x$, $\log_3 x=\log(\log_2 x)$, etc. By convention we shall also write $\log_1 x=\log x$, $\log_0 x=e_0(x)=1$ and $e_k=e_k(1)$. For definiteness let us define $\log_k x=1$ for all $x>0$ such that $\log_k x<1$ or $\log_k x$ is not defined.

Corollary 1. Let $X_1,X_2,\dots$ be i.i.d. mean-zero random variables. For any $\alpha>0$ and $k=1,2,\dots$, the following statements are equivalent:
(a) $Ee_k(t|X_1|^\alpha)<\infty$ for all $t>0$;
(b) $\lim_n(\log_k n)^{-1/\alpha}X_n=0$ a.s.;
(c) $\lim_n\sum_{i=1}^{m_n}a_{ni}X_i=0$ a.s. for every sequence $(m_n)$ and array $(a_{ni})$ such that $m_n\to\infty$ and $\sum_{i=1}^{m_n}|a_{ni}|=O\big((\log_k m_n)^{-1/\alpha}\big)$.

This corollary clearly improves Theorem 5 in [2]: it applies to more general arrays $(a_{nk})$ and to every sequence $m_n\to\infty$. Of course the most important case is $m_n=n$ for all $n$.

Corollary 2. Let $X_1,X_2,\dots$ be i.i.d. mean-zero random variables. For any $\alpha\ge 1$ the following statements are equivalent:
(a) $E|X_1|^\alpha<\infty$;
(b) $\lim_n n^{-1/\alpha}X_n=0$ a.s.;
(c) $\lim_n\sum_{k=1}^{m_n}a_{nk}X_k=0$ a.s. for every sequence $(m_n)$ and array $(a_{nk})$ such that $m_n\to\infty$ and $\sum_{k=1}^{m_n}|a_{nk}|=O\big(m_n^{-1/\alpha}\big)$.

This corollary is essentially weaker than Theorem 1 in [2]; we state it to show a simple consequence of Theorem 2. It would be stronger than Theorem 1 in [2] if the last condition in (c) could be replaced by $\sum_{k=1}^{m_n}a_{nk}^2=O\big(m_n^{-1/\alpha}\big)$, but our method of proof is not suitable for deriving such a result.

By Corollaries 1 and 2 we see that the finiteness of the expectation of some function of $X_1$ is equivalent to a condition like (c) of Theorem 2. By the theorem below, we obtain such equivalent conditions for more general functions; hence Theorem 2 extends its applicability.

Theorem 3. Let $X_1,X_2,\dots$ be i.i.d. random variables.
(a) For any $\alpha>0$ and $\beta\ge 0$, $E|X_1|^\alpha(\log^+|X_1|)^\beta<\infty$ if and only if $\lim_n X_n\log^{\beta/\alpha}n/n^{1/\alpha}=0$ a.s.
(b) For any $\alpha>0$, $\beta>0$ and $\gamma\ge 0$, $Ee^{\alpha|X_1|^\beta}|X_1|^\gamma<\infty$ if and only if $\limsup_n|X_n|/(\log n)^{1/\beta}\le 1/\alpha^{1/\beta}$ a.s.

Theorem 2 and Theorem 3 together lead to the following corollaries.

Corollary 3. Let $X_1,X_2,\dots$ be i.i.d. mean-zero random variables. For any $\alpha>0$ and $\beta\ge 0$, the following statements are equivalent:
(a) $E|X_1|^\alpha(\log^+|X_1|)^\beta<\infty$;
(b) $\lim_n X_n\log^{\beta/\alpha}n/n^{1/\alpha}=0$ a.s.;
(c) $\lim_n\sum_{k=1}^{m_n}a_{nk}X_k=0$ a.s. for every sequence $(m_n)$ and array $(a_{nk})$ such that $m_n\to\infty$ and $\sum_{k=1}^{m_n}|a_{nk}|=O\big(m_n^{-1/\alpha}\log^{\beta/\alpha}m_n\big)$.

Corollary 4. Let $X_1,X_2,\dots$ be i.i.d. mean-zero random variables, and let $\alpha>0$, $\beta>0$ and $\gamma\ge 0$ be such that $Ee^{\alpha|X_1|^\beta}|X_1|^\gamma<\infty$. Then $\sum_{k=1}^{m_n}a_{nk}X_k\to 0$ a.s. for every sequence $(m_n)$ and array $(a_{nk})$ such that $m_n\to\infty$ and $\sum_{k=1}^{m_n}|a_{nk}|=o\big(\log^{-1/\beta}m_n\big)$.

From Theorem 3 we can also obtain other consequences. If the common distribution of the i.i.d. random variables $X_1,X_2,\dots$ is exponential with parameter $\lambda$, then $Ee^{\alpha|X_1|}<\infty$ if and only if $\alpha<\lambda$. Hence, by Theorem 3, $\limsup_n|X_n|/\log n\le 1/\alpha$ a.s. if and only if $1/\alpha>1/\lambda$. Because this equivalence holds for all $\alpha>0$, $\limsup_n|X_n|/\log n$ must equal $1/\lambda$ a.s.
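This value can be checked by simulation. The sketch below is an illustration only; the rate $\lambda=2$ and the block sizes are our own choices. The maximum of $|X_n|/\log n$ over successive blocks decreases slowly towards $1/\lambda$.

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 2.0                                      # rate of the exponential distribution (our choice)
N = 10 ** 6
X = rng.exponential(scale=1.0 / lam, size=N)   # i.i.d. exponential(lambda) samples X_1, ..., X_N

n = np.arange(1, N + 1)
ratio = X / np.log(np.maximum(n, 2))           # |X_n| / log n  (log 1 avoided)

# The block maxima decrease slowly towards 1/lambda, in line with
# limsup_n |X_n| / log n = 1/lambda a.s.
for lo, hi in [(10 ** 3, 10 ** 4), (10 ** 4, 10 ** 5), (10 ** 5, 10 ** 6)]:
    block_max = ratio[lo - 1:hi - 1].max()
    print(f"max of X_n/log n over {lo} <= n < {hi}: {block_max:.3f}   (1/lambda = {1 / lam:.3f})")
```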
By the same method, applying Theorem 3 to the appropriate moment condition, we can derive similar conclusions for other distribution functions. For example, we have the following statement, written for well-known distribution functions. If $X_1,X_2,\dots$ are i.i.d. random variables, then almost surely
$$\limsup_n\frac{|X_n|}{\log n}=\begin{cases}\frac{1}{\alpha},&\text{if }X_1\text{ is Laplace with parameter }\alpha,\\ \frac{1}{\lambda},&\text{if }X_1\text{ is gamma with parameters }\alpha,\lambda,\\ 2,&\text{if }X_1\text{ is }\chi^2,\end{cases}$$
$$\limsup_n\frac{|X_n|}{\sqrt{2\log n}}=\begin{cases}\sigma,&\text{if }X_1\text{ is }N(0,\sigma^2),\\ \alpha,&\text{if }X_1\text{ is Rayleigh with parameter }\alpha,\end{cases}$$
$$\limsup_n\frac{|X_n|}{\log^{1/\alpha}n}=\frac{1}{\lambda},\quad\text{if }X_1\text{ is Weibull with parameters }\alpha,\lambda,$$
$$\lim_n\frac{X_n}{n^{1/\alpha}}=0\ \text{ if and only if }\ \begin{cases}\alpha<a,&\text{if }X_1\text{ is Pareto with parameters }a,b,\\ \alpha<a,&\text{if }X_1\text{ is Student's }t\text{ with parameter }a,\\ 0<\alpha<1,&\text{if }X_1\text{ is Cauchy.}\end{cases}$$

3. Proofs

Proof of Theorem 1. Without loss of generality suppose $EX_1^2=1$. Set $\varphi(t):=Ee^{tX_1}$ for all real $t$. Then $\varphi(t)$ is an entire function and $\varphi(0)=1$, $\varphi'(0)=0$, $\varphi''(0)=1$. Hence there exists $t_0>0$ such that $\varphi(t)\le 1+t^2$ for all $|t|\le t_0$. By (b), for any real $t$ there exists $K(t)$ such that
$$\frac{|a_{nk}|}{\sqrt{A_n}}\,|t|<t_0$$
for all $n$ and all $k$, except for at most $K(t)$ values of $k$ for each $n$. Hence, setting $S_{nm}=\sum_{k=1}^m a_{nk}X_k$, we have
$$Ee^{tS_{nm}/\sqrt{A_n}}=E\Big(e^{t\sum_{k\notin I_m(t)}a_{nk}X_k/\sqrt{A_n}}\Big)\prod_{k\in I_m(t)}\varphi\Big(\frac{a_{nk}}{\sqrt{A_n}}\,t\Big)\le\big(Ee^{|t||X_1|}\big)^{K(t)}\prod_{k\in I_m(t)}\Big(1+\frac{a_{nk}^2}{A_n}t^2\Big)\le\big(Ee^{|t||X_1|}\big)^{K(t)}e^{t^2},$$
where $I_m(t)=\{k\le m:\ |(a_{nk}/\sqrt{A_n})t|<t_0\}$; here $|a_{nk}|\le\sqrt{A_n}$, so each factor in the first expectation is at most $e^{|t||X_k|}$, and there are at most $K(t)$ such factors. Also we have
$$Ee^{t|S_{nm}|/\sqrt{A_n}}\le E\big(e^{tS_{nm}/\sqrt{A_n}}+e^{-tS_{nm}/\sqrt{A_n}}\big)\le 2e^{t^2}\big(Ee^{|t||X_1|}\big)^{K(t)}.$$
Hence, by Fatou's lemma and (a), denoting the last bound by $H(t)$, we have $Ee^{t|S_n|/\sqrt{A_n}}\le H(t)$. For any $\varepsilon>0$, by Markov's inequality and (c), the last inequality leads to
$$\sum_{n=1}^\infty P(|S_n|>\varepsilon)=\sum_{n=1}^\infty P\big(e^{t|S_n|/\sqrt{A_n}}>e^{t\varepsilon/\sqrt{A_n}}\big)\le\sum_{n=1}^\infty e^{-t\varepsilon/\sqrt{A_n}}Ee^{t|S_n|/\sqrt{A_n}}\le H(t)\sum_{n=1}^\infty e^{-t\varepsilon/\sqrt{A_n}}<\infty$$
if $t$ is chosen such that $t\varepsilon>a$. So we obtain that $S_n\to 0$ completely and therefore almost surely.

For the converse, take any sequence $(c_n)$ such that $c^2=\sum_{n=1}^\infty c_n^2<\infty$ and $c_1^2>0$, and define $a_{nk}:=c_{n-k}/\log n$ for $k\le n$ and $a_{nk}:=0$ for $k>n$. Then (a) holds for $(a_{nk})$ with $A_n\le c^2/\log^2 n$. Condition (b) also holds because, as $c_n\to 0$, for any $\varepsilon>0$ there exists $K(\varepsilon)$ such that $c_n^2/c_1^2<\varepsilon$ for $n>K(\varepsilon)$. Consequently
$$\frac{a_{nk}^2}{A_n}=\frac{c_{n-k}^2}{\sum_{i=1}^n c_{n-i}^2}<\frac{c_{n-k}^2}{c_1^2}<\varepsilon$$
if $n-k>K(\varepsilon)$, i.e., if $k<n-K(\varepsilon)$. Hence there are at most $K(\varepsilon)$ values of $k$ for which this chain of inequalities fails, since $a_{nk}=0$ for $k>n$. Condition (c) holds for any $a>c$, as
$$\sum_{n=1}^\infty e^{-a/\sqrt{A_n}}\le\sum_{n=1}^\infty e^{-(a/c)\log n}=\sum_{n=1}^\infty n^{-a/c}<\infty.$$
To this array $(a_{nk})$ Theorem 2 of [2] is applicable. Hence, assuming $S_n\to 0$ a.s., we obtain that $Ee^{t|X_1|}<\infty$ for all $t>0$.
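The conditions of Theorem 1 for this Chow–Lai type array are easy to inspect numerically. The sketch below is only an illustration; the particular choice $c_j=1/(j+1)$, indexed from $0$ for convenience, is ours. It forms $a_{nk}=c_{n-k}/\log n$ and compares $A_n$ with the bound $c^2/\log^2 n$ used above.

```python
import numpy as np

# Illustrative choice (ours): c_j = 1/(j+1) for j >= 0, so sum_j c_j^2 = pi^2/6 =: csq.
M = 5000                         # largest n used in the check
j = np.arange(M)
c = 1.0 / (j + 1.0)              # c_0, c_1, ..., c_{M-1}
csq = np.pi ** 2 / 6             # value of the full sum of c_j^2

for n in (10, 100, 1000, M):
    # a_{nk} = c_{n-k} / log n for k <= n, a_{nk} = 0 for k > n  (array from the converse)
    a = c[:n][::-1] / np.log(n)  # the nonzero entries a_{n1}, ..., a_{nn}
    A_n = np.sum(a ** 2)         # condition (a): A_n is finite
    print(f"n = {n:5d}   A_n = {A_n:.5f}   bound c^2/log^2 n = {csq / np.log(n) ** 2:.5f}")

# Condition (b): a_{nk}^2 / A_n is small except for the last few k (those with n - k small),
# uniformly in n.  Condition (c): A_n <= c^2/log^2 n gives exp(-a/sqrt(A_n)) <= n^{-a/c},
# which is summable for every a > c.
```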
Proof of Theorem 2. Since $c_n\downarrow$, we have, setting $S_n=\sum_{k=1}^{m_n}a_{nk}X_k$,
$$|S_n|\le\sum_{k=1}^{m_n}|a_{nk}|\max_{k\le m_n}|X_k|\le c_{m_n}\max_{k\le m_n}|X_k|\le\max\Big\{c_{m_n}\max_{k\le N}|X_k|,\ \max_{N<k\le m_n}c_k|X_k|\Big\}$$
for any $N$. Since $c_nX_n\to 0$ a.s., for any $\varepsilon>0$ and almost all $\omega$ there exists $N=N(\omega)$ such that $c_n|X_n|<\varepsilon$ for all $n>N(\omega)$. Hence for almost all $\omega$
$$|S_n|\le\max\Big\{c_{m_n}\max_{k\le N(\omega)}|X_k|,\ \varepsilon\Big\}.$$
Because $c_n\to 0$ and $m_n\to\infty$, we obtain $\limsup_n|S_n|\le\varepsilon$ a.s., which leads to the conclusion since $\varepsilon$ can be chosen arbitrarily small.

Proof of Corollary 1. By Theorem 5 in [2], (a) is equivalent to (b). Statement (c) implies (b), because both (c) and (26) of Theorem 5 in [2] are applicable to $a_{nk}:=(\log_k n)^{-1/\alpha}/(n-k)^2$ for $k<n$ and $a_{nk}:=0$ otherwise, and to $m_n:=n$; hence (b) holds by Theorem 5 in [2]. Conversely, suppose $(a_{nk})$ is an array and $(m_n)$ a sequence satisfying the conditions in (c). Then there exists a constant $K$ such that $\sum_{i=1}^{m_n}|a_{ni}|\le K(\log_k m_n)^{-1/\alpha}$ for all $n$. Define $c_n=K(\log_k n)^{-1/\alpha}$ for all $n$. Then $c_n$ satisfies all conditions of Theorem 2. Hence, by Theorem 2, (b) implies (c).

Proof of Corollary 2. Define $a_{nk}=n^{-1/\alpha}/(n-k)^2$ for $k<n$ and $a_{nk}=0$ otherwise, and $m_n=n$. By Theorem 1 in [2] we see that (c) implies (b) with these $(a_{nk})$, $(m_n)$, and that (a) is equivalent to (b). Lastly, arguing similarly as in the proof of Corollary 1, we can show that (b) implies (c).

Lemma 1. Let $f:\mathbb{R}^+\to\mathbb{R}^+$ be a function such that $f(x)$ is monotone increasing on $[b,\infty)$ for some $b\ge 0$ and is bounded on $[0,b]$ if $b>0$. Define $f^{-1}$ as the inverse function of $f$ restricted to $[b,\infty)$, and as any positive function on $[0,f(b))$ if $b>0$. Then $Ef(|X_1|)<\infty$ if and only if $\limsup_n|X_n|/f^{-1}(an)\le 1$ a.s. for some, and therefore for all, real $a>0$.

Proof. To show the "only if" part of the conclusion, suppose $Ef(|X_1|)<\infty$. Fix any $a>0$ and let $N$ be any positive integer such that $aN>f(b)$. Since $f^{-1}$ is monotone increasing on $[f(b),\infty)$, we have
$$\infty>Ef(|X_1|)\ge\sum_{i=N}^\infty ai\,P\big(f^{-1}(ai)<|X_1|\le f^{-1}(a(i+1))\big)=a\Big(N\,P\big(|X_1|>f^{-1}(aN)\big)+\sum_{i=N+1}^\infty P\big(|X_1|>f^{-1}(ai)\big)\Big).$$
Consequently, since the $X_n$ are i.i.d.,
$$\sum_{n=1}^\infty P\Big(\frac{|X_n|}{f^{-1}(an)}>1\Big)<\infty.$$
By the Borel–Cantelli lemma [3], $\limsup_n|X_n|/f^{-1}(an)\le 1$ a.s.

Conversely, fix any $a>0$ and let $N$ be any positive integer such that $aN>f(b)$. Then we have
$$Ef(|X_1|)\le Ef(|X_1|)1_{\{|X_1|\le f^{-1}(aN)\}}+Ef(|X_1|)1_{\{|X_1|>f^{-1}(aN)\}}\le\max_{0\le x\le f^{-1}(aN)}f(x)+\sum_{i=N}^\infty a(i+1)\,P\big(f^{-1}(ai)<|X_1|\le f^{-1}(a(i+1))\big).$$
The last sum, as shown before, is
$$\le a+a\Big(N\,P\big(|X_1|>f^{-1}(aN)\big)+\sum_{i=N+1}^\infty P\big(|X_i|>f^{-1}(ai)\big)\Big),$$
where the last sum is finite by the Borel–Cantelli lemma, since the $X_n$ are independent and $\limsup_n|X_n|/f^{-1}(an)\le 1$ a.s. Hence we obtain the finiteness of $Ef(|X_1|)$, since $f$ is bounded on $[0,b]$.
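Lemma 1 is easy to try out numerically. In the sketch below, the choices $f(x)=x^\alpha$ (so $f^{-1}(an)=(an)^{1/\alpha}$ with $a=1$) and Pareto samples with tail index $p$ are ours and serve only as an illustration: the tail maximum of $|X_n|/n^{1/\alpha}$ stays well below $1$ when $E|X_1|^\alpha<\infty$ (i.e., $\alpha<p$) and well above $1$ when the moment is infinite.

```python
import numpy as np

rng = np.random.default_rng(2)

# Lemma 1 with f(x) = x^alpha, f^{-1}(an) = (an)^{1/alpha}, a = 1 (illustrative choices).
# Pareto samples with tail index p: P(X > x) = x^{-p} for x >= 1, so E|X_1|^alpha < infinity
# exactly when alpha < p, and the lemma then predicts limsup_n |X_n| / n^{1/alpha} <= 1.
N = 10 ** 6
p = 2.0
U = 1.0 - rng.uniform(size=N)                 # uniform on (0, 1], avoids division by zero
X = U ** (-1.0 / p)                           # Pareto(p) samples
n = np.arange(1, N + 1)

for alpha in (1.5, 3.0):                      # alpha < p (finite moment) vs. alpha > p (infinite)
    ratio = X / n ** (1.0 / alpha)
    tail_max = ratio[N // 10:].max()          # proxy for the limsup over the tail
    moment = "finite" if alpha < p else "infinite"
    print(f"alpha = {alpha}: E|X_1|^alpha is {moment}, tail max of |X_n|/n^(1/alpha) = {tail_max:.3f}")
```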
Proof of Theorem 3. To show (a), set $g(x)=x^\alpha(\log^+x)^\beta$ for $x\ge 0$. Then $g(x)$ is monotone increasing and $g^{-1}$ exists on $[1,\infty)$. For $x\in[0,1)$ define $g^{-1}(x)=1$. By Lemma 1, $E|X_1|^\alpha(\log^+|X_1|)^\beta<\infty$ if and only if $\limsup_n|X_n|/g^{-1}(an)\le 1$ a.s. for all $a>0$. Because the exact form of $g^{-1}$ is unknown, we estimate its behaviour at $+\infty$ by the following function. Put
$$h(x)=\Big(\frac{\alpha^\beta x}{\ln^\beta x}\Big)^{1/\alpha}\quad\text{for }x\ge 2,\qquad h(x)=1\quad\text{for }0\le x<2.$$
We shall show that $h(an)/g^{-1}(an)\to 1$ for all $a>0$. Note that, for large enough $n$,
$$\frac{g(h(an))}{g(g^{-1}(an))}=\Big(\frac{\beta\log\alpha}{\log(an)}+1-\frac{\log\log^\beta(an)}{\log(an)}\Big)^\beta\longrightarrow 1\quad\text{as }n\to\infty.$$
We shall prove a more general statement: for every two sequences $0<a_n\to\infty$, $0<b_n\to\infty$, if $g(a_n)/g(b_n)\to 1$ then $a_n/b_n\to 1$. Suppose there are such $(a_n)$ and $(b_n)$ but, on the contrary, $\limsup_n a_n/b_n=c>1$. Then there exist $n_i$ such that $a_{n_i}/b_{n_i}\to c$. Hence $a_{n_i}^\alpha/b_{n_i}^\alpha\to c^\alpha$ and $a_{n_i}>b_{n_i}$ for large enough $n_i$. So $\limsup_i\log a_{n_i}/\log b_{n_i}\ge 1$. Consequently $\limsup_i a_{n_i}^\alpha\log^\beta a_{n_i}/\big(b_{n_i}^\alpha\log^\beta b_{n_i}\big)\ge c^\alpha>1$, which contradicts the assumption $g(a_n)/g(b_n)\to 1$. So $\limsup_n a_n/b_n\le 1$. Similarly we obtain $\liminf_n a_n/b_n\ge 1$, hence $\lim_n a_n/b_n=1$.

So we have, by Lemma 1, $E|X_1|^\alpha(\log^+|X_1|)^\beta<\infty$ if and only if, for all $a>0$,
$$\limsup_n\frac{|X_n|}{h(an)}=\limsup_n\frac{|X_n|}{g^{-1}(an)}\cdot\frac{g^{-1}(an)}{h(an)}\le 1\quad\text{a.s.},$$
which is equivalent to
$$\limsup_n\frac{|X_n|\log^{\beta/\alpha}n}{n^{1/\alpha}}\le\alpha^{\beta/\alpha}a^{1/\alpha}\quad\text{a.s. for all }a>0.$$
Since $a>0$ can be chosen arbitrarily small, the last inequality is equivalent to $\limsup_n|X_n|\log^{\beta/\alpha}n/n^{1/\alpha}=0$ a.s.

For proving (b), set $g(x)=e^{\alpha x^\beta}x^\gamma$ for $x\ge 0$. Also define
$$h(x)=\big(\log(\alpha^{\gamma/\beta}x)-\log(\log x)^{\gamma/\beta}\big)^{1/\beta}\alpha^{-1/\beta}\quad\text{for }x\ge d,\qquad h(x)=1\quad\text{for }0\le x<d,$$
where $d$ is chosen large enough such that $h(x)$ is well defined. Then it is easy to show that $g(h(n))/g(g^{-1}(n))\to 1$, where $g^{-1}$ is the inverse function of $g$. As before, in order to show $h(n)/g^{-1}(n)\to 1$, let us prove that for every pair of sequences $a_n\to\infty$, $b_n\to\infty$, if $g(a_n)/g(b_n)\to 1$ then $a_n/b_n\to 1$. Suppose we have such $(a_n)$ and $(b_n)$ but, on the contrary, $\limsup_n a_n/b_n=c>1$. Then there exist subsequences $a_{n_i}$, $b_{n_i}$ such that $a_{n_i}/b_{n_i}\to c$. Therefore $(a_{n_i}^\beta-b_{n_i}^\beta)/b_{n_i}^\beta\to c^\beta-1>0$ and $(a_{n_i}^\gamma-b_{n_i}^\gamma)/b_{n_i}^\gamma\to c^\gamma-1>0$ if $\gamma>0$. Consequently, since $b_{n_i}^\beta\to\infty$,
$$\frac{g(a_{n_i})}{g(b_{n_i})}=e^{\alpha\frac{a_{n_i}^\beta-b_{n_i}^\beta}{b_{n_i}^\beta}b_{n_i}^\beta}\,\frac{a_{n_i}^\gamma-b_{n_i}^\gamma}{b_{n_i}^\gamma}+e^{\alpha\frac{a_{n_i}^\beta-b_{n_i}^\beta}{b_{n_i}^\beta}b_{n_i}^\beta}\longrightarrow\infty,$$
which contradicts the assumption. So $\limsup_n a_n/b_n\le 1$. Similarly we have $\liminf_n a_n/b_n\ge 1$. Hence we obtain $\lim_n a_n/b_n=1$. So we have, since $h(n)/\log^{1/\beta}n\to 1/\alpha^{1/\beta}$, that $\limsup_n|X_n|/g^{-1}(n)\le 1$ a.s. if and only if $\limsup_n|X_n|/h(n)\le 1$ a.s., if and only if $\limsup_n|X_n|/\log^{1/\beta}n\le 1/\alpha^{1/\beta}$ a.s.

Proof of Corollary 3. Defining $c_n=Kn^{-1/\alpha}\log^{\beta/\alpha}n$ and acting similarly as in the proof of Corollary 1, by Theorem 2 we obtain that (b) implies (c). Conversely, (c) implies (b) because, by (c), $\lim_n n^{-1/\alpha}\log^{\beta/\alpha}n\sum_{k=1}^n(n-k+1)^{-2}X_k=0$ a.s. Then by Lemma 3 in [2] we obtain (b), arguing similarly as in the proof of Theorem 1 in [2].

Proof of Corollary 4. Set $b_{m_n}:=\sum_{k=1}^{m_n}|a_{nk}|$ and define $c_k$, $k=1,2,\dots$, such that
$$c_k\log^{1/\beta}k=\max_{\{n:\ m_n\ge k\}}\big\{b_{m_n}\log^{1/\beta}m_n\big\}.$$
We shall show that $c_n$ satisfies all conditions of Theorem 2. The sequence $c_n\log^{1/\beta}n$ is monotone non-increasing and tends to zero, since $b_{m_n}\log^{1/\beta}m_n\to 0$ by the assumption. Hence $c_n=(c_n\log^{1/\beta}n)(\log^{-1/\beta}n)$, as the product of two monotone non-increasing sequences tending to zero, has the same properties. By the definition of $c_n$ we have $b_{m_n}\le c_{m_n}$ for all $n$. By Theorem 3,
$$\limsup_n\frac{|X_n|}{\log^{1/\beta}n}\le\frac{1}{\alpha^{1/\beta}}\quad\text{a.s.}$$
Hence almost surely
$$\limsup_n|X_nc_n|=\limsup_n\frac{|X_n|}{\log^{1/\beta}n}\cdot c_n\log^{1/\beta}n\le\frac{1}{\alpha^{1/\beta}}\lim_n c_n\log^{1/\beta}n=0.$$
So $c_n$ satisfies all conditions of Theorem 2, and by Theorem 2 we obtain the conclusion.

References

1. Y. S. Chow, Some convergence theorems for independent random variables, Ann. Math. Statist. 37 (1966) 1482–1493.
2. Y. S. Chow and T. L. Lai, Limiting behavior of weighted sums of independent random variables, Ann. Probab. 1 (1973) 810–824.
3. Y. S. Chow and H. Teicher, Probability Theory, Springer, New York, Heidelberg, Berlin, 1978.
4. D. L. Hanson and L. H. Koopman, On the convergence rate of the law of large numbers for linear combinations of independent random variables, Ann. Math. Statist. 36 (1965) 559–564.
5. W. E. Pruitt, Summability of independent random variables, J. Math. Mech. 15 (1966) 769–776.
6. W. F. Stout, Some results on the complete and almost sure convergence of linear combinations of independent random variables and martingale differences, Ann. Math. Statist. 39 (1968) 1549–1562.