
On the complete convergence for sequences of coordinatewise negatively associated random vectors in Hilbert spaces

Nguyen Van Huan* (Department of Mathematics and Applications, Saigon University, Ho Chi Minh City, Vietnam)
Nguyen Van Quang (Department of Mathematics, Vinh University, Nghe An Province, Vietnam)
Nguyen Tran Thuan (Department of Mathematics, Vinh University, Nghe An Province, Vietnam)

Abstract. We develop the Baum-Katz theorem for sequences of coordinatewise negatively associated random vectors in real separable Hilbert spaces. We also show that the concept of coordinatewise negative association is more general than the concept of negative association of Ko et al. (2009) [9]. Moreover, some related results still hold for this concept. Illustrative examples are provided.

Keywords: Coordinatewise negatively associated; Real separable Hilbert space; Coordinatewise weakly bounded
2000 MSC: 60F15, 60B11, 60B12

*Corresponding author: Department of Mathematics and Applications, Saigon University, 273 An Duong Vuong Street, Ward 3, District 5, Ho Chi Minh City, Vietnam; Tel.: +84 917918008 (Mobile), +84 839381913 (Office); Fax: +84 838305568.
Email addresses: vanhuandhdt@yahoo.com (Nguyen Van Huan), nvquang@hotmail.com (Nguyen Van Quang), tranthuandhv@gmail.com (Nguyen Tran Thuan)

Preprint submitted to Elsevier, July 25, 2013

1. Introduction

Hsu and Robbins [7] introduced the concept of complete convergence and proved that the sequence of arithmetic means of independent, identically distributed (i.i.d.) random variables converges completely to the expected value of the variables, provided their variance is finite. The necessity was proved by Erdős [4, 5]. The Hsu-Robbins-Erdős result is a fundamental theorem in probability theory and was later generalized and extended in a process that led to the now classical paper by Baum and Katz [3].

Theorem 1.1 ([3]). Let $r, \alpha$ be real numbers ($r > 1$, $\alpha > 1/2$, $\alpha r > 1$), and let $\{X_n, n \ge 1\}$ be a sequence of i.i.d.
random variables with zero means. Then the following three statements are equivalent:

(a) $E|X_1|^r < \infty$;

(b) $\displaystyle\sum_{n=1}^{\infty} n^{\alpha r - 2} P\Big(\Big|\sum_{k=1}^{n} X_k\Big| > \varepsilon n^{\alpha}\Big) < \infty$ for every $\varepsilon > 0$;

(c) $\displaystyle\sum_{n=1}^{\infty} n^{\alpha r - 2} P\Big(\sup_{k \ge n} \frac{1}{k^{\alpha}}\Big|\sum_{l=1}^{k} X_l\Big| > \varepsilon\Big) < \infty$ for every $\varepsilon > 0$.

This result has been extensively studied for many classes of dependent random variables. For negatively associated random variables, we refer to Shao [16], Kuczmaszewska [10], Baek et al. [2], Kuczmaszewska and Lagodowski [11], and other authors. In this paper, we discuss the concept of negative association for random vectors in a real separable Hilbert space and develop the Baum-Katz theorem for sequences of coordinatewise negatively associated random vectors, which, to our knowledge, has not yet been studied in the literature.

The concept of negative association for random variables was introduced by Alam and Saxena [1] and carefully studied by Joag-Dev and Proschan [8]. A finite family $\{Y_i, 1 \le i \le n\}$ of random variables is said to be negatively associated (NA) if for any disjoint subsets $A, B$ of $\{1, 2, \ldots, n\}$ and any real coordinatewise nondecreasing functions $f$ on $\mathbb{R}^{|A|}$, $g$ on $\mathbb{R}^{|B|}$,
$$\mathrm{Cov}\big(f(Y_i, i \in A),\ g(Y_j, j \in B)\big) \le 0$$
whenever the covariance exists, where $|A|$ denotes the cardinality of $A$. An infinite family of random variables is NA if every finite subfamily is NA.

As in Ko et al. [9], a finite family $\{X_i, 1 \le i \le n\}$ of $\mathbb{R}^d$-valued random vectors is said to be NA if for any disjoint subsets $A, B$ of $\{1, 2, \ldots, n\}$ and any real coordinatewise nondecreasing functions $f$ on $\mathbb{R}^{|A|d}$, $g$ on $\mathbb{R}^{|B|d}$, $\mathrm{Cov}(f(X_i, i \in A), g(X_j, j \in B)) \le 0$ whenever the covariance exists. An infinite family of $\mathbb{R}^d$-valued random vectors is NA if every finite subfamily is NA.

Let $H$ be a real separable Hilbert space with norm $\|\cdot\|$ generated by an inner product $\langle\cdot,\cdot\rangle$, let $\{e_j, j \ge 1\}$ be an orthonormal basis in $H$, let $X$ be an $H$-valued random vector, and write $X^{(j)}$ for $\langle X, e_j\rangle$. Ko et al. [9] introduced the concept of an $H$-valued NA sequence as follows.

Definition 1.2 ([9]).
A sequence $\{X_n, n \ge 1\}$ of $H$-valued random vectors is said to be NA if for any $d \ge 1$, the sequence $\{(X_n^{(1)}, X_n^{(2)}, \ldots, X_n^{(d)}), n \ge 1\}$ of $\mathbb{R}^d$-valued random vectors is NA.

In the following definition, we present another concept of negative association for $H$-valued random vectors which is more general than that of Ko et al. [9].

Definition 1.3. A sequence $\{X_n, n \ge 1\}$ of $H$-valued random vectors is said to be coordinatewise negatively associated (CNA) if for each $j \ge 1$, the sequence $\{X_n^{(j)}, n \ge 1\}$ of random variables is NA.

Obviously, if a sequence of $H$-valued random vectors is NA, then it is CNA. However, the converse is not true in general. In the following example, we construct an $\mathbb{R}^d$-valued CNA sequence which is not NA.

Example 1.4. Let $d$ be an integer ($d \ge 2$), and let $\{Y_n, n = 1, 2, \ldots, d\}$ be a sequence of random variables which is not NA. We consider a sequence $X_n = (X_n^{(1)}, X_n^{(2)}, \ldots, X_n^{(d)})$, $n = 1, 2, \ldots, d$, of $\mathbb{R}^d$-valued random vectors as follows: for each $n = 1, 2, \ldots, d$, $X_n^{(n)} = Y_n$, and for each $j = 1, 2, \ldots, d$, $\{X_n^{(j)}, n = 1, 2, \ldots, d\}$ is a sequence of independent random variables. Then the sequence $\{X_n, n = 1, 2, \ldots, d\}$ is CNA, but it is not NA.

Ko et al. [9] obtained the almost sure convergence for sequences of $H$-valued NA random vectors. The key tool for proving their result is the maximal inequality provided by the following lemma.

Lemma 1.5 ([9], Lemma 3.3). Let $\{X_n, n \ge 1\}$ be a sequence of $H$-valued NA random vectors with $EX_n = 0$ and $E\|X_n\|^2 < \infty$, $n \ge 1$. Then
$$E\Big(\max_{1 \le k \le n}\Big\|\sum_{i=1}^{k} X_i\Big\|^2\Big) \le \sum_{i=1}^{n} E\|X_i\|^2, \quad n \ge 1. \tag{1.1}$$

Let us note that there is a misprint in Lemma 3.3 of Ko et al. [9], as the following example shows.

Example 1.6. Let $\{X, X_n, n \ge 1\}$ be a sequence of i.i.d. random variables with zero means and finite second moments. Then
$$\begin{aligned}
E\max_{1\le k\le 2}\Big|\sum_{i=1}^{k} X_i\Big|^2 &= E\big(\max\{|X_1|,\ |X_1+X_2|\}\big)^2 \\
&= E\,\frac{X_1^2 + (X_1+X_2)^2 + \big|X_1^2 - (X_1+X_2)^2\big|}{2} \\
&= EX_1^2 + \frac{1}{2}EX_2^2 + \frac{1}{2}E\big|X_2^2 + 2X_1X_2\big| \qquad (\text{since } E(X_1X_2) = 0) \\
&\ge EX_1^2 + \frac{1}{2}EX_2^2 + \frac{1}{2}\big|EX_2^2 + 2E(X_1X_2)\big| = EX_1^2 + EX_2^2.
\end{aligned}$$
It is not hard to check that if $X$ is a symmetric random variable taking values in $\{-1, 1\}$, then $X_2^2 + 2X_1X_2$ takes the values $3$ and $-1$ with probability $1/2$ each, so that $E|X_2^2 + 2X_1X_2| = 2 > 1 = EX_2^2 + 2E(X_1X_2)$ and the inequality above is strict: $E\max_{1\le k\le 2}|\sum_{i=1}^{k} X_i|^2 = 5/2 > 2 = EX_1^2 + EX_2^2$. So (1.1) fails.

The following lemma is an improvement of Lemma 1.5; it was proved by Shao [16] in the case of NA random variables.

Lemma 1.7. Let $\{X_n, n \ge 1\}$ be a sequence of $H$-valued CNA random vectors with $EX_n = 0$ and $E\|X_n\|^2 < \infty$, $n \ge 1$. Then
$$E\Big(\max_{1 \le k \le n}\Big\|\sum_{i=1}^{k} X_i\Big\|^2\Big) \le 2\sum_{i=1}^{n} E\|X_i\|^2, \quad n \ge 1.$$

Remark 1.8. In view of Lemma 1.7 and the proof of Theorem 3.4 of Ko et al. [9], it is interesting to observe that Theorem 3.4 in [9] (see also Miao [14, Theorems 3.2 and 3.3] and Thanh [17, Theorems 2.2 and 3.1]) holds not only for $H$-valued NA sequences but also for $H$-valued CNA sequences.

Let $\{X, X_n, n \ge 1\}$ be a sequence of $H$-valued random vectors. We consider the following inequality:
$$C_1 P\big(|X^{(j)}| > t\big) \le \frac{1}{n}\sum_{k=1}^{n} P\big(|X_k^{(j)}| > t\big) \le C_2 P\big(|X^{(j)}| > t\big). \tag{1.2}$$
If there exists a positive constant $C_1$ (respectively, $C_2$) such that the left-hand (respectively, right-hand) inequality in (1.2) holds for all $j \ge 1$, $n \ge 1$ and $t \ge 0$, then the sequence $\{X_n, n \ge 1\}$ is said to be coordinatewise weakly lower (respectively, upper) bounded by $X$. The sequence $\{X_n, n \ge 1\}$ is said to be coordinatewise weakly bounded by $X$ if it is both coordinatewise weakly lower and upper bounded by $X$. Note that (1.2) holds automatically with $X = X_1$ and $C_1 = C_2 = 1$ if $\{X_n, n \ge 1\}$ is a sequence of identically distributed random vectors.

In the rest of the paper, the symbol $C$ denotes a generic positive constant which is not necessarily the same in each appearance.

2. Main results and discussions

Theorem 2.1. Let $r, \alpha$ be positive real numbers ($1 \le r < 2$, $\alpha r > 1$), and let $\{X_n, n \ge 1\}$ be a sequence of $H$-valued CNA random vectors with zero means. Suppose that $\{X_n, n \ge 1\}$ is coordinatewise weakly upper bounded by a random vector $X$. If
$$\sum_{j=1}^{\infty} E|X^{(j)}|^r < \infty, \tag{2.1}$$
then
$$\sum_{n=1}^{\infty} n^{\alpha r - 2} P\Big(\max_{1 \le k \le n}\Big\|\sum_{l=1}^{k} X_l\Big\| > \varepsilon n^{\alpha}\Big) < \infty \quad \text{for every } \varepsilon > 0. \tag{2.2}$$

Remark 2.2.
From (2.2) and Lemma 4 of Lai [12] we have
$$\sum_{n=1}^{\infty} n^{\alpha r - 2} P\Big(\sup_{k \ge n} \frac{1}{k^{\alpha}}\Big\|\sum_{l=1}^{k} X_l\Big\| > \varepsilon\Big) < \infty \quad \text{for every } \varepsilon > 0.$$
Then, by the Kronecker lemma,
$$P\Big(\sup_{k \ge n} \frac{1}{k^{\alpha}}\Big\|\sum_{l=1}^{k} X_l\Big\| > \varepsilon\Big) = o\big(n^{1-\alpha r}\big) \quad \text{for every } \varepsilon > 0.$$
Therefore, the conclusion (2.2) describes the rate of convergence in the strong law of large numbers.

Remark 2.3. Theorem 2.1 still holds if the condition that $\{X_n, n \ge 1\}$ is coordinatewise weakly upper bounded by $X$ is replaced by the following weaker condition:
$$\frac{1}{n}\sum_{k=1}^{n}\sum_{j=1}^{\infty} P\big(|X_k^{(j)}| > t\big) \le C\sum_{j=1}^{\infty} P\big(|X^{(j)}| > t\big), \quad n \ge 1,\ t \ge 0.$$

Remark 2.4. In the case $0 < r < 1$, the implication (2.1) $\Rightarrow$ (2.2) of Theorem 2.1 holds without the coordinatewise negative association and zero-mean conditions on the random vectors.

Under the assumptions of Theorem 2.1, (2.1) implies (2.2). A natural question is whether or not the converse is true. A negative answer to this question is given in the following example.

Example 2.5. We consider the space $\ell^2$ of square summable real sequences $x = \{x_k, k \ge 1\}$ with norm $\|x\| = \big(\sum_{k=1}^{\infty} x_k^2\big)^{1/2}$. Let $\{X_n, n \ge 1\}$ be a sequence of $\ell^2$-valued i.i.d. random vectors such that $P\big(X_n^{(j)} = \pm j^{-1/r}\big) = 1/2$ for all $n \ge 1$ and $j \ge 1$. It is well known that the space $\ell^2$ is of type 2, and so it is of type $p$ for all $r < p \le 2$ (for details see Pisier [15]). Then, for every $\varepsilon > 0$,
$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\max_{1\le k\le n}\Big\|\sum_{l=1}^{k} X_l\Big\| > \varepsilon n^{\alpha}\Big)
&\le C\sum_{n=1}^{\infty} n^{\alpha(r-p)-2}\, E\Big(\max_{1\le k\le n}\Big\|\sum_{l=1}^{k} X_l\Big\|^p\Big) \\
&\le C\sum_{n=1}^{\infty} n^{\alpha(r-p)-2}\sum_{k=1}^{n} E\|X_k\|^p
= C\sum_{n=1}^{\infty} n^{\alpha(r-p)-1}\, E\|X_1\|^p \\
&= C\sum_{n=1}^{\infty} n^{\alpha(r-p)-1}\Big(\sum_{j=1}^{\infty} \frac{1}{j^{2/r}}\Big)^{p/2} < \infty,
\end{aligned}$$
and therefore (2.2) holds. However, (2.1) fails since
$$\sum_{j=1}^{\infty} E\big|X_1^{(j)}\big|^r = \sum_{j=1}^{\infty} \frac{1}{j} = \infty.$$

The following theorem provides sufficient conditions for (2.1) to hold.

Theorem 2.6. Let $r, \alpha$ be positive real numbers such that $\alpha r \ge 1$, and let $\{X_n, n \ge 1\}$ be a sequence of $H$-valued CNA random vectors with zero means. Suppose that $\{X_n, n \ge 1\}$ is coordinatewise weakly bounded by a random vector $X$ with
$$\sum_{j=1}^{\infty} E\big(|X^{(j)}|^r I(|X^{(j)}| \le 1)\big) < \infty. \tag{2.3}$$
If
$$\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\max_{1\le k\le n}\Big|\sum_{l=1}^{k} X_l^{(j)}\Big| > \varepsilon n^{\alpha}\Big) < \infty \quad \text{for every } \varepsilon > 0, \tag{2.4}$$
then (2.1) holds.
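As a quick numerical illustration of the dichotomy in Example 2.5 (a sketch of ours, not part of the paper's argument; the function names are invented for this illustration): with $P(X_n^{(j)} = \pm j^{-1/r}) = 1/2$, the coordinatewise series in (2.1) is the harmonic series, which diverges, while $\|X_1\|$ is deterministic and $E\|X_1\|^p = \big(\sum_j j^{-2/r}\big)^{p/2}$ is finite for $r < 2$.

```python
def coordinate_moment_partial_sum(n_terms: int) -> float:
    # Partial sum of the series in (2.1): sum_j E|X^{(j)}|^r.
    # Here E|X^{(j)}|^r = (j**(-1/r))**r = 1/j, so the exponent r cancels
    # and the series is the divergent harmonic series.
    return sum(1.0 / j for j in range(1, n_terms + 1))

def norm_moment_partial(r: float, p: float, n_terms: int) -> float:
    # ||X_1||^2 = sum_j j**(-2/r) is deterministic for these vectors, so
    # E||X_1||^p = (sum_j j**(-2/r))**(p/2); for r < 2 the inner sum converges.
    return sum(j ** (-2.0 / r) for j in range(1, n_terms + 1)) ** (p / 2.0)
```

For $r = 1.5$, $p = 2$, the first partial sums keep growing like $\log N$, while the second stabilize quickly; this is exactly the configuration in which (2.2) holds although (2.1) fails.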
Obviously, if the condition (2.3) is not satisfied, then the conclusion (2.1) fails. The following example shows that, in Theorem 2.6, we cannot remove (2.3) or even replace it by the weaker condition $E\big(|X^{(j)}|^r I(|X^{(j)}| \le 1)\big) = o(1)$ as $j \to \infty$.

Example 2.7. Let $p, r$ be positive real numbers such that $r < p \le 2$. We consider the sequence $\{X_n, n \ge 1\}$ of Example 2.5. Then, for every $\varepsilon > 0$, we have
$$\begin{aligned}
\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\max_{1\le k\le n}\Big|\sum_{l=1}^{k} X_l^{(j)}\Big| > \varepsilon n^{\alpha}\Big)
&\le C\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-p)-2}\, E\Big(\max_{1\le k\le n}\Big|\sum_{l=1}^{k} X_l^{(j)}\Big|^p\Big) \\
&\le C\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-p)-2}\sum_{k=1}^{n} E\big|X_k^{(j)}\big|^p \qquad (\text{since } \mathbb{R} \text{ is of type } p) \\
&= C\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-p)-1}\, E\big|X_1^{(j)}\big|^p
= C\sum_{n=1}^{\infty} n^{\alpha(r-p)-1}\sum_{j=1}^{\infty}\frac{1}{j^{p/r}} < \infty,
\end{aligned}$$
so that (2.4) holds. We also see that $E\big(|X^{(j)}|^r I(|X^{(j)}| \le 1)\big) = 1/j = o(1)$. However, the conclusion (2.1) fails.

Remark 2.8. Let $r, s$ be positive real numbers such that $s < r < 2$, and let $\{X_n, n \ge 1\}$ be a sequence of $\ell^2$-valued i.i.d. random vectors with $P\big(X_n^{(j)} = \pm j^{-1/s}\big) = 1/2$, $n \ge 1$. Then, using the same arguments as in Example 2.7, we can show that the conditions (2.3) and (2.4) are satisfied. Theorem 2.6 then ensures that (2.1) holds.

The example below shows that Theorem 2.6 can fail if the series in (2.4) diverges.

Example 2.9. Let $r > 2$, and let $\{Y_n, n \ge 1\}$ be a sequence of i.i.d. symmetric random variables such that $E|Y_1|^{r/2} = \infty$ and $|Y_n| < \infty$ for all $n \ge 1$. For each $n \ge 1$, set
$$X_n^{(j)} = Y_n\, I\big(j-1 \le |Y_n|^{r/2} < j\big),\ j \ge 1; \qquad X_n = \sum_{j=1}^{\infty} X_n^{(j)} e_j,$$
where $\{e_j, j \ge 1\}$ is an orthonormal basis in $\ell^2$. Then, for each $j \ge 1$, $\{X_n^{(j)}, n \ge 1\}$ is a sequence of i.i.d. symmetric random variables. Moreover, since
$$\sum_{j=1}^{\infty}\big(X_n^{(j)}\big)^2 = Y_n^2 < \infty, \qquad EX_n = \sum_{j=1}^{\infty} E X_n^{(j)} e_j = 0, \quad n \ge 1,$$
$\{X_n, n \ge 1\}$ is a sequence of $\ell^2$-valued CNA random vectors with zero means which is coordinatewise weakly bounded by $X_1$. Now only the coordinates $j = 1, 2$ can satisfy $0 < |X_1^{(j)}| \le 1$, so
$$\sum_{j=1}^{\infty} E\big(|X_1^{(j)}|^r I(|X_1^{(j)}| \le 1)\big) = E\big(|X_1^{(1)}|^r I(|X_1^{(1)}| \le 1)\big) + E\big(|X_1^{(2)}|^r I(|X_1^{(2)}| \le 1)\big) < \infty,$$
and so (2.3) is satisfied. Let $\alpha = 2/r$; we will show that the series in (2.4) diverges. Without loss of generality, assume that $\varepsilon = 1$.
Then, we have
$$\begin{aligned}
\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\max_{1\le k\le n}\Big|\sum_{l=1}^{k} X_l^{(j)}\Big| > \varepsilon n^{\alpha}\Big)
&\ge \sum_{j=1}^{\infty}\sum_{n=1}^{\infty} P\big(|X_1^{(j)}| > n^{2/r}\big)
= \sum_{n=1}^{\infty}\sum_{j=1}^{\infty} P\big(|X_1^{(j)}|^{r/2} > n\big) \\
&\ge \sum_{n=1}^{\infty}\sum_{j=n+2}^{\infty} P\big(j-1 \le |Y_1|^{r/2} < j\big)
\ge E|Y_1|^{r/2} - 2 = \infty.
\end{aligned}$$
Thus, (2.3) does not imply (2.4). Moreover, in this case
$$\sum_{j=1}^{\infty} E\big|X_1^{(j)}\big|^r = E\sum_{j=1}^{\infty}\big|X_1^{(j)}\big|^r = E|Y_1|^r = \infty,$$
hence (2.1) fails.

Note that if $r \le 2$, then the condition (2.1) is stronger than the condition
$$E\|X\|^r < \infty. \tag{2.5}$$
However, in the special case when $H$ is finite dimensional, (2.1) and (2.5) are equivalent. Moreover, we have the following corollary.

Corollary 2.10. Let $r, \alpha$ be positive real numbers ($1 \le r < 2$, $\alpha r > 1$), let $H$ be a finite dimensional real Hilbert space, and let $\{X_n, n \ge 1\}$ be a sequence of $H$-valued CNA random vectors with zero means. Suppose that $\{X_n, n \ge 1\}$ is coordinatewise weakly bounded by a random vector $X$. Then (2.1), (2.2), (2.4) and (2.5) are equivalent.

3. Proofs

Proof of Lemma 1.7. In view of the proof of Lemma 4 of Matula [13], we have
$$\begin{aligned}
E\Big(\max_{1\le k\le n}\Big\|\sum_{i=1}^{k} X_i\Big\|^2\Big)
&= E\Big(\max_{1\le k\le n}\sum_{j=1}^{\infty}\Big(\sum_{i=1}^{k}\big\langle X_i, e_j\big\rangle\Big)^2\Big)
\le \sum_{j=1}^{\infty} E\Big(\max_{1\le k\le n}\Big(\sum_{i=1}^{k} X_i^{(j)}\Big)^2\Big) \\
&\le \sum_{j=1}^{\infty}\Big[E\Big(\max_{1\le k\le n}\sum_{i=1}^{k} X_i^{(j)}\Big)^2 + E\Big(\max_{1\le k\le n}\sum_{i=1}^{k}\big(-X_i^{(j)}\big)\Big)^2\Big] \\
&\le \sum_{j=1}^{\infty}\Big[\sum_{i=1}^{n} E\big(X_i^{(j)}\big)^2 + \sum_{i=1}^{n} E\big(-X_i^{(j)}\big)^2\Big]
= 2\sum_{i=1}^{n} E\|X_i\|^2,
\end{aligned}$$
where the last inequality holds since, for each $j$, both $\{X_i^{(j)}\}$ and $\{-X_i^{(j)}\}$ are NA with zero means. The proof is complete.

To prove Theorem 2.1, we need the following lemma. Note that the proof of this lemma is quite simple if $H$ is finite dimensional.

Lemma 3.1. Let $p, r, \alpha$ be positive real numbers ($r < p$, $\alpha r > 1$), and let $X$ be an $H$-valued random vector satisfying (2.1). Then
$$\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-p)-1} E\big(|X^{(j)}|^p I(|X^{(j)}| \le n^{\alpha})\big) < \infty.$$

Proof. We have
$$\begin{aligned}
\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-p)-1} E\big(|X^{(j)}|^p I(|X^{(j)}| \le n^{\alpha})\big)
&= \sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-p)-1} E\big(|X^{(j)}|^p I(|X^{(j)}| \le 1)\big) \\
&\quad + \sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-p)-1} E\big(|X^{(j)}|^p I(1 < |X^{(j)}| \le n^{\alpha})\big) =: I_1 + I_2.
\end{aligned}$$
It follows from (2.1) that $I_1 \le \sum_{j=1}^{\infty} E|X^{(j)}|^r\,\sum_{n=1}^{\infty} n^{\alpha(r-p)-1} < \infty$. Now we prove $I_2 < \infty$.
Indeed,
$$\begin{aligned}
I_2 &= p\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-p)-1}\int_0^{n^{\alpha}} x^{p-1} P\big(|X^{(j)}|\, I(1 < |X^{(j)}| \le n^{\alpha}) > x\big)\,dx \\
&\le \sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-p)-1}\Big(p\int_0^{1} x^{p-1} P\big(|X^{(j)}| > 1\big)\,dx + p\int_1^{n^{\alpha}} x^{p-1} P\big(|X^{(j)}| > x\big)\,dx\Big) \\
&\le C\sum_{j=1}^{\infty} E|X^{(j)}|^r + C\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-p)-1}\sum_{k=1}^{n} k^{p\alpha-1} P\big(|X^{(j)}| > k^{\alpha}\big) = C + C\sum_{j=1}^{\infty} I_3(j),
\end{aligned}$$
and
$$\begin{aligned}
I_3(j) &= \sum_{k=1}^{\infty} k^{p\alpha-1} P\big(|X^{(j)}| > k^{\alpha}\big)\sum_{n=k}^{\infty} n^{\alpha(r-p)-1}
\le C\sum_{k=1}^{\infty} k^{p\alpha-1} P\big(|X^{(j)}| > k^{\alpha}\big)\int_k^{\infty}\frac{dx}{x^{\alpha(p-r)+1}} \\
&= \frac{C}{\alpha(p-r)}\sum_{k=1}^{\infty} k^{\alpha r-1} P\big(|X^{(j)}| > k^{\alpha}\big)
= C\sum_{k=1}^{\infty} k^{\alpha r-1}\sum_{n=k}^{\infty} P\big(n^{\alpha} < |X^{(j)}| \le (n+1)^{\alpha}\big) \\
&\le C\sum_{n=1}^{\infty} n^{\alpha r}\, P\big(n^{\alpha r} < |X^{(j)}|^r \le (n+1)^{\alpha r}\big) \le C\, E|X^{(j)}|^r.
\end{aligned}$$
Since the last constant $C$ depends only on $p$, $r$ and $\alpha$, we obtain $I_2 < \infty$ by (2.1).

Proof of Theorem 2.1. For $n, k, j \ge 1$, set
$$Y_{nk}^{(j)} = X_k^{(j)}\, I\big(|X_k^{(j)}| \le n^{\alpha}\big) + n^{\alpha}\, I\big(X_k^{(j)} > n^{\alpha}\big) - n^{\alpha}\, I\big(X_k^{(j)} < -n^{\alpha}\big); \qquad Y_{nk} = \sum_{j=1}^{\infty} Y_{nk}^{(j)} e_j.$$
Then, for every $\varepsilon > 0$,
$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\max_{1\le k\le n}\Big\|\sum_{l=1}^{k} X_l\Big\| > \varepsilon n^{\alpha}\Big)
&\le \sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\max_{1\le k\le n}\max_{j\ge 1}\big|X_k^{(j)}\big| > n^{\alpha}\Big)
+ \sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\max_{1\le k\le n}\Big\|\sum_{l=1}^{k} Y_{nl}\Big\| > \varepsilon n^{\alpha}\Big) \\
&\le \sum_{n=1}^{\infty} n^{\alpha r-2}\sum_{j=1}^{\infty}\sum_{k=1}^{n} P\big(|X_k^{(j)}| > n^{\alpha}\big)
+ \sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\max_{1\le k\le n}\Big\|\sum_{l=1}^{k}\big(Y_{nl} - EY_{nl}\big)\Big\| > \varepsilon n^{\alpha}/2\Big) \\
&\quad + \sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\frac{1}{n^{\alpha}}\max_{1\le k\le n}\Big\|\sum_{l=1}^{k} EY_{nl}\Big\| > \varepsilon/2\Big) \\
&\le C\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha r-1} P\big(|X^{(j)}| > n^{\alpha}\big) \quad (\text{by } (1.2))
\; + \; J_2 + J_3 =: J_1 + J_2 + J_3.
\end{aligned}$$
Repeating the arguments given at the end of the proof of Lemma 3.1 shows that $J_1 < \infty$.

It is well known that for all $j \ge 1$, $\{Y_{nk}^{(j)}, k \ge 1\}$ is NA, so $\{Y_{nk}, k \ge 1\}$ is CNA. By the Markov inequality and Lemma 1.7,
$$J_2 \le C\sum_{n=1}^{\infty} n^{\alpha(r-2)-2}\, E\Big(\max_{1\le k\le n}\Big\|\sum_{l=1}^{k}\big(Y_{nl} - EY_{nl}\big)\Big\|^2\Big)
\le C\sum_{n=1}^{\infty} n^{\alpha(r-2)-2}\sum_{k=1}^{n} E\big\|Y_{nk} - EY_{nk}\big\|^2
\le C\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-2)-2}\sum_{k=1}^{n} E\big(Y_{nk}^{(j)}\big)^2.$$
Note that
$$E\big(Y_{nk}^{(j)}\big)^2 = n^{2\alpha} P\big(|X_k^{(j)}| > n^{\alpha}\big) + E\big((X_k^{(j)})^2\, I(|X_k^{(j)}| \le n^{\alpha})\big).$$
According to Lemma 2.1 of Gut [6],
$$\sum_{n=1}^{\infty} n^{\alpha(r-2)-2}\sum_{k=1}^{n} E\big(Y_{nk}^{(j)}\big)^2
\le C\sum_{n=1}^{\infty} n^{\alpha r-1} P\big(|X^{(j)}| > n^{\alpha}\big)
+ C\sum_{n=1}^{\infty} n^{\alpha(r-2)-1} E\big((X^{(j)})^2 I(|X^{(j)}| \le n^{\alpha})\big)
+ C\sum_{n=1}^{\infty} n^{\alpha r-1} P\big(|X^{(j)}| > n^{\alpha}\big).$$
Since the above constants do not depend on $j$, it follows from Lemma 3.1 (with $p = 2$) that $J_2 < \infty$.
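The coordinatewise truncation $Y_{nk}^{(j)}$ used in the proof of Theorem 2.1 is simply clipping $X_k^{(j)}$ to the interval $[-n^{\alpha}, n^{\alpha}]$; because clipping is a nondecreasing function, it preserves negative association coordinate by coordinate, which is the standard reason $\{Y_{nk}, k \ge 1\}$ is again CNA. A minimal sketch of this device (our own illustration, with invented function names):

```python
def truncate(x: float, level: float) -> float:
    # Y = x*1{|x| <= level} + level*1{x > level} - level*1{x < -level},
    # i.e. x clipped to the interval [-level, level].
    return max(-level, min(level, x))

def truncate_vector(coords, n: int, alpha: float):
    # Apply the truncation at level n**alpha to each coordinate X^{(j)}.
    level = float(n) ** alpha
    return [truncate(x, level) for x in coords]
```

Note that `truncate` is nondecreasing in `x`, which is the property the proof relies on when invoking negative association for the truncated sequence.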
In order to prove $J_3 < \infty$, we only need to prove that
$$J_4 = \frac{1}{n^{\alpha}}\max_{1\le k\le n}\Big\|\sum_{l=1}^{k} EY_{nl}\Big\| = o(1).$$
Note that $EX_l^{(j)} = 0$ for all $l \ge 1$ and $j \ge 1$. Then, by (2.1),
$$\begin{aligned}
J_4 &\le \sum_{j=1}^{\infty}\frac{1}{n^{\alpha}}\max_{1\le k\le n}\Big|\sum_{l=1}^{k} EY_{nl}^{(j)}\Big|
\le \sum_{j=1}^{\infty}\frac{1}{n^{\alpha}}\max_{1\le k\le n}\Big|\sum_{l=1}^{k} E\big(X_l^{(j)} I(|X_l^{(j)}| \le n^{\alpha})\big)\Big|
+ \frac{1}{n^{\alpha}}\sum_{j=1}^{\infty}\sum_{k=1}^{n} n^{\alpha} P\big(|X_k^{(j)}| > n^{\alpha}\big) \\
&\le \sum_{j=1}^{\infty}\frac{1}{n^{\alpha}}\sum_{l=1}^{n} E\big(|X_l^{(j)}|\, I(|X_l^{(j)}| > n^{\alpha})\big)
+ \sum_{j=1}^{\infty}\sum_{k=1}^{n} P\big(|X_k^{(j)}| > n^{\alpha}\big) \\
&\le C\, n^{1-\alpha}\sum_{j=1}^{\infty} E\big(|X^{(j)}|\, I(|X^{(j)}| > n^{\alpha})\big)
+ C\, n\sum_{j=1}^{\infty} P\big(|X^{(j)}| > n^{1/r}\big) \\
&\le C\, n^{1-\alpha r}\sum_{j=1}^{\infty} E\big(|X^{(j)}|^r I(|X^{(j)}| > n^{\alpha})\big)
+ C\sum_{j=1}^{\infty} E\big(|X^{(j)}|^r I(|X^{(j)}| > n^{1/r})\big) = o(1),
\end{aligned}$$
since $\alpha r > 1$ implies $n^{\alpha} \ge n^{1/r}$ and, by (2.1), both series tend to $0$ as $n \to \infty$. Therefore $J_3 < \infty$. The proof of the theorem is complete.

Proof of Remark 2.3. The proof is, except for details, the same as the proof of Theorem 2.1 and is omitted.

Proof of Remark 2.4. Looking at the arguments given at the beginning of the proof of Theorem 2.1, it suffices to show that
$$\sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\max_{1\le k\le n}\Big\|\sum_{l=1}^{k} Y_{nl}\Big\| > \varepsilon n^{\alpha}\Big) < \infty \quad \text{for every } \varepsilon > 0. \tag{3.1}$$
In fact,
$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha r-2} P\Big(\max_{1\le k\le n}\Big\|\sum_{l=1}^{k} Y_{nl}\Big\| > \varepsilon n^{\alpha}\Big)
&\le C\sum_{n=1}^{\infty} n^{\alpha(r-1)-2}\, E\Big(\max_{1\le k\le n}\Big\|\sum_{l=1}^{k} Y_{nl}\Big\|\Big)
\le C\sum_{n=1}^{\infty} n^{\alpha(r-1)-2}\sum_{k=1}^{n} E\|Y_{nk}\| \\
&\le C\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-1)-2}\sum_{k=1}^{n}\Big(n^{\alpha} P\big(|X_k^{(j)}| > n^{\alpha}\big) + E\big(|X_k^{(j)}|\, I(|X_k^{(j)}| \le n^{\alpha})\big)\Big) \\
&\le C\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha r-1} P\big(|X^{(j)}| > n^{\alpha}\big)
+ C\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha(r-1)-1} E\big(|X^{(j)}|\, I(|X^{(j)}| \le n^{\alpha})\big) < \infty
\end{aligned}$$
by Lemma 3.1 (with $p = 1$). So (3.1) holds.

Proof of Theorem 2.6. By (2.3), we have
$$\begin{aligned}
\sum_{j=1}^{\infty} E|X^{(j)}|^r
&= \sum_{j=1}^{\infty} E\big(|X^{(j)}|^r I(|X^{(j)}| \le 1)\big) + \sum_{j=1}^{\infty} E\big(|X^{(j)}|^r I(|X^{(j)}| > 1)\big) \\
&\le C + \sum_{j=1}^{\infty}\sum_{k=1}^{\infty} (k+1)^{\alpha r} P\big(k^{\alpha} < |X^{(j)}| \le (k+1)^{\alpha}\big) \\
&\le C + C\sum_{j=1}^{\infty}\sum_{k=1}^{\infty}\sum_{n=1}^{k} n^{\alpha r-1} P\big(k^{\alpha} < |X^{(j)}| \le (k+1)^{\alpha}\big)
= C + C\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha r-1} P\big(|X^{(j)}| > n^{\alpha}\big).
\end{aligned}$$
Thus, it suffices to show that
$$\sum_{j=1}^{\infty}\sum_{n=1}^{\infty} n^{\alpha r-1} P\big(|X^{(j)}| > n^{\alpha}\big) < \infty. \tag{3.2}$$
Now we prove that there exists a positive integer $n_0$ such that
$$\sum_{j=1}^{\infty} n P\big(|X^{(j)}| > n^{\alpha}\big) \le C\sum_{j=1}^{\infty} P\Big(\max_{1\le k\le n}\big|X_k^{(j)}\big| > n^{\alpha}\Big) \quad \text{for all } n > n_0. \tag{3.3}$$
By (1.2), we have
$$\begin{aligned}
C_1\, n P\big(|X^{(j)}| > n^{\alpha}\big)
&\le \sum_{k=1}^{n} P\big(|X_k^{(j)}| > n^{\alpha}\big) \\
&= \sum_{k=1}^{n} P\Big(|X_k^{(j)}| > n^{\alpha},\ \max_{l\ne k;\,1\le l\le n}\big|X_l^{(j)}\big| \le n^{\alpha}\Big)
+ \sum_{k=1}^{n} P\Big(|X_k^{(j)}| > n^{\alpha},\ \max_{l\ne k;\,1\le l\le n}\big|X_l^{(j)}\big| > n^{\alpha}\Big) \\
&\le P\Big(\max_{1\le k\le n}\big|X_k^{(j)}\big| > n^{\alpha}\Big) + K_1. \tag{3.4}
\end{aligned}$$
1 k n 15 (3.4) By (1.2) again, n K1 (j) (j) I(|Xk | > nα ) I( max |Xl | > nα ) E 1 l n k=1 n (j) (j) (j) I(|Xk | > nα ) − P(|Xk | > nα ) I max |Xl | > nα ) =E 1 l n k=1 n (j) (j) P(|Xk | > nα ) I max |Xl | > nα +E 1 l n k=1 (j) K2 + C2 n P(|X (j) | > nα ) P max |Xl | > nα . (3.5) 1 l n (j) (j) Note that for all n, j 1, {I(Xk > nα ), k 1} and {I(Xk < −nα ), k are NA. Then K2 can be estimated as follows: 1} n (j) I(|Xk | > nα ) Var K2 (j) P max |Xl | > nα 1 l n k=1 n n (j) (j) I(Xk > nα ) + 2Var 2Var k=1 I(Xk < −nα ) (j) P max |Xl | > nα 1 l n k=1 n n (j) (j) Var I(Xk < −nα ) Var I(Xk > nα ) + 2 k=1 k=1 (j) P max |Xl | > nα 1 l n n (j) 1 l n k=1 2 a (j) P(|Xk | > nα ) P max |Xl | > nα 2 n (j) P(|Xk | > nα ) + k=1 a (j) P max |Xl | > nα 1 l n 2 2C2 n a (j) P(|X (j) | > nα ) + P max |Xl | > nα , 1 l n a 2 (3.6) where a > 4C2 /C1 . Combining (3.4)-(3.6), we get C1 − 2C2 a (j) n P(|X (j) | > nα ) 1+ P max |Xk | > nα 1 k n a 2 (j) (j) α + C2 n P(|X | > n ) P max |Xl | > nα . 1 l n 16 (3.7) On the other hand, it follows from (2.4) that ∞ ∞ (j) nαr−2 P max |Xl | > εnα < ∞. 1 k n j=1 n=1 (3.8) Then we have ∞ ∞ 2n(αr−1) n=1 P j=1 ∞ 2n+1 −1 ∞ C P j=1 n=1 ∞ max 1 k 2n (j) |Xk | nα mαr−2 > ε2 m=2n ∞ 2n+1 −1 mαr−2 P C C (j) max |Xk | > ε 2nα 1 k 2n j=1 n=1 m=2n ∞ ∞ αr−2 n (j) max |Xk | > (ε/2α )mα 1 k m (j) P max |Xk | > (ε/2α )nα < ∞. 1 k n j=1 n=1 α = o(1). Therefore, there This implies that ∞ j=1 P max1 k n |Xk | > n exists a positive integer n0 such that (j) ∞ (j) P max |Xl | > nα j=1 1 l n 2 a for all n > n0 , so that C1 − 4C2 n P(|X (j) | > nα ) a 1+ a (j) P max |Xk | > nα 1 k n 2 by (3.7). Since a > 4C2 /C1 and n0 does not depend on j, we obtain (3.3). 17 Thus, by (3.3) and (3.8) we have ∞ ∞ nαr−1 P |X (j) | > nα j=1 n=1 ∞ n0 ∞ n = αr−1 (j) P |X | > n j=1 n=1 ∞ n0 + ∞ n αr−2 n=n0 +1 n P |X (j) | > nα j=1 n (j) nαr−2 C j=1 n=1 ∞ P |Xk | > nα k=1 ∞ (j) nαr−2 +C α n=n0 +1 ∞ ∞ P max |Xk | > nα ) j=1 1 k n (j) nαr−2 P max |Xk | > nα < ∞ C j=1 n=1 1 k n and so (3.2) holds. 
The proof of the theorem is complete.

Proof of Corollary 2.10. Since $H$ is finite dimensional, (2.3) is obvious. Moreover, (2.1) and (2.2) are respectively equivalent to (2.5) and (2.4). The rest of the proof follows immediately from Theorems 2.1 and 2.6.

Open problem. It is worthwhile to address the question of the validity of Theorem 2.1 (respectively, Theorem 2.6) when (2.1) is replaced by (2.5) (respectively, when (2.4) is replaced by (2.2)).

Acknowledgments. This work was completed while the authors were visiting the Vietnam Institute for Advanced Study in Mathematics (VIASM). The authors would like to thank the VIASM for its support and hospitality.

References

[1] K. Alam, K.M.L. Saxena, Positive dependence in multivariate distributions, Comm. Statist. A-Theory Methods 10 (1981) 1183–1196.
[2] J.I. Baek, I.B. Choi, S.L. Niu, On the complete convergence of weighted sums for arrays of negatively associated variables, J. Korean Statist. Soc. 37 (2008) 73–80.
[3] L.E. Baum, M. Katz, Convergence rates in the law of large numbers, Trans. Amer. Math. Soc. 120 (1965) 108–123.
[4] P. Erdős, On a theorem of Hsu and Robbins, Ann. Math. Statistics 20 (1949) 286–291.
[5] P. Erdős, Remark on my paper "On a theorem of Hsu and Robbins", Ann. Math. Statistics 21 (1950) 138.
[6] A. Gut, Complete convergence for arrays, Period. Math. Hungar. 25 (1992) 51–75.
[7] P.L. Hsu, H. Robbins, Complete convergence and the law of large numbers, Proc. Nat. Acad. Sci. U.S.A. 33 (1947) 25–31.
[8] K. Joag-Dev, F. Proschan, Negative association of random variables, with applications, Ann. Statist. 11 (1983) 286–295.
[9] M.H. Ko, T.S. Kim, K.H. Han, A note on the almost sure convergence for dependent random variables in a Hilbert space, J. Theoret. Probab. 22 (2009) 506–513.
[10] A. Kuczmaszewska, On complete convergence in Marcinkiewicz-Zygmund type SLLN for negatively associated random variables, Acta Math. Hungar. 128 (2010) 116–130.
[11] A. Kuczmaszewska, Z.A. Lagodowski, Convergence rates in the SLLN for some classes of dependent random fields, J. Math. Anal. Appl. 380 (2011) 571–584.
[12] T.L. Lai, Convergence rates and r-quick versions of the strong law for stationary mixing sequences, Ann. Probability 5 (1977) 693–706.
[13] P. Matula, A note on the almost sure convergence of sums of negatively dependent random variables, Statist. Probab. Lett. 15 (1992) 209–213.
[14] Y. Miao, Hájek-Rényi inequality for dependent random variables in Hilbert space and applications, Rev. Un. Mat. Argentina 53 (2012) 101–112.
[15] G. Pisier, Probabilistic methods in the geometry of Banach spaces, in: Probability and Analysis, Lecture Notes in Math. 1206, Springer, Berlin, 1986.
[16] Q.M. Shao, A comparison theorem on moment inequalities between negatively associated and independent random variables, J. Theoret. Probab. 13 (2000) 343–356.
[17] L.V. Thanh, On the almost sure convergence for dependent random vectors in Hilbert spaces, Acta Math. Hungar. 139 (2013) 276–285.