ON THE CRAMÉR TYPE MODERATE DEVIATION FOR JACK MEASURES


Lê Văn Thành
Department of Mathematics, Vinh University
182 Le Duan, Vinh, Nghe An, Vietnam
Email: levt@vinhuni.edu.vn

Abstract

Chen, Fang and Shao (Ann. Probab., 2013, 262-293) used Stein's method to obtain Cramér type moderate deviation results for bounded dependent random variables. The boundedness restriction prevents their general result from being applied to compute bounds in various examples, Jack measure being one. The main result of this work is a Cramér type moderate deviation result for the Jack measure on partitions. It is proved by combining martingale properties of the Jack measure with Stein's method for zero bias couplings.

Key Words and Phrases: Cramér moderate deviation, Jack measure, Stein's method, zero bias coupling.

2010 Mathematics Subject Classifications: 60F05, 60D05.

1 Introduction and result

For $\alpha > 0$, the Jack$_\alpha$ measure is a probability measure on the set of all partitions of size $n$, which chooses a partition $\lambda$ of size $n$ with probability

$$P_\alpha\{\lambda\} = \frac{\alpha^n\, n!}{\prod_{s\in\lambda}\big(\alpha a(s)+l(s)+1\big)\big(\alpha a(s)+l(s)+\alpha\big)},$$

where the product is over all boxes in the partition. Here $a(s)$ denotes the number of boxes in the same row as $s$ and to the right of $s$ (the "arm" of $s$), and $l(s)$ denotes the number of boxes in the same column as $s$ and below $s$ (the "leg" of $s$). We note that the Jack measure with parameter $\alpha = 1$ agrees with the Plancherel measure of the symmetric group. The case $\alpha = 2$ corresponds to the Gelfand pair $(S_{2n}, H_{2n})$, where $S_{2n}$ is the symmetric group and $H_{2n}$ is the hyperoctahedral group of size $2^n n!$. Borodin and Olshanski [2] and Okounkov [22] emphasized that the study of the Jack$_\alpha$ measure is an important problem, about which relatively little is known for general values of $\alpha$.

Given $\alpha > 0$, the random variable we wish to study is

$$T_{n,\alpha} = T_{n,\alpha}(\lambda) = \frac{\alpha\sum_i \binom{\lambda_i}{2} - \sum_i \binom{\lambda_i'}{2}}{\sqrt{\alpha\binom{n}{2}}}, \qquad (1.1)$$

where $\lambda$ is chosen from the Jack$_\alpha$ measure on partitions of size $n$, $\lambda_i$ is the length of the $i$-th row of $\lambda$, and $\lambda_i'$ is the length of the $i$-th column of $\lambda$.
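The weight $P_\alpha\{\lambda\}$ and the statistic (1.1) can be evaluated directly for small $n$. The following Python sketch is our illustration (the function names are ours, not from the paper): it computes the Jack$_\alpha$ weight of a partition from the arm/leg product, evaluates $T_{n,\alpha}$, and, by enumerating all partitions of a small $n$, checks that the weights sum to 1 and that $T_{n,\alpha}$ has mean 0 and variance 1 under the Jack$_\alpha$ measure.

```python
from math import comb, factorial, sqrt

def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples of row lengths."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(lam):
    """Column lengths of the Young diagram of lam."""
    return tuple(sum(1 for row in lam if row > j) for j in range(lam[0])) if lam else ()

def jack_probability(lam, alpha):
    """P_alpha{lam} = alpha^n n! / prod_s (alpha a(s)+l(s)+1)(alpha a(s)+l(s)+alpha)."""
    n = sum(lam)
    conj = conjugate(lam)
    denom = 1.0
    for i, row in enumerate(lam, start=1):          # row index of the box s
        for j in range(1, row + 1):                 # column index of the box s
            a = row - j                             # arm: boxes to the right of s
            l = conj[j - 1] - i                     # leg: boxes below s
            denom *= (alpha * a + l + 1) * (alpha * a + l + alpha)
    return alpha ** n * factorial(n) / denom

def T(lam, alpha):
    """The statistic T_{n,alpha}(lam) of (1.1)."""
    n = sum(lam)
    num = alpha * sum(comb(r, 2) for r in lam) - sum(comb(c, 2) for c in conjugate(lam))
    return num / sqrt(alpha * comb(n, 2))

alpha, n = 2.0, 8
weights = {lam: jack_probability(lam, alpha) for lam in partitions(n)}
print(sum(weights.values()))                                       # should be 1: Jack_alpha is a probability measure
print(sum(p * T(lam, alpha) for lam, p in weights.items()))        # mean of T_{n,alpha}, should be 0
print(sum(p * T(lam, alpha) ** 2 for lam, p in weights.items()))   # second moment of T_{n,alpha}, should be 1
```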
As mentioned in Fulman [10, 12, 13], it is of interest to study the quantity $T_{n,\alpha}$ under the Jack measure for several reasons. When $\alpha = 1$ it reduces to the study of the character ratio of transpositions under the Plancherel measure, which is now well understood. When $\alpha = 2$, it is a spherical function of the Gelfand pair $(S_{2n}, H_{2n})$. Also, there is a natural random walk on perfect matchings of the complete graph on $2n$ vertices whose eigenvalues are precisely $T_{n,2}(\lambda)/\sqrt{n(n-1)}$, occurring with multiplicity proportional to the Jack$_2$ measure of $\lambda$ (see Diaconis and Holmes [9]).

The central limit theorem for $T_{n,\alpha}$ has been studied by a number of authors. When $\alpha = 1$, Kerov [20] outlined the central limit theorem for character ratios, which states that the random variable $T_{n,1}$ is asymptotically normal with mean 0 and variance 1. A full proof of Kerov's central limit theorem appears in Ivanov and Olshanski [19], and in Hora [17], where the method of moments is used. More recently, other proofs were given by Śniady [27], using the genus expansion of random matrix theory, and by Hora and Obata [18], using quantum probability. Note that none of these proofs of Kerov's central limit theorem comes with an error term.

The Berry-Esseen inequality for $T_{n,\alpha}$ was recently studied in a series of papers by Fulman [10, 11, 12, 13] and by other authors. Fulman [10] was the first to use the method of exchangeable pairs in Stein's method (see [28]) to obtain a Berry-Esseen bound for $T_{n,\alpha}$, with error term $40.1\,n^{-1/4}$. This rate was later improved in [12], using martingales, to $O(n^{-1/2+\varepsilon})$ for any $\varepsilon > 0$, and in [13] to $O(n^{-1/2})$ using Bolthausen's inductive approach (see [1]) to Stein's method. When $\alpha = 1$, Shao and Su [26] also obtained the rate $O(n^{-1/2})$ by using Stein's method of exchangeable pairs. The mean central limit theorem for $T_{n,\alpha}$ was proved in Fulman and Goldstein [14] using Stein's method for zero bias couplings.

Recently, Chen, Fang and Shao [6] used Stein's method to obtain Cramér type moderate deviation results for dependent random variables whose dependence structure is defined in terms of an identity, called the Stein identity. However, useful bounds are obtained there only when the couplings can be bounded by a small quantity. This restriction prevents their general result from yielding bounds in various examples. In this paper, we combine martingale properties of the Jack measure and Stein's method for zero bias couplings to obtain a Cramér type moderate deviation for $T_{n,\alpha}$. Our main result is the following theorem.

Theorem 1.1. Let $n \ge 2$, $\alpha > 0$ and let $T_{n,\alpha}$ be as in (1.1). Then for every $0 \le z \le n^{1/6}$,

$$\left|\frac{P_\alpha(T_{n,\alpha} > z)}{1-\Phi(z)} - 1\right| \le \frac{C_\alpha(1+z^3)}{\sqrt{n}}, \qquad (1.2)$$

and

$$\left|\frac{P_\alpha(T_{n,\alpha} < -z)}{1-\Phi(z)} - 1\right| \le \frac{C_\alpha(1+z^3)}{\sqrt{n}}, \qquad (1.3)$$

where $\Phi(z) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z}\exp(-t^2/2)\,dt$ and $C_\alpha$ is a positive constant depending only on $\alpha$.

The proof of Theorem 1.1 will be presented in Section 3. We would like to note the following two points in our proof. Firstly, we use martingale properties of the Jack measure to bound the moment generating function of $T_{n,\alpha}$. As a necessary step, we obtain a strong invariance principle for $T_{n,\alpha}$ which may be of independent interest. Secondly, we use the zero bias coupling construction given by Fulman and Goldstein [14] and then adapt the method of [6] to obtain the bounds.

Remark 1.2. From Theorem 1.1, one can easily obtain the Berry-Esseen bound for $T_{n,\alpha}$ which is due to Fulman [13, Theorem 3.1]. More precisely, Theorem 1.1 implies (see the argument at the end of Section 3)

$$|P_\alpha(T_{n,\alpha} \le z) - \Phi(z)| \le \frac{C_\alpha}{\sqrt{n}} \quad \text{for all } z \in \mathbb{R}, \qquad (1.4)$$

where $C_\alpha$ is a positive constant depending only on $\alpha$.

We would like to note that Raič [24] also used Stein's method to prove moderate deviations for locally dependent random variables. Stein's method for concentration inequalities and large deviations was studied by Chatterjee [3, 4], Chatterjee and Dey [5], and Ghosh and Goldstein [15].

Throughout this paper, $Z$ is the standard normal random variable and $\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}\exp(-t^2/2)\,dt$ is the distribution function of $Z$. For a set $S$, the indicator function of $S$ is denoted by $1(S)$. The symbol $C$ denotes a generic positive constant whose value may be different at each appearance, $\log x$ denotes the natural logarithm of $\max(x, e)$, and $[x]$ denotes the integer part of $x$.

2 Zero bias coupling and martingale properties of Jack measures

In this section, we recall the zero bias coupling construction and some martingale properties of Jack measures. It was shown by Goldstein and Reinert [16] that for any mean zero random variable $W$ with positive finite variance $\sigma^2$, there exists a random variable $W^*$ which satisfies

$$EWf(W) = \sigma^2 Ef'(W^*) \qquad (2.1)$$

for all absolutely continuous $f$ with $E|Wf(W)| < \infty$. We say that such a $W^*$ has the $W$-zero biased distribution. Goldstein and Reinert [16] (see also Chen, Goldstein and Shao [8, Proposition 2.1]) showed that the distribution of $W^*$ is absolutely continuous with distribution function

$$G(x) = E[W(W-x)1(W \le x)]/\sigma^2. \qquad (2.2)$$
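The zero bias transformation of (2.1)-(2.2) can be made concrete on a toy example. The sketch below is ours, not part of the paper: for a mean zero two-point variable $W$ it evaluates the distribution function $G$ of (2.2), compares it with the zero biased law of a two-valued variable (which is uniform on the interval between the two values), and checks the characterizing identity (2.1) for $f(w) = w^2$ by Monte Carlo; the values of $a$ and $b$ are arbitrary.

```python
import numpy as np

# Mean-zero two-point W: P(W = a) = b/(a+b), P(W = -b) = a/(a+b), so EW = 0 and Var W = a*b.
a, b = 2.0, 1.0
vals = np.array([a, -b])
probs = np.array([b / (a + b), a / (a + b)])
sigma2 = a * b

def G(x):
    """Distribution function of W* from (2.2): E[W (W - x) 1(W <= x)] / sigma^2."""
    return float(np.sum(probs * vals * (vals - x) * (vals <= x)) / sigma2)

# (2.2) reproduces the zero biased law of a two-valued variable, which is Uniform[-b, a].
for x in np.linspace(-b, a, 7):
    print(f"x = {x:5.2f}   G(x) = {G(x):.4f}   Uniform[-b,a] CDF = {(x + b) / (a + b):.4f}")

# Check the characterizing identity (2.1), E[W f(W)] = sigma^2 E[f'(W*)], with f(w) = w^2.
lhs = float(np.sum(probs * vals * vals ** 2))          # E[W f(W)] = E[W^3]
w_star = np.random.default_rng(0).uniform(-b, a, 200_000)
rhs = sigma2 * float(np.mean(2.0 * w_star))            # sigma^2 E[f'(W*)], with f'(w) = 2w
print(lhs, rhs)                                        # the two should agree up to Monte Carlo error
```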
Let $T_{n,\alpha}$ be as in (1.1). Recently, Fulman and Goldstein [14] showed that a variable $T_{n,\alpha}^*$ with the $T_{n,\alpha}$-zero biased distribution can be constructed explicitly. There is a growth process (due to Kerov [21]) giving a sequence of partitions $(\lambda(1), \ldots, \lambda(n))$ with $\lambda(j)$ distributed according to the Jack$_\alpha$ measure on partitions of size $j$. We refer to Fulman [12] or Fulman and Goldstein [14] for details. Given Kerov's process, let $X_{1,\alpha} = 0$ and, for $j \ge 2$, let $X_{j,\alpha} = c_\alpha(x)$, where $x$ is the box added to $\lambda(j-1)$ to obtain $\lambda(j)$ and the "$\alpha$-content" $c_\alpha(x)$ of a box $x$ is defined to be $\alpha(\text{column number of } x - 1) - (\text{row number of } x - 1)$. Then one can write (see [12, 13])

$$T_{n,\alpha} = \frac{\sum_{j=1}^{n} X_{j,\alpha}}{\sqrt{\alpha\binom{n}{2}}}. \qquad (2.3)$$

Therefore, since constructing $\nu$ from the Jack$_\alpha$ measure on partitions of size $n-1$ and then taking one step in Kerov's growth process yields $\lambda$ with the Jack$_\alpha$ measure on partitions of size $n$, we have

$$T_{n,\alpha} = V_{n,\alpha} + \eta_{n,\alpha}, \qquad (2.4)$$

where

$$V_{n,\alpha} = \frac{\sum_{x\in\nu} c_\alpha(x)}{\sqrt{\alpha\binom{n}{2}}} = \sqrt{\frac{n-2}{n}}\, T_{n-1,\alpha}, \qquad \eta_{n,\alpha} = \frac{X_{n,\alpha}}{\sqrt{\alpha\binom{n}{2}}} = \frac{c_\alpha(\lambda/\nu)}{\sqrt{\alpha\binom{n}{2}}}, \qquad (2.5)$$

and $c_\alpha(\lambda/\nu)$ denotes the $\alpha$-content of the box added to $\nu$ to obtain $\lambda$. Fulman and Goldstein [14, Theorems 3.1 and 4.1] showed that there exists a random variable $\eta_{n,\alpha}^*$ such that $\eta_{n,\alpha}^*$ has the $\eta_{n,\alpha}$-zero biased distribution and

$$T_{n,\alpha}^* = V_{n,\alpha} + \eta_{n,\alpha}^* \qquad (2.6)$$

has the $T_{n,\alpha}$-zero biased distribution.

The following lemma states that $\{X_{j,\alpha}, 1 \le j \le n\}$ are martingale differences satisfying special properties. Part (i) of Lemma 2.1 is due to Fulman [12]. Part (ii) of Lemma 2.1 is a result of Kerov [21].

Lemma 2.1. Let $\alpha \ge 1$, $2 \le j \le n$ and let $\lambda(j-1)$ be a partition of size $j-1$. Then

(i) [12] $E(X_{j,\alpha} \mid \lambda(j-1)) = 0$;

(ii) [21] $E(X_{j,\alpha}^2 \mid \lambda(j-1)) = \alpha(j-1)$.

For a partition $\lambda$ of size $n$, we recall that the length of row $i$ of $\lambda$ and the length of column $i$ of $\lambda$ are denoted by $\lambda_i$ and $\lambda_i'$, respectively. The next lemma is a concentration result for $\lambda_1$ and $\lambda_1'$ which is due to Fulman [10, Lemma 6.6].

Lemma 2.2. [10] Let $\alpha > 0$. Then

$$P_\alpha\big(\lambda_1 \ge 2e\sqrt{n/\alpha}\big) \le \frac{\alpha n^2}{4^{2e\sqrt{n/\alpha}}}, \qquad (2.7)$$

and

$$P_\alpha\big(\lambda_1' \ge 2e\sqrt{\alpha n}\big) \le \frac{n^2}{\alpha\, 4^{2e\sqrt{n\alpha}}}. \qquad (2.8)$$
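The martingale decomposition (2.3)-(2.5) is an exact combinatorial identity: summing the $\alpha$-contents of the boxes added along any growth sequence recovers the numerator of (1.1). The sketch below is our illustration only; it does not sample from Kerov's process (whose transition probabilities are not reproduced here), but it verifies (2.3) and the one-step decomposition (2.4)-(2.5) on a hand-picked chain of partitions.

```python
from math import comb, sqrt

def conjugate(lam):
    return tuple(sum(1 for row in lam if row > j) for j in range(lam[0])) if lam else ()

def T(lam, alpha):
    """T_{n,alpha}(lam) evaluated directly from (1.1)."""
    n = sum(lam)
    num = alpha * sum(comb(r, 2) for r in lam) - sum(comb(c, 2) for c in conjugate(lam))
    return num / sqrt(alpha * comb(n, 2))

def added_box(prev, cur):
    """Return (row, column), 1-indexed, of the single box added to prev to obtain cur."""
    prev = list(prev) + [0] * (len(cur) - len(prev))
    for i, (p, c) in enumerate(zip(prev, cur), start=1):
        if c == p + 1:
            return i, c
    raise ValueError("cur is not prev plus one box")

def alpha_content(row, col, alpha):
    """c_alpha(x) = alpha (column number - 1) - (row number - 1)."""
    return alpha * (col - 1) - (row - 1)

alpha = 2.0
# An arbitrary growth sequence lambda(1), ..., lambda(5); each step adds one box.
chain = [(1,), (2,), (2, 1), (2, 2), (3, 2)]

# X_{j,alpha} is the alpha-content of the box added at step j, with X_{1,alpha} = 0.
X = [0.0] + [alpha_content(*added_box(chain[j - 1], chain[j]), alpha) for j in range(1, len(chain))]

n = sum(chain[-1])
print(sum(X) / sqrt(alpha * comb(n, 2)), T(chain[-1], alpha))     # (2.3): the two values agree

# One-step decomposition (2.4)-(2.5): T_{n,alpha} = sqrt((n-2)/n) T_{n-1,alpha} + eta_{n,alpha}.
nu = chain[-2]
eta = X[-1] / sqrt(alpha * comb(n, 2))
print(sqrt((n - 2) / n) * T(nu, alpha) + eta, T(chain[-1], alpha))
```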
3 Proof

In this section, we prove Theorem 1.1. Throughout this section, $T_{n,\alpha}$, $T_{n,\alpha}^*$, $V_{n,\alpha}$, $X_{n,\alpha}$, $\eta_{n,\alpha}$, $\eta_{n,\alpha}^*$ are as in Section 2, equations (2.3)-(2.6). The proof of Theorem 1.1 has several steps, so we break it up into several lemmas. In these lemmas, we always assume that $\alpha \ge 1$. The first lemma shows that $\eta_{n,\alpha}$ and $\eta_{n,\alpha}^*$ have very light tails.

Lemma 3.1. We have

$$P_\alpha\left(|\eta_{n,\alpha}| \ge \frac{2e\sqrt{2}}{\sqrt{n-1}}\right) \le \frac{2\alpha n^2}{4^{2e\sqrt{n/\alpha}}}, \qquad (3.1)$$

and

$$P_\alpha\left(|\eta_{n,\alpha}^*| > \frac{2e\sqrt{2}}{\sqrt{n-1}}\right) \le \alpha n\, P_\alpha\left(|\eta_{n,\alpha}| \ge \frac{2e\sqrt{2}}{\sqrt{n-1}}\right) \le \frac{2\alpha^2 n^3}{4^{2e\sqrt{n/\alpha}}}. \qquad (3.2)$$

Proof. Since $\alpha \ge 1$, conclusion (3.1) follows directly from Lemma 2.2 and the observation that $|X_{j,\alpha}| < \max\{\alpha\lambda_1, \lambda_1'\}$ for all $1 \le j \le n$. It is easy to see that

$$|\eta_{n,\alpha}| \le \frac{\alpha(n-1)}{\sqrt{\alpha\binom{n}{2}}} \le \sqrt{2\alpha}.$$

Therefore $|\eta_{n,\alpha}^*| \le \sqrt{2\alpha}$ (see [8, p. 29]). By Lemma 2.1 (i) and (ii), we also have $E\eta_{n,\alpha} = 0$ and $E\eta_{n,\alpha}^2 = 2/n$. It thus follows from (2.2) that

$$P_\alpha\left(\eta_{n,\alpha}^* > \frac{2e\sqrt{2}}{\sqrt{n-1}}\right) = 1 - P_\alpha\left(\eta_{n,\alpha}^* \le \frac{2e\sqrt{2}}{\sqrt{n-1}}\right)
= 1 - \frac{n}{2} E\left[\eta_{n,\alpha}\left(\eta_{n,\alpha} - \frac{2e\sqrt{2}}{\sqrt{n-1}}\right)1\left(\eta_{n,\alpha} \le \frac{2e\sqrt{2}}{\sqrt{n-1}}\right)\right]
= \frac{n}{2} E\left[\eta_{n,\alpha}\left(\eta_{n,\alpha} - \frac{2e\sqrt{2}}{\sqrt{n-1}}\right)1\left(\eta_{n,\alpha} > \frac{2e\sqrt{2}}{\sqrt{n-1}}\right)\right]
\le \alpha n\, P_\alpha\left(\eta_{n,\alpha} > \frac{2e\sqrt{2}}{\sqrt{n-1}}\right),$$

and, similarly,

$$P_\alpha\left(\eta_{n,\alpha}^* < -\frac{2e\sqrt{2}}{\sqrt{n-1}}\right) \le \alpha n\, P_\alpha\left(\eta_{n,\alpha} < -\frac{2e\sqrt{2}}{\sqrt{n-1}}\right).$$

The proof of (3.2) is complete.

The following lemma bounds the moment generating function of $T_{n,\alpha}$.

Lemma 3.2. Let $\{T_{n,\alpha}, n \ge 1\}$ be as above. Then there exists $n_\alpha$ such that for all $n \ge n_\alpha$ and $0 \le t \le 2n^{1/6}$,

$$E\exp(tT_{n,\alpha}) \le 2\exp\left(\frac{t^2}{2} + \frac{76t^3}{\sqrt{n}}\right). \qquad (3.3)$$

Proof. From Lemma 2.1 (ii), we have

$$\sum_{j=1}^{n} EX_{j,\alpha}^2 = M_n := \frac{\alpha n(n-1)}{2}. \qquad (3.4)$$

Let $f(t) = 8e^2\sqrt{\alpha t}$, $t > 0$. Then

$$f(M_n) = 4e^2\alpha\sqrt{2n(n-1)} \ge 4e^2\alpha n. \qquad (3.5)$$

From (3.1) and (3.5), we have

$$P\big(X_{n,\alpha}^2 \ge f(M_n)\big) \le P_\alpha\left(|\eta_{n,\alpha}| \ge \frac{2e\sqrt{2}}{\sqrt{n-1}}\right) \le \frac{2\alpha n^2}{4^{2e\sqrt{n/\alpha}}}. \qquad (3.6)$$

Noting that $|X_{n,\alpha}| \le \alpha n$, it thus follows from (3.5) and (3.6) that

$$\sum_{n=2}^{\infty} \frac{E\big[X_{n,\alpha}^2 1(X_{n,\alpha}^2 > f(M_n))\big]}{f(M_n)} < \infty.$$

Therefore, by applying Theorem 4.4 of Strassen [29], we can redefine $\{T_{n,\alpha}, n \ge 1\}$, without changing its distribution, on a richer probability space on which there exists a standard Wiener process $\{W(t), t \ge 0\}$ such that

$$\left|\sqrt{\frac{\alpha n(n-1)}{2}}\, T_{n,\alpha} - W\!\left(\frac{\alpha n(n-1)}{2}\right)\right| = o(n^{3/4}\log n) \quad \text{a.s.} \qquad (3.7)$$

Since

$$\limsup_{n\to\infty} \frac{W(n)}{\sqrt{2n\log\log n}} = 1 \quad \text{a.s.},$$

it follows from (3.7) that

$$\limsup_{n\to\infty} \frac{T_{n,\alpha}}{\sqrt{2\log\log n}} = 1 \quad \text{a.s.} \qquad (3.8)$$

By (3.8) we have

$$E\exp(tT_{n,\alpha}) \le \exp\big(t\sqrt{2\log\log n}\big) \quad \text{for all } t \ge 0 \text{ and for } n \text{ large enough}. \qquad (3.9)$$

Now, let $\xi_1 = 0$ and

$$\xi_j = \frac{X_{j,\alpha}}{\sqrt{\alpha\binom{n}{2}}}\, 1\!\left(\frac{X_{j,\alpha}}{\sqrt{\alpha\binom{n}{2}}} \le \frac{2e\sqrt{2}}{\sqrt{n-1}}\right) \quad \text{for } 2 \le j \le n.$$

Since $\{X_{j,\alpha}, 1 \le j \le n\}$ are martingale differences, $\{\xi_j, 1 \le j \le n\}$ are supermartingale differences. By Lemma 2.1 (ii), we have

$$E(\xi_j^2 \mid \lambda(j-1)) \le \frac{2(j-1)}{n(n-1)}. \qquad (3.10)$$

From (3.10) and noting that $\xi_j \le \frac{2e\sqrt{2}}{\sqrt{n-1}}$, we can apply Theorem 2.1 of Pinelis [23] to obtain

$$E\exp\Big(t\sum_{j=1}^{n}\xi_j\Big) \le E\exp\Big(t\sum_{j=1}^{n}Z_j\Big), \qquad (3.11)$$

where $\{Z_j, 1 \le j \le n\}$ are independent mean 0 random variables such that each $Z_j$ takes only two values, one of which is $\frac{2e\sqrt{2}}{\sqrt{n-1}}$, and satisfies the condition $EZ_j^2 = \frac{2(j-1)}{n(n-1)}$; that is,

$$P\left(Z_j = \frac{2e\sqrt{2}}{\sqrt{n-1}}\right) = \frac{2(j-1)}{8e^2 n + 2(j-1)} \quad \text{and} \quad P\left(Z_j = -\frac{j-1}{en\sqrt{2(n-1)}}\right) = \frac{8e^2 n}{8e^2 n + 2(j-1)}.$$

Since $|Z_j| \le \frac{2e\sqrt{2}}{\sqrt{n-1}}$ for all $j$, we have

$$Ee^{tZ_j} = 1 + \frac{t^2 EZ_j^2}{2} + \sum_{k=3}^{\infty}\frac{E(tZ_j)^k}{k!}
\le 1 + \frac{t^2 EZ_j^2}{2} + \frac{(2e\sqrt{2}t)^3}{6(n-1)^{3/2}}\, e^{2e\sqrt{2}t/\sqrt{n-1}}
\le \exp\left(\frac{t^2 EZ_j^2}{2} + \frac{8\sqrt{2}e^3 t^3}{3(n-1)^{3/2}}\, e^{2e\sqrt{2}t/\sqrt{n-1}}\right). \qquad (3.12)$$

By independence and noting that $\sum_{j=1}^{n} EZ_j^2 = 1$, it follows from (3.12) that

$$E\exp\Big(t\sum_{j=1}^{n}Z_j\Big) \le \exp\left(\frac{t^2}{2} + \frac{8\sqrt{2}e^3 n t^3}{3(n-1)^{3/2}}\, e^{2e\sqrt{2}t/\sqrt{n-1}}\right) \le \exp\left(\frac{t^2}{2} + \frac{76t^3}{\sqrt{n}}\right) \quad \text{for all } n \text{ large enough}. \qquad (3.13)$$

Combining (3.11) and (3.13), we have

$$E\exp\Big(t\sum_{j=1}^{n}\xi_j\Big) \le \exp\left(\frac{t^2}{2} + \frac{76t^3}{\sqrt{n}}\right) \quad \text{for all } n \text{ large enough}. \qquad (3.14)$$

Now, for all $n$,

$$E\exp(tT_{n,\alpha}) = E\left[\exp(tT_{n,\alpha})\Big(1\big(\max_{1\le j\le n}X_{j,\alpha} \le 2e\sqrt{n\alpha}\big) + 1\big(\max_{1\le j\le n}X_{j,\alpha} > 2e\sqrt{n\alpha}\big)\Big)\right]
\le E\exp\Big(t\sum_{j=1}^{n}\xi_j\Big) + \big(E\exp(2tT_{n,\alpha})\big)^{1/2}\Big(\sum_{j=1}^{n}P_\alpha\big(X_{j,\alpha} > 2e\sqrt{n\alpha}\big)\Big)^{1/2}. \qquad (3.15)$$

Since $X_{j,\alpha} \le \alpha(\lambda_1 - 1)$ for all $1 \le j \le n$, we have from (2.7) that

$$P_\alpha\big(X_{j,\alpha} > 2e\sqrt{n\alpha}\big) \le P_\alpha\big(\lambda_1 \ge 2e\sqrt{n/\alpha}\big) \le \alpha n^2\, 4^{-2e\sqrt{n/\alpha}}. \qquad (3.16)$$

Combining (3.9) and (3.14)-(3.16), there exists $n_\alpha$ such that for all $n \ge n_\alpha$ and all $0 \le t \le 2n^{1/6}$,

$$E\exp(tT_{n,\alpha}) \le \exp\left(\frac{t^2}{2} + \frac{76t^3}{\sqrt{n}}\right) + 2\alpha n^3\exp\big(t\sqrt{2\log\log n}\big)\, 4^{-e\sqrt{n/\alpha}}
\le \exp\left(\frac{t^2}{2} + \frac{76t^3}{\sqrt{n}}\right) + 2\alpha n^3\exp\big(2n^{1/6}\sqrt{2\log\log n} - e\sqrt{n/\alpha}\big)
\le 2\exp\left(\frac{t^2}{2} + \frac{76t^3}{\sqrt{n}}\right). \qquad (3.17)$$

Remark 3.3. When $\alpha = 1$, Su [30, p. 345] noted that we can use Theorem 2.1 in Shao [25] to obtain (3.7). In order to do this (for all $\alpha > 0$ fixed), we need to use the fact that $EX_{j,\alpha}^4 = \alpha^2(j-1)(2j-3) + \alpha(\alpha-1)^2(j-1)$ (see Fulman [13]).

For $x \ge 0$, let $f = f_x$ be the unique bounded solution of the Stein equation

$$f'(w) - wf(w) = 1(w \le x) - \Phi(x), \qquad (3.18)$$

and let

$$g(w) = g_x(w) = (wf_x(w))'. \qquad (3.19)$$

Lemma 3.4. Let $n \ge n_\alpha$, where $n_\alpha$ is as in the proof of Lemma 3.2, let $5 < x \le 2n^{1/6}$ and let $f = f_x$ and $g = g_x$ be as in (3.18) and (3.19), respectively. Then

$$Eg(T_{n,\alpha} + u) \le C(1+x^3)(1-\Phi(x)) \quad \text{for all } |u| \le \frac{4e\sqrt{2}}{\sqrt{n-1}}, \qquad (3.20)$$

and

$$Eg^2(T_{n,\alpha} + u) < 3 + 2u^2 \quad \text{for all } u. \qquad (3.21)$$

Proof. From the definition of $f$ and $g$, we have (see Chen and Shao [7, p. 248])

$$g_x(\omega) = \begin{cases} \big(\sqrt{2\pi}(1+\omega^2)e^{\omega^2/2}(1-\Phi(\omega)) - \omega\big)\,\Phi(x) & \text{if } \omega \ge x, \\ \big(\sqrt{2\pi}(1+\omega^2)e^{\omega^2/2}\Phi(\omega) + \omega\big)\,(1-\Phi(x)) & \text{if } \omega < x. \end{cases} \qquad (3.22)$$

Chen and Shao [7, p. 249] proved that $g \ge 0$, that $g(\omega) \le 2(1-\Phi(x))$ for $\omega \le 0$, and that $g(\omega) \le \frac{2}{1+\omega^3}$ for $\omega \ge x$.
Therefore,

$$Eg(T_{n,\alpha}+u) = E\big(g(T_{n,\alpha}+u)1(T_{n,\alpha}+u \le 0)\big) + E\big(g(T_{n,\alpha}+u)1(0 < T_{n,\alpha}+u < x)\big) + E\big(g(T_{n,\alpha}+u)1(T_{n,\alpha}+u \ge x)\big)
\le 2(1-\Phi(x)) + \frac{2}{1+x^3}P_\alpha(T_{n,\alpha}+u \ge x)
+ (1-\Phi(x))\,E\Big[\Big(\sqrt{2\pi}\big(1+(T_{n,\alpha}+u)^2\big)e^{(T_{n,\alpha}+u)^2/2} + (T_{n,\alpha}+u)\Big)1(0 < T_{n,\alpha}+u < x)\Big]. \qquad (3.23)$$

Using Markov's inequality and Lemma 3.2, we have

$$P_\alpha(T_{n,\alpha}+u \ge x) \le e^{-x^2}E\exp(xT_{n,\alpha}+xu) \le 2\exp\left(-\frac{x^2}{2} + \frac{76x^3}{\sqrt{n}} + xu\right) \le C\exp(-x^2/2) \le C(1+x)(1-\Phi(x)). \qquad (3.24)$$

On the other hand,

$$E\big[e^{(T_{n,\alpha}+u)^2/2}1(0 < T_{n,\alpha}+u < x)\big]
\le P_\alpha(T_{n,\alpha}+u > 0) + \int_0^x te^{t^2/2}P_\alpha(T_{n,\alpha}+u > t)\,dt \quad \text{(by integration by parts)}
\le 1 + \sum_{j=1}^{[x]}\int_{j-1}^{j} te^{t^2/2}P_\alpha(T_{n,\alpha}+u > t)\,dt + \int_{[x]}^{x} te^{t^2/2}P_\alpha(T_{n,\alpha}+u > t)\,dt
\le 1 + \sum_{j=1}^{[x]} je^{(1-j^2)/2}\int_{-\infty}^{\infty} e^{jt}P_\alpha(T_{n,\alpha}+u > t)\,dt + Cx
\quad \text{(noting that } e^{t^2/2-jt} \le e^{(1-j^2)/2} \text{ for all } j-1 \le t \le j, \text{ and using Lemma 3.2 for the last integral)}
= 1 + \sum_{j=1}^{[x]} e^{(1-j^2)/2}E\exp\big(j(T_{n,\alpha}+u)\big) + Cx
\le 1 + C\sum_{j=1}^{[x]}\exp\left(\frac{1}{2} + ju + \frac{76j^3}{\sqrt{n}}\right) + Cx \quad \text{(by Lemma 3.2)}
\le 1 + C[x] + Cx \le C(1+x), \qquad (3.25)$$

and similarly,

$$E\big[(T_{n,\alpha}+u)^2 e^{(T_{n,\alpha}+u)^2/2}1(0 < T_{n,\alpha}+u < x)\big] \le C(1+x^3). \qquad (3.26)$$

Combining (3.23)-(3.26), we get (3.20).

To prove (3.21), we note that $0 < f_x(w) \le \sqrt{2\pi}/4$ and $|f_x'(w)| \le 1$ for all $w$ (see, e.g., [8, p. 16]). Thus

$$Eg^2(T_{n,\alpha}+u) = E\big(f(T_{n,\alpha}+u) + (T_{n,\alpha}+u)f'(T_{n,\alpha}+u)\big)^2
\le 2Ef^2(T_{n,\alpha}+u) + 2E\big((T_{n,\alpha}+u)f'(T_{n,\alpha}+u)\big)^2
\le \frac{\pi}{4} + 2E(T_{n,\alpha}+u)^2
= \frac{\pi}{4} + 2ET_{n,\alpha}^2 + 2u^2
< 3 + 2u^2. \qquad (3.27)$$

This completes the proof of Lemma 3.4.

In order to obtain the Kolmogorov bound for $T_{n,\alpha}$, we first bound the Kolmogorov distance between $T_{n,\alpha}^*$ and $Z$, and then estimate the Kolmogorov distance between $T_{n,\alpha}$ and $T_{n,\alpha}^*$. The following lemma provides a moderate deviation result for $T_{n,\alpha}^*$.

Lemma 3.5. There exists $N_\alpha \ge n_\alpha$, where $n_\alpha$ is as in Lemma 3.2, such that for all $n \ge N_\alpha$ and for $5 < x \le 2n^{1/6}$, we have

$$|P_\alpha(T_{n,\alpha}^* \le x) - \Phi(x)| \le \frac{C(1+x^3)(1-\Phi(x))}{\sqrt{n}}. \qquad (3.28)$$

Proof. Let $\varepsilon = 4e\sqrt{2}/\sqrt{n-1}$ and let $f = f_x$ and $g = g_x$ be as in (3.18) and (3.19), respectively. For $n \ge n_\alpha$, we have

$$|P_\alpha(T_{n,\alpha}^* \le x) - \Phi(x)| = |Ef'(T_{n,\alpha}^*) - ET_{n,\alpha}^* f(T_{n,\alpha}^*)|
= |ET_{n,\alpha}f(T_{n,\alpha}) - ET_{n,\alpha}^* f(T_{n,\alpha}^*)|
= \Big|E\int_0^{\eta_{n,\alpha}^* - \eta_{n,\alpha}} g(T_{n,\alpha}+u)\,du\Big|
\le E\int_0^{\varepsilon} g(T_{n,\alpha}+u)1\big(|\eta_{n,\alpha}^* - \eta_{n,\alpha}| \le \varepsilon\big)\,du + E\int_0^{2\sqrt{2\alpha}} g(T_{n,\alpha}+u)1\big(|\eta_{n,\alpha}^* - \eta_{n,\alpha}| > \varepsilon\big)\,du
\le \int_0^{\varepsilon} Eg(T_{n,\alpha}+u)\,du + \int_0^{2\sqrt{2\alpha}} \big(Eg^2(T_{n,\alpha}+u)\big)^{1/2}\, P^{1/2}\big(|\eta_{n,\alpha}^* - \eta_{n,\alpha}| > \varepsilon\big)\,du \qquad (3.29)
\le \frac{C(1+x^3)(1-\Phi(x))}{\sqrt{n}} + \Big(P_\alpha\big(|\eta_{n,\alpha}| > 2e\sqrt{2}/\sqrt{n-1}\big) + P_\alpha\big(|\eta_{n,\alpha}^*| > 2e\sqrt{2}/\sqrt{n-1}\big)\Big)^{1/2}\int_0^{2\sqrt{2\alpha}}(3+2u^2)^{1/2}\,du \quad \text{(by Lemma 3.4)}
\le \frac{C(1+x^3)(1-\Phi(x))}{\sqrt{n}} + 2\sqrt{2\alpha}\,(3+16\alpha)^{1/2}\left(\frac{2\alpha n^2}{4^{2e\sqrt{n/\alpha}}} + \frac{2\alpha^2 n^3}{4^{2e\sqrt{n/\alpha}}}\right)^{1/2} \quad \text{(by Lemma 3.1)}.$$

Note that $e^{-x^2/2} \le \sqrt{2\pi}(1+x^2)(1-\Phi(x))/x$ for all $x > 0$ (see, e.g., [8, p. 38]). Therefore, we can choose $N_\alpha \ge n_\alpha$ such that for all $n \ge N_\alpha$ and for $5 < x \le 2n^{1/6}$,

$$2\sqrt{2\alpha}\,(3+16\alpha)^{1/2}\left(\frac{2\alpha n^2}{4^{2e\sqrt{n/\alpha}}} + \frac{2\alpha^2 n^3}{4^{2e\sqrt{n/\alpha}}}\right)^{1/2} \le \frac{(1+x^3)(1-\Phi(x))}{\sqrt{n}}. \qquad (3.30)$$

Thus, the conclusion of Lemma 3.5 follows from (3.29).

Proof of Theorem 1.1. First, we assume that $\alpha \ge 1$. We rewrite (1.2) as

$$|P_\alpha(T_{n,\alpha} \le z) - \Phi(z)| \le \frac{C_\alpha(1+z^3)(1-\Phi(z))}{\sqrt{n}}. \qquad (3.31)$$

By Theorem 3.1 in Fulman [13], we see that (3.31) holds for all $0 \le z \le 5$. It is also easy to see that if $n < N_\alpha$, where $N_\alpha$ is as in Lemma 3.5, then (3.31) holds for all $0 \le z \le n^{1/6}$ by choosing $C_\alpha$ large enough. Therefore, we can assume that $n \ge N_\alpha$ and prove (3.31) for $5 < z \le n^{1/6}$. Let $\varepsilon = 4e\sqrt{2}/\sqrt{n-1}$.
Then

$$P_\alpha(T_{n,\alpha} \le z) - \Phi(z) = 1 - P_\alpha(T_{n,\alpha} > z) - \Phi(z)
= 1 - P_\alpha\big(T_{n,\alpha}^* > z + T_{n,\alpha}^* - T_{n,\alpha}\big) - \Phi(z)
\le 1 - P_\alpha(T_{n,\alpha}^* > z+\varepsilon) - \Phi(z) + P_\alpha\big(T_{n,\alpha}^* > z+\varepsilon,\ T_{n,\alpha}^* - T_{n,\alpha} > \varepsilon\big)
= P_\alpha(T_{n,\alpha}^* \le z+\varepsilon) - \Phi(z+\varepsilon) + \Phi(z+\varepsilon) - \Phi(z) + P_\alpha\big(\eta_{n,\alpha}^* - \eta_{n,\alpha} > \varepsilon\big)
\le \frac{C\big(1+(z+\varepsilon)^3\big)\big(1-\Phi(z+\varepsilon)\big)}{\sqrt{n}} + \frac{\varepsilon\exp(-z^2/2)}{\sqrt{2\pi}} + P_\alpha\big(\eta_{n,\alpha}^* - \eta_{n,\alpha} > \varepsilon\big) \quad \text{(by Lemma 3.5)}
\le \frac{C(1+z^3)(1-\Phi(z))}{\sqrt{n}} + P_\alpha\big(|\eta_{n,\alpha}^*| > \varepsilon/2\big) + P_\alpha\big(|\eta_{n,\alpha}| > \varepsilon/2\big)
\le \frac{C(1+z^3)(1-\Phi(z))}{\sqrt{n}} \quad \text{(by Lemma 3.1 and (3.30))}. \qquad (3.32)$$

Similarly, we can show that

$$P_\alpha(T_{n,\alpha} \le z) - \Phi(z) \ge -\frac{C(1+z^3)(1-\Phi(z))}{\sqrt{n}}. \qquad (3.33)$$

This completes the proof of (1.2). We also see that (1.2) holds if we replace $T_{n,\alpha}$ by $-T_{n,\alpha}$; indeed, the proof does not change much. We only have to note the following two points. Firstly, if the random variable $X^*$ has the $X$-zero biased distribution, then for all $a \ne 0$, $aX^*$ has the $(aX)$-zero biased distribution (see [8, p. 29]). Secondly, we observe that $X_{j,\alpha} \ge -(\lambda_1' - 1)$, so that by (2.8),

$$P_\alpha\big(-X_{j,\alpha} > 2e\sqrt{n\alpha}\big) \le P_\alpha\big(\lambda_1' \ge 2e\sqrt{n\alpha}\big) \le \frac{n^2}{\alpha}\, 4^{-2e\sqrt{n\alpha}}, \qquad (3.34)$$

and then we replace (3.16) by (3.34). Applying (1.2) with $-T_{n,\alpha}$ in place of $T_{n,\alpha}$, we obtain (1.3).

To obtain (1.2) and (1.3) for $0 < \alpha < 1$, we note that the Jack$_\alpha$ probability that $T_{n,\alpha} = w$ is equal to the Jack$_{1/\alpha}$ probability that $T_{n,1/\alpha} = -w$ (see Fulman [10, p. 277]). From this we conclude that $P_\alpha(T_{n,\alpha} > z) = P_{1/\alpha}(T_{n,1/\alpha} < -z)$ and $P_\alpha(T_{n,\alpha} < -z) = P_{1/\alpha}(T_{n,1/\alpha} > z)$. Therefore, (1.2) holding for $\alpha \ge 1$ implies that (1.3) holds for $0 < \alpha \le 1$, and (1.3) holding for $\alpha \ge 1$ implies that (1.2) holds for $0 < \alpha \le 1$.

Finally, we explain the statement in Remark 1.2. It is clear that we only need to prove (1.4) for $|z| > n^{1/6}$. By Burkholder's inequality and Lemma 2.1, it is easy to see that $E|T_{n,\alpha}|^3 \le C_\alpha$. For $z > n^{1/6}$, we have

$$|P_\alpha(T_{n,\alpha} \le z) - \Phi(z)| = |P_\alpha(T_{n,\alpha} > z) - (1-\Phi(z))|
\le \max\{P_\alpha(T_{n,\alpha} > z),\ 1-\Phi(z)\}
\le \max\left\{\frac{E|T_{n,\alpha}|^3}{z^3},\ \frac{C}{z^3}\right\}
\le \frac{C_\alpha}{z^3} \le \frac{C_\alpha}{\sqrt{n}}. \qquad (3.35)$$

For $z < -n^{1/6}$, we argue similarly.
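Since the Jack$_\alpha$ weights can be enumerated exactly for small $n$, the ratio appearing in (1.2) can also be computed directly. The following Python sketch is a numerical illustration only (it reuses the same elementary helpers as in the sketch after (1.1) and plays no role in the proof): for $\alpha = 1$ and $n = 30$ it evaluates $P_\alpha(T_{n,\alpha} > z)/(1-\Phi(z))$ by exact enumeration; by Theorem 1.1 this ratio differs from 1 by at most $C_\alpha(1+z^3)/\sqrt{n}$ for $0 \le z \le n^{1/6}$.

```python
from math import comb, erf, factorial, sqrt

def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(lam):
    return tuple(sum(1 for row in lam if row > j) for j in range(lam[0])) if lam else ()

def jack_probability(lam, alpha):
    """Jack_alpha weight of lam, via the arm/leg product."""
    conj = conjugate(lam)
    denom = 1.0
    for i, row in enumerate(lam, start=1):
        for j in range(1, row + 1):
            a, l = row - j, conj[j - 1] - i
            denom *= (alpha * a + l + 1) * (alpha * a + l + alpha)
    return alpha ** sum(lam) * factorial(sum(lam)) / denom

def T(lam, alpha):
    """The statistic T_{n,alpha}(lam) of (1.1)."""
    num = alpha * sum(comb(r, 2) for r in lam) - sum(comb(c, 2) for c in conjugate(lam))
    return num / sqrt(alpha * comb(sum(lam), 2))

def normal_tail(z):
    """1 - Phi(z) for the standard normal distribution."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

alpha, n = 1.0, 30
pairs = [(T(lam, alpha), jack_probability(lam, alpha)) for lam in partitions(n)]
for z in (0.5, 1.0, 1.5, n ** (1 / 6)):
    tail = sum(p for t, p in pairs if t > z)
    print(f"z = {z:.3f}   P(T > z) / (1 - Phi(z)) = {tail / normal_tail(z):.3f}")
```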
Acknowledgements

The author is grateful to Professor Louis Chen for many useful discussions and for providing unconditional help. The author also would like to thank Professor Andrew Rosalsky for a very careful reading of the manuscript and valuable comments. This work is supported in part by the Vietnam Institute for Advanced Study in Mathematics (VIASM) and the Vietnam National Foundation for Science and Technology Development (NAFOSTED), grant no. 101.01-2012.13.

References

[1] Bolthausen, E. An estimate of the remainder in a combinatorial central limit theorem. Z. Wahrsch. Verw. Gebiete 66 (1984), no. 3, 379–386.
[2] Borodin, A. and Olshanski, G. Z-measures on partitions and their scaling limits. European J. Combin. 26 (2005), no. 6, 795–834.
[3] Chatterjee, S. Stein's method for concentration inequalities. Probab. Theory Related Fields 138 (2007), no. 1-2, 305–321.
[4] Chatterjee, S. Concentration of Haar measures, with an application to random matrices. J. Funct. Anal. 245 (2007), no. 2, 379–389.
[5] Chatterjee, S. and Dey, P. S. Applications of Stein's method for concentration inequalities. Ann. Probab. 38 (2010), no. 6, 2443–2485.
[6] Chen, L. H. Y., Fang, X. and Shao, Q. M. From Stein identities to moderate deviations. Ann. Probab. 41 (2013), 262–293.
[7] Chen, L. H. Y. and Shao, Q. M. A non-uniform Berry-Esseen bound via Stein's method. Probab. Theory Related Fields 120 (2001), no. 2, 236–254.
[8] Chen, L. H. Y., Goldstein, L. and Shao, Q. M. Normal approximation by Stein's method. Probability and its Applications (New York). Springer, Heidelberg, 2011. xii+405 pp.
[9] Diaconis, P. and Holmes, S. Random walks on trees and matchings. Electron. J. Probab. 7 (2002), 17 pp. (electronic).
[10] Fulman, J. Stein's method, Jack measure, and the Metropolis algorithm. J. Combin. Theory Ser. A 108 (2004), no. 2, 275–296.
[11] Fulman, J. Stein's method and Plancherel measure of the symmetric group. Trans. Amer. Math. Soc. 357 (2005), no. 2, 555–570.
[12] Fulman, J. Martingales and character ratios. Trans. Amer. Math. Soc. 358 (2006), no. 10, 4533–4552.
[13] Fulman, J. An inductive proof of the Berry-Esseen theorem for character ratios. Ann. Comb. 10 (2006), no. 3, 319–332.
[14] Fulman, J. and Goldstein, L. Zero biasing and Jack measures. Combin. Probab. Comput. 20 (2011), 753–762.
[15] Ghosh, S. and Goldstein, L. Concentration of measures via size-biased couplings. Probab. Theory Related Fields 149 (2011), no. 1-2, 271–278.
[16] Goldstein, L. and Reinert, G. Stein's method and the zero bias transformation with application to simple random sampling. Ann. Appl. Probab. 7 (1997), no. 4, 935–952.
[17] Hora, A. Central limit theorem for the adjacency operators on the infinite symmetric group. Comm. Math. Phys. 195 (1998), no. 2, 405–416.
[18] Hora, A. and Obata, N. Quantum probability and spectral analysis of graphs. Theoretical and Mathematical Physics. Springer, Berlin, 2007. xviii+371 pp.
[19] Ivanov, V. and Olshanski, G. Kerov's central limit theorem for the Plancherel measure on Young diagrams. Symmetric functions 2001: surveys of developments and perspectives, 93–151, NATO Sci. Ser. II Math. Phys. Chem., 74, Kluwer Acad. Publ., Dordrecht, 2002.
[20] Kerov, S. Gaussian limit for the Plancherel measure of the symmetric group. C. R. Acad. Sci. Paris Sér. I Math. 316 (1993), no. 4, 303–308.
[21] Kerov, S. Anisotropic Young diagrams and Jack symmetric functions. Funct. Anal. Appl. 34 (2000), 41–51.
[22] Okounkov, A. The uses of random partitions. XIVth International Congress on Mathematical Physics, 379–403, World Sci. Publ., Hackensack, NJ, 2005.
[23] Pinelis, I. Binomial upper bounds on generalized moments and tail probabilities of (super)martingales with differences bounded from above. High dimensional probability, 33–52, IMS Lecture Notes Monogr. Ser., 51, Inst. Math. Statist., Beachwood, OH, 2006.
[24] Raič, M. CLT-related large deviation bounds based on Stein's method. Adv. in Appl. Probab. 39 (2007), no. 3, 731–752.
[25] Shao, Q. M. Almost sure invariance principles for mixing sequences of random variables. Stochastic Process. Appl. 48 (1993), no. 2, 319–334.
[26] Shao, Q. M. and Su, Z. G. The Berry-Esseen bound for character ratios. Proc. Amer. Math. Soc. 134 (2006), no. 7, 2153–2159.
[27] Śniady, P. Gaussian fluctuations of characters of symmetric groups and of Young diagrams. Probab. Theory Related Fields 136 (2006), no. 2, 263–297.
[28] Stein, C. Approximate computation of expectations. Institute of Mathematical Statistics Lecture Notes, Monograph Series, 7. Institute of Mathematical Statistics, Hayward, CA, 1986. iv+164 pp.
[29] Strassen, V. Almost sure behavior of sums of independent random variables and martingales. Proc. Fifth Berkeley Sympos. Math. Statist. and Probability (Berkeley, Calif., 1965/66), Vol. II: Contributions to Probability Theory, Part 1, pp. 315–343. Univ. California Press, Berkeley, Calif., 1967.
[30] Su, Z. G. The law of the iterated logarithm for character ratios. Statist. Probab. Lett. 71 (2005), no. 4, 337–346.