Hindawi Publishing Corporation
Journal of Inequalities and Applications
Volume 2011, Article ID 576301, 9 pages
doi:10.1155/2011/576301

Research Article
Almost Sure Central Limit Theorem for Product of Partial Sums of Strongly Mixing Random Variables

Daxiang Ye and Qunying Wu
College of Science, Guilin University of Technology, Guilin 541004, China
Correspondence should be addressed to Daxiang Ye, 3040801111@163.com
Received 19 September 2010; Revised 1 January 2011; Accepted 26 January 2011
Academic Editor: Ondřej Došlý

Copyright © 2011 D. Ye and Q. Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We give here an almost sure central limit theorem for products of sums of strongly mixing positive random variables.

1. Introduction and Results

In recent decades there has been a lot of work on the almost sure central limit theorem (ASCLT); we refer to Brosamler [1], Schatte [2], Lacey and Philipp [3], and Peligrad and Shao [4]. Khurelbaatar and Rempala [5] gave an ASCLT for products of partial sums of i.i.d. random variables as follows.

Theorem 1.1. Let $\{X_n, n \ge 1\}$ be a sequence of i.i.d. positive random variables with $EX_1=\mu>0$ and $\operatorname{Var}X_1=\sigma^2$. Denote by $\gamma=\sigma/\mu$ the coefficient of variation. Then for any real $x$,

$$\lim_{n\to\infty}\frac{1}{\ln n}\sum_{k=1}^{n}\frac{1}{k}\, I\!\left(\left(\frac{\prod_{i=1}^{k} S_i}{k!\,\mu^{k}}\right)^{1/(\gamma\sqrt{2k})}\le x\right)=F(x)\quad\text{a.s.},\qquad(1.1)$$

where $S_n=\sum_{k=1}^{n}X_k$, $I(\cdot)$ is the indicator function, $F(\cdot)$ is the distribution function of the random variable $e^{N}$, and $N$ is a standard normal variable.

Recently, Jin [6] proved that (1.1) holds under appropriate conditions for strongly mixing positive random variables and gave an ASCLT for products of partial sums of strongly mixing sequences as follows.

Theorem 1.2. Let $\{X_n, n\ge 1\}$ be a sequence of identically distributed, positive, strongly mixing random variables with $EX_1=\mu>0$ and $\operatorname{Var}X_1=\sigma^2$, and let $d_k=1/k$, $D_n=\sum_{k=1}^{n}d_k$. Denote by $\gamma=\sigma/\mu$ the coefficient of variation, $\sigma_n^2=\operatorname{Var}\bigl(\sum_{k=1}^{n}(S_k-k\mu)/(k\sigma)\bigr)$, and $B_n^2=\operatorname{Var}S_n$. Assume

$$E|X_1|^{2+\delta}<\infty \text{ for some } \delta>0,\quad \alpha(n)=O(n^{-r}) \text{ for some } r>1+\frac{2}{\delta},\quad \lim_{n\to\infty}\frac{B_n^2}{n}=\sigma_0^2>0,\quad \inf_{n\in\mathbb{N}}\frac{\sigma_n^2}{n}>0.\qquad(1.2)$$

Then for any real $x$,

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\, I\!\left(\left(\frac{\prod_{i=1}^{k} S_i}{k!\,\mu^{k}}\right)^{1/(\gamma\sigma_k)}\le x\right)=F(x)\quad\text{a.s.}\qquad(1.3)$$

The sequence $\{d_k, k\ge 1\}$ in (1.3) is called the weight. Under the conditions of Theorem 1.2 it is easy to see that (1.3) holds for every sequence $d_k^{*}$ with $0\le d_k^{*}\le d_k$ and $D_n^{*}=\sum_{k\le n}d_k^{*}\to\infty$ [7]. Clearly, the larger the weight sequence $d_k$ is, the stronger the result (1.3).
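A minimal numerical sketch of the i.i.d. case (1.1), assuming standard exponential $X_i$ (so $\mu=\sigma=1$ and $\gamma=1$) and using the fact that $F(x)=\Phi(\ln x)$ for $x>0$; because the averaging is logarithmic, the agreement is only rough at feasible $n$:

```python
# Minimal numerical sketch of (1.1): X_i ~ Exponential(1), so mu = sigma = 1,
# gamma = sigma/mu = 1, and F(x) = P(e^N <= x) = Phi(log x) for x > 0.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000
x = 1.5                                   # point at which both sides are evaluated

X = rng.exponential(scale=1.0, size=n)
S = np.cumsum(X)                          # S_k = X_1 + ... + X_k
k = np.arange(1, n + 1)

# log of (prod_{i<=k} S_i / (k! mu^k))^(1/(gamma sqrt(2k))), computed in logs
log_prod = np.cumsum(np.log(S) - np.log(k))   # sum_{i<=k} log(S_i / (i mu)), mu = 1
T_log = log_prod / np.sqrt(2 * k)             # gamma = 1

# left-hand side of (1.1): logarithmic average of the indicators
lhs = np.sum((T_log <= np.log(x)) / k) / np.log(n)
print(f"log-average = {lhs:.3f},  F(x) = Phi(log x) = {norm.cdf(np.log(x)):.3f}")
```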
In the following sections, let $d_k=e^{(\ln k)^{\alpha}}/k$, $0\le\alpha<1/2$, $D_n=\sum_{k=1}^{n}d_k$, and let "$\ll$" denote the inequality "$\le$" up to some universal constant. We first give an ASCLT for strongly mixing positive random variables.

Theorem 1.3. Let $\{X_n, n\ge 1\}$ be a sequence of identically distributed, positive, strongly mixing random variables with $EX_1=\mu>0$ and $\operatorname{Var}X_1=\sigma^2$, with $d_k$ and $D_n$ as above. Denote by $\gamma=\sigma/\mu$ the coefficient of variation, $\sigma_n^2=\operatorname{Var}\bigl(\sum_{k=1}^{n}(S_k-k\mu)/(k\sigma)\bigr)$, and $B_n^2=\operatorname{Var}S_n$. Assume that

$$E|X_1|^{2+\delta}<\infty \text{ for some } \delta>0,\qquad(1.4)$$
$$\alpha(n)=O(n^{-r}) \text{ for some } r>1+\frac{2}{\delta},\qquad(1.5)$$
$$\lim_{n\to\infty}\frac{B_n^2}{n}=\sigma_0^2>0.\qquad(1.6)$$

The Central Limit Theorem for Sample Means (Averages)
By: OpenStaxCollege

Suppose X is a random variable with a distribution that may be known or unknown (it can be any distribution). Using a subscript that matches the random variable, suppose:

μX = the mean of X
σX = the standard deviation of X

If you draw random samples of size n, then as n increases, the random variable X̄, which consists of sample means, tends to be normally distributed and

X̄ ~ N(μX, σX/√n).

The central limit theorem for sample means says that if you keep drawing larger and larger samples (such as rolling one, two, five, and finally ten dice) and calculating their means, the sample means form their own normal distribution (the sampling distribution). The normal distribution has the same mean as the original distribution and a variance that equals the original variance divided by the sample size. The variable n is the number of values that are averaged together, not the number of times the experiment is done.

To put it more formally, if you draw random samples of size n, the distribution of the random variable X̄, which consists of sample means, is called the sampling distribution of the mean. The sampling distribution of the mean approaches a normal distribution as n, the sample size, increases.

The random variable X̄ has a different z-score associated with it from that of the random variable X. The mean x̄ is the value of X̄ in one sample:

z = (x̄ − μX) / (σX/√n)

μX is the average of both X and X̄.

σX̄ = σX/√n = the standard deviation of X̄, called the standard error of the mean.

To find probabilities for means on the calculator, follow these steps:

2nd DISTR
2:normalcdf
normalcdf(lower value of the area, upper value of the area, mean, standard deviation/√sample size)

where:
• mean is the mean of the original distribution
• standard deviation is the standard deviation of the original distribution
• sample size = n

An unknown distribution has a mean of 90 and a standard deviation of 15. Samples of size n = 25 are drawn randomly from the population.

a. Find the probability that the sample mean is between 85 and 92.

a. Let X = one value from the original unknown population. The probability question asks you to find a probability for the sample mean. Let X̄ = the mean of a sample of size 25. Since μX = 90, σX = 15, and n = 25,

X̄ ~ N(90, 15/√25).

Find P(85 < x̄ < 92). Draw a graph.

P(85 < x̄ < 92) = 0.6997. The probability that the sample mean is between 85 and 92 is 0.6997.

normalcdf(lower value, upper value, mean, standard error of the mean)

The parameter list is abbreviated (lower value, upper value, μ, σ/√n):

normalcdf(85, 92, 90, 15/√25) = 0.6997

b. Find the value that is two standard deviations above the expected value, 90, of the sample mean.

b. To find the value that is two standard deviations above the expected value 90, use the formula

value = μX + (#ofSTDEVs)(σX/√n)
value = 90 + (2)(15/√25) = 96

The value that is two standard deviations above the expected value is 96.

The standard error of the mean is σX/√n = 15/√25 = 3. Recall that the standard error of the mean is a description of how far (on average) the sample mean will be from the population mean in repeated simple random samples of size n.
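Outside the calculator, the same two answers can be reproduced with any normal-CDF routine; a minimal Python sketch using scipy.stats.norm in place of normalcdf, with the numbers from the example above:

```python
# Sampling distribution of the mean for the example above:
# population mean 90, population SD 15, sample size n = 25.
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 90, 15, 25
se = sigma / sqrt(n)                         # standard error of the mean = 3.0

# part a: P(85 < x-bar < 92), the analogue of normalcdf(85, 92, 90, 15/sqrt(25))
prob = norm.cdf(92, loc=mu, scale=se) - norm.cdf(85, loc=mu, scale=se)
print(round(prob, 4))                        # 0.6997

# part b: the value two standard errors above the expected value 90
print(mu + 2 * se)                           # 96.0
```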
Try It
An unknown distribution has a mean of 45 and a standard deviation of eight. Samples of size n = 30 are drawn randomly from the population. Find the probability that the sample mean is between 42 and 50.

P(42 < x̄ < 50) = normalcdf(42, 50, 45, 8/√30) = 0.9797

The length of time, in hours, it takes an "over 40" group of people to play one soccer match is normally distributed with a mean of two hours and a standard deviation of 0.5 hours. A sample of size n = 50 is drawn randomly from the population. Find the probability that the sample mean is between 1.8 hours and 2.3 hours.

Let X = the time, in hours, it takes to play one soccer match. The probability question asks you to find a probability for the sample mean time, in hours, it takes to play one soccer match. Let X̄ = the mean time, in hours, it takes to play one soccer match.

If μX = _, σX = _, and n = _, then X̄ ~ N(_, _) by the central limit theorem for means.

μX = 2, σX = 0.5, n = 50, and X̄ ~ N(2, 0.5/√50).

Find P(1.8 < x̄ < 2.3). Draw a graph.

P(1.8 < x̄ < 2.3) = 0.9977
normalcdf(1.8, 2.3, 2, 0.5/√50) = 0.9977

The probability that the mean time is between 1.8 hours and 2.3 hours is 0.9977.

Try It
The length of time taken on the SAT for a group of students is normally distributed with a mean of 2.5 hours and a standard deviation of 0.25 hours. A sample size of n = 60 is drawn randomly from the population. Find the probability that the sample mean is between two hours and three hours.

P(2 < x̄ < 3) = normalcdf(2, 3, 2.5, 0.25/√60) = 1

To find percentiles for …
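Percentiles of the sampling distribution of the mean come from the inverse normal CDF. A minimal Python sketch that checks both Try It answers and, purely for illustration, the 95th percentile of the mean soccer-match time (μ = 2, σ = 0.5, n = 50):

```python
# Checking the two Try It answers, plus one percentile of a sampling distribution.
from math import sqrt
from scipy.stats import norm

# Try It: mean 45, SD 8, n = 30, P(42 < x-bar < 50)
se = 8 / sqrt(30)
print(round(norm.cdf(50, 45, se) - norm.cdf(42, 45, se), 4))   # 0.9797

# Try It: SAT times, mean 2.5 hours, SD 0.25 hours, n = 60, P(2 < x-bar < 3)
se = 0.25 / sqrt(60)
print(round(norm.cdf(3, 2.5, se) - norm.cdf(2, 2.5, se), 4))   # 1.0

# Illustration only: 95th percentile of the mean soccer-match time
# (mu = 2, sigma = 0.5, n = 50), via the inverse normal CDF
se = 0.5 / sqrt(50)
print(round(norm.ppf(0.95, loc=2, scale=se), 4))               # 2.1163
```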
Hindawi Publishing Corporation
Journal of Inequalities and Applications
Volume 2010, Article ID 130915, 10 pages
doi:10.1155/2010/130915

Research Article
Almost Sure Central Limit Theorem for a Nonstationary Gaussian Sequence

Qing-pei Zang
School of Mathematical Science, Huaiyin Normal University, Huaian 223300, China
Correspondence should be addressed to Qing-pei Zang, zqphunhu@yahoo.com.cn
Received 4 May 2010; Revised 7 July 2010; Accepted 12 August 2010
Academic Editor: Soo Hak Sung

Copyright © 2010 Qing-pei Zang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Let $\{X_n; n\ge 1\}$ be a standardized non-stationary Gaussian sequence, and denote $S_n=\sum_{k=1}^{n}X_k$, $\sigma_n^2=\operatorname{Var}S_n$. Under some additional condition, let the constants $\{u_{ni}; 1\le i\le n, n\ge 1\}$ satisfy $\sum_{i=1}^{n}(1-\Phi(u_{ni}))\to\tau$ as $n\to\infty$ for some $\tau\ge 0$ and $\min_{1\le i\le n}u_{ni}\ge c(\log n)^{1/2}$ for some $c>0$. Then we have
$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\, I\Bigl\{\bigcap_{i=1}^{k}\{X_i\le u_{ki}\},\ \frac{S_k}{\sigma_k}\le x\Bigr\}=e^{-\tau}\Phi(x)$$
almost surely for any $x\in\mathbb{R}$, where $I(A)$ is the indicator function of the event $A$ and $\Phi(x)$ stands for the standard normal distribution function.

1. Introduction

When $\{X, X_n; n\ge 1\}$ is a sequence of independent and identically distributed (i.i.d.) random variables, let $S_n=\sum_{k=1}^{n}X_k$ for $n\ge 1$ and $M_n=\max_{1\le k\le n}X_k$ for $n\ge 1$. If $EX=0$ and $\operatorname{Var}X=1$, the so-called almost sure central limit theorem (ASCLT) has the simplest form as follows:
$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\, I\Bigl(\frac{S_k}{\sqrt{k}}\le x\Bigr)=\Phi(x),\qquad(1.1)$$
almost surely for all $x\in\mathbb{R}$, where $I(A)$ is the indicator function of the event $A$ and $\Phi(x)$ stands for the standard normal distribution function. This result was first proved independently by Brosamler [1] and Schatte [2] under a stronger moment condition; since then, this type of almost sure version has been extended in different directions. For example, Fahrner and Stadtmüller [3] and Cheng et al. [4] extended this almost sure convergence for partial sums to the case of maxima of i.i.d. random variables. Under some natural conditions, they proved the following:
$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\, I\Bigl(\frac{M_k-b_k}{a_k}\le x\Bigr)=G(x)\quad\text{a.s.}\qquad(1.2)$$
for all $x\in\mathbb{R}$, where $a_k>0$ and $b_k\in\mathbb{R}$ satisfy
$$P\Bigl(\frac{M_k-b_k}{a_k}\le x\Bigr)\longrightarrow G(x)\quad\text{as } k\to\infty\qquad(1.3)$$
for any continuity point $x$ of $G$. In a related work, Csáki and Gonchigdanzan [5] investigated the validity of (1.2) for maxima of stationary Gaussian sequences under some mild condition, whereas Chen and Lin [6] extended it to non-stationary Gaussian sequences. Recently, Dudziński [7] obtained a two-dimensional version for a standardized stationary Gaussian sequence. In this paper, inspired by the above results, we further study the ASCLT in the joint version for a non-stationary Gaussian sequence.

2. Main Result

Throughout this paper, let $\{X_n; n\ge 1\}$ be a non-stationary standardized normal sequence, and $\sigma_n^2=\operatorname{Var}S_n$. Here $a\ll b$ and $a\sim b$ stand for $a=O(b)$ and $a/b\to 1$, respectively. $\Phi(x)$ is the standard normal distribution function, and $\phi(x)$ is its density function; $C$ will denote a positive constant, although its value may change from one appearance to the next. Now we state our main result as follows.

Theorem 2.1. Let $\{X_n; n\ge 1\}$ be a sequence of non-stationary standardized Gaussian variables with covariance matrix $(r_{ij})$ such that $0\le r_{ij}\le\rho_{|i-j|}$ for $i\ne j$, where $\rho_n\le 1$ for all $n\ge 1$ and $\sup_{s\ge n}\sum_{i=s-n}^{s-1}\rho_i\log\ldots$

Vietnam Journal of Mathematics 33:4 (2005) 443–461

Central Limit Theorem for Functional of Jump Markov Processes

Nguyen Van Huu, Vuong Quan Hoang, and Tran Minh Ngoc
Department of Mathematics, Hanoi National University, 334 Nguyen Trai Str., Hanoi, Vietnam
Received February 8, 2005; Revised May 19, 2005

Abstract. In this paper some conditions are given to ensure that for a jump homogeneous Markov process $\{X(t), t\ge 0\}$ the law of the integral functional of the process, $T^{-1/2}\int_0^T\varphi(X(t))\,dt$, converges to the normal law $N(0,\sigma^2)$ as $T\to\infty$, where $\varphi$ is a mapping from the state space $E$ into $\mathbb{R}$.

1. Introduction

The central limit theorem is a subject investigated intensively by many well-known probabilists such as Lindeberg, Chung, and others. The results concerning central limit theorems, the law of the iterated logarithm, and the lower and upper bounds of moderate deviations are well understood for sequences of independent random variables and for martingales, but less is known for dependent random variables such as Markov chains and Markov processes. The first result on the central limit theorem for functionals of a stationary Markov chain with a finite state space can be found in the book of Chung [5]. A technical method for establishing the central limit theorem is the regeneration method. The main idea of this method is to analyse a Markov process with arbitrary state space by dividing it into independent and identically distributed random blocks between visits to a fixed state (or atom). This technique has been developed by Athreya-Ney [2], Nummelin [10], Meyn-Tweedie [9], and recently by Chen [4]. The technical method used in this paper is based on the central limit theorem for martingales and the ergodic theorem.

The paper is organized as follows. In Sec. 2, we shall prove that for a positive recurrent Markov sequence $\{X_n, n\ge 0\}$ with Borel state space $(E,\mathcal{B})$ and for $\varphi: E\to\mathbb{R}$ such that $\varphi(x)=f(x)-Pf(x)=f(x)-\int_E f(y)P(x,dy)$ with $f: E\to\mathbb{R}$ such that $\int_E f^2(x)\,\Pi(dx)<\infty$, where $P(x,\cdot)$ is the transition probability and $\Pi(\cdot)$
is the stationary distribution of the process, the distribution of $n^{-1/2}\sum_{i=1}^{n}\varphi(X_i)$ converges to the normal law $N(0,\sigma^2)$ with
$$\sigma^2=\int_E\bigl(\varphi^2(x)+2\varphi(x)Pf(x)\bigr)\,\Pi(dx).$$
The central limit theorem for the integral functional $T^{-1/2}\int_0^T\varphi(X(t))\,dt$ of a jump Markov process $\{X(t), t\ge 0\}$ will be established and proved in Sec. 3. Some examples will be given in Sec. 4. It is necessary to emphasize that the conditions for the asymptotic normality of $n^{-1/2}\sum_{i=1}^{n}\varphi(X_i)$ are the same as in [8], but they are not equivalent to the ones established in [10, 11]. The results on the central limit theorem for jump Markov processes obtained in this paper are quite new.

2. Central Limit Theorem for the Functional of a Markov Sequence

Let us consider a Markov sequence $\{X_n, n\ge 0\}$ defined on a basic probability space $(\Omega,\mathcal{F},P)$ with Borel state space $(E,\mathcal{B})$, where $\mathcal{B}$ is the $\sigma$-algebra generated by a countable family of subsets of $E$. Suppose that $\{X_n, n\ge 0\}$ is homogeneous with transition probability
$$P(x,A)=P(X_{n+1}\in A\mid X_n=x),\quad A\in\mathcal{B}.$$
We have the following definitions.

Definition 2.1. The Markov process $\{X_n, n\ge 0\}$ is said to be irreducible if there exists a $\sigma$-finite measure $\mu$ on $(E,\mathcal{B})$ such that for all $A\in\mathcal{B}$, $\mu(A)>0$ implies $\sum_{n=1}^{\infty}P^n(x,A)>0$ for all $x\in E$, where $P^n(x,A)=P(X_{m+n}\in A\mid X_m=x)$. The measure $\mu$ is called an irreducibility measure. By Proposition 2.4 of Nummelin [10], there exists a maximal irreducibility measure $\mu^{*}$ with the property that if $\mu$ is any irreducibility measure then $\mu\ll\mu^{*}$.

Definition 2.2. The Markov process $\{X_n, n\ge 0\}$ is said to be recurrent if
$$\sum_{n=1}^{\infty}P^n(x,A)=\infty\quad\text{for all } x\in E \text{ and all } A\in\mathcal{B} \text{ with } \mu^{*}(A)>0.$$
The process is said to be Harris recurrent if $P_x(X_n\in A\ \text{i.o.})=1$.

Let us notice that a process which is Harris recurrent is also recurrent.
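As an aside, the kind of limit behaviour treated in this section can be seen in a small Monte Carlo sketch. It is not the paper's construction: it assumes a two-state chain and the centered functional $\varphi(x)=x-E_\Pi[x]$, for which the asymptotic variance has a simple closed form.

```python
# Monte Carlo sketch: for a two-state Markov chain and the centered functional
# phi(x) = x - E_Pi[x], n^(-1/2) * sum_{i<=n} phi(X_i) is approximately N(0, sigma^2).
import numpy as np

rng = np.random.default_rng(1)
p, q = 0.1, 0.2                  # transition probabilities 0 -> 1 and 1 -> 0
pi1 = p / (p + q)                # stationary probability of state 1 (here 1/3)
n, reps = 2_000, 1_000

Z = np.empty(reps)
for r in range(reps):
    u = rng.random(n)
    x, total = 0, 0.0
    for i in range(n):
        x = (u[i] < p) if x == 0 else (u[i] >= q)   # one step of the chain
        total += x - pi1                            # phi(x) = x - E_Pi[x]
    Z[r] = total / np.sqrt(n)

# For this chain Cov(X_0, X_k) = pi0*pi1*lam^k with lam = 1 - p - q, so the
# asymptotic variance is sigma^2 = pi0*pi1*(1 + lam)/(1 - lam).
lam = 1 - p - q
sigma = np.sqrt(pi1 * (1 - pi1) * (1 + lam) / (1 - lam))
print(f"empirical sd  = {Z.std():.3f}   asymptotic sigma = {sigma:.3f}")
print(f"P(Z <= sigma) = {(Z <= sigma).mean():.3f}   Phi(1) = 0.841")
```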
Theorem 2.1. If $\{X_n, n\ge 0\}$ is recurrent, then there exists an invariant measure $\Pi(\cdot)$ on $(E,\mathcal{B})$, unique up to constant multiples, …

CONVERGENCE RATE IN THE CENTRAL LIMIT THEOREM FOR THE CURIE-WEISS-POTTS MODEL

HAN HAN (HT080869E)

A THESIS SUBMITTED FOR THE DEGREE OF MASTER OF SCIENCE
DEPARTMENT OF MATHEMATICS
NATIONAL UNIVERSITY OF SINGAPORE

Acknowledgements

First and foremost, it is my great honor to work under Assistant Professor Sun Rongfeng, for he has been more than just a supervisor to me but also a supportive friend; never in my life have I met another person who is so knowledgeable yet so humble. Apart from the inspiring ideas and endless support that Prof. Sun has given me, I would like to express my sincere thanks and heartfelt appreciation for his patient and selfless sharing of his knowledge of probability theory and statistical mechanics, which has tremendously enlightened me. I would also like to thank him for entertaining all my impromptu visits to his office for consultation.

Many thanks to all the professors in the Mathematics department who have taught me. Special thanks to Professors Yu Shih-Hsien and Xu Xingwang for patiently answering my questions when I attended their classes.

I would also like to take this opportunity to thank the administrative staff of the Department of Mathematics for all their kindness in offering administrative assistance to me throughout my master's study at NUS. Special mention goes to Ms. Shanthi D/O D Devadas, Mdm. Tay Lee Lang, and Mdm. Lum Yi Lei for always entertaining my requests with a smile on their faces.

Last but not least, to my family and my classmates, Wang Xiaoyan, Huang Xiaofeng, and Hou Likun: thanks for all the laughter and support you have given me throughout my master's study. It will be a memorable chapter of my life.

Han Han
Summer 2010

Contents
Acknowledgements
Summary
1 Introduction
2 The Curie-Weiss-Potts Model
2.1 The Curie-Weiss-Potts Model
2.2 The Phase Transition
3 Stein's Method and Its Application
3.1 The Stein Operator
3.2 The Stein Equation
3.3 An Approximation Theorem
3.4 An Application of Stein's Method
4 Main Results
Bibliography

Summary

There is a long tradition of considering mean-field models in statistical mechanics. The Curie-Weiss-Potts model is famous since it explicitly exhibits a number of properties of real substances, such as multiple phases and metastable states. The aim of this paper is to prove Berry-Esseen bounds for the sums of the random variables occurring in a statistical mechanical model called the Curie-Weiss-Potts model, or mean-field Potts model. To this end, we will apply Stein's method using exchangeable pairs. The aim of this thesis is to calculate the convergence rate in the central limit theorem for the Curie-Weiss-Potts model.

In Chapter 1, we give an introduction to this problem. In Chapter 2, we introduce the Curie-Weiss-Potts model, including the Ising model and the Curie-Weiss model; then we give some results about the phase transition of the Curie-Weiss-Potts model. In Chapter 3, we first state Stein's method, then give the Stein operator and an approximation theorem; in Section 3.4 of that chapter, we give an application of Stein's method. In Chapter 4, we state the main result of this thesis and prove it.

Chapter 1
Introduction

There is a long tradition of considering mean-field models in statistical mechanics. The Curie-Weiss-Potts model is famous since it explicitly exhibits a number of properties of real substances, such as multiple phases and metastable states. The aim of this work is to prove Berry-Esseen bounds for the sums of the …

A note on the almost sure limit theorem for self-normalized partial sums of random variables in the domain of attraction of the normal law
Journal of Inequalities and Applications 2012, 2012:17
doi:10.1186/1029-242X-2012-17
Qunying Wu (wqy666@glite.edu.cn)
ISSN 1029-242X
Article type: Research
Submission date: 4 August 2011; Acceptance date: 20 January 2012; Publication date: 20 January 2012
Article URL: http://www.journalofinequalitiesandapplications.com/content/2012/1/17
This peer-reviewed article was published immediately upon acceptance. It can be downloaded, printed and distributed freely for any purposes (see copyright notice below).
Journal of Inequalities and Applications
© 2012 Wu; licensee Springer. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A note on the almost sure limit theorem for self-normalized partial sums of random variables in the domain of attraction of the normal law

Qunying Wu 1,2
1 College of Science, Guilin University of Technology, Guilin 541004, P. R. China
2 Guangxi Key Laboratory of Spatial Information and Geomatics, Guilin 541004, P. R. China
Email address: wqy666@glite.edu.cn

Abstract

Let $X, X_1, X_2, \ldots$ be a sequence of independent and identically distributed random variables in the domain of attraction of a normal distribution. A universal result in the almost sure limit theorem for the self-normalized partial sums $S_n/V_n$ is established, where $S_n=\sum_{i=1}^{n}X_i$ and $V_n^2=\sum_{i=1}^{n}X_i^2$.

Mathematics Subject Classification: 60F15.
Keywords: domain of attraction of the normal law; self-normalized partial sums; almost sure central limit theorem.

1. Introduction

Throughout this article, we assume $\{X, X_n\}_{n\in\mathbb{N}}$ is a sequence of independent and identically distributed (i.i.d.) random variables with a non-degenerate distribution function $F$. For each $n\ge 1$, the symbol $S_n/V_n$ denotes the self-normalized partial sum, where $S_n=\sum_{i=1}^{n}X_i$ and $V_n^2=\sum_{i=1}^{n}X_i^2$. We say that the random variable $X$ belongs to the domain of attraction of the normal law if there exist constants $a_n>0$, $b_n\in\mathbb{R}$ such that
$$\frac{S_n-b_n}{a_n}\xrightarrow{d}N,\qquad(1)$$
where $N$ is the standard normal random variable; in this case we say that $\{X_n\}_{n\in\mathbb{N}}$ satisfies the central limit theorem (CLT). It is known that (1) holds if and only if
$$\lim_{x\to\infty}\frac{x^2 P(|X|>x)}{EX^2 I(|X|\le x)}=0.\qquad(2)$$
In contrast to the well-known classical central limit theorem, Giné et al. [1] obtained the following self-normalized version of the central limit theorem: …

The Central Limit Theorem for Sums
By: OpenStaxCollege

Suppose X is a random variable with a distribution that may be known or unknown (it can be any distribution) and suppose:

μX = the mean of X
σX = the standard deviation of X

If you draw random samples of size n, then as n increases, the random variable ΣX consisting of sums tends to be normally distributed and

ΣX ~ N((n)(μX), (√n)(σX)).

The central …

Formula Review

The Central Limit Theorem for Sample Means: X̄ ~ N(μX, σX/√n)

The Mean X̄: μX

Central Limit Theorem for Sample Means z-score and standard error of the mean:
z = (x̄ − μX) / (σX/√n)

… accounts for the different probabilities. Find the 95th percentile for the mean time to complete one month's reviews. Sketch the graph.

… distribution for averages?
• The mean, median, and mode are equal.
• The area under the curve is one.
• The curve never touches the x-axis.
• The curve is skewed to the right.
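As a closing illustration, the Formula Review statements can be seen in a short simulation; the exponential population (mean 4, SD 4) and the sample size n = 25 are arbitrary choices:

```python
# Quick simulation of the Formula Review facts with an arbitrary non-normal
# population: Exponential with mean 4 and SD 4, samples of size n = 25.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 4.0, 4.0, 25, 100_000

samples = rng.exponential(scale=mu, size=(reps, n))
sums = samples.sum(axis=1)     # approx N(n*mu, sqrt(n)*sigma)
means = samples.mean(axis=1)   # approx N(mu, sigma/sqrt(n))

print("sums : mean %.2f (expect %.1f), sd %.2f (expect %.1f)"
      % (sums.mean(), n * mu, sums.std(), np.sqrt(n) * sigma))
print("means: mean %.2f (expect %.1f), sd %.2f (expect %.1f)"
      % (means.mean(), mu, means.std(), sigma / np.sqrt(n)))

# z-score of one observed sample mean, as in the Formula Review
z = (means[0] - mu) / (sigma / np.sqrt(n))
print("example z-score:", round(z, 2))
```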