Statistics in Geophysics: Inferential Statistics



Contents

Statistics in Geophysics: Inferential Statistics
Steffen Unkel
Department of Statistics, Ludwig-Maximilians-University Munich, Germany
Winter Term 2013/14

Outline: Estimation by Maximum Likelihood; Properties of Estimators; Confidence Intervals

Parameter estimation

We will be studying problems of statistical inference. Many problems of inference have been dichotomized into two areas: estimation of parameters and tests of hypotheses.

Parameter estimation: let $X$ be a random variable whose density is $f_X(x; \theta)$, where the form of the density is assumed known except that it contains an unknown parameter $\theta$. The problem is then to use the observed values $x_1, \ldots, x_n$ of a random sample $X_1, \ldots, X_n$ to estimate the value of $\theta$, or the value of some function of $\theta$, say $\tau(\theta)$.

Estimator and estimate

Any statistic $T = g(X_1, \ldots, X_n)$ whose values are used to estimate $\theta$ is defined to be an estimator of $\theta$. That is, $T$ is a known function of observable random variables that is itself a random variable. An estimate is the realized value $t = g(x_1, \ldots, x_n)$ of an estimator, which is a function of the realized values $x_1, \ldots, x_n$.

Example: $\bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i$ is an estimator of a mean $\mu$, and $\bar{x}_n$ is an estimate of $\mu$. Here, $T$ is $\bar{X}_n$, $t$ is $\bar{x}_n$, and $g(\cdot)$ is the function defined by summing the arguments and then dividing by $n$.

Background

In 1921, R. A. Fisher pointed out an attractive rationale, called maximum likelihood (ML), for estimating parameters. This procedure says one should examine the likelihood function of the sample values and take as the estimates of the unknown parameters those values that maximize this likelihood function. ML is a unifying concept that covers a broad range of problems. It is generally accepted as the best
rationale to apply in estimating parameters, when one is willing to assume that the form of the population probability law is known.

Likelihood function

If $X_1, \ldots, X_n$ are an i.i.d. sample from a population with pdf or pmf $f(x \mid \theta)$, the likelihood function is defined by
$$L(\theta) = L(\theta \mid x_1, \ldots, x_n) = \prod_{i=1}^n f(x_i \mid \theta).$$

Maximum likelihood principle: given $x_1, \ldots, x_n$, take as the estimate of $\theta$ the value $\hat{\theta}$ that maximizes the likelihood, that is, $L(\hat{\theta}) = \max_{\theta} L(\theta)$. The value $\hat{\theta}$ that maximizes the likelihood is called the maximum likelihood estimate (MLE) for $\theta$.

Log-likelihood and score function

It is often more convenient to work with the logarithm of the likelihood function, called the log-likelihood:
$$l(\theta) = \ln L(\theta) = \sum_{i=1}^n \ln f(x_i \mid \theta).$$
If the log-likelihood is differentiable (in $\theta$), possible candidates for the MLE are the values that solve
$$s(\theta) = \frac{\partial l(\theta)}{\partial \theta} = 0.$$
The first derivative of the log-likelihood is called the score function.

Example

Let $x_1, \ldots, x_n$ be realizations from $X_i \sim P(\lambda)$ i.i.d. ($i = 1, \ldots, n$) with unknown parameter $\lambda$. The aim is to estimate $\lambda$ by maximum likelihood. Likelihood function:
$$L(\lambda) = f(x_1, \ldots, x_n \mid \lambda) = f(x_1 \mid \lambda) \cdots f(x_n \mid \lambda) = \prod_{i=1}^n f(x_i \mid \lambda) = \prod_{i=1}^n \frac{\lambda^{x_i} \exp(-\lambda)}{x_i!}.$$
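For the Poisson example, setting the score to zero gives the closed-form MLE $\hat{\lambda} = \bar{x}$. A minimal sketch in Python can confirm this numerically; the sample values below are hypothetical, chosen to resemble draws from Pois(2):

```python
import math

def pois_loglik(lam, xs):
    # l(lambda) = sum_i [ x_i * ln(lambda) - lambda - ln(x_i!) ]
    return sum(x * math.log(lam) - lam - math.lgamma(x + 1) for x in xs)

xs = [1, 3, 2, 2, 4, 0, 2, 1, 3, 2]        # hypothetical sample of n = 10

# Closed form: s(lambda) = sum(x_i)/lambda - n = 0  =>  lambda_hat = sample mean
lam_hat = sum(xs) / len(xs)

# Numerical sanity check: maximize the log-likelihood over a fine grid
grid = [k / 100 for k in range(1, 1001)]   # 0.01, 0.02, ..., 10.00
lam_num = max(grid, key=lambda lam: pois_loglik(lam, xs))

print(lam_hat, lam_num)
```

The grid search is only a sanity check; for models without a closed-form MLE one would hand the log-likelihood to a numerical optimizer instead.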
Example (continued)

[Figure: likelihood $L(\lambda)$ for an i.i.d. sample of $n = 10$ from $X \sim \mathrm{Pois}(\lambda = 2)$.]

[Figure: log-likelihood $l(\lambda)$ for the same sample.]

[Figure: score function $s(\lambda)$ for the same sample.]

Other estimation methods

The method of moments uses sample moments to estimate the parameters of an assumed probability law. Least squares estimation minimizes the sum of the squared deviations between the observed values and the fitted values. Bayesian estimation is based on combining the evidence contained in the data with prior knowledge, based on subjective probabilities, of the values of unknown parameters.

Evaluating estimators

We have outlined reasonable techniques for finding estimators of parameters. Are some of the many possible estimators better, in some sense, than others?
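That question can be made concrete with a quick simulation. For normal data, both the sample mean and the sample median estimate $\mu$, but the mean varies less across repeated samples. This is a sketch only; the seed and all constants are arbitrary choices:

```python
import random
import statistics

random.seed(7)                              # arbitrary seed, for reproducibility
mu, sigma, n, reps = 0.0, 1.0, 25, 4000

means, medians = [], []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.mean(xs))
    medians.append(statistics.median(xs))

v_mean = statistics.pvariance(means)        # theory: sigma^2 / n = 0.04
v_median = statistics.pvariance(medians)    # roughly (pi/2) * sigma^2 / n, larger
print(v_mean, v_median)
```

Both estimators are (asymptotically) unbiased here, yet the mean has the smaller variance, which is exactly the kind of comparison the following criteria formalize.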
When we are faced with the choice of two or more estimators for the same parameter, it becomes important to develop criteria for comparing them. We will now define certain properties, which an estimator may or may not possess, that will help us decide whether one estimator is better than another.

Unbiasedness

Definition: an estimator $T = g(X_1, \ldots, X_n)$ is defined to be an unbiased estimator of an unknown parameter $\theta$ if and only if $E(T) = \theta$ for all values of $\theta$. The difference $E(T) - \theta$ is called the bias of $T$ and can be positive, negative, or zero. An estimator $T$ of $\theta$ is said to be asymptotically unbiased if $\lim_{n \to \infty} E(T) = \theta$.

Precision of estimation

For observations $x_1, \ldots, x_n$, an estimator $T$ yields an estimate $t = g(x_1, \ldots, x_n)$. In general, the estimate will not be equal to $\theta$. For unbiased estimators the precision of the estimation method is captured by the variance of the estimator, $\mathrm{Var}(T)$. The square root of $\mathrm{Var}(T)$ (the standard deviation of $T$) is called the standard error, which in general has to be estimated itself.

Lower bound for the variance

Let $X$ be a random variable with density $f(x, \theta)$. Under certain regularity conditions,
$$\mathrm{Var}(T) \geq \frac{1}{n\, E\!\left[\left(\frac{\partial}{\partial \theta} \ln f(x, \theta)\right)^2\right]},$$
where $T$ is an unbiased estimator of $\theta$. This is called the Cramér-Rao inequality, and the right-hand side is called the Cramér-Rao lower bound for the variance of unbiased estimators of $\theta$.

Mean-squared error

Definition: the mean-squared error (MSE) of $T = g(X_1, \ldots, X_n)$ (as an estimator for $\theta$) is
$$\mathrm{MSE}(T) = E[(T - \theta)^2] = \mathrm{Var}(T) + (E(T) - \theta)^2.$$
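The decomposition $\mathrm{MSE} = \mathrm{Var} + \mathrm{bias}^2$ can be checked by Monte Carlo for the variance estimator that divides by $n$, which is biased downward. A sketch, with arbitrary seed and parameters:

```python
import random
import statistics

random.seed(1)                 # arbitrary seed, for reproducibility
mu, sigma2, n, reps = 0.0, 4.0, 5, 20000

estimates = []
for _ in range(reps):
    xs = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    # variance estimator dividing by n (not n - 1): biased downward
    estimates.append(sum((x - xbar) ** 2 for x in xs) / n)

bias = statistics.mean(estimates) - sigma2        # theory: -sigma2 / n = -0.8
var = statistics.pvariance(estimates)
mse = statistics.mean((t - sigma2) ** 2 for t in estimates)

# The decomposition MSE = Var + bias^2 holds up to floating-point error
print(bias, var + bias ** 2, mse)
```

The estimated bias comes out close to the theoretical value $-\sigma^2/n$, and the simulated MSE matches the variance-plus-squared-bias decomposition.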
Suppose $T$ is an unbiased estimator of $\theta$; then $\mathrm{MSE}(T) = \mathrm{Var}(T)$.

Consistency

Definition: let $T = g(X_1, \ldots, X_n)$ be an estimator for $\theta$. Then $T$ is a consistent estimator for $\theta$ if $\lim_{n \to \infty} P(|T - \theta| \geq \epsilon) = 0$ for any $\epsilon > 0$. From the Chebyshev inequality we know that
$$P(|T - \theta| \geq \epsilon) \leq \frac{E[(T - \theta)^2]}{\epsilon^2} = \frac{\mathrm{MSE}(T)}{\epsilon^2}.$$
It follows that if $\mathrm{MSE}(T) \to 0$ as $n \to \infty$, then $T$ is consistent.

Efficiency

Definition: if $T_1$ and $T_2$ are two estimators of $\theta$, then $T_1$ is more efficient than $T_2$ if $\mathrm{MSE}(T_1) \leq \mathrm{MSE}(T_2)$ for any value of $\theta$, with strict inequality holding somewhere. For two unbiased estimators $T_1$ and $T_2$ of $\theta$, $T_1$ is more efficient than $T_2$ if $\mathrm{Var}(T_1) \leq \mathrm{Var}(T_2)$ for any value of $\theta$, with strict inequality holding somewhere.

Interval estimation

So far, we have dealt with the point estimation of a parameter. It seems desirable that a point estimate should be accompanied by some measure of the possible error of the estimate. We might make the inference of estimating that the true value of the parameter is contained in some interval.

Interval estimation: define two statistics $T_1 = g_1(X_1, \ldots, X_n)$ and $T_2 = g_2(X_1, \ldots, X_n)$, where $T_1 \leq T_2$, so that $[T_1, T_2]$ constitutes an interval for which the probability that it contains the unknown $\theta$ can be determined.

Confidence interval

Definition: given a random sample $X_1, \ldots, X_n$, let $T_1 = g_1(X_1, \ldots, X_n)$ and $T_2 = g_2(X_1, \ldots, X_n)$ be two statistics satisfying $T_1 \leq T_2$ for which $P(T_1 \leq \theta \leq T_2) = 1 - \alpha$. Then the random interval $[T_1, T_2]$ is called a $(1 - \alpha)$-confidence interval for $\theta$; $1 - \alpha$ is called the confidence coefficient, and $T_1$ and $T_2$ are called the lower and upper confidence limits, respectively.
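The defining property $P(T_1 \leq \theta \leq T_2) = 1 - \alpha$ can be illustrated by simulation: over many repeated samples, roughly a fraction $1 - \alpha$ of the computed intervals should contain the true parameter. A sketch for the normal mean with known $\sigma$ (seed and constants are arbitrary):

```python
import random
from statistics import NormalDist

random.seed(42)                                  # arbitrary seed
mu, sigma, n, reps, alpha = 10.0, 2.0, 25, 2000, 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)          # ~1.96 for alpha = 0.05

covered = 0
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    half = z * sigma / n ** 0.5                  # half-width with known sigma
    if xbar - half <= mu <= xbar + half:
        covered += 1

print(covered / reps)                            # close to 1 - alpha = 0.95
```

Note that coverage is a statement about the random interval over repeated sampling, not about any single observed interval.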
A value $[t_1, t_2]$, where $t_j = g_j(x_1, \ldots, x_n)$ ($j = 1, 2$), is an observed $(1 - \alpha)$-confidence interval for $\theta$.

One-sided confidence intervals

Definition: let $T_1 = -\infty$ and let $T_2 = g_2(X_1, \ldots, X_n)$ be a statistic for which $P(\theta \leq T_2) = 1 - \alpha$. Then $T_2$ is called a one-sided upper confidence limit for $\theta$. Similarly, let $T_2 = \infty$ and let $T_1 = g_1(X_1, \ldots, X_n)$ be a statistic for which $P(T_1 \leq \theta) = 1 - \alpha$. Then $T_1$ is called a one-sided lower confidence limit for $\theta$.

Confidence intervals for the mean (with known variance)

$100(1 - \alpha)\,\%$-confidence interval for $\mu$ (scenario: $\sigma$ known). For a normally distributed random variable $X$:
$$\left[\bar{X} - z_{1-\alpha/2} \frac{\sigma}{\sqrt{n}},\; \bar{X} + z_{1-\alpha/2} \frac{\sigma}{\sqrt{n}}\right].$$
For an arbitrarily distributed random variable $X$ and $n > 30$, the same interval is an approximate confidence interval for $\mu$. For $0 < p < 1$, $z_p$ is the $p$-quantile of the standard normal distribution, that is, the value for which $F(z_p) = \Phi(z_p) = p$; hence $z_p = \Phi^{-1}(p)$.

Confidence intervals for the mean (with unknown variance)

$100(1 - \alpha)\,\%$-confidence interval for $\mu$ (scenario: $\sigma$ unknown). For a normally distributed random variable $X$:
$$\left[\bar{X} - t_{1-\alpha/2}(n-1) \frac{S}{\sqrt{n}},\; \bar{X} + t_{1-\alpha/2}(n-1) \frac{S}{\sqrt{n}}\right],$$
where $S^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \bar{X})^2$ and $t_{1-\alpha/2}(n-1)$ is the $(1 - \alpha/2)$-quantile of the $t$-distribution with $n - 1$ degrees of freedom. For an arbitrarily distributed random variable $X$ and $n > 30$,
$$\left[\bar{X} - z_{1-\alpha/2} \frac{S}{\sqrt{n}},\; \bar{X} + z_{1-\alpha/2} \frac{S}{\sqrt{n}}\right]$$
is an approximate confidence interval for $\mu$.

Confidence intervals for the variance

$100(1 - \alpha)\,\%$-confidence interval for $\sigma^2$. For a normally distributed random variable $X$:
$$\left[\frac{(n-1)S^2}{\chi^2_{1-\alpha/2}(n-1)},\; \frac{(n-1)S^2}{\chi^2_{\alpha/2}(n-1)}\right],$$
where $\chi^2_{1-\alpha/2}(n-1)$ and $\chi^2_{\alpha/2}(n-1)$ denote the $(1-\alpha/2)$-quantile and the $(\alpha/2)$-quantile, respectively, of the chi-square distribution with $n - 1$ degrees of freedom.
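Computing an observed interval for the mean is a one-liner once the relevant quantile is available; a sketch for the known-$\sigma$ case, where the data and $\sigma$ below are hypothetical:

```python
from statistics import NormalDist

xs = [12.1, 9.8, 11.4, 10.6, 10.9, 11.7, 10.2, 11.1]   # hypothetical sample
sigma = 1.0                                             # assumed known std. dev.
alpha = 0.05

n = len(xs)
xbar = sum(xs) / n
z = NormalDist().inv_cdf(1 - alpha / 2)                 # z_{1 - alpha/2}
half = z * sigma / n ** 0.5                             # half-width of the interval
lo, hi = xbar - half, xbar + half
print((round(lo, 3), round(hi, 3)))
```

For the unknown-variance case one would replace $\sigma$ by $S$ and the normal quantile by the $t$-quantile with $n - 1$ degrees of freedom (available in, e.g., scipy.stats, since the standard library has no $t$-quantile).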
Confidence interval for a proportion

$100(1 - \alpha)\,\%$-confidence interval for $\pi$. In dichotomous populations and for $n > 30$, an approximate confidence interval for $\pi = P(X = 1)$ is given by
$$\left[\hat{\pi} - z_{1-\alpha/2} \sqrt{\frac{\hat{\pi}(1 - \hat{\pi})}{n}},\; \hat{\pi} + z_{1-\alpha/2} \sqrt{\frac{\hat{\pi}(1 - \hat{\pi})}{n}}\right],$$
where $\hat{\pi} = \bar{X}$ denotes the relative frequency.
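A sketch of the proportion interval with hypothetical counts (37 successes in $n = 100 > 30$ trials):

```python
from statistics import NormalDist

successes, n = 37, 100          # hypothetical: 37 of 100 trials gave X = 1
alpha = 0.05

pi_hat = successes / n          # relative frequency, the estimate of pi
z = NormalDist().inv_cdf(1 - alpha / 2)
half = z * (pi_hat * (1 - pi_hat) / n) ** 0.5
lo, hi = pi_hat - half, pi_hat + half
print((round(lo, 3), round(hi, 3)))
```

This is the large-sample (Wald) approximation stated above; it relies on $n > 30$ and on $\hat{\pi}$ not being too close to 0 or 1.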
Posted: 04/12/2015, 17:09
