Multivariate Statistics: Exercises and Solutions


DOCUMENT INFORMATION

Basic information

Format
Number of pages: 367
File size: 2.91 MB

Contents

[...] of random variables or vectors X and Y
E(Y | X = x)    conditional expectation of the random variable or vector Y given X = x
E(Y | X)        conditional expectation of Y given X
Var(Y | X = x)  conditional variance of Y given X = x
Var(Y | X)      conditional variance of Y given X
Cov(X, Y)       covariance between the random variables X and Y
Var(X)          variance of the random variable X
ρ_XY            correlation between the random variables X and Y
Σ_XY            covariance between the random vectors X and Y, i.e., Cov(X, Y) = E(X − [...]

[...] Exercise 1.9. Use some bandwidth selection criterion to calculate the optimally chosen bandwidth h for the diagonal variable of the bank notes. Would it be better to have one bandwidth for the two groups?

The bandwidth h controls the amount of detail seen in the histogram. Too large a bandwidth may lead to a loss of important information, whereas too small a bandwidth introduces a lot of random noise and artificial [...]

[...]
s_XY                          empirical covariance of the random variables X and Y sampled by {x_i}_{i=1,...,n} and {y_i}_{i=1,...,n}
s_XX                          empirical variance of the random variable X sampled by {x_i}_{i=1,...,n} (a normalized sum of the squared deviations Σ_{i=1}^{n} (x_i − x̄)²)
r_XY = s_XY / √(s_XX s_YY)    empirical correlation of X and Y
S = {s_{X_i X_j}}             empirical covariance matrix of X_1, ..., X_p or of the random vector X = (X_1, ..., X_p)
R = {r_{X_i X_j}}             empirical correlation matrix of X_1, ..., X_p or of the random vector X = [...]

[...] p-value is smaller than the significance level α, the null hypothesis is rejected.

random variable and vector: Random events occur in a probability space with a certain event structure. A random variable is a function from this probability space to R (or R^p for random vectors), also known as the state space. The concept of a random variable (vector) allows one to elegantly describe events that are happening [...]

[...] between "too large" and "too small" is provided by bandwidth selection methods. Silverman's rule of thumb, which refers to the normal distribution, is one of the simplest methods. Using Silverman's rule of thumb for the Gaussian kernel, h_opt = 1.06 σ n^(−1/5), the optimal bandwidth is 0.1885 for the genuine banknotes and 0.2352 for the counterfeit ones (a short numerical sketch of this rule appears after the excerpts below). The optimal bandwidths are different and indeed, for comparison [...]

[...] the ith row and jth column (i.e., the row and column containing a_ij): the minor of a_ij is |A_ij|, and the cofactor is the "signed" minor (−1)^(i+j) |A_ij|.

cofactor matrix: The cofactor matrix (or matrix of cofactors) of an n × n matrix A = {a_ij} is the n × n matrix whose ijth element is the cofactor of a_ij.

conditional distribution: Consider the joint distribution of two random vectors X ∈ R^p and Y ∈ R^q [...]

[...]
φ               density of the standard normal distribution
Φ               distribution function of the standard normal distribution
N(0, 1)         standard normal or Gaussian distribution
N(µ, σ²)        normal distribution with mean µ and variance σ²
N_p(µ, Σ)       p-dimensional normal distribution with mean µ and covariance matrix Σ
→^L             convergence in distribution
→^P             convergence in probability
CLT             Central Limit Theorem
χ²_p            chi-squared distribution with p degrees of freedom
χ²_{1−α;p}      1 − α quantile of the chi-squared distribution with p degrees of freedom
t_n             t-distribution with n degrees of freedom
t_{1−α/2;n}     1 − α/2 quantile of the t-distribution with n degrees of freedom
F_{n,m}         F-distribution with n and m degrees of freedom
F_{1−α;n,m}     1 − α quantile of the F-distribution with n and m degrees of freedom
[...]

[...] 8.7, 9.8, 9.8, 9.8, 10.4, 13.9, 15.1, 15.8, 16.8, 17.1, 17.3, 19.9, and construct the boxplot.

There are n = 16 federal states; the depth of the median is therefore (16 + 1)/2 = 8.5 and the depth of the fourths is 4.5. The median is equal to the average of the 8th and 9th smallest observations, i.e., M = (1/2){x_(n/2) + x_(n/2+1)} = 10.1, and the lower and upper fourths (quartiles) are F_L = (1/2)(x_(4) + x_(5)) = 8.3, [...]
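The depth calculations in the boxplot excerpt above can be checked with a few lines of code. Below is a minimal sketch in Python; the helper names are chosen here for illustration, and since the full 16-observation data set is not reproduced in the excerpt, only the generic depth formulas are shown.

    import math

    def depth_of_median(n):
        # Depth of the median: (n + 1) / 2, e.g. 8.5 for n = 16.
        return (n + 1) / 2

    def depth_of_fourths(n):
        # Depth of the fourths: (floor(depth of median) + 1) / 2, e.g. 4.5 for n = 16.
        return (math.floor(depth_of_median(n)) + 1) / 2

    def value_at_depth(sorted_sample, depth):
        # Average the order statistics just below and just above a half-integer depth.
        lo = sorted_sample[math.floor(depth) - 1]
        hi = sorted_sample[math.ceil(depth) - 1]
        return (lo + hi) / 2

    # With n = 16 the median is (x_(8) + x_(9)) / 2 and the lower fourth is
    # (x_(4) + x_(5)) / 2, matching M = 10.1 and F_L = 8.3 in the excerpt.
    print(depth_of_median(16), depth_of_fourths(16))   # 8.5 4.5

The upper fourth F_U is obtained in the same way by counting the depth from the largest observation.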
[...] and the summation is over all such permutations.

eigenvalues and eigenvectors: An eigenvalue of an n × n matrix A is (by definition) a scalar (real number), say λ, for which there exists an n × 1 vector, say x, such that Ax = λx, or equivalently such that (A − λI)x = 0; any such vector x is referred to as an eigenvector (of A) and is said to belong to (or correspond to) the eigenvalue λ. Eigenvalues (and [...]

[...]
marginal distribution: For two random vectors X and Y with the joint pdf f(x, y), the marginal pdfs are defined as f_X(x) = ∫ f(x, y) dy and f_Y(y) = ∫ f(x, y) dx.

marginal moments: The marginal moments are the moments of the marginal distribution.

mean: The mean is the first-order empirical moment x̄ = n^(−1) Σ_{i=1}^{n} x_i = ∫ x dF_n(x) = m_1.

mean squared error (MSE): Suppose that for a random vector C with a distribution [...]
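The defining relation Ax = λx from the eigenvalue entry above can be verified numerically. A minimal sketch in Python using NumPy; the 2 × 2 matrix is an arbitrary illustration, not an example taken from the book.

    import numpy as np

    # Arbitrary symmetric 2 x 2 matrix, used only to illustrate A x = lambda * x.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    eigenvalues, eigenvectors = np.linalg.eig(A)

    for lam, x in zip(eigenvalues, eigenvectors.T):
        # Each column of `eigenvectors` belongs to the corresponding eigenvalue,
        # so A x and lambda * x should coincide up to floating-point error.
        assert np.allclose(A @ x, lam * x)
        print(f"lambda = {lam:.4f}, eigenvector = {x}")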

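Returning to Exercise 1.9 and Silverman's rule of thumb quoted above, h_opt = 1.06 σ n^(−1/5) is simple to implement. A minimal sketch in Python with NumPy; the array names diag_genuine and diag_counterfeit are placeholders for the diagonal measurements of the two groups of Swiss bank notes, which are not included in this excerpt.

    import numpy as np

    def silverman_bandwidth(x):
        # Silverman's rule of thumb for a Gaussian kernel: 1.06 * sigma_hat * n^(-1/5).
        x = np.asarray(x, dtype=float)
        return 1.06 * np.std(x, ddof=1) * len(x) ** (-1 / 5)

    # Applied separately to the diagonal variable of the two groups, e.g.
    #   h_genuine = silverman_bandwidth(diag_genuine)          # excerpt reports 0.1885
    #   h_counterfeit = silverman_bandwidth(diag_counterfeit)  # excerpt reports 0.2352
    # where diag_genuine and diag_counterfeit hold the 100 diagonal measurements
    # of the genuine and the counterfeit notes, respectively.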
Date posted: 22/03/2014, 15:20
