General Random Variables

Chapter 11. General Random Variables

11.1 Law of a Random Variable

Thus far we have considered only random variables whose domain and range are discrete. We now consider a general random variable $X:\Omega\to\mathbb{R}$ defined on the probability space $(\Omega,\mathcal{F},\mathbb{P})$. Recall that:

- $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$;
- $\mathbb{P}$ is a probability measure on $\mathcal{F}$, i.e., $\mathbb{P}(A)$ is defined for every $A\in\mathcal{F}$.

A function $X:\Omega\to\mathbb{R}$ is a random variable if and only if for every $B\in\mathcal{B}(\mathbb{R})$ (the $\sigma$-algebra of Borel subsets of $\mathbb{R}$), the set

$$\{X\in B\}\ \triangleq\ X^{-1}(B)\ \triangleq\ \{\omega:\ X(\omega)\in B\}\ \in\ \mathcal{F};$$

i.e., $X:\Omega\to\mathbb{R}$ is a random variable if and only if $X^{-1}$ is a function from $\mathcal{B}(\mathbb{R})$ to $\mathcal{F}$ (see Fig. 11.1).

Thus any random variable $X$ induces a measure $\mu_X$ on the measurable space $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ defined by

$$\mu_X(B) = \mathbb{P}\left(X^{-1}(B)\right) \qquad \forall B\in\mathcal{B}(\mathbb{R}),$$

where the probability on the right is defined since $X^{-1}(B)\in\mathcal{F}$. $\mu_X$ is often called the Law of $X$ -- in Williams' book this is denoted by $\mathcal{L}_X$.

11.2 Density of a Random Variable

The density of $X$ (if it exists) is a function $f_X:\mathbb{R}\to[0,\infty)$ such that

$$\mu_X(B) = \int_B f_X(x)\,dx \qquad \forall B\in\mathcal{B}(\mathbb{R}).$$

[Figure 11.1: Illustrating a real-valued random variable X.]

We then write

$$d\mu_X(x) = f_X(x)\,dx,$$

where the integral is with respect to the Lebesgue measure on $\mathbb{R}$. $f_X$ is the Radon-Nikodym derivative of $\mu_X$ with respect to the Lebesgue measure. Thus $X$ has a density if and only if $\mu_X$ is absolutely continuous with respect to Lebesgue measure, which means that whenever $B\in\mathcal{B}(\mathbb{R})$ has Lebesgue measure zero,

$$\mathbb{P}\{X\in B\} = 0.$$

11.3 Expectation

Theorem 3.32 (Expectation of a function of $X$). Let $h:\mathbb{R}\to\mathbb{R}$ be given. Then

$$\mathbb{E}\,h(X)\ \triangleq\ \int_\Omega h(X(\omega))\,d\mathbb{P}(\omega) = \int_{\mathbb{R}} h(x)\,d\mu_X(x) = \int_{\mathbb{R}} h(x)\,f_X(x)\,dx.$$

Proof (sketch): If $h(x)=\mathbb{1}_B(x)$ for some $B\subset\mathbb{R}$, then these equations are

$$\mathbb{E}\,\mathbb{1}_B(X)\ \triangleq\ \mathbb{P}\{X\in B\} = \mu_X(B) = \int_B f_X(x)\,dx,$$

which are true by definition. Now use the "standard machine" to get the equations for general $h$.

11.4 Two random variables

[Figure 11.2: Two real-valued random variables X, Y.]

Let $X,Y$ be two random variables $\Omega\to\mathbb{R}$ defined on the space $(\Omega,\mathcal{F},\mathbb{P})$. Then $(X,Y)$ induces a measure on $\mathcal{B}(\mathbb{R}^2)$ (see Fig. 11.2) called the joint law of $(X,Y)$, defined by

$$\mu_{X,Y}(C)\ \triangleq\ \mathbb{P}\{(X,Y)\in C\} \qquad \forall C\in\mathcal{B}(\mathbb{R}^2).$$

The joint density of $(X,Y)$ is a function $f_{X,Y}:\mathbb{R}^2\to[0,\infty)$ that satisfies

$$\mu_{X,Y}(C) = \iint_C f_{X,Y}(x,y)\,dx\,dy \qquad \forall C\in\mathcal{B}(\mathbb{R}^2).$$

$f_{X,Y}$ is the Radon-Nikodym derivative of $\mu_{X,Y}$ with respect to the Lebesgue measure (area) on $\mathbb{R}^2$.

We compute the expectation of a function of $X,Y$ in a manner analogous to the univariate case:

$$\mathbb{E}\,k(X,Y)\ \triangleq\ \int_\Omega k(X(\omega),Y(\omega))\,d\mathbb{P}(\omega) = \iint_{\mathbb{R}^2} k(x,y)\,d\mu_{X,Y}(x,y) = \iint_{\mathbb{R}^2} k(x,y)\,f_{X,Y}(x,y)\,dx\,dy.$$

11.5 Marginal Density

Suppose $(X,Y)$ has joint density $f_{X,Y}$. Let $B\subset\mathbb{R}$ be given. Then

$$\mu_Y(B) = \mathbb{P}\{Y\in B\} = \mathbb{P}\{(X,Y)\in\mathbb{R}\times B\} = \mu_{X,Y}(\mathbb{R}\times B) = \int_B\int_{\mathbb{R}} f_{X,Y}(x,y)\,dx\,dy = \int_B f_Y(y)\,dy,$$

where

$$f_Y(y)\ \triangleq\ \int_{\mathbb{R}} f_{X,Y}(x,y)\,dx.$$

Therefore, $f_Y(y)$ is the (marginal) density for $Y$.
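To make the marginal-density and expectation formulas concrete, here is a minimal numerical sketch (not part of the original notes). It assumes an illustrative joint density -- the standard bivariate normal with $\rho=0.5$ -- and truncates the integration grid at $\pm 8$, where the tails are negligible.

```python
import numpy as np

# Illustrative joint density on R^2: standard bivariate normal with rho = 0.5.
# (Any integrable joint density would do; the grid bounds are ad hoc choices.)
rho = 0.5

def f_XY(x, y):
    z = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return np.exp(-z / 2) / (2 * np.pi * np.sqrt(1 - rho**2))

# Marginal density of Y: f_Y(y) = integral over x of f_{X,Y}(x, y) dx,
# approximated by the trapezoid rule on a truncated grid.
x = np.linspace(-8, 8, 4001)

def f_Y(y):
    return np.trapz(f_XY(x, y), x)

# Here Y ~ N(0, 1), so f_Y should match the standard normal density.
for y in (0.0, 1.0, 2.0):
    exact = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)
    print(y, f_Y(y), exact)

# Expectation of a function of Y via its density (Theorem 3.32):
# E[h(Y)] = integral of h(y) f_Y(y) dy; with h(y) = y^2 this should be ~1.
ygrid = np.linspace(-8, 8, 4001)
fy = np.array([f_Y(v) for v in ygrid])
print("E[Y^2] ~", np.trapz(ygrid**2 * fy, ygrid))
```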
11.6 Conditional Expectation

Suppose $(X,Y)$ has joint density $f_{X,Y}$. Let $h:\mathbb{R}\to\mathbb{R}$ be given. Recall that $\mathbb{E}[h(X)\,|\,Y]\ \triangleq\ \mathbb{E}[h(X)\,|\,\sigma(Y)]$ depends on $\omega$ through $Y$, i.e., there is a function $g(y)$ ($g$ depending on $h$) such that

$$\mathbb{E}[h(X)\,|\,Y](\omega) = g(Y(\omega)).$$

How do we determine $g$? We can characterize $g$ using partial averaging: recall that $A\in\sigma(Y)\iff A=\{Y\in B\}$ for some $B\in\mathcal{B}(\mathbb{R})$. Then the following are equivalent characterizations of $g$:

$$\int_A g(Y)\,d\mathbb{P} = \int_A h(X)\,d\mathbb{P} \qquad \forall A\in\sigma(Y), \tag{6.1}$$

$$\int_\Omega \mathbb{1}_B(Y)\,g(Y)\,d\mathbb{P} = \int_\Omega \mathbb{1}_B(Y)\,h(X)\,d\mathbb{P} \qquad \forall B\in\mathcal{B}(\mathbb{R}), \tag{6.2}$$

$$\int_{\mathbb{R}} \mathbb{1}_B(y)\,g(y)\,d\mu_Y(y) = \iint_{\mathbb{R}^2} \mathbb{1}_B(y)\,h(x)\,d\mu_{X,Y}(x,y) \qquad \forall B\in\mathcal{B}(\mathbb{R}), \tag{6.3}$$

$$\int_B g(y)\,f_Y(y)\,dy = \int_B\int_{\mathbb{R}} h(x)\,f_{X,Y}(x,y)\,dx\,dy \qquad \forall B\in\mathcal{B}(\mathbb{R}). \tag{6.4}$$

11.7 Conditional Density

A function $f_{X|Y}(x|y):\mathbb{R}^2\to[0,\infty)$ is called a conditional density for $X$ given $Y$ provided that for any function $h:\mathbb{R}\to\mathbb{R}$:

$$g(y) = \int_{\mathbb{R}} h(x)\,f_{X|Y}(x|y)\,dx. \tag{7.1}$$

(Here $g$ is the function satisfying $\mathbb{E}[h(X)\,|\,Y]=g(Y)$; $g$ depends on $h$, but $f_{X|Y}$ does not.)

Theorem 7.33. If $(X,Y)$ has a joint density $f_{X,Y}$, then

$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}. \tag{7.2}$$

Proof: Just verify that $g$ defined by (7.1) satisfies (6.4): for $B\in\mathcal{B}(\mathbb{R})$,

$$\int_B \underbrace{\int_{\mathbb{R}} h(x)\,f_{X|Y}(x|y)\,dx}_{g(y)}\ f_Y(y)\,dy = \int_B\int_{\mathbb{R}} h(x)\,f_{X,Y}(x,y)\,dx\,dy.$$

Notation 11.1. Let $g$ be the function satisfying $\mathbb{E}[h(X)\,|\,Y]=g(Y)$. The function $g$ is often written as $g(y)=\mathbb{E}[h(X)\,|\,Y=y]$, and (7.1) becomes

$$\mathbb{E}[h(X)\,|\,Y=y] = \int_{\mathbb{R}} h(x)\,f_{X|Y}(x|y)\,dx.$$

In conclusion, to determine $\mathbb{E}[h(X)\,|\,Y]$ (a function of $\omega$), first compute

$$g(y) = \int_{\mathbb{R}} h(x)\,f_{X|Y}(x|y)\,dx,$$

and then replace the dummy variable $y$ by the random variable $Y$:

$$\mathbb{E}[h(X)\,|\,Y](\omega) = g(Y(\omega)).$$

Example 11.1 (Jointly normal random variables). Given parameters $\sigma_1>0$, $\sigma_2>0$, $-1<\rho<1$, let $(X,Y)$ have the joint density

$$f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\,\exp\left\{-\frac{1}{2(1-\rho^2)}\left(\frac{x^2}{\sigma_1^2} - \frac{2\rho x y}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right)\right\}.$$

The exponent is

$$-\frac{1}{2(1-\rho^2)}\left(\frac{x^2}{\sigma_1^2} - \frac{2\rho x y}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right) = -\frac{1}{2(1-\rho^2)}\left[\left(\frac{x}{\sigma_1} - \frac{\rho y}{\sigma_2}\right)^2 + \frac{y^2}{\sigma_2^2}(1-\rho^2)\right] = -\frac{1}{2(1-\rho^2)}\,\frac{1}{\sigma_1^2}\left(x - \frac{\rho\sigma_1}{\sigma_2}\,y\right)^2 - \frac{1}{2}\,\frac{y^2}{\sigma_2^2}.$$

We can compute the marginal density of $Y$ as follows:

$$f_Y(y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\int_{-\infty}^{\infty} e^{-\frac{1}{2(1-\rho^2)\sigma_1^2}\left(x-\frac{\rho\sigma_1}{\sigma_2}y\right)^2}\,dx\cdot e^{-\frac{y^2}{2\sigma_2^2}} = \frac{1}{2\pi\sigma_2}\int_{-\infty}^{\infty} e^{-\frac{u^2}{2}}\,du\cdot e^{-\frac{y^2}{2\sigma_2^2}},$$

using the substitution $u=\frac{1}{\sqrt{1-\rho^2}\,\sigma_1}\left(x-\frac{\rho\sigma_1}{\sigma_2}y\right)$, $du=\frac{dx}{\sqrt{1-\rho^2}\,\sigma_1}$; hence

$$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_2}\,e^{-\frac{y^2}{2\sigma_2^2}}.$$

Thus $Y$ is normal with mean 0 and variance $\sigma_2^2$.

Conditional density. From the expressions

$$f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\,e^{-\frac{1}{2(1-\rho^2)\sigma_1^2}\left(x-\frac{\rho\sigma_1}{\sigma_2}y\right)^2}\,e^{-\frac{y^2}{2\sigma_2^2}}, \qquad f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_2}\,e^{-\frac{y^2}{2\sigma_2^2}},$$

we have

$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{1}{\sqrt{2\pi}\,\sigma_1\sqrt{1-\rho^2}}\,e^{-\frac{1}{2(1-\rho^2)\sigma_1^2}\left(x-\frac{\rho\sigma_1}{\sigma_2}y\right)^2}.$$

In the $x$-variable, $f_{X|Y}(x|y)$ is a normal density with mean $\frac{\rho\sigma_1}{\sigma_2}y$ and variance $(1-\rho^2)\sigma_1^2$. Therefore,

$$\mathbb{E}[X\,|\,Y=y] = \int_{-\infty}^{\infty} x\,f_{X|Y}(x|y)\,dx = \frac{\rho\sigma_1}{\sigma_2}\,y,$$

$$\mathbb{E}\left[\left(X-\frac{\rho\sigma_1}{\sigma_2}\,y\right)^2\,\middle|\ Y=y\right] = \int_{-\infty}^{\infty}\left(x-\frac{\rho\sigma_1}{\sigma_2}\,y\right)^2 f_{X|Y}(x|y)\,dx = (1-\rho^2)\,\sigma_1^2.$$

From the above two formulas we have

$$\mathbb{E}[X\,|\,Y] = \frac{\rho\sigma_1}{\sigma_2}\,Y, \tag{7.3}$$

$$\mathbb{E}\left[\left(X-\frac{\rho\sigma_1}{\sigma_2}\,Y\right)^2\,\middle|\ Y\right] = (1-\rho^2)\,\sigma_1^2. \tag{7.4}$$

Taking expectations in (7.3) and (7.4) yields

$$\mathbb{E}X = \frac{\rho\sigma_1}{\sigma_2}\,\mathbb{E}Y = 0, \tag{7.5}$$

$$\mathbb{E}\left[\left(X-\frac{\rho\sigma_1}{\sigma_2}\,Y\right)^2\right] = (1-\rho^2)\,\sigma_1^2. \tag{7.6}$$

Based on $Y$, the best estimator of $X$ is $\frac{\rho\sigma_1}{\sigma_2}Y$. This estimator is unbiased (has expected error zero), and its expected square error is $(1-\rho^2)\sigma_1^2$. No other estimator based on $Y$ can have a smaller expected square error (Homework problem 2.1).
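Formulas (7.3)-(7.6) are easy to check by simulation. The sketch below is illustrative and not from the original notes: it assumes one standard way of generating $(X,Y)$ with the joint density of Example 11.1, and the parameter values and slab half-width h are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma1, sigma2, rho = 2.0, 1.5, 0.7   # illustrative parameters
n = 1_000_000

# Simulate (X, Y) with the joint density of Example 11.1 (scaled to
# variances sigma1^2, sigma2^2): build X from Y plus independent noise
# so that Corr(X, Y) = rho.
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
y = sigma2 * z1
x = sigma1 * (rho * z1 + np.sqrt(1 - rho**2) * z2)

# E[X | Y = y0] = rho*(sigma1/sigma2)*y0: compare the sample mean of X on
# a thin slab {|Y - y0| < h} against formula (7.3).
y0, h = 1.0, 0.05
slab = np.abs(y - y0) < h
print("E[X | Y ~ y0]:", x[slab].mean(), " formula:", rho * sigma1 / sigma2 * y0)

# Expected square error of the best estimator, formula (7.6).
err = x - rho * (sigma1 / sigma2) * y
print("E[err^2]:", (err**2).mean(), " formula:", (1 - rho**2) * sigma1**2)
```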
11.8 Multivariate Normal Distribution

Please see Oksendal, Appendix A. Let $\mathbf{X}$ denote the column vector of random variables $(X_1,X_2,\dots,X_n)^T$, and $\mathbf{x}$ the corresponding column vector of values $(x_1,x_2,\dots,x_n)^T$. $\mathbf{X}$ has a multivariate normal distribution if and only if the random variables have the joint density

$$f_{\mathbf{X}}(\mathbf{x}) = \frac{\sqrt{\det A}}{(2\pi)^{n/2}}\,\exp\left\{-\tfrac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu})^T A\,(\mathbf{x}-\boldsymbol{\mu})\right\}.$$

Here,

$$\boldsymbol{\mu}\ \triangleq\ (\mu_1,\dots,\mu_n)^T = \mathbb{E}\mathbf{X}\ \triangleq\ (\mathbb{E}X_1,\dots,\mathbb{E}X_n)^T,$$

and $A$ is an $n\times n$ nonsingular matrix. $A^{-1}$ is the covariance matrix

$$A^{-1} = \mathbb{E}\left[(\mathbf{X}-\boldsymbol{\mu})(\mathbf{X}-\boldsymbol{\mu})^T\right],$$

i.e., the $(i,j)$th element of $A^{-1}$ is $\mathbb{E}[(X_i-\mu_i)(X_j-\mu_j)]$. The random variables in $\mathbf{X}$ are independent if and only if $A^{-1}$ is diagonal, i.e.,

$$A^{-1} = \mathrm{diag}(\sigma_1^2,\sigma_2^2,\dots,\sigma_n^2),$$

where $\sigma_j^2 = \mathbb{E}(X_j-\mu_j)^2$ is the variance of $X_j$.

11.9 Bivariate normal distribution

Take $n=2$ in the above definitions, and let

$$\rho\ \triangleq\ \frac{\mathbb{E}\left[(X_1-\mu_1)(X_2-\mu_2)\right]}{\sigma_1\sigma_2}.$$

Thus,

$$A^{-1} = \begin{bmatrix}\sigma_1^2 & \rho\sigma_1\sigma_2\\ \rho\sigma_1\sigma_2 & \sigma_2^2\end{bmatrix}, \qquad A = \begin{bmatrix}\frac{1}{\sigma_1^2(1-\rho^2)} & \frac{-\rho}{\sigma_1\sigma_2(1-\rho^2)}\\ \frac{-\rho}{\sigma_1\sigma_2(1-\rho^2)} & \frac{1}{\sigma_2^2(1-\rho^2)}\end{bmatrix}, \qquad \sqrt{\det A} = \frac{1}{\sigma_1\sigma_2\sqrt{1-\rho^2}},$$

and we have the formula from Example 11.1, adjusted to account for the possibly non-zero expectations:

$$f_{X_1,X_2}(x_1,x_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\,\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x_1-\mu_1)^2}{\sigma_1^2} - \frac{2\rho(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2} + \frac{(x_2-\mu_2)^2}{\sigma_2^2}\right]\right\}.$$

11.10 MGF of jointly normal random variables

Let $\mathbf{u}=(u_1,u_2,\dots,u_n)^T$ denote a column vector with components in $\mathbb{R}$, and let $\mathbf{X}$ have a multivariate normal distribution with covariance matrix $A^{-1}$ and mean vector $\boldsymbol{\mu}$. Then the moment generating function is given by

$$\mathbb{E}\,e^{\mathbf{u}^T\mathbf{X}} = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} e^{\mathbf{u}^T\mathbf{x}}\,f_{X_1,\dots,X_n}(x_1,\dots,x_n)\,dx_1\cdots dx_n = \exp\left\{\tfrac{1}{2}\,\mathbf{u}^T A^{-1}\mathbf{u} + \mathbf{u}^T\boldsymbol{\mu}\right\}.$$

If any $n$ random variables $X_1,X_2,\dots,X_n$ have this moment generating function, then they are jointly normal, and we can read out the means and covariances. The random variables are jointly normal and independent if and only if for any real column vector $\mathbf{u}=(u_1,\dots,u_n)^T$,

$$\mathbb{E}\,e^{\mathbf{u}^T\mathbf{X}}\ \triangleq\ \mathbb{E}\exp\left\{\sum_{j=1}^n u_j X_j\right\} = \exp\left\{\sum_{j=1}^n\left(\tfrac{1}{2}\,\sigma_j^2 u_j^2 + u_j\mu_j\right)\right\}.$$
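As a sanity check on the density formula of Section 11.8, the sketch below (an illustration, not part of the notes) evaluates $\frac{\sqrt{\det A}}{(2\pi)^{n/2}}\exp\{-\frac12(\mathbf{x}-\boldsymbol{\mu})^T A(\mathbf{x}-\boldsymbol{\mu})\}$ directly and compares it with scipy.stats.multivariate_normal, which parameterizes the distribution by the covariance matrix $A^{-1}$. The mean vector and covariance matrix are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative mean vector and covariance matrix (A^{-1} in the notes).
mu = np.array([1.0, -2.0, 0.5])
cov = np.array([[2.0, 0.6, 0.3],
                [0.6, 1.0, 0.2],
                [0.3, 0.2, 1.5]])   # symmetric positive definite
A = np.linalg.inv(cov)
n = len(mu)

def f_X(x):
    """Density of Section 11.8: sqrt(det A)/(2*pi)^(n/2) * exp(-(x-mu)' A (x-mu)/2)."""
    d = x - mu
    return np.sqrt(np.linalg.det(A)) / (2 * np.pi) ** (n / 2) * np.exp(-0.5 * d @ A @ d)

x = np.array([0.8, -1.5, 1.0])
print(f_X(x))                               # formula from the notes
print(multivariate_normal(mu, cov).pdf(x))  # scipy's reference value
```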
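The MGF identity of Section 11.10 can likewise be checked by Monte Carlo: sample $\mathbf{X}$ from the multivariate normal and average $e^{\mathbf{u}^T\mathbf{X}}$. A hedged sketch, reusing the illustrative $\boldsymbol{\mu}$ and $A^{-1}$ from above; the vector $\mathbf{u}$ is kept small so the estimator's variance stays moderate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same illustrative mean and covariance (A^{-1}) as in the previous sketch.
mu = np.array([1.0, -2.0, 0.5])
cov = np.array([[2.0, 0.6, 0.3],
                [0.6, 1.0, 0.2],
                [0.3, 0.2, 1.5]])

u = np.array([0.3, -0.2, 0.1])

# Monte Carlo estimate of the MGF E[exp(u'X)].
X = rng.multivariate_normal(mu, cov, size=2_000_000)
mgf_mc = np.exp(X @ u).mean()

# Closed form from Section 11.10: exp(u' A^{-1} u / 2 + u' mu).
mgf_exact = np.exp(0.5 * u @ cov @ u + u @ mu)
print(mgf_mc, mgf_exact)
```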
