
Electronics and telecommunications — lect04 (khotailieu)


4. Basic probability theory
lect04.ppt — S-38.1145 Introduction to Teletraffic Theory, Spring 2006

Contents
• Basic concepts
• Discrete random variables
• Discrete distributions (nbr distributions)
• Continuous random variables
• Continuous distributions (time distributions)
• Other random variables

Sample space, sample points, events
• Sample space Ω is the set of all possible sample points ω ∈ Ω
  – Example. Tossing a coin: Ω = {H, T}
  – Example. Casting a die: Ω = {1, 2, 3, 4, 5, 6}
  – Example. Number of customers in a queue: Ω = {0, 1, 2, …}
  – Example. Call holding time (e.g. in minutes): Ω = {x ∈ ℜ | x > 0}
• Events A, B, C, … ⊂ Ω are measurable subsets of the sample space Ω
  – Example. "Even numbers of a die": A = {2, 4, 6}
  – Example. "No customers in a queue": A = {0}
  – Example. "Call holding time greater than 3.0 (min)": A = {x ∈ ℜ | x > 3.0}
• Denote by ℱ the set of all events A ∈ ℱ
  – Sure event: the sample space Ω ∈ ℱ itself
  – Impossible event: the empty set ∅ ∈ ℱ

Combination of events
• Union "A or B": A ∪ B = {ω ∈ Ω | ω ∈ A or ω ∈ B}
• Intersection "A and B": A ∩ B = {ω ∈ Ω | ω ∈ A and ω ∈ B}
• Complement "not A": Aᶜ = {ω ∈ Ω | ω ∉ A}
• Events A and B are disjoint if
  – A ∩ B = ∅
• A set of events {B1, B2, …} is a partition of event A if
  – (i) Bi ∩ Bj = ∅ for all i ≠ j
  – (ii) ∪i Bi = A
  – Example. Odd and even numbers of a die constitute a partition of the sample space: B1 = {1, 3, 5} and B2 = {2, 4, 6}

Probability
• Probability of event A is denoted by P(A), P(A) ∈ [0, 1]
  – Probability measure P is thus a real-valued set function defined on the set of events ℱ, P: ℱ → [0, 1]
• Properties:
  – (i) 0 ≤ P(A) ≤ 1
  – (ii) P(∅) = 0
  – (iii) P(Ω) = 1
  – (iv) P(Aᶜ) = 1 − P(A)
  – (v) P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
  – (vi) A ∩ B = ∅ ⇒ P(A ∪ B) = P(A) + P(B)
  – (vii) {Bi} is a partition of A ⇒ P(A) = Σi P(Bi)
  – (viii) A ⊂ B ⇒ P(A) ≤ P(B)

Conditional probability
• Assume that P(B) > 0.
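Before continuing with conditional probability, the properties (i)–(viii) above can be sanity-checked by brute-force enumeration on the die example. A minimal Python sketch (the helper `prob` and the event B are illustrative choices, not from the lecture; exact arithmetic via `Fraction`):

```python
from fractions import Fraction

# Sample space of a fair die; each sample point has probability 1/6.
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    """P(A) for the uniform probability measure on omega."""
    return Fraction(len(event & omega), len(omega))

A = {2, 4, 6}          # "even numbers of a die"
B = {1, 2, 3}          # an arbitrary second event for illustration

# (iv) complement rule
assert prob(omega - A) == 1 - prob(A)
# (v) inclusion-exclusion
assert prob(A | B) == prob(A) + prob(B) - prob(A & B)
# (vii) odds and evens partition the sample space
B1, B2 = {1, 3, 5}, {2, 4, 6}
assert prob(B1) + prob(B2) == prob(omega) == 1
```

The same enumeration idea extends to any finite sample space with unequal point probabilities by summing weights instead of counting points.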
• Definition: The conditional probability of event A, given that event B occurred, is defined as
    P(A | B) = P(A ∩ B) / P(B)
• It follows that
    P(A ∩ B) = P(B) P(A | B) = P(A) P(B | A)

Theorem of total probability
• Let {Bi} be a partition of the sample space Ω.
• It follows that {A ∩ Bi} is a partition of event A. Thus (by property (vii) above)
    P(A) = Σi P(A ∩ Bi)
• Assume further that P(Bi) > 0 for all i. Then (by the definition of conditional probability)
    P(A) = Σi P(Bi) P(A | Bi)
• This is the theorem of total probability.

Bayes' theorem
• Let {Bi} be a partition of the sample space Ω.
• Assume that P(A) > 0 and P(Bi) > 0 for all i. Then (by the definition of conditional probability)
    P(Bi | A) = P(A ∩ Bi) / P(A) = P(Bi) P(A | Bi) / P(A)
• Furthermore, by the theorem of total probability, we get
    P(Bi | A) = P(Bi) P(A | Bi) / Σj P(Bj) P(A | Bj)
• This is Bayes' theorem.
  – Probabilities P(Bi) are called a priori probabilities of events Bi
  – Probabilities P(Bi | A) are called a posteriori probabilities of events Bi (given that the event A occurred)

Statistical independence of events
• Definition: Events A and B are independent if
    P(A ∩ B) = P(A) P(B)
• It follows that
    P(A | B) = P(A ∩ B) / P(B) = P(A) P(B) / P(B) = P(A)
• Correspondingly:
    P(B | A) = P(A ∩ B) / P(A) = P(A) P(B) / P(A) = P(B)

Random variables
• Definition: A real-valued random variable X is a real-valued and measurable function defined on the sample space Ω, X: Ω → ℜ
  – Each sample point ω ∈ Ω is associated with a real number X(ω)
• Measurability means that all sets of the type
    {X ≤ x} := {ω ∈ Ω | X(ω) ≤ x} ⊂ Ω
  belong to the set of events ℱ, that is, {X ≤ x} ∈ ℱ
• The probability of such an event is denoted by P{X ≤ x}

Continuous random variables
• Definition: Random variable X is continuous if there is an integrable function fX: ℜ → ℜ₊ such that for all x ∈ ℜ
    FX(x) := P{X ≤ x} =
∫−∞x fX(y) dy
• The function fX is called the probability density function (pdf)
  – The set SX, where fX > 0, is called the value set
• Properties:
  – (i) P{X = x} = 0 for all x ∈ ℜ
  – (ii) P{a < X < b} = P{a ≤ X ≤ b} = ∫ab fX(x) dx
  – (iii) P{X ∈ A} = ∫A fX(x) dx
  – (iv) P{X ∈ ℜ} = ∫−∞∞ fX(x) dx = ∫SX fX(x) dx = 1

Example (figure)
• A probability density function (pdf) fX with value set SX = [x1, x3], and the corresponding cumulative distribution function (cdf) FX

Expectation and other distribution related parameters
• Definition: The expectation (mean value) of X is defined by
    µX := E[X] := ∫−∞∞ fX(x) x dx
  – Note 1: The expectation exists only if ∫−∞∞ fX(x) |x| dx < ∞
  – Note 2: If ∫−∞∞ fX(x) x dx = ∞, then we may denote E[X] = ∞
  – The expectation has the same properties as in the discrete case
• The other distribution parameters (variance, covariance, …) are defined just as in the discrete case
  – These parameters have the same properties as in the discrete case

Uniform distribution
X ∼ U(a, b), a < b
  – continuous counterpart of "casting a die"
• Value set: SX = (a, b)
• Probability density function (pdf):
    fX(x) = 1/(b − a), x ∈ (a, b)
• Cumulative distribution function (cdf):
    FX(x) := P{X ≤ x} = (x − a)/(b − a), x ∈ (a, b)
• Mean value: E[X] = ∫ab x/(b − a) dx = (a + b)/2
• Second moment: E[X²] = ∫ab x²/(b − a) dx = (a² + ab + b²)/3
• Variance: D²[X] = E[X²] − E[X]² = (b − a)²/12

Exponential distribution
X ∼ Exp(λ), λ > 0
  – continuous counterpart of the geometric distribution ("failure" prob ≈ λ dt)
  – P{X ∈ (t, t+h] | X > t} = λh + o(h), where o(h)/h → 0 as h → 0
• Value set: SX = (0, ∞)
• Probability density function (pdf):
    fX(x) = λ e−λx, x > 0
• Cumulative distribution function (cdf):
    FX(x) = P{X ≤ x} = 1 − e−λx, x > 0
• Mean value: E[X] = ∫0∞ λx e−λx dx = 1/λ
• Second moment: E[X²] = ∫0∞ λx² e−λx dx = 2/λ²
• Variance: D²[X] = E[X²] − E[X]² = 1/λ²

Memoryless property of exponential distribution
• Exponential distribution has the so-called memoryless property: for all x, y ∈ (0, ∞)
    P{X > x + y | X > x} = P{X > y}
• Prove!
  – Tip: prove first that P{X > x} = e−λx
• Application:
  – Assume that the call holding time is exponentially distributed with mean h minutes.
  – Consider a call that has already lasted for x minutes. Due to the memoryless property, this gives no information about the length of the remaining holding time: it is distributed as the original holding time and, on average, still lasts h minutes!

Minimum of exponential random variables
• Let X1 ∼ Exp(λ1) and X2 ∼ Exp(λ2) be independent. Then
    X := min{X1, X2} ∼ Exp(λ1 + λ2)
  and
    P{X = Xi} = λi / (λ1 + λ2), i ∈ {1, 2}
• Prove!
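Not a substitute for the proof, but the minimum result can be sanity-checked by Monte Carlo simulation. A sketch in Python (the rates λ1 = 1, λ2 = 2, the sample size, and the tolerances are arbitrary illustrative choices):

```python
import random

random.seed(1)
lam1, lam2 = 1.0, 2.0          # arbitrary example rates
n = 100_000

total_min, first_wins = 0.0, 0
for _ in range(n):
    x1 = random.expovariate(lam1)
    x2 = random.expovariate(lam2)
    total_min += min(x1, x2)
    first_wins += (x1 < x2)

# min{X1, X2} ~ Exp(lam1 + lam2), so its sample mean should be near 1/3
mean_min = total_min / n
assert abs(mean_min - 1 / (lam1 + lam2)) < 0.01

# P{X = X1} = lam1 / (lam1 + lam2) = 1/3
assert abs(first_wins / n - lam1 / (lam1 + lam2)) < 0.01
```

The same loop can also illustrate the memoryless property by comparing the empirical distribution of X − x conditional on X > x against the unconditional one.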
  – Tip: use the independence of X1 and X2.

Standard normal (Gaussian) distribution
X ∼ N(0, 1)
  – limit of the "normalized" sum of IID r.v.s with mean 0 and variance 1 (cf. the central limit theorem below)
• Value set: SX = (−∞, ∞)
• Probability density function (pdf):
    fX(x) = φ(x) := (1/√(2π)) e−x²/2
• Cumulative distribution function (cdf):
    FX(x) := P{X ≤ x} = Φ(x) := ∫−∞x φ(y) dy
• Mean value: E[X] = 0 (symmetric pdf)
• Variance: D²[X] = 1

Normal (Gaussian) distribution
X ∼ N(µ, σ²), µ ∈ ℜ, σ > 0
  – if (X − µ)/σ ∼ N(0, 1)
• Value set: SX = (−∞, ∞)
• Probability density function (pdf):
    fX(x) = FX′(x) = (1/σ) φ((x − µ)/σ)
• Cumulative distribution function (cdf):
    FX(x) := P{X ≤ x} = P{(X − µ)/σ ≤ (x − µ)/σ} = Φ((x − µ)/σ)
• Mean value: E[X] = µ + σ E[(X − µ)/σ] = µ (pdf symmetric around µ)
• Variance: D²[X] = σ² D²[(X − µ)/σ] = σ²

Properties of the normal distribution
• (i) Linear transformation: Let X ∼ N(µ, σ²) and α, β ∈ ℜ. Then
    Y := αX + β ∼ N(αµ + β, α²σ²)
• (ii) Sum: Let X1 ∼ N(µ1, σ1²) and X2 ∼ N(µ2, σ2²) be independent. Then
    X1 + X2 ∼ N(µ1 + µ2, σ1² + σ2²)
• (iii) Sample mean: Let Xi ∼ N(µ, σ²), i = 1, …, n, be independent and identically distributed (IID). Then (cf. the sample mean defined in the discrete case)
    X̄n := (1/n) Σi Xi ∼ N(µ, σ²/n)

Central limit theorem (CLT)
• Let X1, …, Xn be independent and identically distributed (IID) with mean µ and variance σ² (and the third moment exists).
• Central limit theorem:
    √n (X̄n − µ)/σ → N(0, 1) in distribution
• It follows that for large values of n
    X̄n ≈ N(µ, σ²/n)

Other random variables
• In addition to discrete and continuous random variables, there are so-called mixed random variables
  – containing some discrete as well as continuous portions
• Example:
  – The customer
waiting time W in an M/M/1 queue has an atom at zero (P{W = 0} = 1 − ρ > 0), but otherwise the distribution is continuous.
  – Figure: the cdf FW(x) jumps from 0 to 1 − ρ at x = 0 and then increases continuously towards 1.

THE END

Discrete distributions (nbr distributions) — excerpts

Bernoulli distribution
• Value set: SX = {0, 1}
• Point probabilities: P{X = 0} = 1 − p, P{X = 1} = p
• Mean value: E[X] = (1 − p)·0 + p·1 = p
• Second moment: E[X²] = (1 − p)·0² + p·1² = p
• Variance: D²[X] = E[X²] − E[X]² = p − p² = p(1 − p)

Geometric distribution
• Value set: SX = {0, 1, …}
• Point probabilities: P{X = i} = pⁱ(1 − p)
• Mean value: E[X] = Σi i pⁱ(1 − p) = p/(1 − p)
• Second moment: E[X²] = Σi i² pⁱ(1 − p) = 2(p/(1 − p))² + p/(1 − p)
• Variance: D²[X] = E[X²] − E[X]² = p/(1 − p)²

Binomial distribution
• Value set: SX = {0, 1, …, n}
• Point probabilities:
    P{X = i} = C(n, i) pⁱ(1 − p)ⁿ⁻ⁱ, where C(n, i) = n!/(i!(n − i)!)
• Mean value: E[X] = E[X1] + … + E[Xn] = np
• Variance: D²[X] = D²[X1] + … + D²[Xn] = np(1 − p)
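The discrete moment formulas above can be verified exactly by summing over the value set. A short Python sketch for the binomial case (the parameter values n = 5, p = 3/10 are arbitrary; `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction
from math import comb

n, p = 5, Fraction(3, 10)      # arbitrary example parameters

# Binomial point probabilities P{X = i} = C(n, i) p^i (1 - p)^(n - i)
pmf = {i: comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)}
assert sum(pmf.values()) == 1  # probabilities sum to one

mean = sum(i * q for i, q in pmf.items())
second = sum(i * i * q for i, q in pmf.items())
var = second - mean**2

assert mean == n * p               # E[X]  = np
assert var == n * p * (1 - p)      # D2[X] = np(1 - p)
```

Since the binomial variable is a sum of n independent Bernoulli variables, the checks above also confirm the Bernoulli mean p and variance p(1 − p) term by term.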

Posted: 12/11/2019, 19:58
