
Networking Theory and Fundamentals - Lecture 3




DOCUMENT INFORMATION

Pages: 36
Size: 343.54 KB

CONTENTS

TCOM 501: Networking Theory & Fundamentals
Lecture 3, January 29, 2003
Prof. Yannis A. Korilis

3-2 Topics
- Markov Chains
- Discrete-Time Markov Chains
- Calculating the Stationary Distribution
- Global Balance Equations
- Detailed Balance Equations
- Birth-Death Process
- Generalized Markov Chains
- Continuous-Time Markov Chains

3-3 Markov Chain
A stochastic process that takes values in a countable set, e.g., {0, 1, 2, ..., m} or {0, 1, 2, ...}. The elements represent the possible "states" of the process, and the chain "jumps" from state to state. Memoryless (Markov) property: given the present state, future jumps of the chain are independent of its past history. Markov chains come in discrete- and continuous-time variants.

3-4 Discrete-Time Markov Chain
A discrete-time stochastic process {X_n : n = 0, 1, 2, ...} taking values in {0, 1, 2, ...}. The memoryless property states that

  P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0\} = P\{X_{n+1} = j \mid X_n = i\}

The transition probabilities

  P_{ij} = P\{X_{n+1} = j \mid X_n = i\}

satisfy P_{ij} \ge 0 and \sum_{j=0}^{\infty} P_{ij} = 1, and are collected in the transition probability matrix P = [P_{ij}].

3-5 Chapman-Kolmogorov Equations
The n-step transition probabilities are

  P_{ij}^{n} = P\{X_{n+m} = j \mid X_m = i\}, \qquad n, m \ge 0, \; i, j \ge 0

The Chapman-Kolmogorov equations state that

  P_{ij}^{n+m} = \sum_{k=0}^{\infty} P_{ik}^{n} P_{kj}^{m}, \qquad n, m \ge 0, \; i, j \ge 0

so P_{ij}^{n} is element (i, j) of the matrix power P^n, which allows recursive computation of the state probabilities.

3-6 State Probabilities – Stationary Distribution
The (time-dependent) state probabilities are

  \pi_j^{n} = P\{X_n = j\}, \qquad \pi^{n} = (\pi_0^{n}, \pi_1^{n}, \ldots)

Conditioning on the previous state gives

  P\{X_n = j\} = \sum_{i=0}^{\infty} P\{X_{n-1} = i\}\, P\{X_n = j \mid X_{n-1} = i\} \;\Rightarrow\; \pi_j^{n} = \sum_{i=0}^{\infty} \pi_i^{n-1} P_{ij}

or, in matrix form,

  \pi^{n} = \pi^{n-1} P = \pi^{n-2} P^2 = \cdots = \pi^{0} P^{n}

If the time-dependent distribution converges to a limit \pi = \lim_{n \to \infty} \pi^{n}, then \pi is called the stationary distribution and satisfies \pi = \pi P. Its existence depends on the structure of the Markov chain.

3-7 Classification of Markov Chains
- Irreducible: states i and j communicate when \exists\, n, m : P_{ij}^{n} > 0, \; P_{ji}^{m} > 0. A Markov chain is irreducible when all of its states communicate.
- Aperiodic: state i is periodic when \exists\, d > 1 : P_{ii}^{n} > 0 \Rightarrow n = \alpha d. A Markov chain is aperiodic when none of its states is periodic.

[State-transition diagrams from this slide do not survive in the extraction.]

3-8 Limit Theorems
Theorem 1: For an irreducible aperiodic Markov chain, for every state j the limit

  \pi_j = \lim_{n \to \infty} P\{X_n = j \mid X_0 = i\}, \qquad i = 0, 1, 2, \ldots

exists and is independent of the initial state i. Moreover, with N_j(k) the number of visits to state j up to time k,

  P\Big\{ \pi_j = \lim_{k \to \infty} \frac{N_j(k)}{k} \;\Big|\; X_0 = i \Big\} = 1

that is, \pi_j is the frequency with which the process visits state j.

3-9 Existence of Stationary Distribution
Theorem 2: For an irreducible aperiodic Markov chain, there are two possibilities for the scalars

  \pi_j = \lim_{n \to \infty} P\{X_n = j \mid X_0 = i\} = \lim_{n \to \infty} P_{ij}^{n}

1. \pi_j = 0 for all states j: no stationary distribution exists.
2. \pi_j > 0 for all states j: \pi is the unique stationary distribution.

Remark: If the number of states is finite, case 2 is the only possibility.

3-10 Ergodic Markov Chains
A Markov chain with a stationary distribution \pi_j > 0, j = 0, 1, 2, \ldots has positive recurrent states: the process returns to each state j "infinitely often". A positive recurrent and aperiodic Markov chain is called ergodic. Ergodic chains have a unique stationary distribution

  \pi_j = \lim_{n \to \infty} P_{ij}^{n}

and ergodicity implies that time averages equal stochastic averages.

[Slides 3-11 through 3-13 survive only as fragments in this preview. The closing fragment of the finite Markov chain example (evidently the standard two-umbrella problem, where it rains with probability p) reads:]

  \pi_0 + \pi_1 + \pi_2 = 1, \qquad P\{\text{gets wet}\} = \pi_0\, p = \frac{p(1-p)}{3-p}

3-14 Example: Finite Markov Chain
Taking p = 0.1:

  \pi = \left( \frac{1-p}{3-p}, \; \frac{1}{3-p}, \; \frac{1}{3-p} \right) = (0.310, \; 0.345, \; 0.345)

  P = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0.9 & 0.1 \\ 0.9 & 0.1 & 0 \end{pmatrix}

Numerically determining the limit of P^n:

  \lim_{n \to \infty} P^n = \begin{pmatrix} 0.310 & 0.345 & 0.345 \\ 0.310 & 0.345 & 0.345 \\ 0.310 & 0.345 & 0.345 \end{pmatrix} \qquad (n \approx 150)

The effectiveness of this approach depends on the structure of P.

3-15 Global Balance Equations
[The preview is cut off here; only the slide title survives.]
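The limit of P^n quoted on slide 3-14 is easy to reproduce numerically. The sketch below is not part of the original slides: it is a minimal Python/NumPy illustration (the variable names and the use of repeated squaring are my own choices) of the Chapman-Kolmogorov recursion P^{n+m} = P^n P^m applied to the two-umbrella transition matrix with p = 0.1.

```python
import numpy as np

# Two-umbrella example: state = number of umbrellas at the walker's
# current location; it rains with probability p on each trip.
p = 0.1
P = np.array([[0.0,   0.0, 1.0],   # 0 umbrellas here -> 2 at the other end
              [0.0, 1 - p,   p],   # 1 umbrella: carried only if it rains
              [1 - p,   p, 0.0]])  # 2 umbrellas: one carried only if it rains

# Chapman-Kolmogorov: P^(n+m) = P^n P^m, so repeated squaring yields
# P^n for n = 2, 4, 8, ... For an ergodic chain the rows of P^n all
# converge to the stationary distribution pi.
Pn = P.copy()
for _ in range(10):          # reaches n = 2^10 = 1024 in ten multiplications
    Pn = Pn @ Pn

print(np.round(Pn[0], 3))    # [0.31  0.345 0.345], matching the slides
```

Ten squarings take n well past the n ≈ 150 the slides report as sufficient for convergence.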
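Slide 3-15 is cut off in this preview, but the computation it introduces, solving the global balance equations \pi = \pi P together with the normalization \sum_j \pi_j = 1, can be carried out directly as one linear system, with no iteration. Again, this is my own hedged sketch rather than code from the lecture:

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi together with sum(pi) = 1 as one linear system."""
    n = P.shape[0]
    A = P.T - np.eye(n)      # pi P = pi  <=>  (P^T - I) pi^T = 0
    A[-1, :] = 1.0           # replace one redundant equation by sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

p = 0.1
P = np.array([[0.0, 0.0, 1.0],
              [0.0, 1 - p, p],
              [1 - p, p, 0.0]])

pi = stationary_distribution(P)
print(np.round(pi, 3))       # [0.31  0.345 0.345]
print(round(pi[0] * p, 3))   # P{gets wet} = p(1-p)/(3-p), about 0.031
```

Because the balance equations are linearly dependent for an irreducible chain, one of them must be replaced by the normalization constraint; that is what the overwritten last row of A does.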
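Finally, Theorem 1 on slide 3-8 says the visit frequencies N_j(k)/k along a single sample path converge to \pi_j. A short simulation (my own sketch, with an arbitrary seed and path length) makes the "time averages = stochastic averages" claim of slide 3-10 concrete:

```python
import numpy as np

p = 0.1
P = np.array([[0.0, 0.0, 1.0],
              [0.0, 1 - p, p],
              [1 - p, p, 0.0]])

# Simulate one long sample path; the visit frequencies N_j(k)/k
# converge to pi_j regardless of the initial state X_0.
rng = np.random.default_rng(seed=1)
k = 200_000
state = 0
visits = np.zeros(3)
for _ in range(k):
    state = rng.choice(3, p=P[state])   # jump according to row P[state]
    visits[state] += 1

print(np.round(visits / k, 3))   # close to [0.31  0.345 0.345]
```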

Date posted: 22/07/2014, 18:22
