
Communication Systems Engineering Episode 1 Part 9 pptx




DOCUMENT INFORMATION

Number of pages: 25
File size: 59.39 KB

Content

16.36: Communication Systems Engineering
Lectures 12/13: Channel Capacity and Coding (Eytan Modiano)

Channel Coding
• When transmitting over a noisy channel, some of the bits are received with errors.
  Example: Binary Symmetric Channel (BSC) – a transmitted 0 or 1 is received correctly with probability 1 − Pe and flipped with probability Pe (Pe = probability of error).
• Q: How can these errors be removed?
• A: Coding: the addition of redundant bits that help us determine what was sent with greater accuracy.

Example (Repetition code)
Repeat each bit n times (n odd):
  Input   Code
  0       00……0
  1       11……1
• Decoder: if the received sequence contains n/2 or more 1's, decode as a 1, and as a 0 otherwise.
  – Maximum-likelihood decoding
  P(error | 1 sent) = P(error | 0 sent) = P[more than n/2 bit errors occur]
                    = Σ_{i = ⌈n/2⌉}^{n} C(n, i) · Pe^i · (1 − Pe)^(n−i)

Repetition code, cont.
• For Pe < 1/2, P(error) is decreasing in n
  – ⇒ for any ε, there exists n large enough so that P(error) < ε.
• Code rate: the ratio of data bits to transmitted bits.
  – For the repetition code, R = 1/n.
  – To send one data bit we must transmit n channel bits: "bandwidth expansion".
• In general, an (n,k) code uses n channel bits to transmit k data bits.
  – Code rate R = k/n.
• Goal: for a desired error probability ε, find the highest-rate code that can achieve P(error) < ε.

Channel Capacity
• The capacity of a discrete memoryless channel is given by C = max over p(x) of I(X;Y).
• Example: BSC with input X (probabilities P0 and P1 = 1 − P0), output Y, and crossover probability Pe:
  I(X;Y) = H(Y) − H(Y|X) = H(X) − H(X|Y)
  H(X|Y) = H(X|Y=0)·P(Y=0) + H(X|Y=1)·P(Y=1)
  H(X|Y=0) = H(X|Y=1) = Pe·log(1/Pe) + (1 − Pe)·log(1/(1 − Pe)) = Hb(Pe)
  H(X|Y) = Hb(Pe)  ⇒  I(X;Y) = H(X) − Hb(Pe)
  H(X) = P0·log(1/P0) + (1 − P0)·log(1/(1 − P0)) = Hb(P0)
  ⇒ I(X;Y) = Hb(P0) − Hb(Pe)

Capacity of the BSC
I(X;Y) = Hb(P0) − Hb(Pe)
• Hb(P) = P·log(1/P) + (1 − P)·log(1/(1 − P))
  – Hb(P) ≤ 1, with equality if P = 1/2.
• C = max over P0 of [Hb(P0) − Hb(Pe)] = 1 − Hb(Pe)
• C = 0 when Pe = 1/2, and C = 1 when Pe = 0 or Pe = 1 (see the numerical sketch below).
[Plots: Hb(P) versus P, peaking at 1 for P = 1/2; and C = 1 − Hb(Pe) versus Pe, dropping to 0 at Pe = 1/2.]

Channel Coding Theorem (Claude Shannon)
Theorem: for all R < C and ε > 0, there exists a code of rate R whose error probability is < ε.
  – ε can be arbitrarily small.
  – The proof uses a large block size n; as n → ∞, capacity is achieved.
• In practice, codes that achieve capacity are difficult to find.
  – The goal is to find a code that comes as close as possible to achieving capacity.
• Converse of the coding theorem:
  – For all codes of rate R > C, there exists ε0 > 0 such that the probability of error is always greater than ε0.
  For code rates greater than capacity, the probability of error is bounded away from 0.

Channel Coding (block diagram)
Source → Source encoder → Channel encoder → Modulator → Channel → Demodulator → Channel decoder → Source decoder → Sink

Approaches to coding
• Block Codes
  – Data is broken up into blocks of equal length.
  – Each block is "mapped" onto a larger block.
  Example: (6,3) code, n = 6, k = 3, R = 1/2
    000 → 000000    100 → 100101
    001 → 001011    101 → 101110
    010 → 010111    110 → 110010
    011 → 011100    111 → 111001
• An (n,k) binary block code is a collection of 2^k binary n-tuples (n > k).
  – n = block length
  – k = number of data bits
  – n − k = number of check bits
  – R = k/n = code rate
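The quantities above are easy to evaluate numerically. The following Python sketch is not part of the original slides; the function names are our own. It computes the binary entropy Hb(P), the BSC capacity C = 1 − Hb(Pe), and the repetition-code error probability for a few block lengths n.

```python
from math import comb, log2, ceil

def h_b(p: float) -> float:
    """Binary entropy Hb(p) = p*log2(1/p) + (1-p)*log2(1/(1-p)), in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return p * log2(1.0 / p) + (1.0 - p) * log2(1.0 / (1.0 - p))

def bsc_capacity(pe: float) -> float:
    """Capacity of the binary symmetric channel: C = 1 - Hb(Pe)."""
    return 1.0 - h_b(pe)

def repetition_error_prob(n: int, pe: float) -> float:
    """P(decoding error) for the n-fold repetition code (n odd) under
    majority-vote (maximum-likelihood) decoding:
    sum over i = ceil(n/2), ..., n of C(n, i) * Pe^i * (1 - Pe)^(n - i)."""
    return sum(comb(n, i) * pe**i * (1.0 - pe)**(n - i)
               for i in range(ceil(n / 2), n + 1))

pe = 0.1
print(f"BSC capacity for Pe={pe}: {bsc_capacity(pe):.4f} bits per channel use")
# For Pe < 1/2 the error probability shrinks as n grows, while the rate R = 1/n shrinks too.
for n in (1, 3, 5, 7, 9):
    print(f"n={n}: R={1/n:.3f}  P(error)={repetition_error_prob(n, pe):.6f}")
```

Running it shows the trade-off stated on the slides: each extra repetition lowers P(error) but also lowers the code rate R = 1/n.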
Approaches to coding (cont.)
• Convolutional Codes
  – The output is produced by looking at a sliding window of the input.
  [Figure: convolutional encoder built from two delay elements and mod-2 adders, producing output bits Ci, Ci+1 from the input stream U.]
  C(2K) = U(2K) ⊕ U(2K−2)
  C(2K+1) = U(2K+1) ⊕ U(2K) ⊕ U(2K−1)
  where ⊕ denotes mod-2 addition (1 + 1 = 0).

[...]

The Generator Matrix
• The codewords corresponding to the information sequences
    e1 = (1,0,…,0), e2 = (0,1,0,…,0), …, ek = (0,0,…,1)
  are
    g1 = (1,0,…,0, g(1,k+1), …, g(1,n))
    g2 = (0,1,…,0, g(2,k+1), …, g(2,n))
    …
    gk = (0,0,…,1, g(k,k+1), …, g(k,n))
• g1, g2, …, gk are clearly linearly independent, so they form a basis for the code. The generator matrix G has them as its rows:
    G = [ g11  g12  …  g1n ]
        [ g21  g22  …  g2n ]
        [  ⋮               ]
        [ gk1  gk2  …  gkn ]
• For an input sequence x = (x1, …, xk): Cx = xG.
  – Since any input sequence is a linear combination of the basis (e1, e2, …, ek), every corresponding codeword can be represented as a linear combination of the corresponding rows of G.
  – Note: x1 → C1, x2 → C2 ⇒ x1 + x2 → C1 + C2.

Example
• Consider the (6,3) code from earlier: 100 → 100101; 010 → 010111; 001 → 001011.
    G = [ 1 0 0 1 0 1 ]
        [ 0 1 0 1 1 1 ]
        [ 0 0 1 0 1 1 ]
  Codeword for (1,0,1): (1,0,1)·G = (1,0,1,1,1,0).
• G = [ I_K | P_{K×(n−K)} ], where I_K is the K×K identity matrix.

The parity check matrix
• H = [ P^T | I_(n−K) ], where I_(n−K) is the (n−K)×(n−K) identity matrix.
  Example (for the (6,3) code):
    H = [ 1 1 0 1 0 0 ]
        [ 0 1 1 0 1 0 ]
        [ 1 1 1 0 0 1 ]
• Now, if ci is a codeword of C, then ci·H^T = 0.
  – "C is in the null space of H."
  – Any codeword in C is orthogonal to the rows of H.

Decoding
• v = transmitted codeword = v1 … vn
• r = received sequence …

Minimum distance decoding
• For a given syndrome, find the error pattern of minimum weight (emin) that gives this syndrome and decode: r' = r + emin.

Standard Array
• M = 2^k codewords C1, C2, …, CM:
    Row 1:         C1         C2         …   CM          syndrome S1
    Row 2:         e1 + C1    e1 + C2    …   e1 + CM     syndrome S2
    ⋮
    Row 2^(n−k):   e(2^(n−k)−1) + C1     …   e(2^(n−k)−1) + CM     syndrome S(2^(n−k))
• Row 1 consists of all M codewords.
• Row 2: e1 = the minimum-weight n-tuple not already in the array, i.e. …

Syndrome decoding (received r)
1) Find S = r·H^T, the syndrome of r.
2) Find the coset leader e corresponding to S.
3) Decode: C = r + e.
• "Minimum distance decoding"
  – Decode into the codeword that is closest to the received sequence.

Example (syndrome decoding)
• Simple (4,2) code:
    G = [ 1 0 1 0 ]      H = [ 1 0 1 0 ]
        [ 0 1 0 1 ]          [ 0 1 0 1 ]
    Data      00    01    10    11
    Codeword  0000  0101  1010  1111
• Standard array (one row per coset; the coset leader is the first entry in each row):
    syndrome 00:  0000  0101  1010  1111
    syndrome 10:  1000  1101  0010  0111
    syndrome 01:  0100  0001  1110  1011
    syndrome 11:  1100  1001  0110  0011
• Suppose 0111 is received: S = 10, coset leader = 1000.
  Decode: C = 0111 + 1000 = 1111 (see the decoding sketch below).
[Figure: minimum distance decoding regions around codewords c1 … c5, with error patterns e1, e2, e3 marking correctly decoded, incorrectly decoded, and undetected errors.]

Hamming codes
• n = 2^m − 1, k = 2^m − 1 − m (e.g., (3,1), (7,4), (15,11), …)
  – R = 1 − m/(2^m − 1) ⇒ very high rate
  – dmin = 3 ⇒ single error correction
• Construction of Hamming codes
  – The parity check matrix H consists of all non-zero binary m-tuples.
  Example: (7,4) Hamming code (m = 3):
    H = [ 1 0 1 1 1 0 0 ]      G = [ 1 0 0 0 1 1 0 ]
        [ 1 1 0 1 0 1 0 ]          [ 0 1 0 0 0 1 1 ]
        [ 0 1 1 1 0 0 1 ]          [ 0 0 1 0 1 0 1 ]
                                   [ 0 0 0 1 1 1 1 ]
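To make the syndrome-decoding procedure concrete, here is a minimal Python sketch for the simple (4,2) code above. It is our own illustration, not from the slides, and the function names are assumptions. For this code the minimum-weight coset leader for a given syndrome is not unique, so the sketch takes the leaders as listed in the standard array; it reproduces the worked example, decoding the received word 0111 to the codeword 1111.

```python
from itertools import product

# Generator and parity check matrices of the simple (4,2) code from the slides.
G = [[1, 0, 1, 0],
     [0, 1, 0, 1]]
H = [[1, 0, 1, 0],
     [0, 1, 0, 1]]

def encode(x, G):
    """Codeword C = x * G over GF(2)."""
    return tuple(sum(xi * g for xi, g in zip(x, col)) % 2 for col in zip(*G))

def syndrome(word, H):
    """Syndrome S = word * H^T over GF(2)."""
    return tuple(sum(w * h for w, h in zip(word, row)) % 2 for row in H)

# Coset leaders as listed in the standard array above
# (several minimum-weight choices exist for this code; we follow the slides).
leaders = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (1, 1, 0, 0)]
coset_leader = {syndrome(e, H): e for e in leaders}

def decode(r, H):
    """Syndrome decoding: compute S = r*H^T, look up the coset leader e, return C = r + e."""
    e = coset_leader[syndrome(r, H)]
    return tuple((ri + ei) % 2 for ri, ei in zip(r, e))

print([encode(x, G) for x in product((0, 1), repeat=2)])  # the four codewords
r = (0, 1, 1, 1)                       # received word from the worked example
print(syndrome(r, H))                  # (1, 0), i.e. syndrome "10"
print(decode(r, H))                    # (1, 1, 1, 1), i.e. codeword 1111
```

The same table-lookup structure applies to any linear block code; only G, H, and the coset-leader list change.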
Systematic codes
Theorem: any (n,k) LBC can be represented in systematic form, where
    data = x1 … xk,  codeword = x1 … xk c(k+1) … cn
i.e. every codeword can be broken into a data part and a redundant part.
• The previous (6,3) code was systematic.
• Hence we will restrict our discussion to systematic codes only.

Definitions
• Given X ∈ {0,1}^n, the Hamming Weight of X is the number of 1's in X.
• Given X, Y ∈ {0,1}^n, the Hamming Distance between X and Y is the number of places in which they differ:
    dH(X, Y) = Σ_{i=1}^{n} Xi ⊕ Yi = Weight(X + Y),
    where X + Y = [x1 ⊕ y1, x2 ⊕ y2, …, xn ⊕ yn].
• The minimum distance of a code is the smallest Hamming distance between any two distinct codewords: dmin = min over Ci ≠ Cj of dH(Ci, Cj).

Linear Block Codes
• An (n,k) linear block code (LBC) is defined by 2^k codewords of length n: C = {C1, …, CM}.
• An (n,k) LBC is a k-dimensional subspace of {0,1}^n.
  – (0…0) is always a codeword.
  – If C1, C2 ∈ C, then C1 + C2 ∈ C.
• Theorem: for an LBC the minimum distance is equal to the minimum weight (Wmin) of the code,
    Wmin = min over all nonzero Ci of Weight(Ci).
  Proof: suppose dmin = dH(Ci, Cj) for some Ci, Cj ∈ C. Then dH(Ci, Cj) = Weight(Ci + Cj), and Ci + Cj is itself a nonzero codeword, so dmin ≥ Wmin; conversely, the weight of any nonzero codeword is its distance from the all-zero codeword, so Wmin ≥ dmin.
  (The minimum weight and minimum distance of the (6,3) code above are checked numerically in the sketch below.)
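As a quick numerical check of the minimum-distance theorem (our own illustration, not from the slides; function names are assumptions), the following Python sketch enumerates the codewords of the (6,3) code via its generator matrix and confirms that the minimum nonzero codeword weight equals the minimum pairwise distance.

```python
from itertools import combinations, product

# The (6,3) code from the slides, generated by G = [I_3 | P].
G = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1]]

def encode(x, G):
    """Codeword C = x * G over GF(2)."""
    return tuple(sum(xi * g for xi, g in zip(x, col)) % 2 for col in zip(*G))

def weight(c):
    """Hamming weight: number of 1's in c."""
    return sum(c)

def distance(a, b):
    """Hamming distance: number of positions where a and b differ."""
    return sum(ai ^ bi for ai, bi in zip(a, b))

codewords = [encode(x, G) for x in product((0, 1), repeat=3)]

w_min = min(weight(c) for c in codewords if any(c))
d_min = min(distance(a, b) for a, b in combinations(codewords, 2))

print("codewords:", ["".join(map(str, c)) for c in codewords])
print("minimum nonzero weight:", w_min)   # 3
print("minimum distance:      ", d_min)   # 3, equal to w_min as the theorem states
```

For this code both quantities come out to 3, matching the theorem that for a linear block code dmin = Wmin.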

Date posted: 07/08/2014, 12:21
