
The Art of Error Correcting Coding, part 4

Contents

BINARY CYCLIC CODES AND BCH CODES

17. Let C be a double-error-correcting (15, 7, 5) BCH code. Suppose that the all-zero code word is transmitted and that there are erasures in the first four consecutive positions, so that r̄(x) = f + fx + fx^2 + fx^3, where f denotes an erasure. Find the most likely code polynomial that was transmitted.

18. Prove the existence of a binary cyclic (63, 58) code. (Hint: How many binary cyclic codes of length 63 are there?)

19. Let C_1 be a double-error-correcting BCH (15, 7, 5) code and C_2 a triple-error-correcting BCH (15, 5, 7) code.
   (a) Find generator matrices G_1 and G_2 of C_1 and C_2, respectively.
   (b) Compute the weight distributions W(C_1) and W(C_2). (Hint: You may use a computer program for this task.)
   (c) Estimate the performance (BER curves) of codes C_1 and C_2 with soft-decision decoding over an AWGN channel. Compare the curves with respect to uncoded BPSK modulation. What are the respective coding gains?

20. Let A_ℓ be an ℓ × ℓ Vandermonde matrix over a field.
   (a) Show that det(A_ℓ) ≠ 0, that is, the rank of A_ℓ is ℓ.
   (b) Show that any ν × ν submatrix A_ν of A_ℓ has rank ν.

21. Let 2^m − 1 = n_1 n_2, where n_1 > 1 and n_2 > 1.
   (a) Show that the element β = α^{n_1}, where α is a primitive element of GF(2^m), has order n_2. That is, n_2 is the smallest positive integer such that β^{n_2} = 1.
   (b) Using β, a binary nonprimitive BCH code can be constructed. With n = 63, determine the generator polynomial of a binary nonprimitive BCH (21, 12, 5) code.

4 Nonbinary BCH codes: Reed–Solomon codes

In this chapter, one of the most celebrated classes of ECC schemes is introduced and its encoding and decoding algorithms are explained. Reed–Solomon (RS) codes have found numerous applications in digital storage and communication systems.
Examples include the famous RS (255, 223, 33) code for NASA space communications, shortened RS codes over GF(2^8) for CD-ROM, DVD and terrestrial digital HDTV transmission applications, and an extended RS (128, 122, 7) code over GF(2^7) for cable modems, among many others.

4.1 RS codes as polynomial codes

Similar to Reed–Muller (RM) codes, RS codes can be defined as code words with components equal to the evaluation of a certain polynomial. As a matter of fact, this was the way RS codes were originally defined in Reed and Solomon (1960). RM codes, finite-geometry codes (Lin and Costello 2005) and RS codes are all members of a large class of codes: polynomial codes (Peterson and Weldon 1972), which are closely related to algebraic-geometry (AG) codes (Pretzel 1998). Let

   ū(x) = u_0 + u_1 x + ··· + u_{k−1} x^{k−1}   (4.1)

be an information polynomial, with u_i ∈ GF(2^m), 0 ≤ i < k. Clearly, there are 2^{mk} such polynomials. By evaluating (4.1) over the nonzero elements of GF(2^m), a code word in an RS (2^m − 1, k, d) code of length 2^m − 1 is obtained,

   v̄ = ( u(1)  u(α)  u(α^2)  ···  u(α^{2^m−2}) ).   (4.2)

4.2 From binary BCH to RS codes

RS codes can also be interpreted as nonbinary BCH codes. That is, RS codes are BCH codes in which the code coefficients take values in GF(2^m). In particular, for a t_d-error-correcting RS code, the zeros of the code are 2t_d consecutive powers of α. Moreover, because over GF(2^m) minimal polynomials are of the form φ_i(x) = (x − α^i), 0 ≤ i < 2^m − 1 (see Equation (3.9)), the factors of the generator polynomial are now linear, and

   ḡ(x) = ∏_{j=b}^{b+2t_d−1} (x + α^j),   (4.3)

where b is an integer, usually b = 0 or b = 1.

[The Art of Error Correcting Coding, Second Edition. Robert H. Morelos-Zaragoza. © 2006 John Wiley & Sons, Ltd. ISBN: 0-470-01558-6]
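The evaluation map (4.2) is easy to try out in code. The sketch below (helper names are illustrative, not from the book) builds GF(2^3) from p(x) = x^3 + x + 1 and encodes all information polynomials of the RS (7, 3, 5) code, confirming that the minimum weight of a nonzero code word is n − k + 1 = 5.

```python
# Sketch of RS encoding by polynomial evaluation, Eq. (4.2), over GF(2^3)
# generated by p(x) = x^3 + x + 1. Helper names are illustrative.
from itertools import product

EXP = [1]                      # EXP[i] = alpha^i as a 3-bit integer
for _ in range(6):
    v = EXP[-1] << 1           # multiply by alpha (i.e., by x)
    if v & 0b1000:             # reduce modulo x^3 + x + 1
        v ^= 0b1011
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 7]

def poly_eval(coeffs, x):
    """Evaluate u(y) = sum coeffs[i] y^i at y = x (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = gf_mul(acc, x) ^ c
    return acc

def rs_encode(u):
    """Code word (u(1), u(alpha), ..., u(alpha^6)) of the RS (7, 3, 5) code."""
    return [poly_eval(u, EXP[j]) for j in range(7)]

# Minimum weight over all 8^3 - 1 nonzero information polynomials
# equals n - k + 1 = 5, illustrating the MDS property.
weights = [sum(s != 0 for s in rs_encode(u))
           for u in product(range(8), repeat=3) if any(u)]
print(min(weights))            # -> 5
```

A nonzero u(x) of degree at most 2 has at most 2 roots, so at least 5 of the 7 evaluations are nonzero, which is exactly what the brute-force count shows.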
It follows from (4.3), and the BCH bound on page 53, that the minimum distance of an RS (n, k, d) code C over GF(2^m) is d ≥ n − k + 1. On the other hand, the Singleton bound (Singleton 1964) d ≤ n − k + 1 implies that d = n − k + 1. A code that satisfies this equality is known as a maximum distance separable (MDS) code (Singleton 1964). Therefore, RS codes are MDS codes. This gives RS codes useful properties. Among them, shortened RS codes are also MDS codes.

Using the isomorphism between GF(2^m) and {0, 1}^m, for every m-bit vector x̄_{β_j} there is a corresponding element β_j ∈ GF(2^m),

   x̄_{β_j} ∈ {0, 1}^m  ⟺  β_j ∈ GF(2^m),  0 ≤ j < 2^m − 1.

In other words, m information bits can be grouped to form symbols in GF(2^m). Conversely, if the elements of GF(2^m) are expressed as vectors of m bits, then a binary linear code of length n = m(2^m − 1) and dimension k = m(2^m − 1 − 2t_d) is obtained. The minimum distance of this code is at least 2t_d + 1. This binary image code can correct, in addition to up to t_d random errors, many random bursts of errors. For example, any single burst of up to m(t_d − 1) + 1 bits can be corrected. This follows from the fact that a burst of errors of length up to m(q − 1) + 1 bits is contained in at most q symbols of GF(2^m). Therefore, there are many combinations of random errors and bursts of errors that an RS code can correct. To a great extent, this is the reason why RS codes are so popular in practical systems.

Example 4.2.1 Let m = 3, and let GF(2^3) be generated by a primitive element α with p(α) = α^3 + α + 1 = 0. Let b = 0 and t_d = 2. Then there is an RS (7, 3, 5) code C with generator polynomial

   ḡ(x) = (x + 1)(x + α)(x + α^2)(x + α^3) = x^4 + α^2 x^3 + α^5 x^2 + α^5 x + α^6.

By mapping the symbols of GF(2^3) into binary vectors of length 3, code C becomes a binary (21, 9, 5) code that is capable of correcting up to 2 random errors, as well as any single burst of up to 4 bits.
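The generator polynomial of Example 4.2.1 can be checked by multiplying out the four linear factors over GF(2^3). A minimal sketch (helper names are illustrative):

```python
# Sketch verifying the generator polynomial of Example 4.2.1 by multiplying
# out (x + 1)(x + alpha)(x + alpha^2)(x + alpha^3) in GF(2^3).
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011            # reduce modulo x^3 + x + 1
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def poly_mul(p, q):            # polynomial product, coefficients in GF(2^3)
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

g = [1]
for j in range(4):             # factors (x + alpha^j), j = 0..3, since b = 0
    g = poly_mul(g, [EXP[j], 1])   # lowest-order coefficient first

# Expected: alpha^6 + alpha^5 x + alpha^5 x^2 + alpha^2 x^3 + x^4
expected = [EXP[6], EXP[5], EXP[5], EXP[2], 1]
print(g == expected)           # -> True
```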
4.3 Decoding RS codes

The core of the decoding algorithms for RS codes is similar to that for binary BCH codes. The only difference is that the error values e_{j_ℓ}, 1 ≤ ℓ ≤ ν, for ν ≤ t_d, have to be computed. In general, this is done using the Forney algorithm (Forney 1974). The expression below holds for RS codes with an arbitrary set of 2t_d consecutive zeros {α^b, α^{b+1}, ..., α^{b+2t_d−1}},

   e_{j_ℓ} = (α^{j_ℓ})^{2−b} Λ(α^{−j_ℓ}) / σ'(α^{−j_ℓ}),   (4.4)

where σ'(x) represents the formal derivative of σ(x) with respect to x. (A similar expression can be found in Reed and Chen (1999), p. 276.) The polynomial Λ(x) in (4.4) is an error evaluator polynomial, which is defined as

   Λ(x) = σ(x)S(x) mod x^{2t_d+1}.   (4.5)

Before introducing the first example of RS decoding, an alternative version of the Berlekamp–Massey algorithm (BMA) is presented, referred to as the Massey algorithm (or MA). The algorithm was invented by Massey (1969), and is also described in Michelson and Levesque (1985) and Wicker (1995).

Massey algorithm to synthesize a linear feedback shift register (LFSR)

1. Initialize the algorithm with σ(x) = 1 (the LFSR connection polynomial), ρ(x) = x (the correction term), i = 1 (syndrome sequence counter) and ℓ = 0 (register length).
2. Get a new syndrome and compute the discrepancy: d = S_i + Σ_{j=1}^{ℓ} σ_j S_{i−j}.
3. Test the discrepancy: d = 0? Yes: go to 8.
4. Modify the connection polynomial: σ_new(x) = σ(x) − dρ(x).
5. Test the register length: 2ℓ ≥ i? Yes: go to 7.
6. Change the register length and update the correction term: let ℓ = i − ℓ and ρ(x) = σ(x)/d.
7. Update the connection polynomial: σ(x) = σ_new(x).
8. Update the correction term: ρ(x) = xρ(x).
9. Update the syndrome sequence counter: i = i + 1.
10. Stopping condition: if i < d, go to 2. Else, stop.

Example 4.3.1 Let C be the same RS (7, 3, 5) code as in Example 4.2.1. Suppose that

   r̄(x) = αx^2 + α^5 x^4

is the received polynomial.
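The ten steps above translate almost line by line into code. A minimal sketch (illustrative names; GF(2^3) as in Example 4.2.1), run on the syndromes S_1 = α^6, S_2 = α^5, S_3 = α, S_4 = α that Example 4.3.1 derives:

```python
# Sketch of the Massey LFSR-synthesis algorithm listed above, over GF(2^3).
# Polynomials are coefficient lists, lowest order first; names illustrative.
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def gf_div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 7]

def massey(S, d):
    """Synthesize sigma(x) from syndromes S (S_i is S[i-1])."""
    sigma, rho, ell, i = [1], [0, 1], 0, 1            # step 1
    while i < d:                                       # step 10
        disc = S[i - 1]                                # step 2: discrepancy
        for j in range(1, ell + 1):
            sj = sigma[j] if j < len(sigma) else 0
            disc ^= gf_mul(sj, S[i - j - 1])
        if disc != 0:                                  # step 3
            new = [s ^ gf_mul(disc, r)                 # step 4
                   for s, r in zip(sigma + [0] * len(rho),
                                   rho + [0] * len(sigma))]
            while len(new) > 1 and new[-1] == 0:
                new.pop()
            if 2 * ell < i:                            # steps 5-6
                ell = i - ell
                rho = [gf_div(s, disc) for s in sigma]
            sigma = new                                # step 7
        rho = [0] + rho                                # step 8: rho <- x rho
        i += 1                                         # step 9
    return sigma

S = [EXP[6], EXP[5], EXP[1], EXP[1]]   # S1..S4 of Example 4.3.1
sigma = massey(S, d=5)
print(sigma == [1, EXP[1], EXP[6]])    # sigma(x) = 1 + a x + a^6 x^2 -> True
```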
Then S_1 = r̄(1) = α + α^5 = α^6, S_2 = r̄(α) = α^3 + α^2 = α^5, S_3 = r̄(α^2) = α^5 + α^6 = α and S_4 = r̄(α^3) = 1 + α^3 = α. Equation (3.16) gives:

   [ α^6  α^5 ] [ σ_2 ]   [ α ]
   [ α^5  α   ] [ σ_1 ] = [ α ].

Three methods of finding σ(x) are shown below.

Direct solution (Peterson–Gorenstein–Zierler (PGZ) algorithm)

Assume two errors. Then Δ_2 = α^7 + α^10 = 1 + α^3 = α ≠ 0. Therefore, two errors must have occurred, and

   σ_2 = α^6 det[ α  α^5 ; α  α ] = α^6,   σ_1 = α^6 det[ α^6  α ; α^5  α ] = α,

from which it follows that σ(x) = 1 + αx + α^6 x^2 = (1 + α^2 x)(1 + α^4 x).

Massey algorithm

S_1 = α^6, S_2 = α^5, S_3 = α, S_4 = α.

• i = 0 (initialization): σ(x) = 1, ℓ = 0, ρ(x) = x.
• i = 1: d = S_1 = α^6. σ_new(x) = σ(x) + dρ(x) = 1 + α^6 x. 2ℓ = 0 < i: ℓ = i − ℓ = 1, ρ(x) = σ(x)/d = α^{−6} = α. Then ρ(x) = xρ(x) = αx and σ(x) = σ_new(x).
• i = 2: d = S_2 + Σ_{j=1}^{1} σ_j S_{2−j} = α^5 + α^6 α^6 = 0. ρ(x) = xρ(x) = αx^2.
• i = 3: d = S_3 + Σ_{j=1}^{1} σ_j S_{3−j} = α + α^6 α^5 = α^2. σ_new(x) = σ(x) + dρ(x) = 1 + α^6 x + α^3 x^2. 2ℓ = 2 < i: ℓ = i − ℓ = 2, ρ(x) = σ(x)/d = α^5 + α^4 x. Then ρ(x) = xρ(x) = α^5 x + α^4 x^2 and σ(x) = σ_new(x).
• i = 4: d = S_4 + Σ_{j=1}^{2} σ_j S_{4−j} = α + α^6 α + α^3 α^5 = 1. σ_new(x) = σ(x) + dρ(x) = 1 + α^6 x + α^3 x^2 + (1)(α^5 x + α^4 x^2) = 1 + αx + α^6 x^2. 2ℓ = 4 ≥ i. ρ(x) = xρ(x) = α^5 x^2 + α^4 x^3 and σ(x) = σ_new(x).
• i = 5: the condition i < d no longer holds. Stop.

Euclidean algorithm

• Initial conditions: r_0(x) = x^5, r_1(x) = S(x) = 1 + α^6 x + α^5 x^2 + αx^3 + αx^4, b_0(x) = 0, b_1(x) = 1.
• j = 2: x^5 = (1 + α^6 x + α^5 x^2 + αx^3 + αx^4)(α^6 x + α^6) + α^5 x^3 + x^2 + αx + α^6. Thus r_2(x) = α^5 x^3 + x^2 + αx + α^6, q_2(x) = α^6 x + α^6, and b_2(x) = 0 + (α^6 x + α^6)(1) = α^6 x + α^6.
• j = 3: 1 + α^6 x + α^5 x^2 + αx^3 + αx^4 = (α^5 x^3 + x^2 + αx + α^6)(α^3 x + α^2) + α^6 x^2 + αx + α^3. Thus r_3(x) = α^6 x^2 + αx + α^3, q_3(x) = α^3 x + α^2, and b_3(x) = 1 + (α^3 x + α^2)(α^6 x + α^6) = α^3 + α^4 x + α^2 x^2.

The algorithm stops, as deg[r_3(x)] = 2 = t_d.
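The Euclidean steps above can be reproduced mechanically. A sketch of the extended EA over GF(2^3), with polynomials stored as coefficient lists, lowest order first (helper names are illustrative):

```python
# Sketch of the extended Euclidean algorithm of Example 4.3.1 over GF(2^3):
# divide r0(x) = x^5 by r1(x) = S(x), tracking b_j(x), until deg r_j <= t_d.
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 7]

def poly_divmod(num, den):
    num, q = num[:], [0] * (len(num) - len(den) + 1)
    for k in range(len(q) - 1, -1, -1):
        c = div(num[k + len(den) - 1], den[-1])
        q[k] = c
        for i, d in enumerate(den):
            num[k + i] ^= mul(c, d)
    while len(num) > 1 and num[-1] == 0:
        num.pop()
    return q, num

def poly_mul(p, qq):
    r = [0] * (len(p) + len(qq) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(qq):
            r[i + j] ^= mul(a, b)
    return r

r_prev, r_cur = [0, 0, 0, 0, 0, 1], [1, EXP[6], EXP[5], EXP[1], EXP[1]]
b_prev, b_cur = [0], [1]
while len(r_cur) - 1 > 2:                 # stop when deg r_j <= t_d = 2
    q, rem = poly_divmod(r_prev, r_cur)
    b_new = [x ^ y for x, y in zip(b_prev + [0] * len(q),
                                   poly_mul(q, b_cur) + [0])]
    while len(b_new) > 1 and b_new[-1] == 0:
        b_new.pop()
    r_prev, r_cur, b_prev, b_cur = r_cur, rem, b_cur, b_new

print(b_cur)   # b_3(x) = a^3 + a^4 x + a^2 x^2 -> [3, 6, 4]
print(r_cur)   # r_3(x) = a^3 + a x + a^6 x^2   -> [3, 2, 5]
```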
It follows that σ(x) = α^3 + α^4 x + α^2 x^2 = α^3 (1 + αx + α^6 x^2). All of the above algorithms find, up to a constant factor, the same error locator polynomial,

   σ(x) = 1 + αx + α^6 x^2 = (1 + α^2 x)(1 + α^4 x).

Therefore, the error positions are j_1 = 2 and j_2 = 4. In a computer program or a hardware implementation, the Chien search yields these two values as the (inverse) roots of σ(x). Also, note that σ'(x) = α, because in GF(2^m), 2a = a + a = 0.

To compute the error values using either the Berlekamp–Massey algorithm (BMA or MA versions) or the PGZ algorithm, the error evaluator polynomial (4.5) is needed:

   Λ(x) = (1 + αx + α^6 x^2)(1 + α^6 x + α^5 x^2 + αx^3 + αx^4) mod x^5 = 1 + α^5 x + α^3 x^2.

It is important to note that the Euclidean algorithm (EA) computes σ(x) and Λ(x) simultaneously, as σ(x) = b_{j_last}(x) and Λ(x) = r_{j_last}(x). To verify this, note that

   r_3(x) = α^3 + αx + α^6 x^2 = α^3 (1 + α^5 x + α^3 x^2) = α^3 Λ(x).

With the error locations determined, the error values from Equation (4.4) are

   e_2 = (α^2)^2 (1 + α^5 α^{−2} + α^3 α^{−4}) α^{−1} = α,
   e_4 = (α^4)^2 (1 + α^5 α^{−4} + α^3 α^{−8}) α^{−1} = α^5.

Therefore, ē(x) = αx^2 + α^5 x^4 and the decoded word is ĉ(x) = r̄(x) + ē(x) = 0. The two errors have been corrected.

Note that the constant β is the same for both polynomials found by application of the extended EA. The EA finds β·σ(x) and β·Λ(x), for some nonzero constant β ∈ GF(2^m). Nevertheless, both the error locator and the error evaluator polynomial have the same roots as those obtained by the PGZ or BMA algorithms, and thus the error values obtained are the same.

In most of the computer programs that simulate encoding and decoding procedures of RS codes on the ECC web site, the following equivalent method of finding the error values is used (Lin and Costello 2005). Let

   z(x) = 1 + (S_1 + σ_1)x + (S_2 + σ_1 S_1 + σ_2)x^2 + ··· + (S_ν + σ_1 S_{ν−1} + ··· + σ_ν)x^ν.
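The error-evaluation step above can be sketched directly: form Λ(x) = σ(x)S(x) mod x^5 and apply (4.4) with b = 0 at the two known error positions (helper names are illustrative):

```python
# Sketch of the Forney step of Example 4.3.1 over GF(2^3).
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def peval(p, x):
    acc = 0
    for c in reversed(p):
        acc = mul(acc, x) ^ c
    return acc

sigma = [1, EXP[1], EXP[6]]                 # 1 + a x + a^6 x^2
S = [1, EXP[6], EXP[5], EXP[1], EXP[1]]     # S(x) = 1 + a^6 x + a^5 x^2 + a x^3 + a x^4

lam = [0] * 5                               # Lambda(x) = sigma(x) S(x) mod x^5
for i, a in enumerate(sigma):
    for j, b in enumerate(S):
        if i + j < 5:
            lam[i + j] ^= mul(a, b)

sigma_prime = [sigma[1], 0]                 # formal derivative of sigma: a
errors = {}
for j in (2, 4):                            # known error positions j1 = 2, j2 = 4
    xinv = EXP[(-j) % 7]                    # alpha^{-j}
    num = mul(EXP[(2 * j) % 7], peval(lam, xinv))   # (alpha^j)^{2-b}, b = 0
    den = peval(sigma_prime, xinv)
    errors[j] = EXP[(LOG[num] - LOG[den]) % 7]

print(errors)    # -> {2: 2, 4: 7}, i.e., e2 = alpha, e4 = alpha^5
```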
(4.6)

Then the error value is computed as (Berlekamp 1984)

   e_{j_ℓ} = (α^{j_ℓ})^{1−b} z(α^{−j_ℓ}) / ∏_{i=1, i≠ℓ}^{ν} (1 + α^{j_i − j_ℓ}),   (4.7)

where 1 ≤ ℓ ≤ ν.

Yet another alternative to the Forney algorithm, for small values of t_d, is to determine the error values directly, as follows. For 1 ≤ ℓ ≤ ν, the error values e_{j_ℓ} are related to the syndromes S_i by a set of linear equations. Let β_ℓ = α^{j_ℓ} denote the ℓ-th error location number, 1 ≤ ℓ ≤ ν. Then

   S_i = ē(α^{b+i−1}) = Σ_{ℓ=1}^{ν} e_{j_ℓ} α^{(b+i−1) j_ℓ} = Σ_{ℓ=1}^{ν} e_{j_ℓ} β_ℓ^{(b+i−1)},   (4.8)

where 1 ≤ i ≤ 2t_d. Each ν × ν submatrix formed by the already known terms β_ℓ^{(b+i−1)} is a Vandermonde matrix. After all ν error locations j_ℓ are known, any set of ν equations of the form (4.8) can be used to find the error values. In particular, choosing the first ν syndromes,

   [ S_1 ]   [ β_1^b        β_2^b        ···  β_ν^b        ] [ e_{j_1} ]
   [ S_2 ]   [ β_1^{b+1}    β_2^{b+1}    ···  β_ν^{b+1}    ] [ e_{j_2} ]
   [  ⋮  ] = [    ⋮             ⋮         ⋱       ⋮        ] [    ⋮    ]   (4.9)
   [ S_ν ]   [ β_1^{b+ν−1}  β_2^{b+ν−1}  ···  β_ν^{b+ν−1}  ] [ e_{j_ν} ]

is a system of linear equations that can be solved using GF(2^m) arithmetic.

Example 4.3.2 Consider the same RS code and received polynomial as in Example 4.3.1. Then (4.9) gives:

   [ α^6 ]   [ 1    1   ] [ e_2 ]
   [ α^5 ] = [ α^2  α^4 ] [ e_4 ].

The determinant of the 2 × 2 matrix is Δ = α^4 + α^2 = α. From this it follows that

   e_2 = α^{−1} det[ α^6  1 ; α^5  α^4 ] = α^6 (α^3 + α^5) = α^6 · α^2 = α,

and

   e_4 = α^{−1} det[ 1  α^6 ; α^2  α^5 ] = α^6 (α^5 + α) = α^6 · α^6 = α^5,

which are the same error values as those obtained with the Forney algorithm. Again, it is emphasized that this can only be done efficiently (and practically) for relatively small values of the error-correcting capability t_d of the RS code.

4.3.1 Remarks on decoding algorithms

Unlike the BMA, the EA uses all the syndromes in the first computation step. However, in terms of the number of GF(2^m) operations, the BMA is generally more efficient than the EA.
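The direct solution of Example 4.3.2 is a 2 × 2 Vandermonde system that Cramer's rule handles in a few lines. A sketch over GF(2^3) (helper names are illustrative):

```python
# Sketch of the direct solution (4.9) for Example 4.3.2: with the error
# positions j1 = 2, j2 = 4 known, solve the 2 x 2 system by Cramer's rule.
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 7]

S1, S2 = EXP[6], EXP[5]          # syndromes from Example 4.3.1
beta1, beta2 = EXP[2], EXP[4]    # beta_l = alpha^{j_l}, with b = 0

det = beta2 ^ beta1              # determinant (subtraction is XOR here)
e2 = div(mul(S1, beta2) ^ S2, det)   # Cramer's rule, first unknown
e4 = div(S2 ^ mul(S1, beta1), det)   # Cramer's rule, second unknown

print(e2 == EXP[1] and e4 == EXP[5])   # e2 = alpha, e4 = alpha^5 -> True
```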
On the other hand, all the steps in the EA are identical, which translates into a more efficient hardware implementation.

Also, the three decoding methods discussed here for (binary and nonbinary) BCH codes are examples of incomplete, or bounded distance, decoding. That is, they are able to detect situations in which the number of errors exceeds the capability of the code. There are other approaches to decoding BCH codes, the most notable being the use of a discrete Fourier transform over GF(2^m). This is covered extensively in Blahut (1984), to which the reader is referred for details. Sudan (1997) introduced an algorithm that allows the correction of errors beyond the minimum distance of the code. It applies to RS codes and, more generally, to AG codes. This algorithm produces a list of code words (it is a list-decoding algorithm) and is based on interpolation and factorization of polynomials over GF(2^m) and its extensions. The Sudan algorithm was improved in Guruswami and Sudan (1999).

4.3.2 Errors-and-erasures decoding

For the correction of erasures, the main change to the RS decoding procedures described above is that an erasure locator polynomial τ(x) needs to be introduced, defined as

   τ(x) = ∏_{ℓ=1}^{µ} (1 + y_{i_ℓ} x),

where y_{i_ℓ} = α^{i_ℓ}, for 1 ≤ ℓ ≤ µ, denotes the position of an erasure. By definition, the positions of the erasures are known. Therefore, only the erasure values need to be found. This can be done, as before, in the Forney algorithm step. In computing the syndromes of the received polynomial, it can be shown that the erasures can be assigned arbitrary values without any difference in the decoded word. The decoding procedure is similar to the errors-only RS decoder, with the following exceptions. A modified syndrome polynomial, or modified Forney syndrome, is formed,

   T(x) = S(x)τ(x) + 1 mod x^{2t_d+1}.   (4.10)

The BMA can be applied to find σ(x) with the following modifications:

1.
The discrepancy is now defined as

   d_i = T_{i+µ+1} + Σ_{j=1}^{ℓ_i} σ_j^{(i)} T_{i+µ+1−j},   (4.11)

with d_0 = T_{µ+1}.

2. The algorithm finishes when the following stopping condition is met: i ≥ ℓ_{i+1} + t_d − 1 − ⌊µ/2⌋.

After σ(x) is obtained, a modified errors-and-erasures evaluator, or errata evaluator, ω(x) is computed as

   ω(x) = [1 + T(x)] σ(x) mod x^{2t_d+1}.   (4.12)

In addition, the following errata locator polynomial is computed:

   φ(x) = τ(x)σ(x).   (4.13)

The resulting errata evaluation, or modified Forney algorithm, is given by

   e_{j_ℓ} = (α^{j_ℓ})^{2−b} ω(α^{−j_ℓ}) / φ'(α^{−j_ℓ}),   1 ≤ ℓ ≤ ν,   (4.14)

for the error values, and

   f_{i_ℓ} = (y_{i_ℓ})^{2−b} ω(y_{i_ℓ}^{−1}) / φ'(y_{i_ℓ}^{−1}),   1 ≤ ℓ ≤ µ,   (4.15)

for the erasure values.

For errors-and-erasures decoding, the EA can also be applied to the modified syndrome polynomial T(x), using 1 + T(x) instead of S(x) as in errors-only decoding. That is, the initial conditions are r_0(x) = x^{2t_d+1} and r_1(x) = 1 + T(x). The algorithm stops when deg[r_j(x)] ≤ ⌊(d − 1 + µ)/2⌋, with ω(x) = r_j(x) and σ(x) = b_j(x).

Example 4.3.3 Let C be an RS (15, 9, 7) code over GF(2^4) with zeros {α, α^2, ..., α^6}, where α is a primitive element satisfying p(α) = α^4 + α^3 + 1 = 0. As a reference, a table of the elements of GF(2^4) as powers of α is shown below.

Table of elements of GF(2^4), p(x) = x^4 + x^3 + 1.

   Power   Vector
   0       0000
   1       0001
   α       0010
   α^2     0100
   α^3     1000
   α^4     1001
   α^5     1011
   α^6     1111
   α^7     0111
   α^8     1110
   α^9     0101
   α^10    1010
   α^11    1101
   α^12    0011
   α^13    0110
   α^14    1100

The generator polynomial of C is

   ḡ(x) = ∏_{i=1}^{6} (x + α^i) = x^6 + α^12 x^5 + x^4 + α^2 x^3 + α^7 x^2 + α^11 x + α^6.

Suppose that the polynomial associated with a code word v̄ is

   v̄(x) = α^5 + α^3 x + α^13 x^2 + αx^3 + α^7 x^4 + α^4 x^5 + αx^6 + α^4 x^7 + α^6 x^8 + α^3 x^10 + α^5 x^11 + α^6 x^12 + α^13 x^13 + α^10 x^14.
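Both the table and the generator polynomial above can be regenerated programmatically. A sketch (helper names are illustrative):

```python
# Sketch reproducing the GF(2^4) table above (p(x) = x^4 + x^3 + 1) and the
# generator polynomial of the RS (15, 9, 7) code with zeros alpha^1..alpha^6.
EXP16 = [1]
for _ in range(14):
    v = EXP16[-1] << 1
    if v & 0b10000:
        v ^= 0b11001            # reduce modulo x^4 + x^3 + 1
    EXP16.append(v)
LOG16 = {v: i for i, v in enumerate(EXP16)}

def mul16(a, b):
    return 0 if 0 in (a, b) else EXP16[(LOG16[a] + LOG16[b]) % 15]

g = [1]
for i in range(1, 7):           # factors (x + alpha^i), i = 1..6
    f = [EXP16[i], 1]
    r = [0] * (len(g) + 1)
    for a_idx, a in enumerate(g):
        for b_idx, b in enumerate(f):
            r[a_idx + b_idx] ^= mul16(a, b)
    g = r

# Spot-check the table rows and the stated generator polynomial:
# g(x) = x^6 + a^12 x^5 + x^4 + a^2 x^3 + a^7 x^2 + a^11 x + a^6
print(EXP16[5] == 0b1011, EXP16[10] == 0b1010, EXP16[14] == 0b1100)
print(g == [EXP16[6], EXP16[11], EXP16[7], EXP16[2], 1, EXP16[12], 1])  # -> True
```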
Let the received polynomial be

   r̄(x) = α^7 + α^3 x + α^13 x^2 + α^14 x^3 + α^7 x^4 + αx^5 + αx^6 + α^4 x^7 + α^6 x^8 + α^3 x^10 + α^5 x^11 + α^11 x^12 + α^13 x^13 + α^10 x^14.

Assume that, aided by side information from the receiver, it is determined that the values in positions α^0 and α^5 are unreliable, and thus declared as erasures. Note that

   ē(x) = α^14 + α^8 x^3 + α^5 x^5 + αx^12

is the polynomial associated with the errata.(1) After reception, besides r̄(x), the decoder knows that µ = 2 erasures have occurred, in positions α^0 and α^5. Therefore, it computes the erasure locator polynomial

   τ(x) = (1 + x)(1 + α^5 x) = 1 + α^10 x + α^5 x^2.   (4.16)

(1) The decoder obviously does not know this, except for the positions of the erasures. This polynomial is used as a reference against which the correctness of the decoding results can be verified.

[...] The following set of linear equations, similar to (4.8), holds between the syndromes and the values of the errors and positions:

   S_i = ē(α^{b+i}) = Σ_{ℓ=1}^{ν} e_{j_ℓ} α^{(b+i) j_ℓ} + Σ_{ℓ=1}^{µ} f_{i_ℓ} α^{(b+i) i_ℓ},   (4.20)

where 1 ≤ i ≤ 2t_d. As before, any set of ν + µ of these equations can be used to solve for the values of the errors and erasures.

Example 4.3.4 Direct solution of the errata values for the code in the previous example: after the BMA [...]

[...] (n choose i) P_s^i (1 − P_s)^{n−i},   (4.23)

where P_s denotes the probability of a symbol error at the input of the RS decoder,

   P_s = 1 − (1 − p)^m,

and p denotes the probability of a bit error at the input of the RS decoder. The probability of a word error can be upper bounded by (1.33),

   P_e(C) < 1 − Σ_{i=0}^{t_d} (n choose i) P_s^i (1 − P_s)^{n−i}.   (4.24)

(2) It should be noted that the bound is tight only for bounded distance decoders, [...]

[...] In particular, estimate the coding gain.

7. Consider the binary image C_b of an RS (7, 3, 5) code over GF(2^3).
   (a) Determine the parameters (n, k, d) of C_b and its burst-error-correcting capability.
   (b) How does C_b compare with the Fire code in Example 3.1.7?
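For the binary-image exercises, a brute-force check is feasible for the small code of Example 4.2.1. The sketch below (illustrative names) maps each GF(2^3) symbol to 3 bits and measures the minimum binary weight of the image; the book states that the image is a (21, 9, 5) code, and the code below verifies only the lower bound d ≥ 5 implied by the symbol-level distance.

```python
# Sketch for exercise 7: binary image of the RS (7, 3, 5) code of
# Example 4.2.1, built by evaluation encoding, Eq. (4.2).
from itertools import product

EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def rs_encode(u):               # evaluation encoding of Eq. (4.2)
    out = []
    for j in range(7):
        acc = 0
        for c in reversed(u):
            acc = mul(acc, EXP[j]) ^ c
        out.append(acc)
    return out

def bin_weight(word):           # total Hamming weight of the 21-bit image
    return sum(bin(s).count("1") for s in word)

wmin = min(bin_weight(rs_encode(u))
           for u in product(range(8), repeat=3) if any(u))
print(wmin)
```

For part (a), the burst-correcting capability follows from the m(t_d − 1) + 1 formula of Section 4.2: 3 · (2 − 1) + 1 = 4 bits.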
8. Discuss the error-correcting capabilities of the binary image C_b of an RS (127, 121, 7) code over GF(2^7).

9. Show that the binary image C_b of an [...]

[...] Consider again the memory-2 rate-1/2 convolutional encoder. Then the 4 × 4 matrix of weight enumerators is

   [ 1 + x^5    x^3 + x^4  x^2 + x^3  x^3 + x^4 ]
   [ x^2 + x^3  x^5 + x^2  x^4 + x    x^5 + x^2 ]
   [ x^3 + x^4  x^2 + x^3  x^5 + x^2  x^2 + x^3 ]
   [ x^3 + x^4  x^2 + x^3  x^5 + x^2  x^2 + x^3 ]

The WDS of the code from the DT construction in Example 5.2.2 is obtained by adding the terms in the first row of this matrix. References on other methods of computing the WDS of a linear [...]

[...] memory, in the sense that the output symbols depend not only on the input symbols but also on previous inputs and/or outputs. In other words, the encoder is a sequential circuit or a finite-state machine. The state of the encoder is defined as the contents of the memory. In the computer programs that implement the Viterbi algorithm and other decoding procedures involving a trellis, found on the ECC web [...]
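The DT (direct truncation) construction can be reproduced under an assumption about the encoder: the sketch below uses the standard memory-2 rate-1/2 generators ḡ_0 = (1 1 1) and ḡ_1 = (1 0 1), which are assumed here since the encoder diagram is not visible in this excerpt. It encodes all K = 3 bit inputs with no tail and tabulates the weights of the resulting (6, 3) block code.

```python
# Sketch of the direct-truncation (DT) construction: K = 3 input bits,
# no tail, assumed generators g0 = (1 1 1), g1 = (1 0 1).
from itertools import product

def encode_dt(bits):
    """Encode K input bits with no tail; returns 2K output bits."""
    s1 = s2 = 0                  # shift-register contents (the encoder state)
    out = []
    for u in bits:
        out.append(u ^ s1 ^ s2)  # v0 = u + u_{-1} + u_{-2}
        out.append(u ^ s2)       # v1 = u + u_{-2}
        s1, s2 = u, s1
    return out

# Weight distribution of the resulting (6, 3) block code
weights = sorted(sum(encode_dt(b)) for b in product((0, 1), repeat=3))
print(weights)        # -> [0, 2, 3, 3, 3, 4, 4, 5]
```

The multiset of weights corresponds to the WDS 1 + x^2 + 3x^3 + 2x^4 + x^5, which matches the sum of the first row of the matrix above, and shows d_DT = 2, smaller than the free distance d_f = 5 of this (assumed) encoder.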
[...] ḡ_{n−1} denote the impulse responses of a rate-1/n convolutional encoder. These are also called the generator sequences (or generators) of the code, on the basis of the observation made in the preceding text, and they indicate the actual physical connections of the encoder.

Example 5.1.3 Continuing with the memory-2 rate-1/2 convolutional code of Example 5.1.1, the encoder in Figure 5.1 produces ḡ_0 [...]

[...] There are other distances associated with a convolutional code, when the length of the sequence is of the order of the constraint length, but these are not relevant for the discussion in this book. More details on the structure of a general rate-k/n convolutional encoder can be found in the references Lin and Costello (2005) and Johannesson and Zigangirov (1999).

5.2 Connections with block codes

There is [...] constructed by placing the state diagram of the code at each time interval, with branches connecting the states between time i and i + 1, in correspondence with the encoder table. The branches of the trellis are labeled in the same way as the state diagram.

Convention: when there is no dichotomy, the input information bit does not need to appear explicitly in the branch label. For FIR encoders, the information [...]
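The impulse-response view of the generators can be demonstrated concretely: feeding a single 1 followed by zeros into the encoder returns the generator sequences on the two outputs. The sketch assumes the standard memory-2 rate-1/2 generators ḡ_0 = (1 1 1) and ḡ_1 = (1 0 1), since the actual generators of Figure 5.1 are not visible in this excerpt.

```python
# Sketch: the generator sequences of a feedforward convolutional encoder are
# its impulse responses. Assumed generators: g0 = (1 1 1), g1 = (1 0 1).
def encode(bits):
    """Rate-1/2 feedforward encoder; returns the two output sequences."""
    s1 = s2 = 0                      # shift-register (state) contents
    v0, v1 = [], []
    for u in bits:
        v0.append(u ^ s1 ^ s2)       # first output: u + u_{-1} + u_{-2}
        v1.append(u ^ s2)            # second output: u + u_{-2}
        s1, s2 = u, s1
    return v0, v1

g0, g1 = encode([1, 0, 0])           # impulse followed by zeros
print(g0, g1)                        # -> [1, 1, 1] [1, 0, 1]
```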

Posted: 14/08/2014, 12:20
