The Art of Error Correcting Coding, Part 3

Figure 3.2 Circuit for systematic encoding: division by g(x). (The circuit is an LFSR built from n−k D flip-flops and XOR gates, with feedback taps given by the coefficients g_0, g_1, ..., g_{n−k}; it accepts the message bits u_0, u_1, ..., u_{k−1} and computes b(x) = [x^{n−k} u(x)] mod g(x), whose coefficients become the redundant bits of the code word.)

3.1.5 Shortened cyclic codes and CRC codes

There are many practical applications in which an error correcting code with simple encoding and decoding procedures is desired, but existing constructions do not give the desired length, dimension and minimum distance. The following text is from an email sent to the author:

    We plan to use a simple FEC/ECC scheme to detect/correct single-bit errors in a 64-bit data block. The objective is to find or choose an ECC scheme to correct single-bit errors with up to 8 bits of overhead, giving a maximum of 72 bits (64 data bits plus 8 redundant bits) in total.

Naturally, since 72 is not of the form 2^m − 1, none of the cyclic codes studied so far can be applied directly. One possible solution is to use a cyclic Hamming (127, 120, 3) code and to shorten it until a dimension k = 64 is reached. This yields a shortened Hamming (71, 64, 3) code.^2

Shortening is accomplished by not using all the information bits of a code. Let s denote the number of information bits not used, referred to as the shortening depth, and let C denote a cyclic (n, k, d) code. A shortened message is obtained by fixing s (arbitrary) message positions to zero, which leaves k − s positions available for the message bits. Without loss of generality, let the highest positions in a message be set to zero. Then

  u(x) = u_0 + u_1 x + ··· + u_{k−1−s} x^{k−1−s}.

The output of a systematic encoder, when the input is the message polynomial u(x), is the code polynomial

  v(x) = x^{n−k} u(x) + [x^{n−k} u(x) mod g(x)],

of degree up to n − 1 − s. This shows that the resulting shortened code C_s is a linear (n − s, k − s, d_s) code with d_s ≥ d. In general, the shortened code C_s is no longer cyclic.

^2 This is an example to introduce the concept of shortening. A (72, 64, 4) single-error-correcting/double-error-detecting (SEC/DED) code, based on a shortened Hamming code with an overall parity-check bit added, was proposed for the IBM model 360 mainframe (Hsiao 1970). This code has also been used in byte-oriented memory devices.

Example 3.1.6 Let C denote the cyclic Hamming (7, 4, 3) code with generator polynomial g(x) = 1 + x + x^3. A new code is derived from C by fixing the two leading information bits to zero and transmitting only the two remaining information bits together with the same three redundant bits computed by an encoder for C. This process gives a set of code words that forms a shortened linear (5, 2, 3) code.

The fundamental property of a shortened code C_s obtained from a cyclic code is that, although the code is generally no longer cyclic, the same encoder and decoder can be used, once the leading zeros are properly taken into account. In computer simulations, it is easiest to simply prepend s zeros to each code word of C_s and then use the same encoding and decoding algorithms discussed in the book. This method is widely used in hardware implementations of Reed-Solomon decoders. Alternatively, the leading zeros in a message do not need to be included in the code word. Instead, the decoder circuit is modified to multiply the incoming received polynomial r(x) by x^{n−k+s} modulo g(x), instead of x^{n−k} modulo g(x) as in the conventional decoder. More details on the modified encoder and decoder structures for a shortened cyclic code can be found in Lin and Costello (2005), Peterson and Weldon (1972) and Wicker (1995), among other references.
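The systematic encoding rule above is easy to exercise in software. The following C sketch is only an illustration (the macro and function names are not taken from the book's programs): it encodes the shortened (5, 2, 3) code of Example 3.1.6 by treating each message as a (7, 4, 3) Hamming message whose two leading bits are fixed to zero, computing the parity by division by g(x) = 1 + x + x^3, and printing only the n − s = 5 transmitted positions.

    #include <stdio.h>

    /* Systematic encoder for the cyclic Hamming (7,4,3) code with g(x) = 1 + x + x^3,
     * used to encode the shortened (5,2,3) code of Example 3.1.6: the s = 2 highest
     * message positions are fixed to zero and are not transmitted.
     * Bit k of an integer holds the coefficient of x^k. */
    #define N 7
    #define K 4
    #define S 2                               /* shortening depth */

    /* parity(u) = [x^(n-k) u(x)] mod g(x), computed by bitwise long division */
    static unsigned parity(unsigned u)
    {
        unsigned g = 0x0B;                    /* g(x) = x^3 + x + 1  ->  binary 1011 */
        unsigned r = u << (N - K);            /* x^(n-k) u(x) */
        for (int i = N - 1; i >= N - K; i--)
            if (r & (1u << i))
                r ^= g << (i - (N - K));
        return r;                             /* remainder, degree < n - k */
    }

    int main(void)
    {
        /* enumerate all 2^(k-s) = 4 messages of the shortened code */
        for (unsigned u = 0; u < (1u << (K - S)); u++) {
            unsigned v = (u << (N - K)) | parity(u);   /* v(x) = x^(n-k)u(x) + parity */
            printf("u = %u%u   v = ", (u >> 1) & 1, u & 1);
            for (int i = N - S - 1; i >= 0; i--)       /* only n - s = 5 bits are sent */
                printf("%u", (v >> i) & 1);
            printf("\n");
        }
        return 0;
    }

Running the sketch lists the four code words of the (5, 2, 3) code (00000, 01011, 10110 and 11101), whose nonzero weights confirm d_s ≥ d = 3.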
Another possible solution is to try to construct other classes of cyclic codes with the desired parameters. Good families of cyclic codes not covered in the text are the Euclidean geometry (EG) and projective geometry (PG) codes (Lin and Costello 2005).^3 Yet another possibility is to use a nonbinary cyclic code, such as the Reed-Solomon code discussed in Chapter 4, and to express it in terms of bits. This binary image of a Reed-Solomon (RS) code has the additional feature of being able to correct many bursts of errors. Binary images of shortened RS codes have found numerous applications, including memory devices.

^3 These codes can be used to construct powerful low-density parity-check (LDPC) codes that have excellent performance with iterative decoding techniques. See Section 8.3 in Chapter 8.

CRC codes

One of the most popular forms of ECC is the class of cyclic redundancy check codes, or CRC codes. These cyclic codes, of length n ≤ 2^m − 1, are used to detect errors in blocks of data. Typically, CRC codes have generator polynomials of the form (1 + x) g(x), where g(x) is the generator polynomial of a cyclic Hamming code. Common values of m are 12, 16 and 32. The choice of the generator polynomial is dictated by the undetected error probability, which depends on the weight distribution of the code. The computation of the undetected error probability of a cyclic code is tantamount to determining its weight distribution. This has remained an elusive task, even after 50 years of coding theory, with some progress reported in Fujiwara et al. (1985), Kazakov (2001) and references therein. Table 3.1 lists the most popular generator polynomials of CRC codes, or CRC polynomials.

Table 3.1 Generator polynomials of some CRC codes.

  Code        m    g(x)
  CRC-12      12   x^12 + x^11 + x^3 + x^2 + x + 1
  CRC-16      16   x^16 + x^15 + x^2 + 1
  CRC-CCITT   16   x^16 + x^12 + x^5 + 1
  CRC-32      32   x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
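To make the use of Table 3.1 concrete, the C sketch below computes the CRC-CCITT check bits of a data block by plain bitwise division by g(x) = x^16 + x^12 + x^5 + 1. It is meant only as an illustration of the arithmetic: practical implementations are usually table driven, and deployed CRC-CCITT variants often initialize the register to a nonzero value such as 0xFFFF, whereas here the register starts at zero so that the result is exactly the polynomial remainder. The sample data bytes are arbitrary.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bitwise CRC-CCITT: divide the message polynomial, multiplied by x^16,
     * by g(x) = x^16 + x^12 + x^5 + 1 and return the 16-bit remainder.
     * Message bits are processed MSB first. */
    static uint16_t crc_ccitt(const uint8_t *msg, size_t len)
    {
        const uint16_t g = 0x1021;            /* x^12 + x^5 + 1; the x^16 term is implicit */
        uint16_t crc = 0x0000;

        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)(msg[i]) << 8;   /* bring in the next 8 message bits */
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ g)
                                     : (uint16_t)(crc << 1);
        }
        return crc;                           /* the 16 redundant (check) bits */
    }

    int main(void)
    {
        uint8_t data[] = { 0x31, 0x32, 0x33 };            /* an arbitrary data block */
        uint16_t check = crc_ccitt(data, sizeof data);
        printf("CRC-CCITT check bits: 0x%04X\n", check);  /* appended by the transmitter */
        return 0;
    }

The receiver recomputes the remainder over the received data and declares an error whenever it disagrees with the received check bits; an undetected error occurs only when the error polynomial is itself a code polynomial, which is why the weight distribution governs the undetected error probability.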
3.1.6 Fire codes

A natural extension of CRC codes is the class of Fire codes. These binary cyclic (n, k, d) codes are capable of correcting single error bursts of length b or less and have a generator polynomial of the form

  g(x) = (x^{2b−1} + 1) p(x),

where p(x) is an irreducible polynomial of degree c ≥ b that is relatively prime to x^{2b−1} + 1. The length of a Fire code is n = LCM(e, 2b − 1), where e is the smallest integer such that p(x) divides x^e + 1. (This integer is called the exponent of p(x) in Peterson and Weldon (1972) and its period in Lin and Costello (2005).) The dimension is k = LCM(e, 2b − 1) − (2b − 1) − c. Note that the parity-check bits associated with the polynomial x^{2b−1} + 1 are interleaved and evenly spaced at every 2b − 1 positions. Therefore, at most b − 1 successive parity-check bits are affected by bursts of length up to b. This fact, together with the polynomial p(x), is sufficient to determine the location of the error burst. Decoding is accomplished by an error-trapping decoder (Kasami 1964) that is similar in structure to the Meggitt decoder presented in the next section. For more details, see Section 11.3 of Peterson and Weldon (1972) and Section 20.3.1 of Lin and Costello (2005).

Example 3.1.7 Let b = 4 and c = 4, and choose p(x) = x^4 + x + 1, which has e = 15. The length and dimension of this Fire code are n = LCM(15, 7) = 105 and k = 105 − 7 − 4 = 94, respectively. In other words, a Fire (105, 94) code capable of correcting any single burst of four or fewer errors has generator polynomial

  g(x) = (x^7 + 1)(x^4 + x + 1) = x^11 + x^8 + x^7 + x^4 + x + 1.

3.2 General decoding of cyclic codes

Let r(x) = v(x) + e(x), where e(x) is the error polynomial associated with an error vector produced after transmission over a BSC. Then the syndrome polynomial is defined as

  s(x) = r(x) mod g(x) = e(x) mod g(x).   (3.8)

Figure 3.3 shows the general architecture of a decoder for cyclic codes. The syndrome polynomial s(x) is used to determine the error polynomial e(x). Since a cyclic code is first of all a linear code, this architecture can be thought of as a "standard array approach" to the decoding of cyclic codes.

Figure 3.3 General architecture of a decoder for cyclic codes. (The received polynomial r(x) = v(x) + e(x) enters an n-stage register and, simultaneously, a division-by-g(x) circuit that produces s(x) = r(x) mod g(x); an error-detection block estimates e(x) from s(x), and the estimate is added to the register contents to deliver the estimated code word.)

The decoding problem amounts to finding the (unknown) error polynomial e(x) from the (known) syndrome polynomial s(x). These two polynomials are related by Equation (3.8), which is the basis of a syndrome decoder, also referred to as a Meggitt decoder (Meggitt 1960), for cyclic codes. A related decoder is the error-trapping decoder (Kasami 1964), which checks whether the error polynomial e(x) is contained ("trapped") in the syndrome polynomial s(x). Only a limited number of classes of codes have relatively simple decoders of this kind, for example, cyclic Hamming and Golay codes. As the error-correcting capability t = ⌊(d_min − 1)/2⌋ increases, however, the complexity of an architecture based only on the detection of errors with combinatorial logic becomes too large.

Suppose that an error occurs in the position corresponding to x^{n−1} (the first received bit); in other words, e(x) = x^{n−1}. The corresponding syndrome polynomial is s(x) = x^{n−1} mod g(x). Because the code is cyclic, if an error affecting a given position can be detected, then any other position can be handled as well, by cyclically shifting the contents of the syndrome register together with the received word. The syndrome decoder therefore checks the syndrome for each received position and, whenever the pattern x^{n−1} mod g(x) is detected, corrects the position currently being output.

Example 3.2.1 In this example, the decoding of a cyclic (7, 4, 3) Hamming code is illustrated. For this code, g(x) = x^3 + x + 1. The syndrome decoding circuit is shown in Figure 3.4. The received bits are stored in a shift register and at the same time fed to a divide-by-g(x) circuit. After all seven bits have been received, the shift register contents are shifted out one at a time, while a combinatorial gate checks whether the syndrome polynomial x^6 mod (1 + x + x^3) = 1 + x^2, or (101) in binary vector notation, is present in the syndrome register. When the output of the gate equals one, the error is at position x^6 and is corrected. At the same time, the correction is fed back to the divide-by-g(x) circuit so that, upon successful completion of decoding, all the contents of the register are brought to zero. This also allows anomalies to be detected at the end of the decoding process: if the contents of the syndrome register are not all equal to zero, an uncorrectable error pattern has occurred.
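The operation of the decoder in Example 3.2.1 can be mimicked step by step in software. The C sketch below is an illustration of the idea rather than the circuit of Figure 3.4: it computes s(x) = r(x) mod g(x), then repeatedly multiplies the syndrome by x modulo g(x), correcting the current position whenever the pattern 1 + x^2 = x^6 mod g(x) appears.

    #include <stdio.h>

    #define N 7

    /* r(x) mod g(x), with g(x) = x^3 + x + 1; bit k of the argument is the
     * coefficient of x^k. */
    static unsigned mod_g(unsigned r)
    {
        for (int i = N - 1; i >= 3; i--)
            if (r & (1u << i))
                r ^= 0xBu << (i - 3);          /* 0xB <-> x^3 + x + 1 */
        return r;
    }

    /* Meggitt-style decoder for the cyclic Hamming (7,4,3) code: corrects one error. */
    static unsigned meggitt_decode(unsigned r)
    {
        unsigned s = mod_g(r);                 /* syndrome s(x) = r(x) mod g(x) */
        for (int i = N - 1; i >= 0; i--) {
            if (s == 0x5) {                    /* 1 + x^2 detected: error at position i */
                r ^= 1u << i;                  /* correct the bit */
                s ^= 0x5;                      /* feedback: the register returns to zero */
            }
            s <<= 1;                           /* multiply s(x) by x ...       */
            if (s & 0x8) s ^= 0xB;             /* ... and reduce modulo g(x)   */
        }
        return r;
    }

    int main(void)
    {
        unsigned v = 0x0B;                     /* the code word g(x) = x^3 + x + 1 */
        for (int j = 0; j < N; j++) {
            unsigned r = v ^ (1u << j);        /* single error at position x^j */
            printf("error at x^%d: %s\n", j, meggitt_decode(r) == v ? "corrected" : "FAILED");
        }
        return 0;
    }

A nonzero syndrome register at the end of the loop would signal an error pattern that the decoder could not correct, which is the anomaly check mentioned above.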
Attention is now focused on cyclic codes with large error-correcting capabilities, for which the decoding problem can be treated as that of solving sets of equations. Because of this, the notion of a field, a set in which one can multiply, add and find inverses, is required. Cyclic codes have a rich algebraic structure. It will be shown later that powerful decoding algorithms can be implemented efficiently when the roots of the generator polynomial are invoked and arithmetic over a finite field is used.

Figure 3.4 Syndrome decoder for a binary cyclic Hamming (7,4) code. (A 7-stage register holds the received word r(x) = v(x) + e(x) while a divide-by-g(x) circuit, with taps given by g(x) = 1 + x + x^3, forms the syndrome; a gate detects the pattern x^{n−1} mod g(x) and feeds the correction e(x) back.)

Recall that the generator polynomial is the product of binary irreducible polynomials,

  g(x) = ∏_{j ∈ J} φ_j(x),   J ⊂ {1, 2, ...}.

The algebraic structure of cyclic codes enables one to find the factors (roots) of each φ_j(x) in a splitting field (also known as an extension field). In the case of interest, that is, when the underlying symbols are bits, the splitting field is a Galois field.^4 Some authors refer to Galois fields as finite fields. The standard notation used in the text is GF(q), where q = 2^m. (Although, in general, q can be a power of any prime number.)

^4 After the famous French mathematician Evariste Galois (1811-1832).

Example 3.2.2 In this example, the reader is reminded that the concept of a splitting field is already very familiar. Consider the field of real numbers. Over this field, it is well known that the polynomial x^2 + 1 is irreducible. However, over the complex field it splits into (x + i)(x − i), where i = √−1. Thus the complex field is the splitting field of the real field!

3.2.1 GF(2^m) arithmetic

It can be shown, with basic abstract algebra concepts (Lin and Costello 2005; Peterson and Weldon 1972), that any binary polynomial of degree m splits completely over GF(2^m). For the purposes of this book, it is sufficient to learn the basic computational aspects of finite fields. Serious readers are urged to study a good textbook on abstract algebra.^5

^5 The author likes Herstein (1975).

Decoding with GF(2^m) arithmetic allows the replacement of complex combinatorial circuits by practical processor architectures that can solve Equation (3.8) as a set of linear equations. In the following text, the tools necessary to solve the equations involved in the decoding of cyclic codes are introduced.

Important properties of GF(2^m)

The field GF(2^m) is isomorphic (with respect to "+") to the linear space {0, 1}^m. In other words, for every element β ∈ GF(2^m) there exists a unique m-dimensional binary vector v_β ∈ {0, 1}^m.

There is a primitive element α ∈ GF(2^m) such that every nonzero element β of GF(2^m) can be expressed as β = α^j, 0 ≤ j ≤ 2^m − 2. This element α is a root of an irreducible polynomial p(x) over {0, 1}, called a primitive polynomial; that is, p(α) = 0. A primitive element α of GF(2^m) satisfies α^{2^m−1} = 1, and n = 2^m − 1 is the smallest positive integer such that α^n = 1.

Example 3.2.3 Let α be a primitive element of GF(2^3) such that p(α) = α^3 + α + 1 = 0 and α^7 = 1. The table below shows three different ways to express, or represent, the elements of GF(2^3).

  Power   Polynomial      Vector
  -       0               000
  1       1               001
  α       α               010
  α^2     α^2             100
  α^3     1 + α           011
  α^4     α + α^2         110
  α^5     1 + α + α^2     111
  α^6     1 + α^2         101
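The table of Example 3.2.3 can be generated mechanically: start from 1 and repeatedly multiply by α, using p(α) = 0, that is, α^3 = α + 1, to reduce the result. The short C sketch below does this for GF(2^3); it is only an illustration, with an element stored as an integer whose bit k is the coefficient of α^k, so that the printed bits match the Vector column of the table.

    #include <stdio.h>

    #define M     3
    #define PRIM  0x0B                         /* p(x) = x^3 + x + 1 */
    #define NELEM ((1 << M) - 1)               /* number of nonzero field elements */

    int main(void)
    {
        unsigned beta = 1;                     /* alpha^0 */
        for (int i = 0; i < NELEM; i++) {
            printf("alpha^%d  ->  %u%u%u\n", i,
                   (beta >> 2) & 1, (beta >> 1) & 1, beta & 1);
            beta <<= 1;                        /* multiply by alpha */
            if (beta & (1u << M))              /* degree m reached: reduce */
                beta ^= PRIM;                  /* subtract (XOR) p(alpha)  */
        }
        return 0;
    }

The same loop, with PRIM replaced by any primitive polynomial of degree m, generates the power and vector representations of GF(2^m).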
When adding elements of GF(2^m), the vector representation is the most useful, because only a simple exclusive-or operation is needed. However, when elements are to be multiplied, the power representation is the most efficient: a multiplication becomes simply an addition of exponents modulo 2^m − 1. The polynomial representation may be appropriate when performing operations modulo a polynomial. An example of the need for this polynomial representation was seen in the discussion of shortened cyclic codes, where the value of x^{n−1} mod g(x) was required.

In the power representation, because α^{2^m−1} = 1 holds, note that α^{2^m} = α·α^{2^m−1} = α, α^{2^m+1} = α^2·α^{2^m−1} = α^2, and so on. This is to say that the powers of α are computed modulo 2^m − 1. Applying the same argument shows that α^{−1} = α^{−1+2^m−1} = α^{2^m−2}. In Example 3.2.3 above, α^{−1} = α^{2^3−2} = α^6. In general, the inverse β^{−1} = α^k of an element β = α^ℓ is found by determining the integer k, 0 ≤ k < 2^m − 1, such that α^{ℓ+k} = 1, which can be expressed as ℓ + k ≡ 0 mod (2^m − 1); therefore, k = 2^m − 1 − ℓ. Also, in the polynomial representation, the equation p(α) = 0 is used to reduce expressions. In Example 3.2.3, α^3 = α^3 + 0 = α^3 + (α^3 + α + 1) = α + 1.

Log and antilog tables

A convenient way to perform both multiplications and additions in GF(2^m) is to use two look-up tables, with different interpretations of the address. This allows one to change between the polynomial (vector) representation and the power representation of an element of GF(2^m). The antilog table A(i) is useful when performing additions: it gives the binary vector, represented as an integer in natural representation, that corresponds to the element α^i. The log table L(i) is used when performing multiplications: it gives the power of alpha, α^{L(i)}, that corresponds to the binary vector represented by the integer i. The two tables are thus inverse mappings of each other: A(L(i)) = i and L(A(i)) = i. The best way to understand how to use the tables in the computation of arithmetic operations in GF(2^m) is through an example.

Example 3.2.4 Consider GF(2^3) with p(α) = α^3 + α + 1 = 0 and α^7 = 1. The log and antilog tables are the following:

  Address   GF(2^m)-to-vector     Vector-to-GF(2^m)
  i         Antilog table, A(i)   Log table, L(i)
  0         1                     -1
  1         2                     0
  2         4                     1
  3         3                     3
  4         6                     2
  5         7                     6
  6         5                     4
  7         0                     5

Consider the computation of the element γ = α(α^3 + α^5)^3 in vector form. Using the properties of GF(2^3), γ can be computed as follows:

  α^3 + α^5 = 011 ⊕ 111 = 100 = α^2,

so that γ = α(α^2)^3 = α^{1+6} = α^7 = 1. On the other hand, using the log and antilog tables, the computation of γ proceeds as follows:

  γ = A( L( A(3) ⊕ A(5) ) · 3 + 1 ) = A( L(3 ⊕ 7) · 3 + 1 ) = A( L(4) · 3 + 1 ) = A( 2 · 3 + 1 ) = A(7) = A(0) = 1.

In the last step, use was made of the fact that α^7 = 1. Antilog and log tables are used in performing addition and multiplication over GF(2^m). Computer programs are available on the ECC web site for simulating encoding and decoding algorithms of BCH and Reed-Solomon codes, with arithmetic in GF(2^m). These algorithms are described in the subsequent sections.
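The computation in Example 3.2.4 can be carried out directly with the two tables. The C sketch below builds A(i) and L(i) for GF(2^3) from p(x) = x^3 + x + 1 and then evaluates γ = α(α^3 + α^5)^3; it is an illustration of table-driven GF(2^m) arithmetic, not the code distributed on the ECC web site.

    #include <stdio.h>

    #define M     3
    #define Q     (1 << M)                     /* field size 2^m             */
    #define NQ    (Q - 1)                      /* number of nonzero elements */
    #define PRIM  0x0B                         /* p(x) = x^3 + x + 1         */

    static int A[Q];                           /* antilog: A[i] = vector of alpha^i    */
    static int L[Q];                           /* log:     L[v] = exponent of vector v */

    static void build_tables(void)
    {
        int beta = 1;
        L[0] = -1;                             /* the log of zero is undefined */
        for (int i = 0; i < NQ; i++) {
            A[i] = beta;
            L[beta] = i;
            beta <<= 1;
            if (beta & Q) beta ^= PRIM;        /* reduce using p(alpha) = 0 */
        }
    }

    /* multiply two field elements given in vector form */
    static int gf_mul(int a, int b)
    {
        if (a == 0 || b == 0) return 0;
        return A[(L[a] + L[b]) % NQ];          /* add exponents modulo 2^m - 1 */
    }

    int main(void)
    {
        build_tables();

        int s = A[3] ^ A[5];                   /* alpha^3 + alpha^5: addition is XOR */
        int t = gf_mul(gf_mul(s, s), s);       /* cube via two multiplications       */
        int gamma = gf_mul(A[1], t);           /* multiply by alpha                  */

        printf("alpha^3 + alpha^5 = alpha^%d\n", L[s]);  /* prints alpha^2             */
        printf("gamma             = %d\n", gamma);       /* prints 1, i.e. alpha^0 = 1 */
        return 0;
    }

For larger fields the same two arrays, indexed modulo 2^m − 1, are all that is needed, which is why the log and antilog tables are the workhorses of software BCH and Reed-Solomon decoders.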
More properties of GF(2^m)

The minimal polynomial φ_i(x) of an element α^i is the smallest-degree polynomial that has α^i as a root. The following properties of minimal polynomials can be shown. The minimal polynomial φ_i(x) has binary coefficients and is irreducible over GF(2) = {0, 1}. Moreover, φ_i(x) has roots α^i, α^{2i}, ..., α^{2^{κ−1} i}, where κ divides m. These elements are known as the conjugate elements of α^i in GF(2^m). The powers of the conjugate elements form a cyclotomic coset (see MacWilliams and Sloane (1977), p. 104, and von zur Gathen and Gerhard (1999), p. 391),

  C_i = { i, 2i, 4i, ..., 2^{κ−1} i }.

Cyclotomic cosets (also called cycle sets in Peterson and Weldon (1972), p. 209) have the property that they partition the set I_{2^m−1} of integers modulo 2^m − 1. In other words, cyclotomic cosets are disjoint, that is, their intersection is the empty set,

  C_i ∩ C_j = ∅,   i ≠ j,

and the union of all cyclotomic cosets equals I_{2^m−1}.

Example 3.2.5 The cyclotomic cosets modulo 7 are:

  C_0 = {0}
  C_1 = {1, 2, 4}
  C_3 = {3, 6, 5}

The primitive element α of GF(2^m) satisfies α^{2^m−1} = 1, and all nonzero elements can be expressed as powers of α. From this it follows that the polynomial x^{2^m−1} + 1 factors over the binary field into a product of minimal polynomials, one for each cyclotomic coset,

  x^{2^m−1} + 1 = ∏_j φ_j(x),

and splits completely over GF(2^m) as

  x^{2^m−1} + 1 = ∏_{j=0}^{2^m−2} (x + α^j).   (3.9)

The order n_i of an element β = α^i of GF(2^m) is the smallest positive integer such that β^{n_i} = 1. The order n_i of every nonzero element of GF(2^m) divides 2^m − 1.

Importantly, the degree of a minimal polynomial φ_i(x) is equal to the cardinality (number of elements) of the cyclotomic coset C_i,

  deg[φ_i(x)] = |C_i|.

This suggests the following method for finding all the factors of x^{2^m−1} + 1:

1. Generate the cyclotomic cosets modulo 2^m − 1.
2. For each cyclotomic coset C_s, compute the minimal polynomial φ_s(x) as the product of the linear factors (x + α^{i_s}), where i_s ∈ C_s:

     φ_s(x) = ∏_{i_s ∈ C_s} (x + α^{i_s}).   (3.10)

This method can be used to compute the generator polynomial of any cyclic code of length n = 2^m − 1. It is used in the computer simulation programs for BCH codes available on the ECC web site, to compute the generator polynomial given the zeros of the code.

Example 3.2.6 Consider GF(2^3) with p(x) = x^3 + x + 1. The roots of each of the factors of the polynomial x^7 + 1 are shown in the following table. The reader is invited to verify that the products of the linear factors in (3.10) do give the resulting binary polynomials.

  C_s               Conjugate elements   Minimal polynomial, φ_s(x)
  C_0 = {0}         1                    φ_0(x) = x + 1
  C_1 = {1, 2, 4}   α, α^2, α^4          φ_1(x) = x^3 + x + 1
  C_3 = {3, 6, 5}   α^3, α^6, α^5        φ_3(x) = x^3 + x^2 + 1

3.3 Binary BCH codes

BCH codes are cyclic codes that are constructed by specifying their zeros, that is, the roots of their generator polynomials:

  A BCH code of d_min ≥ 2t_d + 1 is a cyclic code whose generator polynomial g(x) has 2t_d consecutive roots α^b, α^{b+1}, ..., α^{b+2t_d−1}.

Therefore, a binary BCH (n, k, d_min) code has a generator polynomial

  g(x) = LCM{ φ_b(x), φ_{b+1}(x), ..., φ_{b+2t_d−1}(x) },

length n = LCM{ n_b, n_{b+1}, ..., n_{b+2t_d−1} }, and dimension k = n − deg[g(x)]. A binary BCH code has a designed minimum distance equal to 2t_d + 1. It should be noted, however, that its true minimum distance may be larger.

Example 3.3.1 With GF(2^3), p(x) = x^3 + x + 1, t_d = 1 and b = 1, the polynomial

  g(x) = LCM{ φ_1(x), φ_2(x) } = x^3 + x + 1

generates a binary BCH (7, 4, 3) code. (This is actually a binary cyclic Hamming code!) Note that the Hamming weight of g(x) is equal to 3, so that, in this case (but not always), the designed distance is equal to the true minimum distance of the code.

Example 3.3.2 Consider GF(2^4), p(x) = x^4 + x + 1, with t_d = 2 and b = 1. Then

  g(x) = LCM{ φ_1(x), φ_3(x) } = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1) = x^8 + x^7 + x^6 + x^4 + 1

generates a double-error-correcting binary BCH (15, 7, 5) code.
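The two-step procedure above, together with Example 3.3.2, can be verified with a short program: generate the cyclotomic coset of s modulo 15, multiply the linear factors (x + α^{i_s}) over GF(2^4), and finally multiply the resulting binary minimal polynomials over GF(2). The C sketch below does this for GF(2^4) with p(x) = x^4 + x + 1; it is an illustration of the method, not the program from the ECC web site.

    #include <stdio.h>

    #define M    4
    #define Q    (1 << M)
    #define NQ   (Q - 1)
    #define PRIM 0x13                          /* p(x) = x^4 + x + 1 */

    static int A[Q], L[Q];                     /* antilog and log tables of GF(2^4) */

    static void build_tables(void)
    {
        int beta = 1;
        L[0] = -1;
        for (int i = 0; i < NQ; i++) {
            A[i] = beta; L[beta] = i;
            beta <<= 1;
            if (beta & Q) beta ^= PRIM;
        }
    }

    static int gf_mul(int a, int b)
    {
        return (a && b) ? A[(L[a] + L[b]) % NQ] : 0;
    }

    /* Minimal polynomial of alpha^s: the product of (x + alpha^i) over the
     * cyclotomic coset of s.  Returned as a bitmask, bit k = coefficient of x^k. */
    static unsigned min_poly(int s)
    {
        int c[M + 2] = { 1 };                  /* coefficients in GF(2^4); c(x) = 1 */
        int deg = 0, i = s;
        do {                                   /* walk the coset {s, 2s, 4s, ...} mod 15 */
            for (int k = deg + 1; k > 0; k--)  /* multiply c(x) by (x + alpha^i) */
                c[k] = c[k - 1] ^ gf_mul(c[k], A[i]);
            c[0] = gf_mul(c[0], A[i]);
            deg++;
            i = (2 * i) % NQ;
        } while (i != s);
        unsigned phi = 0;                      /* the coefficients are now 0 or 1 */
        for (int k = 0; k <= deg; k++)
            if (c[k]) phi |= 1u << k;
        return phi;
    }

    /* carry-less product of two binary polynomials */
    static unsigned poly_mul(unsigned a, unsigned b)
    {
        unsigned p = 0;
        for (int k = 0; (a >> k) != 0; k++)
            if ((a >> k) & 1) p ^= b << k;
        return p;
    }

    static void print_poly(const char *name, unsigned p)
    {
        printf("%s:", name);
        for (int k = 15; k >= 0; k--)
            if ((p >> k) & 1) printf(" x^%d", k);
        printf("\n");
    }

    int main(void)
    {
        build_tables();
        unsigned phi1 = min_poly(1);           /* x^4 + x + 1             */
        unsigned phi3 = min_poly(3);           /* x^4 + x^3 + x^2 + x + 1 */
        print_poly("phi_1(x)", phi1);
        print_poly("phi_3(x)", phi3);
        print_poly("g(x) = phi_1(x) phi_3(x)", poly_mul(phi1, phi3));
        return 0;
    }

The last line lists the terms x^8, x^7, x^6, x^4 and x^0, that is, g(x) = x^8 + x^7 + x^6 + x^4 + 1, the generator polynomial of the BCH (15, 7, 5) code of Example 3.3.2; calling min_poly(5) as well and multiplying in φ_5(x) = x^2 + x + 1 yields the generator polynomial of the (15, 5, 7) code used in the next example.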
Example 3.3.3 With GF(2 4 ), p(x) = x 4 + x +1, t d = 3 and b = 1, the polynomial ¯g(x) = LCM{φ 1 (x), φ 3 (x)φ 5 (x)} = (x 4 + x +1)(x 4 + x 3 + x 2 + x +1) (x 2 + x +1) = x 10 + x 8 + x 5 + x 4 + x 2 + x +1 generates a triple-error-correcting binary BCH (15,5,7) code. BINARY CYCLIC CODES AND BCH CODES 53 3.3.1 BCH bound A lower bound on the minimum distance of a BCH code, known as the BCH bound,is derived next. This is useful not only to estimate the error-correcting capabilities of cyclic codes in general, but also to point out particular features of BCH codes. Note that the elements α b ,α b+1 , ,α b+2t d −1 are roots of the generator polynomial ¯g(x), and that every code word ¯v in the BCH code is associated with a polynomial ¯v(x), which is a multiple of ¯g(x). It follows that ¯v(x) ∈ C ⇐⇒ ¯v(α i ) = 0,b≤ i<b+2t d . (3.11) Code word ¯v then satisfies the following set of 2t d equations, expressed in matrix form, based on (3.11):         11··· 1 α b α b+1 ··· α b+2t d −1 (α b ) 2 (α b+1 ) 2 ··· (α b+2t d −1 ) 2 . . . . . . . . . . . . (α b ) n−1 (α b+1 ) n−1 (α b+2t d −1 ) n−1          v 0 v 1 v 2 ··· v n−1  = ¯ 0. (3.12) Consequently, the parity-check matrix of a binary cyclic BCH code is given by H =       1 α b (α b ) 2 ··· (α b ) n−1 1 α b+1 (α b+1 ) 2 ··· (α b+1 ) n−1 . . . . . . . . . . . . . . . 1 α b+2t d −1 (α b+2t d −1 ) 2 ··· (α b+2t d −1 ) n−1       (3.13) This matrix H has the characteristic that every 2t d × 2t d submatrix (formed by an arbitrary set of 2t d columns of H )isaVandermonde matrix (see, e.g., von zur Gathen and Gerhard (1999), p. 95). Therefore (see Section 2.1), any 2t d columns of H are linearly independent, from which it follows that the minimum distance of the code is d ≥ 2t d + 1. (Peterson and Weldon (1972) p. 270, MacWilliams and Sloane (1977) p. 201, and Lin and Costello (2005) p. 149.) Another interpretation of the results above is the following: BCH bound If the generator polynomial ¯g(x) of a cyclic (n,k,d) code has  consecutive roots, say α b ,α b+1 , ,α b+−1 ,thend ≥ 2 + 1. 3.4 Polynomial codes The important class of cyclic polynomial codes includes cyclic RM codes, BCH and Reed–Solomon codes, and finite-geometry codes (Kasami et al. 1968; Lin and Costello 2005; Peterson and Weldon 1972). Polynomial codes are also specified by setting conditions on their zeros: Let α be a primitive element of GF(2 ms ).Lets be a positive integer, and b a divisor of 2 s − 1.Thenα h is a root of the generator polynomial g(x) of a µ-th order polynomial code if and only if b divides h and min 0≤<s W 2 s (h2  ) = jb, with 0 <j <  m b  − µ, [...]... processing elements to accomplish the following tasks: • Compute the syndromes,7 by evaluating the received polynomial at the zeros of the code Si = r (α i ), ¯ i = b, b + 1, , b + 2td − 1 (3. 17) • Find the coefficients of the error- locator polynomial σ (x) • Find the inverses of the roots of σ (x), that is, the locations of the errors, α j1 , , α jν • Find the values of the errors ej1 , , ejν (Not... erasure.10 Declaring erasures is the simplest form of soft-decision, which will be the focus of attention in Chapter 7 Introduction of erasures has the advantage, with respect to errors-only decoding, that the positions are known to the decoder Let d be the minimum distance of a code, ν be the number of errors and µ be the number of erasures contained in a received word Then, the minimum Hamming distance... 
[...] However, in terms of the number of GF(2^m) operations, it is very efficient. This version of the algorithm is implemented in most of the C programs for simulating BCH codes that can be found on the ECC web site.

Example 3.5.1 Let C be the triple-error-correcting BCH (15, 5, 7) code of Example 3.3.3. As a reference, to check the numerical computations, the power and vector representations of GF(2^4), with primitive [...]

Figure 3.11 Union bounds on the BER for extended BCH codes of length 32. (The plot shows the bit error rate versus Es/No in dB; the curves are labeled ub.32.31.02, ub.32.26.04, ub.32.21.06, ub.32.16.08, ub.32.11.12, ub.32.06.16 and ub.32.01.32, one for each extended BCH (32, k, d) code with the k and d given in the label.)

[...] (... + α^8 x^3 + α^4 x^4 + x^5 + α x^6)(α^{14} x + α^{13}) + α^8 x^5 + α^{12} x^4 + α^{11} x^3 + α^{13},

  r_2(x) = α^8 x^5 + α^{12} x^4 + α^{11} x^3 + α^{13},   q_2(x) = α^{14} x + α^{13},

and b_2(x) = b_0(x) + q_2(x) b_1(x) = α^{14} x + α^{13}.

• j = 3:

  S(x) = (α^8 x^5 + α^{12} x^4 + α^{11} x^3 + α^{13})(α^8 x + α^2) + α^{14} x^4 + α^3 x^3 + α^2 x^2 + α^{11} x,

  r_3(x) = α^{14} x^4 + α^3 x^3 + α^2 x^2 + α^{11} x,   q_3(x) = α^8 x + α^2,

and b_3(x) [...]

[...] decoding algorithm was first considered by Peterson (1974). A solution to the key equation (3.16) is to be found using standard techniques for solving a set of linear equations. This solution gives the coefficients of σ(x). The decoding problem is that the number of actual errors is unknown. Therefore, a guess has to be made as to the actual number of errors, ν, in the received word. Assume that not all the [...]

[...] appending, at the start of each code vector, an overall parity-check bit. The extended code of a cyclic code is no longer cyclic. Let H denote the parity-check matrix of the cyclic code; then the parity-check matrix of the extended code, denoted H_ext, is given by

  H_ext = | 1   1  ···  1 |
          | 0             |
          | ⋮       H     |   (3.29)
          | 0             |

Appendix A lists the weight distributions of all extended binary BCH codes of length up to 128. These data [...]

[...] Example 3.1.5),

  H = | 1 1 1 1 1 1 1 1 |
      | 0 1 1 1 0 1 0 0 |
      | 0 0 1 1 1 0 1 0 |
      | 0 0 0 1 1 1 0 1 |

It can be easily verified that A^{(ext)}(x) = 1 + 14x^4 + x^8. To compute the WDS of the binary cyclic Hamming (7, 4, 3) code, use (3.30) to obtain

  8 A_3 = 4 A_4^{(ext)},   4 A_4 = (8 − 4) A_3,   8 A_7 = 8 A_8^{(ext)},

so that A_3 = 7, A_4 = 7 and A_7 = 1.

3.6.1 Error performance evaluation

With knowledge of the WDS of a code C, the error performance [...]

[...] (... + αx + α^{12} x^2) + (1)(α^{13})^{−1} x^2 (1 + αx) = 1 + αx + α^7 x^2 + α^3 x^3, and

  d_5 = S_6 + S_5 σ_1^{(5)} + S_4 σ_2^{(5)} + S_3 σ_3^{(5)} = α + 1·α + α^4 (α^7) + α^8 (α^3) = 0.

• Iteration 6: i = 5, d_5 = 0, σ^{(6)}(x) = σ^{(5)}(x) = 1 + αx + α^7 x^2 + α^3 x^3, ℓ_6 = ℓ_5 = 3. End. Therefore σ(x) = 1 + αx + α^7 x^2 + α^3 x^3. That the odd-numbered iterations [...]

[...] A(x) = 2^{−n+k} (1 + x)^n B( (1 − x)/(1 + x) ),   (3.27)

which can also be expressed as

  B(x) = 2^{−k} (1 + x)^n A( (1 − x)/(1 + x) ).   (3.28)

Therefore, for high-rate codes, it is simpler to compute the WDS B(x) of the dual code and then use (3.27) to compute A(x). Alternatively, the WDS of a low-rate code is easy to compute and the WDS of the dual code can be obtained using (3.28). For some classes of codes, the trellis structure can be [...]
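The weight distribution statements above can be spot-checked by direct enumeration. Using the 4 × 8 parity-check matrix H quoted above for the extended Hamming (8, 4) code, the C sketch below lists all length-8 binary words in its null space and tallies their Hamming weights, reproducing A^{(ext)}(x) = 1 + 14x^4 + x^8; the bit masks simply encode the rows of that matrix.

    #include <stdio.h>

    /* Rows of the extended Hamming (8,4) parity-check matrix, written as
     * 8-bit masks with the leftmost matrix column mapped to bit 7. */
    static const unsigned h[4] = { 0xFF, 0x74, 0x3A, 0x1D };

    int main(void)
    {
        int A[9] = { 0 };                      /* A[w] = number of code words of weight w */

        for (unsigned v = 0; v < 256; v++) {
            int in_code = 1;
            for (int r = 0; r < 4; r++)        /* <row, v> must be 0 over GF(2) */
                in_code &= !__builtin_parity(v & h[r]);   /* GCC/Clang builtin  */
            if (in_code)
                A[__builtin_popcount(v)]++;    /* GCC/Clang builtin             */
        }

        for (int w = 0; w <= 8; w++)
            if (A[w]) printf("A_%d = %d\n", w, A[w]);
        return 0;
    }

The output is A_0 = 1, A_4 = 14 and A_8 = 1, in agreement with the weight distribution used above to recover A_3 = 7, A_4 = 7 and A_7 = 1 for the cyclic Hamming (7, 4, 3) code.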