[Figure 5.9: The modified state diagram of a memory-2 rate-1/2 convolutional code.]

Example 5.3.3 Continuing with Example 5.3.2, a modified state diagram for the computation of the CWES is shown in Figure 5.9. The equations are

$$\begin{pmatrix} 1 & -xz & -xz \\ -yz & 1 & 0 \\ 0 & -xyz & 1-xyz \end{pmatrix} \begin{pmatrix} \mu_1(x) \\ \mu_2(x) \\ \mu_3(x) \end{pmatrix} = \begin{pmatrix} 0 \\ x^2yz \\ 0 \end{pmatrix},$$

with $T(x,y,z) = x^2 z\,\mu_1(x)$. The solution is

$$T(x,y,z) = \frac{x^5yz^3}{1 - xyz(1+z)} = x^5yz^3 + x^6y^2z^4(1+z) + x^7y^3z^5(1+z)^2 + \cdots$$

5.4 Performance bounds

Bounds on the bit error probability of a binary convolutional code of rate k/n can be obtained with the aid of the CWES described in the previous section. For transmission over a binary symmetric channel (BSC), and for binary transmission over an additive white Gaussian noise (AWGN) channel (Viterbi and Omura 1979), with maximum-likelihood decoding (MLD) [6], the following upper bounds hold, respectively:

$$P_b < \frac{1}{k} \left.\frac{\partial T(x,y,z)}{\partial y}\right|_{x=\sqrt{4p(1-p)},\,y=1,\,z=1}, \qquad (5.16)$$

$$P_b < \frac{1}{k} \left.\frac{\partial T(x,y,z)}{\partial y}\right|_{x=e^{-RE_b/N_0},\,y=1,\,z=1}, \qquad (5.17)$$

where the energy-per-bit-to-noise ratio E_b/N_0 is related to the energy-per-symbol-to-noise ratio E_s/N_0 via the code rate R = k/n as follows:

$$\frac{E_b}{N_0} = \frac{1}{R}\,\frac{E_s}{N_0}. \qquad (5.18)$$

[6] An example of an MLD algorithm is the Viterbi decoder, explained in the next section.

Union bounds may be used to estimate the bit-error rate (BER) performance of a convolutional code. The reader should be aware, however, that the bounds in Equations (5.16) and (5.17) are quite loose. Fortunately, tighter bounds (close to the actual BER performance) exist under relatively mild channel conditions, that is, low values of p for the BSC and high values of E_s/N_0 for the AWGN channel. They are given by the following (Johannesson and Zigangirov 1999):

$$P_b < \frac{1}{k} \left[ \frac{1+x}{2}\,\frac{\partial T(x,y,z)}{\partial y} + \frac{1-x}{2}\,\frac{\partial T(-x,y,z)}{\partial y} \right]_{x=\sqrt{4p(1-p)},\,y=1,\,z=1}, \qquad (5.19)$$

$$P_b < \frac{1}{k}\, Q\!\left(\sqrt{2Rd_f\frac{E_b}{N_0}}\right) e^{Rd_f E_b/N_0} \left.\frac{\partial T(x,y,z)}{\partial y}\right|_{x=e^{-RE_b/N_0},\,y=1,\,z=1}. \qquad (5.20)$$

The application of these bounds is illustrated in a numerical example (Example 5.5.4) after the Viterbi algorithm is discussed in the next section.

Example 5.4.1 The bounds in Equations (5.16) and (5.19) on the probability of a bit error P_b for the 4-state rate-1/2 convolutional code of Example 5.3.3, with transmission over a BSC with crossover probability p and MLD, are plotted in Figure 5.10. Evident from the figure is the fact that the bound in Equation (5.19) is tighter than the bound in Equation (5.16).

[Figure 5.10: Bounds in Equations (5.16) and (5.19) on the BER of a memory-2 rate-1/2 convolutional code with d_f = 5. Transmission over a BSC with crossover probability p and MLD.]

5.5 Decoding: Viterbi algorithm with Hamming metrics

The trellis of convolutional codes has a regular structure, and it is possible to take advantage of its repetitive pattern in decoding. However, for the linear block codes obtained by terminating convolutional codes, and for long information sequences, MLD that scores every possible coded sequence individually is simply too complex and inefficient to implement. An efficient solution to the decoding problem is a dynamic programming algorithm known as the Viterbi algorithm, executed by the Viterbi decoder (VD).
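For the code of Example 5.3.3, the derivative needed in these bounds has a closed form: differentiating the CWES above with respect to y and setting y = z = 1 gives ∂T/∂y = x^5/(1 − 2x)^2, which converges only for x < 1/2 (i.e., for p below about 0.067 on the BSC). The following sketch, offered as an illustration rather than as code from the text, evaluates the bounds (5.16) and (5.19) with this closed form (k = 1 for this rate-1/2 code):

```python
# Bounds (5.16) and (5.19) on the BSC for the memory-2 rate-1/2 code
# of Example 5.3.3, using dT/dy at y = z = 1, which equals x^5/(1-2x)^2.
from math import sqrt

def dTdy(x):
    """Derivative of the CWES w.r.t. y at y = z = 1; needs |x| < 1/2."""
    return x**5 / (1.0 - 2.0 * x)**2

def bound_5_16(p):
    x = sqrt(4.0 * p * (1.0 - p))   # Bhattacharyya parameter of the BSC
    return dTdy(x)                  # k = 1 for this code

def bound_5_19(p):
    x = sqrt(4.0 * p * (1.0 - p))
    return 0.5 * (1.0 + x) * dTdy(x) + 0.5 * (1.0 - x) * dTdy(-x)

for p in (1e-3, 1e-2):
    print(f"p = {p:.0e}:  (5.16) -> {bound_5_16(p):.3e}   (5.19) -> {bound_5_19(p):.3e}")
```

As in Figure 5.10, the second bound is visibly tighter; both blow up as p approaches the radius of convergence of the CWES.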
This is a maximum-likelihood decoder in the sense that it finds the coded sequence v̄ closest to the received sequence r̄ by processing the sequences on a bit-by-bit basis, over the branches of the trellis. In other words, instead of keeping a score for each possible coded sequence, the VD tracks the states of the trellis.

5.5.1 Maximum-likelihood decoding and metrics

The likelihood of a received sequence R̄ after transmission over a noisy memoryless channel, given that a coded sequence V̄ is sent, is defined as the conditional probability density function

$$p_{\bar{R}|\bar{V}}(\bar{r}|\bar{v}) = \prod_{i=0}^{n-1} P(r_i|v_i), \qquad (5.21)$$

where V̄ and R̄ are the transmitted and received sequences, respectively. It is easy to show that for a BSC with parameter p,

$$p_{\bar{R}|\bar{V}}(\bar{r}|\bar{v}) = \prod_{i=0}^{n-1} (1-p) \left(\frac{p}{1-p}\right)^{d_H(r_i,v_i)}, \qquad (5.22)$$

with d_H(r_i, v_i) = 1 if r_i ≠ v_i, and d_H(r_i, v_i) = 0 if r_i = v_i. That is, d_H(r_i, v_i) is the Hamming distance between the bits r_i and v_i. For an AWGN channel, the likelihood is given by

$$p_{\bar{R}|\bar{V}}(\bar{r}|\bar{v}) = \prod_{i=0}^{n-1} \frac{1}{\sqrt{\pi N_0}}\, e^{-\frac{1}{N_0}\left(r_i - m(v_i)\right)^2}, \qquad (5.23)$$

where m(·) denotes a binary modulated signal. Here, m is defined as a one-to-one mapping between the bits {0, 1} and the real numbers {−√E_s, +√E_s}, where E_s is the energy per symbol. This mapping is also known as binary phase-shift keying (BPSK) modulation, or polar mapping.

An MLD selects the coded sequence v̄ that maximizes Equation (5.21). By taking the logarithm of Equation (5.21), the following can be shown. For the BSC, MLD is equivalent to choosing the code sequence that minimizes the Hamming distance

$$d_H(\bar{r}, \bar{v}) = \sum_{i=0}^{n-1} d_H(r_i, v_i). \qquad (5.24)$$

Similarly, for the AWGN channel, it is the squared Euclidean distance

$$d_E(\bar{r}, \bar{v}) = \sum_{i=0}^{n-1} \left(r_i - m(v_i)\right)^2 \qquad (5.25)$$

that is minimized by the coded sequence selected by the MLD. In this section, a BSC with crossover error probability p is considered. The AWGN channel is covered in Chapter 7.
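The equivalence between maximizing the likelihood (5.22) and minimizing the Hamming distance (5.24) can be checked directly. The following sketch (an illustration written for this discussion, not part of the ECC web site software) ranks all binary words of length 6 both ways, using exact rational arithmetic so that ties are handled identically:

```python
# Check that, on a BSC with p < 1/2, ranking candidate words by the
# likelihood (5.22) (descending) matches ranking them by the Hamming
# distance (5.24) (ascending). Exact Fractions avoid rounding issues.
from fractions import Fraction
from itertools import product

def bsc_likelihood(r, v, p):
    """Equation (5.22): product of per-bit BSC transition probabilities."""
    like = Fraction(1)
    for ri, vi in zip(r, v):
        like *= p if ri != vi else 1 - p
    return like

def hamming(r, v):
    """Equation (5.24): number of positions in which r and v differ."""
    return sum(ri != vi for ri, vi in zip(r, v))

p = Fraction(1, 20)                 # crossover probability p = 0.05
r = (1, 0, 0, 1, 1, 0)              # an arbitrary received word
cands = list(product((0, 1), repeat=len(r)))
by_like = sorted(cands, key=lambda v: (-bsc_likelihood(r, v, p), v))
by_dist = sorted(cands, key=lambda v: (hamming(r, v), v))
assert by_like == by_dist           # identical rankings for all 64 words
print("maximum likelihood = minimum Hamming distance on this BSC")
```

The assertion holds for any 0 < p < 1/2, since the likelihood (5.22) is strictly decreasing in the Hamming distance; at p = 1/2 all words are equally likely, and for p > 1/2 the ordering reverses.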
5.5.2 The Viterbi algorithm

Let S_i^(k) denote a state in the trellis at stage i. Each state S_i^(k) is assigned a state metric, or simply a metric, M(S_i^(k)), and a path in the trellis, ȳ^(k). A key observation in applying the Viterbi algorithm is the following: with i denoting time, the most likely paths per state, ȳ_i^(k) (the ones closest to the received sequence), will eventually coincide at some earlier time i − Γ. In his paper, Viterbi (1967) indicates that for memory-m rate-1/2 binary convolutional codes this merge depth Γ should satisfy Γ > 5m. The VD operates within a range of L received n-tuples (output bits per state transition) known as the decoding depth. The value of L must be such that L > Γ.

In the following text, the Viterbi algorithm applied to a memory-m rate-1/n binary convolutional code is described, and its operation is illustrated via a simple example. Some additional notation is needed: let v̄[i] = (v_0[i] v_1[i] ... v_{n−1}[i]) denote the coded bits in a branch (state transition), and let r̄[i] = (r_0[i] r_1[i] ... r_{n−1}[i]) denote the corresponding output of the channel.

Basic decoding steps

Initialization: Set i = 0. Set all metrics and paths to M(S_0^(k)) = 0 and ȳ_0^(k) = () (empty). The specific way in which the paths are initialized is irrelevant, as shown later. For clarity of presentation, it is assumed that the paths are represented as lists that are initialized to the empty list.

1. Branch metric computation: At stage i, compute the partial branch metrics

$$BM_i^{(b)} = d_H(\bar{r}[i], \bar{v}[i]), \qquad b = \sum_{\ell=0}^{n-1} v_\ell[i]\, 2^{n-1-\ell}, \qquad (5.26)$$

associated with the n outputs v̄[i] of every branch (or state transition) and the n received bits r̄[i].

[Figure 5.11: Block diagram of a Viterbi decoder: branch metric generator (BMG), add-compare-select (ACS) unit, path and metric update, and traceback RAM.]

2. Add, compare and select (ACS): For each state S_i^(k), k = 0, 1, ..., 2^m − 1, and the corresponding pair of incoming branches from the two precursor states S_{i−1}^(k_1) and S_{i−1}^(k_2), the algorithm compares the extended path metrics M(S_{i−1}^(k_1)) + BM_i^(b_1) and M(S_{i−1}^(k_2)) + BM_i^(b_2), where

$$b_j = \sum_{\ell=0}^{n-1} v_\ell^{(k_j)}[i]\, 2^{n-1-\ell}, \qquad j = 1, 2,$$

selects the winning branch, that is, the one giving the smallest path metric, and updates the metric:

$$M(S_i^{(k)}) = \min\left\{ M(S_{i-1}^{(k_1)}) + BM_i^{(b_1)},\; M(S_{i-1}^{(k_2)}) + BM_i^{(b_2)} \right\}. \qquad (5.27)$$

3. Path memory update: For each state S_i^(k), k = 0, 1, ..., 2^m − 1, update the survivor path ȳ^(k) with the output v̄^(k_j), j ∈ {1, 2}, of the winning branch:

$$\bar{y}_i^{(k)} = \left( \bar{y}_{i-1}^{(k_j)},\, \bar{v}^{(k_j)} \right). \qquad (5.28)$$

4. Decode symbols: If i > L, output as estimated coded symbols the entry at position i − L of the path ȳ^(k′), where k′ is the index of the state S^(k′) with the smallest metric. Set i = i + 1 and go to decoding step 1.

It should be stressed that this is not the only way to implement the Viterbi algorithm; the procedure in the preceding text can be considered the classical algorithm, whose overall structure is shown in Figure 5.11. There are alternative implementations that, depending on the particular structure of the underlying convolutional encoder, may offer advantages (see, e.g., Fossorier and Lin (2000)). In addition, in the last step of the algorithm, decoding can be applied to the information bits directly. This is the form usually employed in the software implementations of VDs available on the ECC web site. In hardware implementations, a method based on a traceback memory is favored, which estimates the original information sequence indirectly, on the basis of the state transitions. This technique is discussed later in the chapter.

Example 5.5.1 Consider again the memory-2 rate-1/2 convolutional encoder with generators (7, 5). Note that d_f = 5 for this code. This example shows how a single error can be corrected. Suppose that v̄ = (11, 01, 01, 00, 10, 11) is transmitted over a BSC and that r̄ = (10, 01, 01, 00, 10, 11) is received (one error in the second position). The operation of the VD is illustrated in Figures 5.12 to 5.17. The evolution of the metric values with respect to the decoding stages is shown in the following table:

State/Stage   i = 0   i = 1   i = 2   i = 3   i = 4   i = 5   i = 6
S_i^(0)         0       1       1       1       1       2       1
S_i^(1)         0       0       0       1       2       1       3
S_i^(2)         0       1       1       1       1       2       2
S_i^(3)         0       0       1       1       2       2       2

After processing six stages of the trellis (i = 6), the state with the smallest metric is S_6^(0), with associated (survivor) path ȳ_6^(0) = v̄. One error has been corrected.
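The decoding steps above fit in a few dozen lines of code. The sketch below is written for the (7, 5) code of Example 5.5.1; it is an illustration, not the software from the ECC web site. For brevity it processes the whole received sequence and then reads the survivor of the best state once at the end, instead of emitting symbols at a sliding decoding depth L as in step 4, and it stores information bits rather than branch labels in the survivor paths (the practical choice noted later in the text). Its internal state numbering is a choice of the sketch, so intermediate per-state metrics need not line up with the rows of the table above.

```python
# A minimal hard-decision Viterbi decoder for the memory-2, rate-1/2
# convolutional code with generators (7, 5) (octal), as in Example 5.5.1.
G = (0b111, 0b101)    # generator polynomials g0 = 7, g1 = 5 (octal)
M = 2                 # encoder memory
NSTATES = 1 << M      # 4 trellis states

def branch_output(state, bit):
    """Coded pair (v0, v1) on the branch leaving `state` under input `bit`.
    `state` packs the two previous input bits, most recent bit in the MSB."""
    reg = (bit << M) | state
    return tuple(bin(reg & g).count("1") & 1 for g in G)

def viterbi_decode(received):
    """received: sequence of hard-decision pairs (r0, r1).
    Returns (decoded information bits, final state metrics)."""
    metrics = [0] * NSTATES                  # all-zero initialization
    paths = [[] for _ in range(NSTATES)]     # survivor paths (info bits)
    for r in received:
        new_metrics = [None] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            for bit in (0, 1):
                nxt = ((bit << M) | s) >> 1          # successor state
                v = branch_output(s, bit)
                bm = (r[0] ^ v[0]) + (r[1] ^ v[1])   # Hamming branch metric (5.26)
                cand = metrics[s] + bm
                # Add-compare-select (5.27): keep the smaller extended metric
                if new_metrics[nxt] is None or cand < new_metrics[nxt]:
                    new_metrics[nxt] = cand
                    new_paths[nxt] = paths[s] + [bit]   # path update (5.28)
        metrics, paths = new_metrics, new_paths
    best = min(range(NSTATES), key=lambda s: metrics[s])
    return paths[best], metrics

r = [(1, 0), (0, 1), (0, 1), (0, 0), (1, 0), (1, 1)]   # r of Example 5.5.1
bits, metrics = viterbi_decode(r)
print(bits)            # [1, 1, 0, 1, 0, 0]: re-encodes to 11 01 01 00 10 11
print(min(metrics))    # 1: the single channel error was corrected
```

Running it on r̄ above returns the information bits (1, 1, 0, 1, 0, 0), which re-encode to v̄ = (11, 01, 01, 00, 10, 11), with a winning metric of 1: the single channel error has been absorbed as one unit of Hamming branch metric.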
[Figure 5.12: VD operation for Example 5.5.1, at i = 0 and i = 1 (branch metric computation and ACS at the first stage).]
[Figure 5.13: VD operation for Example 5.5.1, at i = 2.]
[Figure 5.14: VD operation for Example 5.5.1, at i = 3.]
[Figure 5.15: VD operation for Example 5.5.1, at i = 4.]
[Figure 5.16: VD operation for Example 5.5.1, at i = 5.]
[Figure 5.17: VD operation for Example 5.5.1, at i = 6.]

5.5.3 Implementation issues

In this section, some of the implementation issues related to VDs are discussed. The techniques presented here apply equally to any VD that operates over channels with additive metrics, such as the BSC, AWGN and flat Rayleigh fading channels.

Path metric initialization

The VD can operate in the same mode from the start (i = 0). The survivor paths can be initialized with arbitrary values without affecting the decoder's performance; the first L decoded bits are therefore random and give no information. For this reason, the value of L contributes to the decoding delay and is also known as the decoding depth. Moreover, provided that L is large enough (L ≥ Γ, where Γ > 5m for rate-1/2 binary codes), the decoded bits can either be output from the path with the lowest metric or always be output from the zero-state path (ȳ^(0)). The latter method is easier to implement and does not result in a loss of performance. The programs on the ECC web site that implement MLD using the Viterbi algorithm work in this fashion.

Also note that in Example 5.5.1 the branch labels (outputs) were stored in the survivor paths. This was done in order to facilitate understanding of the algorithm. In a practical implementation, however, it is the corresponding information bits that are stored.
This is discussed in the following text, in connection with path memory management.

Synchronization

Branch symbols must be properly aligned with the received symbols. Any misalignment can be detected by monitoring the value of a random variable associated with the VD. Two commonly used synchronization variables are (1) the path metric growth and (2) an estimate of the channel BER. The statistics of these variables give an indication of abnormal decoding behavior. Assume that the received sequence is not properly aligned, that is, the n-bit branch labels v̄[i] in the decoder are not synchronized with the received sequence r̄[i].

Example 5.5.2 Figure 5.18 shows an example for a rate-1/2 code in which the received sequence r̄ is not synchronized with the reference coded sequence v̄.

[Figure 5.18: Example of branch misalignment in a Viterbi decoder: the received pairs are offset with respect to the reference pairs (v_0[i], v_1[i]), and one received symbol is skipped to restore alignment.]

[Figure 5.19: Channel error rate estimation for a BSC: the decoder output v̂ is compared (XOR) against a delayed copy of r̄ to drive a BER estimator.]

In other words, not all the bits in the received subsequence r̄[i] belong to the same trellis stage in the decoder. In this case, two events may occur: (1) the path metrics remain close to each other and grow rapidly, and (2) the estimated channel BER approaches 1/2. Figure 5.19 shows the block diagram of a VD together with a BER monitor. A synchronization stage needs to be added, external to the decoder itself, whose function is to advance the reference sequence v̄ in the decoder until the statistics return to normal. This can be done by skipping received symbols (a maximum of n − 1 times) until the synchronization variables indicate normal decoding behavior, as indicated in Figure 5.18 of Example 5.5.2 for the case of a rate-1/2 convolutional code.

Metric normalization

As the VD operates continuously, the path metrics grow proportionally to the length of the received sequence. To avoid overflow or saturation (depending on the number representation used), the metrics need to be normalized. There are basically two methods of doing this. Both rely on the following two properties of the Viterbi algorithm:

1. The MLD path selection depends only on the metric differences.
2. The metric differences are bounded.
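One way to exploit these two properties, sketched below under the assumption of non-negative integer (Hamming) metrics, is to subtract the smallest state metric from all state metrics whenever it grows past a threshold: by property 1 no ACS decision changes, and by property 2 the renormalized metrics stay within a fixed range. The threshold value here is an illustrative choice, not one prescribed by the text:

```python
# Metric normalization by subtraction. Subtracting the same constant
# from every state metric preserves all metric differences (property 1),
# and since the differences are bounded (property 2), the renormalized
# metrics stay within a fixed range, preventing overflow.
THRESHOLD = 1 << 12   # illustrative headroom for a fixed-point metric word

def normalize(metrics):
    """Shift all state metrics down once the smallest one grows large."""
    m_min = min(metrics)
    if m_min >= THRESHOLD:
        metrics = [m - m_min for m in metrics]
    return metrics
```

In a continuously running decoder, normalize would be called once per trellis stage, for example right after the ACS update in the sketch given after Example 5.5.1.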