Appendix 5.1 MAP Algorithm

We present a MAP decoding algorithm for the system depicted in Fig. 5.32. In order to simplify the analysis, the following description of the MAP algorithm is specific to (n, 1, m) binary convolutional codes, though it can easily be generalized to rate k/n convolutional codes, as well as to the decoding of block codes.

A binary message sequence, denoted by c, is given by

    c = (c_1, c_2, ..., c_t, ..., c_N)    (5.16)

where c_t is the message symbol at time t and N is the sequence length. The sequence is encoded by a linear code. In general the message symbols c_t can be nonbinary, but for simplicity we assume that they are independently generated binary symbols with equal a priori probabilities.

The encoding operation is modeled as a discrete-time finite-state Markov process. This process can be graphically represented by state and trellis diagrams. In response to the input c_t, the finite-state Markov process generates an output v_t and changes its state from S_t to S_{t+1}, where t + 1 is the next time instant. The process can be completely specified by the following two relationships:

    v_t = f(S_t, c_t, t)
    S_{t+1} = g(S_t, c_t, t)    (5.17)

The functions f(·) and g(·) are in general time-varying. The state sequence from time 0 to t is denoted by S_0^t and is written as

    S_0^t = (S_0, S_1, ..., S_t)    (5.18)

The state sequence is a Markov process, so that the probability P(S_{t+1} | S_0, S_1, ..., S_t) of being in state S_{t+1} at time t + 1, given all states up to time t, depends only on the state S_t at time t:

    P(S_{t+1} | S_0, S_1, ..., S_t) = P(S_{t+1} | S_t)    (5.19)

The encoder output sequence from time 0 to t is represented as

    v_0^t = (v_0, v_1, ..., v_t)    (5.20)

[Figure 5.32: System model. The message c is fed to the encoder, whose output v is modulated to x, transmitted over a memoryless channel with additive noise n, and received as r by the decoder.]

where

    v_t = (v_{t,0}, v_{t,1}, ..., v_{t,n-1})    (5.21)

is the code block of length n. The code sequence is modulated by a BPSK modulator.
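The pair of relationships in (5.17) can be sketched as a pair of lookup tables for f and g driving a state machine. The tables below are a hypothetical two-state example for illustration only, not the encoder of Fig. 5.33:

```python
# Generic finite-state encoder step per Eq. (5.17):
#   v_t = f(S_t, c_t),  S_{t+1} = g(S_t, c_t)
# OUTPUT and NEXT_STATE below are a made-up 2-state example.
OUTPUT = {  # f: (state, input bit) -> output block (v_{t,0}, v_{t,1})
    (0, 0): (0, 0), (0, 1): (1, 1),
    (1, 0): (0, 1), (1, 1): (1, 0),
}
NEXT_STATE = {  # g: (state, input bit) -> next state
    (0, 0): 0, (0, 1): 1,
    (1, 0): 1, (1, 1): 0,
}

def encode(bits):
    """Run the Markov-process encoder over a message sequence."""
    state, out = 0, []
    for c in bits:
        out.append(OUTPUT[(state, c)])
        state = NEXT_STATE[(state, c)]
    return out

print(encode([1, 0, 1]))  # [(1, 1), (0, 1), (1, 0)]
```

Because the next state and output depend only on the current state and input, the state sequence generated this way satisfies the Markov property (5.19) by construction.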
The modulated sequence is denoted by x_0^t and is given by

    x_0^t = (x_0, x_1, ..., x_t)    (5.22)

where

    x_t = (x_{t,0}, x_{t,1}, ..., x_{t,n-1})    (5.23)

and

    x_{t,i} = 2v_{t,i} − 1,  i = 0, 1, ..., n − 1    (5.24)

As there is a one-to-one correspondence between the code and modulated sequences, the encoder/modulator pair can be represented by a discrete-time finite-state Markov process and can be graphically described by state or trellis diagrams.

The modulated sequence is corrupted by additive white Gaussian noise, resulting in the received sequence

    r_1^τ = (r_1, r_2, ..., r_τ)    (5.25)

where

    r_t = (r_{t,0}, r_{t,1}, ..., r_{t,n-1})    (5.26)

and

    r_{t,i} = x_{t,i} + n_{t,i},  i = 0, 1, ..., n − 1    (5.27)

where n_{t,i} is a zero-mean Gaussian noise random variable with variance σ². The noise samples are assumed to be mutually independent.

The decoder gives an estimate of the input to the discrete finite-state Markov source by examining the received sequence. The decoding problem can alternatively be formulated as finding the modulated sequence or the coded sequence; as there is a one-to-one correspondence between them, once one has been estimated the other can be obtained by a simple mapping.

The discrete-time finite-state Markov source model is applicable to a number of systems in communications, such as linear convolutional and block coding, continuous phase modulation, and channels with intersymbol interference.

The MAP algorithm minimizes the symbol (or bit) error probability. For each transmitted symbol it generates a hard estimate and a soft output in the form of the a posteriori probability, on the basis of the received sequence r.
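The BPSK mapping of (5.24) and the AWGN channel of (5.27) are straightforward to simulate. A minimal sketch (the function names are ours):

```python
import random

def bpsk(v):
    """Map code bits to BPSK symbols per Eq. (5.24): x = 2v - 1."""
    return [2 * bit - 1 for bit in v]

def awgn(x, sigma, seed=None):
    """Add zero-mean Gaussian noise with variance sigma^2, Eq. (5.27)."""
    rng = random.Random(seed)
    return [xi + rng.gauss(0.0, sigma) for xi in x]

v = [0, 1, 1, 0]
x = bpsk(v)
print(x)              # [-1, 1, 1, -1]
r = awgn(x, sigma=0.5, seed=0)
```

Each noise sample is drawn independently, matching the independence assumption on n_{t,i} above.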
It computes the log-likelihood ratio

    Λ(c_t) = log [ Pr{c_t = 1 | r} / Pr{c_t = 0 | r} ]    (5.28)

for 1 ≤ t ≤ τ, where τ is the received sequence length, and compares this value to a zero threshold to determine the hard estimate ĉ_t as

    ĉ_t = 1 if Λ(c_t) > 0, and ĉ_t = 0 otherwise    (5.29)

The value Λ(c_t) represents the soft information associated with the hard estimate ĉ_t. It may be used in a subsequent decoding stage.

We assume that a binary sequence c of length N is encoded by a systematic convolutional code of rate 1/n. The encoding process is modeled by a discrete-time finite-state Markov process described by state and trellis diagrams with M_s states. We assume that the initial state is S_0 = 0 and the final state is S_τ = 0. The received sequence r is corrupted by zero-mean Gaussian noise with variance σ².

As an example, a rate 1/2 memory order 2 RSC encoder is shown in Fig. 5.33, and its state and trellis diagrams are illustrated in Figs. 5.34 and 5.35, respectively.

[Figure 5.33: A rate 1/2 memory order 2 RSC encoder]

[Figure 5.34: State transition diagram for the (2,1,2) RSC code; states {00, 01, 10, 11}, branches labeled input/output as c/v_0 v_1]

[Figure 5.35: Trellis diagram for the (2,1,2) RSC code, t = 0, 1, ..., 4, with the same branch labels as Fig. 5.34]

The content of the shift register in the encoder at time t represents S_t; it transits into S_{t+1} in response to the input c_{t+1}, giving as output the coded block v_{t+1}.
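The shift-register dynamics can be made concrete. The sketch below assumes the encoder of Fig. 5.33 has feedback polynomial 1 + D + D² and feedforward polynomial 1 + D²; this is our reading of the branch labels in Figs. 5.34 and 5.35 (it reproduces them), but it is an assumption about the figure:

```python
def rsc_step(state, c):
    """One step of a rate-1/2 memory-2 RSC encoder.

    state is the register content (s1, s0). Assumes feedback 1+D+D^2
    and feedforward 1+D^2, consistent with Figs. 5.34-5.35.
    Returns ((v0, v1), next_state)."""
    s1, s0 = state
    a = c ^ s1 ^ s0           # feedback bit entering the register
    v0, v1 = c, a ^ s0        # systematic bit and parity bit
    return (v0, v1), (a, s1)

# Branch quoted in the text: from state 00, input 1 gives output 11
# and moves to state 10 (the transition (0, 2) of set B_t^1).
out, nxt = rsc_step((0, 0), 1)
print(out, nxt)  # (1, 1) (1, 0)
```

Iterating rsc_step over an input sequence traces exactly one path through the trellis of Fig. 5.35.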
The state transitions of the encoder are shown in the state diagram. They are governed by the transition probabilities

    p_t(l | l′) = Pr{S_t = l | S_{t−1} = l′},  0 ≤ l, l′ ≤ M_s − 1    (5.30)

The encoder output is determined by the probabilities

    q_t(x_t | l′, l) = Pr{x_t | S_{t−1} = l′, S_t = l},  0 ≤ l, l′ ≤ M_s − 1    (5.31)

Because of the one-to-one correspondence between x_t and v_t we have

    q_t(x_t | l′, l) = Pr{v_t | S_{t−1} = l′, S_t = l},  0 ≤ l, l′ ≤ M_s − 1    (5.32)

For the encoder in Fig. 5.33, p_t(l | l′) is either 0.5, when there is a connection from S_{t−1} = l′ to S_t = l, or 0 when there is no connection, and q_t(x | l′, l) is either 1 or 0. For example, from Figs. 5.34 and 5.35 we have

    p_t(2 | 0) = p_t(1) = 0.5;  p_t(1 | 2) = p_t(1) = 0.5
    p_t(3 | 0) = 0;  p_t(1 | 3) = p_t(0) = 0.5

and

    q_t(−1, −1 | 0, 0) = 1
    q_t(−1, +1 | 0, 0) = 0
    q_t(+1, −1 | 0, 1) = 0
    q_t(+1, +1 | 0, 2) = 1    (5.33)

For a given input sequence

    c = (c_1, c_2, ..., c_N)

the encoding process starts in the initial state S_0 = 0 and produces an output sequence x_1^τ ending in the terminal state S_τ = 0, where τ = N + m. The input to the channel is x_1^τ and the output is r_1^τ = (r_1, r_2, ..., r_τ). The transition probabilities of the Gaussian channel are defined by

    Pr{r_1^τ | x_1^τ} = ∏_{j=1}^{τ} R(r_j | x_j)    (5.34)

where

    R(r_j | x_j) = ∏_{i=0}^{n−1} Pr(r_{j,i} | x_{j,i})    (5.35)

and

    Pr{r_{j,i} | x_{j,i} = −1} = (1 / (√(2π) σ)) exp( −(r_{j,i} + 1)² / (2σ²) )    (5.36)

    Pr{r_{j,i} | x_{j,i} = +1} = (1 / (√(2π) σ)) exp( −(r_{j,i} − 1)² / (2σ²) )    (5.37)

where σ² is the noise variance.

Let c_t be the information bit associated with the transition from S_{t−1} to S_t, producing the output v_t. The decoder gives an estimate of the input to the Markov source by examining r_1^τ. The MAP algorithm provides the log-likelihood ratio Λ(c_t), given the received sequence r_1^τ, as indicated in Eq. (5.28), where Pr{c_t = i | r_1^τ}, i = 0, 1, is the APP of the data bit c_t.
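The channel likelihood R(r_j | x_j) of (5.35) is simply a product of the per-symbol Gaussian densities (5.36) and (5.37). A direct sketch (function name is ours):

```python
import math

def channel_likelihood(r_block, x_block, sigma):
    """R(r_j | x_j) of Eq. (5.35): product of the per-symbol Gaussian
    pdfs (5.36)-(5.37) for a code block of length n."""
    p = 1.0
    for r, x in zip(r_block, x_block):
        p *= math.exp(-(r - x) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)
    return p

# Likelihood of receiving (0.9, -1.1) when (+1, -1) was sent, sigma = 1
print(channel_likelihood([0.9, -1.1], [1, -1], 1.0))
```

For a whole sequence, (5.34) multiplies these block likelihoods over j = 1, ..., τ; in practice one works with log-likelihoods to avoid underflow.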
The decoder makes a decision by comparing Λ(c_t) to a threshold equal to zero. We can compute the APPs in (5.28) as

    Pr{c_t = 0 | r_1^τ} = Σ_{(l′,l)∈B_t^0} Pr{S_{t−1} = l′, S_t = l | r_1^τ}    (5.38)

where B_t^0 is the set of transitions S_{t−1} = l′ → S_t = l caused by the input bit c_t = 0. For example, B_t^0 for the diagram in Fig. 5.35 consists of (3,1), (0,0), (1,2) and (2,3). Also

    Pr{c_t = 1 | r_1^τ} = Σ_{(l′,l)∈B_t^1} Pr{S_{t−1} = l′, S_t = l | r_1^τ}    (5.39)

where B_t^1 is the set of transitions S_{t−1} = l′ → S_t = l caused by the input bit c_t = 1. For the diagram in Fig. 5.35, B_t^1 consists of (0,2), (2,1), (3,3) and (1,0).

Equation (5.38) can be written as

    Pr{c_t = 0 | r_1^τ} = Σ_{(l′,l)∈B_t^0} Pr{S_{t−1} = l′, S_t = l, r_1^τ} / Pr{r_1^τ}    (5.40)

The APP of the decoded data bit c_t can be derived from the joint probability defined as

    σ_t(l′, l) = Pr{S_{t−1} = l′, S_t = l, r_1^τ},  l = 0, 1, ..., M_s − 1    (5.41)

so that Equation (5.40) can be written as

    Pr{c_t = 0 | r_1^τ} = Σ_{(l′,l)∈B_t^0} σ_t(l′, l) / Pr{r_1^τ}    (5.42)

Similarly, the APP for c_t = 1 is given by

    Pr{c_t = 1 | r_1^τ} = Σ_{(l′,l)∈B_t^1} σ_t(l′, l) / Pr{r_1^τ}    (5.43)

The log-likelihood ratio Λ(c_t) is then

    Λ(c_t) = log [ Σ_{(l′,l)∈B_t^1} σ_t(l′, l) / Σ_{(l′,l)∈B_t^0} σ_t(l′, l) ]    (5.44)

The log-likelihood ratio Λ(c_t) represents the soft output of the MAP decoder. It can be used as an input to another decoder in a concatenated scheme, or in the next iteration of an iterative decoder. In the final operation, the decoder makes a hard decision by comparing Λ(c_t) to a threshold equal to zero.
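The sums in (5.38) to (5.44) are computed efficiently by the forward and backward recursions developed in the remainder of this appendix. As a concrete end-to-end sketch, the following decodes the (2,1,2) RSC example; it again assumes generator polynomials (1, (1 + D²)/(1 + D + D²)) for the encoder of Fig. 5.33, which is our reading of the trellis branch sets quoted above:

```python
import math

# Trellis of the (2,1,2) RSC code, states 0..3 encoded as 2*s1 + s0.
# This reproduces B_t^0 = {(0,0),(1,2),(2,3),(3,1)} and
# B_t^1 = {(0,2),(2,1),(3,3),(1,0)} quoted in the text.
def step(state, c):
    s1, s0 = state >> 1, state & 1
    a = c ^ s1 ^ s0
    return (a << 1) | s1, (c, a ^ s0)          # next state, (v0, v1)

BRANCHES = [(l0, c, *step(l0, c)) for l0 in range(4) for c in (0, 1)]

def map_decode(r, sigma, p1=0.5):
    """MAP (BCJR) decoding: returns the LLRs of Eq. (5.44).
    r is a list of received blocks (r_{t,0}, r_{t,1}); the trellis is
    assumed to start and end in state 0."""
    tau = len(r)
    def gamma(t, c, v):                        # normalized branch metric
        x = [2 * b - 1 for b in v]             # BPSK mapping, Eq. (5.24)
        d2 = sum((ri - xi) ** 2 for ri, xi in zip(r[t], x))
        return (p1 if c else 1 - p1) * math.exp(-d2 / (2 * sigma ** 2))
    # forward recursion with boundary alpha_0(0) = 1
    alpha = [[0.0] * 4 for _ in range(tau + 1)]
    alpha[0][0] = 1.0
    for t in range(tau):
        for l0, c, l1, v in BRANCHES:
            alpha[t + 1][l1] += alpha[t][l0] * gamma(t, c, v)
        s = sum(alpha[t + 1]) or 1.0           # normalize against underflow
        alpha[t + 1] = [a / s for a in alpha[t + 1]]
    # backward recursion with boundary beta_tau(0) = 1
    beta = [[0.0] * 4 for _ in range(tau + 1)]
    beta[tau][0] = 1.0
    for t in range(tau - 1, -1, -1):
        for l0, c, l1, v in BRANCHES:
            beta[t][l0] += beta[t + 1][l1] * gamma(t, c, v)
        s = sum(beta[t]) or 1.0
        beta[t] = [b / s for b in beta[t]]
    # LLRs from the joint probabilities sigma_t(l', l)
    llr = []
    for t in range(tau):
        num = den = 1e-300
        for l0, c, l1, v in BRANCHES:
            sig = alpha[t][l0] * gamma(t, c, v) * beta[t + 1][l1]
            if c:
                num += sig
            else:
                den += sig
        llr.append(math.log(num / den))
    return llr

# Noiseless check: message bit 1 plus two termination bits (1, 1) drives
# the encoder 0 -> 2 -> 1 -> 0; the BPSK output is (1,1), (1,-1), (1,1).
llrs = map_decode([(1, 1), (1, -1), (1, 1)], sigma=0.5)
print([1 if l > 0 else 0 for l in llrs])  # [1, 1, 1]
```

Normalizing α and β at each step implements the overflow remark made in the algorithm summary: since the LLR is a ratio, any per-node scaling cancels.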
In order to compute the joint probability σ_t(l′, l) required for the calculation of Λ(c_t) in (5.44), we define the following probabilities:

    α_t(l) = Pr{S_t = l, r_1^t}    (5.45)

    β_t(l) = Pr{r_{t+1}^τ | S_t = l}    (5.46)

    γ_t^i(l′, l) = Pr{c_t = i, S_t = l, r_t | S_{t−1} = l′},  i = 0, 1    (5.47)

Now we can express σ_t(l′, l) as

    σ_t(l′, l) = α_{t−1}(l′) · β_t(l) · Σ_{i∈{0,1}} γ_t^i(l′, l)    (5.48)

The log-likelihood ratio Λ(c_t) can then be written as

    Λ(c_t) = log [ Σ_{(l′,l)∈B_t^1} α_{t−1}(l′) γ_t^1(l′, l) β_t(l) / Σ_{(l′,l)∈B_t^0} α_{t−1}(l′) γ_t^0(l′, l) β_t(l) ]    (5.49)

We can obtain α_t(l), defined in (5.45), recursively as

    α_t(l) = Σ_{l′=0}^{M_s−1} α_{t−1}(l′) · Σ_{i∈{0,1}} γ_t^i(l′, l)    (5.50)

for t = 1, 2, ..., τ. For t = 0 we have the boundary conditions α_0(0) = 1 and α_0(l) = 0 for l ≠ 0.

We can express β_t(l), defined in (5.46), as

    β_t(l) = Σ_{l′=0}^{M_s−1} β_{t+1}(l′) · Σ_{i∈{0,1}} γ_{t+1}^i(l, l′)    (5.51)

for t = τ − 1, ..., 1, 0. The boundary conditions are β_τ(0) = 1 and β_τ(l) = 0 for l ≠ 0.

For γ_t^i(l′, l), defined in (5.47), we can write

    γ_t^i(l′, l) = p_t(l | l′) · q_t(x | l′, l) · R(r_t | x_t)

We can further express γ_t^i(l′, l) as

    γ_t^i(l′, l) = p_t(i) exp( − Σ_{j=0}^{n−1} (r_{t,j} − x_{t,j}^i(l))² / (2σ²) )  for (l′, l) ∈ B_t^i
    γ_t^i(l′, l) = 0  otherwise

where p_t(i) is the a priori probability of c_t = i and x_{t,j}^i(l) is the encoder output associated with the transition S_{t−1} = l′ to S_t = l and input c_t = i. Note that the expression for R(r_t | x_t) is normalized by multiplying (5.35) by (√(2π) σ)^n.

Summary of the MAP Algorithm
1. Forward recursion

- Initialize α_0(l), l = 0, 1, ..., M_s − 1:
  α_0(0) = 1 and α_0(l) = 0 for l ≠ 0.
- For t = 1, 2, ..., τ, l = 0, 1, ..., M_s − 1 and all branches in the trellis, calculate

      γ_t^i(l′, l) = p_t(i) exp( −d²(r_t, x_t) / (2σ²) ),  i = 0, 1    (5.52)

  where p_t(i) is the a priori probability of each information bit and d²(r_t, x_t) is the squared Euclidean distance between r_t and the modulated symbol x_t in the trellis.
- For i = 0, 1 store γ_t^i(l′, l).
- For t = 1, 2, ..., τ and l = 0, 1, ..., M_s − 1 calculate and store α_t(l):

      α_t(l) = Σ_{l′=0}^{M_s−1} Σ_{i∈{0,1}} α_{t−1}(l′) γ_t^i(l′, l)    (5.53)

The graphical representation of the forward recursion is given in Fig. 5.36.

[Figure 5.36: Graphical representation of the forward recursion. α_t(l) is accumulated from α_{t−1}(l′) along the branches γ_t^0(l′, l) and γ_t^1(l′, l).]

[Figure 5.37: Graphical representation of the backward recursion. β_t(l) is accumulated from β_{t+1}(l′) along the branches γ_{t+1}^0(l, l′) and γ_{t+1}^1(l, l′).]

2. Backward recursion

- Initialize β_τ(l), l = 0, 1, ..., M_s − 1:
  β_τ(0) = 1 and β_τ(l) = 0 for l ≠ 0.
- For t = τ − 1, ..., 1, 0 and l = 0, 1, ..., M_s − 1 calculate β_t(l) as

      β_t(l) = Σ_{l′=0}^{M_s−1} Σ_{i∈{0,1}} β_{t+1}(l′) γ_{t+1}^i(l, l′)    (5.54)

  where γ_{t+1}^i(l, l′) was computed in the forward recursion.
- For t < τ calculate the log-likelihood ratio Λ(c_t) as

      Λ(c_t) = log [ Σ_{(l′,l)∈B_t^1} α_{t−1}(l′) γ_t^1(l′, l) β_t(l) / Σ_{(l′,l)∈B_t^0} α_{t−1}(l′) γ_t^0(l′, l) β_t(l) ]    (5.55)

The graphical representation of the backward recursion is shown in Fig. 5.37. Note that because Eq. (5.55) is a ratio, the values of α_t(l) and β_t(l) can be normalized at any node, which keeps them from overflowing.

If the final state of the trellis is not known, the probabilities β_τ(l) can be initialized as

    β_τ(l) = 1/M_s,  ∀l    (5.56)

Bibliography

[1] C. Berrou, A. Glavieux and P.
Thitimajshima, "Near Shannon limit error-correcting coding and decoding: turbo codes", Proc. Inter. Conf. Commun., 1993, pp. 1064–1070.

[2] G. Ungerboeck, "Channel coding with multilevel phase signals", IEEE Trans. Inform. Theory, vol. 28, Jan. 1982, pp. 55–67.

[3] P. Robertson and T. Worz, "Coded modulation scheme employing turbo codes", IEE Electronics Letters, vol. 31, no. 18, Aug. 1995, pp. 1546–1547.

[4] P. Robertson and T. Worz, "Bandwidth-efficient turbo trellis coded modulation using punctured component codes", IEEE Journal on Selected Areas in Communications, vol. 16, no. 2, pp. 206–218, Feb. 1998.

[5] S. Benedetto, D. Divsalar, G. Montorsi and F. Pollara, "Parallel concatenated trellis coded modulation", Proc. IEEE ICC'96, pp. 974–978.

[6] Y. Liu and M. Fitz, "Space-time turbo codes", 13th Annual Allerton Conf. on Commun., Control and Computing, Sept. 1999.

[7] Dongzhe Cui and A. Haimovich, "Performance of parallel concatenated space-time codes", IEEE Commun. Letters, vol. 5, June 2001, pp. 236–238.

[8] V. Tarokh, N. Seshadri and A. Calderbank, "Space-time codes for high data rate wireless communication: performance criterion and code construction", IEEE Trans. Inform. Theory, vol. 44, no. 2, March 1998, pp. 744–765.

[9] S. Baro, G. Bauch and A. Hansmann, "Improved codes for space-time trellis-coded modulation", IEEE Commun. Letters, vol. 4, Jan. 2000, pp. 20–22.

[10] J. C. Guey, M. Fitz, M. R. Bell and W. Y. Kuo, "Signal design for transmitter diversity wireless communication systems over Rayleigh fading channels", Proc. of IEEE VTC'96, pp. 136–140.

[11] Z. Chen, J. Yuan and B. Vucetic, "Improved space-time trellis coded modulation scheme on slow Rayleigh fading channels", IEE Electronics Letters, vol. 37, no. 7, March 2001, pp. 440–441.

[12] J. Yuan, B. Vucetic, Z. Chen and W. Firmanto, "Performance of space-time coding on fading channels", Proc. of Intl. Symposium on Inform. Theory (ISIT) 2001, Washington D.C., June 2001.

[13] L. R. Bahl, J.
Cocke, F. Jelinek and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate", IEEE Trans. Inform. Theory, vol. IT-20, pp. 284–287, Mar. 1974.

[14] B. Vucetic and J. Yuan, Turbo Codes: Principles and Applications, Kluwer Publishers, 2000.

[15] D. Tujkovic, "Recursive space-time trellis codes for turbo coded modulation", Proc. of GlobeCom 2000, San Francisco.

[16] E. Telatar, "Capacity of multi-antenna Gaussian channels", European Transactions on Telecommunications, vol. 10, no. 6, Nov./Dec. 1999, pp. 585–595.

[17] D. Divsalar, S. Dolinar and F. Pollara, "Low complexity turbo-like codes", Proc. of 2nd Int'l. Symp. on Turbo Codes and Related Topics, Brest, 2000, pp. 73–80.

[18] S. Y. Chung, T. Richardson and R. Urbanke, "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation", submitted to IEEE Trans. Inform. Theory.

[19] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, Fifth edition, Academic Press.

[20] S. ten Brink, "Convergence of iterative decoding", Electron. Lett., vol. 35, no. 13, pp. 806–808, May 1999.

[21] D. Divsalar, S. Dolinar and F. Pollara, "Iterative turbo decoder analysis based on density evolution", IEEE Journal on Selected Areas in Communications, vol. 9, pp. 891–907, May 2001.

[22] W. Firmanto, B. Vucetic, J. Yuan and Z. Chen, "Space-time turbo trellis coded modulation for wireless data communications", Eurasip Journal on Applied Signal Processing, vol. 2002, no. 5, May 2002, pp. 459–470.

[23] W. Firmanto, J. Yuan and B. Vucetic, "Turbo codes with transmit diversity: performance analysis and evaluation", IEICE Trans. Commun., vol. E85-B, no. 5, May 2002.

[...]
6 Layered Space-Time Codes

6.1 Introduction

Space-time trellis codes have a potential drawback: the maximum likelihood decoder complexity grows exponentially with the number of bits per symbol, thus limiting achievable data rates. Foschini [35] proposed a layered space-time (LST) architecture that can attain a tight lower bound on the [...] conventional decoding algorithms developed for one-dimensional (1-D) component codes, leading to much lower complexity compared to maximum likelihood decoding. The complexity of LST receivers grows linearly with the data rate. Though in the original proposal the number of receive antennas, denoted by n_R, is required to be equal to or greater than the number of transmit antennas, the use of more advanced detection/decoding [...] (MAP) methods are applied for decoding. A method which can significantly improve the performance of PIC detectors, called decision statistics combining, is also presented. The performance of various receiver structures is discussed and illustrated by simulation results.

Space-Time Coding, Branka Vucetic and Jinhong Yuan, © 2003 John Wiley & Sons, Ltd. ISBN: 0-470-84757-3

[Figure 6.1: A VLST architecture]

6.2 LST Transmitters

There is a number of LST architectures, depending on whether error control coding is used or not and on the way the modulated symbols are assigned to the transmit antennas. An uncoded LST structure, known as the vertical layered space-time (VLST) or vertical Bell Laboratories layered space-time (VBLAST) scheme [43], is illustrated in Fig. 6.1. The input information sequence, denoted by [...] channel encoder, interleaved, modulated and then transmitted by a particular transmit antenna. It is assumed that the channel encoders for the various layers are identical; however, different coding in each sub-stream can be used. A better performance is achieved by a diagonal layered space-time (DLST) architecture [35], in which a modulated codeword of each encoder is distributed [...]
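The VLST/VBLAST transmitter described above amounts to a serial-to-parallel demultiplexing of the modulated symbol stream over the transmit antennas. A minimal sketch of the layer-to-antenna assignment only (the round-robin mapping is our assumption for the uncoded VLST case; coding and interleaving are omitted):

```python
def vlst_demux(symbols, n_t):
    """Assign a modulated symbol stream to n_t transmit antennas,
    VLST-style: symbol k goes to antenna k mod n_t. Each row of the
    result is the sub-stream (layer) sent from one antenna."""
    return [symbols[i::n_t] for i in range(n_t)]

layers = vlst_demux([0, 1, 2, 3, 4, 5], 3)
print(layers)  # [[0, 3], [1, 4], [2, 5]]
```

A DLST transmitter would additionally rotate the layer-to-antenna assignment over time, so each codeword is spread across all antennas.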
[...] signal to remove its interference contribution, giving the received signal for level i − 1

    r̂_{i−1} = r̂_i − x̂_{t_i} h_i    (6.27)

where h_i is the ith column of the channel matrix H, corresponding to the path attenuations from antenna i. The operation x̂_{t_i} h_i in (6.27) replicates the interference contribution caused by x̂_{t_i} in the received vector; r̂_{i−1} is the received vector free from [...]

[...] matrix columns in the space-time domain. This simple transmission process can be combined with conventional block or convolutional one-dimensional codes to improve the performance of the system. The term "one-dimensional" refers to the space domain; these codes can be multidimensional in the time domain. The block diagrams of various LST architectures with error control coding are shown in Fig. [...]

[...] complexity of this detection algorithm is exponential in the number of transmit antennas. For coded LST schemes, the optimum receiver performs joint detection and decoding on an overall trellis obtained by combining the trellises of the layered space-time code and the channel code. The complexity of the receiver is an exponential function of the product of the number of transmit antennas and the code [...]

[...] are available, it is possible to remove

    n_i = n_R − d_o    (6.7)

interferers with diversity order d_o [9]. The diversity order can be expressed as

    d_o = n_R − n_i    (6.8)

If the interference suppression starts at layer n_T, then at this layer (n_T − 1) interferers need to be suppressed. Assuming that n_R = n_T, the diversity order in this layer, according to (6.7), is 1. In the first layer, there are no interferers to [...]

[...] i = 1, 2, ..., n_T    (6.16)

where q(x) denotes the hard decision on x. A QR factorization algorithm [7] is presented in Appendix 6.1.

Example 6.1 For a system with three transmit antennas, the decision statistics for the various layers can be expressed as

    y_t^1 = (R_{1,1})_t x_t^1 + (R_{1,2})_t x_t^2 + (R_{1,3})_t x_t^3 + n′_1    (6.17)

    y_t^2 = (R_{2,2})_t x_t^2 + (R_{2,3})_t x_t^3 + n′_2    (6.18)

    y_t^3 = (R_{3,3})_t x_t^3 + n′_3    (6.19)

The estimate on [...]
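One way to read Example 6.1: after a QR factorization H = QR, the decision statistics form a triangular system like (6.17) to (6.19), so the layers can be detected bottom-up, subtracting already-detected layers in the spirit of the interference cancellation step (6.27). A hypothetical sketch for real-valued BPSK symbols (function and variable names are ours):

```python
import numpy as np

def q_hard(x):
    """Hard decision q(x) for BPSK symbols."""
    return 1.0 if x >= 0 else -1.0

def qr_detect(H, r):
    """Detect n_T layers from r = H x + n via QR factorization.

    y = Q^T r gives a triangular system of the form (6.17)-(6.19);
    layers are estimated bottom-up, cancelling already-detected
    symbols (a sketch for real BPSK, not the book's exact algorithm)."""
    Q, R = np.linalg.qr(H)
    y = Q.T @ r
    n_t = H.shape[1]
    x_hat = np.zeros(n_t)
    for i in range(n_t - 1, -1, -1):
        interf = R[i, i + 1:] @ x_hat[i + 1:]   # already-detected layers
        x_hat[i] = q_hard((y[i] - interf) / R[i, i])
    return x_hat

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3))
x = np.array([1.0, -1.0, 1.0])
r = H @ x + 0.05 * rng.normal(size=4)
print(qr_detect(H, r))  # expect [ 1. -1.  1.] at this noise level
```

Because detection proceeds from the last layer upward, an erroneous hard decision propagates into the cancellation of the layers above it; this error propagation is the motivation for the ordering and combining refinements discussed in the chapter.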
