
Convolutional Coding




Structure: 38 slides

Contents

Convolutional Coding

Today we are going to talk about:

  • Another class of linear codes, known as convolutional codes.
  • The structure of the encoder and different ways of representing it: the state diagram and the trellis representation of the code.
  • What is a maximum-likelihood decoder?
  • How decoding is performed for convolutional codes (the Viterbi algorithm).
  • Soft decisions vs. hard decisions.

Convolutional codes

  • Convolutional codes offer an approach to error-control coding substantially different from that of block codes. A convolutional encoder:
      • encodes the entire data stream into a single codeword;
      • does not need to segment the data stream into blocks of fixed size (convolutional codes are often forced into a block structure by periodic truncation);
      • is a machine with memory.
  • This fundamental difference in approach imparts a different nature to the design and evaluation of the code:
      • Block codes are based on algebraic/combinatorial techniques.
      • Convolutional codes are based on construction techniques.

Convolutional codes (cont'd)

  • A convolutional code is specified by three parameters (n, k, K), or equivalently (k/n, K), where Rc = k/n is the coding rate, determining the number of data bits per coded bit.
  • In practice, typically k = 1 is chosen, and we assume that from now on.
  • K is the constraint length of the encoder, where the encoder has K − 1 memory elements.
  • There are different definitions of constraint length in the literature.

Block diagram of the DCS

  Information source → rate 1/n convolutional encoder → modulator → channel → demodulator → rate 1/n convolutional decoder → information sink
Notation for the sequences in the block diagram:

  • Input sequence: m = (m1, m2, …, mi, …)
  • Codeword sequence: U = G(m) = (U1, U2, …, Ui, …), where the i-th branch word Ui = (u1i, u2i, …, uni) carries the n coded bits produced for input bit i.
  • Received sequence: Z = (Z1, Z2, …, Zi, …), where Zi = (z1i, z2i, …, zni) collects the n demodulator outputs for branch word i.
  • Decoded sequence: m̂ = (m̂1, m̂2, …, m̂i, …)

A rate ½ convolutional encoder

  • Convolutional encoder (rate ½, K = 3).
  • Three shift-register stages, where the first takes the incoming data bit and the rest form the memory of the encoder.
  • Each input data bit produces two coded output bits, u1 (first coded bit) and u2 (second coded bit), which together form the branch word (u1, u2).

A rate ½ convolutional encoder (cont'd)

Encoding the message sequence m = (101), one branch word (u1 u2) per time step, with register contents shown newest bit first:

  t1: input 1, registers 100 → output 11
  t2: input 0, registers 010 → output 10
  t3: input 1, registers 101 → output 00
  t4: input 0 (tail), registers 010 → output 10
  t5: input 0 (tail), registers 001 → output 11
  t6: registers flushed to 000 → output 00

  Codeword: U = (11 10 00 10 11)

Effective code rate

  • Initialize the memory before encoding the first bit (all-zero state).
  • Clear out the memory after encoding the last bit (all-zero state).
  • Hence a tail of zero bits is appended to the data bits: data bits + tail → encoder → codeword.
  • Effective code rate: for L data bits, with k = 1 and Rc = 1/n assumed,

    R_eff = L / (n(L + K − 1)) < Rc

Encoder representation

  • Vector representation: we define n binary vectors with K elements each (one vector for each modulo-2 adder). The i-th element of a vector is "1" if the i-th stage of the shift register is connected to the corresponding modulo-2 adder, and "0" otherwise.
  • Example (the rate ½, K = 3 encoder above):

    g1 = (111)
    g2 = (101)

[...]
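The encoder just described, with the generator vectors g1 = (111) and g2 = (101), can be sketched in a few lines of Python (the function and variable names, and the newest-bit-first register ordering, are illustrative choices, not from the slides):

```python
from fractions import Fraction

def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2, K=3 convolutional encoder with a K-1 zero tail.

    Each generator vector marks which of the K register stages is
    connected to the corresponding modulo-2 adder.
    """
    K = len(g1)
    state = [0] * K                       # stage 0 holds the newest bit
    out = []
    for b in list(bits) + [0] * (K - 1):  # data bits + flushing tail
        state = [b] + state[:-1]          # shift the new bit in
        out.append(sum(s * g for s, g in zip(state, g1)) % 2)
        out.append(sum(s * g for s, g in zip(state, g2)) % 2)
    return out

U = conv_encode([1, 0, 1])
print(U)                                  # branch words 11 10 00 10 11
# Effective code rate R_eff = L / (n(L+K-1)) = 3/10 < Rc = 1/2
print(Fraction(3, len(U)))
```

Note how the effective rate follows directly from the output length: three data bits produce ten coded bits because the K − 1 tail bits are also encoded, which is exactly the R_eff = L / (n(L + K − 1)) loss described above.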
Soft and hard decisions (cont'd)

  • In soft decision:
      • The demodulator provides the decoder with some side information together with the decision.
      • The side information gives the decoder a measure of confidence in the decision.
      • The demodulator outputs, called soft bits, are quantized to more than two levels.
  • Decoding based on soft bits, using a Euclidean distance measure, is called "soft-decision decoding".
  • Decoding based on hard bits is called "hard-decision decoding".
  • On AWGN channels a gain of about 2 dB, and on fading channels a gain of about 6 dB, is obtained by using soft-decision instead of hard-decision decoding.

The Viterbi algorithm

  • The Viterbi algorithm performs maximum-likelihood decoding.
  • It finds a path through the trellis with the largest metric (maximum correlation or minimum distance). [...]
  • The path thus defined is unique and corresponds to the ML codeword.

Example of hard-decision Viterbi decoding

  m = (101)
  U = (11 10 00 10 11)
  Z = (11 10 11 10 01)

  • Label all the branches with the branch metric Γ(S(ti), ti), the Hamming distance between the received branch word and the branch word of that transition. [...]
  • (The original slides show the trellis diagrams for this example, with branch labels such as 0/00 and 1/11 on each transition and the accumulated path metrics at decoding steps i = 2, 3 and 4.)
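The branch-labelling step can be illustrated numerically. Below is a small sketch (hard decisions assumed) that computes the Hamming branch metrics between the received branch words of Z and the branch words of the transmitted codeword U from the example above:

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit tuples."""
    return sum(x != y for x, y in zip(a, b))

U = [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]  # transmitted branch words
Z = [(1, 1), (1, 0), (1, 1), (1, 0), (0, 1)]  # received (hard) branch words

# One branch metric per trellis section along the transmitted path
metrics = [hamming(u, z) for u, z in zip(U, Z)]
print(metrics, sum(metrics))  # [0, 0, 2, 0, 1] → path metric 3
```

In the full algorithm every branch of the trellis is labelled this way, not only the branches of the transmitted path; the decoder then searches for the path whose accumulated metric is smallest.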
Optimum decoding

  • If the input message sequences are equally likely, the optimum decoder, i.e. the one that minimizes the probability of error, is the maximum-likelihood (ML) decoder.
  • The ML decoder selects, among all possible codewords, the codeword that maximizes the likelihood p(Z | U^(m′)), where Z is the received sequence and U^(m′) is one of the possible codewords: 2^L codewords to search!
  • ML decoding rule: choose U^(m′) if p(Z | U^(m′)) = max over all U^(m) of p(Z | U^(m)).

ML decoding for memoryless channels

  • Due to the independent channel statistics of memoryless channels, the likelihood function factors into a product over branch words and coded bits:

    p(Z | U^(m)) = ∏_{i=1}^{∞} p(Zi | Ui^(m)) = ∏_{i=1}^{∞} ∏_{j=1}^{n} p(zji | uji^(m))

  • For soft decisions the likelihood maximization reduces to the correlation metric

    γ(U^(m)) = Σ_{i=1}^{∞} Σ_{j=1}^{n} zji · sji^(m) = ⟨Z, S^(m)⟩

    the inner product (correlation) between Z and S^(m). Maximizing the correlation is equivalent to minimizing the Euclidean distance.
  • ML decoding rule: choose the path with minimum Euclidean distance to the received sequence.

Soft and hard decisions

  • In hard decision, the demodulator makes a firm or hard decision on whether a one or a zero was transmitted. [...]

The encoder as a finite-state machine

  • A convolutional encoder is a finite-state machine: the machine only encounters a finite number of states.
  • State of a machine: the smallest amount of information that, together with a current input to the machine, can predict the output of the machine.
  • In a convolutional encoder, the state is represented by the content of the memory; hence there are 2^(K−1) states.

State diagram (cont'd)

  • A state diagram is a way to represent the encoder. [...]

The Viterbi algorithm

  • At each step, the Viterbi algorithm compares the metrics of all paths entering each state and keeps only the path with the largest metric, called the survivor, together with its metric.
  • It proceeds through the trellis by eliminating the least likely paths.
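The "2^L codewords to search" remark can be made concrete with a brute-force ML decoder for the rate ½, K = 3 code, using the Hamming distance as the hard-decision stand-in for the likelihood. The received sequence below is an assumed single-error example, not the one from the slides:

```python
from itertools import product

G1, G2 = (1, 1, 1), (1, 0, 1)            # generator vectors of the code

def encode(bits):
    """Rate-1/2, K=3 encoder with a K-1 = 2 zero tail."""
    state, out = [0, 0, 0], []
    for b in list(bits) + [0, 0]:
        state = [b] + state[:2]
        out += [sum(s * g for s, g in zip(state, G1)) % 2,
                sum(s * g for s, g in zip(state, G2)) % 2]
    return out

def ml_decode(Z, L):
    """Exhaustive ML decoding: try all 2**L messages and keep the
    codeword at minimum Hamming distance from the received sequence."""
    best = min(product([0, 1], repeat=L),
               key=lambda m: sum(z != u for z, u in zip(Z, encode(m))))
    return list(best)

# The codeword of m = (101) is 11 10 00 10 11; receive it with one
# flipped bit and decode:
Z = [1, 1, 1, 0, 0, 0, 1, 0, 0, 1]
print(ml_decode(Z, 3))                    # [1, 0, 1]
```

The search cost doubles with every extra data bit, which is exactly what motivates the Viterbi algorithm.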
  • The Viterbi algorithm reduces the decoding complexity to L · 2^(K−1), compared with the 2^L codewords of an exhaustive ML search.

The Viterbi algorithm (cont'd)

  Viterbi algorithm:

  A. Do the following set-up: for a data block of L bits, form the trellis. The trellis has L + K − 1 sections or levels and [...]
  B. [...]

Path metric

  • The log-likelihood path metric decomposes into branch metrics and bit metrics:

    Γ(U^(m)) = Σ_{i=1}^{∞} log p(Zi | Ui^(m)) = Σ_{i=1}^{∞} Σ_{j=1}^{n} log p(zji | uji^(m))

    where log p(Zi | Ui^(m)) is the branch metric and log p(zji | uji^(m)) is the bit metric.
  • The path metric up to time index i is called the partial path metric.
  • ML decoding rule: choose the path with maximum metric among all the paths in the trellis. This path is the "closest" path to the transmitted sequence.

Binary symmetric channels (BSC)

  (The original slides show the BSC transition diagram, with crossover probability p, relating the modulator input to the demodulator output.)
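The add-compare-select recursion described above can be sketched as follows, again for the rate ½, K = 3 code with hard decisions (minimum Hamming distance, the minimum-distance counterpart of the maximum-metric rule; all names here are illustrative):

```python
G1, G2 = (1, 1, 1), (1, 0, 1)

def branch_word(bit, state):
    """Branch word emitted when `bit` enters the state (s1, s2)."""
    regs = (bit, state[0], state[1])
    return (sum(r * g for r, g in zip(regs, G1)) % 2,
            sum(r * g for r, g in zip(regs, G2)) % 2)

def viterbi_decode(Z, L):
    """Hard-decision Viterbi decoding over L + K - 1 trellis sections.

    Per state we keep one survivor path and its partial path metric
    (accumulated Hamming distance); the last K - 1 inputs are forced
    to zero, so the winning path ends in the all-zero state.
    """
    paths = {(0, 0): (0, [])}            # state -> (metric, input bits)
    sections = [tuple(Z[2 * i: 2 * i + 2]) for i in range(L + 2)]
    for i, z in enumerate(sections):
        nxt = {}
        for s, (metric, bits) in paths.items():
            for b in ((0, 1) if i < L else (0,)):   # tail bits are 0
                ns = (b, s[0])                       # next state
                m = metric + sum(x != y
                                 for x, y in zip(z, branch_word(b, s)))
                if ns not in nxt or m < nxt[ns][0]:  # compare-select
                    nxt[ns] = (m, bits + [b])
        paths = nxt
    return paths[(0, 0)][1][:L]          # survivor in the all-zero state

# Same single-error received sequence as the exhaustive-search example:
Z = [1, 1, 1, 0, 0, 0, 1, 0, 0, 1]
print(viterbi_decode(Z, 3))              # [1, 0, 1]
```

Only 2^(K−1) = 4 survivors are kept per trellis section, so the work grows linearly in L rather than exponentially in it.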

Posted: 07/07/2014, 06:20
