VIETNAM NATIONAL UNIVERSITY HO CHI MINH CITY
UNIVERSITY OF SCIENCE

COURSE REPORT
DIGITAL COMMUNICATION
School year: 2022-2023

TURBO CODE

GROUP 4
Nguyen Phat Dat – 20207090
Pham Van Truong An – 20207015
Pham Van Nam Anh – 20207017
Dinh Huy Hoang – 20207092

CLASS: 20DTV-CLC-1
LECTURER: TRAN THI THAO NGUYEN

DIGITAL COMMUNICATION – 20dtvclc1 – GROUP 4

TABLE OF CONTENTS

I. THE CONCEPT OF TURBO CODES
  1.1 The concept of turbo codes
    1.1.1 Likelihood functions
    1.1.2 The case of two signal classes
    1.1.3 Log-likelihood ratio
    1.1.4 The principle of iterative decoding
  1.2 Log-likelihood algebra
    1.2.1 Two-dimensional single-parity code
    1.2.2 Product code
    1.2.3 Extrinsic likelihood
II. TURBO CODE STRUCTURE AND ITERATIVE TURBO DECODING ALGORITHM
  2.1 Introduction
  2.2 Encoder structure and iterative decoder
  2.3 Turbo decoding algorithm
    2.3.1 Overview of decoding algorithms
    2.3.2 MAP algorithm
    2.3.3 The principle of the soft-output Viterbi decoder
III. TURBO CODE APPLICATION IN MOBILE COMMUNICATIONS
  3.1 Limitations when applying Turbo codes to multimedia communications
  3.2 Transmission channel characteristics
  3.3 Recommendations when applying Turbo codes to multimedia communications
REFERENCES

I. THE CONCEPT OF TURBO CODES

The concatenated code scheme was first proposed by Forney as a method to strengthen coding by combining two or more simple component codes. The resulting codes have a much greater error-correcting ability than other error-correcting codes, and their structure allows the complex decoding task to be broken into lighter stages. Concatenated codes are mostly used in power-limited systems such as transmitters on deep-space probes.

1.1 The concept of turbo codes

1.1.1 Likelihood functions

The mathematical framework of hypothesis testing is based on Bayes' theorem.
For communications applications, the AWGN channel is of greatest interest; there, the most useful form of Bayes' theorem expresses the a posteriori probability (APP) of a decision in terms of a continuous-valued random variable x as:

P(d = i | x) = p(x | d = i) P(d = i) / p(x),  i = 1, ..., M   (1.1)

and

p(x) = sum over i = 1, ..., M of p(x | d = i) P(d = i)   (1.2)

where P(d = i | x) is the APP and d = i represents the data d belonging to the i-th signal class from a set of M classes. Further, p(x | d = i) is the probability density function (pdf) of the received continuous-valued data-plus-noise signal x, conditioned on the signal class d = i, and P(d = i), called the a priori probability, is the probability of occurrence of the i-th signal class.

1.1.2 The case of two signal classes

Let the binary logic levels 1 and 0 be represented by the voltage levels +1 and -1, respectively. The variable d is used to represent the transmitted data bit, either as a voltage level or as a logic element; sometimes one format is more convenient than the other, and the context makes clear which is in use. The binary zero (voltage level -1) is the null element under modulo-2 addition.

Figure 1.1: Likelihood functions

The maximum a posteriori (MAP) decision rule, which can be viewed as a minimum-probability-of-error rule, takes the a priori probabilities of the data into account. In terms of the APPs, the MAP rule is:

choose H1 (d = +1) if P(d = +1 | x) > P(d = -1 | x), otherwise choose H2 (d = -1)   (1.3)

Using Bayes' theorem of equation (1.1), the APPs in equation (1.3) can be replaced by equivalent expressions, yielding:

choose H1 if p(x | d = +1) P(d = +1) > p(x | d = -1) P(d = -1), otherwise choose H2   (1.4)

Here the pdf p(x) appears on both sides of the inequality (1.3), so it cancels. Equation (1.4) is usually expressed as a ratio, producing the so-called likelihood ratio test:

p(x | d = +1) / p(x | d = -1)  compared against the threshold  P(d = -1) / P(d = +1)   (1.5)
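The MAP rule of equations (1.3)-(1.4) can be sketched for the AWGN case as follows. This is an illustrative sketch, not code from the report; the Gaussian likelihoods and all parameter names are assumptions.

```python
import math

def gaussian_pdf(x, mean, sigma2):
    """p(x | d) for AWGN: a Gaussian centered on the transmitted level."""
    return math.exp(-(x - mean) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def map_decision(x, sigma2=1.0, p_plus=0.5):
    """MAP rule of equations (1.3)-(1.4): choose H1 (d = +1) when
    p(x|d=+1) P(d=+1) > p(x|d=-1) P(d=-1), otherwise H2 (d = -1)."""
    h1 = gaussian_pdf(x, +1.0, sigma2) * p_plus
    h2 = gaussian_pdf(x, -1.0, sigma2) * (1.0 - p_plus)
    return +1 if h1 > h2 else -1
```

With equal priors the rule reduces to the sign of x; a strongly biased prior, as the MAP rule predicts, can override a weakly negative observation.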
1.1.3 Log-likelihood ratio

A useful metric, the log-likelihood ratio (LLR), is obtained by taking the logarithm of the likelihood ratio developed in equations (1.3) through (1.5). It is a real number representing a soft decision output of the detector, defined as:

L(d | x) = ln[ P(d = +1 | x) / P(d = -1 | x) ]   (1.6)

so that:

L(d | x) = ln[ p(x | d = +1) / p(x | d = -1) ] + ln[ P(d = +1) / P(d = -1) ]   (1.7)

that is:

L(d | x) = L(x | d) + L(d)   (1.8)

Here L(x | d) is the LLR of the test statistic x, obtained by measuring the channel output x under the condition that d = +1 or d = -1 was transmitted, and L(d) is the a priori LLR of the data bit d.

1.1.4 The principle of iterative decoding

In a conventional receiver, the demodulator is usually designed to make soft decisions that are then passed to the decoder. The error-performance improvement of systems using such soft decisions, compared with hard decisions, is estimated at about 2 dB in AWGN. Such a decoder can be called a soft-input/hard-output decoder, because the final decoding process at the decoder output must terminate in bits (hard decisions). With turbo codes, several component decoders cooperate, so each of them should deliver soft outputs as well. We illustrate the soft-input/soft-output (SISO) decoder for a systematic code; its soft output is fed back as a priori information for the next iteration.

Figure 1.2: Soft-input/soft-output decoder (for a systematic code)

1.2 Log-likelihood algebra

To best explain the iterative feedback of soft decoder outputs, we use the concept of log-likelihood algebra. For statistically independent data d, the sum of two log-likelihood ratios (LLRs) is defined as:

L(d1) ⊞ L(d2) = L(d1 ⊕ d2) = ln[ (e^L(d1) + e^L(d2)) / (1 + e^(L(d1) + L(d2))) ]   (1.9)

which is well approximated by:

L(d1) ⊞ L(d2) ≈ (-1) × sgn[L(d1)] × sgn[L(d2)] × min(|L(d1)|, |L(d2)|)   (1.10)

Here d1 and d2 are statistically independent data bits taking the values +1 and -1, corresponding to the logic levels 1 and 0, and ⊕ denotes modulo-2 addition. With this convention, d1 ⊕ d2 yields -1 (logic 0) when d1 and d2 have the same value (both +1 or both -1) and +1 (logic 1) when they differ. The sum of two LLRs denoted by the operator ⊞ is therefore defined as the LLR of the modulo-2 sum of the underlying statistically independent data bits. Equation (1.10) is an approximation of equation (1.9) that will be very useful in the numerical example considered later.
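The exact LLR addition of equation (1.9) and its sign-min approximation of equation (1.10) can be sketched as follows (an illustrative sketch; function names are my own):

```python
import math

def boxplus(l1, l2):
    """Exact log-likelihood addition, equation (1.9):
    L(d1) [+] L(d2) = ln((e^l1 + e^l2) / (1 + e^(l1 + l2)))."""
    return math.log((math.exp(l1) + math.exp(l2)) / (1.0 + math.exp(l1 + l2)))

def boxplus_approx(l1, l2):
    """Min-sum approximation, equation (1.10):
    (-1) * sgn(l1) * sgn(l2) * min(|l1|, |l2|)."""
    sgn = lambda v: -1.0 if v < 0 else 1.0
    return -sgn(l1) * sgn(l2) * min(abs(l1), abs(l2))
```

The two limiting cases noted in the text are easy to check: a very large second argument gives L ⊞ (large) ≈ -L, and L ⊞ 0 = 0.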
The sum of LLRs, as described in equation (1.9) or (1.10), produces some interesting results when one of the LLRs is very large or very small:

L(d) ⊞ ∞ = -L(d)   and   L(d) ⊞ 0 = 0

The iterative decoding of a two-dimensional code proceeds as follows:

1. Set the a priori LLR L(d) = 0 (no a priori knowledge is available at the start).
2. Decode horizontally, and using equation (1.8) obtain the horizontal extrinsic LLR: Leh(d̂) = L(d̂) - Lc(x) - L(d).
3. Set L(d) = Leh(d̂) for the vertical decoding of step 4.
4. Decode vertically, and using equation (1.8) obtain the vertical extrinsic LLR: Lev(d̂) = L(d̂) - Lc(x) - L(d).
5. Set L(d) = Lev(d̂) and repeat steps 2 through 5.
6. After enough iterations (of steps 2 through 5) to make reliable decisions, go to step 7.
7. The soft output is: L(d̂) = Lc(x) + Leh(d̂) + Lev(d̂).

1.2.1 Two-dimensional single-parity code

At the encoder, the data bits and the parity bits in each row or column are related, in terms of the binary digits (1, 0), by:

p_ij = d_i ⊕ d_j   (1.11)

and

d_i = d_j ⊕ p_ij   (1.12)

where ⊕ denotes modulo-2 addition. The transmitted bits are represented by the sequence {d_i, p_ij}. At the receiver input, the noise-corrupted bits are represented by the sequence {x_i, x_ij}, where x_i = d_i + n for each received data bit and x_ij = p_ij + n for each received parity bit; the indices i and j indicate position in the encoder output array. It is often more convenient to write the received sequence as {x_k}, where k is a time index; both conventions are used below. Using the relationships developed in equations (1.6) through (1.8), and assuming the usual AWGN model, the LLR of the channel measurement of a signal x_k received at time k is:

Lc(x_k) = ln[ p(x_k | d = +1) / p(x_k | d = -1) ] = 2 x_k / σ²   (1.13a)

where σ² is the noise variance. Thus we have covered the theoretical basis of the log-likelihood ratio (LLR), the concept on which the turbo decoding structure is built. To see the effect of the above algorithm, let us consider the example of the product code (that is, a code built on a two-dimensional array).
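The decoding steps above, combined with the extrinsic relation of equation (1.12), can be sketched on a small two-dimensional single-parity block. The numeric channel LLRs and all variable names below are illustrative assumptions, not values from the report; the convention is logic 1 ↔ +1, logic 0 ↔ -1.

```python
import math

def boxplus(l1, l2):
    # exact log-likelihood addition, equation (1.9)
    return math.log((math.exp(l1) + math.exp(l2)) / (1.0 + math.exp(l1 + l2)))

# Channel LLRs Lc(x) for a 2x2 data block (d11 d12 / d21 d22) with
# row parities ph_i = d_i1 XOR d_i2 and column parities pv_j = d_1j XOR d_2j.
Lc_d = [[1.5, 0.1], [0.2, 0.3]]
Lc_ph = [2.5, 2.0]
Lc_pv = [6.0, 1.0]

# Steps 2-3: horizontal decoding -> horizontal extrinsic LLRs. By equation
# (1.12), d_i1 = d_i2 XOR ph_i, so the extrinsic value for one data bit is
# the boxplus of the other data bit's LLR and the row parity's LLR.
Le_h = [[boxplus(Lc_d[i][1 - j], Lc_ph[i]) for j in range(2)] for i in range(2)]

# Steps 4-5: vertical decoding, fed with Lc + horizontal extrinsic as prior
Le_v = [[boxplus(Lc_d[1 - i][j] + Le_h[1 - i][j], Lc_pv[j]) for j in range(2)]
        for i in range(2)]

# Step 7: soft output after one full iteration, L = Lc + Le_h + Le_v
L = [[Lc_d[i][j] + Le_h[i][j] + Le_v[i][j] for j in range(2)] for i in range(2)]
hard = [[1 if L[i][j] > 0 else 0 for j in range(2)] for i in range(2)]
# hard -> [[1, 0], [0, 1]]
```

Note how the weakly received bits (LLRs 0.1, 0.2, 0.3) are pulled toward decisions consistent with the strongly received parities, which is exactly the effect the iterative procedure exploits.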
1.2.2 Product code

The structure can be described as a data array of rows and columns. The different portions of the structure are denoted d for the data, p_h for the horizontal parity (computed along the rows), and p_v for the vertical parity (computed along the columns). As a result, the data are encoded by two codes: a horizontal code and a vertical code.

Figure 1.4: Two-dimensional product code
Figure 1.5a: Encoder output ordering
Figure 1.5b: Log-likelihood ratios at the decoder input

1.2.3 Extrinsic likelihood

Here the term Le(d̂) is the extrinsic LLR contributed by the code: the knowledge gained from the reception of the other data bits and their a priori probabilities, combined with the reception of the corresponding parity bits. In general, the soft output for the received signal corresponding to a data bit d is:

L(d̂) = Lc(x) + L(d) + Le(d̂)   (1.14)

II. TURBO CODE STRUCTURE AND ITERATIVE TURBO DECODING ALGORITHM

2.1 Introduction

The Viterbi algorithm (VA) is the decoding algorithm that searches for the most likely state sequence given the received signal sequence. Turbo codes were first introduced in 1993 and consist of two parallel recursive systematic convolutional codes (RSCC) combined with an interleaver and an iterative decoding algorithm. Conventional turbo decoding algorithms share the characteristic of combining iterative decoding with soft-input, soft-output (SISO) component decoders. The basic VA and MAP decoding algorithms differ only in their optimality criterion: the MAP algorithm differs from the VA in that it determines, for each individual bit, the value with the greatest likelihood given the received signal sequence. If the output is a soft quantity, it can improve the quality significantly. The intrinsic difference between them is that the states estimated by the VA must form a connected path through the trellis, while the states estimated by the MAP algorithm need not be connected into a path. In this way, the output of one decoder can serve as prior knowledge for the other. We can briefly describe the MAP algorithm with SISO decoding: we need to determine the a posteriori probability of each data bit by the compound probability formula.
The a posteriori probability P(d_k | y) is proportional to f(y | d_k) P(d_k), where P(d_k | y) is the a posteriori probability expressed through the pdf f(y | d_k) of the received sequence, and P(d_k) is the a priori probability. Normally we assume equal a priori probabilities at the input, so that maximizing P(d_k | y) is equivalent to maximizing f(y | d_k).

Turbo codes achieve an error-control quality within a few tenths of a dB of the Shannon limit; a quality gain on the order of 2 dB was soon obtained thanks to the error control achieved by the turbo code. Larger component codes are encouraged for radio communication systems as bandwidth requirements increase with data communication services. A turbo code thus has two essential parts: parallel concatenated convolutional codes and iterative decoding.

2.2 Encoder structure and iterative decoder

A turbo code has a structure of at least two RSC codes connected in parallel, combined with an interleaver and a SISO decoding algorithm:

Figure 2.1: Turbo code encoding diagram

The input data sequence is fed directly to the RSC1 convolutional encoder to generate parity bits, and through the interleaver to RSC2. The systematic and parity bit streams are combined in the puncturer and multiplexer, which removes some of the parity bits to raise the code rate. The encoder output signal is modulated and transmitted over the channel as shown in Figure 2.1. Figure 2.2 is an example of an RSC encoding scheme, in which the input sequence is passed straight through to the output as the systematic bit stream.

Figure 2.2: RSC code
Figure 2.3: State diagram (a) and trellis diagram (b) of the convolutional code

The decoder uses iterative decoding, so the extrinsic information of one SISO decoder is used as a priori information by the other SISO decoder. Since the encoder uses an interleaver, the decoder contains the same interleavers and the corresponding deinterleavers. At the decoder input, the demultiplexer separates the systematic and parity bit streams for the SISO decoders.
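A minimal RSC encoder along the lines of Figure 2.2 might look as follows. The (7, 5) octal generator pair and the function name are assumptions for illustration; the report does not specify the generators.

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional (RSC) encoder with
    feedback polynomial 1 + D + D^2 (octal 7) and feedforward polynomial
    1 + D^2 (octal 5), memory 2. Returns (systematic, parity) sequences;
    the systematic output is the input passed straight through."""
    s1 = s2 = 0                        # shift-register state
    systematic, parity = [], []
    for u in bits:
        fb = u ^ s1 ^ s2               # recursive feedback: 1 + D + D^2
        p = fb ^ s2                    # feedforward taps: 1 + D^2
        systematic.append(u)
        parity.append(p)
        s2, s1 = s1, fb                # shift the register
    return systematic, parity
```

Because of the feedback, a single 1 at the input produces an infinite (periodic) parity response, which is precisely the recursive property that makes RSC codes suitable components for turbo codes.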
A SISO decoder has a soft input path and a soft output path. The input consists of the channel reliability values Lc·y and the a priori information; the output consists of the a posteriori information L(d) and the remaining term, called extrinsic information. The iterative decoder therefore has the following structure:

Figure 2.4: Iterative decoding diagram

In the diagram above, each component decoder is a SISO decoder. The coded sequence received over the transmission channel is fed to the demultiplexer; the systematic sequence is passed through the interleaver and combined with the interleaved extrinsic sequence at the input of decoder 2, and the extrinsic output of decoder 2 is deinterleaved and fed back to decoder 1.

2.3 Turbo decoding algorithm

There are two types of algorithms:
+ the MAP decoding algorithm;
+ the SOVA decoding algorithm.

2.3.1 Overview of decoding algorithms

Turbo codes use serially connected component decoders, because the serial connection scheme can share information between the connected decoders, whereas decoders in a parallel connection scheme mostly decode independently of one another. So although the turbo code has a parallel concatenated encoding structure, the turbo decoding process is based on a serial decoding scheme. Figure 2.5 presents an overview of the decoding algorithms based on the trellis diagram.

Figure 2.5: Overview of decoding algorithms

The first family is the MAP algorithm, also known as the BCJR algorithm (Bahl-Cocke-Jelinek-Raviv, after the four people who discovered it), which minimizes the bit error probability. The second family consists of maximum-likelihood (ML) decoding algorithms, such as the Viterbi algorithm, which minimize the sequence error probability. Besides these two families of decoding algorithms, several other iterative decoding techniques exist.

2.3.2 MAP algorithm

A turbo decoder is a combination of multiple component decoders (usually two) connected in an iterative decoding loop.
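The closed loop of Figure 2.4 can be sketched as a skeleton, with `siso` standing in for any soft-in/soft-out component decoder (MAP or SOVA) that returns extrinsic LLRs. All names here are illustrative; the component decoder itself is deliberately left as a placeholder.

```python
def interleave(seq, perm):
    """Apply the interleaver permutation: output position i takes input perm[i]."""
    return [seq[p] for p in perm]

def deinterleave(seq, perm):
    """Invert the interleaver permutation."""
    out = [0.0] * len(seq)
    for i, p in enumerate(perm):
        out[p] = seq[i]
    return out

def turbo_decode(Lc_sys, Lc_par1, Lc_par2, perm, siso, n_iter=8):
    """Skeleton of the iterative decoder of Figure 2.4. `siso` is any
    callable (systematic LLRs, parity LLRs, a priori LLRs) -> extrinsic
    LLRs; this is a structural sketch, not a decoder implementation."""
    La = [0.0] * len(Lc_sys)             # a priori LLRs, zero at start
    Le1 = La
    for _ in range(n_iter):
        Le1 = siso(Lc_sys, Lc_par1, La)  # decoder 1 -> extrinsic
        Le2 = siso(interleave(Lc_sys, perm), Lc_par2, interleave(Le1, perm))
        La = deinterleave(Le2, perm)     # feed back as a priori for decoder 1
    # final soft output (systematic + a priori + extrinsic), then hard decision
    L = [s + a + e for s, a, e in zip(Lc_sys, La, Le1)]
    return [1 if v > 0 else 0 for v in L]
```

The key structural point is that only extrinsic information circulates around the loop; the systematic channel values and the a priori term are re-added only once, in the final soft output.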
Most modifications of the Viterbi algorithm provide soft output values (reliability information) to a soft-value comparator used to resolve the output bits. Another algorithm of interest is Bahl's symbol-by-symbol maximum a posteriori (MAP) algorithm, which has been published.

Figure 2.6: MAP iterative decoder

The decoding algorithm is performed as follows:

1. Separate the received signal into the sequences corresponding to decoder 1 and decoder 2.
2. In the first iteration, the a priori information of decoder 1 is set to zero. After decoder 1 has produced its extrinsic information, it is interleaved and passed to decoder 2, where it acts as that decoder's a priori information. After decoder 2 has produced its extrinsic information, the iteration ends; the extrinsic information of decoder 2 is deinterleaved and returned to decoder 1 as a priori information.
3. The decoding process repeats until the specified number of iterations has been performed. After the last iteration, the estimate is computed by deinterleaving the information at decoder 2 and making a hard decision.

2.3.3 The principle of the soft-output Viterbi decoder

This modified Viterbi decoder is referred to as the soft-output Viterbi algorithm (SOVA) decoder. With turbo codes, we encounter two difficulties when using conventional Viterbi decoders: applied to convolutional codes, the Viterbi algorithm produces the ML output sequence, but only as hard decisions and without per-bit reliability information.

Figure 2.7: Concatenated SOVA decoder

2.3.3.1 Generalized SOVA decoder reliability

Figure 2.8: Survivor and competing paths for reliability estimation

The solid line indicates the survivor path (assumed here to be part of the ML path) and the dashed line the competing path that merges with it at time t in state S1. In Figure 2.8 the trellis diagram shows the states over time. An accumulated metric Vs(S1, t) is assigned to the survivor path at each node, and an accumulated metric Vc(S1, t) to the competing path.
The reliability of the decision at time t is then

L(t) = | Vs(S1, t) - Vc(S1, t) |

Figure 2.9: An example showing the assignment of reliability values using the direct metric differences

To improve the reliability values along the survivor path, a trace-back feature that revisits and updates earlier reliability values is used:
+ Find the lowest MEM greater than 0, denoted MEMlow, whose reliability value has not yet been updated.
+ Update the reliability value L(t - MEMlow) by assigning it the lowest reliability value found between MEM = 0 and MEM = MEMlow.
+ Repeat this procedure for the remaining positions.

2.3.3.2 Block diagram of the SOVA decoder

SOVA decoders can be implemented in different ways, but from a computational standpoint it is not easy to implement a SOVA decoder for codes with a large constraint length K and a long frame size, because the reliability values of all survivor paths must be updated. Since the update procedure ultimately matters only for the ML path, the SOVA decoder implementation performs the update procedure only for the ML path, as presented in Figure 2.10.

Figure 2.10: SOVA decoder block diagram

This SOVA decoder implementation consists of two separate SOVA decoders. The first SOVA decoder takes as inputs L(u), the a priori values, and Lc·y, the weighted received values, and outputs u', the estimated bit decisions, and L(u'), the corresponding reliability information. The second SOVA decoder (supplied with the ML path information) recalculates the ML path and also calculates and updates the reliability values.

Figure 2.11: Iterative SOVA decoder

The decoder processes the received channel bits frame by frame. The turbo decoder shown is the closed-loop connection of SOVA component decoders. The received bits are weighted by the channel reliability value and shifted through registers.
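The reliability measure of Section 2.3.3.1, a simplified form of the trace-back update, and the extrinsic subtraction used in the SOVA iteration below can be sketched as follows (a hedged sketch with assumed names; the real update procedure walks the trellis, which is omitted here):

```python
def sova_reliability(v_survivor, v_competitor):
    """Reliability of a bit decision at one trellis node (Section 2.3.3.1):
    L(t) = |Vs(S1,t) - Vc(S1,t)|. A small value means the competing path
    was nearly as good, so the decision is unreliable."""
    return abs(v_survivor - v_competitor)

def update_reliabilities(reliabilities, delta, differing_positions):
    """Simplified trace-back update: where the competing path (metric
    difference `delta`) implies different bit decisions, keep the minimum
    of the current reliability and `delta`."""
    for t in differing_positions:
        reliabilities[t] = min(reliabilities[t], delta)
    return reliabilities

def extrinsic_info(L_out, L_apriori, Lc, y_sys):
    """Extrinsic information from a component decoder's soft output:
    subtract the a priori part and the weighted systematic channel value,
    Le(u') = L(u') - La(u') - Lc*y, element by element."""
    return [lo - la - Lc * y for lo, la, y in zip(L_out, L_apriori, y_sys)]
```

The subtraction in `extrinsic_info` is what prevents a decoder from feeding its partner information the partner already has, which would otherwise reinforce errors across iterations.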
The turbo decoding algorithm at the n-th iteration proceeds as follows. The SOVA1 decoder has as inputs the sequences Lc·y1 (systematic) and Lc·y2 (parity), together with the a priori sequence Le2(u'), and outputs the sequence Le1(u'). For the first iteration, Le2(u') = 0, because there is no a priori value yet (no extrinsic value from SOVA2). The extrinsic information from SOVA1 is calculated as

Le1(u') = L1(u') - Le2(u') - Lc·y1   (3.4)

where Lc = 2/σ² is the channel reliability value (compare equation (1.13a)). The sequences Lc·y1 and Le1(u') are then interleaved, giving I{Lc·y1} and I{Le1(u')}. The SOVA2 decoder has as inputs the sequences I{Lc·y1} (interleaved systematic), Lc·y3 (the parity of the interleaved data) and I{Le1(u')} (a priori information), and outputs the sequences I{L2(u')} and I{u'}. The extrinsic information from SOVA2 is obtained as

I{Le2(u')} = I{L2(u')} - I{Le1(u')} - I{Lc·y1}

The sequences I{Le2(u')} and I{u'} are deinterleaved, giving Le2(u') and u'. Le2(u') is returned to SOVA1 as a priori information for the next iteration, and u' is the output of the bits estimated at the n-th iteration.

III. TURBO CODE APPLICATION IN MOBILE COMMUNICATIONS

3.1 Limitations when applying Turbo codes to multimedia communications

- real-time computation requirements;
- large data blocks;
- limited bandwidth.

3.2 Transmission channel characteristics

The characteristics of the transmission channel in mobile multimedia communications are far less simple and stable than in the wired environment. One method to understand the channel characteristics is to use Bayesian networks:

- Transmit data over the channel using a variety of turbo code models with different parameters.
- Record the resulting BER values.
- Build a Bayesian network with the accuracy of the resulting code and the influencing factors as network nodes; a directed edge runs from each factor to the accuracy of the resulting code.
- Use the recorded BER values to train this network.
- Derive a set of parameters that optimizes the adjustments.

3.3 Recommendations when applying Turbo codes to multimedia communications

Large frame size: as mentioned above, an important characteristic of multimedia communications is the large data block. From this comes the idea of using a
large frame size for the turbo code. A large frame size implies a large interleaver size and greatly increases the quality of the turbo code. With this large frame size, the coding gain of the turbo code can be used to:

- reduce the BER over the transmission channel;
- improve response times, by reducing the number of decoding iterations or by using some of the decoding enhancements outlined below.

Improved decoding process:
- Dynamic decoding: the decoding method is summarized in the following two points:
