The Essential Guide to Image Processing - P15

FIGURE 17.2 The original 512 × 512 Lena image (top) with an 8 × 8 block (bottom) identified with a black boundary and with one corner at [209, 297].

187 188 189 202 209 175  66  41
191 186 193 209 193  98  40  39
188 187 202 202 144  53  35  37
189 195 206 172  58  47  43  45
197 204 194 106  50  48  42  45
208 204 151  50  41  41  41  53
209 179  68  42  35  36  40  47
200 117  53  41  34  38  39  63

FIGURE 17.3 The 8 × 8 block identified in Fig. 17.2.

FIGURE 17.4 DCT of the 8 × 8 block in Fig. 17.3. (The coefficient array is not reproduced here; its DC coefficient is 915.6, and the coefficient magnitudes generally decay toward the high-frequency corner.)

The values of q[m,n] are restricted to be integers with 1 ≤ q[m,n] ≤ 255, and they determine the quantization step for the corresponding coefficient. The quantized coefficient is given by

$$qX[m,n] = \mathrm{round}\!\left(\frac{X[m,n]}{q[m,n]}\right).$$

A quantization table (or matrix) is required for each image component. However, a quantization table can be shared by multiple components. For example, in a luminance-plus-chrominance Y-Cr-Cb representation, the two chrominance components usually share a common quantization matrix. The JPEG quantization tables given in Annex K of the standard for the luminance and chrominance components are shown in Fig. 17.5. These tables were obtained from a series of psychovisual experiments to determine the visibility thresholds for the DCT basis functions, for a 760 × 576 image with chrominance components downsampled by 2 in the horizontal direction, viewed at a distance equal to six times the screen width. On examining the tables, we observe that the quantization table for the chrominance components has larger values in general, implying that the quantization of the chrominance planes is coarser than that of the luminance plane. This is done to exploit the human visual system's (HVS) relative insensitivity to chrominance components as compared with luminance components.

16  11  10  16  24  40  51  61      17  18  24  47  99  99  99  99
12  12  14  19  26  58  60  55      18  21  26  66  99  99  99  99
14  13  16  24  40  57  69  56      24  26  56  99  99  99  99  99
14  17  22  29  51  87  80  62      47  66  99  99  99  99  99  99
18  22  37  56  68 109 103  77      99  99  99  99  99  99  99  99
24  35  55  64  81 104 113  92      99  99  99  99  99  99  99  99
49  64  78  87 103 121 120 101      99  99  99  99  99  99  99  99
72  92  95  98 112 100 103  99      99  99  99  99  99  99  99  99

FIGURE 17.5 Example quantization tables for luminance (left) and chrominance (right) components provided in the informative sections of the standard.

The tables shown have been known to offer satisfactory performance, on the average, over a wide variety of applications and viewing conditions. Hence they have been widely accepted and over the years have become known as the "default" quantization tables. Quantization tables can also be constructed by casting the problem as one of optimal allocation of a given budget of bits based on the coefficient statistics. The general principle is to estimate the variances of the DCT coefficients and assign more bits to coefficients with larger variances.

We now examine the quantization of the DCT coefficients given in Fig. 17.4 using the luminance quantization table in Fig. 17.5(a).
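To make the quantization step concrete, the following minimal Python sketch applies the rounding rule above, dividing each DCT coefficient by the corresponding table entry. The sample block values are illustrative stand-ins rather than the exact coefficients of Fig. 17.4, and the function names are ours:

```python
import numpy as np

# Standard luminance quantization table (Annex K of the JPEG standard).
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def quantize(X, q):
    """Quantize an 8x8 DCT block: qX[m,n] = round(X[m,n] / q[m,n])."""
    return np.round(X / q).astype(int)

def dequantize(qX, q):
    """Decoder-side reconstruction: X'[m,n] = qX[m,n] * q[m,n]."""
    return qX * q

# Illustrative block: a large DC term and quickly decaying AC energy.
X = np.zeros((8, 8))
X[0, 0], X[0, 1], X[1, 0] = 915.6, 451.3, -16.8
qX = quantize(X, Q_LUMA)
print(qX[0, :2], qX[1, 0])   # e.g., [57 41] -1
```

Note how the large quantization steps in the high-frequency corner of the table drive most of those coefficients to zero, which is exactly what makes the subsequent run-length coding effective.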
Each DCT coefficient is divided by the corresponding entry in the quantization table, and the result is rounded to yield the array of quantized DCT coefficients in Fig. 17.6. We observe that a large number of the quantized DCT coefficients are zero, making the array suitable for run-length coding as described in Section 17.6. The block from the Lena image recovered after decoding is shown in Fig. 17.7.

FIGURE 17.6 The 8 × 8 discrete cosine transform block in Fig. 17.4 after quantization with the luminance quantization table shown in Fig. 17.5. (The quantized array, whose first row begins 57, 41, 2, has its few nonzero values confined to the low-frequency corner; all remaining entries are zero.)

181 185 196 208 203 159  86  27
191 189 197 203 178 118  58  25
192 193 197 185 136  72  36  33
184 199 195 151  90  48  38  43
185 207 185 110  52  43  49  44
201 198 151  74  32  40  48  38
213 161  92  47  32  35  41  45
216 122  43  32  39  32  36  58

FIGURE 17.7 The block selected from the Lena image recovered after decoding.

17.4.2 Quantization Table Design

With lossy compression, the amount of distortion introduced in the image is inversely related to the number of bits (bit rate) used to encode the image: the higher the rate, the lower the distortion. Naturally, for a given rate, we would like to incur the minimum possible distortion, and for a given distortion level, we would like to encode with the minimum rate possible. Hence lossy compression techniques are often studied in terms of their rate-distortion (RD) performance, which characterizes the lowest distortion achievable at each bit rate or, equivalently, the highest compression achievable at a given level of distortion. The RD performance of JPEG is determined mainly by the quantization tables. As mentioned before, the standard does not recommend any particular table or set of tables and leaves their design completely to the user. While the image quality obtained from the use of the "default" quantization tables described earlier is very good, there is a need for the flexibility to adjust the image quality by changing the overall bit rate. In practice, scaled versions of the "default" quantization tables are very commonly used to vary the quality and compression performance of JPEG. For example, the popular IJPEG implementation, freely available in the public domain, allows this adjustment through the use of a quality factor Q that scales all elements of the quantization table. The scaling factor is computed as

$$\text{Scale factor} = \begin{cases} \dfrac{5000}{Q} & \text{for } 1 \le Q < 50, \\[4pt] 200 - 2Q & \text{for } 50 \le Q \le 99, \\[2pt] 1 & \text{for } Q = 100. \end{cases} \tag{17.1}$$

(A code sketch of this scaling rule appears a few paragraphs below.)

Although varying the rate by scaling a base quantization table according to some fixed scheme is convenient, it is clearly not optimal. Given an image and a bit rate, there exists a quantization table that provides the minimum distortion at the given rate. Clearly, this "optimal" table varies with different images, different bit rates, and even different definitions of distortion, such as mean square error (MSE) or perceptual distortion. To get the best performance from JPEG in a given application, custom quantization tables may need to be designed. Indeed, there has been a lot of work reported in the literature addressing the issue of quantization table design for JPEG. Broadly speaking, this work can be classified into three categories. The first deals with explicitly optimizing the RD performance of JPEG based on statistical models for DCT coefficient distributions. The second attempts to optimize the visual quality of the reconstructed image at a given bit rate, given a set of display conditions and a perception model. The third addresses constraints imposed by applications, such as optimization for printers.
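The following sketch applies Eq. (17.1) and scales a base table accordingly. The rounding and clamping details follow common practice in IJG-derived code but are an assumption here, not a transcription of any particular library:

```python
def scale_factor(Q: int) -> float:
    """Quality-factor-to-scale mapping of Eq. (17.1), for 1 <= Q <= 100."""
    if not 1 <= Q <= 100:
        raise ValueError("quality factor must be in [1, 100]")
    if Q < 50:
        return 5000.0 / Q
    if Q <= 99:
        return 200.0 - 2.0 * Q
    return 1.0

def scale_table(base, Q):
    """Scale a base quantization table; entries stay integers in [1, 255].
    The scale factor is interpreted as a percentage of each base entry."""
    s = scale_factor(Q)
    return [[min(255, max(1, int((q * s + 50) // 100))) for q in row]
            for row in base]
```

For Q = 50 the scale factor is 100, so the scaled table equals the base table; smaller Q coarsens every quantization step proportionally, and Q near 100 drives the table toward all ones (nearly lossless quantization of the DCT coefficients).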
An example of the first approach is provided by the work of Ratnakar and Livny [30], who propose RD-OPT, an efficient algorithm for constructing quantization tables with optimal RD performance for a given image. The RD-OPT algorithm uses DCT coefficient distribution statistics from any given image in a novel way to optimize quantization tables simultaneously for the entire possible range of compression-quality tradeoffs. The algorithm is restricted to MSE-related distortion measures, as it exploits the property that the DCT is a unitary transform; that is, the MSE in the pixel domain is the same as the MSE in the DCT domain. RD-OPT essentially consists of the following three stages (a simplified code sketch of the first two stages appears at the end of this subsection):

1. Gather DCT statistics for the given image or set of images. Essentially, this step involves counting how many times the n-th coefficient gets quantized to the value v when the quantization step size is q, and what the MSE for the n-th coefficient is at that step size.

2. Use the statistics collected above to calculate $R_n(q)$, the rate for the n-th coefficient when the quantization step size is q, and the corresponding distortion $D_n(q)$, for each possible q. The rate $R_n(q)$ is estimated from the corresponding first-order entropy of the coefficient at the given quantization step size.

3. Compute $R(Q)$ and $D(Q)$, the rate and distortion for a quantization table Q, as
$$R(Q) = \sum_{n=0}^{63} R_n(Q[n]) \quad \text{and} \quad D(Q) = \sum_{n=0}^{63} D_n(Q[n]),$$
respectively. Use dynamic programming to optimize $R(Q)$ against $D(Q)$.

Optimizing quantization tables with respect to MSE may not be the best strategy when the final image is to be viewed by a human. A better approach is to match the quantization table to a human visual system (HVS) model. As mentioned before, the "default" quantization tables were arrived at in an image-independent manner, based on the visibility of the DCT basis functions. Clearly, better performance could be achieved by an image-dependent approach that exploits HVS properties like frequency, contrast, and texture masking and sensitivity. A number of HVS-model-based techniques for quantization table design have been proposed in the literature [3, 18, 41]. Such techniques perform an analysis of the given image and arrive at a set of thresholds, one for each coefficient, called the just noticeable distortion (JND) thresholds. The underlying idea is that if the distortion introduced is at or just below these thresholds, the reconstructed image will be perceptually distortion free.

Optimizing quantization tables with respect to MSE may also not be appropriate when there are constraints on the type of distortion that can be tolerated. For example, on examining Fig. 17.5, it is clear that the "high-frequency" AC quantization factors, i.e., q[m,n] for larger values of m and n, are significantly greater than the DC factor q[0,0] and the "low-frequency" AC quantization factors. There are applications in which the information of interest in an image resides in the high-frequency AC coefficients. For example, in the compression of radiographic images [34], the critical diagnostic information is often in the high-frequency components. The size of microcalcifications in mammograms is often so small that a coarse quantization of the higher AC coefficients would be unacceptable. In such cases, JPEG allows custom tables to be provided in the bitstream.
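To convey the flavor of RD-OPT, here is a highly simplified Python sketch of stages 1 and 2: gathering per-coefficient statistics and estimating the rate from first-order entropy. It illustrates the idea only and is not Ratnakar and Livny's implementation; the dynamic-programming search of stage 3 is omitted:

```python
import numpy as np
from collections import Counter

def rd_statistics(coeffs_n, step_sizes):
    """For one DCT coefficient position n, given its values over all blocks
    (coeffs_n, a 1D array), return {q: (rate_bits, mse)} per step size q."""
    stats = {}
    for q in step_sizes:
        qvals = np.round(coeffs_n / q).astype(int)
        recon = qvals * q
        mse = float(np.mean((coeffs_n - recon) ** 2))
        # Stage 2: rate estimated as the first-order entropy of the
        # quantized symbol distribution (bits per block, this coefficient).
        counts = Counter(qvals.tolist())
        total = sum(counts.values())
        rate = -sum(c / total * np.log2(c / total) for c in counts.values())
        stats[q] = (rate, mse)
    return stats

# R(Q) and D(Q) for a full table Q are then sums over the 64 positions,
# and stage 3 searches over tables to trade R(Q) against D(Q).
```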
Finally, quantization tables can also be optimized for hard-copy devices like printers. JPEG was designed for compressing images that are to be displayed on devices such as cathode ray tubes, which offer a large range of pixel intensities. Hence, when an image is rendered through a half-tone device [40] like a printer, the image quality can be far from optimal. Vander Kam and Wong [37] give a closed-loop procedure to design a quantization table that is optimal for a given half-toning and scaling method. The basic idea behind their algorithm is to code more coarsely the frequency components that are corrupted by half-toning and to code more finely the components that are left untouched by half-toning. Similarly, to take into account the effects of scaling, their design procedure assigns a higher bit rate to the frequency components that correspond to a large gain in the scaling filter response and a lower bit rate to components that are attenuated by the scaling filter.

17.5 COEFFICIENT-TO-SYMBOL MAPPING AND CODING

The quantizer makes the coding lossy, but it provides the major contribution to compression. However, the nature of the quantized DCT coefficients and the preponderance of zeros in the array lead to further compression with the use of lossless coding. This requires that the quantized coefficients be mapped to symbols in such a way that the symbols lend themselves to effective coding. For this purpose, JPEG treats the DC coefficient and the set of AC coefficients in a different manner. Once the symbols are defined, they are represented with Huffman coding or arithmetic coding.

In defining symbols for coding, the DCT coefficients are scanned by traversing the quantized coefficient array in the zig-zag fashion shown in Fig. 17.8. The zig-zag scan processes the DCT coefficients in increasing order of spatial frequency. Recall that the quantized high-frequency coefficients are zero with high probability. Hence scanning in this order leads to a sequence that contains a large number of trailing zero values, which can be coded efficiently as shown below (a code sketch for generating this scan order is given at the end of Section 17.5.2). The [0,0]-th element, the quantized DC coefficient, is first separated from the remaining string of 63 AC coefficients, and symbols are defined as shown in Fig. 17.9.

FIGURE 17.8 Zig-zag scan procedure.

17.5.1 DC Coefficient Symbols

The DC coefficients in adjacent blocks are highly correlated. This fact is exploited to code them differentially. Let $qX_i[0,0]$ and $qX_{i-1}[0,0]$ denote the quantized DC coefficients in blocks $i$ and $i-1$. The difference $\delta_i = qX_i[0,0] - qX_{i-1}[0,0]$ is computed. Assuming a precision of 8 bits/pixel for each component, it follows that the largest DC coefficient value (with q[0,0] = 1) is less than 2048, so that the values of $\delta_i$ lie in the range [−2047, 2047]. If Huffman coding were applied to these values directly, a very large coding table would be required. In order to limit the size of the coding table, the values in this range are grouped into 12 size categories, which are assigned labels 0 through 11. Category $k$ contains the $2^k$ elements $\{\pm 2^{k-1}, \ldots, \pm(2^k - 1)\}$. The difference $\delta_i$ is mapped to a symbol described by a pair (category, amplitude). The 12 categories are Huffman coded. To distinguish values within the same category, $k$ extra bits are used to represent a specific one of the $2^k$ possible "amplitudes" within category $k$. The amplitude of $\delta_i$ with $2^{k-1} \le \delta_i \le 2^k - 1$ is simply given by its binary representation, whereas the amplitude of $\delta_i$ with $-(2^k - 1) \le \delta_i \le -2^{k-1}$ is given by the one's complement of the absolute value $|\delta_i|$, or equivalently by the binary representation of $\delta_i + 2^k - 1$.
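As an illustrative sketch (not code from the standard), the (category, amplitude) mapping for a DC difference can be written as follows; the helper names are ours:

```python
def category(v: int) -> int:
    """Size category k such that 2**(k-1) <= |v| <= 2**k - 1 (k = 0 for v = 0)."""
    return abs(v).bit_length()

def amplitude_bits(v: int, k: int) -> str:
    """k-bit amplitude field: binary value of v if v >= 0,
    one's complement of |v| (i.e., v + 2**k - 1) if v < 0."""
    return format(v if v >= 0 else v + (1 << k) - 1, f"0{k}b") if k else ""

def dc_symbol(dc: int, dc_prev: int):
    """Map a quantized DC coefficient to its (category, amplitude) symbol."""
    diff = dc - dc_prev
    k = category(diff)
    return k, amplitude_bits(diff, k)

# Example from Fig. 17.9(a): current DC = 57, previous DC = 59.
print(dc_symbol(57, 59))   # -> (2, '01'); with the category-2 Huffman
                           #    code "011" this yields the bits 01101.
```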
17.5.2 Mapping AC Coefficients to Symbols

As observed before, most of the quantized AC coefficients are zero. The zig-zag-scanned string of 63 coefficients contains many consecutive occurrences, or "runs," of zeros, which makes the quantized AC coefficients suitable for run-length coding (RLC). The symbols in this case are conveniently defined as [size of run of zeros, nonzero terminating value], which can then be entropy coded. However, the number of possible AC coefficient values is large, as is evident from the definition of the DCT: for 8-bit pixels, the allowed range of AC coefficient values is [−1023, 1023]. In view of the large coding tables this entails, a procedure similar to that discussed above for the DC coefficients is used. Categories are defined for suitably grouped values that can terminate a run, and a run/category pair, together with the amplitude within the category, defines a symbol. The category definitions and amplitude bits are generated by the same procedure as in DC coefficient difference coding. Thus, a 4-bit category value is concatenated with a 4-bit run length to obtain an 8-bit [run/category] symbol. This symbol is then encoded using either Huffman or arithmetic coding.

There are two special cases that arise when coding the [run/category] symbol. First, since the run value is restricted to 15, the symbol (15/0) is used to denote fifteen zeros followed by a zero; a number of such symbols can be cascaded to specify larger runs. Second, if after a nonzero AC coefficient all the remaining coefficients are zero, then a special symbol (0/0) denoting an end-of-block (EOB) is encoded. Figure 17.9 continues our example and shows the sequence of symbols generated for coding the quantized DCT block shown in Fig. 17.6.

FIGURE 17.9 (a) Coding of the DC coefficient with value 57, assuming that the previous block has a DC coefficient of value 59: the difference −2 falls in category 2 and is coded as 01101. (b) Coding of the AC coefficients as a sequence of [run/category] codes with amplitude bits, terminated by an EOB symbol. The block consumes 112 bits in total, for a rate of 112/64 = 1.75 bits per pixel.
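The zig-zag scan and the AC run-length mapping just described can be sketched as below. This illustrates the mechanism only and is not the reference codec; category() is the same size-category helper used for the DC differences above:

```python
def zigzag_order(n: int = 8):
    """(row, col) positions of an n x n block in zig-zag scan order,
    i.e., by increasing spatial frequency."""
    order = []
    for s in range(2 * n - 1):                 # s indexes the anti-diagonals
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else reversed(diag))
    return order

def category(v: int) -> int:
    return abs(v).bit_length()                 # as in the DC sketch above

def ac_symbols(block):
    """Map a quantized 8x8 block to [run/category] symbols with amplitudes."""
    ac = [block[r][c] for r, c in zigzag_order()[1:]]   # skip the DC term
    symbols, run = [], 0
    for v in ac:
        if v == 0:
            run += 1
            continue
        while run > 15:                        # ZRL symbol: a run of 16 zeros
            symbols.append((15, 0, None))
            run -= 16
        k = category(v)
        symbols.append((run, k, v if v > 0 else v + (1 << k) - 1))
        run = 0
    if run:                                    # trailing zeros: end-of-block
        symbols.append((0, 0, None))
    return symbols
```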
17.5.3 Entropy Coding

The symbols defined for the DC and AC coefficients are entropy coded, mostly using Huffman coding or, optionally and infrequently, arithmetic coding based on probability estimates of the symbols. Huffman coding is a method of variable-length coding (VLC) in which shorter code words are assigned to the more frequently occurring symbols in order to achieve an average symbol code word length that is as close to the symbol source entropy as possible. Huffman coding is optimal (meets the entropy bound) only when the symbol probabilities are integral powers of 1/2. The technique of arithmetic coding [42] provides a way of approaching the theoretical bound of the source entropy. The baseline implementation of the JPEG standard uses Huffman coding only.

If Huffman coding is used, then Huffman tables, up to a maximum of eight in number, are specified in the bitstream. The tables constructed should not contain code words that (a) are more than 16 bits long or (b) consist of all ones. Recommended tables are listed in Annex K of the standard. If these tables are applied to the output of the quantizer shown in the first two columns of Fig. 17.9, then the algorithm produces the output bits shown in the remaining columns of the figure. The procedures for the specification and generation of the Huffman tables are identical to the ones used in the lossless standard [25].

17.6 IMAGE DATA FORMAT AND COMPONENTS

The JPEG standard is intended for the compression of both grayscale and color images. In a grayscale image, there is a single "luminance" component. However, a color image is represented with multiple components, and the JPEG standard sets stipulations on the allowed number of components and data formats. The standard permits a maximum of 255 color components, which are rectangular arrays of pixel values represented with 8- to 12-bit precision. For each color component, the largest dimension supported in either the horizontal or the vertical direction is $2^{16} = 65{,}536$. The color component arrays do not all necessarily have the same dimensions.

Assume that an image contains K color components denoted by $C_n$, n = 1, 2, ..., K. Let the horizontal and vertical dimensions of the n-th component be $X_n$ and $Y_n$, respectively. Define the dimensions

$$X_{\max} = \max_{1 \le n \le K} \{X_n\}, \quad Y_{\max} = \max_{1 \le n \le K} \{Y_n\}, \quad X_{\min} = \min_{1 \le n \le K} \{X_n\}, \quad Y_{\min} = \min_{1 \le n \le K} \{Y_n\}.$$

Each color component $C_n$, n = 1, 2, ..., K, is associated with relative horizontal and vertical sampling factors, denoted by $H_n$ and $V_n$ respectively, where

$$H_n = \frac{X_n}{X_{\min}}, \qquad V_n = \frac{Y_n}{Y_{\min}}.$$

The standard restricts the possible values of $H_n$ and $V_n$ to the set of four integers 1, 2, 3, 4. The largest values of the relative sampling factors are given by $H_{\max} = \max\{H_n\}$ and $V_{\max} = \max\{V_n\}$. According to the JFIF, the color information is specified by $[X_{\max}, Y_{\max}, H_n \text{ and } V_n \text{ for } n = 1, 2, \ldots, K, H_{\max}, V_{\max}]$. The horizontal dimensions of the components are computed by the decoder as

$$X_n = X_{\max} \times \frac{H_n}{H_{\max}}.$$

Example 1: Consider a raw image in a luminance-plus-chrominance representation consisting of K = 3 components, $C_1 = Y$, $C_2 = Cr$, and $C_3 = Cb$. Let the dimensions of the luminance matrix (Y) be $X_1 = 720$ and $Y_1 = 480$, and the dimensions of the two chrominance matrices (Cr and Cb) be $X_2 = X_3 = 360$ and $Y_2 = Y_3 = 240$. In this case, $X_{\max} = 720$, $Y_{\max} = 480$, $X_{\min} = 360$, and $Y_{\min} = 240$. The relative sampling factors are $H_1 = V_1 = 2$ and $H_2 = V_2 = H_3 = V_3 = 1$.

When images have multiple components, the standard specifies formats for organizing the data for the purpose of storage. In storing components, the standard provides the option of using either interleaved or noninterleaved formats. Processing and storage efficiency is aided, however, by interleaving the components so that the data can be read in a single scan. Interleaving is performed by defining a data unit for lossy coding as a single block of 8 × 8 pixels in each color component. This definition can be used to partition the n-th color component $C_n$, n = 1, 2, ..., K, into rectangular blocks, each of which contains $H_n \times V_n$ data units. A minimum coded unit (MCU) is then defined as the smallest interleaved collection of data units, obtained by successively picking $H_n \times V_n$ data units from the n-th color component.
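A small sketch of the bookkeeping above, computing the relative sampling factors from the component dimensions as in Example 1 (the function name is ours, and exact divisibility by the smallest component is assumed, as in the example):

```python
def sampling_factors(dims):
    """dims: list of (X_n, Y_n) per component.
    Returns the per-component (H_n, V_n) relative sampling factors."""
    x_min = min(x for x, _ in dims)
    y_min = min(y for _, y in dims)
    factors = [(x // x_min, y // y_min) for x, y in dims]
    if any(not 1 <= h <= 4 or not 1 <= v <= 4 for h, v in factors):
        raise ValueError("H_n and V_n must lie in {1, 2, 3, 4}")
    return factors

# Example 1: Y is 720 x 480, Cr and Cb are 360 x 240.
print(sampling_factors([(720, 480), (360, 240), (360, 240)]))
# -> [(2, 2), (1, 1), (1, 1)]
```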
Certain restrictions are imposed on the data in order for it to be stored in the interleaved format:

■ The number of interleaved components should not exceed four;
■ An MCU should contain no more than ten data units, i.e.,
$$\sum_{n=1}^{K} H_n V_n \le 10.$$

If these restrictions are not met, then the data is stored in a noninterleaved format, where each component is processed in successive scans.

Example 2: Consider the storage of the Y, Cr, and Cb components of Example 1. The luminance component contains 90 × 60 data units, and each of the two chrominance components contains 45 × 30 data units. Figure 17.10 shows both a noninterleaved and an interleaved arrangement of the data for K = 3 components, $C_1 = Y$, $C_2 = Cr$, and $C_3 = Cb$, with $H_1 = V_1 = 2$ and $H_2 = V_2 = H_3 = V_3 = 1$. The MCU in this case contains six data units: $H_1 \times V_1 = 4$ data units of the Y component and $H_2 \times V_2 = H_3 \times V_3 = 1$ each of the Cr and Cb components.

17.7 ALTERNATIVE MODES OF OPERATION

What has been described thus far in this chapter represents the JPEG sequential DCT mode. The sequential DCT mode is the most commonly used mode of operation of [...]

[...] approximation to the final image can be constructed at the receiver. In the first pass, very few bits are transmitted, and the reconstructed image is equivalent to one obtained with a very low quality setting. Each of the subsequent passes contains an increasing number of bits, which are used to refine the quality of the reconstructed image. The total number of bits transmitted is roughly the same as would be needed to [...]

[...] the subinterval corresponding to the less probable symbol (LPS). The symbols are thus often referred to as MPS and LPS rather than as 0 or 1. The bounds of the different segments are hence driven by the statistical model of the source. The codestream associated with the sequence of coded symbols points to the lower bound of the final subinterval. The decoding of the sequence is performed by reproducing the [...]

[...] noise, the distortion is additive across code-blocks. The overall distortion can thus be written as $D = \sum_i D_i^{n_i}$. There is thus a need to search for the packet lengths $n_i$ such that the distortion is minimized under a constraint on the overall bit rate, $R = \sum_i R_i^{n_i} \le R^{\max}$. The distortion measure $D_i^{n_i}$ is defined as the MSE weighted by the square of the L2 norm of the wavelet basis functions used for the subband [...]
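The constrained search described in the fragment above (minimize total distortion subject to a rate budget) is commonly handled with a Lagrangian formulation. The following sketch illustrates that principle only; it is not the PCRD-opt algorithm of JPEG2000 Part 1 verbatim, and the function names are ours. Each code-block i is assumed to supply a list of (rate, distortion) pairs at its candidate truncation points:

```python
def choose_truncation(points, lam):
    """points: per code-block list of (rate, distortion) pairs, with rate
    increasing and distortion decreasing along each list.
    Pick, for each block, the point minimizing D + lam * R."""
    return [min(pts, key=lambda rd: rd[1] + lam * rd[0]) for pts in points]

def allocate(points, r_max, lo=1e-6, hi=1e6, iters=50):
    """Bisect on the Lagrange multiplier until the total rate meets r_max."""
    for _ in range(iters):
        lam = (lo * hi) ** 0.5          # geometric midpoint of the bracket
        rate = sum(r for r, _ in choose_truncation(points, lam))
        if rate > r_max:
            lo = lam                    # over budget: penalize rate more
        else:
            hi = lam
    return choose_truncation(points, hi)
```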
17.10.7.2 File Format

Part 1 of the JPEG2000 standard also defines an optional file format referred to as JP2. It defines a set of data structures used to store information that may be required to render and display the image, such as the colorspace (with two methods of color specification), the resolution of the image, the bit depth of the components, and the type and ordering of the [...]

[...] by the stationary probability of the corresponding symbol. The partition of the unit interval, and hence the bounds of the different segments, is given by the cumulative stationary probabilities of the alphabet symbols. The interval corresponding to the first symbol to be encoded is chosen; it becomes the current interval, which is again partitioned into different segments. The subinterval associated with the [...]

[...] color version of the Lena image. In Fig. 17.13 we show the reconstructed Lena image at different compression ratios. At 24-to-1 compression we see few artifacts. However, as the compression ratio is increased to 96 to 1, noticeable artifacts begin to appear. Especially annoying is the "blocking artifact" in smooth regions of the image. One approach to dealing with this problem is to change the "coarseness" [...]

[...] sufficient to give a reasonable rendition of the image. In fact, just the DC coefficient can serve to essentially identify the contents of an image, although the reconstructed image contains severe blocking artifacts. It should be noted that after all the scans are decoded, the final image quality is the same as that obtained by the sequential mode of operation. The bit rate, [...]

[...] where the image is decomposed into a pyramidal structure of increasing resolution. The top-most layer in the pyramid represents the image at the lowest resolution, and the base of the pyramid represents [...]

[...] higher precision. The ROI coding approach in JPEG2000 Part 1 is based on the MAXSHIFT method [8], which is an extension of the ROI scaling-based method introduced in [5]. The ROI scaling method consists of scaling up the coefficients belonging to the ROI, or scaling down the coefficients corresponding to the non-ROI regions of the image. The goal of the scaling operation is to place the bits of the ROI in higher bit planes than the bits associated with the non-ROI regions, as shown in Fig. 17.19. Thus, the ROI is decoded before the rest of the image, and if the bitstream is truncated, the ROI will be of higher quality. The ROI scaling method described in [5] requires the coding and transmission of the ROI shape information to the decoder. In order to minimize the decoder complexity, [...] with respect to the non-ROI area. The compressed data associated with the ROI will then be placed first in the bitstream. With this approach, the decoder does not need to generate the ROI mask: all the coefficients lower than the scaling value belong to the non-ROI region, and therefore the ROI shape information does not need to be encoded and transmitted. The drawback of this reduced complexity is that the ROI cannot be encoded with multiple [...]
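To illustrate the MAXSHIFT scaling described in the passage above, here is a rough sketch assuming integer wavelet coefficients and a boolean ROI mask. The scaling value s is chosen so that $2^s$ exceeds every non-ROI magnitude, so any coefficient at or above $2^s$ can be recognized as ROI without any transmitted shape mask. The function names are ours, and the bit-plane coding itself is omitted:

```python
import numpy as np

def maxshift_scale(coeffs, roi_mask):
    """Encoder side: shift ROI coefficients above all non-ROI coefficients.
    coeffs: signed integer array; roi_mask: boolean array, same shape."""
    max_bg = int(np.abs(coeffs[~roi_mask]).max())
    s = max_bg.bit_length()          # smallest s with 2**s > max_bg
    out = coeffs.copy()
    out[roi_mask] <<= s              # ROI bits now occupy higher bit planes
    return out, s

def maxshift_unscale(coeffs, s):
    """Decoder side: any coefficient with magnitude >= 2**s must be ROI,
    so no shape information is needed; just shift those back down."""
    out = coeffs.copy()
    roi = np.abs(out) >= (1 << s)
    out[roi] >>= s
    return out
```

Because the ROI/non-ROI decision is made purely from coefficient magnitude against the single transmitted value s, the decoder stays simple, which is exactly the tradeoff the passage describes: lower complexity in exchange for a single implicit ROI rather than multiple ROIs at different priorities.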
