The Essential Guide to Image Processing - Part 19


FIGURE 20.11 (a) In tracking a white blood cell, the GVF vector diffusion fails to attract the active contour; (b) successful detection is yielded by MGVF.

Thus (20.48) provides an external force that can guide an active contour to a moving object boundary. The capture range of GVF is increased using the motion gradient vector flow (MGVF) vector diffusion [51]. With MGVF, a tracking algorithm can simply use the final position of the active contour from a previous video frame as the initial contour in the subsequent frame. For an example of tracking using MGVF, see Fig. 20.11.

20.6 CONCLUSIONS

Anisotropic diffusion is an effective precursor to edge detection. The main benefit of anisotropic diffusion over isotropic diffusion and linear filtering is edge preservation. By properly specifying the diffusion PDE and the diffusion coefficient, an image can be scaled, denoised, and simplified for boundary detection. For edge detection, the most critical design step is specification of the diffusion coefficient. The variants of the diffusion coefficient involve tradeoffs between sensitivity to noise, the ability to specify scale, convergence issues, and computational cost. The diverse implementations of the anisotropic diffusion PDE result in improved fidelity to the original image, mean curvature motion, and convergence to LOMO signals.

As the diffusion PDE may be considered a descent on an energy surface, the diffusion operation can be viewed in a variational framework. Recent variational solutions produce optimized edge maps and image segmentations in which certain edge-based features, such as edge length, curvature, thickness, and connectivity, can be optimized.

The computational cost of anisotropic diffusion may be reduced by using multiresolution solutions, including the anisotropic diffusion pyramid and multigrid anisotropic diffusion. Application of edge detection to multispectral imagery and to radar/ultrasound imagery is possible through techniques presented in the literature. In general, the edge detection step after anisotropic diffusion of the image is straightforward. Edges may be detected using a simple gradient magnitude threshold, using robust statistics, or using a feature extraction technique. Active contours, used in conjunction with vector diffusion, can be employed to extract meaningful object boundaries.
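As a concrete illustration of this pipeline, the following sketch applies the Perona-Malik discretization of the diffusion PDE [6] with the exponential diffusion coefficient, then thresholds the gradient magnitude of the diffused image. It is a minimal sketch, not a method prescribed by this chapter: the wrap-around boundary handling and the values of kappa, the time step, the iteration count, and the threshold tau are illustrative assumptions.

```python
import numpy as np

def perona_malik(image, niter=50, kappa=15.0, dt=0.2):
    """Perona-Malik anisotropic diffusion [6] with the exponential
    edge-stopping coefficient c(s) = exp(-(s/kappa)**2).  A time step
    dt <= 0.25 keeps the explicit four-neighbor scheme stable; borders
    wrap around (np.roll) purely for brevity."""
    u = image.astype(float)
    for _ in range(niter):
        # Differences toward the four nearest neighbors.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Diffusion coefficient evaluated on each directional difference.
        u += dt * (np.exp(-(dn / kappa) ** 2) * dn
                   + np.exp(-(ds / kappa) ** 2) * ds
                   + np.exp(-(de / kappa) ** 2) * de
                   + np.exp(-(dw / kappa) ** 2) * dw)
    return u

def edge_map(u, tau=5.0):
    """Simple gradient-magnitude threshold on the diffused image."""
    gy, gx = np.gradient(u)
    return np.hypot(gx, gy) > tau
```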
REFERENCES

[1] D. G. Lowe. Perceptual Organization and Visual Recognition. Kluwer Academic, New York, 1985.
[2] V. Caselles, J.-M. Morel, G. Sapiro, and A. Tannenbaum. Introduction to the special issue on partial differential equations and geometry-driven diffusion in image processing and analysis. IEEE Trans. Image Process., 7:269–273, 1998.
[3] A. P. Witkin. Scale-space filtering. In Proc. Int. Joint Conf. Artif. Intell., 1019–1021, 1983.
[4] J. J. Koenderink. The structure of images. Biol. Cybern., 50:363–370, 1984.
[5] D. Marr and E. Hildreth. Theory of edge detection. Proc. R. Soc. Lond. B, Biol. Sci., 207:187–217, 1980.
[6] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell., 12:629–639, 1990.
[7] S. Teboul, L. Blanc-Feraud, G. Aubert, and M. Barlaud. Variational approach for edge-preserving regularization using coupled PDEs. IEEE Trans. Image Process., 7:387–397, 1998.
[8] R. T. Whitaker and S. M. Pizer. A multi-scale approach to nonuniform diffusion. Comput. Vis. Graph. Image Process.: Image Underst., 57:99–110, 1993.
[9] Y.-L. You, M. Kaveh, W. Xu, and A. Tannenbaum. Analysis and design of anisotropic diffusion for image processing. In Proc. IEEE Int. Conf. Image Process., Austin, TX, November 13–16, 1994.
[10] Y.-L. You, W. Xu, A. Tannenbaum, and M. Kaveh. Behavioral analysis of anisotropic diffusion in image processing. IEEE Trans. Image Process., 5:1539–1553, 1996.
[11] F. Catte, P.-L. Lions, J.-M. Morel, and T. Coll. Image selective smoothing and edge detection by nonlinear diffusion. SIAM J. Numer. Anal., 29:182–193, 1992.
[12] L. Alvarez, P.-L. Lions, and J.-M. Morel. Image selective smoothing and edge detection by nonlinear diffusion II. SIAM J. Numer. Anal., 29:845–866, 1992.
[13] C. A. Segall and S. T. Acton. Morphological anisotropic diffusion. In Proc. IEEE Int. Conf. Image Process., Santa Barbara, CA, October 26–29, 1997.
[14] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259–268, 1992.
[15] S. Osher and L. I. Rudin. Feature-oriented image enhancement using shock filters. SIAM J. Numer. Anal., 27:919–940, 1990.
[16] S. T. Acton. Locally monotonic diffusion. IEEE Trans. Signal Process., 48:1379–1389, 2000.
[17] M. J. Black, G. Sapiro, D. H. Marimont, and D. Heeger. Robust anisotropic diffusion. IEEE Trans. Image Process., 7:421–432, 1998.
[18] K. N. Nordstrom. Biased anisotropic diffusion: a unified approach to edge detection. Tech. Report, Dept. of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA, 1989.
[19] J. Canny. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., 8:679–698, 1986.
[20] A. El-Fallah and G. Ford. The evolution of mean curvature in image filtering. In Proc. IEEE Int. Conf. Image Process., Austin, TX, November 1994.
[21] S. Osher and J. Sethian. Fronts propagating with curvature dependent speed: algorithms based on the Hamilton-Jacobi formulation. J. Comput. Phys., 79:12–49, 1988.
[22] N. Sochen, R. Kimmel, and R. Malladi. A general framework for low level vision. IEEE Trans. Image Process., 7:310–318, 1998.
[23] A. Yezzi, Jr. Modified curvature motion for image smoothing and enhancement. IEEE Trans. Image Process., 7:345–352, 1998.
[24] J.-M. Morel and S. Solimini. Variational Methods in Image Segmentation. Birkhauser, Boston, MA, 1995.
[25] D. Mumford and J. Shah. Boundary detection by minimizing functionals. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., San Francisco, 1985.
[26] S. T. Acton and A. C. Bovik. Anisotropic edge detection using mean field annealing. In Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP-92), San Francisco, March 23–26, 1992.
[27] D. Geman and G. Reynolds. Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Anal. Mach. Intell., 14:376–383, 1992.
[28] P. J. Burt, T. Hong, and A. Rosenfeld. Segmentation and estimation of region properties through cooperative hierarchical computation. IEEE Trans. Syst. Man Cybern., 11(12), 1981.
[29] P. J. Burt. Smart sensing within a pyramid vision machine. Proc. IEEE, 76(8):1006–1015, 1988.
[30] S. T. Acton. A pyramidal edge detector based on anisotropic diffusion. In Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP-96), Atlanta, May 7–10, 1996.
[31] S. T. Acton, A. C. Bovik, and M. M. Crawford. Anisotropic diffusion pyramids for image segmentation. In Proc. IEEE Int. Conf. Image Process., Austin, TX, November 1994.
[32] A. Morales, R. Acharya, and S. Ko. Morphological pyramids with alternating sequential filters. IEEE Trans. Image Process., 4(7):965–977, 1995.
[33] C. A. Segall, S. T. Acton, and A. K. Katsaggelos. Sampling conditions for anisotropic diffusion. In Proc. SPIE Symp. Vis. Commun. Image Process., San Jose, January 23–29, 1999.
[34] R. M. Haralick, X. Zhuang, C. Lin, and J. S. J. Lee. The digital morphological sampling theorem. IEEE Trans. Acoust., Speech, Signal Process., 37(12):2067–2090, 1989.
[35] S. T. Acton. Multigrid anisotropic diffusion. IEEE Trans. Image Process., 7:280–291, 1998.
[36] J. H. Bramble. Multigrid Methods. John Wiley, New York, 1993.
[37] W. Hackbusch and U. Trottenberg, editors. Multigrid Methods. Springer-Verlag, New York, 1982.
[38] R. T. Whitaker and G. Gerig. Vector-valued diffusion. In B. ter Haar Romeny, editor, Geometry-Driven Diffusion in Computer Vision, 93–134. Kluwer, 1994.
[39] S. T. Acton and J. Landis. Multispectral anisotropic diffusion. Int. J. Remote Sens., 18:2877–2886, 1997.
[40] G. Sapiro and D. L. Ringach. Anisotropic diffusion of multivalued images with applications to color filtering. IEEE Trans. Image Process., 5:1582–1586, 1996.
[41] S. Di Zenzo. A note on the gradient of a multi-image. Comput. Vis. Graph. Image Process., 33:116–125, 1986.
[42] Y. Yu and S. T. Acton. Speckle reducing anisotropic diffusion. IEEE Trans. Image Process., 11:1260–1270, 2002.
[43] Y. Yu and S. T. Acton. Edge detection in ultrasound imagery using the instantaneous coefficient of variation. IEEE Trans. Image Process., 13(12):1640–1655, 2004.
[44] P. J. Rousseeuw and A. M. Leroy. Robust Regression and Outlier Detection. Wiley, New York, 1987.
[45] W. K. Pratt. Digital Image Processing. Wiley, New York, 495–501, 1978.
[46] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: active contour models. Int. J. Comput. Vis., 1(4):321–331, 1987.
[47] R. Courant and D. Hilbert. Methods of Mathematical Physics, Vol. 1. Interscience Publishers, New York, 1953.
[48] J. L. Troutman. Variational Calculus with Elementary Convexity. Springer-Verlag, New York, 1983.
[49] C. Xu and J. L. Prince. Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process., 7:359–369, 1998.
[50] C. Xu and J. L. Prince. Generalized gradient vector flow external force for active contours. Signal Processing, 71:131–139, 1998.
[51] N. Ray and S. T. Acton. Tracking rolling leukocytes with motion gradient vector flow. In Proc. 37th Asilomar Conf. Signals, Systems and Computers, Pacific Grove, CA, November 9–12, 2003.

CHAPTER 21
Image Quality Assessment

Kalpana Seshadrinathan (The University of Texas at Austin), Thrasyvoulos N. Pappas (Northwestern University), Robert J. Safranek (Benevue, Inc.), Junqing Chen (Northwestern University), Zhou Wang (University of Waterloo), Hamid R. Sheikh (Texas Instruments, Inc.), and Alan C. Bovik (The University of Texas at Austin)

21.1 INTRODUCTION

Recent advances in digital imaging technology, computational speed, storage capacity, and networking have resulted in the proliferation of digital images, both still and video. As digital images are captured, stored, transmitted, and displayed on different devices, there is a need to maintain image quality. The end users of these images, in an overwhelmingly large number of applications, are human observers. In this chapter, we examine objective criteria for the evaluation of image quality as perceived by an average human observer.
Even though we use the term image quality, we are primarily interested in image fidelity, i.e., how close an image is to a given original or reference image. This paradigm of image quality assessment (QA) is also known as full-reference image QA. The development of objective metrics for evaluating image quality without a reference image is quite different and is outside the scope of this chapter.

Image QA plays a fundamental role in the design and evaluation of imaging and image processing systems. As an example, QA algorithms can be used to systematically evaluate the performance of different image compression algorithms that attempt to minimize the number of bits required to store an image while maintaining sufficiently high image quality. Similarly, QA algorithms can be used to evaluate image acquisition and display systems. Communication networks have developed tremendously over the past decade, and images and video are frequently transported over optical fiber, packet-switched networks like the Internet, wireless systems, etc. The bandwidth efficiency of applications such as video conferencing and video on demand can be improved by using QA systems to evaluate the effects of channel errors on the transported images and video. Further, QA algorithms can be used in the "perceptually optimal" design of various components of an image communication system. Finally, QA and the psychophysics of human vision are closely related disciplines. Research on image and video QA may lend deep insights into the functioning of the human visual system (HVS), which would be of great scientific value.

Subjective evaluations are accepted to be the most effective and reliable, albeit quite cumbersome and expensive, way to assess image quality. A significant effort has been dedicated to the development of subjective tests for image quality [56, 57]. There has also been standards activity on subjective evaluation of image quality [58]. The topic of subjective evaluation of image quality is beyond the scope of this chapter.

The goal of an objective perceptual metric for image quality is to determine the differences between two images that are visible to the HVS. Usually one of the images is the reference, which is considered to be "original," "perfect," or "uncorrupted." The second image has been modified or distorted in some sense. The output of the QA algorithm is often a number that represents the probability that a human eye can detect a difference between the two images, or a number that quantifies the perceptual dissimilarity between them. Alternatively, the output of an image quality metric could be a map of detection probabilities or perceptual dissimilarity values.

Perhaps the earliest image quality metrics were the mean squared error (MSE) and peak signal-to-noise ratio (PSNR) between the reference and distorted images. Despite their well-known limitations, these metrics are still widely used for performance evaluation due to their simplicity. Let f(n) and g(n) represent the value (intensity) of an image pixel at location n. Usually the image pixels are arranged in a Cartesian grid and n = (n1, n2). The MSE between f(n) and g(n) is defined as

$$\mathrm{MSE}\big(f(\mathbf{n}), g(\mathbf{n})\big) = \frac{1}{N} \sum_{\mathbf{n}} \big[f(\mathbf{n}) - g(\mathbf{n})\big]^2, \tag{21.1}$$

where N is the total number of pixel locations in f(n) or g(n). The PSNR between these images is defined as

$$\mathrm{PSNR}\big(f(\mathbf{n}), g(\mathbf{n})\big) = 10 \log_{10} \frac{E^2}{\mathrm{MSE}\big(f(\mathbf{n}), g(\mathbf{n})\big)}, \tag{21.2}$$

where E is the maximum value that a pixel can take. For example, for 8-bit grayscale images, E = 255.
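Equations (21.1) and (21.2) translate directly into code. The sketch below assumes 8-bit grayscale data (E = 255) and, on synthetic stand-in data, reproduces the construction described next for Fig. 21.1: a constant offset and a random-sign offset of the same magnitude yield identical MSE and PSNR despite very different visual quality.

```python
import numpy as np

def mse(f, g):
    """Mean squared error, Eq. (21.1)."""
    return np.mean((f.astype(float) - g.astype(float)) ** 2)

def psnr(f, g, E=255.0):
    """Peak signal-to-noise ratio in dB, Eq. (21.2); E is the peak pixel value."""
    return 10.0 * np.log10(E ** 2 / mse(f, g))

# Two distortions with identical MSE but very different visual quality:
# a constant offset versus the same offset with a random sign per pixel.
rng = np.random.default_rng(0)
f = rng.integers(64, 192, size=(256, 256)).astype(float)    # stand-in "image"
g1 = f + 10.0                                               # constant shift
g2 = f + 10.0 * rng.choice([-1.0, 1.0], size=f.shape)       # random-sign shift
assert np.isclose(mse(f, g1), mse(f, g2))                   # both equal 100
```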
In Fig. 21.1, we show two distorted images generated from the same original image. The first distorted image (Fig. 21.1(b)) was obtained by adding a constant to all signal samples. The second distorted image (Fig. 21.1(c)) was generated using the same method, except that the sign of the constant was randomly chosen to be positive or negative at each sample. It can easily be shown that the MSE/PSNR between the original image and both of the distorted images is exactly the same. However, the visual quality of the two distorted images is drastically different.

FIGURE 21.1 Failure of the Minkowski metric for image quality prediction. (a) original image; (b) distorted image obtained by adding a positive constant; (c) distorted image obtained by adding the same constant, but with random sign. Images (b) and (c) have the same Minkowski metric with respect to image (a), but drastically different visual quality.

Another example is shown in Fig. 21.2, where Fig. 21.2(b) was generated by adding independent white Gaussian noise to the original texture image in Fig. 21.2(a). In Fig. 21.2(c), the signal sample values remained the same as in Fig. 21.2(a), but the spatial ordering of the samples has been changed (through a sorting procedure). Figure 21.2(d) was obtained from Fig. 21.2(b) by following the same reordering procedure used to create Fig. 21.2(c). Again, the MSE/PSNR between Figs. 21.2(a) and 21.2(b) and between Figs. 21.2(c) and 21.2(d) is exactly the same. However, Fig. 21.2(d) appears significantly noisier than Fig. 21.2(b).

FIGURE 21.2 Failure of the Minkowski metric for image quality prediction. (a) original texture image; (b) distorted image obtained by adding independent white Gaussian noise; (c) reordering of the pixels in image (a) (by sorting pixel intensity values); (d) reordering of the pixels in image (b), following the same reordering used to create image (c). The Minkowski metrics between images (a) and (b) and between images (c) and (d) are the same, but image (d) appears much noisier than image (b).

The above examples clearly illustrate the failure of PSNR as an adequate measure of visual quality. In this chapter, we will discuss three classes of image QA algorithms that correlate with visual perception significantly better: human vision modeling based metrics, Structural SIMilarity (SSIM) metrics, and information theoretic metrics. Each of these techniques approaches the image QA problem from a different perspective and uses different first principles. As we proceed, in addition to discussing these QA techniques, we will also attempt to shed light on the similarities, dissimilarities, and interplay between these seemingly diverse techniques.

21.2 HUMAN VISION MODELING BASED METRICS

Human vision modeling based metrics utilize mathematical models of certain stages of processing that occur in the visual systems of humans to construct a quality metric. Most HVS-based methods take an engineering approach to solving the QA problem: they measure the thresholds of visibility of signals and of noise in the signals, and then use these thresholds to normalize the error between the reference and distorted images to obtain a perceptually meaningful error metric.
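This normalize-by-threshold structure can be summarized in a short skeleton. The code below is a generic outline rather than any published metric: it assumes the reference and distorted images have already been decomposed into subbands b(k, n), divides the subband errors by per-band visibility thresholds supplied by some external CSF/masking model (not included here), and pools the normalized errors with a Minkowski sum; the exponent beta = 4 is a placeholder.

```python
import numpy as np

def hvs_error(ref_bands, dist_bands, thresholds, beta=4.0):
    """Generic HVS-metric skeleton: express subband errors in
    just-noticeable-difference (JND) units by dividing by each band's
    visibility threshold, then pool with a Minkowski sum of exponent beta.

    ref_bands / dist_bands: dicts mapping subband index k to coefficient
    arrays b(k, n); thresholds: per-band visibility thresholds, assumed
    to come from a CSF/masking model that this sketch does not provide.
    """
    pooled = 0.0
    for k in ref_bands:
        jnd = np.abs(ref_bands[k] - dist_bands[k]) / thresholds[k]
        pooled += np.sum(jnd ** beta)
    return pooled ** (1.0 / beta)
```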
To measure visibility thresholds, different aspects of visual processing need to be taken into consideration, such as the response to average brightness, contrast, spatial frequencies, and orientations. Other HVS-based methods attempt to directly model the different stages of processing that occur in the HVS and that result in the observed visibility thresholds. In Section 21.2.1, we will discuss the individual building blocks that comprise an HVS-based QA system. The function of these blocks is to model concepts from the psychophysics of human perception that apply to image quality metrics. In Section 21.2.2, we will discuss the details of several well-known HVS-based QA systems. Each of these QA systems is composed of some or all of the building blocks discussed in Section 21.2.1, but uses different mathematical models for each block.

21.2.1 Building Blocks

21.2.1.1 Preprocessing

Most QA algorithms include a preprocessing stage that typically comprises calibration and registration. The array of numbers that represents an image is often mapped to units of visual frequency, or cycles per degree of visual angle, and the calibration stage receives input parameters such as viewing distance and physical pixel spacing (screen resolution) to perform this mapping. Other calibration parameters may include fixation depth and the eccentricity of the images in the observer's visual field [37, 38]. Display calibration, or an accurate model of the display device, is an essential part of any image quality metric [55], as the HVS can only see what the display can reproduce. Many quality metrics require that the input image values be converted to physical luminances¹ before they enter the HVS model. In some cases, when the perceptual model is obtained empirically, the effects of the display are incorporated in the model [40]. The obvious disadvantage of this approach is that when the display changes, a new set of model parameters must be obtained [43]. The study of display models is beyond the scope of this chapter.

Registration, i.e., establishing point-by-point correspondence between two images, is also necessary in most image QA systems. Often, the performance of a QA model can be extremely sensitive to registration errors, since many QA systems operate pixel by pixel (e.g., PSNR) or on local neighborhoods of pixels. Errors in registration result in a shift in the pixel or coefficient values being compared and degrade the performance of the system.

21.2.1.2 Frequency Analysis

The frequency analysis stage decomposes the reference and test images into different channels (usually called subbands) with different spatial frequencies and orientations using a set of linear filters. In many QA models, this stage is intended to mimic similar processing that occurs in the HVS: neurons in the visual cortex respond selectively to stimuli with particular spatial frequencies and orientations. Other QA models that target specific image coders utilize the same decomposition as the compression system and model the thresholds of visibility for each of the channels. Some examples of such decompositions are shown in Fig. 21.3. The range of each axis is from −u_s/2 to u_s/2 cycles per degree, where u_s is the sampling frequency. Figures 21.3(a)–(c) show transforms that are polar separable and belong to the former category of decompositions (mimicking processing in the visual cortex); Figs. 21.3(d)–(f) are used in QA models of the latter category and depict transforms that are often used in compression systems.
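For illustration, here is a toy cortex-style decomposition in the spirit of Figs. 21.3(a)–(c): the FFT plane is partitioned into octave-spaced radial bands and orientation wedges, and each piece is inverted to give a spatial subband. The hard (brick-wall) masks, the number of scales and orientations, and the omission of the lowpass residual are simplifications assumed for brevity; practical transforms use smooth, overlapping filters to avoid ringing.

```python
import numpy as np

def oriented_subbands(im, n_scales=3, n_orients=4):
    """Toy cortex-style decomposition: partition the (shifted) FFT plane
    into octave radial bands and orientation wedges, and invert each piece
    to obtain one spatial subband b(k, n).  Brick-wall masks for brevity."""
    h, w = im.shape
    F = np.fft.fftshift(np.fft.fft2(im))
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]   # cycles/sample, rows
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]   # cycles/sample, cols
    radius = np.hypot(fx, fy)                          # radial frequency
    angle = np.mod(np.arctan2(fy, fx), np.pi)          # orientation in [0, pi)
    bands = {}
    for s in range(n_scales):          # octave bands (0.25, 0.5], (0.125, 0.25], ...
        lo, hi = 0.5 / 2 ** (s + 1), 0.5 / 2 ** s
        for o in range(n_orients):     # wedges of width pi / n_orients
            a0, a1 = o * np.pi / n_orients, (o + 1) * np.pi / n_orients
            mask = (radius > lo) & (radius <= hi) & (angle >= a0) & (angle < a1)
            bands[(s, o)] = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    return bands
```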
In the remainder of this chapter, we will use f(n) to denote the value (intensity, grayscale, etc.) of an image pixel at location n. Usually the image pixels are arranged in a Cartesian grid and n = (n1, n2). The value of the kth image subband at location n will be denoted by b(k, n). The subband indexing k = (k1, k2) could be in Cartesian, polar, or even scalar coordinates. The same notation will be used to denote the kth coefficient of the nth discrete cosine transform (DCT) block (both Cartesian coordinate systems). This notation underscores the similarity between the two transformations, even though we traditionally display the subband decomposition as a collection of subbands and the DCT as a collection of block transforms: a regrouping of coefficients in the blocks of the DCT results in a representation very similar to a subband decomposition.

FIGURE 21.3 The decomposition of the frequency plane corresponding to various transforms: (a) cortex transform (Watson); (b) cortex transform (Daly); (c) Lubin's transform; (d) subband transform; (e) wavelet transform; (f) DCT transform. The range of each axis is from −u_s/2 to u_s/2 cycles per degree, where u_s is the sampling frequency.

¹In video practice, the term luminance is sometimes, incorrectly, used to denote a nonlinear transformation of luminance [75, p. 24].
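The regrouping remark can be made concrete. Given an array holding the blockwise DCT coefficients in place, collecting coefficient (k1, k2) from every block yields one downsampled image per DCT frequency, which is exactly the subband-like arrangement described above. A minimal sketch, assuming the image dimensions are multiples of the block size:

```python
import numpy as np

def dct_blocks_to_subbands(coeffs, B=8):
    """Regroup blockwise DCT coefficients into subband images.

    coeffs holds the B x B DCT of each block in place; the result maps
    each frequency index k = (k1, k2) to the image of that coefficient
    across all blocks, a representation closely resembling a subband
    decomposition.  Assumes both dimensions are divisible by B."""
    H, W = coeffs.shape
    # View as (H/B, B, W/B, B): axes 1 and 3 index within-block position.
    blocks = coeffs.reshape(H // B, B, W // B, B)
    return {(k1, k2): blocks[:, k1, :, k2]
            for k1 in range(B) for k2 in range(B)}
```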
[…] the distortion is considerably higher for the SPIHT image. In particular, the metric picks up the blurring on the wall on the left. The perceptual PSNR (pooled over the whole image) is 46.8 dB for the SPIHT image and 49.5 dB for the PIC image, in contrast to the PSNR values. Figure 21.7(d) shows the image coded with the standard JPEG algorithm at 0.52 bits/pixel, and Fig. 21.7(g) shows the PIC metric. […]

[…] and the perceptual PSNR is 47.9 dB. At the intended viewing distance, the quality of the JPEG image is higher than that of the SPIHT image and worse than that of the PIC image, as the metric indicates. Note that the quantization matrix provides some perceptual weighting, which explains why the SPIHT image is superior according to PSNR and inferior according to perceptual PSNR. The above examples illustrate the power of image […]

[…] edges of the PIC image. On the other hand, the SPIHT image has considerable blurring, especially on the wall near the left edge of the image. However, if the reader holds the image at the intended […]

[…] model to compute the average contrast signal-to-noise ratios (CSNR) at the threshold of detection for wavelet distortions in natural images for each subband of the wavelet decomposition. To determine whether the distortions are visible within each octave band of frequencies, the actual contrast of the distortions is compared with the corresponding contrast detection threshold. If the contrast of the distortions […] than the corresponding detection threshold for all frequencies, the distorted image is declared to be of perfect quality. In Section 21.2.1.3, we mentioned the CSF of human vision, and several models discussed here attempt to model this aspect of human perception. Although the CSF is critical in determining whether the distortions are visible in the test image, the utility of the CSF in measuring the visibility […]

[…] that pixel (as opposed to using the average luminance in a neighborhood of the pixel). To account for contrast sensitivity, the VDP filters the image by the CSF before the frequency decomposition. Once this normalization is accomplished to account for the varying sensitivities of the HVS to different spatial frequencies, the thresholds derived in the contrast masking stage become the same for all frequencies […] together with the other types of masking we will see below [41]. It is called masking because the luminance of the original image signal masks the variations in the distorted signal. […]

[…] thereby introducing distortions. The output of the image source is the reference image, the output of the channel is the test image, and the visual quality of the test image is computed as the amount of information shared between the test and the reference signals, i.e., the mutual information between them. Thus, information fidelity methods exploit the relationship between statistical image information and […] structural distortion are defined, there may be different ways to develop image QA algorithms. The SSIM index is a specific implementation from the perspective of image formation. The luminance of the surface of an object being observed is the product of the illumination and the reflectance, but the structures of the objects in the scene are independent of the illumination. Consequently, we wish to separate the influence […] structural distortions [28].

21.3.2 Image Quality Assessment Using SSIM

The SSIM index measures the SSIM between two images. If one of the images is regarded as of perfect quality, then the SSIM index can be viewed as an indication of the quality of the other image signal being compared. When applying the SSIM index approach to large-size images, it is useful to compute it locally rather than globally. The reasons […]
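The excerpt above notes that the SSIM index is computed locally rather than globally. As a hedged sketch of that local computation: the uniform 8 x 8 window and the conventional constants K1 = 0.01, K2 = 0.03 with dynamic range L = 255 are common choices assumed here rather than taken from this text, and published implementations often use an 11 x 11 Gaussian window instead.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(f, g, win=8, L=255.0, K1=0.01, K2=0.03):
    """Local SSIM map over sliding windows (uniform window for brevity).
    Returns one SSIM value per pixel neighborhood; averaging the map
    gives a single quality score for the whole image."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    f, g = f.astype(float), g.astype(float)
    mu_f = uniform_filter(f, win)
    mu_g = uniform_filter(g, win)
    # Local (co)variances via E[XY] - E[X]E[Y].
    var_f = uniform_filter(f * f, win) - mu_f ** 2
    var_g = uniform_filter(g * g, win) - mu_g ** 2
    cov = uniform_filter(f * g, win) - mu_f * mu_g
    num = (2 * mu_f * mu_g + C1) * (2 * cov + C2)
    den = (mu_f ** 2 + mu_g ** 2 + C1) * (var_f + var_g + C2)
    return num / den
```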
