2 Image Quality and Information Content

Several factors affect the quality and information content of biomedical images acquired with the modalities described in Chapter 1. A few considerations in biomedical image acquisition and analysis that could have a bearing on image quality are described in Section 2.1. A good understanding of such factors, as well as appropriate characterization of the concomitant loss in image quality, are essential in order to design image processing techniques to remove the degradation and/or improve the quality of biomedical images. The characterization of information content is important for the same purposes as above, as well as in the analysis and design of image transmission and archival systems.

An inherent problem in characterizing quality lies in the fact that image quality is typically judged by human observers in a subjective manner. To quantify the notion of image quality is a difficult proposition. Similarly, the nature of the information conveyed by an image is difficult to quantify due to its multifaceted characteristics in terms of statistical, structural, perceptual, semantic, and diagnostic connotations. However, several measures have been designed to characterize or quantify a few specific attributes of images, which may in turn be associated with various notions of quality as well as information content. The numerical values of such measures of a given image before and after certain processes, or the changes in the attributes due to certain phenomena, could then be used to assess variations in image quality and information content. We shall explore several such measures in this chapter.

2.1 Difficulties in Image Acquisition and Analysis

In Chapter 1, we studied several imaging systems and procedures for the acquisition of many different types of biomedical images. The practical application of these techniques may pose certain difficulties: the investigator often faces conditions that may impose limitations on the quality and information content of the images acquired. The following paragraphs illustrate a few practical difficulties that one might encounter in biomedical image acquisition and analysis.

Accessibility of the organ of interest: Several organs of interest in imaging-based investigation are situated well within the body, encased in protective and difficult-to-access regions, for good reason!
For example, the brain is protected by the skull, and the prostate is situated at the base of the bladder near the pelvic outlet. Several limitations are encountered in imaging such organs; special imaging devices and image processing techniques are required to facilitate their visualization. Visualization of the arteries in the brain requires the injection of an X-ray contrast agent and the subtraction of a reference image; see Section 4.1. Special transrectal probes have been designed for 3D ultrasonic imaging of the prostate [92]. Despite the use of such special devices and techniques, images obtained in applications as above tend to be affected by severe artifacts.

Variability of information: Biological systems exhibit great ranges of inherent variability within their different categories. The intrinsic and natural variability presented by biological entities within a given class far exceeds the variability that we may observe in engineering, physical, and manufactured samples. The distinction between a normal pattern and an abnormal pattern is often clouded by significant overlap between the ranges of the features or variables that are used to characterize the two categories; the problem is compounded when multiple abnormalities need to be considered. Imaging conditions and parameters could cause further ambiguities due to the effects of subject positioning and projection. For example, most malignant breast tumors are irregular and spiculated in shape, whereas benign masses are smooth and round or oval. However, some malignant tumors may present smooth shapes, and some benign masses may have rough shapes. A tumor may present a rough appearance in one view or projection, but a smoother profile in another. Furthermore, the notion of shape roughness is nonspecific and open-ended. Overlapping patterns caused by ligaments, ducts, and breast tissue that may lie in other planes, but are integrated on to a single image plane in the process of mammographic imaging, could also affect the appearance of tumors and masses in images. The use of multiple views and spot magnification imaging could help resolve some of these ambiguities, but at the cost of additional radiation dose to the subject.

Physiological artifacts and interference: Physiological systems are dynamic and active. Some activities, such as breathing, may be suspended voluntarily by an adult subject (in a reasonable state of health and wellbeing) for brief periods of time to permit improved imaging. However, cardiac activity, blood circulation, and peristaltic movement are not under one's volitional control. The rhythmic contractile activity of the heart poses challenges in imaging of the heart. The pulsatile movement of blood through the brain causes slight movements of the brain that could cause artifacts in angiographic imaging; see Section 4.1. Dark shadows may appear in ultrasound images next to bony regions due to significant attenuation of the investigating beam, and hence the lack of echoes from tissues beyond the bony regions along the path of beam propagation. An analyst should pay attention to potential physiological artifacts when interpreting biomedical images.

Special techniques have been developed to overcome some of the limitations mentioned above in cardiac imaging. Electronic steering of the X-ray beam has been employed to reduce the scanning time required for CT projection data acquisition in order to permit imaging of the heart; see Figure 1.21.
State-of-the-art multislice and helical-scan CT scanners acquire the required data in intervals much shorter than the time taken by the initial models of CT scanners. Cardiac nuclear medicine imaging is performed by gating the photon-counting process to a certain specific phase of the cardiac cycle by using the electrocardiogram (ECG) as a reference; see Figure 1.27 and Section 3.10. Although nuclear medicine imaging procedures take several minutes, the almost-periodic activity of the heart permits the cumulative imaging of its musculature or chambers at particular positions repeatedly over several cardiac cycles.

Energy limitations: In X-ray mammography, considering the fact that the organ imaged is mainly composed of soft tissues, a low kVp would be desired in order to maximize image contrast. However, low-energy X-ray photons are absorbed more readily than high-energy photons by the skin and breast tissues, thereby increasing the radiation dose to the patient. A compromise is required between these two considerations. Similarly, in TEM, a high-kV electron beam would be desirable in order to minimize damage to the specimen, but a low-kV beam can provide improved contrast. The practical application of imaging techniques often requires the striking of a trade-off between conflicting considerations as above.

Patient safety: The protection of the subject or patient in a study from electrical shock, radiation hazard, and other potentially dangerous conditions is an unquestionable requirement of paramount importance. Most organizations require ethical approval by specialized committees for experimental procedures involving human or animal subjects, with the aim of minimizing the risk and discomfort to the subject and maximizing the benefits to both the subjects and the investigator. The relative levels of potential risks involved should be assessed when a choice is available between various procedures, and analyzed against their relative benefits. Patient safety concerns may preclude the use of a procedure that may yield better images or results than others, or may require modifications to a procedure that may lead to inferior images. Further image processing steps would then become essential in order to improve image quality or otherwise compensate for the initial compromise.

2.2 Characterization of Image Quality

Biomedical images are typically complex sources of several items of information. Furthermore, the notion of quality cannot be easily characterized with a small number of features or attributes. Because of these reasons, researchers have developed a rather large number of measures to represent quantitatively several attributes of images related to impressions of quality. Changes in measures related to quality may be analyzed for several purposes, such as:

- comparison of images generated by different medical imaging systems;
- comparison of images obtained using different imaging parameter settings of a given system;
- comparison of the results of several image enhancement algorithms;
- assessment of the effect of the passage of an image through a transmission channel or medium; and
- assessment of images compressed by different data compression techniques at different rates of loss of data, information, or quality.

Specially designed phantoms are often used to test medical imaging systems for routine quality control [104, 105, 106, 107, 108]. Bijkerk et al. [109] developed a phantom with gold disks of different diameter and thickness to test mammography systems.
Because the signal contrast and location are known from the design of the phantom, the detection performance of trained observers may be used to test and compare imaging systems. Ideally, it is desirable to use "numerical observers": automatic tools to measure and express image quality by means of numbers or "figures of merit" (FOMs) that could be objectively compared; see Furuie et al. [110] and Barrett [111] for examples. It is clear that not only are FOMs important, but so is the methodology for their comparison. Kayargadde and Martens [112, 113] discuss the relationships between image quality attributes in a psychometric space and a perceptual space.

Many algorithms have been proposed to explore various attributes of images or imaging systems. The attributes take into consideration either the whole image or a chosen region to calculate FOMs, and are labeled as being "global" or "local", respectively. Often, the measured attribute is image definition, that is, the clarity with which details are reproduced [114], which is typically expressed in terms of image sharpness. This notion was first mentioned by Higgins and Jones [115] in the realm of photography, but is valid for image evaluation in a broader context. Rangayyan and Elkadiki [116] present a survey of different methods to measure sharpness in photographic and digital images (see Section 2.15). Because quality is a subjective notion, the results obtained by algorithms such as those mentioned above need to be validated against the evaluation of test images by human observers. This could be done by submitting the same set of images to human and numerical (computer) evaluation, and then comparing the results [104, 105, 106, 107, 108, 117]. Subjective and objective judgment should agree to some degree under defined conditions in order for the numerical measures to be useful.

The following sections describe some of the concepts and measures that are commonly used in biomedical image analysis.

2.3 Digitization of Images

The representation of natural scenes and objects as digital images for processing using computers requires two steps: sampling and quantization. Both of these steps could potentially cause loss of quality and introduce artifacts.

2.3.1 Sampling

Sampling is the process of representing a continuous-time or continuous-space signal on a discrete grid, with samples that are separated by (usually) uniform intervals. The theory and practice of sampling 1D signals have been well established [1, 2, 7]. In essence, a band-limited signal with the frequency of its fastest component being fm Hz may be represented without loss by its samples obtained at the Nyquist rate of fs = 2 fm Hz.

Sampling may be modeled as the multiplication of the given continuous-time or analog signal with a periodic train of impulses. The multiplication of two signals in the time domain corresponds to the convolution of their Fourier spectra. The Fourier transform of a periodic train of impulses is another periodic train of impulses with a period that is equal to the inverse of the period in the time domain (that is, fs Hz). Therefore, the Fourier spectrum of the sampled signal is periodic, with a period equal to fs Hz. A sampled signal has infinite bandwidth; however, the sampled signal contains distinct or unique frequency components only up to fm = fs/2 Hz.

If the signal as above is sampled at a rate lower than fs Hz, an error known as aliasing occurs, where the frequency components above fs/2 Hz appear at lower frequencies. It then becomes impossible to recover the original signal from its sampled version.
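As a rough numerical illustration of aliasing (not part of the original development), the following Python sketch subsamples a synthetic grating with and without an anti-aliasing filter. The function name, the Gaussian filter width, and the test frequency are arbitrary choices for this example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic test image: a vertical grating at 0.4 cycles/pixel, close to the
# original sampling limit of 0.5 cycles/pixel.
N = 256
x = np.arange(N)
img = np.tile(0.5 + 0.5 * np.cos(2 * np.pi * 0.4 * x), (N, 1))

def downsample(image, factor, prefilter=False):
    """Subsample by an integer factor; optionally apply a Gaussian smoothing
    filter first, as a practical stand-in for an ideal lowpass (anti-aliasing)
    filter with cutoff near the new fs/2."""
    if prefilter:
        image = gaussian_filter(image, sigma=0.5 * factor)
    return image[::factor, ::factor]

aliased  = downsample(img, 4, prefilter=False)  # the 0.4 cycles/pixel component
                                                # folds back as a spurious
                                                # low-frequency pattern
filtered = downsample(img, 4, prefilter=True)   # the component is suppressed
                                                # before subsampling instead
```

The unfiltered result contains a low-frequency pattern that was never present in the original scene, which is exactly the error described above.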
If sampled at a rate of at least fs Hz, the original signal may be recovered from its sampled version by lowpass filtering and extracting the base-band component over the band +/- fm Hz from the infinite spectrum of the sampled signal. If an ideal (rectangular) lowpass filter were to be used, the equivalent operation in the time domain would be convolution with a sinc function (which is of infinite duration). This operation is known as interpolation. Other interpolating functions of finite duration need to be used in practice, with the equivalent filter extracting the base-band components without significant reduction in gain over the band +/- fm Hz.

In practice, in order to prevent aliasing errors, it is common to use an anti-aliasing filter prior to the sampling of 1D signals, with a pass-band that is close to fs/2 Hz, with the prior knowledge that the signal contains no significant energy or information beyond fm = fs/2 Hz. Analog spectrum analyzers may be used to estimate the bandwidth and spectral content of a given 1D analog signal prior to sampling.

All of the concepts explained above apply to the sampling of 2D signals or images. However, in most real-life applications of imaging and image processing, it is not possible to estimate the frequency content of the images, and also not possible to apply anti-aliasing filters. Adequate sampling frequencies need to be established for each type of image or application based upon prior experience and knowledge. Regardless, even with the same type of images, different sampling frequencies may be suitable or adequate for different applications. Figure 2.1 illustrates the loss of quality associated with sampling an image at lower and lower numbers of pixels.

FIGURE 2.1 Effect of sampling on the appearance and quality of an image: (a) 225 x 250 pixels; (b) 112 x 125 pixels; (c) 56 x 62 pixels; and (d) 28 x 31 pixels. All four images have 256 gray levels at 8 bits per pixel.

Biomedical images originally obtained on film are usually digitized using high-resolution CCD cameras or laser scanners. Several newer biomedical imaging systems include devices for direct digital data acquisition. In digital imaging systems such as CT, sampling is inherent in the measurement process, which is also performed in a domain that is different from the image domain. This adds a further level of complexity to the analysis of sampling. Practical experimentation and experience have helped in the development of guidelines to assist in such applications.

2.3.2 Quantization

Quantization is the process of representing the values of a sampled signal or image using a finite set of allowed values. In a digital representation using n bits per sample and positive integers only, there exist 2^n possible quantized levels, spanning the range [0, 2^n - 1]. If n = 8 bits are used to represent each pixel, there can exist 256 values or gray levels to represent the values of the image at each pixel, in the range [0, 255].

It is necessary to map appropriately the range of variation of the given analog signal, such as the output of a charge-coupled device (CCD) detector or a video device, to the input dynamic range of the quantizer. If the lowest level (or lower threshold) of the quantizer is set too high in relation to the range of the original signal, the quantized output will have several samples with the value zero, corresponding to all signal values that are less than the lower threshold. Similarly, if the highest level (or higher threshold) of the quantizer is set too low, the output will have several samples with the highest quantized level, corresponding to all signal values that are greater than the higher threshold.
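A minimal sketch of uniform quantization with saturation at the two thresholds is given below; it is only an illustration, and the ramp test image, bit depth, and function name are choices made for this example. Reducing the number of bits on a smooth ramp also exposes the false contours discussed in connection with Figure 2.3.

```python
import numpy as np

def quantize_uniform(signal, lo, hi, n_bits=8):
    """Map the analog range [lo, hi] to integers 0 .. 2**n_bits - 1.
    Values below lo or above hi saturate at the end levels, as happens
    when the quantizer input range is set poorly."""
    levels = 2 ** n_bits
    scaled = (signal - lo) / (hi - lo) * (levels - 1)
    return np.clip(np.round(scaled), 0, levels - 1).astype(np.uint16)

# Example: requantizing a smooth 8-bit ramp to 3 bits/pixel produces
# visible false contours when the result is displayed.
ramp = np.tile(np.linspace(0.0, 255.0, 256), (64, 1))
ramp3 = quantize_uniform(ramp, 0.0, 255.0, n_bits=3)
```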
Furthermore, the decision levels of the quantizer should be optimized in accordance with the probability density function (PDF) of the original signal or image. The Lloyd-Max quantization procedure [8, 9, 118, 119] to optimize a quantizer is derived as follows. Let p(r) represent the PDF of the amplitude or gray levels in the given image, with the values of the continuous or analog variable r varying within the range [rmin, rmax]. Let the range [rmin, rmax] be divided into L parts demarcated by the decision levels $R_0, R_1, R_2, \ldots, R_L$, with $R_0 = r_{min}$ and $R_L = r_{max}$; see Figure 2.2. Let the L output levels of the quantizer represent the values $Q_0, Q_1, Q_2, \ldots, Q_{L-1}$, as indicated in Figure 2.2. The mean-squared error (MSE) in representing the analog signal by its quantized values is given by

$$\varepsilon^2 = \sum_{l=0}^{L-1} \int_{R_l}^{R_{l+1}} (r - Q_l)^2 \, p(r) \, dr. \qquad (2.1)$$

Several procedures exist to determine the values of $R_l$ and $Q_l$ that minimize the MSE [8, 9, 118, 119]. A classical result indicates that the output level $Q_l$ should lie at the centroid of the part of the PDF between the decision levels $R_l$ and $R_{l+1}$, given by

$$Q_l = \frac{\int_{R_l}^{R_{l+1}} r \, p(r) \, dr}{\int_{R_l}^{R_{l+1}} p(r) \, dr}, \qquad (2.2)$$

which reduces to

$$Q_l = \frac{R_l + R_{l+1}}{2} \qquad (2.3)$$

if the PDF is uniform. It also follows that the decision levels are then given by

$$R_l = \frac{Q_{l-1} + Q_l}{2}. \qquad (2.4)$$

It is common to quantize images to 8 bits/pixel. However, CT images represent a large dynamic range of the X-ray attenuation coefficient, normalized into HU, over the range [-1,000, 1,000] for human tissues. Small differences of the order of 10 HU could indicate the distinction between normal tissue and diseased tissue. If the range of 2,000 HU were to be quantized into 256 levels using an 8-bit quantizer, each quantized level would represent a change of 2,000/256 = 7.8125 HU, which could lead to the loss of the distinction as above in noise. For this reason, CT and several other medical images are quantized using 12 - 16 bits/pixel. The use of an inadequate number of quantized gray levels leads to false contours and poor representation of image intensities. Figure 2.3 illustrates the loss of image quality as the number of bits per pixel is reduced from six to one.

FIGURE 2.2 Quantization of an image gray-level signal r with a Gaussian (solid line) or uniform (dashed line) PDF. The quantizer output levels are indicated by $Q_l$, and the decision levels are represented by $R_l$.

FIGURE 2.3 Effect of gray-level quantization on the appearance and quality of an image: (a) 64 gray levels (6 bits per pixel); (b) 16 gray levels (4 bits per pixel); (c) four gray levels (2 bits per pixel); and (d) two gray levels (1 bit per pixel). All four images have 225 x 250 pixels. Compare with the image in Figure 2.1 (a), with 256 gray levels at 8 bits per pixel.

The quantized values in a digital image are commonly referred to as gray levels, with 0 representing black and 255 standing for white when 8-bit quantization is used. Unfortunately, this goes against the notion of a larger amount of gray being darker than a smaller amount of gray! However, if the quantized values represent optical density (OD), a larger value would represent a darker region than a smaller value. Table 2.1 lists a few variables that bear different relationships with the displayed pixel value.
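The Lloyd-Max conditions in Equations 2.2 and 2.4 can be applied iteratively to sampled gray-level data; a minimal sketch is shown below, with the sample mean standing in for the centroid of the empirical PDF. The function name, the initialization, and the fixed iteration count are assumptions made for this illustration, not part of the original text.

```python
import numpy as np

def lloyd_max(gray_values, L=8, n_iter=50):
    """Iterate the Lloyd-Max conditions: output levels at the centroids of the
    data between decision levels (Equation 2.2), and decision levels midway
    between adjacent output levels (Equation 2.4)."""
    r = np.asarray(gray_values, dtype=float).ravel()
    R = np.linspace(r.min(), r.max(), L + 1)      # decision levels R_0 .. R_L
    Q = 0.5 * (R[:-1] + R[1:])                    # initial output levels
    for _ in range(n_iter):
        for l in range(L):
            part = r[(r >= R[l]) & (r <= R[l + 1])]
            if part.size:
                Q[l] = part.mean()                # centroid of the partition
        R[1:-1] = 0.5 * (Q[:-1] + Q[1:])          # midpoints between outputs
    return R, Q
```

For a uniform PDF the iteration converges to equally spaced levels, consistent with Equation 2.3.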
2.3.3 Array and matrix representation of images

Images are commonly represented as 2D functions of space: f(x, y). A digital image f(m, n) may be interpreted as a discretized version of f(x, y) in a 2D array, or as a matrix; see Section 3.5 for details on matrix representation of images and image processing operations. The notational differences between the representation of an image as a function of space and as a matrix could be a source of confusion.

FIGURE 2.51 (a) Edge spread function, (b) line spread function, and (c) MTF of a CT system; micron = um. Reproduced with permission from M. Pateyron, F. Peyrin, A.M. Laval-Jeantet, P. Spanne, P. Cloetens, and G. Peix, "3D microtomography of cancellous bone samples using synchrotron radiation", Proceedings of SPIE 2708: Medical Imaging 1996 - Physics of Medical Imaging, Newport Beach, CA, pp. 417-426. (c) SPIE.

In some applications, the variance of the image may not provide an appropriate indication of the useful range of variation present in the image. For this reason, another commonly used definition of SNR is based upon the dynamic range of the image, as

$$SNR_2 = 20 \log_{10} \left( \frac{f_{max} - f_{min}}{\sigma} \right) \; dB, \qquad (2.95)$$

where $\sigma$ is the standard deviation of the noise. Video signals in modern CRT monitors have SNR of the order of 60 - 70 dB, with a noninterlaced frame repetition rate in the range 70 - 80 frames per second.

Contrast-to-noise ratio (CNR) is a measure that combines the contrast or the visibility of an object and the SNR, and is defined as

$$CNR = \frac{\bar{f} - \bar{b}}{\sigma_b}, \qquad (2.96)$$

where $\bar{f}$ is the mean gray level of an ROI (assumed to be uniform, such as a disc being imaged using X rays), and $\bar{b}$ and $\sigma_b$ are the mean and standard deviation of a background region with no signal content (see Figure 2.7). Comparing this measure to the basic measure of simultaneous contrast in Equation 2.8, the difference lies in the denominator, where CNR uses the standard deviation. Whereas simultaneous contrast uses a background region that encircles the ROI, CNR could use a background region located elsewhere in the image. CNR is well suited to the analysis of X-ray imaging systems, where the density of an ROI on a film image depends upon the dose: the visibility of an object is dependent upon both the dose and the noise.
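The two measures, as reconstructed in Equations 2.95 and 2.96 above, are straightforward to compute from an image once an ROI mask, a background mask, and a noise estimate are chosen; those inputs, and the function names, are assumptions made for this sketch.

```python
import numpy as np

def snr_dynamic_range(image, noise_sigma):
    """SNR based on the dynamic range of the image (Equation 2.95).
    `noise_sigma` is an externally supplied estimate of the noise standard
    deviation, for example from a uniform background region."""
    return 20.0 * np.log10((image.max() - image.min()) / noise_sigma)

def cnr(image, roi_mask, background_mask):
    """Contrast-to-noise ratio (Equation 2.96): the difference between the
    mean of the ROI and the mean of the background, divided by the standard
    deviation of the background."""
    f_mean = image[roi_mask].mean()
    b_mean = image[background_mask].mean()
    return (f_mean - b_mean) / image[background_mask].std()
```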
In a series of studies on image quality, Schade reported on image gradation, graininess, and sharpness in television and motion-picture systems [135]; on an optical and photoelectric analog of the eye [136]; and on the evaluation of photographic image quality and resolving power [137]. The sine-wave, edge-transition, and square-wave responses of imaging systems were discussed in detail. Schade presented a detailed analysis of the relationships between resolving power, contrast sensitivity, number of perceptible gray-scale steps, and granularity with the "three basic characteristics" of an imaging system: intensity transfer function, sine-wave response, and SNR. Schade also presented experimental setups and procedures with optical benches and equipment for photoelectric measurements and characterization of optical and imaging systems.

Burke and Snyder [138] reported on quality metrics of digital images as related to interpreter performance. Their test set included a collection of 250 transparencies of 10 digital images, each degraded by five levels of blurring and five levels of noise. Their work addressed the question "How can we measure the degree to which images are improved by digital processing?" The results obtained indicated that although the main effect of blur was not significant in their interpretation experiment (in terms of the extraction of the "essential elements of information"), the effect of noise was significant. However, in medical imaging applications such as SPECT, high levels of noise are tolerated, but blurring of edges caused by filters used to suppress noise is not accepted.

Tapiovaara and Wagner [139] proposed a method to measure image quality in the context of the image information available for the performance of a specified detection or discrimination task by an observer. The method was applied to the analysis of fluoroscopy systems by Tapiovaara [140].

2.14 Error-based Measures

Notwithstanding several preceding works on image quality, Hall [141] stated (in 1981) that "A major problem which has plagued image processing has been the lack of an effective image quality measure." In his paper on subjective evaluation of a perceptual quality metric, Hall discussed several image quality measures including the MSE, normalized MSE (NMSE), normalized error (NE), and Laplacian MSE (LMSE), and then defined a "perceptual MSE" or PMSE based on an HVS model. The measures are based upon the differences between a given test image f(m, n) and its degraded version g(m, n) after passage through the imaging system being evaluated, computed over the full image frame either directly or after some filter or transform operation, as follows:

$$MSE = \frac{1}{MN} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} [f(m, n) - g(m, n)]^2, \qquad (2.97)$$

$$NMSE = \frac{\sum_{m=0}^{M-1} \sum_{n=0}^{N-1} [f(m, n) - g(m, n)]^2}{\sum_{m=0}^{M-1} \sum_{n=0}^{N-1} [f(m, n)]^2}, \qquad (2.98)$$

$$NE = \frac{\sum_{m=0}^{M-1} \sum_{n=0}^{N-1} |f(m, n) - g(m, n)|}{\sum_{m=0}^{M-1} \sum_{n=0}^{N-1} |f(m, n)|}, \qquad (2.99)$$

$$LMSE = \frac{\sum_{m=1}^{M-2} \sum_{n=1}^{N-2} [f_L(m, n) - g_L(m, n)]^2}{\sum_{m=1}^{M-2} \sum_{n=1}^{N-2} [f_L(m, n)]^2}, \qquad (2.100)$$

where the M x N images are defined over the range m = 0, 1, ..., M - 1 and n = 0, 1, ..., N - 1, and $f_L(m, n)$ is the Laplacian (second derivative) of f(m, n), defined as in Equation 2.82, for m = 1, 2, ..., M - 2 and n = 1, 2, ..., N - 2. PMSE was defined in a manner similar to NMSE, but with each image replaced with the logarithm of the image convolved with a PSF representing the HVS. Hall's results showed that PMSE correlated well with subjective ranking of images, to greater than 99.9%, and performed better than NMSE or LMSE. It should be noted that the measures defined above assume the availability of a reference image for comparison in a before-and-after manner.
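The reference-based measures of Equations 2.97 to 2.100 are simple to compute; a possible sketch is given below. The discrete Laplacian used here is the one provided by SciPy, which may differ in scaling from the definition in Equation 2.82 (not reproduced in this excerpt); the function name and that substitution are assumptions of this example.

```python
import numpy as np
from scipy.ndimage import laplace

def error_measures(f, g):
    """MSE, NMSE, NE, and LMSE (Equations 2.97-2.100) between a reference
    image f and its degraded version g."""
    f = f.astype(float)
    g = g.astype(float)
    M, N = f.shape
    mse  = np.sum((f - g) ** 2) / (M * N)
    nmse = np.sum((f - g) ** 2) / np.sum(f ** 2)
    ne   = np.sum(np.abs(f - g)) / np.sum(np.abs(f))
    fL, gL = laplace(f), laplace(g)
    # LMSE is restricted to interior pixels, where the Laplacian is well defined.
    lmse = (np.sum((fL[1:-1, 1:-1] - gL[1:-1, 1:-1]) ** 2)
            / np.sum(fL[1:-1, 1:-1] ** 2))
    return mse, nmse, ne, lmse
```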
2.15 Application: Image Sharpness and Acutance

In the search for a single measure that could represent the combined effects of various imaging and display processes, several researchers proposed various measures under the general label of "acutance" [114, 115, 142, 143, 144, 145, 146]. The following paragraphs present a review of several such measures based upon the ESF and the MTF.

As an aside, it is important to note the distinction between acutance and acuity. Westheimer [147, 148] discussed the concepts of visual acuity and hyperacuity and their light-spread and frequency-domain descriptions. Acuity (as evaluated with Snellen letters or Landolt "C"s) tests the "minimum separable", where the visual angle of a small feature is varied until a discrimination goal just can or cannot be achieved (a resolution task). On the other hand, hyperacuity (vernier or stereoscopic acuity) relates to spatial localization or discrimination.

The edge spread function: Higgins and Jones [115] discussed the nature and evaluation of the sharpness of photographic images, with particular attention to the importance of gradients. With the observation that the cones in the HVS, while operating in the mode of photopic vision (under high-intensity lighting), respond to temporal illuminance gradients, and that the eye moves to scan the field of vision, they argued that spatial luminance gradients in the visual field represent physical aspects of the object or scene that affect the perception of detail. Higgins and Jones conducted experiments with microdensitometric traces of knife edges recorded on various photographic materials, and found that the maximum gradient or average gradient measures along the knife-edge spread functions (KESF) failed to correlate with sharpness as judged by human observers. Figure 2.20 illustrates an ideal sharp edge and a hypothetical KESF. Higgins and Jones proposed a measure of acutance based upon the mean-squared gradient across a KESF as

$$A = \frac{1}{f(b) - f(a)} \int_{a}^{b} \left[ \frac{d f(x)}{dx} \right]^2 dx, \qquad (2.101)$$

where f(x) represents the intensity function along the edge, and a and b are the spatial limits of the (blurred) edge. Ten different photographic materials were evaluated with the measure of acutance, and the results indicated excellent correlation between acutance and subjective judgment of sharpness.

Wolfe and Eisen [149] reported on psychometric evaluation of the sharpness of photographic reproductions. They stated that resolving power, maximum gradient, and average gradient do not correlate well with sharpness, and that the variation of density across an edge is an obvious physical measurement to be investigated in order to obtain an objective correlate of sharpness. Perrin [114] continued along these lines, and proposed an averaged measure of acutance by averaging the mean-squared gradient measure of Higgins and Jones over many sections of the KESF, and further normalizing it with respect to the density difference across the knife edge. Perrin also discussed the relationship between the edge trace and the LSF.
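For a sampled edge profile, the acutance of Equation 2.101 can be approximated numerically; a sketch under that interpretation is shown below. The function name, the use of a simple Riemann sum, and the interpolation of f at the edge limits are assumptions of this example.

```python
import numpy as np

def edge_profile_acutance(f, a, b, dx=1.0):
    """Mean-squared-gradient acutance of Equation 2.101 for a sampled edge
    profile f(x): integrate the squared derivative over the blurred edge
    [a, b] and normalize by the intensity difference f(b) - f(a)."""
    f = np.asarray(f, dtype=float)
    x = np.arange(f.size) * dx
    inside = (x >= a) & (x <= b)
    grad = np.gradient(f, dx)                     # finite-difference derivative
    integral = np.sum(grad[inside] ** 2) * dx     # approximate the integral
    fa, fb = np.interp(a, x, f), np.interp(b, x, f)
    return integral / (fb - fa)
```

Blurring the edge lowers the squared gradient and hence the value returned, in line with the behavior reported by Higgins and Jones.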
MTF-based measures: Although Perrin [114] reported on an averaged acutance measure based on the mean-squared gradient measure of Higgins and Jones, he also remarked [150] that the sine-wave response better describes the behavior of an optical system than a single parameter (such as resolving power), and discussed the relationship between the sine-wave response and spread functions. The works of Schade and Perrin, perhaps, shifted interest from the spatial-gradient technique of Higgins and Jones to the frequency domain. Frequency-domain measures related to image quality typically combine the areas under the MTF curves of the long chain of systems and processes involved from the initial stage of the camera lens, through the film and/or display device, to the final visual system of the viewer [146].

It is known from basic linear system theory that when a composite system includes a number of LSI systems with transfer functions $H_1(u, v), H_2(u, v), \ldots, H_N(u, v)$ in series (cascade), the transfer function of the complete system is given by

$$H(u, v) = H_1(u, v) \, H_2(u, v) \cdots H_N(u, v) = \prod_{i=1}^{N} H_i(u, v). \qquad (2.102)$$

Equivalently, we have the PSF of the net system given by

$$h(x, y) = h_1(x, y) * h_2(x, y) * \cdots * h_N(x, y). \qquad (2.103)$$

Given that the high-frequency components in the spectrum of an image are associated with sharp edges in the image domain, it may be observed that the transfer function of an imaging or image processing system should possess large gains at high frequencies in order for the output image to retain the sharpness present in the input. This observation leads to the result that, over a given frequency range of interest, a system with larger gains at higher frequencies (and hence a sharper output image) will have a larger area under the normalized MTF than another system with lower gains (and hence poorer sharpness in the resulting image). By design, MTF-area-based measures represent the combined effect of all the systems between the image source and the viewer; they are independent of the actual image displayed.

Crane [142] started a series of definitions of acutance based on the MTFs of imaging system components. He discussed the need for objective correlates of the subjective property of image sharpness or crispness, and remarked that resolving power is misleading, and that the averaged squared gradient of edge profiles is dependable but cannot include the effects of all the components in a photographic system (camera to viewer). Crane proposed a single numerical rating based on the areas under the MTF curves of all the systems in the chain from the camera to the viewer (for example, camera, negative, printer, intermediate processing systems, print film, projector, screen, and observer). He called the measure the system modulation transfer acutance (SMTA), and claimed that it could be readily comprehended, compared, and tabulated. He also recommended that the acutance measure proposed by Higgins and Jones [115] and Perrin [114] be called image edge-profile acutance (IEPA). Crane evaluated SMTA using 30 color films and motion-picture films, and found it to be a good tool.

Crane's work started another series of papers proposing modified definitions of MTF-based acutance measures for various applications. Gendron [146] proposed a "cascaded modulation transfer or CMT" measure of acutance (CMTA) to rectify certain deficiencies in SMTA. Crane wrote another paper on acutance and granulance [143], and defined "AMT acutance" (AMTA) based on the ratio of the MTF area of the complete imaging system including the human eye to that of the eye alone. He also presented measures of granulance based on root mean-squared (RMS) deviation from mean lightness in areas expected to be uniform, and discussed the relationships between acutance and granulance. CMTA was used by Kriss [145] to compare the system sharpness of continuous and discrete imaging systems. AMTA was used by Yip [144] to analyze the imaging characteristics of CRT multiformat printers.

Assuming the systems involved to be isotropic, the MTF is typically expressed as a 1D function of the radial frequency $\nu = \sqrt{u^2 + v^2}$; see Figures 2.33 (b) and 2.51 (c). Let us represent the combined MTF of the complete chain of systems as $H_s(\nu)$.
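Equation 2.102 translates directly into a pointwise product of component MTFs on a common frequency axis; the sketch below illustrates this with two hypothetical components (a Gaussian lens MTF and a detector-aperture sinc), which are arbitrary models chosen only for this example.

```python
import numpy as np

def cascade_mtf(frequencies, component_mtfs):
    """Combined MTF of LSI systems in series (Equation 2.102): the product of
    the individual MTFs evaluated at the same radial frequencies."""
    H = np.ones_like(frequencies, dtype=float)
    for mtf in component_mtfs:
        H *= mtf(frequencies)
    return H

nu = np.linspace(0.0, 10.0, 500)                  # radial frequency, cycles/mm
lens = lambda v: np.exp(-(v / 4.0) ** 2)          # hypothetical lens MTF
detector = lambda v: np.abs(np.sinc(0.1 * v))     # hypothetical 0.1 mm aperture
H_total = cascade_mtf(nu, [lens, detector])
```

Because each factor is at most unity, the combined MTF can only be lower than its weakest component, which is why every stage in the chain matters to the final sharpness.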
Some of the MTF-area-based measures are defined as follows:

$$A_1 = \int_{0}^{\nu_{max}} \left[ H_s(\nu) - H_e(\nu) \right] d\nu, \qquad (2.104)$$

where $H_e(\nu)$ is the MTF threshold of the eye, $\nu$ represents the radial frequency at the eye of the observer, and $\nu_{max}$ is given by the condition $H_s(\nu_{max}) = H_e(\nu_{max})$ [151]. In order to reduce the weighting on high-frequency components, another measure replaces the difference between the MTFs as above with their ratio, as [151]

$$A_2 = \int \frac{H_s(\nu)}{H_e(\nu)} \, d\nu. \qquad (2.105)$$

AMTA was defined as [143, 144]

$$AMTA = 100 + 66 \log_{10} \left[ \frac{\int H_s(\nu) \, H_e(\nu) \, d\nu}{\int H_e(\nu) \, d\nu} \right]. \qquad (2.106)$$

The MTF of the eye was modeled as a Gaussian with standard deviation of 13 cycles/degree. AMTA values were interpreted as 100: excellent, 90: good, 80: fair, and 70: just passable [143].

Several authors have presented and discussed various other image quality criteria and measures that are worth mentioning here; whereas some are based on the MTF and hence have some common ground with acutance, others are based on different factors. Higgins [152] discussed various methods for analyzing photographic systems, including the effects of nonlinearity, LSFs, MTFs, granularity, and sharpness. Granger and Cupery [153] proposed a "subjective quality factor (SQF)" based upon the integral of the system MTF (including scaling effects to the retina) over a certain frequency range. Their results indicated a correlation of 0.988 between SQF and subjective ranking by observers. Higgins [154] published a detailed review of various image quality criteria. Quality criteria as related to objective or subjective tone reproduction, sharpness, and graininess were described. Higgins reported on the results of tests evaluating various versions of MTF-based acutance and other measures with photographic materials having widely different MTFs, and recommended that MTF-based acutance measures are good when no graininess is present. SNR-based measures were found to be better when graininess was apparent.

Task et al. [155] compared several television (TV) display image quality measures. Their tests included target recognition tasks and several FOMs such as limiting resolution, MTF area, threshold resolution, and gray-shade frequency product. They found MTF area to be the best measure among those evaluated.

Barten [151, 156] presented reviews of various image quality measures, and proposed the evaluation of image quality using the square-root integral (SQRI) method. The SQRI measure is based upon the ratio of the MTF of the display system to that of the eye, and can take into account the contrast sensitivity of the eye and various display parameters such as resolution, addressability, contrast, luminance, display size, and viewing distance. SQRI is defined as

$$SQRI = \frac{1}{\ln(2)} \int_{0}^{\nu_{max}} \sqrt{\frac{H_s(\nu)}{H_e(\nu)}} \, \frac{d\nu}{\nu}. \qquad (2.107)$$

Here, $\nu_{max}$ is the maximum frequency to be displayed. The SQRI measure overcomes some limitations in the SQF measure of Granger and Cupery [153]. Based upon good correlation between SQRI and perceived subjective image quality, Barten proposed SQRI as an "excellent universal measure of perceived image quality".

Carlson and Cohen [157] proposed a psychophysical model for predicting the visibility of displayed information, combining the effects of MTF, noise, sampling, scene content, mean luminance, and display size. They noted that edge transitions are a significant feature of most scenes, and proposed "discriminable difference diagrams" of modulation transfer versus retinal frequency (in cycles per degree). Their work indicated that discriminable difference diagrams could be used to predict the visibility of MTF changes in magnitude but not in phase.
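MTF-area figures of merit such as those above reduce to numerical integration once models for the system and eye MTFs are chosen. The sketch below evaluates the AMTA of Equation 2.106 with the Gaussian eye model mentioned in the text; the display-system MTF, the frequency range, and the function name are hypothetical choices for this example.

```python
import numpy as np

def amta(nu, Hs, He):
    """AMT acutance (Equation 2.106): the area under the system-plus-eye MTF
    relative to the area under the eye MTF alone, on Crane's logarithmic scale.
    Simple rectangular integration over a uniformly spaced frequency axis."""
    dnu = nu[1] - nu[0]
    ratio = np.sum(Hs * He) * dnu / (np.sum(He) * dnu)
    return 100.0 + 66.0 * np.log10(ratio)

nu = np.linspace(0.0, 60.0, 600)                  # cycles/degree at the eye
He = np.exp(-nu ** 2 / (2.0 * 13.0 ** 2))         # Gaussian eye model, 13 c/deg
Hs = np.exp(-nu / 15.0)                           # hypothetical display MTF
print(amta(nu, Hs, He))                           # lands in the "fair" range here
```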
Several other measures of image quality based upon the HVS have been proposed by Saghri et al. [158], Nill and Bouzas [159], Lukas and Budrikis [160], and Budrikis [161].

Region-based measure of edge sharpness: Westerink and Roufs [162] proposed a local basis for perceptually relevant resolution measures. Their experiments included the presentation of a number of slides with complex scenes at variable resolution, created by defocusing the lens of the projector, and at various widths. They showed that the width of the LSF correlates well with subjective quality, and remarked that MTF-based measures "do not reflect the fact that local aspects such as edges and contours play an important role in the quality sensation".

Rangayyan and Elkadiki [116] discussed the importance of a local measure of quality, sharpness, or perceptibility of a region or feature of interest in a given image. The question asked was "Given two images of the same scene, which one permits better perception of a specific region or object in the image?" Such a situation may arise in medical imaging, where one may have an array of images of the same patient or phantom test object acquired using multiple imaging systems (different models or various imaging parameter settings on the same system). It would be of interest to determine which system or set of parameters provides the image where a specific object, such as a tumor, may be seen best. Whereas local luminance gradients are indeed reflected as changes at all frequencies in the MTF, such a global characteristic may dilute the desired difference in the situation mentioned above. Furthermore, MTF-based measures characterize the imaging and viewing systems in general, and are independent of the specific object or scene on hand.

Based upon the observations of Higgins and Jones [115], Wolfe and Eisen [149], Perrin [114], Carlson and Cohen [157], and Westerink and Roufs [162] on the importance of local luminance variations, gradients, contours, and edges (as reviewed above), Rangayyan and Elkadiki presented arguments in favor of a region-based measure of sharpness or acutance. They extended the measure of acutance defined by Higgins and Jones [115] and Perrin [114] to 2D regions by computing the mean-squared gradient across and around the contour of an object or ROI in the given image, and called the quantity "a region-based measure of image edge-profile or IEP acutance (IEPA)". Figure 2.52 illustrates the basic principle involved in computing the gradient around an ROI, using normals (perpendiculars to the tangents) at every pixel on its boundary. Instead of the traditional difference defined as

$$\Delta f(n) = f(n) - f(n-1), \qquad (2.108)$$

Rangayyan and Elkadiki (see Rangayyan et al. [163] for revised definitions) split the normal at each boundary pixel into a foreground part f(n) and a background part b(n) (see Figure 2.52), and defined an averaged gradient as

$$f_d(k) = \frac{1}{N} \sum_{n=1}^{N} \frac{f(n) - b(n)}{2n}, \qquad (2.109)$$

where k is the index of the boundary pixel and N is the number of pairs of pixels (or differences) used along the normal. The averaged gradient values over all boundary pixels were then combined to obtain a single normalized value of acutance A for the entire region as

$$A = \frac{1}{d_{max}} \left[ \frac{1}{K} \sum_{k=1}^{K} f_d^2(k) \right]^{\frac{1}{2}}, \qquad (2.110)$$

where K is the number of pixels along the boundary, and $d_{max}$ is the maximum possible gradient value, used as a normalization factor. It was shown that the value of A was reduced by blurring and increased by sharpening of the ROI [116, 164]. Olabarriaga and Rangayyan [117] further showed that acutance as defined in Equation 2.110 is not affected significantly by noise, and that it correlates well with sharpness as judged by human observers.

FIGURE 2.52 Computation of differences along the normals to a region in order to derive a measure of acutance. Four sample normals are illustrated, with three pairs of pixels being used to compute differences along each normal.
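A minimal sketch of Equations 2.109 and 2.110 is given below. It assumes that the normals have already been extracted, and that they are supplied as a list with one entry per boundary pixel, each entry holding N pairs of gray levels (foreground f(n), background b(n)); that data structure and the function name are assumptions of this illustration, not part of the published definition.

```python
import numpy as np

def region_acutance(normals, d_max):
    """Region-based acutance (Equations 2.109 and 2.110). `normals[k]` is a
    sequence of (f(n), b(n)) gray-level pairs along the normal at boundary
    pixel k, ordered outward from the boundary (n = 1, 2, ..., N)."""
    fd = []
    for pairs in normals:                               # one normal per pixel k
        N = len(pairs)
        diffs = [(f - b) / (2.0 * n) for n, (f, b) in enumerate(pairs, start=1)]
        fd.append(sum(diffs) / N)                       # Equation 2.109
    fd = np.asarray(fd, dtype=float)
    K = fd.size
    return np.sqrt(np.sum(fd ** 2) / K) / d_max         # Equation 2.110
```

Blurring the ROI shrinks the foreground-background differences along each normal and therefore lowers A, consistent with the behavior reported above.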
sharpness as judged by human observers Normal b(3) b(2) b(1) f(1) f(2) f(3) Background Boundary Foreground (Object) FIGURE 2.52 Computation of di erences along the normals to a region in order to derive a measure of acutance Four sample normals are illustrated, with three pairs of pixels being used to compute di erences along each normal Example of application: In terms of their appearance on mammograms, most benign masses of the breast are well-circumscribed with sharp boundaries that delineate them from surrounding tissues On the other hand, most malignant tumors possess fuzzy boundaries with slow and extended transition from a dense core region to the surrounding, less-dense tissues Based upon this radiographic observation, Rangayyan et al 163] hypothesized that the acutance measure should have higher values for benign masses than for ma© 2005 by CRC Press LLC Image Quality and Information Content 145 lignant tumors Acutance was computed using normals with variable length adapted to the complexity of the shape of the boundary of the mass being analyzed The measure was tested using 39 mammograms, including 28 benign masses and 11 malignant tumors Boundaries of the masses were drawn by a radiologist for this study It was found that acutance could lead to the correct classi cation of all of the 11 malignant tumors and 26 out of the 28 benign masses, resulting in an overall accuracy of 94:9% Mudigonda et al 165, 166] evaluated several versions of the acutance measure by de ning the di erences based upon successive pixel pairs and acrossthe-boundary pixel pairs It was observed that the acutance measure is sensitive to the location of the reference boundary 163, 164, 165, 166] See Sections 7.9.2 and 12.12 as well as Figure 12.4 for more details and illustrations related to acutance 2.16 Remarks We have reviewed several notions of image quality and information content in this chapter We explored many methods and measures designed to characterize various image attributes associated with quality and information content It should be observed that image quality considerations vary from one application of imaging to another, and that appropriate measures should be chosen after due assessment of the particular problem on hand In medical diagnostic applications, emphasis is usually placed on the assessment of image quality in terms of its e ect on the accuracy of diagnostic interpretation by human observers and specialists: methods related to this approach are discussed in Sections 4.11, 12.8, and 12.10 The use of measures of information content in the analysis of methods for image coding and data compression is described in Chapter 11 2.17 Study Questions and Problems (Note: Some of the questions may require background preparation with other sources on the basics of signals and systems as well as digital signal and image processing, such as Lathi 1], Oppenheim et al 2], Oppenheim and Schafer 7], Gonzalez and Woods 8], Pratt 10], Jain 12], Hall 9], and Rosenfeld and Kak 11].) Selected data les related to some of the problems and exercises are available at the site www.enel.ucalgary.ca/People/Ranga/enel697 © 2005 by CRC Press LLC 146 Biomedical Image Analysis Explain the di erences between spatial resolution and gray-scale resolution in a digitized image Give the typical units for the variables (x y) and (u v) used in the representation of images in the space and frequency domains How can a continuous (analog) image be recovered from its sampled (digitized) version? 
4. Distinguish between gray-scale dynamic range and simultaneous contrast. Explain the effects of the former on the latter.

5. Draw schematic sketches of the histograms of the following types of images: (a) a collection of objects of the same uniform gray level placed on a uniform background of a different gray level; (b) a collection of relatively dark cells against a relatively bright background, with both having some intrinsic variability of gray levels; (c) an under-exposed X-ray image; (d) an over-exposed X-ray image. Annotate the histograms with labels and comments.

6. Starting with the expression for the entropy of a continuous PDF, show that the entropy is maximized by a uniform PDF. (Hint: Treat this as a constrained optimization problem, with the constraint being that the integral of the PDF be equal to unity.)

7. Define two rectangular functions as

$$f_1(x, y) = \begin{cases} 1 & \text{if } 0 \leq x \leq X, \; 0 \leq y \leq Y \\ 0 & \text{otherwise} \end{cases} \qquad (2.111)$$

and

$$f_2(x, y) = \begin{cases} 1 & \text{if } |x| \leq \frac{X}{2}, \; |y| \leq \frac{Y}{2} \\ 0 & \text{otherwise.} \end{cases} \qquad (2.112)$$

Starting from the definition of the 2D Fourier transform, derive the Fourier transforms F1(u, v) and F2(u, v) of the two functions; show all steps. Explain the differences between the two functions in the spatial and frequency domains.

8. Using the continuous 2D convolution and Fourier transform expressions, prove that convolution in the space domain is equivalent to multiplication of the corresponding functions in the Fourier domain.

9. You are given three images of rectangular objects as follows: (a) a horizontally placed rectangle with the horizontal side two times the vertical side; (b) the rectangle in (a) rotated by 45 degrees; and (c) the rectangle in (a) reduced in each dimension by a factor of two. Draw schematic diagrams of the Fourier spectra of the three images. Explain the differences between the spectra.

10. Draw schematic diagrams of the Fourier magnitude spectra of images with (a) a circle of radius R; (b) a circle of radius 2R; and (c) a circle of radius R/2. The value of R is not relevant. Explain the differences between the three cases in both the space domain and the frequency domain.

11. (a) Derive the expression for the Fourier transform of the partial derivative of f(x, y) with respect to x in terms of the Fourier transform of f(x, y). Show and explain all steps. (Hint: Start with the definition of the inverse Fourier transform.) Explain the effect of the differentiation operator in the space domain and the frequency domain.
(b) Based upon the result in (a), what is the Fourier transform of the second partial derivative of f(x, y) with respect to x?
Explain.
(c) Based upon the result in (a), state the relationship between the Fourier transform of the squared partial derivative of f(x, y) with respect to x and that of f(x, y). State all properties that you use.
(d) Explain the differences between the operators in (a), (b), and (c) and their effects in both the space domain and the frequency domain.

12. Using the continuous 2D Fourier transform expression, prove that the inverse Fourier transform of a function F(u, v) may be obtained by taking the forward Fourier transform of the complex conjugate of the given function [that is, taking the forward transform of F*(u, v)], and then taking the complex conjugate of the result.

13. Starting with the 2D DFT expression, show how the 2D DFT may be computed as a series of 1D DFTs. Show and explain all steps.

14. An image of size 100 mm x 100 mm is digitized into a matrix of size 200 x 200 pixels, with uniform sampling and equal spacing between the samples in the horizontal and vertical directions. The spectrum of the image is computed using the FFT algorithm after padding the image to 256 x 256 pixels. Draw a square to represent the array containing the spectrum, and indicate the FFT array indices as well as the frequency coordinates in mm^-1 at the four corners, at the mid-point of each side, and at the center of the square.

15. A system performs the operation g(x, y) = f(x, y) - f(x - 1, y). Derive the MTF of the system and explain its characteristics.

16. Using the continuous Fourier transform, derive the relationship between the Fourier transforms of an image f(x, y) and its modified version given as f1(x, y) = f(x - x1, y - y1). Explain the differences between the two images in the spatial and frequency domains.

17. The impulse response of a system is approximated by a 3 x 3 matrix (Equation 2.113). Derive the transfer function of the system and explain its characteristics.

18. The image in Equation 2.113 is processed by systems having the following impulse responses: (a) h(m, n) = [-1, 1]; (b) h(m, n) = [-1, 1] transposed; and (c) h(m, n) = a 3 x 3 matrix with all elements equal to 1/9. Compute the output image in each case over a 3 x 3 array, assuming that the input is zero outside the array given in Equation 2.113.

19. The 5 x 5 image

$$f(m, n) = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 10 & 10 & 10 & 0 \\ 0 & 10 & 10 & 10 & 0 \\ 0 & 10 & 10 & 10 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad (2.114)$$

is processed by two systems in cascade. The first system produces the output g1(m, n) = f(m, n) - f(m - 1, n). The second system produces the output g2(m, n) = g1(m, n) - g1(m, n - 1). Compute the images g1 and g2. Does the sequence of application of the two operators affect the result? Why (not)? Explain the effects of the two operators.

20. Derive the MTF for the Laplacian operator and explain its characteristics.

21. Write the expressions for the convolution and correlation of two images. Explain the similarities and differences between the two. What are the equivalent relationships in the Fourier domain? Explain the effects of the operations in the spatial and frequency domains.

22. Consider two systems with the impulse responses (a) h1(m, n) = [-1, 1] and (b) h2(m, n) = [-1, 1] transposed. What will be the effect of passing an image through the two systems in (a) parallel (and adding the results of the individual systems), or (b) series (cascade)?
Considering a test image made up of a bright square in the middle of a dark background, draw schematic diagrams of the outputs at each stage of the two systems mentioned above.

23. The squared gradient of an image f(x, y) is defined as

$$g(x, y) = \left[ \frac{\partial f(x, y)}{\partial x} \right]^2 + \left[ \frac{\partial f(x, y)}{\partial y} \right]^2. \qquad (2.115)$$

Derive the expression for G(u, v), the Fourier transform of g(x, y). How does this operator differ from the Laplacian in the spatial and frequency domains?

24. Using mathematical expressions and operations as required, explain how a degraded image of an edge may be used to derive the MTF of an imaging system.

2.18 Laboratory Exercises and Projects

1. Prepare a phantom for X-ray imaging by attaching a few strips of metal (such as aluminum or copper) of various thickness to a plastic or plexiglass sheet. Ensure that the strips have straight edges. With the help of a qualified technologist, obtain X-ray images of the phantom at a few different kVp and mAs settings. Note the imaging parameters for each experiment. Repeat the experiment with screens and films of different characteristics, with and without the grid (bucky), and with the grid being stationary. Study the contrast, noise, artifacts, and detail visibility in the resulting images. Digitize the images for use in image processing experiments. Scan across the edges of the metal strips and obtain the ESF. From this function, derive the LSF, PSF, and MTF of the imaging system for various conditions of imaging. Measure the SNR and CNR of the various metal strips and study their dependence upon the imaging parameters.

2. Repeat the experiment above with wire meshes of different spacing, up to 10 lines per mm. Study the effects of the X-ray imaging and digitization parameters on the clarity and visibility of the mesh patterns.

3. Compute the Fourier spectra of several biomedical images with various objects and features of different size, shape, and orientation characteristics, as well as of varying quality in terms of noise and sharpness. Calibrate the spectra in terms of frequency in mm^-1 or lp/mm. Explain the relationships between the spatial and frequency-domain characteristics of the images and their spectra.

4. Compute the histograms and log-magnitude Fourier spectra of at least ten test images that you have acquired. Comment on the nature of the histograms and spectra. Relate specific image features to specific components in the histograms and spectra. Comment on the usefulness of the histograms and spectra in understanding the information content of images.

5. Create a test image of size 100 x 100 pixels, with a circle of diameter 30 pixels at its center. Let the value of the pixels inside the circle be 100, and those outside be 80. Prepare three blurred versions of the test image by applying the 3 x 3 mean filter (i) once, (ii) three times, and (iii) five times successively. To each of the three blurred images obtained as above, add three levels of (a) Gaussian noise, and (b) speckle noise. Select the noise levels such that the edge of the circle becomes obscured in at least some of the images. Compute the error measures MSE and NMSE between the original test image and each of the 18 degraded images obtained as above. Study the effect of blurring and noise on the error measures and explain your findings.
6. From your collection of test images, select two images: one with strong edges of the objects or features present in the image, and the other with weaker definition of edges and features. Compute the horizontal difference, vertical difference, and the Laplacian of the images. Find the minimum and maximum values in each result, and map appropriate ranges to the display range in order to visualize the results. Study the results obtained and comment upon your findings in relation to the details present in the test images.