Podilchuk, C. "Signal Recovery from Partial Information," in Digital Signal Processing Handbook, Ed. Vijay K. Madisetti and Douglas B. Williams, Boca Raton: CRC Press LLC, 1999. © 1999 by CRC Press LLC.

25 Signal Recovery from Partial Information

Christine Podilchuk, Bell Laboratories, Lucent Technologies

25.1 Introduction
25.2 Formulation of the Signal Recovery Problem
     Prolate Spheroidal Wavefunctions
25.3 Least Squares Solutions
     Wiener Filtering • The Pseudoinverse Solution • Regularization Techniques
25.4 Signal Recovery using Projection onto Convex Sets (POCS)
     The POCS Framework
25.5 Row-Based Methods
25.6 Block-Based Methods
25.7 Image Restoration Using POCS
References

25.1 Introduction

Signal recovery has been an active area of research for applications in many different scientific disciplines. A central reason for exploring the feasibility of signal recovery is the limitation imposed by a physical device on the amount of data one can record. For example, in diffraction-limited systems, the finite aperture size of the lens constrains the amount of frequency information that can be captured. The image degradation is due to attenuation of high frequency components, resulting in a loss of detail and other high frequency information. In other words, the finite aperture size of the lens acts like a lowpass filter on the input data. In some cases, the quality of the recorded image data can be improved by building a more costly recording device, but often the conditions required for acceptable data quality are physically unrealizable or too costly. Signal recovery may also be necessary when recording a unique event that cannot be reproduced under more ideal recording conditions.

Some of the earliest work on signal recovery includes the work by Sondhi [1] and Slepian [2] on recovering images from motion blur and by Helstrom [3] on least squares restoration. A sampling of signal recovery algorithms applied to different types of problems can be found in [4]–[21]. Further reading includes the other sections in this book, Chapter 53, and the extended lists of references provided by all the authors.

The simple signal degradation model described in the next section turns out to be a useful representation for many different problems encountered in practice. Some examples that can be formulated using the general signal recovery paradigm include image restoration, image reconstruction, spectral estimation, and filter design. We distinguish between image restoration, which pertains to image recovery based on a measured distorted version of the original image, and image reconstruction, which refers most commonly to medical imaging, where the image is reconstructed from a set of indirect measurements, usually projections. For many signal recovery applications, it is desirable to extrapolate a signal outside of a known interval. Extrapolating a signal in the spatial or temporal domain can yield improved spectral resolution and applies to such problems as power spectrum estimation, radio astronomy, radar target detection, and geophysical exploration. The dual problem, extrapolating the signal in the frequency domain, also known as superresolution, yields improved spatial or temporal resolution and is desirable in many image restoration problems. As will be shown later, standard inverse filtering techniques are not able to resolve the signal estimate beyond the diffraction limit imposed by the physical measuring device.
The observed signal is degraded from the original signal by both the measuring device and external conditions. Besides the measured, distorted output signal, we may have additional information about the measuring system and external conditions, such as noise, as well as some a priori knowledge about the desired signal to be restored or reconstructed. In order to produce a good estimate of the original signal, we should take advantage of all the available information.

Although the data recovery algorithms described here apply in general to any data type, we derive most of the techniques based on two-dimensional input data for image processing applications. For most cases, it is straightforward to adapt the algorithms to other data types. Examples of data recovery techniques for different inputs are illustrated in the other sections in this book, as well as in Chapter 53 for image restoration. The material in this section requires some basic knowledge of linear algebra as found in [22].

Section 25.2 presents the signal degradation model and formulates the signal recovery problem. The early attempts at signal recovery based on inverse filtering are presented in Section 25.3. The concept of Projection Onto Convex Sets (POCS), described in Section 25.4, allows us to introduce a priori knowledge about the original signal in the form of linear as well as nonlinear constraints into the recovery algorithm. Convex set theoretic formulations allow us to design recovery algorithms that are extremely flexible and powerful. Sections 25.5 and 25.6 present some basic POCS-based algorithms, and Section 25.7 presents a POCS-based algorithm for image restoration along with some results. The sample algorithms presented here are not meant to be exhaustive, and the reader is encouraged to consult the other sections in this chapter as well as the references for more details.

25.2 Formulation of the Signal Recovery Problem

Signal recovery can be viewed as an estimation process in which operations are performed on an observed signal in order to estimate the ideal signal that would be observed if no degradation were present. In order to design a signal recovery system effectively, it is necessary to characterize the degradation effects of the physical measuring system. The basic idea is to model the signal degradation effects as accurately as possible and perform operations to undo the degradations and obtain a restored signal. When the degradation cannot be modeled sufficiently well, even the best recovery algorithms will not yield satisfactory results. For many applications, the degradation system is assumed to be linear and can be modeled as a Fredholm integral equation of the first kind,

    g(x) = \int_{-\infty}^{+\infty} h(x; a) f(a) \, da + n(x).    (25.1)

This is the general case for a one-dimensional signal, where f and g are the original and measured signals, respectively, n represents noise, and h(x; a) is the impulse response, i.e., the response of the measuring system to an impulse at coordinate a. (This corresponds to the case of a shift-varying impulse response.) A block diagram illustrating the general one-dimensional signal degradation system is shown in Fig. 25.1. For image processing applications, we modify this equation to the two-dimensional case, that is,

    g(x, y) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} h(x, y; a, b) f(a, b) \, da \, db + n(x, y).    (25.2)

The degradation operator h is commonly referred to as a point spread function (PSF) in imaging applications because, in optics, h is the measured response of an imaging system to a point of light.

FIGURE 25.1: Block diagram of the signal recovery problem.
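To make the model concrete, the following minimal sketch simulates the discrete counterpart of Eq. (25.2) for the common shift-invariant special case h(x, y; a, b) = h(x - a, y - b), in which the superposition integral reduces to a two-dimensional convolution. The Gaussian PSF, test image, and noise level are illustrative assumptions, not values taken from the chapter.

    # Shift-invariant special case of Eq. (25.2): g = h * f + n (2-D convolution).
    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)

    # Original image f: a bright square on a dark background.
    f = np.zeros((64, 64))
    f[24:40, 24:40] = 1.0

    # Assumed Gaussian point spread function h, normalized to unit volume.
    x = np.arange(-7, 8)
    X, Y = np.meshgrid(x, x)
    h = np.exp(-(X**2 + Y**2) / (2.0 * 2.0**2))
    h /= h.sum()

    # Observed image: discrete counterpart of Eq. (25.2).
    g = convolve2d(f, h, mode="same", boundary="symm")
    g += 0.01 * rng.standard_normal(f.shape)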
The Fourier transform of the point spread function h(x, y), denoted H(ω_x, ω_y), is known as the optical transfer function (OTF) and can be expressed as

    H(\omega_x, \omega_y) = \frac{\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y) \exp[-i(\omega_x x + \omega_y y)] \, dx \, dy}{\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y) \, dx \, dy}.    (25.3)

The absolute value of the OTF is known as the modulation transfer function (MTF). A commonly used optical image formation system is a circular thin lens.

The recovery problem is considered ill-posed when a small change in the observed image, g, results in a large change in the solution, f. Most signal recovery problems in practice are ill-posed.

The continuous version of the degradation system for two-dimensional signals formulated in Eq. (25.2) can be expressed in discrete form by replacing the continuous arguments with arrays of samples in two dimensions, that is,

    g(i, j) = \sum_m \sum_n h(i, j; m, n) f(m, n) + n(i, j).    (25.4)

It is convenient for image recovery purposes to represent the discrete formulation given in Eq. (25.4) as a system of linear equations,

    g = Hf + n,    (25.5)

where g, f, and n are the lexicographically row-stacked versions of the discretized g, f, and n in Eq. (25.4), and H is the degradation matrix composed of the PSF. This section presents an overview of some of the techniques proposed to estimate f when the recovery problem can be modeled by Eq. (25.5). If there is no external noise or measurement error and the set of equations is consistent, Eq. (25.5) reduces to

    g = Hf.    (25.6)

It is usually not the case that a practical system can be described by Eq. (25.6). In this section, we focus on recovery algorithms where an estimate of the distortion operator represented by the matrix H is known. For recovery problems where both the desired signal, f, and the degradation operator, H, are unknown, refer to other articles in this book. For most systems, the degradation matrix H is highly structured and quite sparse. The additive noise term due to measurement errors and external and internal noise sources is represented by the vector n.

At first glance, the solution to the signal recovery problem seems straightforward: find the inverse of the matrix H to solve for the unknown vector f. It turns out that the solution is not so simple, because in practice the degradation operator is usually ill-conditioned or rank-deficient, and the problem of inconsistencies or noise must be addressed. Other problems that may arise include computational complexity due to extremely large problem dimensions, especially for image processing applications. The algorithms described here try to address these issues for the general signal recovery problem described by Eq. (25.5).
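The following sketch illustrates Eq. (25.5) by assembling H column by column for a small image: column mN + n of H is the lexicographically (row-major) stacked response of the blur to an impulse at pixel (m, n), which also makes the structure and sparsity of H visible. The 8 x 8 image size and 3 x 3 boxcar PSF are illustrative assumptions.

    # Matrix form g = Hf + n of Eq. (25.5) for a small N x N image.
    import numpy as np
    from scipy.signal import convolve2d

    N = 8                                   # image is N x N, so H is N^2 x N^2
    psf = np.ones((3, 3)) / 9.0             # assumed boxcar blur

    H = np.zeros((N * N, N * N))
    for m in range(N):
        for n in range(N):
            e = np.zeros((N, N))
            e[m, n] = 1.0                   # impulse at pixel (m, n)
            H[:, m * N + n] = convolve2d(e, psf, mode="same").ravel()

    rng = np.random.default_rng(1)
    f = rng.random((N, N))
    noise = 0.01 * rng.standard_normal(N * N)
    g = H @ f.ravel() + noise               # Eq. (25.5)

    # By linearity, Hf agrees with direct 2-D convolution of f with the PSF.
    assert np.allclose(H @ f.ravel(), convolve2d(f, psf, mode="same").ravel())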
25.2.1 Prolate Spheroidal Wavefunctions

We introduce the problem of signal recovery by examining a one-dimensional, linear, time-invariant system that can be expressed as

    g(x) = \int_{-T}^{+T} f(\alpha) h(x - \alpha) \, d\alpha,    (25.7)

where g(x) is the observed signal, f(α) is the desired signal of finite support on the interval (-T, +T), and h(x) denotes the degradation operator. Assuming that the degradation operator in this case is an ideal lowpass filter, h can be described mathematically as

    h(x) = \frac{\sin(x)}{x}.    (25.8)

For this particular case, it is possible to solve for the exact signal f(x) with prolate spheroidal wavefunctions [23]. The key to successfully solving for f lies in the fact that the prolate spheroidal wavefunctions are the eigenfunctions of the integral equation expressed by Eq. (25.7) with Eq. (25.8) as the degradation operator. This relationship is expressed as

    \int_{-T}^{+T} \psi_n(\alpha) \frac{\sin(x - \alpha)}{x - \alpha} \, d\alpha = \lambda_n \psi_n(x), \quad n = 0, 1, 2, \ldots,    (25.9)

where ψ_n(x) are the prolate spheroidal wavefunctions and λ_n are the corresponding eigenvalues. A critical feature of the prolate spheroidal wavefunctions is that they form complete orthogonal bases on the interval (-∞, +∞) as well as on the interval (-T, +T); that is,

    \int_{-\infty}^{+\infty} \psi_n(x) \psi_m(x) \, dx = \begin{cases} 1, & n = m, \\ 0, & n \neq m, \end{cases}    (25.10)

and

    \int_{-T}^{+T} \psi_n(x) \psi_m(x) \, dx = \begin{cases} \lambda_n, & n = m, \\ 0, & n \neq m. \end{cases}    (25.11)

This allows the functions g(x) and f(x) to be expressed as the series expansions

    g(x) = \sum_{n=0}^{\infty} c_n \psi_n(x),    (25.12)

    f(x) = \sum_{n=0}^{\infty} d_n \psi_{Ln}(x),    (25.13)

where ψ_{Ln}(x) are the prolate spheroidal functions truncated to the interval (-T, T). The coefficients c_n and d_n are given by

    c_n = \int_{-\infty}^{\infty} g(x) \psi_n(x) \, dx    (25.14)

and

    d_n = \frac{1}{\lambda_n} \int_{-T}^{T} f(x) \psi_n(x) \, dx.    (25.15)

If we substitute the series expansions given by Eqs. (25.12) and (25.13) into Eq. (25.7), we get

    g(x) = \sum_{n=0}^{\infty} c_n \psi_n(x) = \int_{-T}^{+T} \left[ \sum_{n=0}^{\infty} d_n \psi_{Ln}(\alpha) \right] h(x - \alpha) \, d\alpha    (25.16)

         = \sum_{n=0}^{\infty} d_n \left[ \int_{-T}^{+T} \psi_n(\alpha) h(x - \alpha) \, d\alpha \right].    (25.17)

Combining this result with Eq. (25.9),

    \sum_{n=0}^{\infty} c_n \psi_n(x) = \sum_{n=0}^{\infty} \lambda_n d_n \psi_n(x),    (25.18)

so that

    c_n = \lambda_n d_n    (25.19)

and

    d_n = \frac{c_n}{\lambda_n}.    (25.20)

We get an exact solution for the unknown signal f(x) by substituting Eq. (25.20) into Eq. (25.13), that is,

    f(x) = \sum_{n=0}^{\infty} \frac{c_n}{\lambda_n} \psi_{Ln}(x).    (25.21)

Therefore, in theory, it is possible to obtain the exact image f(x) from the diffraction-limited image g(x) using prolate spheroidal wavefunctions. The difficulties of signal recovery become more apparent when we examine this simple diffraction-limited case in relation to Eq. (25.21). The finite aperture size of a diffraction-limited system translates to eigenvalues λ_n that exhibit step-like behavior: the several largest eigenvalues are approximately one, followed by a succession of eigenvalues that rapidly fall off to zero. The solution given by Eq. (25.21) will therefore be extremely sensitive to noise for small eigenvalues λ_n. Consequently, for the general problem represented in vector space by Eq. (25.5), the degradation operator H is ill-conditioned or rank-deficient due to the small or zero-valued eigenvalues, and a simple inverse operation will not yield satisfactory results. Many algorithms have been proposed to find a compromise between exact deblurring and noise amplification. These techniques include Wiener filtering and pseudoinverse filtering. We begin our overview of signal recovery techniques by examining some of the methods that fall under the category of optimization-based approaches.

25.3 Least Squares Solutions

The earliest attempts at signal recovery are based on the concept of inverting the degradation operator to restore the desired signal. Because in practical applications the system will often be ill-conditioned, several problems can arise. Specifically, high detail signal information may be masked by observation noise, or a small amount of observation noise may lead to an estimate that contains very large false high frequency components. Another potential problem with such an approach is that for a rank-deficient degradation operator, the zero-valued eigenvalues cannot be inverted. Therefore, the general inverse filtering approach will not be able to resolve the desired signal beyond the diffraction limit imposed by the measuring device. In other words, referring to the vector-space description, the data that have been nulled out by the zero-valued eigenvalues cannot be recovered.
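The sketch below shows this eigenvalue behavior numerically for a quadrature discretization of the kernel in Eqs. (25.7) and (25.8): the singular values of the resulting matrix plateau near one and then fall off rapidly, and inverting all of them amplifies even tiny noise enormously. The grid size, bandwidth parameter, test signal, and noise level are illustrative assumptions.

    import numpy as np

    N, T, B = 200, 1.0, 20.0                # grid size, support, assumed bandwidth
    x = np.linspace(-T, T, N)
    dx = x[1] - x[0]

    # H[i, j] ~ h(x_i - x_j) dx with h(u) = sin(B u) / (pi u), a discretized
    # Eq. (25.7). Since np.sinc(t) = sin(pi t)/(pi t), h(u) = (B/pi) sinc(B u / pi).
    u = x[:, None] - x[None, :]
    H = (B / np.pi) * np.sinc(B * u / np.pi) * dx

    s = np.linalg.svd(H, compute_uv=False)
    print(s[:4], s[-4:])                    # near-unity plateau, then rapid falloff

    # Inverting every singular value amplifies tiny noise into a useless estimate.
    rng = np.random.default_rng(2)
    f = np.exp(-((x / 0.3) ** 2))           # smooth test signal
    g = H @ f + 1e-8 * rng.standard_normal(N)
    U, sv, Vt = np.linalg.svd(H)
    f_naive = Vt.T @ ((U.T @ g) / sv)       # full inverse: divides by near-zero sv
    print(np.linalg.norm(f_naive - f) / np.linalg.norm(f))   # huge relative error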
25.3.1 Wiener Filtering

Wiener filtering combines inverse filtering with a priori statistical knowledge about the noise and the unknown signal [24] in order to deal with the problems associated with an ill-conditioned system. The impulse response of the restoration filter is chosen to minimize the mean square error

    E_f = E\{ \| f - \hat{f} \|^2 \},    (25.22)

where f̂ denotes the estimate of the ideal signal f and E{·} denotes the expected value. The Wiener filter estimate is expressed as

    H_W^{-1} = R_{ff} H^T ( H R_{ff} H^T + R_{nn} )^{-1},    (25.23)

where R_ff and R_nn are the covariance matrices of f and n, respectively, and f and n are assumed to be uncorrelated; that is,

    R_{ff} = E\{ f f^T \},    (25.24)

    R_{nn} = E\{ n n^T \},    (25.25)

and

    R_{fn} = 0.    (25.26)

The superscript T in the above equations denotes transpose. The Wiener filter can also be expressed in the Fourier domain as

    \mathcal{H}_W^{-1} = \frac{\mathcal{H}^* S_{ff}}{|\mathcal{H}|^2 S_{ff} + S_{nn}},    (25.27)

where S denotes the power spectral density, the superscript * denotes the complex conjugate, and \mathcal{H} denotes the Fourier transform of H. Note that when the noise power is zero, the Wiener filter reduces to the inverse filter; that is,

    \mathcal{H}_W^{-1} = \mathcal{H}^{-1}.    (25.28)

The Wiener filter approach to signal recovery assumes that the power spectra of the input signal and the noise are known. It also assumes that finding a least squares solution that optimizes Eq. (25.22) is meaningful. For the case of image processing, it has been shown, specifically in the context of image compression, that the mean square error (mse) does not predict subjective image quality [25]. Many signal processing algorithms are based on the least squares paradigm because the solutions are tractable and, in practice, such approaches have produced some useful results. However, in order to define a more meaningful optimization metric in the design of image processing algorithms, we need to incorporate a human visual model into the algorithm design. In the area of image coding, several coding schemes based on perceptual criteria have been shown to produce improved results over schemes based on maximizing signal-to-noise ratio or minimizing mse [25]. Likewise, the Wiener filtering approach will not necessarily produce an estimate that maximizes perceived image or signal quality. Another limitation of the Wiener filter approach is that the solution will not necessarily be consistent with any a priori knowledge about the desired signal characteristics. In addition, the Wiener filter approach does not resolve the desired signal beyond the diffraction limit imposed by the measuring system. For more details on Wiener filtering and its various applications, see the other chapters in this book.
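As a concrete illustration, the sketch below applies Eq. (25.27) bin by bin to a one-dimensional signal blurred by a circular (periodic) Gaussian PSF with additive white noise. The signal, PSF width, and noise level are illustrative assumptions, and the signal spectrum S_ff is computed from the known test signal, an oracle that would not be available in practice.

    import numpy as np

    rng = np.random.default_rng(3)
    N = 256
    t = np.arange(N)
    f = np.sin(2 * np.pi * t / 32) + 0.5 * np.sin(2 * np.pi * t / 10)

    # Assumed degradation: circular Gaussian blur plus white noise.
    psf = np.exp(-0.5 * ((t - N // 2) / 2.0) ** 2)
    psf = np.roll(psf / psf.sum(), -(N // 2))      # center the PSF at index 0
    sigma = 0.05
    g = np.real(np.fft.ifft(np.fft.fft(psf) * np.fft.fft(f)))
    g += sigma * rng.standard_normal(N)

    # Eq. (25.27): H* Sff / (|H|^2 Sff + Snn), evaluated per DFT bin.
    Hf = np.fft.fft(psf)
    Sff = np.abs(np.fft.fft(f)) ** 2 / N           # oracle signal spectrum
    Snn = sigma**2 * np.ones(N)                    # white-noise power spectrum
    W = np.conj(Hf) * Sff / (np.abs(Hf) ** 2 * Sff + Snn)
    f_hat = np.real(np.fft.ifft(W * np.fft.fft(g)))

When S_nn is zero, W reduces to 1/H, the inverse filter of Eq. (25.28); the noise term in the denominator is what tapers the inversion at frequencies where the blur has little gain.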
25.3.2 The Pseudoinverse Solution

The Wiener filter attempts to limit the noise amplification incurred by a direct inverse by providing a taper determined by the statistics of the signal and noise processes under consideration. In practice, however, the power spectra of the noise and desired signal may not be known. Here we present what is commonly referred to as the generalized inverse solution, which will serve as the framework for some of the signal recovery algorithms described later. The pseudoinverse solution is an optimization approach that seeks to minimize the least squares error

    E_n = n^T n = (g - Hf)^T (g - Hf).    (25.29)

The least squares solution is not unique when the rank of the M × N matrix H is r < N ≤ M. In other words, many solutions satisfy Eq. (25.29). However, the Moore-Penrose generalized inverse, or pseudoinverse [26], does provide a unique least squares solution, namely the least squares solution of minimum norm. For a consistent set of equations as described in Eq. (25.6), a solution is sought that minimizes the least squares estimation error,

    E_f = (f - \hat{f})^T (f - \hat{f}) = \mathrm{tr}\{ (f - \hat{f})(f - \hat{f})^T \},    (25.30)

where f is the desired signal vector, f̂ is the estimate, and tr denotes the trace [22]. The generalized inverse provides an optimum solution that minimizes the estimation error for a consistent set of equations. Thus, the generalized inverse provides an optimum solution for both the consistent and the inconsistent set of equations, as defined by the performance functions E_f and E_n, respectively. The generalized inverse solution satisfies the normal equations

    H^T g = H^T H f.    (25.31)

The generalized inverse solution, also known as the Moore-Penrose generalized inverse, pseudoinverse, or least squares solution of minimum norm, is defined as

    f^{\dagger} = (H^T H)^{-1} H^T g = H^{\dagger} g,    (25.32)

where the dagger † denotes the pseudoinverse and the rank of H is r = N ≤ M. For the case of an inconsistent set of equations as described in Eq. (25.5), the pseudoinverse solution becomes

    f^{\dagger} = H^{\dagger} g = H^{\dagger} H f + H^{\dagger} n,    (25.33)

where f† is the minimum norm, least squares solution. If the set of equations is overdetermined with rank r = N < M, then H†H becomes an identity matrix of size N, denoted I_N, and the pseudoinverse solution reduces to

    f^{\dagger} = f + H^{\dagger} n = f + \Delta f.    (25.34)

A straightforward result from linear algebra is the bound on the relative error,

    \frac{\| \Delta f \|}{\| f \|} \le \| H^{\dagger} \| \, \| H \| \, \frac{\| n \|}{\| g \|},    (25.35)

where the product ‖H†‖‖H‖ is the condition number of H. This quantity determines the relative error in the estimate in terms of the ratio of the vector norm of the noise to the vector norm of the observed image. The condition number of H is defined as

    C_H = \| H^{\dagger} \| \, \| H \| = \frac{\sigma_1}{\sigma_N},    (25.36)

where σ_1 and σ_N denote the largest and smallest singular values of the matrix H, respectively. The larger the condition number, the greater the sensitivity to noise perturbations. A matrix with a large condition number, typically greater than 100, results in an ill-conditioned system.

The pseudoinverse solution is best described by diagonalizing the degradation matrix H using the singular value decomposition (SVD) [22], which provides a way to diagonalize any arbitrary M × N matrix. In this case, we wish to diagonalize H; that is,

    H = U \Sigma V^T,    (25.37)

where U is a unitary matrix composed of the orthonormal eigenvectors of HH^T, V is a unitary matrix composed of the orthonormal eigenvectors of H^T H, and Σ is a diagonal matrix composed of the singular values of H. The number of nonzero diagonal terms gives the rank of H. The degradation matrix can be expressed in series form as

    H = \sum_{i=1}^{r} \sigma_i u_i v_i^T,    (25.38)

where u_i and v_i are the i-th columns of U and V, respectively, and r is the rank of H. From Eqs. (25.37) and (25.38), the pseudoinverse of H becomes

    H^{\dagger} = V \Sigma^{\dagger} U^T = \sum_{i=1}^{r} \sigma_i^{-1} v_i u_i^T.    (25.39)

Therefore, from Eq. (25.39), the pseudoinverse solution can be expressed as

    f^{\dagger} = H^{\dagger} g = V \Sigma^{\dagger} U^T g    (25.40)

or

    f^{\dagger} = \sum_{i=1}^{r} \sigma_i^{-1} v_i u_i^T g = \sum_{i=1}^{r} \sigma_i^{-1} (u_i^T g) v_i.    (25.41)
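A compact way to compute Eq. (25.41) in practice is to form the sum only over singular values above a threshold, treating the rest as zero; this implements the pseudoinverse for a numerically rank-deficient H. The relative tolerance below is an illustrative assumption.

    import numpy as np

    def pinv_solution(H, g, rel_tol=1e-3):
        # f† = sum of (u_i^T g / sigma_i) v_i over retained terms, per Eq. (25.41);
        # singular values below rel_tol * sigma_1 define the effective rank r.
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        keep = s > rel_tol * s[0]
        return Vt[keep].T @ ((U[:, keep].T @ g) / s[keep])

Applied to the ill-conditioned sinc-kernel example above, retaining only the plateau of near-unity singular values yields a stable estimate where the full inverse produced only amplified noise.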
The series form of the pseudoinverse solution using the SVD allows us to compute it with a sequential restoration algorithm,

    f^{\dagger(k+1)} = f^{\dagger(k)} + \sigma_k^{-1} (u_k^T g) v_k.    (25.42)

The iterative approach to finding the pseudoinverse solution is advantageous when dealing with ill-conditioned systems and noise-corrupted data: the iteration can be terminated before the inversion of small singular values results in an unstable estimate. The technique becomes quite easy to implement for the case of a circulant degradation matrix H, where the unitary matrices in Eq. (25.37) reduce to the discrete Fourier transform (DFT).

25.3.3 Regularization Techniques

Smoothing and regularization techniques [27, 28, 29] have been proposed in an attempt to overcome the problems associated with inverting ill-conditioned degradation operators for signal recovery. These methods attempt to force smoothness on the solution of a least squares error problem. The problem can be formulated in two different ways. One formulation is

    minimize:   \hat{f}^T S \hat{f}    (25.43)
    subject to: (g - H\hat{f})^T W (g - H\hat{f}) = e,    (25.44)

where S represents a smoothing matrix, W is an error weighting matrix, and e is a residual scalar estimation error. The error weighting matrix can be chosen as W = R_nn^{-1}. The smoothing matrix is typically composed of first or second order differences. For this case, we seek the stationary point of the Lagrangian

    F(\hat{f}, \lambda) = \hat{f}^T S \hat{f} + \lambda \left[ (g - H\hat{f})^T W (g - H\hat{f}) - e \right].    (25.45)

The solution is found by taking derivatives with respect to f and λ and setting them equal to zero. The solution for a nonsingular overdetermined set of equations becomes

    \hat{f} = \left( H^T W H + \tfrac{1}{\lambda} S \right)^{-1} H^T W g,    (25.46)

where λ is chosen to satisfy the compromise between residual error and smoothness in the estimate. Alternatively, the problem can be formulated as

    minimize:   (g - H\hat{f})^T W (g - H\hat{f})    (25.47)
    subject to: \hat{f}^T S \hat{f} = d,    (25.48)

where d represents a fixed degree of smoothness. The Lagrangian for this formulation becomes

    G(\hat{f}, \gamma) = (g - H\hat{f})^T W (g - H\hat{f}) + \gamma \left( \hat{f}^T S \hat{f} - d \right),    (25.49)

and the solution for a nonsingular overdetermined set of equations becomes

    \hat{f} = \left( H^T W H + \gamma S \right)^{-1} H^T W g.    (25.50)

Note that Eqs. (25.46) and (25.50) have the same form, with γ playing the role of 1/λ; the two formulations simply exchange the roles of the smoothness term and the residual error term.
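The sketch below evaluates Eq. (25.50) directly for the common choices W = I and S = D^T D, where D is a second-difference operator; the value of γ and these particular choices are illustrative assumptions.

    import numpy as np

    def regularized_solution(H, g, gamma=1e-2):
        # f_hat = (H^T W H + gamma S)^(-1) H^T W g with W = I and S = D^T D,
        # per Eq. (25.50); assumes H^T H + gamma * S is nonsingular.
        N = H.shape[1]
        D = np.diff(np.eye(N), n=2, axis=0)    # (N-2) x N second-difference matrix
        S = D.T @ D
        return np.linalg.solve(H.T @ H + gamma * S, H.T @ g)

Larger γ trades residual error for smoothness in the estimate, exactly the compromise discussed above.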
[...]

[13] Trussell, H.J., Digital signal restoration using fuzzy sets, IEEE Trans. Acoust., Speech, Signal Process., 34, 919–936, 1986.
[14] Trussell, H.J. and Civanlar, M.R., The feasible solution in signal restoration, IEEE Trans. Acoust., Speech, Signal Process., 32, 201–212, 1984.
[15] Sezan, M.I. and Trussell, H.J., Prototype image constraints for set-theoretic image restoration, IEEE Trans. Signal Process., [...]

[...] details on commonly used convex sets, see [10]. Most of the commonly used constraints for different signal processing applications fall under the category of convex sets, which provide weak convergence; however, in practice, most POCS algorithms provide strong convergence. Many of the commonly used iterative signal restoration techniques are specific examples of the POCS algorithm. The Kaczmarz algorithm [...]

[...] while enforcing the a priori constraints through the out-of-band signal component. It is apparent that such an approach will yield a better estimate for the unknown signal f than the minimum norm least squares solution f†. Note that this algorithm can easily be generalized to other problems by replacing the non-negativity constraint C+ with the appropriate signal constraints. In the case when f† satisfies [...]

[...] feasible solutions. Many problems in signal recovery can be approached using the set theoretic paradigm. POCS has been one of the most extensively studied set theoretic approaches in the literature due to its convergence properties and its flexibility in handling a wide range of signal characteristics. We limit our discussion here to POCS-based algorithms; the more general case of signal estimation using nonconvex [...]

[...] certain properties. The more a priori information about the desired signal that one can incorporate into the algorithm, the more effective the algorithm becomes. In [21], POCS is presented as a particular example of a much broader class of algorithms described as set theoretic estimation. The author distinguishes between two basic approaches to a signal estimation or recovery problem: optimization-based approaches [...]

[...] the signal space and the Fourier space in an iterative fashion until the estimate converges to a solution. For the image restoration problem, the high frequency components of the image are extrapolated by imposing the finite extent of the object in the spatial domain and by imposing the known low frequency components in the frequency domain. The dual problem involves spectral estimation, where the signal is extrapolated in the time or spatial domain; the algorithm consists of imposing the known part of the signal in the time domain and imposing a finite bandwidth constraint in the frequency domain. The GP algorithm assumes a space-invariant (or time-invariant) degradation operator. We now present several signal recovery algorithms that conform to the POCS paradigm, broadly classified under two categories: [...]

[...] suited for such applications. However, for most applications, other a priori information is known about the desired signal, and an effective algorithm should utilize this information. We now describe a POCS-based algorithm suited for the problem of image restoration, where additional a priori signal information is incorporated into the algorithm.

25.7 Image Restoration Using POCS

Here we describe an image [...]

[...] here provides a framework that allows a priori information, in the form of signal constraints, to be incorporated into the algorithm in order to obtain a better estimate than the least squares minimum norm solution f†. The constraint operator will be represented by C and can incorporate a variety of linear and nonlinear a priori signal characteristics as long as they obey the properties of convex set [...]

[...] posteriori, and maximum entropy methods [17]. We now introduce the concept of Projection onto Convex Sets (POCS), which will be the framework for a much broader and more powerful class of signal recovery algorithms.

25.4 Signal Recovery using Projection onto Convex Sets (POCS)

A broad set of recovery algorithms has been proposed to conform to the general framework introduced by the theory of projection [...]
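The alternating time-domain and frequency-domain projections described above (the GP algorithm) are easy to sketch: the estimate is repeatedly projected onto the set of signals that agree with the observed samples and onto the set of bandlimited signals. The test signal, observed segment, bandwidth, and iteration count below are illustrative assumptions.

    import numpy as np

    N = 256
    t = np.arange(N)
    f = np.cos(2 * np.pi * 5 * t / N) + 0.5 * np.sin(2 * np.pi * 9 * t / N)

    known = np.zeros(N, dtype=bool)
    known[96:160] = True                    # only the middle segment is observed

    band = np.zeros(N, dtype=bool)          # assumed low-frequency DFT support
    band[:16] = True
    band[-15:] = True

    est = np.where(known, f, 0.0)
    for _ in range(200):
        F = np.fft.fft(est)
        F[~band] = 0.0                      # project onto bandlimited signals
        est = np.real(np.fft.ifft(F))
        est[known] = f[known]               # project onto signals matching the data
    print(np.linalg.norm(est - f) / np.linalg.norm(f))   # shrinks as iterations grow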