Scattered data reconstruction by regularization in B-spline and associated wavelet spaces

SCATTERED DATA RECONSTRUCTION BY REGULARIZATION IN B-SPLINE AND ASSOCIATED WAVELET SPACES

XU YUHONG
(M.Sci., Fudan University)

A THESIS SUBMITTED FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
DEPARTMENT OF MATHEMATICS
NATIONAL UNIVERSITY OF SINGAPORE
2008

Acknowledgements

I would like to thank my advisor, Professor Shen Zuowei. In my eyes, Prof. Shen sets an example for research scientists through his passionate and painstaking inquiry into scientific problems as well as his profound research work. Over the past five years, his specific guidance on my research topic, our numerous conversations, and especially his advice on how to do research effectively have been great sources of help for my academic growth. I also appreciate his encouragement and support all the way along.

I would like to express my gratitude to the professors in and outside the department. Through lectures and personal discussions, they enriched my knowledge of and experience in mathematical research. In particular, I would like to thank Professors Ji Hui, Lin Ping, Sun Defeng, and Toh Kim Chuan, all of NUS, as well as Prof. Han Bin of the University of Alberta and Prof. Michael Johnson of Kuwait University.

My thanks go to my fellow graduate students Pan Suqi, Zhao Xinyuan, and Zhou Jinghui; thanks also go to my former fellow graduates Chai Anwei, Chen Libing, Dong Bin, and Lu Xiliang, as well as to Dr. Cai Jianfeng at CWAIP. Personal interaction with them, whether discussing research, having dinner, or just having fun together, made my five-year stay at NUS a wonderful experience and a cherished memory.

Last, but not least, I want to express my deep gratitude to my wife, Lu Lu, for her unceasing love and continuous support during these years. I also take this opportunity to thank my mother and brother for their support over the years.
Xu Yuhong
July 2008

Contents

Acknowledgements ii
Summary vii
List of Tables x
List of Figures xi

1 Introduction 1
1.1 Scattered Data Reconstruction
1.2 The Purpose and Contribution of the Thesis
1.2.1 Regularized Least Squares
1.2.2 Interpolation
1.2.3 Edge Preserving Reconstruction
1.3 Literature Review

2 Reconstruction in Principal Shift Invariant Spaces 13
2.1 Introduction to PSI Spaces 13
2.2 Interpolation in PSI Spaces 17
2.3 Regularized Least Squares in PSI Spaces 19
2.3.1 Lemmas and Propositions 20
2.3.2 Error Estimates 26
2.4 Natural Boundary Conditions 28

3 Computation in B-spline Domain 31
3.1 Uniform B-splines 32
3.2 Interpolation 34
3.2.1 A KKT Linear System 34
3.2.2 Algorithm 39
3.3 Regularized Least Squares 43
3.3.1 Algorithm 45
3.3.2 Generalized Cross Validation 46
3.4 Computational Advantages and Disadvantages 48

4 Computation in Wavelet Domain 51
4.1 Wavelets 51
4.2 A Basis Transfer 56
4.3 Two Iterative Solvers: PCG and MINRES 61
4.4 Wavelet Based Preconditioning 63
4.4.1 Regularized Least Squares 64
4.4.2 Interpolation 71

5 Numerical Experiments 74
5.1 Curve and Surface Interpolation 74
5.2 Curve and Surface Smoothing 77
5.2.1 Curve Smoothing 77
5.2.2 Surface Smoothing 82

6 Edge Preserving Reconstruction 88
6.1 Notations 89
6.2 Interpolation 90
6.3 Regularized Least Squares 96

7 Implementation and Simulation 101
7.1 Approximating Regularization Functionals 101
7.2 Primal-Dual Interior-Point Methods 105
7.3 Interpolation 106
7.3.1 Numerical Examples 107
7.4 Regularized Least Squares 111
7.4.1 Numerical Examples 113

Bibliography 118

Summary

The objective of data fitting is to represent a discrete form of data by a continuous mathematical object, namely a function. Data fitting techniques can be applied in many areas of science and engineering, with different purposes such as visualization, parametric estimation, data smoothing, etc.
In the literature, the various approaches to the data fitting problem can be loosely classified into two categories: interpolation and approximation. Interpolation is usually applied to noise-free data, while approximation is suitable when the given data are contaminated by noise. This thesis addresses both interpolation and approximation, taking principal shift-invariant (PSI) spaces as the spaces in which the fitting functions live. The research topic is inspired by Johnson's paper on an interpolation approach (see [52]), where the interpolant is found in a PSI space by minimizing a Sobolev semi-norm subject to the interpolation constraints. In this thesis, the idea is generalized to the approximation case, where the approximant is found in a PSI space by solving a regularized least squares problem with a Sobolev semi-norm as the regularization term. Fitting data by minimization or regularization is a common methodology; however, formulating the problem in PSI spaces brings several benefits, which we elaborate in the following.

By taking advantage of the good approximation power of PSI spaces, Johnson provides an error analysis of the above-mentioned interpolation approach (see [52]). We generalize the error analysis to the approximation approach. An error estimate, which measures the Lp distance from the interpolant (or approximant) to the data function, is given in terms of the data site density. Roughly speaking, the estimate says that the error is small whenever the scattered data have high density (and, in the approximation case, a low noise level). This characterization guarantees the accuracy of the interpolation and approximation methods.

We present the corresponding interpolation and approximation algorithms in the general setting. The properties of the algorithms, such as the existence and uniqueness of the solution, are discussed. In the implementation, we employ a special type of PSI space, namely the one generated by a uniform B-spline function or its tensor product.
In view of the connection between PSI spaces and wavelets, the algorithms are converted from the B-spline domain to the wavelet domain, which improves computational efficiency dramatically. This computational strategy has two critical components: the compact support of the uniform B-splines, which yields a sparse linear system, and a preconditioning technique in the wavelet domain, which accelerates the iterative solution of that system. Why this acceleration arises in the wavelet domain is studied and answered.

Numerical experiments are conducted to demonstrate the effectiveness of both the interpolation and approximation algorithms in the context of curve and surface fitting. The experiments compare our methods with the classical interpolating and smoothing splines: the two produce very similar fitting curves and surfaces in terms of accuracy and visual quality, but our methods offer advantages in numerical efficiency. We expect our methods to remain numerically feasible on large data sets and hence to extend the scope of applications.

In the above, we assume that the Sobolev semi-norm used as the regularization term is defined by the L2 norm. In the last two chapters, we look into approaches that employ an L1-based Sobolev semi-norm as the regularization. We propose both interpolation and approximation methods, study the corresponding error estimates, and then conduct numerical experiments to illustrate the effectiveness of the L1-based methods. These methods are particularly suitable for fitting data that contain discontinuities or edges. The numerical experiments show that in fitting such data, the L1 methods preserve edges very well, while the L2 methods tend to blur edges and create undesirable oscillations. For this reason, we call the L1 methods edge-preserving methods.

List of Tables

4.1 Comparison of number of iterations and computation time (seconds) 69
5.1 Average SNR for f1, f2 82
5.2 SNR's standard deviation for f1, f2 82
5.3 SNR for f3, f4, f5: average and standard deviation 86

7.3 Interpolation

Example 7.3.2. Let f be the following piecewise constant function:

f(x) = c1 for 0 ≤ x < 0.25,  c2 for 0.25 ≤ x < 0.5,  c3 for 0.5 ≤ x < 0.75,  c4 for 0.75 ≤ x < 0.9,  c5 for 0.9 ≤ x < 1,

where c1, ..., c5 denote the constant values of f on the five pieces.

In Figure 7.2, the upper-left subfigure shows the original function f and a sample (n = 100); the upper-right subfigure gives the interpolation by the L2 method; on the bottom is the interpolation by the L1 method. Again the L2 interpolation result has some undesirable spikes at the step edge points, and the L1 interpolation does a better job of preserving the step edges.

The above 1D examples have demonstrated that the L1 penalty is more effective than the L2 penalty at preserving discontinuities in the 1D case. Next we show by two 2D examples that the same conclusion holds in the 2D case. The test functions are piecewise constant. As in 1D, discrete samples are generated by evaluating the function at data sites (xi, yi); in these two examples the data sites are uniformly spaced so that they form a grid. It is clear from these 2D examples that edges are better preserved by the L1 interpolation.

Example 7.3.3. Let f be defined by

f(x, y) = a1 for 0 ≤ x ≤ 0.5,  a2 for 0.5 < x ≤ 1,

where a1 and a2 are the two constant values; f is piecewise constant. The plot of this function is given on the left in Figure 7.3. We create a sample of size n = 441 (21 points in each of the x and y directions), and the number of basis functions in each direction is N = 50. Figure 7.4 shows the interpolation results.

Example 7.3.4.
Let f be defined by

f(x, y) = b1 if 0.40 ≤ x ≤ 0.60 & 0.40 ≤ y ≤ 0.60,  b2 if x ≤ 0.20 | x ≥ 0.80 | y ≤ 0.20 | y ≥ 0.80,  b3 otherwise,

where & and | are the logical operators AND and OR, respectively, and b1, b2, b3 denote the constant values of f. The plot of this function is given on the right in Figure 7.3. We create a sample of size n = 676 (26 points in each direction), and the number of basis functions in each direction is N = 64. The interpolation results are shown in Figure 7.5.

Figure 7.3: Original discontinuous surfaces.
Figure 7.4: Comparison of L2 and L1 interpolation in 2D: Example 7.3.3.
Figure 7.5: Comparison of L2 and L1 interpolation in 2D: Example 7.3.4.

7.4 Regularized Least Squares

The L1 regularized least squares (also called smoothing) method looks for the solution of the unconstrained minimization problem

‖Au − f‖² + α‖Hu‖₁.   (7.4)

As in the interpolation case, the minimizer always exists but may not be unique. We intend to compare the performance of the L1 and L2 smoothing methods. Recall that the L2-based smoothing looks for the solution of

‖Au − f‖² + α uᵀGu,   (7.5)

which is equation (3.9). Since the degree of smoothing depends on the smoothing parameter α, we need to set up a criterion for choosing α that makes the results comparable. One may suggest choosing different α for the minimization problems (7.4) and (7.5) such that their solutions produce the same residual ‖Au − f‖². However, achieving this by setting α manually through trial and error can be very time-consuming. Since our goal here is to compare the effectiveness of the L1 and L2 smoothing methods, rather than solving problems (7.4) and (7.5) directly, we instead solve their corresponding dual problems

minimize ‖Hu‖₁   subject to ‖Au − f‖ ≤ √n σ,
minimize uᵀGu   subject to ‖Au − f‖ ≤ √n σ.
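These constrained problems can be attacked very simply in the L2 case: the residual ‖Au − f‖ of the penalized solution of (7.5) increases monotonically with α, so the α at which it meets the bound √n σ can be located by bisection. Below is a toy sketch of that search; the observation matrix, penalty matrix, ground-truth coefficients, and noise level are stand-in assumptions, not the thesis' A and G.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, sigma = 120, 60, 0.1
A = rng.standard_normal((n, N)) / np.sqrt(N)     # toy observation matrix
D = np.diff(np.eye(N), 1, axis=0)
G = D.T @ D                                      # toy penalty Gram matrix
u_true = np.sin(np.linspace(0.0, 3.0, N))        # toy ground-truth coefficients
f = A @ u_true + sigma * rng.standard_normal(n)

def residual(alpha):
    """Residual norm of the penalized solution for a given alpha."""
    u = np.linalg.solve(A.T @ A + alpha * G, A.T @ f)
    return float(np.linalg.norm(A @ u - f))

# residual(alpha) grows with alpha, so bisect (in log scale) for the alpha
# whose penalized solution meets the constraint ||Au - f|| = sqrt(n) * sigma.
target = np.sqrt(n) * sigma
lo, hi = 1e-10, 1e10
for _ in range(100):
    mid = np.sqrt(lo * hi)
    if residual(mid) < target:
        lo = mid
    else:
        hi = mid
alpha_star = np.sqrt(lo * hi)
```

In the thesis the multiplier is determined implicitly by solving the constrained problems themselves; the monotonicity argument here is only meant to illustrate why such an α exists and is computable.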
Here we assume that the additive noise satisfies N(0, σ) with a known σ. To see that the above two constrained minimization problems indeed correspond to the unconstrained problems (7.4) and (7.5), each with an implicitly determined α > 0, we refer to the following strong duality theorem (see e.g. [5]). Consider the general constrained minimization problem, called the primal problem,

minimize f(x)   subject to g(x) ≤ 0,  h(x) = 0,

where g and h are vector-valued functions. Its Lagrangian dual problem is

max_{α ≥ 0, β} min_x L(x, α, β),

where the Lagrangian function L is defined as

L(x, α, β) = f(x) + αᵀg(x) + βᵀh(x).

The strong duality theorem states that if f and each component of g are convex, the equality constraint h is linear, and furthermore there exists a point x such that g(x) < 0 (Slater's condition), then p*, the optimal value of the primal problem, is equal to d*, the optimal value of the dual problem. Moreover, if x* and (α*, β*) are the primal and dual optimal points, then (x*, α*, β*) is a saddle point of L, i.e.,

L(x*, α, β) ≤ L(x*, α*, β*) ≤ L(x, α*, β*)

for all x, all β, and all α ≥ 0.

In our setting, we have only one convex inequality constraint and no equality constraint. In the following numerical examples, a sufficiently large number of basis functions is taken to ensure that Slater's condition holds. Consequently, the saddle point property holds, i.e., if u₀ is the optimal point of the constrained problem, then

u₀ᵀGu₀ + α*‖Au₀ − f‖² ≤ uᵀGu + α*‖Au − f‖²   for all u.

Therefore, u₀ is also the solution of problem (7.5) with parameter 1/α*. In a similar way, one sees that the constrained L1 smoothing is equivalent to the unconstrained formulation (7.4) with an appropriate smoothing factor.

7.4.1 Numerical Examples

The simulated noisy data are generated by adding Gaussian noise to the functional values sampled from the 1D test functions used in interpolation.
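A minimal version of this experiment can be scripted end to end: generate noisy samples of a step function, then fit B-spline coefficients with an L1-type difference penalty. The iteratively reweighted least squares (IRLS) loop below is a simple stand-in for the thesis' primal-dual interior-point solver, and the step function, knot vector, and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

# Noisy samples z_i = f(x_i) + eps_i of a unit step at x = 0.5 (assumed test f).
rng = np.random.default_rng(2)
n, N, k = 150, 40, 3
x = np.sort(rng.random(n))
z = np.where(x < 0.5, 0.0, 1.0) + 0.05 * rng.standard_normal(n)

t = np.concatenate((np.zeros(k), np.linspace(0.0, 1.0, N - k + 1), np.ones(k)))
A = BSpline.design_matrix(x, t, k).toarray()   # n-by-N collocation matrix
D = np.diff(np.eye(N), 1, axis=0)              # first differences: stand-in for H

# IRLS: solve a sequence of weighted L2 problems whose fixed point mimics
# the L1 penalty alpha * ||Du||_1 (weights are floored at eps for stability).
alpha, eps = 1e-3, 1e-8
u = np.linalg.solve(A.T @ A + alpha * D.T @ D, A.T @ z)   # L2 warm start
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(D @ u), eps)
    u = np.linalg.solve(A.T @ A + alpha * D.T @ (w[:, None] * D), A.T @ z)

fit = BSpline(t, u, k)
# The L1-type penalty keeps the flat pieces flat and the step sharp:
# fit(0.25) stays near 0 and fit(0.75) near 1.
```

An L2 penalty in the same loop (fixed w = 1) would instead round off the step and ring near it, which is the qualitative contrast the examples below exhibit.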
The test data set is {(xi, zi) : i = 1, ..., n}, where

zi = f(xi) + εi,   (7.6)

the xi are random numbers uniformly distributed in [0, 1], and the εi are Gaussian noise with distribution N(0, σ).

Example 7.4.1. A noisy sample (n = 100) is generated by equation (7.6) with f defined in Example 7.3.1 and σ = 0.1. The plot of f and the noisy sample is shown on the top in Figure 7.6. The L2 and L1 smoothing lead to the fitting curves shown in the middle and bottom subfigures, respectively.

Example 7.4.2. A noisy sample (n = 150) is generated by equation (7.6) with f defined in Example 7.3.2 and σ = 0.4. As in the above example, Figure 7.7 shows the smoothing curves produced by the L2 and L1 smoothing methods.

In both of the above examples, the two smoothing curves give the same residual (with respect to the noisy functional values) at the data sites, i.e., ‖Au − f‖ = √n σ. However, the L1 smoothing gives a more satisfying result in that the edge is better recovered and far fewer oscillations are present in the smoothing curve.

Figure 7.6: Comparison of L2 and L1 smoothing in 1D: Example 7.4.1.
Figure 7.7: Comparison of L2 and L1 smoothing in 1D: Example 7.4.2.

Next we turn to the 2D case, where the test functions are the ones used in interpolation. The noisy samples are produced in a similar way as in 1D smoothing, by adding Gaussian noise to the functional values at the grid data sites. Again these examples demonstrate that the L1 smoothing is better than the L2 smoothing at preserving edges.

Example 7.4.3. A noisy sample (n = 676) is generated by equation (7.6) with f defined in Example 7.3.3 and σ = 0.04. The plot of the noisy sample is shown in the top subfigure of Figure 7.8.
The L2 and L1 smoothing lead to the smoothed surfaces shown in the lower-left and lower-right subfigures, respectively.

Example 7.4.4. A noisy sample (n = 676) is generated by equation (7.6) with f defined in Example 7.3.4 and σ = 0.01. The plot of the noisy sample is shown in the top subfigure of Figure 7.9. The L2 and L1 smoothing lead to the smoothed surfaces shown in the lower-left and lower-right subfigures, respectively.

Figure 7.8: Comparison of L2 and L1 smoothing in 2D: Example 7.4.3.
Figure 7.9: Comparison of L2 and L1 smoothing in 2D: Example 7.4.4.

Bibliography

[1] R. Adams. Sobolev spaces. Academic Press, 1975.
[2] H. Akima. A method of bivariate interpolation and smooth surface fitting for irregularly distributed data. ACM Trans. Math. Software, 4:148–159, 1978.
[3] M. Arigovindan, M. Suhling, P. Hunziker, and M. Unser. Variational image reconstruction from arbitrarily spaced samples: a fast multiresolution spline solution. IEEE Trans. Image Processing, 14(4):450–460, 2005.
[4] G. Aubert, J. Bect, L. Blanc-Féraud, and A. Chambolle. A ℓ1-unified variational framework for image restoration. European Conference on Computer Vision, 4:1–13, 2004.
[5] M.S. Bazaraa, H.D. Sherali, and C.M. Shetty. Nonlinear Programming: Theory and Algorithms. Wiley, 2nd edition, 1993.
[6] R.K. Beatson, J.B. Cherrie, and C.T. Mouat. Fast fitting of radial basis functions: methods based on preconditioned GMRES iteration. Adv. Comp. Math., 11:253–270, 1999.
[7] R.K. Beatson, W.A. Light, and S. Billings. Fast solution of the radial basis function interpolation equations: domain decomposition methods. SIAM J. Sci. Comput., 22(5):1717–1740, 2000.
[8] C. de Boor. A practical guide to splines.
Springer-Verlag, New York, 2001.
[9] C. de Boor. Spline toolbox user's guide. The MathWorks, Inc., 2004.
[10] C. de Boor, R.A. DeVore, and A. Ron. Approximation from shift-invariant subspaces of L2(Rd). Trans. Amer. Math. Soc., 341(2):787–806, 1994.
[11] C. de Boor, K. Höllig, and S. Riemenschneider. Box splines. Springer-Verlag, New York, 1993.
[12] C. de Boor and A. Ron. Fourier analysis of the approximation power of principal shift-invariant spaces. Constr. Approx., 8(4):427–462, 1992.
[13] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, Cambridge, 2004.
[14] J.H. Bramble, J.E. Pasciak, and J. Xu. Parallel multilevel preconditioners. Math. Comp., 55(191):1–22, 1990.
[15] S.C. Brenner and L.R. Scott. The Mathematical Theory of Finite Element Methods. Springer-Verlag, New York, 1994.
[16] M.D. Buhmann. Radial basis functions: theory and implementations. Cambridge University Press, 2003.
[17] R.L. Burden and J.D. Faires. Numerical Analysis. PWS-Kent Pub. Co., Boston, 1993.
[18] V.I. Burenkov. On the extension of functions with preservation of seminorms. Dokl. Akad. Nauk SSSR, 228:779–782, 1976. English transl. in Soviet Math. Dokl., 17, 1976.
[19] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, 2006.
[20] J.C. Carr, R.K. Beatson, J.B. Cherrie, T.J. Mitchell, W.R. Fright, B.C. McCallum, and T.R. Evans. Reconstruction and representation of 3D objects with radial basis functions. In Computer Graphics (SIGGRAPH 2001 Proceedings), pages 67–76, 2001.
[21] A. Cohen, I. Daubechies, and J.C. Feauveau. Biorthogonal bases of compactly supported wavelets. Comm. Pure Appl. Math., 45(5):485–560, 1992.
[22] P. Craven and G. Wahba. Smoothing noisy data with spline functions. Numer. Math., 31(4):377–403, 1979.
[23] D. Castaño and A. Kunoth.
Multilevel regularization of wavelet based fitting of scattered data – some experiments. Numer. Algor., 39(1-3):81–96, 2005.
[24] W. Dahmen and A. Kunoth. Multilevel preconditioning. Numer. Math., 63:315–344, 1992.
[25] I. Daubechies. Orthonormal bases of compactly supported wavelets. Comm. Pure Appl. Math., 41(7):909–996, 1988.
[26] I. Daubechies. Ten lectures on wavelets. SIAM, Philadelphia, 1992.
[27] I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math., 57(11):1413–1457, 2004.
[28] J.W. Demmel. Applied numerical linear algebra. SIAM, Philadelphia, 1997.
[29] R.A. DeVore. Nonlinear approximation. Acta Numer., 7:51–150, 1998.
[30] P. Dierckx. Curve and surface fitting with splines. Oxford University Press, 1993.
[31] D. Donoho, X.M. Huo, and T. Yu. WaveLab. Available at http://www-stat.stanford.edu/~wavelab/WaveLab701.html.
[32] J. Duchon. Sur l'erreur d'interpolation des fonctions de plusieurs variables par les Dm-splines. RAIRO Anal. Numér., 12(4):325–334, 1978.
[33] N. Dyn, D. Levin, and S. Rippa. Numerical procedures for surface fitting of scattered data by radial functions. SIAM J. Sci. Statist. Comput., 7(2):639–659, 1986.
[34] R. Franke. Scattered data interpolation: tests of some methods. Math. Comp., 38(157):181–200, 1982.
[35] R. Franke and G.M. Nielson. Scattered data interpolation and applications: a tutorial and survey. In Geometric Modelling, pages 131–160. Springer, Berlin, 1991.
[36] I.M. Gelfand and S.V. Fomin. Calculus of variations. Prentice-Hall, 1963.
[37] G.H. Golub, M. Heath, and G. Wahba. Generalized cross validation as a method for choosing a good ridge parameter. Technometrics, 21(2):215–223, 1979.
[38] S.J. Gortler and M.F. Cohen. Hierarchical and variational geometric modeling with wavelets. In Proceedings Symposium on Interactive 3D Graphics, pages 35–42, 1995.
[39] P.J. Green and B.W. Silverman.
Nonparametric regression and generalized linear models: a roughness penalty approach. Chapman and Hall, 1994.
[40] A. Greenbaum. Iterative methods for solving linear systems. SIAM, Philadelphia, 1997.
[41] B. Han and Z.W. Shen. Wavelets from the Loop scheme. J. Fourier Anal. Appl., 11(6):615–637, 2005.
[42] B. Han and Z.W. Shen. Wavelets with short support. SIAM J. Math. Anal., 38(2):530–556, 2006.
[43] B. Han and Z.W. Shen. Dual wavelet frames and Riesz bases in Sobolev spaces. Preprint, 2007.
[44] P.C. Hansen. Regularization tools: a Matlab package for analysis and solution of discrete ill-posed problems. Numer. Algorithms, 6:1–35, 1994.
[45] R.L. Hardy. Multiquadric equations of topography and other irregular surfaces. J. Geophys. Res., 76:1905–1915, 1971.
[46] X.M. He, L.X. Shen, and Z.W. Shen. A data-adaptive knot selection scheme for fitting splines. IEEE Signal Processing Letters, 8(5):137–139, 2001.
[47] H. Inoue. A least-squares smooth fitting for irregularly spaced data: finite-element approach using the cubic B-spline basis. Geophysics, 51(11):2051–2066, 1986.
[48] S. Jaffard. Wavelet methods for fast resolution of elliptic problems. SIAM J. Numer. Anal., 29(4):965–986, 1992.
[49] R.Q. Jia. The Toeplitz theorem and its applications to approximation theory and linear PDE's. Trans. Amer. Math. Soc., 347(7):2585–2594, 1995.
[50] R.Q. Jia and J. Lei. Approximation by multiinteger translates of functions having global support. J. Approx. Theory, 72(1):2–23, 1993.
[51] R.Q. Jia and Z.W. Shen. Multiresolution and wavelets. Proc. Edinburgh Math. Soc., 37(2):271–300, 1994.
[52] M.J. Johnson. Scattered data interpolation from principal shift-invariant spaces. J. Approx. Theory, 113(2):172–188, 2001.
[53] M.J. Johnson, Z.W. Shen, and Y.H. Xu. Scattered data reconstruction by regularization in B-spline and associated wavelet spaces. Preprint.
[54] J.E. Lavery. Shape-preserving, multiscale fitting of univariate data by cubic L1 smoothing splines.
Comput. Aided Geom. Design, 17(7):715–727, 2000.
[55] J.E. Lavery. Univariate cubic Lp splines and shape-preserving, multiscale interpolation by univariate cubic L1 splines. Comput. Aided Geom. Design, 17(4):319–336, 2000.
[56] C.L. Lawson. Software for C1 surface interpolation. In Mathematical Software III. Academic Press, New York, 1977.
[57] S. Lee, G. Wolberg, and S. Shin. Scattered data interpolation with multilevel B-splines. IEEE Trans. Vis. Comput. Graph., 3(3):228–244, 1997.
[58] L.I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259–268, 1992.
[59] W.R. Madych and S.A. Nelson. Multivariate interpolation and conditionally positive definite functions. Math. Comp., 54(189):211–230, 1990.
[60] S. Mallat. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. Pattern Anal. Machine Intell., 11(7):674–693, 1989.
[61] J. Meinguet. Multivariate interpolation at arbitrary points made simple. Z. Angew. Math. Phys., 30:292–304, 1979.
[62] Y. Meyer. Wavelets and operators. Cambridge University Press, 1992.
[63] C.A. Micchelli. Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constr. Approx., 2(1):11–22, 1986.
[64] F.J. Narcowich, J.D. Ward, and H. Wendland. Sobolev bounds on functions with scattered zeros with applications to radial basis function surface fitting. Math. Comp., 74(250):743–763, 2005.
[65] J. Nocedal and S. Wright. Numerical optimization. Springer-Verlag, New York, 1999.
[66] S. Osher and R. Fedkiw. Level set methods and dynamic implicit surfaces. Springer-Verlag, New York, 2002.
[67] Y. Saad. Iterative methods for sparse linear systems. SIAM, 2nd edition, 2003.
[68] R. Sibson and G. Stone. Computation of thin-plate splines. SIAM J. Sci. Statist. Comput., 12(6):1304–1313, 1991.
[69] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim.
Methods Softw., 11/12(1-4):625–653, 1999.
[70] R. Szeliski. Fast surface interpolation using hierarchical basis functions. IEEE Trans. Pattern Anal. Machine Intell., 12(6):513–528, 1990.
[71] D. Terzopoulos. Regularization of inverse visual problems involving discontinuities. IEEE Trans. Pattern Anal. Machine Intell., 8(4):413–424, 1986.
[72] K.C. Toh, R.H. Tütüncü, and M.J. Todd. On the implementation and usage of SDPT3 - a MATLAB software package for semidefinite-quadratic-linear programming, version 4.0. 2006.
[73] M. Unser. Splines: a perfect fit for signal and image processing. IEEE Signal Processing Mag., 16:22–38, 1999.
[74] C. Vazquez, E. Dubois, and J. Konrad. Reconstruction of nonuniformly sampled images in spline spaces. IEEE Trans. Image Processing, 14(6):713–725, 2005.
[75] G. Wahba. Spline models for observational data. SIAM, 1990.
[76] H. Wendland. Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Adv. Comput. Math., 4:389–396, 1995.
[77] H. Woltring. A Fortran package for generalized, cross-validatory spline smoothing and differentiation. Adv. Eng. Software, 8:104–113, 1986.
[78] S. Wright. Primal-dual interior-point methods. SIAM, Philadelphia, 1997.
[79] W. Yin, S. Osher, D. Goldfarb, and J. Darbon. Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imaging Sciences, 1(1):143–168, 2008.
[80] H. Yserentant. On the multi-level splitting of finite element spaces. Numer. Math., 49(4):379–412, 1986.
