COHERENCE IMAGING
PROBLEMS

6.6 Coherent and Incoherent Imaging

(a) Given that the impulse response for incoherent imaging is the square of the impulse response for coherent imaging, does an incoherent imaging system implement a linear transformation?

(b) Does the higher bandpass of an incoherent imaging system yield higher spatial resolution than the comparable coherent imaging system?

(c) Is it possible for the image of a coherent object to be an exact replica? (For instance, can the image be equal, modulo phase constants, to the input object with no blurring?)

(d) Is it possible for the image of an incoherent object to be an exact replica?

6.7 MTF and Resolution

An f/2 imaging system with a circular aperture of diameter A = 1000λ is distorted by an aberration over the pupil such that

P(x, y) = e^{iπαxy}     (6.149)

where α = 10⁻⁴/(AλF). Plot the MTF for this system. Estimate its angular resolution.

6.8 Coherent and Incoherent Imaging (suggested by J. Fienup)

The interference of two coherent plane waves produces the field

ψ(x, z) = [1 + cos(2πu₀x)] e^{i2πw₀z}     (6.150)

The interfering field is imaged as illustrated in Fig. 6.26. As we have seen, the coherent imaging system is bandlimited with maximum frequency u_max = A/λd_i. Experimentally, one observes that the image of the field is uniform (e.g., there is no harmonic modulation) for A < u₀λd_i. If, however, one places a "diffuser" at the object plane, then one observes harmonic modulation in the image irradiance as long as A ≥ u₀λd_i/2. The diffuser is described by the transmittance t(x) = e^{iφ(x)}, where φ(x) is a random process. Diffusers are typically formed from unpolished glass.

(a) Explain the role of the diffuser in increasing the imaging bandpass for this system.

Figure 6.26 Geometry for Problem 6.8.

(b) Without the diffuser, the image of the interference pattern is insensitive to defocus. Once the diffuser is inserted, the image requires careful focus. Explain this effect.
(c) Model this system in one dimension in MATLAB by considering the coherent field

f(x) = e^{−x²/σ²} [1 + cos(2πu₀x)]     (6.151)

for σ = 20/u₀. Plot |f|². Lowpass-filter f to the bandpass u_max = 0.9u₀ and plot the resulting function.

(d) Modulate f(x) with a random phasor t(x) = e^{iφ(x)} (simulating a diffuser). Plot the Fourier transforms of f(x) and of f(x)t(x). Lowpass-filter f(x)t(x) with u_max = 0.9u₀ and plot the resulting irradiance image.

(e) Phase modulation on scattering from diffuse sources produces "speckle" in coherent images. Discuss the resolution of speckle images.

6.9 OCT

A Fourier-domain OCT system is illuminated by a source covering the spectral range 800–850 nm. One uses this system to image objects of thickness up to 200 μm. Estimate the longitudinal spatial resolution one might achieve with this system and the spectral resolution necessary to operate it.

6.10 Resolution and 3D Imaging

Section 6.4.2 uses δθ_x ≈ λ/A and δz ≈ 8λz²/A² to approximate the angular and range resolution for 3D imaging using Eqn. (6.73). Derive and explain these limits. Compare the resolution of this approach with projection tomography and optical coherence tomography.

6.11 Bandwidth

Section 6.6.1 argues that the spatial bandwidth of the field does not change under free-space propagation. Nevertheless, one observes that coherent, incoherent, and partially coherent images blur under defocus. The blur suggests, of course, that bandwidth is reduced on propagation. Explain this paradox.
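A minimal numerical sketch of Problem 6.8(c) and (d) follows, in Python/NumPy rather than the MATLAB the text suggests. The unit choice u₀ = 1, the grid sizes, and the printed contrast metric (standing in for the requested plots) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid: with u0 = 1 the problem's sigma = 20/u0 becomes 20; dx gives
# 20 samples per period of cos(2*pi*u0*x).
u0 = 1.0
sigma = 20.0 / u0
N = 4096
dx = 0.05
x = (np.arange(N) - N // 2) * dx
u = np.fft.fftfreq(N, d=dx)

# Coherent field of Eqn (6.151)
f = np.exp(-((x / sigma) ** 2)) * (1.0 + np.cos(2 * np.pi * u0 * x))

def lowpass(field, umax):
    """Ideal coherent lowpass filter: zero all frequencies above umax."""
    F = np.fft.fft(field)
    F[np.abs(u) > umax] = 0.0
    return np.fft.ifft(F)

# (c) Without the diffuser, the sidebands at +/-u0 fall outside the
# bandpass umax = 0.9*u0, so the filtered irradiance loses its modulation.
I_plain = np.abs(lowpass(f, 0.9 * u0)) ** 2

# (d) A random-phase diffuser t(x) = exp(i*phi(x)) spreads the spectrum;
# the lowpassed irradiance becomes a speckled, strongly modulated pattern.
t = np.exp(2j * np.pi * rng.random(N))
I_diff = np.abs(lowpass(f * t, 0.9 * u0)) ** 2

def contrast(I):
    """(max - min)/(max + min) over the central region of the pattern."""
    c = I[N // 2 - 200 : N // 2 + 200]
    return (c.max() - c.min()) / (c.max() + c.min())

print("contrast without diffuser:", contrast(I_plain))
print("contrast with diffuser:   ", contrast(I_diff))
```

The diffused case retains strong modulation after the same lowpass filter, consistent with the experimental observation quoted in the problem statement.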
6.12 Coherent-Mode Decomposition

Consider a source consisting of three mutually incoherent Gaussian beams at focus in the plane z = 0. The source is described by the cross-spectral density

W(x₁, y₁, x₂, y₂, ν) = S(ν) [ f₀((x₁ − 0.5Δ)/Δ) f₀((y₁ − 0.5Δ)/Δ) f₀((x₂ − 0.5Δ)/Δ) f₀((y₂ − 0.5Δ)/Δ)
    + f₀((x₁ − 0.5Δ)/Δ) f₀((y₁ + 0.5Δ)/Δ) f₀((x₂ − 0.5Δ)/Δ) f₀((y₂ + 0.5Δ)/Δ)
    + f₀((x₁ + 0.5Δ)/Δ) f₀((y₁ + 0.5Δ)/Δ) f₀((x₂ + 0.5Δ)/Δ) f₀((y₂ + 0.5Δ)/Δ) ]     (6.152)

(a) In analogy with Fig. 6.24, plot the irradiance, cross sections of the cross-spectral density, and the coherent modes in the plane z = 0.

(b) Assuming that Δ = 100λ, plot the coherent modes and the spectral density in the plane z = 1000λ.

6.13 Time Reversal and Beamsplitters

Consider the pellicle beamsplitter illustrated in Fig. 6.27. The beamsplitter may be illuminated by input beams from the left or from below. Assume that the illuminating beams are monochromatic TE-polarized plane waves (e.g., E is parallel to the y axis). The amplitudes of the incident waves on ports 1 and 2 are E_i1 and E_i2. The beamsplitter consists of a dielectric plate of index n = 1.5. Write a computer program to calculate the matrix transformation from the input mode amplitudes to the output plane wave amplitudes E_o1 and E_o2. Show numerically, for specific plate thicknesses ranging from 0.1λ to 0.7λ, that this transformation is unitary.

Figure 6.27 Pellicle beamsplitter geometry.

6.14 Wigner Functions

Plot the 2D Wigner distributions corresponding to the 1D Hermite–Gaussian modes described by Eqn. (3.55) for n = 0, 2, 7. Confirm that the Wigner distributions are real, and numerically confirm the orthogonality relationship given in Eqn. (6.146).

SAMPLING

If a function f(t) contains no frequencies higher than W cps, it is completely determined by giving its ordinates at a series of points spaced 1/(2W) seconds apart. This is a fact which is common knowledge in the communication art.
—C. Shannon [219]

7.1 SAMPLES AND PIXELS

"Sampling" refers both to the process of drawing discrete measurements from a signal and to the representation of a signal using discrete numbers. It is helpful in computational sensor design and analysis to articulate distinctions between the various roles of sampling:

† Measurement sampling refers to the generation of discrete digital values from physical systems.
A measurement sample may consist, for example, of the current or voltage returned from a CCD pixel.

† Analysis sampling refers to the generation of an array of digital values describing a signal. An analysis sample may consist, for example, of an estimated wavelet coefficient for the object signal.

† Display sampling refers to the generation of discrete pixel values for display of the estimated image or spectrum.

Hard separations between sampling categories are difficult and unnecessary. For example, there is no magic transition point between measurement and analysis samples in the train of signal filtering, digitization, and readout. Similarly, analysis samples themselves are often presented in raw form as display samples. In the context of system analysis, however, one may easily distinguish measurement, analysis, and display.

Optical Imaging and Spectroscopy. By David J. Brady. Copyright © 2009 John Wiley & Sons, Inc.

A mathematical and/or physical process is associated with each type of sample. The measurement process implements a mapping from the continuous object signal f to discrete measurement data g. In optical imaging systems this mapping is linear:

g_i = ∫ f(x) h_i(x) dx     (7.1)

g consists of measurement samples. The analysis process transforms the measurements into data representative of the object signal. Specifically, the analysis problem is:

Given g, derive the set of values f_a that best represent f.

The postdetection analysis samples f_a may correspond, for example, to estimates of the basis coefficients f_n from Eqn. (7.2), to estimates of coefficients on a different basis, or to parameters in a nonlinear representation algorithm. We may assume, for example, that f is well represented on a basis ψ_n(x) such that

f(x) = Σ_n f_n ψ_n(x)     (7.2)

and Eqn. (7.1) reduces to

g = Hf     (7.3)

where h_ij = ∫ h_i(x) ψ_j(x) dx. The vector f with elements f_n consists of analysis samples, and the analysis process consists of inversion of Eqn. (7.3). Algorithms for implementing this inversion are discussed in Chapter 8.
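The measurement/analysis chain of Eqns. (7.1)-(7.3) can be sketched numerically. The box-shaped measurement kernels h_i, the truncated Fourier basis ψ_n, and the test object below are illustrative assumptions, not choices made in the text:

```python
import numpy as np

# Fine grid standing in for the continuous coordinate x on [0, 1)
M = 1000
x = np.linspace(0.0, 1.0, M, endpoint=False)
dx = x[1] - x[0]

# Object: a smooth signal built from two harmonics (illustrative)
f = 1.0 + 0.6 * np.cos(2 * np.pi * 3 * x) + 0.3 * np.sin(2 * np.pi * 5 * x)

# Measurement sampling, Eqn (7.1): g_i = integral of f(x) h_i(x) dx,
# with h_i a box of width 1/16 centered on detector i (crude pixel model)
P = 16
H_meas = np.zeros((P, M))
for i in range(P):
    H_meas[i, (x >= i / P) & (x < (i + 1) / P)] = 1.0
g = H_meas @ f * dx

# Analysis sampling, Eqns (7.2)-(7.3): represent f on a truncated Fourier
# basis psi_n and invert g = H fa by least squares, h_ij = integral h_i psi_j
n = np.arange(-5, 6)
Psi = np.exp(2j * np.pi * np.outer(x, n))      # columns are psi_n(x)
H = H_meas @ Psi * dx                          # h_ij
fa, *_ = np.linalg.lstsq(H, g.astype(complex), rcond=None)

# Display sampling: re-expand the analysis coefficients on the basis
f_display = (Psi @ fa).real
print("max reconstruction error:", np.max(np.abs(f_display - f)))
```

Because the object lies exactly in the span of the assumed basis and the 16 box measurements are independent on that 11-dimensional basis, the least-squares inversion recovers the signal to numerical precision; a real system adds noise and model error at both stages.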
Display samples are associated with the processes of signal interpolation or feature detection. A display vector consists of a set of discrete values f_d that are assigned to each modulator in a liquid crystal or digital micromirror array, or that are assigned to points on a sheet of paper by a digital printer. Display samples may also consist of principal component weights or other feature signatures used in object recognition or biometric algorithms. While one may estimate the display values directly from g, the direct approach is unattractive in systems where the display is not integrated with the sensor system. Typically, one uses a minimal set of analysis samples to represent a signal for data storage and transmission. One then uses interpolation algorithms to adapt and expand the analysis samples for diverse display and feature estimation systems.

Pinhole and coded aperture imaging systems provide a simple example of the distinctions between measurement, analysis, and display. We saw in Eqn. (2.38) that f_a naturally corresponds to projection of f(x, y) onto a first-order B-spline basis. Problem 3.12 considers estimation of display values of f(x, y) from these projections using biorthogonal scaling functions. For the coded aperture system, measurement samples are described by Eqn. (2.39), analysis samples by Eqn. (2.38), and display samples by Eqn. (3.125). Of course, Eqn. (3.125) is simply an algorithm for estimating f(x); actual plots or displays consist of a finite set of signal estimates. In subsequent discussion, we refer both to display samples and to Haar or impulse function elements as picture elements or pixels (voxels for 3D or image data cube elements).

We developed a general model of measurement, analysis, and display for coded aperture imaging in Chapter 2, but our subsequent discussions of wave and coherence imaging have not included complete models of the continuous-to-discrete-to-display process.
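The interpolation step from analysis samples to display samples can be sketched with a first-order B-spline (piecewise-linear) interpolant, in the spirit of the B-spline representation mentioned above. The grid pitches and the cosine test signal are illustrative assumptions:

```python
import numpy as np

# Analysis samples: coarse estimates of f on a grid of pitch D (assumed)
D = 0.1
xa = np.arange(0.0, 1.0, D)
fa = np.cos(2 * np.pi * xa)          # stand-in analysis samples

# Display samples: a denser pixel grid filled by first-order B-spline
# (piecewise-linear) interpolation of the analysis samples
xd = np.linspace(0.0, 0.9, 451)
fd = np.interp(xd, xa, fa)

# The interpolant agrees with the analysis samples at the original knots
print(np.allclose(fd[::50], fa))
```

The same analysis samples can feed many such display grids, which is the point of storing the minimal analysis representation rather than display values.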
This chapter begins to rectify this deficiency. In particular, we focus on the process of measurement sample acquisition. Chapter 8 focuses on analysis sample generation, especially with regard to codesign strategies for measurement and analysis. In view of our focus on image acquisition and measurement layer design, this text does not consider display sample generation or image exploitation.

Section 7.2 considers sampling in focal imaging systems. Focal imaging is particularly straightforward in that measurements are isomorphic to the image signal. Section 7.3 generalizes our sampling model to include focal spectral imaging. Section 7.4 describes interesting sampling features encountered in practical focal systems. The basic assumption underlying Sections 7.2-7.4, that local isomorphic sampling of object features is possible, is not necessarily valid in optical sensing. In view of this fact, Section 7.5 considers "generalized sampling." Generalized sampling forsakes even the attempt to maintain measurement/signal isomorphism and uses deliberately anisomorphic sensing as a mechanism for improving imaging system performance metrics.

7.2 IMAGE PLANE SAMPLING ON ELECTRONIC DETECTOR ARRAYS

Image plane sampling is illustrated in Fig. 7.1. The object field f(x, y) is blurred by an imaging system with shift-invariant PSF h(x, y). The image is sampled on a 2D detector array. The detector pitch is Δ, and the extent of the focal plane in x and y is X and Y.

Figure 7.1 Image plane sampling.

The full transformation from the continuous image to a discrete two-dimensional dataset g is modeled as

g_nm = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−X/2}^{X/2} ∫_{−Y/2}^{Y/2} f(x, y) h(x′ − x, y′ − y) p(x′ − nΔ, y′ − mΔ) dx′ dy′ dx dy     (7.4)

where p(x, y) is the pixel sampling function. For rectangular full-fill-factor pixels, for example, p(x, y) = rect(x/Δ) rect(y/Δ).
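A one-dimensional discretization of the sampling model of Eqn. (7.4) can be sketched as follows; the Gaussian stand-in PSF, the grid sizes, and the test object are illustrative assumptions:

```python
import numpy as np

# Discretized 1D version of Eqn (7.4) (reduced to 1D for brevity):
# g_n = integral of f(x) h(x' - x) p(x' - n*Delta) dx' dx
M = 2400
dx = 1.0 / 800                      # fine grid step
x = (np.arange(M) - M // 2) * dx

f = (np.abs(x) < 0.5).astype(float) * (1 + np.sin(2 * np.pi * 8 * x))  # object
h = np.exp(-((x / 0.01) ** 2))      # Gaussian stand-in for the optical PSF
h /= h.sum() * dx                   # normalize to unit area

# Optical blur: continuous image irradiance g(x') = (f * h)(x')
img = np.convolve(f, h, mode="same") * dx

# Pixel sampling: full-fill-factor pixels, p = rect(x/Delta), pitch Delta
Delta = 0.05
w = int(round(Delta / dx))          # fine-grid samples per pixel
P = M // w
g = img[: P * w].reshape(P, w).sum(axis=1) * dx   # integrate over each pixel

print("pixels:", g.shape[0], " flux ratio:", g.sum() / (f.sum() * dx))
```

Because the normalized blur and the tiling pixels both conserve energy, the total detected flux matches the object flux up to edge effects; the pixel sums implement the p(x′ − nΔ) integral of Eqn. (7.4).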
Several assumptions are implicit in the sampling model of Eqn. (7.4). The object distribution, the optical PSF, and the pixel sampling function are in general all dependent on the optical wavelength λ. For simplicity, we assume in most of this section that the field is quasimonochromatic, such that we can neglect the wavelength dependence of these functions. We also focus on irradiance imaging, meaning that f(x, y) and h(x, y) are nonnegative. We neglect, for the moment, the possibility of 3D object distributions. We also neglect complexity in the pixel sampling function, such as crosstalk, shading, and nonuniform response.

The function g consists of discrete samples of the continuous function

g(x″, y″) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−X/2}^{X/2} ∫_{−Y/2}^{Y/2} f(x, y) h(x′ − x, y′ − y) p(x′ − x″, y′ − y″) dx′ dy′ dx dy     (7.5)

g(x, y) is, in fact, a bandlimited function and can, according to the Whittaker-Shannon sampling theorem [Eqn. (3.92)], be reconstructed in continuous form from the discrete samples g_nm. To show that g(x, y) is bandlimited, we note from the convolution theorem that the Fourier transform of g(x, y) is

ĝ(u, v) = f̂(u, v) ĥ(u, v) p̂(u, v)     (7.6)

ĥ(u, v) is the optical transfer function (OTF) of Section 6.4.1, and its magnitude is the optical modulation transfer function. In analogy with the OTF, we refer to p̂(u, v) as the pixel transfer function (PTF) and to the product ĥ(u, v) p̂(u, v) as the system transfer function (STF). Figure 7.2 illustrates the magnitude of the OTF [e.g., the modulation transfer function (MTF)] for an object at infinity imaged through an aberration-free circular aperture, and the PTF for a square pixel of size Δ.

Figure 7.2 MTF for a monochromatic aberration-free circular aperture and PTF for a square pixel of size Δ: (a) MTF; (b) MTF cross section; (c) PTF; (d) PTF cross section.

One assumes in most cases that the object distribution f(x, y) is not bandlimited. Since the pixel sampling function p(x, y) is spatially compact, it also is not bandlimited. As discussed in Section 6.4.1, however, the optical transfer function ĥ(u, v) for a well-focused quasimonochromatic planar incoherent imaging system is limited to a bandpass of radius 1/(λ f/#).
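The OTF/PTF/STF factorization of Eqn. (7.6) can be sketched numerically using the standard diffraction-limited MTF of a circular aperture and the sinc-shaped PTF of a full-fill-factor square pixel; the frequency grid and the choice of Nyquist pitch are illustrative:

```python
import numpy as np

def mtf_circ(u, uc):
    """Diffraction-limited incoherent MTF of a circular aperture with
    cutoff uc = 1/(lambda * f/#)."""
    r = np.clip(np.abs(u) / uc, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r ** 2))

def ptf_square_pixel(u, delta):
    """Pixel transfer function of a full-fill-factor square pixel:
    magnitude of the Fourier transform of rect(x/delta)."""
    return np.abs(np.sinc(u * delta))   # np.sinc(t) = sin(pi t)/(pi t)

uc = 1.0                        # cutoff in units of 1/(lambda f/#)
u = np.linspace(0.0, 1.2 * uc, 601)
delta = 0.5 / uc                # Nyquist pitch, Delta = lambda (f/#) / 2

stf = mtf_circ(u, uc) * ptf_square_pixel(u, delta)

print("MTF at u=0:", mtf_circ(np.array([0.0]), uc)[0])
print("MTF at cutoff:", mtf_circ(np.array([uc]), uc)[0])
```

The MTF falls monotonically from 1 at zero frequency to 0 at the cutoff, and the pixel sinc further attenuates the product, so the STF is a monotonically decreasing cross section like the one plotted in Fig. 7.2.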
Lowpass filtering by the optical system means that aliasing is avoided for

Δ ≤ λ(f/#)/2     (7.7)

The factor of 2 in Eqn. (7.7) arises because the width of the OTF is equal to the autocorrelation of the pupil function. We saw in Section 4.7 that a coherent imaging system could be characterized without aliasing with a sampling period equal to λ(f/#); the distinction between the two cases arises from the relative widths of the coherent and incoherent transfer functions, as illustrated in Figs. 4.14 and 6.17.

The goals of the present section are to develop familiarity with discrete analysis of imaging systems, to consider the impact of the pixel pitch Δ and the pixel sampling function p(x, y) on system performance, and to extend the utility of Fourier analysis tools to discrete systems. We begin by revisiting the sampling theorem. Using the Fourier convolution theorem, the double convolution in Eqn. (7.4) may be represented as

g_nm = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{2πiunΔ} e^{2πivmΔ} f̂(u, v) ĥ(u, v) p̂(u, v) du dv     (7.8)

The discrete Fourier transform of g is

ĝ_{n′m′} = (1/N) Σ_{n=−N/2+1}^{N/2} Σ_{m=−N/2+1}^{N/2} e^{−i2πnn′/N} e^{−i2πmm′/N} g_nm
         = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{−πi[(n′/N)−uΔ]} e^{−πi[(m′/N)−vΔ]} · {sin[π(n′ − uX)] / sin{π[(n′/N) − uΔ]}} · {sin[π(m′ − vY)] / sin{π[(m′/N) − vΔ]}} f̂(u, v) ĥ(u, v) p̂(u, v) du dv     (7.9)

Under the approximation that

sin[π(n′ − uX)] / sin{π[(n′/N) − uΔ]} ≈ Σ_n (−1)ⁿ N sinc(uX − n′ − nN)     (7.10)

we find that ĝ_{n′m′} is a projection of ĝ(u, v) onto the Shannon scaling function basis described in Section 3.8. Specifically,

ĝ_{n′m′} ≈ Σ_{n=−∞}^{∞} Σ_{m=−∞}^{∞} ĝ((n′ + nN)/X, (m′ + mN)/Y)     (7.11)

Since ĝ_{n′m′} is periodic in n′ and m′, ĝ tiles the discrete Fourier space with discretely sampled copies of ĝ(u, v). We previously encountered this tiling in the context of the sampling theorem, as illustrated in Fig. 3.4. Figure 7.3 is a revised copy of Fig. 3.4 allowing for the possibility of aliasing.
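The spectral replication of Eqn. (7.11) and the resulting aliasing can be sketched in a few lines: a harmonic at u₀ sampled below the Nyquist rate reappears at the difference frequency |u₀ − 1/Δ|. The frequencies and grid length here are illustrative assumptions:

```python
import numpy as np

# A single spatial frequency u0 sampled at pitch D; copies of its spectrum
# appear every 1/D (Eqn (7.11)). If 1/(2D) < u0 the adjacent copy aliases
# into the baseband at |u0 - 1/D|.
u0 = 10.0
N = 256

def sampled_peak(D):
    """Dominant nonzero DFT frequency of cos(2*pi*u0*x) sampled at pitch D."""
    x = np.arange(N) * D
    g = np.cos(2 * np.pi * u0 * x)
    spec = np.abs(np.fft.rfft(g))
    spec[0] = 0.0                       # ignore the DC bin
    freqs = np.fft.rfftfreq(N, d=D)
    return freqs[np.argmax(spec)]

print(sampled_peak(1 / 32))   # 1/(2D) = 16 > u0: the peak sits at u0
print(sampled_peak(1 / 16))   # 1/(2D) = 8 < u0: aliased to |u0 - 16| = 6
```

Once the copies overlap, the 6-cycle tone is indistinguishable from a true 6-cycle object feature, which is exactly the sense in which "an undistorted estimation of f is generally impossible."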
Figure 7.3 Periodic Fourier space of a sampled image.

ĝ[(n′ + nN)/X, (m′ + mN)/Y] is a discretely sampled copy of ĝ(u, v) centered on n′ = −nN. The Fourier-space separation between samples is δu = 1/X, and the separation between copies is N/X = 1/Δ. The value of ĝ_{n′m′} within a certain range is equal to the sum of the nonvanishing values of ĝ[(n′ + nN)/X, (m′ + mN)/Y] within that range. If more than one copy of ĝ is nonzero within any range of n′m′, then the measurements are said to be aliased, and an undistorted estimation of f(x, y) is generally impossible. Since the displacement between copies of ĝ(u, v) is determined by the sampling period Δ, it is possible to avoid aliasing for bandlimited ĝ by selecting sufficiently small Δ. Specifically, there is no aliasing if |ĝ(u, v)| vanishes for |u| ≥ 1/(2Δ). If aliasing is avoided, we find that

ĝ_nm = f̂(n/X, m/Y) ĥ(n/X, m/Y) p̂(n/X, m/Y)     (7.12)

Figures 7.4 and 7.5 illustrate the STF for Δ = 2λf/#, Δ = λf/#, and Δ = 0.5λf/#. Figure 7.4 plots the STF in spatial frequency units of 1/Δ. The plot is a cross section of the STF as a function of u. For rotationally symmetric systems, STF and MTF plots are typically plotted only on the positive frequency axis. Relative to the critical limit for aliasing, the STF of Fig. 7.4(a) is undersampled by a factor of 4, and that of Fig. 7.4(b) by a factor of 2. Figure 7.4(c) is sampled at the Nyquist frequency. The aliasing limit is 1/(2Δ) in all of Fig. 7.4(a)-(c). The STF above this limit in (a) and (b) transfers aliased object features into the sample.

Figure 7.4 Imaging system STF for various values of Δ: (a) Δ = 2λf/#; (b) Δ = λf/#; (c) Δ = 0.5λf/#.

7.5 GENERALIZED SAMPLING

Figure 7.22 Optical spectrograph for implementing group testing/generalized sampling strategies.

…wavemeter" to characterize a source in 256 wavelength bands with seven or fewer measurements.
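A minimal sketch of the group-testing idea: with a binary mask whose ith row holds bit i of the channel index (an illustrative code, not the UD2 or log_d codes of the text), log₂ N = 8 detector readings identify the channel of a single-wavelength source among N = 256:

```python
import numpy as np

# Binary group-testing mask for a single-wavelength source: row i of T is
# bit i of the column index, so the measurements g = T @ f read out the
# channel index of a one-hot spectrum directly in binary (log2 N rows).
N = 256
rows = int(np.log2(N))                             # 8 measurements for N = 256
j = np.arange(N)
T = (j[None, :] >> np.arange(rows)[:, None]) & 1   # rows x N binary mask

# One-hot spectrum: all power in channel 173 (arbitrary test channel)
f = np.zeros(N)
channel = 173
f[channel] = 1.0

g = T @ f                                          # one detector reading per row
decoded = int(sum(int(b) << i for i, b in enumerate(g > 0.5)))
print(decoded)                                     # recovers channel 173
```

Eight readings in place of 256 detectors is the economy the spectrograph of Fig. 7.22 exploits; the codes discussed in the text refine this simple binary scheme.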
The source in this case is a slit or pinhole; the goal is to identify the wavelength of the source in log_d N measurements. The collimator and frontend grating map the source onto the spectral encoding mask such that each color channel occupies a different transverse column; for instance, if the source power spectrum is S(λ), then the power spectrum illuminating the mask is f(x, y) = S(x − aλ). Section 9.2 describes how a dispersive spectrograph implements this transformation. For the present purposes it is sufficient to note that if the mask transmission t(x, y) is described by one of the codes of Fig. 7.21, then the spectral density on the plane immediately after the mask can be represented discretely as t_ij f_j, where f_j ≈ S(λ = jΔ/a) is the signal in the jth spectral channel. Optical components following the mask decollimate the light to focus on an output slit, which in this case is occupied by a detector array distributed along the y axis. The effect of the backend optics is to sum the spectral channels transmitted by the mask along the rows, such that the measured irradiance on the ith detector is g_i = Σ_j t_ij f_j.

Figure 7.23 UD2 code for N = 64 with 16 codewords. White corresponds to pixel value 1, and black corresponds to 0.

The spectrograph of Fig. 7.22 can implement any linear transformation on the input spectrum under the constraint that t_ij ≥ 0. Potuluri demonstrated the binary codes of Fig. 7.21 as a means of minimizing the number of detectors needed to characterize the spectrum. Minimizing the number of detectors is particularly important in wavelength ranges like the near- and midwave infrared, where detector cost scales linearly with the number of elements in the array. Dispersive systems similar to Fig. 7.22 have been combined with dynamic spatial light modulators to implement adaptive group testing [68].

The single-target group testing problem is generalized in binary "superimposed codes" introduced by Kautz and Singleton [135]. Rather than implementing g = Hf as a linear product, sensors using superimposed codes implement a logical OR operation on g_i =