
Development of spatial and temporal phase evaluation techniques in digital holography 2


CHAPTER TWO
LITERATURE REVIEW

2.1 Basic principles of holography

2.1.1 Wave theory of light

Light is electromagnetic radiation, particularly radiation of a wavelength that is visible to the human eye (about 400 nm – 700 nm). In the field of physics, the term light usually refers to electromagnetic radiation of any wavelength. Light exists in tiny packets called photons and can exhibit properties of both waves and particles; this property is referred to as the wave–particle duality. In addition, light waves can be described either by the electric or by the magnetic field in many applications. There are four primary properties of a light wave, i.e., intensity, frequency (or wavelength), polarization and phase. The study of light (also known as optics) is an important research area in modern physics and various application fields. Diffraction and interference are perfectly described by the wave model, which is based on the theory of classical electromagnetism. Interference and diffraction also form the basis of the holographic technique. Since electromagnetic waves obey the Maxwell equations (Kreis, 2005; Schnars and Jueptner, 2005), the propagation of a light wave in vacuum can be described by

∇²E − (1/c²) ∂²E/∂t² = 0    (2.1)

where E is the electric field strength and ∇² denotes the Laplace operator,

∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²    (2.2)

(x, y, z) denotes the spatial coordinate, t denotes the temporal coordinate, and c is the propagation speed of the light wave in vacuum (3.0 × 10⁸ m/s). The electric field E is a vector quantity and can vibrate in any direction perpendicular to the light wave propagation. However, in most real applications it is not essential to consider the full vector quantity, and vibration in a single plane is usually assumed. In this case, the light is called linearly polarized light. The above wave equation can then be rewritten in scalar form as (Kreis, 2005; Schnars and Jueptner, 2005)

∂²E/∂z² − (1/c²) ∂²E/∂t² = 0    (2.3)

where the propagation of the light is in the z direction. For a linearly polarized and harmonic plane wave, the important solution of Eq. (2.3) is described by

E(x, y, z; t) = A cos(2πft − k·r + ϕ₀)    (2.4)

where A denotes the real amplitude of the light wave, f denotes the frequency of the light wave, k is the wave vector, r is the spatial vector [r = (x, y, z)], and ϕ₀ denotes a constant phase. In this study, ϕ = −k·r + ϕ₀ is defined as the phase. In many optical applications, the use of a complex exponential for Eq. (2.4) can greatly facilitate the wave calculations and derivations of optical principles. The complex exponential can be written as

E(x, y, z; t) = A exp[j(2πft − k·r + ϕ₀)]    (2.5)

where j = √(−1). In practice, only the real part of this complex exponential represents the physical wave, and the term 2πft can be ignored since the spatial part of the electric field is of the most interest in most cases.

The wavelengths of visible light are in the range of 400 nm to 700 nm, and the corresponding light frequencies range from 4.3 × 10¹⁴ Hz to 7.5 × 10¹⁴ Hz. Hence, commonly-used sensors, such as photodiodes, photographic films and CCDs, are not able to detect such high frequencies. The only measurable quantity is the intensity, which is defined by the energy flux through an area per unit time. The intensity distribution I for a plane wave can be described by (Schnars and Jueptner, 2005)

I = ε₀c ⟨E²⟩ₜ = (1/2) ε₀c A²    (2.6)

where ⟨·⟩ₜ is the time average over the light periods, and ε₀ is the vacuum permittivity. In many applications, the constant factor ε₀c/2 can be ignored. For simplicity, the coordinates in Eq. (2.6) are omitted.
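
To make the complex-exponential representation concrete, the following minimal numpy sketch builds the plane wave of Eq. (2.5) (with the 2πft term dropped, as noted above) and recovers the measurable intensity of Eq. (2.6) as the squared modulus. The wavelength, sampling and amplitude are illustrative values, not taken from the thesis, and the constant factor ε₀c/2 is omitted.

```python
import numpy as np

# Illustrative parameters (not from the thesis): He-Ne wavelength, 1D spatial axis
wavelength = 633e-9                  # m
k = 2 * np.pi / wavelength           # wave number
z = np.linspace(0, 5e-6, 1000)       # propagation coordinate (m)
A, phi0 = 1.0, 0.3                   # real amplitude and constant phase

# Complex representation of Eq. (2.5) with the 2*pi*f*t term dropped,
# since only the spatial part of the field is of interest
E = A * np.exp(1j * (-k * z + phi0))

# A detector measures only the intensity, proportional to |E|^2, cf. Eq. (2.6)
I = np.abs(E) ** 2
print(I.min(), I.max())              # constant: a plane wave has uniform intensity
```
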
2.1.2 Interference

Interference is the superposition of two or more waves, which can result in a new wave pattern. In holography, interference usually refers to the interaction of monochromatic waves that are correlated or coherent with each other, since they may come from the same source or have the same frequencies and wavelengths (Schnars and Jueptner, 2005). In this study, two monochromatic waves are considered, and the complex amplitudes of these two waves are described by

E₁(x, y, z) = A₁ exp(jϕ₁)    (2.7)

E₂(x, y, z) = A₂ exp(jϕ₂)    (2.8)

The resultant intensity distribution is calculated from the summation of these two light waves as (Kreis, 2005)

I = (E₁ + E₂)(E₁ + E₂)* = |E₁|² + |E₂|² + E₁*E₂ + E₁E₂* = A₁² + A₂² + 2A₁A₂ cos(ϕ₁ − ϕ₂)    (2.9)

where the asterisk denotes the complex conjugate. It can be seen from Eq. (2.9) that constructive interference is formed when the value of (ϕ₁ − ϕ₂) is equal to 2nπ (n = 0, 1, …). Similarly, destructive interference is formed when the value of (ϕ₁ − ϕ₂) is equal to (2n + 1)π (n = 0, 1, …). When constructive interference is generated, the wavefronts can be considered to be in phase; when destructive interference is generated, the wavefronts can be considered to be out of phase. In digital holography, after the intensity distribution captured by the CCD is multiplied by a numerical reference wave, the first two terms (|E₁|² + |E₂|²) on the right-hand side of Eq. (2.9) form the zero-order term of the diffraction, and the third (E₁*E₂) and fourth (E₁E₂*) terms form the real and virtual images, respectively. It is assumed that the light wave E₂ represents the reference wave in digital holography.

The visibility or contrast of the interference pattern is defined by

V = (I_max − I_min) / (I_max + I_min)    (2.10)

where I_max and I_min denote the maxima and minima of two neighboring intensities, respectively. If two parallel polarized waves with the same intensity interfere, the visibility is unity; if incoherent superposition happens, the visibility is zero. The fringe spacing of the recorded interference pattern can be defined by the distance between two neighboring maxima points P₁ and P₂ as shown in Fig. 2.1. The fringe spacing is described by (Schnars and Jueptner, 2005)

d = λ / [2 sin((θ₁ + θ₂)/2)]    (2.11)

where λ denotes the wavelength of the light source, the angle θ₁ is between the first incident ray E₁ from the source and the unit vector n perpendicular to the aperture plane, and the angle θ₂ is between the second incident ray E₂ and n. The spatial frequency can be determined as the reciprocal of the fringe spacing.

Figure 2.1 Interference between two plane waves E₁ and E₂ (fringe spacing d on the recording plane).
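
The two-beam interference relations can be checked numerically. The sketch below assumes two unit-amplitude plane waves tilted by hypothetical angles θ₁ = θ₂ = 1° about the normal, and evaluates the resultant intensity of Eq. (2.9), the visibility of Eq. (2.10) and the fringe spacing of Eq. (2.11); all numerical values are made up for illustration.

```python
import numpy as np

wavelength = 633e-9                              # m, illustrative
theta1 = theta2 = np.deg2rad(1.0)                # hypothetical incidence angles
x = np.linspace(0, 0.5e-3, 4000)                 # coordinate on the recording plane (m)

k = 2 * np.pi / wavelength
A1, A2 = 1.0, 1.0
phi1 = k * np.sin(theta1) * x                    # linear phase of the first tilted plane wave
phi2 = -k * np.sin(theta2) * x                   # and of the second one

# Resultant intensity, Eq. (2.9)
I = A1**2 + A2**2 + 2 * A1 * A2 * np.cos(phi1 - phi2)

# Visibility, Eq. (2.10): unity for equal-intensity, fully coherent beams
V = (I.max() - I.min()) / (I.max() + I.min())

# Fringe spacing predicted by Eq. (2.11), about 18 micrometres for these angles
d = wavelength / (2 * np.sin((theta1 + theta2) / 2))
print(V, d)
```
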
2.1.3 Spatial and temporal coherence

It is well known that the interference phenomenon is rarely observed under natural sunlight or lamplight. This is mainly due to the lack of sufficient coherence with this type of illumination. Coherence is the measure of the ability of light to interfere. The coherence of two waves follows from how well correlated the individual waves are, and is derived from the phase relationship between two points which are separated in either space or time along the wave train. For instance, consider two points along the train that are spatially separated and moving with the train. If the phase relationship between the waves at these points remains constant in time, the waves between these points are coherent. On the other hand, if the phase relationship is random or rapidly changing, the waves at these two points are incoherent. Two aspects of general coherence are the spatial and the temporal coherence.

Spatial coherence describes the mutual correlation of different parts of the same wavefront and can be physically explained using Young's double-aperture interferometer arrangement (Kreis, 2005; Schnars and Jueptner, 2005) as shown in Fig. 2.2. In this arrangement, an aperture with two transparent holes is mounted between the light source and a screen, and the two holes are separated by a certain distance. It was demonstrated in the experiment that the interference pattern could be observed only when this distance is below a critical limit a_cl. In addition, the interference fringes also vanish as the distance between the light source and the aperture decreases. Hence, it can be concluded that the spatial coherence is not related to the spectral width of the light source, but depends on the properties of the light source and the geometry of the interferometer.

Figure 2.2 Experimental arrangement for Young's double-aperture interferometer (aperture with two holes H₁ and H₂ separated by a distance a).

Temporal coherence describes the correlation of a wave with itself at different instants and is related to the finite bandwidth of the light source. The temporal coherence length L is the greatest distance between two points for which the phase difference still remains constant in time. When the points are separated by a distance greater than the temporal coherence length, there is no phase correlation and the interference fringes vanish. Lasers commonly used in digital holography have typical temporal coherence lengths from a few millimeters to centimeters. The temporal-coherence property of a light source can be investigated using the Michelson-interferometer arrangement shown in Fig. 2.3. As can be seen in Fig. 2.3, by translating mirror 2 we can adjust the optical path difference between the two wave paths. Interference fringes can be observed only when the optical path difference is below the temporal coherence length L of the light source.

Figure 2.3 Michelson interferometer arrangement (light source, two mirrors, translation stage, recording plane).

2.1.4 Diffraction

Diffraction is broadly defined as the spreading out of light from its geometrically defined path. The diffraction phenomenon can be observed when light illuminates an opaque screen with some transparent holes, or a transparent medium with opaque structures. Figure 2.4 shows a basic arrangement for the generation of the diffraction phenomenon (Goodman, 1996). A light source S illuminates an aperture with a transparent hole H, and the light further propagates to the observation point R. This propagation was first qualitatively explained by Huygens, whose principle presented the idea that each point of a wavefront can act as a source of secondary wavelets, and the wavefront at any other place is the coherent superposition of these secondary waves. The new wavefront can be considered an 'envelope' of these secondary wavelets.

Figure 2.4 Diffraction based on an opaque screen with a transparent hole (light source S, aperture point H, observation point R, vector ρ_HR and aperture normal n).

With a simple assumption about the amplitude and phase of the secondary waves, Huygens's principle is able to accurately determine the light distribution of diffraction patterns (Goodman, 1996). Huygens's principle was mathematically described by Fresnel, who considered the approximation methods, and by Kirchhoff, who figured out all the correct multiplying terms. Subsequently, some problems inherent in the diffraction principle were solved by Fresnel and Fraunhofer. Several effective diffraction theories are now widely applied, such as the Kirchhoff theory, the first Rayleigh-Sommerfeld solution and the second Rayleigh-Sommerfeld solution. The Huygens-Fresnel principle, which is predicted by the first Rayleigh-Sommerfeld solution, can be mathematically described by (Goodman, 1996)

E(R) = (1/jλ) ∬_Σ E(H) [exp(jkρ_HR)/ρ_HR] cos θ ds    (2.12)

where k is the wave number (k = 2π/λ), and θ denotes the angle between the vectors n and ρ_HR. Equation (2.12) can be explained as follows: the complex amplitude E(R) at the observation point is a superposition of the diverging spherical waves [exp(jkρ_HR)/ρ_HR] originating from the secondary sources located at the points H within the aperture Σ.

2.1.5 Speckles

When a surface is illuminated by a light wave, each point on the illuminated surface acts as a source of secondary spherical waves according to the diffraction theory. The light at any other place is made up of waves scattered from each point of the illuminated surface. If the surface is rough enough to create path-length differences exceeding one wavelength, the intensity of the resultant light will vary randomly; this is called speckle. A typical speckle pattern is shown in Fig. 2.5. However, if light of low coherence (for instance, with multiple wavelengths) is applied, a speckle pattern is rarely observed. The reason is that the speckle patterns produced by the individual wavelengths have different dimensions and average one another out (Kreis, 2005). There are two main types of speckle patterns according to the experimental arrangement, i.e., objective and subjective speckle patterns. When laser light scattered by a rough surface falls directly on a screen without any intermediate imaging optics or system, an objective speckle pattern is formed. When the illuminated surface is focused with an imaging optics or system, a subjective speckle pattern is formed. In an objective or subjective speckle pattern, the speckle size d is calculated by d = λz/a, where z denotes the distance between the object (or the imaging optics) and the screen, and a denotes the dimension of the object or the aperture of the imaging optics.

In each state of the test specimen, each hologram is reconstructed independently. For instance, for two reconstructed complex amplitudes Γ₁(ξ′, η′) and Γ₂(ξ′, η′), the phase distributions are respectively calculated by

ϕ₁(ξ′, η′) = arctan { Im[Γ₁(ξ′, η′)] / Re[Γ₁(ξ′, η′)] }    (2.40)

ϕ₂(ξ′, η′) = arctan { Im[Γ₂(ξ′, η′)] / Re[Γ₂(ξ′, η′)] }    (2.41)

where index 1 refers to the first state, and index 2 to the second (or deformed) state. The phase difference map Δϕ(ξ′, η′) is directly determined by a digital phase subtraction method (Schnars, 1994; Kreis, 2005)

Δϕ(ξ′, η′) = ϕ₁(ξ′, η′) − ϕ₂(ξ′, η′)           if ϕ₁(ξ′, η′) ≥ ϕ₂(ξ′, η′)
Δϕ(ξ′, η′) = ϕ₁(ξ′, η′) − ϕ₂(ξ′, η′) + 2π      if ϕ₁(ξ′, η′) < ϕ₂(ξ′, η′)    (2.42)

The phase difference map obtained can also be called a wrapped phase map. The phase difference map determined by Eq. (2.42) contains 2π jumps, so a phase unwrapping algorithm (described in Section 2.8) is usually required. Before the implementation of a phase unwrapping algorithm, the phase difference map is usually filtered so as to reduce or eliminate speckle noise. However, commonly-used low-pass filters cannot effectively eliminate the noise, especially when the fringes are dense or the noise level of the phase difference map is high. Hence, novel filtering methods need to be further developed, and a corresponding fringe-density estimation method for the extracted wrapped phase map should be developed in order to facilitate the subsequent phase unwrapping operation.
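
A minimal numpy sketch of the phase subtraction of Eqs. (2.40)-(2.42) is given below. The two complex amplitudes are synthetic stand-ins for reconstructed fields Γ₁ and Γ₂ (a random speckle-like field and the same field with a hypothetical deformation phase), not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two reconstructed complex amplitudes of Eqs. (2.40)-(2.41):
# a speckle-like field, and the same field with a smooth hypothetical deformation phase
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
speckle = rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx))
deformation = 6 * np.pi * (x / nx) ** 2
gamma1 = speckle
gamma2 = speckle * np.exp(1j * deformation)

phi1 = np.arctan2(gamma1.imag, gamma1.real)      # Eq. (2.40)
phi2 = np.arctan2(gamma2.imag, gamma2.real)      # Eq. (2.41)

# Eq. (2.42): subtract, and add 2*pi wherever phi1 < phi2 (i.e. wrap into [0, 2*pi))
dphi = phi1 - phi2
dphi[phi1 < phi2] += 2 * np.pi
# equivalently: dphi = np.mod(phi1 - phi2, 2 * np.pi)
```
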
2.6 Phase evaluation techniques in the spatial domain

After an interference fringe pattern is recorded, it is necessary to extract the phase distribution from the recorded intensity pattern, since the phase map is directly related to the measured physical quantity. There are two main types of phase evaluation techniques, i.e., spatial and temporal phase evaluation methods. Typical spatial phase evaluation techniques are the Fourier transform method, the phase scanning approach, the continuous wavelet transform and the short-time Fourier transform. However, in digital holography, if the phase evaluation method is applied in the hologram (or CCD) plane, a wave-propagation integral should be further carried out in order to extract the original object complex amplitude in the object plane.

2.6.1 Fourier transform method

The Fourier transform method was proposed by Takeda et al. (1982, 1983) for fringe pattern analysis; they suggested that the carrier phase component could be removed in the frequency domain via a spectrum shift. Subsequently, an inverse Fourier transform of the shifted spectrum can produce a wrapped phase distribution using an arc-tangent operation. A main advantage of this method is the use of a single image to perform the phase evaluation, and the knowledge of the phase slope corresponding to the carrier fringes solves the problem of the absolute sign of the phase. A two-dimensional Fourier transform method has also been proposed by Bone et al. (1986); in the two-dimensional method, noise can be reduced more efficiently but at the cost of increased processing time. Assuming that the carrier fringes are along the y direction [Fig. 2.17(a)] and the spatial carrier frequency is f, the recorded intensity can be expressed by

I(x, y) = I_O(x, y) + I_M(x, y) cos[ϕ(x, y) + 2πfx]    (2.43)

where I_O(x, y) and I_M(x, y) respectively denote the background and the modulation factor, and ϕ(x, y) denotes the phase distribution which contains the physical information of the test specimen. The intensity distribution of Eq. (2.43) can be described in a complex form as

I(x, y) = a_f(x, y) + c_f(x, y) exp(j2πfx) + c̄_f(x, y) exp(−j2πfx)    (2.44)

where a_f(x, y) = I_O(x, y), c_f(x, y) = (1/2) I_M(x, y) exp[jϕ(x, y)], and the overbar denotes the complex conjugate. The 2D Fourier transform of this intensity distribution can be expressed by

I(u, v) = A(u, v) + C(u − f, v) + C̄(u + f, v)    (2.45)

where u and v denote the horizontal and vertical spatial frequencies, and C(u, v) and A(u, v) are the 2D Fourier transforms of c_f(x, y) and a_f(x, y), respectively. Since the three terms are separated in the spectral domain, it is easy to apply a low-pass filter to extract the expected term. An inverse Fourier transform of the filtered spectrum can then produce a wrapped phase distribution using an arc-tangent operation.

Figure 2.17(a) shows a deformed fringe pattern, and Fig. 2.17(b) shows the logarithm of the modulus of the 2D Fourier transform of the fringe pattern in Fig. 2.17(a). As indicated in Fig. 2.17(b), processing inside the box [dashed line in Fig. 2.17(b)] is carried out in order to select the positive or negative component. After the filtering in the frequency domain, a wrapped phase map obtained using an inverse 2D Fourier transform and an arc-tangent operation is shown in Fig. 2.17(c). An unwrapped phase map is shown in Fig. 2.17(d).

Figure 2.17 (a) A deformed fringe pattern; (b) the logarithm of the modulus of the 2D Fourier transform of (a); (c) an extracted wrapped phase map; (d) an unwrapped phase map after the removal of the spatial carrier.
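
The following sketch illustrates the Fourier transform method of Section 2.6.1 on a synthetic carrier fringe pattern. Instead of shifting the spectrum, it selects the C(u − f, v) lobe with a rectangular window and removes the carrier by a subsequent multiplication, which is equivalent; the carrier frequency, window size and test phase are assumptions made for illustration.

```python
import numpy as np

# Synthetic carrier fringe pattern following Eq. (2.43); all parameters are illustrative
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
f0 = 16.0 / nx                                   # carrier frequency in cycles per pixel
phase = 4 * np.pi * np.exp(-((x - nx / 2)**2 + (y - ny / 2)**2) / (2 * 40.0**2))
I = 1.0 + 0.8 * np.cos(phase + 2 * np.pi * f0 * x)

# 2D Fourier transform, Eq. (2.45)
S = np.fft.fftshift(np.fft.fft2(I))

# Rectangular window around the +f0 lobe, i.e. the C(u - f, v) term
u = np.fft.fftshift(np.fft.fftfreq(nx))          # horizontal spatial frequencies
v = np.fft.fftshift(np.fft.fftfreq(ny))          # vertical spatial frequencies
U, V = np.meshgrid(u, v)
window = (np.abs(U - f0) < f0 / 2) & (np.abs(V) < f0 / 2)

# Inverse transform of the selected lobe, removal of the carrier,
# then the wrapped phase follows from an arc-tangent operation
c = np.fft.ifft2(np.fft.ifftshift(S * window))
wrapped = np.angle(c * np.exp(-1j * 2 * np.pi * f0 * x))
```
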
2.6.2 Phase scanning approach

The phase scanning approach (Vikhagen, 1991; Wang and Grant, 1995) is another simple but effective spatial phase evaluation method for the analysis of a recorded fringe pattern. This method is also called the maximum-minimum exploration strategy (Dorrio and Fernandez, 1999). The phase value can be calculated using the maximum and minimum points of the interference signal I(x, y), and can be described by

ϕ(x, y) = arccos { [2I(x, y) − (I_max(x, y) + I_min(x, y))] / [I_max(x, y) − I_min(x, y)] }    (2.46)

where I_max(x, y) and I_min(x, y) denote the maximum and minimum gray values of the interference signal. Note that the phase extraction can be carried out row by row. Phase values determined by Eq. (2.46) are in the range [0, π), so before the phase unwrapping operation a conversion to the range [0, 2π) or (−π, π) is essential. Some strategies have been proposed to overcome this limitation. Debnath et al. (2009) proposed to use the Hilbert transform method (Ikeda et al., 2005; Chalut et al., 2007; Shaked et al., 2009) to ensure that the extracted wrapped phase map is in the range [0, 2π). In the dynamic situation, the conversion (Quan et al., 2004) can be made based on the guidance of the vibration direction and the slope of the intensity. A procedure for the extraction of continuous phase maps based on the phase scanning approach is shown in Fig. 2.18.

Figure 2.18 A procedure for the extraction of continuous phase maps based on the phase scanning approach (phase scanning, conversion to [0, 2π), then phase unwrapping).

Similarly to the Fourier transform method, the phase scanning approach is weak in the suppression or elimination of speckle noise. This approach is better suited to the analysis of fringe patterns recorded using the fringe projection or shadow moiré technique, since the signal-to-noise ratio of the fringe patterns recorded with these optical techniques is high.
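
A row-by-row sketch of the maximum-minimum exploration of Eq. (2.46) is shown below for a synthetic, noise-free fringe row (the regime, such as fringe projection or shadow moiré, where the approach is said to work well); the returned values lie in [0, π], so the conversion step of Fig. 2.18 would still be needed before unwrapping.

```python
import numpy as np

def phase_scanning_row(row):
    """Eq. (2.46): wrapped phase of one fringe row from its maximum and minimum
    gray values (assumed representative for every pixel of the row)."""
    i_max, i_min = row.max(), row.min()
    cos_phi = (2 * row - (i_max + i_min)) / (i_max - i_min)
    return np.arccos(np.clip(cos_phi, -1.0, 1.0))    # values in [0, pi]

# Synthetic noise-free fringe row: I = a + b*cos(phi)
x = np.arange(512)
phi_true = 2 * np.pi * x / 100.0
row = 100.0 + 50.0 * np.cos(phi_true)

phi = phase_scanning_row(row)    # still needs conversion to [0, 2*pi) and unwrapping
```
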
2.6.3 Continuous wavelet transform

The continuous wavelet transform (Mallat, 1999; Watkins et al., 1999) is another phase evaluation technique, which has been widely applied to analyze fringe patterns (Federico and Kaufmann, 2001a, 2001b, 2002a, 2002b; Gdeisat et al., 2006, 2009). In the case of the continuous wavelet transform, the resolution varies according to the frequency of interest, which means that at higher frequencies the time resolution is finer at the cost of a larger frequency window (Fu, 2005). The well-known uncertainty principle ensures that the product of temporal duration and frequency bandwidth is constant. The continuous wavelet transform of a signal I(x) is expressed by (Mallat, 1999; Watkins et al., 1999)

WI(a, b) = (1/√a) ∫_{−∞}^{+∞} I(x) g*[(x − b)/a] dx    (2.47)

where WI(a, b) is the coefficient function of the continuous wavelet transform, a is a dilation parameter related to the frequency, b is a position parameter, the symbol * denotes the complex conjugate, and g(x) is a mother wavelet. A suitable choice of wavelet function is the well-known complex Morlet wavelet

g(x) = exp(−x²/2) exp(jω₀x)    (2.48)

where ω₀ is the central frequency, a parameter that has to be chosen properly. The value of ω₀ = 2π is usually chosen to satisfy the admissibility condition, so that the wavelet function is able to remove the negative frequencies as well as avoid the DC contribution of the signal. For simplicity, a one-dimensional fringe pattern is analyzed. In the continuous wavelet transform, a phase map can be obtained by the integration of the instantaneous frequency or by direct phase extraction on the ridge. In the instantaneous frequency method, continuous phase distributions are determined by the integration of the extracted instantaneous frequency, so the phase unwrapping operation can be avoided. In the direct phase extraction method, a wrapped phase map is obtained by an arc-tangent operation and will contain 2π jumps, so phase unwrapping is essential. A wrapped phase map extracted on a wavelet ridge is expressed by

ϕ(b) + 2πfb = arctan { Im[WI(a_rb, b)] / Re[WI(a_rb, b)] }    (2.49)

where a_rb denotes the value of a at the position b on the ridge. The deformed fringe pattern in Fig. 2.17(a) is also investigated, and Fig. 2.19(a) shows a wrapped phase distribution obtained by applying the continuous wavelet transform to the central row of Fig. 2.17(a). After the points with maximum modulus are detected, a phase map for the central row is correspondingly determined. In this case, the scale vector contains 66 elements and varies from 1 to 66 in increments of 1. Figure 2.19(b) shows a wrapped phase map extracted using the continuous wavelet transform.

Figure 2.19 Continuous wavelet transform: (a) a phase distribution for the central row of Fig. 2.17(a); (b) a wrapped phase map.
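
A self-contained sketch of Eqs. (2.47)-(2.49) for a single fringe row is given below, using the complex Morlet wavelet of Eq. (2.48) with ω₀ = 2π. The transform is evaluated by direct summation, the ridge is taken simply as the scale of maximum modulus at each position, and the test signal and scale range are arbitrary choices made for illustration.

```python
import numpy as np

def morlet(x, omega0=2 * np.pi):
    """Complex Morlet mother wavelet, Eq. (2.48)."""
    return np.exp(-x**2 / 2) * np.exp(1j * omega0 * x)

def cwt_ridge_phase(signal, scales, omega0=2 * np.pi):
    """Eq. (2.47) by direct summation for every scale, followed by ridge detection
    (maximum modulus over the scales) and phase extraction on the ridge, Eq. (2.49)."""
    n = len(signal)
    x = np.arange(n)
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, a in enumerate(scales):
        for b in range(n):
            coeffs[i, b] = np.sum(signal * np.conj(morlet((x - b) / a, omega0))) / np.sqrt(a)
    ridge = np.argmax(np.abs(coeffs), axis=0)          # scale index of the ridge at each position
    wrapped = np.angle(coeffs[ridge, np.arange(n)])    # wrapped phase on the ridge (carrier included)
    return wrapped

# Synthetic carrier fringe row of the form of Eq. (2.43)
x = np.arange(256)
row = 1.0 + 0.8 * np.cos(0.4 * x + 3 * np.sin(2 * np.pi * x / 256))
# mean removed so the DC term does not dominate at large scales
wrapped = cwt_ridge_phase(row - row.mean(), scales=np.arange(5.0, 40.0))
```
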
2.6.4 Short-time Fourier transform

Similarly to the continuous wavelet transform, the short-time Fourier transform (Mallat, 1999; Asundi and Wang, 2002) can also be considered an advanced spatial phase evaluation method. This technique has been applied in various areas to analyze fringe patterns (Asundi and Wang, 2002; Wang and Asundi, 2002; Qian, 2004; Qian and Soon, 2005; Fu et al., 2007b). However, in the case of the short-time Fourier transform, the time-frequency resolution does not vary according to the frequency of interest. The short-time Fourier transform of a 1D signal I(x) is described by

SI(u, ξ) = ∫_{−∞}^{+∞} I(x) g_s(x − u) exp(−jξx) dx    (2.50)

where s is the scaling factor, g_s(x) denotes a window function for which a Gaussian function is usually used, and u and ξ denote the time and frequency, respectively. For a 2D signal I(x, y), the 2D short-time Fourier transform (Qian, 2007) can be applied. The instantaneous frequencies in the x- and y-directions can be extracted by

[ω_x(u, v), ω_y(u, v)] = arg max_{ξ,η} |SI(u, v, ξ, η)|    (2.51)

The ridge and phase map are respectively determined by

r(u, v) = |SI[u, v, ω_x(u, v), ω_y(u, v)]|    (2.52)

ϕ(u, v) = angle{SI[u, v, ω_x(u, v), ω_y(u, v)]} + ω_x(u, v) u + ω_y(u, v) v    (2.53)

where angle denotes an arc-tangent operation between the imaginary and real parts of a complex amplitude. The deformed fringe pattern in Fig. 2.17(a) is also studied, and Fig. 2.20(a) shows a wrapped phase map extracted by the short-time Fourier transform. The frequency regions of ξ and η are set as [−0.7, 0.7] and [0.3, 1.7], respectively. In the short-time Fourier transform, a proper selection of the related parameters is important in order to avoid incorrect results. Figure 2.20(b) shows a 3D plot of an unwrapped phase map after the removal of the spatial carrier. It is worth noting that the resolution achieved with the short-time Fourier transform depends on the specific optical application.

Figure 2.20 Short-time Fourier transform: (a) a wrapped phase map; and (b) 3D plot of an unwrapped phase map.

2.7 Phase evaluation techniques in the temporal domain

2.7.1 Phase shifting technique

The phase-shifting technique (Creath, 1985; Yamaguchi and Zhang, 1997) is widely used in the analysis of fringe patterns. A commonly-used temporal phase retrieval approach is the N-frame phase shifting method, in which N is equal to or larger than 3 (Quan et al., 2001, 2003; Yu et al., 2008; Potcoava and Kim, 2009). Many kinds of phase-shifting techniques have been proposed, such as three-frame phase shifting with a known phase shift, or with an unknown phase shift (Cai et al., 2004; Tay et al., 2005; Carl et al., 2009). Compared with the Fourier transform method, the phase-shifting technique can effectively enhance the spatial resolution but requires more experimental effort. For instance, consider four typical phase-shifted fringe patterns with phase-shifting steps of 0, π/2, π and 3π/2; the successive fringe patterns can be represented by (Creath, 1985)

I₁(x, y) = a(x, y) + b(x, y) cos[ϕ(x, y)]    (2.54)
I₂(x, y) = a(x, y) + b(x, y) cos[ϕ(x, y) + π/2]    (2.55)
I₃(x, y) = a(x, y) + b(x, y) cos[ϕ(x, y) + π]    (2.56)
I₄(x, y) = a(x, y) + b(x, y) cos[ϕ(x, y) + 3π/2]    (2.57)

where a(x, y) denotes the background, b(x, y) denotes the intensity modulation, and ϕ(x, y) is the phase distribution. Hence, the phase map ϕ(x, y) can be determined by

ϕ(x, y) = arctan { [I₄(x, y) − I₂(x, y)] / [I₁(x, y) − I₃(x, y)] }    (2.58)

Since the extracted phase map contains 2π jumps, a phase unwrapping algorithm (Ghiglia and Pritt, 1998) should also be employed to correct the phase-discontinuity points. Figure 2.21(a) shows four fringe patterns with phase shifts of 0, π/2, π and 3π/2, and Fig. 2.21(b) shows a wrapped phase map extracted using the phase-shifting technique.

Figure 2.21 Phase shifting technique: (a) fringe patterns at phase shifts of 0°, 90°, 180° and 270°; (b) a wrapped phase map.
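
The four-step formula of Eqs. (2.54)-(2.58) can be verified on synthetic data in a few lines; the background, modulation and test phase below are arbitrary, and arctan2 is used so that the full (−π, π] range is recovered.

```python
import numpy as np

ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx]
a = 120.0                                         # background
b = 80.0                                          # modulation
phi = 6 * np.pi * x / nx + 2 * np.pi * y / ny     # hypothetical test phase

# Four frames with phase shifts 0, pi/2, pi and 3*pi/2, Eqs. (2.54)-(2.57)
I1, I2, I3, I4 = (a + b * np.cos(phi + s) for s in (0, np.pi / 2, np.pi, 3 * np.pi / 2))

# Eq. (2.58); arctan2 handles I1 == I3 and keeps the full (-pi, pi] range
wrapped = np.arctan2(I4 - I2, I1 - I3)

# Compare against the true phase wrapped into (-pi, pi]
err = np.angle(np.exp(1j * (wrapped - phi)))
print(np.abs(err).max())                          # ~1e-15 for noise-free data
```
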
2.7.2 Continuous wavelet transform

The principle of the continuous wavelet transform described in Section 2.6.3 can also be applied in the temporal domain. Each pixel is processed along the time axis independently of the other pixels, and a schematic of the processing procedure is shown in Fig. 2.22. Different from the spatial processing in Section 2.6.3, the wavelet coefficient of Eq. (2.47) for the temporal processing can be written as (Mallat, 1999)

W(a, b) = (√a/2) { A(b) ĝ{a[ξ − ϕ′(b)]} + ε(b, ξ) } exp[jϕ(b)]    (2.59)

where ε(b, ξ) is a corrective term which can be neglected, A is the modulus of the signal, and ĝ denotes the Fourier transform of the mother wavelet g. Hence, the instantaneous frequency ϕ′(b) of the signal can be extracted from the wavelet ridge:

ϕ′(b) = ξ_rb = ω₀ / a_rb    (2.60)

where a_rb denotes the value of a at the instant b on the ridge. The phase map can be extracted directly from the ridge or through the integration of the extracted instantaneous frequency. In the continuous wavelet transform, the time-frequency resolution varies according to the frequency of interest. The continuous wavelet transform performs well when the signal frequency is high and has small variation. However, it performs poorly when the signal frequency is low, since it will then adjust the window size to be large; sometimes this window size is larger than the signal length, which produces large errors in the phase evaluation. In this case, the short-time Fourier transform may perform better.

Figure 2.22 A schematic for the processing of each pixel independently along the time axis.

2.7.3 Short-time Fourier transform

Similarly to the continuous wavelet transform, the short-time Fourier transform described in Section 2.6.4 can also be applied in the temporal domain. The short-time Fourier transform [described in Eq. (2.50)] for the processing of a pixel along the time axis can be written as

SI(u, ξ) = (√s/2) a(u) exp{j[ϕ(u) − ξu]} ( ĝ{s[ξ − ϕ′(u)]} + ε(u, ξ) )    (2.61)

where ϕ′(u) is defined as the instantaneous frequency of the signal. In this thesis, the scaling factor s is set to a fixed value. It is assumed that the real amplitude a(t) of the signal and the phase derivative ϕ′(t) of the signal have small relative variations over the support of the Gaussian window g. If this assumption cannot be satisfied, the corrective term ε(u, ξ) should be added, and this corrective term will affect the extraction of the instantaneous frequency. As can be seen from Eq. (2.61), |SI(u, ξ)| reaches its maximum value at ξ = ϕ′(u). Similarly, a phase map can be extracted directly from the ridge or through the integration of the extracted instantaneous frequency.
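
A minimal sketch of the temporal ridge processing of Sections 2.7.2-2.7.3 for a single pixel is given below: a windowed Fourier (Gabor) transform is evaluated along the time axis, the frequency of maximum modulus is taken as the ridge as in Eq. (2.61), and the phase is read off at the ridge. The Gaussian window width, the candidate frequency grid and the synthetic vibration signal are assumptions for illustration.

```python
import numpy as np

def stft_ridge(signal, freqs, sigma=8.0):
    """Windowed Fourier transform of a 1D temporal signal [cf. Eq. (2.50)] with a
    Gaussian window; returns the ridge frequency and the phase on the ridge."""
    n = len(signal)
    t = np.arange(n)
    kernels = np.exp(-1j * np.outer(t, freqs))             # exp(-j*xi*x) for all candidate xi
    ridge_freq = np.empty(n)
    ridge_phase = np.empty(n)
    for u in range(n):
        window = np.exp(-((t - u) ** 2) / (2 * sigma**2))
        S = (signal * window) @ kernels                    # S(u, xi) for every candidate frequency
        k = np.argmax(np.abs(S))                           # ridge: maximum modulus, cf. Eq. (2.61)
        ridge_freq[u] = freqs[k]
        ridge_phase[u] = np.angle(S[k]) + freqs[k] * u     # add the carrier term back, as in Eq. (2.53)
    return ridge_freq, ridge_phase

# Synthetic temporal signal at one pixel: slowly varying vibration phase
t = np.arange(400)
phase_true = 0.6 * t + 10 * np.sin(2 * np.pi * t / 400)
signal = 100 + 60 * np.cos(phase_true)

freqs = np.linspace(0.2, 1.2, 200)                         # candidate frequencies (rad/frame)
# mean removed so the DC term does not compete with the ridge
f_inst, phase_on_ridge = stft_ridge(signal - signal.mean(), freqs)
# Integrating f_inst over time gives a continuous phase without unwrapping,
# as noted above for the ridge-based methods.
```
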
2.8 Phase unwrapping

Since the phase difference map obtained usually contains 2π jumps, phase unwrapping (Judge and Bryanston-Cross, 1994) becomes an essential and important topic which is related to many research disciplines, including optical metrology, synthetic aperture radar, acoustic imaging and magnetic resonance imaging. The conversion of the phase modulo 2π into a continuous phase distribution is usually called phase unwrapping. The book by Ghiglia and Pritt (1998) presented many fundamental theories of phase unwrapping and showed that a successful phase unwrapping algorithm can be applied to various fields. Phase unwrapping methods can be divided into two types, i.e., spatial and temporal phase unwrapping.

2.8.1 Spatial phase unwrapping

During the past several decades, many spatial phase unwrapping techniques have been proposed, such as Goldstein's branch cut algorithm, quality-guided path following, and the mask cut algorithm (Ghiglia and Pritt, 1998). In addition, Buckland et al. (1995) proposed a minimum-cost-matching algorithm for unwrapping noisy phase maps, and Asundi and Zhou (1998) presented a fast phase-unwrapping algorithm based on a gray-scale mask and flood fill. The spatial phase unwrapping operation involves some problems, in particular if the extracted wrapped phase map contains much noise. A proper filtering of the wrapped phase map can greatly improve the results. However, if an object contains discontinuous points, such as a large step change or some cracks, a spatial phase unwrapping algorithm might not succeed. In addition, using spatial phase unwrapping, only relative phase values can be obtained and an absolute measurement is impossible. Hence, in many cases, especially for dynamic or real-time situations, temporal phase unwrapping can be considered an excellent alternative.

2.8.2 Temporal phase unwrapping

The temporal phase unwrapping algorithm was first proposed by Huntley and Saldner (1993, 1997). In the temporal phase unwrapping method, a series of fringe patterns is recorded, and each pixel of the camera acts as a sensor independently of the others. Hence, the phase unwrapping process can be carried out along the time axis (Huntley et al., 1999; Pedrini et al., 2003), and even isolated regions can be correctly unwrapped without any uncertainty concerning their relative phase orders. In temporal phase unwrapping, the first step is to calculate the 2π phase discontinuities for each pixel along the time axis; they are calculated by (Huntley and Saldner, 1993)

p(m, n, t) = [ ΔΔϕ(m, n, t) / 2π ]    (2.62)

where ΔΔϕ(m, n, t) = Δϕ_w(m, n, t) − Δϕ_w(m, n, t − 1), Δϕ_w(m, n, t) denotes a wrapped phase at an instant t, and [·] denotes a rounding operation to the nearest integer. The total number of 2π phase discontinuities up to the rth wrapped map is calculated by

d(m, n, r) = Σ_{t=1}^{r} p(m, n, t)    (2.63)

The continuous phase distribution is obtained by subtracting 2πd(m, n, r) from the wrapped phase map Δϕ_w(m, n, r). Using temporal phase unwrapping, each pixel is processed along the time axis independently of the other pixels, so good data can be efficiently preserved. However, errors such as speckle noise still accumulate along the time axis for bad data. In addition, in temporal phase unwrapping, the positions of ill-behaved pixels should not vary with time. Recently, some advanced analysis methods, such as the continuous wavelet transform and the short-time Fourier transform (Fu et al., 2007a), have been proposed to suppress the influence of noise in the temporal processing.
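
The temporal unwrapping of Eqs. (2.62)-(2.63) amounts to a per-pixel cumulative correction along the time axis; a direct numpy transcription is sketched below on a synthetic stack of wrapped phase maps.

```python
import numpy as np

def temporal_unwrap(wrapped_stack):
    """Temporal phase unwrapping (Huntley and Saldner, 1993) for a stack of
    wrapped phase maps of shape (T, M, N) ordered along the time axis."""
    diff = np.diff(wrapped_stack, axis=0)        # delta-delta-phi(m, n, t)
    p = np.rint(diff / (2 * np.pi))              # Eq. (2.62): nearest-integer count of 2*pi jumps
    d = np.cumsum(p, axis=0)                     # Eq. (2.63): accumulated number of jumps
    unwrapped = wrapped_stack.copy()
    unwrapped[1:] -= 2 * np.pi * d               # subtract 2*pi*d from each wrapped map
    return unwrapped

# Synthetic example: a phase that grows steadily in time at every pixel
t = np.arange(50)[:, None, None]
true_phase = 0.8 * t * np.ones((1, 16, 16))
wrapped = np.angle(np.exp(1j * true_phase))      # wrapped into (-pi, pi]
recovered = temporal_unwrap(wrapped)
print(np.abs(recovered - true_phase).max())      # ~0 (up to the 2*pi multiple of the first frame)
```
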
