Alpert/Handbook of Algorithms for Physical Design Automation AU7242_C035 Finals Page 712 24-9-2008 #19
712 Handbook of Algorithms for Physical Design Automation

FIGURE 35.17 Pitch curve (depth of focus in µm vs. pitch in nm, with acceptable/unacceptable regions marked) for lines and spaces under a particular OAI approach called QUASAR illumination, comparing 0, 1/2, 1, and 2 SRAFs per edge. Without SRAFs, certain pitches do not have enough contrast and will not print. SRAFs are added to restore the contrast. (Adapted from Schellenberg, F.M., Capodieci, L., and Socha, B., Proceedings of the 38th Design Automation Conference, ACM, New York, 2001, pp. 89–92. With permission.)

35.2.3.5 Polarization

At this time, there is a fourth independent variable of the EM field that has not yet been as fully exploited as the other three: polarization [73]. For advanced steppers, which fill the gap between the last lens element and the wafer with water for the higher angle coupling it allows (water immersion steppers [26]), anticipation of and compensation for the polarization properties of the light is becoming crucial [73–76]. At the time of this writing, however, although some very creative techniques exploiting polarization have been proposed [77], no definitive polarization-based RET has been demonstrated as practical.

FIGURE 35.18 (a) Layout with alternating phase-shifted apertures (black is opaque, left stripe is 0°, right stripe 180°), and (b) pupil map of an illumination pattern optimized for this layout; (c) layout for a memory cell (dark is opaque, clear is normal 0° mask transmission), and (d) pupil map of an illumination pattern optimized for this layout. (Adapted from Granik, Y., J. Microlith. Microfab. Microsyst., 3, 509, 2004. With permission.)
Instead, polarization is considered in each of the other RETs: source illumination, mask diffraction, and lens pupil transmission. This may change in the future as the polarization issues with advanced immersion lithography become better understood.

35.2.4 RET FLOW AND COMPUTATIONAL LITHOGRAPHY

No matter what patterning technique is used, incorporating the simulation of the corresponding effects requires some care for insertion into an EDA environment. Complete brute-force image simulation of a 32 mm × 22 mm IC with resolution at the nanometer scale would require a gigantic amount of simulation and days or even weeks to complete. Some effort to determine the minimum necessary set of simulation sites is therefore called for.

The initial simulation step for an EDA flow is therefore fragmentation of the layout. In a layout format such as GDS-II or OASIS, a polygon is defined by a sequence of vertices. These vertices are placed only where the boundary of a polygon changes direction (e.g., at the corners of rectangles). With fragmentation, additional vertices are inserted [41,78]. The rules governing fragmentation can be complex, but the basic intention is to break the longer edge segments into shorter, more manageable edge segments, with more segments (higher fragmentation) in regions of high variability and fewer segments (lower fragmentation) in regions of low variability. This is illustrated in Figure 35.19.

Once the layout is fragmented, a simulation point is determined for each edge segment. This is the location at which the image simulation results will be determined, and the corresponding position of the edge as expected on the wafer computed. Each simulation point has an associated cutline, along which the various values for the image intensity and its derivatives (e.g., image slope) will be calculated. This is illustrated in Figure 35.20 [41,79,80].
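As a rough illustration of fragmentation, the sketch below splits polygon edges into segments no longer than a maximum length. This is a deliberately simplified, hypothetical scheme (the function names and the single `max_seg` parameter are illustrative); real fragmentation rules, as described above, vary segment density with local layout variability.

```python
import math

def fragment_edge(p0, p1, max_seg):
    """Split one polygon edge into segments no longer than max_seg.

    p0, p1: (x, y) endpoint tuples. Returns the list of vertices along
    the edge, both endpoints included.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    n = max(1, math.ceil(math.hypot(dx, dy) / max_seg))  # segment count
    return [(p0[0] + dx * k / n, p0[1] + dy * k / n) for k in range(n + 1)]

def fragment_polygon(vertices, max_seg):
    """Insert fragmentation points along every edge of a closed polygon."""
    out = []
    for i, v in enumerate(vertices):
        w = vertices[(i + 1) % len(vertices)]
        out.extend(fragment_edge(v, w, max_seg)[:-1])  # drop shared endpoint
    return out
```

A 100-unit edge with `max_seg=30` yields four segments (five vertices); each new vertex is a candidate location for independent OPC edge movement.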
At this point, the simulator is invoked to systematically simulate the image properties only along the cutline for each edge segment. Using some assumptions or a suitable algorithm, the position of the resist edge is determined from the computed image. Once this edge position is determined, the difference between the edge position in the desired layout and the simulated edge position is computed. This difference is called the edge placement error (EPE) [41].

FIGURE 35.19 Original portion of a layout with original fragmentation (left) and layout after refragmentation for OPC (right); fragmentation points are marked. (Adapted from Word, J. and Cobb, N., Proc. SPIE, 5567, 1305–1314, 2004. With permission.)

FIGURE 35.20 Selection of the simulation cutlines to use with the fragmentation from Figure 35.19. Each fragmentation point anchors a simulation cutline with a location for image computation. (Reproduced, courtesy of Mentor Graphics.)

FIGURE 35.21 Sequence of operations within a typical OPC iterative loop: layer selection, fragmentation, simulation, EPE generation, and correction.

For each and every edge segment there is, therefore, an EPE. For an EPE of zero, the image of the edge falls exactly on the desired location. When the EPE is nonzero, a suggested motion for the edge segment that should reduce the EPE is determined from the sign and magnitude of the EPE. The edge segment in the layout is then moved according to this prediction. Once this happens, a new simulation and a new EPE are generated for the revised layout. The iterative process proceeds until the EPE has been reduced to within a predetermined tolerance. This is illustrated in Figure 35.21.
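The iterative loop of Figure 35.21 can be sketched as follows. This is a toy version: the `simulate` callback and the damped proportional update are illustrative stand-ins for a real lithography simulator and correction strategy, and all names and parameter values are assumptions, not from the text.

```python
def run_opc(segments, simulate, targets, tol=0.5, max_iter=50, damping=0.5):
    """Iteratively move mask edge segments until the edge placement error
    (EPE) is within tolerance.

    segments: current mask edge offsets (one scalar per edge segment)
    simulate: maps mask edge offsets -> simulated printed edge positions
    targets:  desired printed edge positions from the layout
    """
    epe = []
    for _ in range(max_iter):
        printed = simulate(segments)
        epe = [p - t for p, t in zip(printed, targets)]   # EPE per segment
        if all(abs(e) <= tol for e in epe):
            break                                         # converged
        # Move each segment against its EPE; damping aids stable convergence
        segments = [s - damping * e for s, e in zip(segments, epe)]
    return segments, epe
```

With a toy linear "process" such as `lambda xs: [0.8 * x + 2.0 for x in xs]`, the loop converges in a few iterations to offsets that print on target.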
Although simplistic in outline, determining fragmentation settings and suitable simulation sites that remain optimal for the competing metrics of high accuracy, rapid convergence, and manageable data volume remains challenging. A real-world example of a layout with fragmentation selections is shown in Figure 35.22. In general, high fragmentation density leads to better accuracy, but requires more simulation and may create higher data volume. Poorly chosen simulation sites can converge rapidly, but may not accurately represent the average behavior along the entire edge fragment (and in some cases, may even lead to a motion in the wrong direction). Cutlines chosen in certain orientations (e.g., normal to the layout rather than normal to the image gradient) may likewise produce less representative EPEs, and the iteration may take longer to converge.

FIGURE 35.22 Example of a real-world layout, showing the target layout, simulation cutlines, and image contours. (Reproduced, courtesy of Mentor Graphics.)

35.2.5 MASK MANUFACTURING FLOW

Although originally developed for computing the relationship between the layout and the wafer image, a similar procedure can be carried out to compensate for mask manufacturing effects [81]. In this case, the model must be derived for the various processes used in mask fabrication. These typically involve exposure using an electron beam (E-beam), and because electrons are charged and repel one another, a significant amount of computation may be required to compensate for electron proximity effects [82]. Optical mask writers, which write masks using UV lasers and use lithography materials similar to those used for wafers [82], can also be corrected for optical proximity and processing effects.
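Electron proximity effects are commonly modeled with a double-Gaussian point-spread function: a narrow forward-scatter term plus a broad backscatter term. The 1D sketch below iteratively adjusts the written dose so that the deposited energy (dose convolved with the PSF) approaches the target pattern. All parameter values (`alpha`, `beta`, `eta`, the gain) are illustrative assumptions, not taken from the text.

```python
import numpy as np

def double_gaussian_psf(x, alpha=0.05, beta=2.0, eta=0.6):
    """Toy double-Gaussian proximity PSF: a narrow forward-scatter term
    plus a broad backscatter term weighted by eta (illustrative values)."""
    fwd = np.exp(-(x / alpha) ** 2) / (np.pi * alpha ** 2)
    back = np.exp(-(x / beta) ** 2) / (np.pi * beta ** 2)
    return (fwd + eta * back) / (1.0 + eta)

def correct_dose(pattern, psf, iterations=50, gain=0.5):
    """Damped iterative dose correction: nudge the local dose until the
    deposited energy (dose convolved with the PSF) matches the target."""
    kernel = psf / psf.sum()
    dose = pattern.astype(float).copy()
    for _ in range(iterations):
        deposited = np.convolve(dose, kernel, mode="same")
        dose += gain * (pattern - deposited)
    return dose
```

After correction, the residual between the target pattern and the deposited energy is smaller than with the uncorrected uniform dose; production correction engines use the same idea with calibrated PSFs and dose clamping.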
35.2.6 CONTOUR-BASED EPE

For sparse layouts, with feature dimensions larger than the optical wavelength, selection of fragmentation settings and simulation sites can be fairly straightforward, as illustrated in Figure 35.23a. As feature dimensions become significantly smaller than the optical wavelength, however, more simulation sites can be needed, as illustrated in Figure 35.23b [83]. At some point, the advantage of a sparse simulation set is severely reduced, and the use of a uniform grid of simulation points becomes attractive again. In this case, the simulation of the image intensity is carried out using a regular grid, as illustrated in Figure 35.24. Contours from the simulation result, again using a suitable model to predict the edge location on the wafer, are used to represent the image intensity. The EPE is then synthesized from the desired position of an edge segment and the corresponding location on the contour. Subsequent motion of the edge segments proceeds as previously described.

FIGURE 35.23 (a) Layout with sparse simulation plan and (b) scaled layout using sparse simulation rules when the target dimension is 65 nm and the exposure wavelength is 193 nm. At some point, sparse simulations are no longer sparse. (Adapted from Cobb, N. and Dudau, D., Proc. SPIE, 6154, 615401, 2006. With permission.)

Representation of the contour data can present additional problems not encountered in the sparse approach. Accurate representations of contours contain far more vertices than their counterparts in the original GDS-II layout. And although storing a contour after it has been used to determine an EPE may be extremely useful, because identical regions may be encountered later and the precomputed
solution accessed and reused, the additional data volume for storing contours with their high vertex counts in the database can present problems. In spite of these logistical problems, however, there are some clear advantages for accuracy. With the dense approach, certain features such as the bridge shown in Figure 35.24 can be simulated and flagged; catching such a structure with a sparse number of simulation sites is far more problematic.

FIGURE 35.24 Fragmentation/simulation plan for a portion of a layout using sparse rules (left) and a dense grid simulation (right). Using the contours from the dense grid, features such as the bridge between the two features can be detected. (Reproduced, courtesy of Mentor Graphics.)

No matter what the simulation strategy, image and process simulators are invoked in these OPC flows. We now turn our attention to the simulator itself, and to some of the practical approximations that are used to make a simulator functional in an EDA environment.

35.3 SIMULATION TECHNIQUES

35.3.1 INTRODUCTION

In Section 35.2, the fundamental framework for modeling lithography and the various RETs was provided. In this section, computational techniques that can be used within that framework for detailed mask transmission, image propagation, and wafer process simulation are presented, and the various trade-offs in the approximations they use are discussed.

As described in Section 35.2.2.2, the imaging system can be approximated as a simple Fourier transform and its inverse, with the pupil aperture (e.g., a circle) providing a low-pass cutoff for the spatial frequencies of the image. Although abstractly true, certainly much more than a pair of FFTs is needed to provide highly accurate simulation results.
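The pair-of-FFTs abstraction can be made concrete with a toy coherent-imaging model: transform the mask, zero everything outside a circular pupil, and transform back. The grid size and cutoff below are arbitrary illustrative choices, and this ignores everything (partial coherence, aberrations, resist) that the following sections add.

```python
import numpy as np

def coherent_image(mask, cutoff):
    """Toy coherent imaging: FFT the mask, keep only spatial frequencies
    inside a circular pupil of radius `cutoff`, inverse-FFT, and take
    |field|^2 as the image intensity."""
    fx = np.fft.fftfreq(mask.shape[1])
    fy = np.fft.fftfreq(mask.shape[0])
    FX, FY = np.meshgrid(fx, fy)
    pupil = (FX ** 2 + FY ** 2) <= cutoff ** 2   # circular low-pass pupil
    field = np.fft.ifft2(np.fft.fft2(mask) * pupil)
    return np.abs(field) ** 2

# A binary mask with a single vertical line
mask = np.zeros((64, 64))
mask[:, 28:36] = 1.0
img = coherent_image(mask, 0.2)
```

Because the pupil only removes spatial frequencies, the filtered image can never carry more energy than the mask; the printed line is a blurred, rounded version of the drawn rectangle.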
The three areas that require modeling attention are the imaging system itself, the interaction with the photomask, and the interaction with the wafer.

35.3.2 IMAGING SYSTEM MODELING

A lithographic imaging system has a large number of highly polished, precision optical elements, mounted in a precision mechanical housing. The lens column can weigh over 2 t and be over 2 m tall. An example of a contemporary lens design [84] is shown in Figure 35.25. These lenses are usually designed with complex ray-tracing programs that accurately represent the path that light takes through the reflective and refractive elements [85]. Because the mathematical theory of lens design is linear and well understood, the complex interactions of the lens elements can be represented as the simple, ideal Fourier lens described in Section 35.2.2.2, with all the physical properties of the lens (refraction, aberrations, etc.) lumped together into an idealized pupil function represented by Zernike polynomials. This function can be measured using precision interferometry techniques, but this is usually not easy to do for an individual stepper in the field [86].

The interaction of this pupil with the illuminator presents the essential challenge of imaging simulation. If the light falling on the lens were a single, coherent, uniform, normal-incidence (on-axis) plane wave, the corresponding spectrum in the pupil would be a single point at the center of the pupil. This represents coherent illumination, as shown in Figure 35.26a. In practice, however, light falls on the photomask at a range of angles, from a number of potential source points. The corresponding interactions in the lens pupil are shifted and overlapped.
FIGURE 35.25 Example of a contemporary scanner lens design, showing the numbered lens elements grouped into lens groups LG1–LG3. (From Kreuzer, J., US Patent 6,836,380.)

The degree to which the pupil is filled is then related to the spatial coherence of the light source. For very coherent light, the pupil filling ratio is small (Figure 35.26b); for larger angles and lower coherence, the pupil filling is higher (Figure 35.26c). This ratio, also called the coherence factor, is typically designated by lithographers with the symbol σ. This should not be confused, however, with the electrical conductivity in Equation 35.1b above.

Imaging with complicated sources and pupils can be complicated to model. For coherent light, the image fields add directly, both at every moment in time and in the time average, and so we can sum the various contributions individually. For incoherent light, the local fields add instantaneously, but in the time average the correlation is lost, and so the various image intensities must be computed and added. Most illumination systems, however, are partially coherent. This means that the relation between the image I(x, y) and two different points in an object, (x_o, y_o) and (x_o', y_o') (e.g., two points in a mask), does not fit either of these simple cases. Likewise, the illumination of an object by a distribution of source points follows similarly.

FIGURE 35.26 Pupil maps for illumination that is (a) coherent, (b) partially coherent, and (c) incoherent.
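One standard way to handle the partially coherent case numerically is source-point integration (Abbe's method): treat each source point as an independent, mutually incoherent tilted plane wave, compute a coherent image for each, and sum the intensities. The 1D toy below uses illustrative pupil and source values; the spectral-shift sign convention and all names are assumptions for illustration.

```python
import numpy as np

def abbe_image(mask, pupil_radius, source_shifts):
    """Toy 1D Abbe (source-point integration) imaging: each source point
    illuminates the mask as a tilted plane wave, shifting the mask
    spectrum relative to the pupil. Source points are mutually
    incoherent, so their coherent image intensities are summed."""
    freq = np.fft.fftfreq(mask.size)
    spectrum = np.fft.fft(mask)
    image = np.zeros(mask.size)
    for s in source_shifts:
        pupil = np.abs(freq - s) <= pupil_radius  # pupil seen by this tilt
        field = np.fft.ifft(spectrum * pupil)
        image += np.abs(field) ** 2               # intensities add
    return image / len(source_shifts)

mask = np.zeros(128)
mask[56:72] = 1.0
coherent = abbe_image(mask, 0.2, [0.0])            # single on-axis point
partial = abbe_image(mask, 0.2, [-0.1, 0.0, 0.1])  # small extended source
```

A single on-axis shift reproduces the coherent limit; a dense fill of shifts out to the pupil radius approaches the incoherent limit.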
The image formulation for this situation can be computed using the mutual intensity function J(x_o, y_o; x_o', y_o'), according to Refs. [29,87,88]:

\[
I(x, y) = \iiiint_{-\infty}^{\infty} J(x_o - x_o',\, y_o - y_o')\, M(x_o, y_o)\, M^{*}(x_o', y_o')\, H(x, y;\, x_o, y_o)\, H^{*}(x, y;\, x_o', y_o')\; dx_o\, dy_o\, dx_o'\, dy_o'
\tag{35.21}
\]

where M(x_o, y_o) is the mask transmission at the object points and H(x, y; x_o, y_o) represents the optical system transfer function from point (x_o, y_o) to (x, y). When the mask and the mutual intensity are replaced by their Fourier representations,

\[
M(x, y) = \iint_{-\infty}^{\infty} \hat{M}(p, q)\, e^{-i 2\pi (px + qy)}\; dp\, dq
\tag{35.22a}
\]

\[
J(x, y) = \iint_{-\infty}^{\infty} \hat{J}(p, q)\, e^{-i 2\pi (px + qy)}\; dp\, dq
\tag{35.22b}
\]

the image intensity can be rewritten as

\[
I(x, y) = \idotsint_{-\infty}^{+\infty} \hat{J}(p, q)\, \hat{H}(p + p', q + q')\, \hat{H}^{*}(p + p'', q + q'')\, \hat{M}(p', q')\, \hat{M}^{*}(p'', q'')\, e^{-i 2\pi [(p' - p'')x + (q' - q'')y]}\; dp\, dq\, dp'\, dq'\, dp''\, dq''
\tag{35.23}
\]

Changing the order of integration, the integral can be reexpressed as

\[
I(x, y) = \iiiint_{-\infty}^{+\infty} \mathrm{TCC}(p', q', p'', q'')\, \hat{M}(p', q')\, \hat{M}^{*}(p'', q'')\, e^{-i 2\pi [(p' - p'')x + (q' - q'')y]}\; dp'\, dq'\, dp''\, dq''
\tag{35.24}
\]

where

\[
\mathrm{TCC}(p', q', p'', q'') = \iint_{-\infty}^{+\infty} \hat{J}(p, q)\, \hat{H}(p + p', q + q')\, \hat{H}^{*}(p + p'', q + q'')\; dp\, dq
\tag{35.25}
\]

is called the transmission cross coefficient (TCC). An illustration of this overlap integral in the pupil plane is shown in Figure 35.27. The TCC overlap integral depends only on the illumination source and the transfer of light through the lens, which are independent of the mask layout. \(\hat{J}(p, q)\) in Figure 35.27 is a representation of the projection of a circular source illumination. This could just as well be an annular, quadrupole, or other off-axis structure, as illustrated in Figure 35.16, or a more complex pattern, as shown in Figure 35.18. Only portions in frequency space (the pupil plane) where source light overlaps with the lens transmission (the shaded area) will contribute to the final image.
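Equation 35.25 can be evaluated directly on a discrete frequency grid. The 1D sketch below uses top-hat stand-ins for the source \(\hat{J}\) and pupil \(\hat{H}\) (all values illustrative, not from the text); the resulting TCC matrix is Hermitian, as expected from the formula.

```python
import numpy as np

def compute_tcc(J_hat, H_hat, freqs):
    """Evaluate Eq. 35.25 on a 1D discrete frequency grid:
    TCC(p', p'') = sum_p J(p) H(p + p') H*(p + p'')."""
    tcc = np.zeros((freqs.size, freqs.size), dtype=complex)
    for i, p1 in enumerate(freqs):
        for j, p2 in enumerate(freqs):
            tcc[i, j] = np.sum(J_hat(freqs) * H_hat(freqs + p1)
                               * np.conj(H_hat(freqs + p2)))
    return tcc

# Illustrative top-hat stand-ins for the source and the pupil
J_hat = lambda p: (np.abs(p) <= 0.3).astype(float)  # source; sigma = 0.3/0.5 = 0.6
H_hat = lambda p: (np.abs(p) <= 0.5).astype(float)  # ideal low-pass pupil

freqs = np.linspace(-1.0, 1.0, 81)
tcc = compute_tcc(J_hat, H_hat, freqs)
```

Once stored, the same `tcc` matrix can image any mask spectrum via Equation 35.24, and defocused variants can be precomputed by folding a Zernike Z4 phase into `H_hat` before building the table.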
The key element here is that the interaction of the source and lens can be precomputed as TCCs and stored for later use, once the details of the mask layout M(x, y) are known. This formulation for imaging was originally presented by Hopkins [88] and is often called the Hopkins approach.

FIGURE 35.27 Diagram of the overlap integral in the pupil plane for the computation of images using TCCs: the source spectrum Ĵ(p, q) overlaps the shifted pupil functions Ĥ(p + p′, q + q′) and Ĥ*(p + p″, q + q″), and TCC(p′, q′, p″, q″) is the integral over the shaded overlap region.

One example of the utility of this approach is the simulation of defocus. Normally, the Fourier optical equations represent the image at the plane of focus. For propagation beyond focus, however, the expansion of a spherical wave from a point follows a quadratic function that is equivalent to introducing a fourth-order Zernike aberration Z4 in the pupil plane [89] (see Table 35.1). Computation of a defocused image therefore becomes equivalent to the computation of an in-focus image with a suitable degree of fourth-order aberration. By precomputing the TCCs for a system with fourth-order aberration, defocused images for a mask pattern can be calculated merely by using different sets of precalculated TCCs.

35.3.3 MASK TRANSMISSION FUNCTION

In our formulations of imaging so far, the mask transmission is a simple function, M(x, y). Typically, this is a binary mask, having a value of 0 or 1 depending on the pixel coordinates. In the Kirchhoff approximation, mentioned in Section 35.2.2.2, the mask transmission is exactly this function. However, in a real photomask, with layers of chrome coated onto a substrate of quartz, the wavefronts reflect and scatter off the three-dimensional structures, and the wavefront can be a complicated function of position, amplitude, and phase.
This wavefront can still be represented as a 2D function, in which each pixel has its own transmission value and a phase factor, depending on the phase shift of the transmitted light. To derive this representation, however, a simple scalar representation of the field at the mask will not suffice. Instead, a full vector EM field computation may be required.

35.3.3.1 FDTD

A widely used first-principles method for simulating the electromagnetic field over time is the finite-difference time-domain (FDTD) method [90–93]. This is illustrated in Figure 35.28. Here, a grid in time and space is established, and the initial conditions for the sources (charge and current) and the fields at the boundaries are determined. Then, using the Maxwell equations in finite-difference form, the time step is incremented and the E field is recomputed, based on the previous E field and the curl of H at the previous time step. Once this is generated, the time step is incremented again, and the H field is computed, based on the previous H field and the curl of the E field.

FIGURE 35.28 Illustration of the Yee grid geometry (staggered E and H field components with spacings Δx, Δy, Δz) used in the computation of EM fields according to the FDTD method. (Adapted from Taflove, A. and Hagness, S.C., Computational Electrodynamics: The Finite-Difference Time-Domain Method, Artech House, Boston, 2005. With permission. After Yee, K.S., IEEE Trans. Antennas Propagation, AP-14, 302, 1966, Copyright IEEE. With permission.)

As an example, following the notation of Erdmann [93], the Maxwell equations for a transverse electric (TE) field mode can be represented for grid point (i, j) at time step n in finite-difference form as
\[
H_x\big|_{i,j}^{\,n+1/2} = H_x\big|_{i,j}^{\,n-1/2} + \frac{\Delta t}{\mu \Delta x}\left( E_y\big|_{i,j+1}^{\,n} - E_y\big|_{i,j}^{\,n} \right)
\tag{35.26a}
\]

\[
H_z\big|_{i,j}^{\,n+1/2} = H_z\big|_{i,j}^{\,n-1/2} + \frac{\Delta t}{\mu \Delta x}\left( E_y\big|_{i,j}^{\,n} - E_y\big|_{i+1,j}^{\,n} \right)
\tag{35.26b}
\]

\[
E_y\big|_{i,j}^{\,n+1} = C_a\big|_{i,j}\, E_y\big|_{i,j}^{\,n} + C_b\big|_{i,j} \left( H_x\big|_{i,j}^{\,n+1/2} - H_x\big|_{i,j-1}^{\,n+1/2} + H_z\big|_{i-1,j}^{\,n+1/2} - H_z\big|_{i,j}^{\,n+1/2} \right)
\tag{35.26c}
\]

where the coefficients C_a and C_b depend on the material properties and charge densities:

\[
C_a\big|_{i,j} = \frac{1 - \dfrac{\sigma_{i,j}\,\Delta t}{2\,\varepsilon_{i,j}}}{1 + \dfrac{\sigma_{i,j}\,\Delta t}{2\,\varepsilon_{i,j}}}
\tag{35.27a}
\]

\[
C_b\big|_{i,j} = \frac{\dfrac{\Delta t}{2\,\varepsilon_{i,j}}}{1 + \dfrac{\sigma_{i,j}\,\Delta t}{2\,\varepsilon_{i,j}}}
\tag{35.27b}
\]

From the initial conditions, the appropriate fields are computed at half time steps throughout the spatial grid, and the revised fields are then used for the computation of the complementary fields at the next half time step. Each half step could, of course, be designated as a unit time step for the algorithm, but then the entire cycle (E generating H; H generating E) would require two time steps to come full circle. The use of half time steps is therefore convenient, so that the entire algorithm completes a single cycle in a single unit time step. This staggered computation is illustrated in Figure 35.29.

The calculation proceeds through time and space until the maximum allocated time is reached. For a steady-state source of excitation (e.g., incident electromagnetic waves), the time interval should be chosen such that the final few cycles reach a steady state and can be time averaged to give average local field and intensity values.

For this method to work, the optical properties of each point in the computation grid must be specified. For metals (such as the chrome photomask layer), this can be difficult, because the refractive index is less than 1 and a denser grid may be required. However, because the optical
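A minimal 2D TE leapfrog update following the structure of Equations 35.26 and 35.27 can be sketched as below, in normalized units. The grid size, time step, and soft-source details are illustrative assumptions, not from the text, and the grid-spacing factor is divided out of C_b explicitly here rather than absorbed into the coefficient.

```python
import numpy as np

def fdtd_te_2d(eps, sigma, steps, dt=0.5, dx=1.0, mu=1.0, src=(10, 10)):
    """Minimal 2D TE leapfrog (E_y with H_x, H_z) in normalized units.

    eps, sigma: permittivity and conductivity maps (Eq. 35.27 coefficients).
    A soft sinusoidal source drives E_y at grid point `src`.
    """
    ni, nj = eps.shape
    Ey = np.zeros((ni, nj))
    Hx = np.zeros((ni, nj - 1))      # staggered between E_y nodes along j
    Hz = np.zeros((ni - 1, nj))      # staggered between E_y nodes along i
    ca = (1 - sigma * dt / (2 * eps)) / (1 + sigma * dt / (2 * eps))
    cb = (dt / eps) / (1 + sigma * dt / (2 * eps))
    for n in range(steps):
        # Half step: advance H from the curl of E (cf. Eqs. 35.26a and b)
        Hx += (dt / (mu * dx)) * (Ey[:, 1:] - Ey[:, :-1])
        Hz -= (dt / (mu * dx)) * (Ey[1:, :] - Ey[:-1, :])
        # Full step: advance interior E from the curl of H (cf. Eq. 35.26c)
        curl_h = ((Hx[1:-1, 1:] - Hx[1:-1, :-1])
                  - (Hz[1:, 1:-1] - Hz[:-1, 1:-1]))
        Ey[1:-1, 1:-1] = (ca[1:-1, 1:-1] * Ey[1:-1, 1:-1]
                          + cb[1:-1, 1:-1] * curl_h / dx)
        Ey[src] += np.sin(0.2 * n)   # soft source injection
    return Ey
```

With `dt = 0.5` and `dx = 1.0` the 2D Courant condition is satisfied for unit material constants, so the update stays stable; in a mask simulation `eps` and `sigma` would encode the chrome and quartz geometry.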