
Digital Logic Testing and Simulation P11


DOCUMENT INFORMATION

Pages: 16
Size: 124.46 KB

Contents

Digital Logic Testing and Simulation, Second Edition, by Alexander Miczo. ISBN 0-471-43995-9. Copyright © 2003 John Wiley & Sons, Inc.

CHAPTER 11: IDDQ

11.1 INTRODUCTION

Test strategies described in previous chapters relied on two concepts: controllability and observability (C/O). Good controllability makes it easier to drive a circuit into a desired state, and thus easier to sensitize a targeted fault. Good observability makes it easier to monitor the effects of a fault. Solutions to C/O problems include scan path and various ad hoc methods. Scan path reduces C/O to a combinational logic problem which, as explained in Chapter 4, is a solved problem (theoretically, at least).

IDDQ monitoring is another approach, and one that provides complete observability. Current drain in a properly functioning, fully static CMOS IC is negligible when the clock is inactive. When the IC is defective, however, current flow usually becomes excessive, whether due to leakage in the circuit or possibly even to an open. This rise in current flow can be detected by monitoring the current supplied by the tester. How effective is this technique for spotting defective ICs? In one study, IDDQ testing with a test program that provided 60% coverage of stuck-at faults delivered the same AQL as a test program with 90% stuck-at coverage without IDDQ.[1]

The stuck-at fault model that we have been dealing with up to this point is not intended to address qualitative issues; its primary target is solid defects manifested as signals stuck at logic 1 or logic 0. An IC may run perfectly well on a tester operating at 1 or 2 MHz, at room temperature, but fail in the system. Worse still, an IC may fail shortly after the product is delivered to the customer. This is often due to leakage paths that degrade to a catastrophic failure mode shortly after the product is put into service.

11.2 BACKGROUND

The CMOS circuit was patented in 1963 by Frank Wanlass.[2] His two-transistor inverter consumed just a few nanowatts of standby power, whereas equivalent bipolar circuits of the time consumed milliwatts of power in standby mode. During the 1970s, companies began measuring leakage of CMOS parts to identify those with excessive power consumption.[3] At times this was a useful adjunct to traditional functional testing for stuck-at faults, and at other times it was critical to achieving the quality levels required by customers.

The classic stuck-at fault model, while identifying unique signal paths (cf. Section 7.5) and providing a means for quantitatively measuring the completeness of a test for those paths, does not model many of the fault classes that can occur, particularly in deep submicron circuits. In fact, as was pointed out in Section 3.4, the stuck-at fault can be thought of as a behavioral model for very low-level devices, namely, the logic gates.

Faults such as high-resistance bridging shorts, inside a logic gate or between connections to adjacent gates, may not be visible during a functional test. A leakage path may cause path delay, so that the circuit does not operate correctly at speed; it may nonetheless operate correctly when tested at a speed much slower than its design speed, since there may be enough time for charge to build up and force the gate to switch. Shorts between signal runs on the die are usually overlooked during functional testing because, in general, there is no fault model to determine whether they have been tested.
If there were fault models for these shorts, perhaps generated by a layout program, the number of such faults would be prohibitively large and would aggravate a frequently untenable fault simulation problem (cf. Section 3.4).

Excess current detected during test may also indicate reliability problems. The inverter depicted in Figure 11.1 has a short circuit from the gate to the drain of Q1. In normal operation, when input A switches from 0 to 1, there is a brief rush of current between VDD and ground. Shortly thereafter, a high at the gate of Q1 causes a near-complete cutoff of current, the measured flow typically being a few nanoamperes. This minuscule current flow is quite important in battery-operated applications, ranging from human implants to laptop computers. Because of the defect, however, there is a path from ground, through the drain of Q2, to the source of Q1 and then to the gate. The output F in this example will likely respond with the correct value, since it is logically connected to ground through Q2, but current flow will be excessive, and there is the possibility of a catastrophic failure in the future.

[Figure 11.1: CMOS inverter (Q1, Q2) with a gate-to-drain short circuit; waveforms of A, F, and current versus time show the elevated IDDQ caused by the defect.]

Interestingly, although much attention is given to the detection of shorts by IDDQ, it can also detect open circuits. When an open occurs, it is often the case that neither transistor of a transistor pair is completely turned off. As a result, a leakage path from ground to VDD exists. This is significant because, in conventional stuck-fault testing, a two-vector combination is required to detect stuck-open faults in CMOS circuits (cf. Section 7.6.2).

11.3 SELECTING VECTORS

In order to measure leakage current, the circuit must be in a fully initialized state. IDDQ measurements must be made on quiet vectors, that is, vectors with very little leakage current. During simulation, those vectors for which indeterminate values are detected must immediately be eliminated as candidates for current measurement. During test, when the circuit reaches a vector at which a current measurement is to be made, the circuit must be held in a steady state long enough for all switching transients to subside. Some design rules include:

- No pullups or pulldowns.
- No floating nodes.
- No logic contention.
- If analog circuits appear in the design, they should be on separate power supplies.
- No unconnected inputs on unused logic.

The purpose of these design rules is to prevent excess current flow during quiescent periods. Pullups and pulldowns provide resistive paths to ground or power. On average, a node is going to be at logic 0 half the time and at logic 1 half the time; if the node is at logic 0 and is connected to a pullup, a path exists for current flow. Floating inputs may stabilize at a voltage level somewhere between ground and VDD, thus providing a current path. Incompletely specified buses can be troublesome. For example, if a bus has three drivers, a logic designer may design the circuit in such a way that the select logic floats the bus when no driver is active; hence, any inputs driven by the bus will be floating. Bus keeper cells are recommended to prevent floating buses.
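These screening rules translate directly into a per-vector candidacy check during simulation. The following sketch is a minimal illustration under an assumed simulator interface; the names node_values, bus_states, and pullup_nodes stand in for state that a real simulator would expose and are not taken from any particular tool.

    # Minimal quiet-vector screen (sketch, hypothetical simulator interface).
    def is_quiet_candidate(node_values, bus_states, pullup_nodes=()):
        """Return True if a vector qualifies for an IDDQ measurement."""
        # The circuit must be fully initialized: no indeterminate (X) values.
        if any(v == 'X' for v in node_values.values()):
            return False
        # No floating buses and no drivers in contention.
        if any(s in ('floating', 'contention') for s in bus_states.values()):
            return False
        # A node tied to a pullup and resting at logic 0 draws static
        # current (a pulldown at logic 1 would do so symmetrically).
        if any(node_values.get(n) == '0' for n in pullup_nodes):
            return False
        return True

    # Example: fully initialized, driven bus, no resistive conflicts.
    print(is_quiet_candidate({'a': '1', 'b': '0'}, {'bus0': 'driven'}))  # True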
In general, any circuit configuration that causes a steady current drain from the power supply runs the risk of masking failure effects, since the effectiveness of IDDQ relies on the ability to distinguish between the very low quiescent current drain of a defect-free circuit and the high current caused by a defect. Interestingly, redundant logic, which is troublesome for functional testing, does not adversely affect IDDQ testing. In fact, IDDQ can detect defects in redundant logic that a functional test cannot detect.

11.3.1 Toggle Count

Toggle count has been used for many years as a metric for evaluating the thoroughness of gate-level simulations for design verification. When schematic entry was the primary medium for developing logic circuits, and the level of abstraction was logic gates, toggle count could be used to identify nodes on the schematics that were never toggled to a particular value. Those nodes were then targeted during simulation, the objective being to get all or nearly all nodes toggled to both 1 and 0.

Since one of the objectives of IDDQ testing is to identify short circuits between signal lines and power or ground, toggle count can be an effective method for determining the effectiveness of a given test. If a particular set of test vectors has a high toggle percentage (meaning that a high percentage of nodes toggled to both 1 and 0), then it is reasonable to expect that a high percentage of shorts will be detected.

The computation is quite straightforward: simply identify the gate driving each line in the circuit and note whether it has toggled to a 1 or 0 at the end of each vector. During simulation, the first step is to determine whether the vector can be used for IDDQ at all; recall that a vector cannot be a candidate if the circuit is not yet fully initialized or if there is bus contention. If the vector is a candidate, then determine how many previously untoggled nodes are toggled by this vector. Since there is usually a limit on the number of vectors for which the tester can make IDDQ measurements, it is desirable to select vectors such that each selected vector contributes as many new nodes as possible to the collection of toggled nodes.

The first vector that meets the acceptance criteria will generally provide about 50% coverage, since every node is at 1 or 0. A scheme described in the Quietest method (next section), but applicable here as well, establishes a percentage of the untoggled node values as an objective. For example, an objective might bar a vector from being selected unless it toggles at least 10% of the currently untoggled node values. As toggle coverage increases, the 10% selection criterion remains, but the absolute number of newly toggled node values decreases.

This procedure can be applied iteratively. A given percentage may prove too restrictive, so that no new vectors are selected after some toggle coverage is reached. The vectors selected so far can be retained, and simulation rerun with a lower percentage threshold, say 5%; this will usually cause additional vectors to be selected. If the maximum allowable number of vectors has not been reached and the toggle coverage is still not acceptable, the procedure can be repeated with yet another, lower selection percentage.
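The selection procedure just described is easy to mechanize. The sketch below is an illustration, not a vendor implementation: each candidate vector is assumed to arrive with the set of (node, value) pairs it toggles, and the (0.10, 0.05) thresholds mirror the 10% and 5% passes in the text.

    # Greedy toggle-based selection of IDDQ measurement vectors (sketch).
    def select_iddq_vectors(candidates, max_vectors, thresholds=(0.10, 0.05)):
        """candidates: ordered (vector_id, toggled) pairs, where toggled is
        the set of (node, value) pairs produced by the vector."""
        universe = set()
        for _, toggled in candidates:
            universe |= toggled
        covered, selected, chosen = set(), [], set()
        for pct in thresholds:              # progressively relax the criterion
            for vec_id, toggled in candidates:
                if len(selected) >= max_vectors:
                    return selected, covered
                if vec_id in chosen:
                    continue                # retain earlier selections
                remaining = len(universe) - len(covered)
                new = toggled - covered
                # Accept only if the vector toggles at least pct of the
                # currently untoggled node values.
                if remaining and len(new) >= pct * remaining:
                    selected.append(vec_id)
                    chosen.add(vec_id)
                    covered |= new
        return selected, covered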
11.3.2 The Quietest Method

The Quietest method is based on the observation that six shorts can occur in a single MOS transistor:[4]

- fGS: gate and source
- fGD: gate and drain
- fSD: source and drain
- fBS: bulk and source
- fBD: bulk and drain
- fBG: bulk and gate

[Figure 11.2: MOS transistor short fault model, showing the six shorts among the gate, source, drain, and bulk terminals.]

These shorts are shown in Figure 11.2. The approach used in this method is applicable at the transistor level or at the macrocell level. It begins with a table for a particular cell, which could be a simple logic gate, a full-adder, or a considerably more complex circuit. All input combinations to the cell are fault-simulated at the transistor level. This list of transistor shorts permits IDDQ fault simulation of the entire circuit to be accomplished hierarchically.

The first step is to simulate each transistor or macrocell and to fault-simulate each of the faults. A table is created for each cell, listing I/O combinations versus faults detected (see Figure 11.3). The NAND gate of Figure 11.3(a) is simulated, and the table of Figure 11.3(b) is constructed. This table is a matrix of dimension m × n, where m = 2^k is the number of rows, k is the number of I/O pins, and the number of columns n corresponds to the number of transistors. The circuit shown in Figure 11.3 has two inputs and one output, so there are 2^3 = 8 rows. Each entry in the table is a two-character octal number whose six bits correspond to the six transistor faults, as defined in Figure 11.3(c).

[Figure 11.3: Lookup table for IDDQ faults. (a) Two-input NAND gate built from transistors N1, N2, P1, P2. (b) Fault table, reconstructed below. (c) Bit assignment of each entry: fBG, fBD, fBS, fSD, fGD, fGS.]

    i   (A, B, X)   N1   N2   P1   P2
    0   (0, 0, 0)   00   00   00   00
    1   (0, 0, 1)   22   00   43   43
    2   (0, 1, 0)   00   00   00   00
    3   (0, 1, 1)   26   43   43   00
    4   (1, 0, 0)   00   00   00   00
    5   (1, 0, 1)   70   26   00   43
    6   (1, 1, 0)   43   43   26   26
    7   (1, 1, 1)   00   00   00   00

The all-zero row entries for combinational logic correspond to combinations that cannot occur. For example, row 2 corresponds to the combination (A, B, X) = (0, 1, 0), which is inconsistent with the definition of a NAND gate. Note, however, that some combinations in sequential circuits may rely on the presence of feedback.

Once the table is created, it can be used to compute IDDQ coverage for the cell during normal logic simulation. At the end of each vector, the input combination on each macrocell is examined. If the combination has not been generated by any previously selected IDDQ vector, then any short faults detected by this combination, and not previously marked as detected, can be selected and tallied for the current vector. After all cells have been examined, the incremental improvement in fault coverage for the vector can be computed. If the vector satisfies some criterion, such as that described in the previous subsection, it can be accepted and added to the collection of vectors for which IDDQ measurements are to be made.
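In software, using these tables reduces to decoding the octal entries at the end of each vector. The sketch below transcribes the NAND table of Figure 11.3(b) and assumes that the most significant of the six bits corresponds to fBG, following the order listed in Figure 11.3(c); the bit ordering, and therefore the decoded output, should be treated as an assumption of this illustration.

    # Decode Quietest-method lookup table entries (sketch).
    # Assumed bit assignment, MSB to LSB, per Figure 11.3(c):
    FAULTS = ('fBG', 'fBD', 'fBS', 'fSD', 'fGD', 'fGS')

    # NAND table of Figure 11.3(b): one row of octal entries per transistor,
    # indexed by the I/O combination i = (A, B, X) read as a binary number.
    NAND_TABLE = {
        'N1': (0o00, 0o22, 0o00, 0o26, 0o00, 0o70, 0o43, 0o00),
        'N2': (0o00, 0o00, 0o00, 0o43, 0o00, 0o26, 0o43, 0o00),
        'P1': (0o00, 0o43, 0o00, 0o43, 0o00, 0o00, 0o26, 0o00),
        'P2': (0o00, 0o43, 0o00, 0o00, 0o00, 0o43, 0o26, 0o00),
    }

    def faults_detected(table, combo_index):
        """Map each transistor to the faults detected at this I/O combination."""
        result = {}
        for transistor, row in table.items():
            entry = row[combo_index]
            hits = [name for bit, name in enumerate(FAULTS)
                    if entry & (1 << (5 - bit))]
            if hits:
                result[transistor] = hits
        return result

    # Combination i = 5, i.e. (A, B, X) = (1, 0, 1): N1's entry 0o70 = 111000
    # decodes to fBG, fBD, fBS under the assumed bit order.
    print(faults_detected(NAND_TABLE, 5))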
11.4 CHOOSING A THRESHOLD

One of the problems associated with IDDQ is the choice of a current threshold. Different devices exhibit different amounts of leakage current. Even different devices of the same die size may have significantly different amounts of leakage current, depending on the kind of logic and/or memory contained on the die. Furthermore, the same device, when tested at wafer sort and at package test, will exhibit different leakage. The target application of the IC also influences the leakage threshold: manufacturers of ICs for portable applications or human implants will have much more stringent requirements on leakage current.

The issue is further complicated by the fact that different vectors from the same test set can exhibit noticeably different leakage currents. As a result, it is a nontrivial task to establish a threshold. A threshold that is too lax results in keeping devices that should be discarded; conversely, a threshold that is too rigorous results in discarding good devices. One source suggests that if the IDDQ of the device under test is greater than 100 µA for all vectors under normal conditions, the IC cannot be tested by means of IDDQ measurement.[5]

Determining a threshold starts with a histogram of IDDQ current versus the number of devices falling in each bin. Figure 11.4 shows such a histogram for 11,405 microcontrollers.[6] The author uses ISSQ to denote the fact that current is measured at VSS rather than VDD.

[Figure 11.4: Distribution of ISSQ. Histogram of the number of devices per ISSQ bin (µA), with bins from 0 through 20, then 25, 40, 60, and over; bin counts range up to about 3000.]

In an IEEE QTAG (Quality Test Action Group) survey, respondents were asked where they would set a threshold for the data in Figure 11.4.[7] The following results were obtained:

    Threshold      Responses
    500–100 µA     3
    100–50 µA      7
    50–25 µA       4
    25–10 µA       3
    10–5 µA        6
    <5 µA          5

One experiment attempted to correlate IDDQ results with the results of functional tests. IDDQ was measured in die that passed functional test with high stuck-fault coverage and in die that failed the same functional tests. It was shown that 96% of parts passing the functional test measured less than 1 µA, while only 2% of parts reading greater than 1 mA passed functional test.[1] Conversely, of parts failing functional test, 83% gave IDDQ readings of over 1 mA, while only 15% read less than 1 µA.

It has been recommended that IDDQ measurements be made at the highest possible VDD in order to ensure detection of defects that have strongly nonlinear characteristics.[8] The authors of this study report that a defective IC leaked 10 nA at 5 V but 29.3 µA at 6.2 V. These same authors point out that a design otherwise amenable to IDDQ testing nonetheless had some vectors for which IDDQ values were on the order of 265 µA.

In general, it seems safe to say that the selection of a threshold will, of necessity, be empirical, since there is no hard and fast rule. Measurements such as those described here, comparing IDDQ for parts that pass functional test against parts that fail, help to shed light on the subject. Measuring IDDQ from lots with different yields, from die at different points on the wafer, at different voltages, and after different periods of quiescence can further inform the judgment as to where to set the threshold.

11.5 MEASURING CURRENT

A proposed circuit for measuring IDDQ current flow, which has come to be known as the Keating–Meyer circuit, is shown in Figure 11.5.[8] At the beginning of the period, Q1 is on and provides a short circuit between C1 and C2, maintaining full voltage to the DUT. Eventually, Q1 is turned off, and static current to the DUT is then drawn exclusively from C1. The value of C1 is determined from the relationship C1 = I·t/V, where I is the desired measurement resolution, t is the elapsed time within which it is desired to make a measurement, and V is the voltage resolution at the op amp.

[Figure 11.5: IDDQ pass/fail circuit. The DUT is supplied through Q1 from capacitors C1 and C2, with connections to the device power supply (DPS) and to sample-and-hold circuits (SH1).]
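The C1 = I·t/V relationship is simple enough to check numerically. The following sketch reproduces the arithmetic of the worked example that follows; the only inputs are the example's own values.

    # Sizing the Keating-Meyer measurement capacitor, C1 = I*t/V (sketch).
    def keating_meyer_c1(i_resolution, t_measure, v_resolution):
        """Capacitance in farads, given current resolution (A), measurement
        window (s), and voltage resolution at the op amp (V)."""
        return i_resolution * t_measure / v_resolution

    c1 = keating_meyer_c1(25e-6, 500e-9, 10e-3)
    print(f"C1 = {c1 * 1e12:.0f} pF")        # 1250 pF

    # Latest time to sample the drop across Q1 if DUT droop is limited to
    # 1.0 V for a defect-free device drawing less than 25 uA:
    t1 = c1 * 1.0 / 25e-6
    print(f"t1 = {t1 * 1e6:.0f} us")         # 50 us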
Example: Suppose we want a measurement resolution of 25 µA within 500 ns, along with 10 mV of resolution at the op amp. Then

    C1 = I·t/V = (25 µA × 500 ns) / (10 mV) = 1250 pF

For this capacitance of 1250 pF, if we wish to limit the voltage drop at the DUT to 1.0 V (VCC > 4 V), then for a defect-free device (IDD < 25 µA) the voltage drop across Q1 must be measured within

    t1 < C·V/I = (1250 pF × 1.0 V) / (25 µA) = 50 µs

The circuit in Figure 11.5 can also be used to measure switching currents as well as static IDD. For example, if a 1.0-A peak current lasting 5 ns is assumed, then for a desired resolution of 100 µA at 10 mV and a 500-ns IDD measurement time,

    C1 = (100 µA × 500 ns) / (10 mV) = 5000 pF

Turn off Q1 and clock the device at t = 0 ns. Then sample the drop across Q1 at t = 100 ns. The total charge delivered by C1 is

    Q = ∫ i dt = (1 A)(5 ns) + (100 µA)(97 ns) = 5 nC + 9.7 pC ≈ 5 nC

and the voltage across Q1 equals V = Q/C = 5 nC / 0.005 µF = 1 V. In these equations the value of C1 is critical: an optimal value must be selected in order to avoid unnecessarily increasing test time or producing excessive VCC droop at the DUT.

The QuiC-Mon circuit builds on the Keating–Meyer concept.[9] Figure 11.6 illustrates the QuiC-Mon circuit. The key difference is that QuiC-Mon takes the time derivative of the monitored supply voltage. As a result, the constant-slope waveform is converted into a step function, and settling time improves significantly, allowing faster measurement rates. Measurements with QuiC-Mon can be taken using IDDQ or ISSQ; however, ISSQ provides more accurate measurements at input pins with internal pullups when the pin is driven low. The transfer function for the QuiC-Mon circuit of Figure 11.6 is

    VS = (R3/R2)·V1 = (R3/R2)·R1·C1·(dVSS/dt) = (R3/R2)·R1·(C1/(C1 + CSS))·ISSQ

[Figure 11.6: The QuiC-Mon circuit. The DUT's VSS node (with capacitance CSS) feeds C1 and R1 into an op-amp stage producing V1, which a second stage with gain R3/R2 amplifies to the output VS.]

If capacitor C1 is large compared to the DUT capacitance CSS, the transfer function reduces to

    VS = R1·(R3/R2)·ISSQ

When using the monitor, a number of factors must be taken into consideration in order to achieve accuracy and speed. It is important to minimize the physical length of the VSS path between the DUT and QuiC-Mon to reduce noise and inductance; it is recommended that the monitor be within 2 or 3 cm of the DUT. For IDDQ testing, bypass capacitance should be minimized so that measurement speed is unaffected; for ISSQ testing, bypass capacitance is not a significant issue. The resistor R1 can be increased to amplify QuiC-Mon's output, although beyond a point larger values require low-pass filtering. The circuit can achieve gains of up to 500 mV/µA at 250 kHz, which is sufficient for high-speed, submicroampere resolution. In some applications, transient settling time limited measurement speeds to 100 kHz.
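To get a feel for the simplified transfer function, the sketch below evaluates VS = R1·(R3/R2)·ISSQ. The component values are illustrative assumptions chosen to reproduce the 500 mV/µA gain quoted above; they are not taken from the QuiC-Mon literature.

    # Simplified QuiC-Mon output, VS = R1 * (R3/R2) * ISSQ, valid when
    # C1 >> CSS. Component values are assumptions for illustration only.
    def quicmon_vs(issq, r1=10e3, r2=1e3, r3=50e3):
        """Monitor output voltage for a quiescent current issq, in amperes."""
        return r1 * (r3 / r2) * issq

    gain = quicmon_vs(1.0)                            # volts per ampere
    print(f"gain = {gain * 1e-3:.0f} mV/uA")          # 500 mV/uA
    print(f"VS at 1 uA: {quicmon_vs(1e-6):.2f} V")    # 0.50 V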
11.6 IDDQ VERSUS BURN-IN

Burn-in is a process of continuously energizing a circuit, usually under extreme voltage or environmental conditions, in order to precipitate failures of devices that are marginal performers due to fabrication imperfections. It is well known that most devices that fail will do so within a few days or weeks of their initial purchase; this is illustrated by the bathtub curve of Figure 11.7. Some devices will pass the initial testing phase, when testing is performed at nominal values of the key parameters, but will fail shortly after being put into operation. By elevating parameters such as voltage and temperature, many of the devices susceptible to early-life failures can be identified and discarded before they are packaged and shipped to customers.

[Figure 11.7: Bathtub curve. Failure rate over time, showing early life failures, the useful life period, and the wear-out period.]

There is growing evidence that an effective IDDQ program can serve the same purpose as a burn-in program. One of the more prevalent failures common to CMOS circuits is the gate-oxide short (GOS). A GOS may create a high-resistance leakage current path that does not initially affect performance because of the high noise margin of the field-effect transistor (FET). Eventually, over time, the resistance decreases and the device fails.

In a paper previously cited in this chapter,[1] the author evaluated the effects of IDDQ on burn-in. It was found that the use of IDDQ reduced burn-in failures by 80%, whereas adding more functional tests had only a marginal effect on reducing burn-in failures. Another study examined ASICs returned from the field; the author found that nearly 70% of the parts would have failed an IDDQ test.[10] In another study, the author subjected parts failing an IDDQ test to a 1000-hour life test, which revealed that about 8% of the parts that failed the IDDQ test also failed the life test. Yet another study was conducted on 2100 die that failed IDDQ: when subjected to burn-in, their failure rate was 10 times greater than that of a control sample.[11] In one more previously cited study, the number of parts failing a 24-hour burn-in was reduced from a failure rate of 448 ppm to 25.6 ppm.[6]

A study performed at Intel was used to justify making IDDQ a major part of the test strategy for the i960JX CPU.[12] The goal was to achieve ZOBI (zero-hour burn-in). This decision was shown to save about 1.25 million dollars as a result of reduced capital costs, reduced test cost per part, and yield improvement. In order to achieve ZOBI, it was necessary to demonstrate fewer than 1000 defects per million (DPM), that is, 0.1%; to have at least 30% of burn-in hardware in place for contingencies; and to have SBLs (statistical bin limits) on key bins at wafer sort.

The tool used by this division of Intel was an IDDQ fault simulator called iLEAK, which generated tables based on the Quietest method (Section 11.3.2). The use of toggle coverage, together with an iLEAK option called fastileak, reduced the amount of computation by screening the vector sets and choosing candidate vectors for iLEAK to evaluate. At the end of that process, seven vectors were chosen; this set was augmented with another six vectors, bringing the total to 13. The i960 CPU is a two-phase clock design in which only one of the phases is static, so it was necessary to change the vector format to ensure that the clock would stop during the static phase. Through experimentation, it was determined that the delay time needed to measure leakage current was 20 ms per IDDQ strobe.

A key concern in setting up the IDDQ process was to ensure defect detection without overkill, that is, without discarding excessive numbers of good die. To achieve this, it [...]
REFERENCES (excerpt)

[...] Elimination of a High Volume Microprocessor Using IDDQ, Proc. IEEE Int. Test Conf., 1996, pp. 242–249.

13. Henderson, C. L., J. M. Soden, and C. F. Hawkins, The Behavior and Testing Implications of CMOS IC Logic Gate Open Circuits, Proc. IEEE Int. Test Conf., 1991, pp. 302–310.

14. Williams, T. W., et al., IDDQ Test: Sensitivity Analysis of Scaling, Proc. IEEE Int. Test Conf., 1996, pp. 786–792.

15. Soden, J. M., C. F. [...]

Posted: 07/11/2013, 20:15
