
Standardized Functional Verification – P17



DOCUMENT INFORMATION

Structure

  • 0387717323

  • Table of Contents

  • 1. A Brief Overview of Functional Verification

    • 1.1 Costs and Risk

    • 1.2 Verification and Time to Market

    • 1.3 Verification and Development Costs

    • 1.4 But any Lessons Learned?

    • 1.5 Functional Verification in a Nutshell

    • 1.6 Principles of Constrained Random Verification

    • 1.7 Standardized Functional Verification

    • 1.8 Summary

  • 2. Analytical Foundation

    • 2.1 A Note on Terminology

    • 2.2 DUTs, DUVs, and Targets

    • 2.3 Linear Algebra for Digital System Verification

    • 2.4 Standard Variables

    • 2.5 Ranges of Variables

    • 2.6 Rules and Guidelines

      • 2.6.1 Example – Rules and Guidelines

    • 2.7 Variables of Connectivity

      • 2.7.1 Example – External Connectivity

      • 2.7.2 Example – Internal Connectivity

    • 2.8 Variables of Activation

      • 2.8.1 Example – Activation

    • 2.9 Variables of Condition

      • 2.9.1 Example – Conditions

    • 2.10 Morphs

    • 2.11 Variables of Stimulus and Response

      • 2.11.1 Internal Stimuli and Responses

      • 2.11.2 Autonomous Responses

      • 2.11.3 Conditions and Responses

      • 2.11.4 Example – Stimulus and Response

    • 2.12 Error Imposition

      • 2.12.1 Example – Errors

    • 2.13 Generating Excitement

    • 2.14 Special Cases

      • 2.14.1 Example – Special Case

    • 2.15 Summary

    • References

  • 3. Exploring Functional Space

    • 3.1 Functional Closure

    • 3.2 Counting Function Points

      • 3.2.1 Variables of Connectivity

      • 3.2.2 Variables of Activation (and other Time-variant Variables)

      • 3.2.3 Variables of Condition

      • 3.2.4 Variables of Stimulus

      • 3.2.5 Variables of Response

      • 3.2.6 Variables of Error

      • 3.2.7 Special Cases

      • 3.2.8 An Approximate Upper Bound

    • 3.3 Condensation in the Functional Space

    • 3.4 Connecting the Dots

    • 3.5 Analyzing an 8-entry Queue

    • 3.6 Reset in the VTG

    • 3.7 Modeling Faulty Behavior

    • 3.8 Back to those Special Cases

    • 3.9 A Little Graph Theory

    • 3.10 Reaching Functional Closure

    • 3.11 Summary

  • 4. Planning and Execution

    • 4.1 Managing Verification Projects

    • 4.2 The Goal

    • 4.3 Executing the Plan to Obtain Results

      • 4.3.1 Preparation

      • 4.3.2 Code Construction

      • 4.3.3 Code Revision

      • 4.3.4 Graduated Testing

      • 4.3.5 Bug Fixing

    • 4.4 Soft Prototype and Hard Prototype

    • 4.5 The Verification Plan

    • 4.6 Instances, Morphs, and Targets (§ 1)

    • 4.7 Clock Domain Crossings (§ 1)

    • 4.8 Verifying Changes to an Existing Device (§ 1)

    • 4.9 Interpretation of the Specification (§ 1)

    • 4.10 Instrumenting the Prototype (§ 2)

      • 4.10.1 An Ounce of Prevention (§ 2)

    • 4.11 Standard Results (§ 3)

    • 4.12 Setting Goals for Coverage and Risk (§ 4)

      • 4.12.1 Making Trade-offs (§ 4)

      • 4.12.2 Focusing Resources (§ 4)

    • 4.13 Architecture for Verification Software (§ 5)

      • 4.13.1 Flow for Soft Prototype (§ 5)

      • 4.13.2 Random Value Assignment (§ 5)

      • 4.13.3 General CRV Process (§ 5)

      • 4.13.4 Activation and Initialization (§ 5)

      • 4.13.5 Static vs. Dynamic Test Generation (§ 5)

      • 4.13.6 Halting Individual Tests (§ 5)

      • 4.13.7 Sanity Checking and Other Tests (§ 5)

      • 4.13.8 Gate-level Simulation (§ 5)

      • 4.13.9 Generating Production Test Vectors (§ 5)

    • 4.14 Change Management (§ 6)

    • 4.15 Organizing the Teams (§ 7)

      • 4.15.1 Failure Analysis (§ 7)

    • 4.16 Tracking Progress (§ 8)

    • 4.17 Related Documents (§ 9)

    • 4.18 Scope, Schedule and Resources (§ 10)

    • 4.19 Summary

    • References

  • 5. Normalizing Data

    • 5.1 Estimating Project Resources

    • 5.2 Power and Convergence

    • 5.3 Factors to Consider in using Convergence

    • 5.4 Complexity of a Target

    • 5.5 Scaling Regression using Convergence

    • 5.6 Normalizing Cycle Counts with Complexity

    • 5.7 Using Normalized Cycles in Risk Assessment

    • 5.8 Bug Count as a Function of Complexity

    • 5.9 Comparing Size and Complexity

    • 5.10 Summary

    • References

  • 6. Analyzing Results

    • 6.1 Functional Coverage

    • 6.2 Standard Results for Analysis

    • 6.3 Statistically Sampling the Function Space

    • 6.4 Measures of Coverage

    • 6.5 Code Coverage

    • 6.6 State Reachability in State Machines

    • 6.7 Arc Traversability in State Machines

    • 6.8 Fault Coverage

    • 6.9 VTG Coverage

    • 6.10 Strong Measures and Weak Measures

    • 6.11 Standard Measures of Function Space Coverage

    • 6.12 Specific Measures and General Measures

    • 6.13 Specific Measures for Quadrant I

    • 6.14 General Measures for Quadrants II, III, and IV

    • 6.15 Multiple Clock Domains

    • 6.16 Views of Coverage

      • 6.16.1 1-dimensional Views

      • 6.16.2 Pareto Views

      • 6.16.3 2-dimensional Views

      • 6.16.4 Time-based Views

    • 6.17 Standard Views of Functional Coverage

    • 6.18 Summary

    • References

  • 7. Assessing Risk

    • 7.1 Making Decisions

    • 7.2 Some Background on Risk Assessment

    • 7.3 Successful Functional Verification

    • 7.4 Knowledge and Risk

    • 7.5 Coverage and Risk

    • 7.6 Data-driven Risk Assessment

    • 7.7 VTG Arc Coverage

    • 7.8 Using Q to Estimate Risk of a Bug

    • 7.9 Bug Count as a Function of Z

    • 7.10 Evaluating Commercial IP

    • 7.11 Evaluating IP for Single Application

    • 7.12 Nearest Neighbor Analysis

    • 7.13 Summary

    • References

  • Appendix – Functional Space of a Queue

    • A.1 Basic 8-entry Queue

    • A.2 Adding an Indirect Condition

    • A.3 Programmable High- and Low-water Marks

    • A.4 Size of the Functional Space for this Queue

    • A.5 Condensation in the Functional Space

    • A.6 No Other Variables?

    • A.7 VTGs for 8-entry Queue with Programmable HWM & LWM

  • Index

    • A

    • B

    • C

    • D

    • E

    • F

    • G

    • H

    • I

    • K

    • L

    • M

    • N

    • O

    • P

    • Q

    • R

    • S

    • T

    • U

    • V

    • W

Content

…coverage fall between these two extremes as shown in Fig. 6.3. The relative positions of the various measures along the vertical axis of increasing strength are only approximate, but serve to indicate how strongly each should be considered when assessing risk at tape-out, the topic of the next chapter.

Fig. 6.3. Relative strength of measures of coverage

6.11 Standard Measures of Function Space Coverage

In our discussion on standard measures (and later on standard views) it's necessary to keep in mind that the measures described in this section are only the beginning of the evolution of widely adopted standards with regard to measuring functional coverage for an IC or other digital system. The measures described in the remainder of this section might better be regarded as candidates for inclusion in a standard.

Useful and productive candidates will be embraced by the verification community and, eventually, by standards-making bodies. Those that are either not useful or not productive will fall by the wayside. Accumulation of empirical data within the industry will lead to the discovery of other, more useful and productive measures that will add to the standard or replace some existing measures.

Regression results are retained for the purpose of generating standard measures by way of data mining. Ideally, complete execution histories of all tests would be saved for later analysis. However, this is not always practical due to the considerable disk space consumed by the simulation results. But, considering the economics of cheap disk storage vs. expensive mask sets, this storage can be a very worthwhile investment.

The standard measures are the basic measures of how thoroughly each of the various spaces is explored and/or exercised. Many of these measures are simply the fraction of possible things exercised, what is commonly called coverage. Another term commonly used in this analysis is visit.
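The visit-counting scheme just described can be sketched as a small bookkeeping class. This is an illustrative sketch only, not any particular tool's API; the names (`CoverageModel`, `min_visits`) are invented for the example.

```python
from collections import Counter

class CoverageModel:
    """Counts visits to function points and reports which points have
    reached a user-specified minimum number of visits ("covered"),
    which have been visited but not often enough ("in-between"),
    and which were never visited at all."""

    def __init__(self, points, min_visits=1):
        self.points = set(points)
        self.min_visits = min_visits
        self.visits = Counter()

    def record(self, point):
        # Called once per observed visit during regression.
        if point in self.points:
            self.visits[point] += 1

    def status(self, point):
        n = self.visits[point]
        if n == 0:
            return "never visited"
        return "covered" if n >= self.min_visits else "in-between"

    def coverage(self):
        """Fraction of points that reached the minimum visit count."""
        covered = sum(1 for p in self.points
                      if self.visits[p] >= self.min_visits)
        return covered / len(self.points)

# Example: 4 hypothetical function points, require at least 2 visits each.
model = CoverageModel(["A", "B", "C", "D"], min_visits=2)
for p in ["A", "A", "B", "C", "A"]:
    model.record(p)
print(model.status("A"))   # covered
print(model.status("B"))   # in-between
print(model.status("D"))   # never visited
print(model.coverage())    # 0.25
```

The three-way status mirrors the arc-painting behavior mentioned above: never traversed, traversed at least the user-specified count, or somewhere in between.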
Some commercially available verification tools allow the user to establish some minimum number of visits to a given thing (statement, state machine arc, value, etc.) before claiming that the thing has been covered or verified. Earlier in this chapter it was mentioned that some tools will paint an arc depending on whether the arc has never been traversed (visited), has been traversed at least the number of times specified by the user, or somewhere in between.

6.12 Specific Measures and General Measures

It is necessary to account for the four different possible ways in which coverage might be measured for a given target or body of RTL. These are shown in Fig. 6.4. The subspace of connectivity can be usefully regarded as containing 4 non-intersecting quadrants.

Fig. 6.4. Four quadrants of subspace of connectivity

Quadrant I is without variability. That is, the values of variables of connectivity are all constants within this quadrant. This corresponds to designing an IC for use in one and only one system. Many conventional designs fall into this quadrant, and verification coverage of these designs is where most industry attention has been focused.

Quadrant II corresponds to designs in which the target is intended to be used in a variety of systems. Many general-purpose ICs fall into this quadrant, such as a microprocessor designed to work with multiple memory subsystems and I/O subsystems.

Quadrants III and IV correspond to the design space for commercial IP. RTL can be generated with differing values for variables of internal connectivity per the needs of the IP integrator. Quadrant III may be of little interest, but quadrant IV contains most commercial IP available currently.

Only one of these four quadrants (quadrant I) can be characterized by specific measures of functional coverage.
Code coverage, for example, is only meaningful for a single instance being simulated in a single context. The other three quadrants (II, III, and IV) can only be characterized by general measures of functional coverage. These general measures can be useful to the IP integrator in evaluating a producer's product in a broad sense, but if the intended usage of the IP is known, then the specific measures can be applied. To know the intended usage means that values for all variables of internal and external (to the IP) connectivity have been determined. Chapter 7 will have more to say on this topic.

6.13 Specific Measures for Quadrant I

This set of measures is summarized in Table 6.1 below. Note that these measures must be applied to each clock domain in the target individually.

Table 6.1. Standard Specific Measures

  • Code coverage: % statement, % branch, % condition, % path (optional), % toggle (optional), % trigger (optional)

  • For each state machine: % states reached, % arcs traversed

  • VTG (uncondensed) coverage: % function points, % function arcs

  • VTG (condensed) coverage: % function points, % function arcs

  • Complexity: Z_G, Z_V

  • Regression cycles: N_R, Q_ZG, Q_ZV

The standard measures for verification of a single instance in a single context include the familiar measures of code coverage and the somewhat less familiar measures based on state machines. In addition to these measures, all of which should be readily available using commercial verification tools, the standard measures include the much stronger measures of VTG point coverage and VTG arc coverage.² Coverage of standard variables is implicit in VTG coverage, but this coverage could also be included explicitly as standard specific measures (expressed as a fraction of the total count of values of the variables as in Table 6.2).
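The state-machine entries in Table 6.1 are simple reached/possible fractions. As an illustrative sketch (the data structures and FSM below are invented for the example and do not correspond to any particular tool's output format):

```python
def state_machine_measures(states, arcs, trace):
    """Compute '% states reached' and '% arcs traversed' for one state
    machine, given its possible states, its possible arcs (ordered state
    pairs), and an observed trace of states from regression."""
    reached = set(trace)
    # An arc is traversed each time the trace moves between two states.
    traversed = {(a, b) for a, b in zip(trace, trace[1:]) if a != b}
    return {
        "% states reached": 100.0 * len(reached & set(states)) / len(states),
        "% arcs traversed": 100.0 * len(traversed & set(arcs)) / len(arcs),
    }

# Toy FSM: 4 states, 5 legal arcs; this trace never reaches ERROR.
states = ["IDLE", "BUSY", "DONE", "ERROR"]
arcs = [("IDLE", "BUSY"), ("BUSY", "DONE"), ("DONE", "IDLE"),
        ("BUSY", "ERROR"), ("ERROR", "IDLE")]
trace = ["IDLE", "BUSY", "BUSY", "DONE", "IDLE", "BUSY", "DONE"]
m = state_machine_measures(states, arcs, trace)
print(m)  # 75% of states reached, 60% of arcs traversed
```

Note that dwelling in a state (BUSY followed by BUSY) counts as a visit but not as an arc traversal, consistent with the arc-painting discussion earlier in the chapter.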
To provide a more complete picture, and for eventual comparison with other projects (or to compare IP products), both the complexity and the quantity of regression with CRV are included as standard measures.

² At the time of publication, commercial tools that readily determine VTG coverage are not yet available.

6.14 General Measures for Quadrants II, III, and IV

The sets of measures that provide a broad view of coverage of RTL used in multiple contexts or used to generate multiple instances (or both) are listed in Table 6.2.

Table 6.2. Standard General Measures*

  • Connectivity: % instance, % context; % overall

  • Activation: % power, % clocking, % reset; % overall

  • Condition: % internal direct, % internal indirect, % external direct, % external indirect; % overall

  • Stimulus: % composition, % time, % errors in composition, % errors in time; % overall

  • Response: % composition, % time; % overall

* These entries are the fraction, expressed as a percentage, of the values of the indicated variable that have been exercised.

6.15 Multiple Clock Domains

The measures listed in Tables 6.1 and 6.2 do not quite give a sufficiently clear picture of coverage because clock-domain crossing (CDC) signals aren't sufficiently addressed by any of these measures. Fortunately, the design issues related to synchronizing signals between two clock domains are well understood.

Commercially available tools are now capable of analyzing RTL statically for correct clock-domain crossings, with the exception of handshaking protocols (as discussed in Chapter 4). The use of these tools is routine in many development organizations, and their cost need not be prohibitive when risk abatement is considered. The reports generated for RTL containing one or more clock-domain crossings should be free of any errors or warnings. The results of this analysis with these tools should be included as a matter of course with the standard measures.
Finally, the specific combinations of clock frequencies used during verification should be declared along with the standard measures and the CDC report.

6.16 Views of Coverage

Having collected standard results, it is then possible to obtain standard views of these results. There are many excellent techniques for visualizing the coverage obtained within the regression results, many provided with the commercially available tools. For the purposes of illustrating the concept of visualizing the results to identify coverage holes, consider a couple of examples. In these views one is attempting to judge whether the degree of exercise is sufficiently high such that the corresponding risk of an unexposed bug is sufficiently low to meet verification goals.

Standard views use any of 3 different indicators for how thoroughly the target has been exercised:

  • visits: If a value has been assigned to a variable over the course of regression, then that value is said to have been visited at least once. Similarly, if values have been assigned to 2 different variables, then that pair of values is said to have been visited at least once. The degree of exercise expressed as visits is a binary degree: the function point was visited (at least once) or it was never visited.

  • cycles: The number of cycles during which a given variable was assigned a particular value is indicative of how thoroughly that function point was exercised. Cycles are meaningful for a single clock domain only. A target comprising 2 clock domains needs a separate cycle count for each domain.

  • Q: This is a normalized view of cycles based on the complexity of the clock domain of the target or on the size of its corresponding VTG.
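All three indicators can be derived from the same raw cycles-at-value counts. The sketch below normalizes the cycle count by a complexity figure Z, following the general idea of Q as normalized cycles; the precise definition of complexity is developed elsewhere in the book, so the function signature and the numbers here are invented for illustration.

```python
def exercise_indicators(cycle_counts, complexity_z):
    """Given cycles-at-value counts for one variable in one clock domain,
    derive the three standard indicators: binary visits, raw cycles, and
    Q (cycles normalized by the complexity Z of the clock domain)."""
    return {
        value: {
            "visited": cycles > 0,          # binary: visited at least once
            "cycles": cycles,               # per-clock-domain cycle count
            "Q": cycles / complexity_z,     # normalized view of cycles
        }
        for value, cycles in cycle_counts.items()
    }

# Hypothetical counts for a 4-valued variable and an invented complexity.
counts = {0: 1200, 1: 800, 2: 90, 3: 0}
ind = exercise_indicators(counts, complexity_z=500)
print(ind[3]["visited"])   # False: value 3 is a coverage hole
print(ind[0]["Q"])         # 2.4
```

Because cycles are meaningful only within a single clock domain, a two-domain target would call this once per domain with that domain's own counts and complexity.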
6.16.1 1-dimensional Views

A standard 1-dimensional view of the values attained by a single variable over the course of regression is readily visualized with the use of a histogram that plots the count of occurrences of a value against the range of values of that variable (see Fig. 6.5). An area of unexercised functionality is readily seen in the histogram. If there is any faulty behavior associated with these unexercised values, it will not be exposed during regression.

Fig. 6.5. Example of 1-dimensional view

6.16.2 Pareto Views

A Pareto chart is simply a histogram whose data have been sorted by the y-axis value. This enables identification of functional regions that may need further exercising. Consider the example of composite state, and in particular the number of times that reachable composite states have been entered. Rather than counting the number of cycles that a given composite state has been visited (idling state machines spend a lot of cycles without changing state), the number of times the composite state has been entered from a different state is much more enlightening. This example is illustrated in Fig. 6.6.

Fig. 6.6. Example of Pareto View

This histogram clearly shows 3 composite states that were never entered over the course of pseudo-random regression. If there are any bugs associated with these composite states, they will not be exposed. Similarly, some composite states were entered rather infrequently, and the risk of an unexposed bug associated with these states is much higher than with those that have been entered much more frequently.

Pareto views can provide important information about potential coverage holes when coverage is known not to be complete (exhaustive). This is usually the case for commercial designs. Consider as an example the 8-entry queue analyzed in Chapter 3.
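A Pareto view of composite-state entries can be produced by counting entries (transitions into a state from a different state, rather than cycles spent dwelling there) and sorting. A minimal sketch with made-up state names:

```python
from collections import Counter

def pareto_of_entries(trace):
    """Count how many times each composite state is *entered* (reached
    from a different state), then sort ascending so that rarely-entered
    states appear first, as in a Pareto view."""
    entries = Counter(b for a, b in zip(trace, trace[1:]) if a != b)
    return sorted(entries.items(), key=lambda kv: kv[1])

def never_entered(all_states, trace):
    """Composite states that were never entered during regression."""
    entered = {b for a, b in zip(trace, trace[1:]) if a != b}
    return sorted(set(all_states) - entered)

states = ["S0", "S1", "S2", "S3", "S4"]
trace = ["S0", "S1", "S1", "S0", "S1", "S2", "S0"]
print(pareto_of_entries(trace))       # S2 entered once, S0 and S1 twice
print(never_entered(states, trace))   # ['S3', 'S4']
```

Note that the repeated "S1, S1" dwell contributes nothing to the entry count, which is exactly why entries are more enlightening than cycles for idling state machines.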
Assume that this queue, deeply buried within some complex design, has been exercised exhaustively by a test generator. That is, every VTG arc has been traversed at least once. Why would there be any interest in a Pareto view of visits to the function points of ITEMS_IN_QUEUE? If such a view revealed that, in fact, the full-queue value had been visited relatively infrequently as compared to other values, that's a clue that perhaps other logic elsewhere within the target hasn't been exercised thoroughly enough and might still contain an unexposed bug.

Dynamic behavior must be exercised throughout its range with particular focus on the boundaries (or extremes) of its defined behavior. Is the test generator able to fill and empty the queue? Can it do so frequently? Can it sustain a full or nearly full state for sufficiently long periods of time so that other areas of the target are thoroughly exercised? Without exhaustive coverage, these questions should be answered with suitable views of coverage of variables associated with this dynamic behavior, such as the number of items in a queue or the number of bus accesses per unit time. This will be discussed again later in this chapter in the section about time-based views.

6.16.3 2-dimensional Views

A standard 2-dimensional view of the values attained by 2 chosen variables ("cross" items in the e language) over the course of regression is readily visualized with the use of a scatter diagram with a symbol plotted for every occurrence of a pair of values (see Fig. 6.7) recorded over the course of regression. For this particular pair of variables, a dependency exists between their ranges such that 6 pairs of values cannot occur. These create a vacant region in the functional subspace spanned by these 2 variables. Again, an area of unexercised functionality can be readily seen in the diagram, labeled as a coverage hole.
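The fill/empty/sustain questions posed for the queue can be answered mechanically from a recorded trace of ITEMS_IN_QUEUE. A hypothetical check (the sustain threshold and the trace values are invented for the example):

```python
def queue_exercise_report(depth_trace, capacity, sustain=3):
    """Answer the dynamic-behavior questions for a queue: did the tests
    ever fill it, ever empty it, and ever hold it full (or nearly full)
    for at least `sustain` consecutive samples?"""
    longest_run = run = 0
    for d in depth_trace:
        # Count consecutive samples at full or nearly-full depth.
        run = run + 1 if d >= capacity - 1 else 0
        longest_run = max(longest_run, run)
    return {
        "filled": capacity in depth_trace,
        "emptied": 0 in depth_trace,
        "sustained_full": longest_run >= sustain,
    }

# Depth of an 8-entry queue over successive samples of one test.
trace = [0, 2, 5, 7, 8, 8, 7, 4, 1, 0]
print(queue_exercise_report(trace, capacity=8))
# filled and emptied, with full/nearly-full sustained for 4 samples
```

A regression-wide view would aggregate this report across all tests; a test generator that never produces `sustained_full == True` is the "poor" case this section warns about.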
Using color as a z-axis value, the count of occurrences can be indicated, and lightly covered areas can be determined from such a diagram.

Fig. 6.7. Example of 2-dimensional view

6.16.4 Time-based Views

The temptation to characterize levels of functional coverage with some single figure of merit is great, but no such measures exist yet. Coverage of many types of functionality is often evaluated on a qualitative basis, based on engineering knowledge of how the target is intended to function.

Consider the example of a shared bus in a hypothetical target. It can be asserted that thoroughly exercising a shared bus requires that the verification software be able to drive it now and then to saturation and then to quiescence and, possibly, to other levels as defined by high- and low-water marks. Fig. 6.8 illustrates the results of exercising such a bus both well and poorly. The trace labeled "good" shows that the tests are able to cause accesses at the maximum rate (15 in this example) as well as at the minimum rate (zero in this example). The trace labeled "poor" is not able to saturate the bus and, indeed, does not exercise frequent (more than 8 per sample period) accesses at all.
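A time-based view like Fig. 6.8 can be built by binning bus accesses into fixed sample periods and then checking whether the resulting trace ever reaches saturation and quiescence. The period length, timestamps, and saturation rate below are invented for illustration:

```python
def accesses_per_period(access_times, period, n_periods):
    """Bin bus-access timestamps into fixed-length sample periods,
    yielding an access rate per period (the y-axis of a time-based view)."""
    rates = [0] * n_periods
    for t in access_times:
        i = int(t // period)
        if i < n_periods:
            rates[i] += 1
    return rates

def exercised_extremes(rates, saturation):
    """Did the tests drive the bus both to saturation and to quiescence?"""
    return max(rates) >= saturation and min(rates) == 0

# A "good" trace over 4 periods of 10 time units each: period 1 is
# saturated (15 accesses) and period 3 is quiescent (none).
times = [0, 4, 9] + [10 + 0.5 * k for k in range(15)] + [21, 25]
rates = accesses_per_period(times, period=10, n_periods=4)
print(rates)                                     # [3, 15, 2, 0]
print(exercised_extremes(rates, saturation=15))  # True
```

A "poor" trace, in the same terms, is one for which `exercised_extremes` is False because the binned rates never reach the saturation level.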

Posted: 03/07/2014, 08:20