Standardized Functional Verification – P18

Fig. 6.8. Example of time-based view

Because tools are not yet capable of producing exhaustive coverage in a practical timeframe, it remains necessary and prudent to develop good judgment regarding this sort of dynamic behavior. A time-based view of bus utilization that is largely uniform, or that is not able to reach and sustain extremes of usage, indicates that coverage is poor and that unexposed bugs are likely to remain in the RTL. A time-based view that shows lots of burst behavior, with sustained high and low levels of activity and with both rapid and slow transitions between extremes, indicates that coverage is much, much better and that the risk of unexposed bugs is much lower.

6.17 Standard Views of Functional Coverage

The views described in this chapter are actually quite simple and obvious. It is not the type of views that must be standardized, but rather that which is viewed that merits standardization. In other words, the standard variables and their values must be made available for arbitrary viewing by interested parties, whether they are from upper management or from a prospective customer of IP.

These different views are all quite common and readily available with commercial verification tools. When the standard results are available to such viewers, it is easy and economical to produce the views on demand. Data-mining of the standard results and "refining the ore" with these views can provide useful insights into the level of coverage achieved for the different subspaces of the design.

6.18 Summary

A verification methodology that takes full advantage of standard results will increase productivity in the long run. Everything about the verification of the target can be determined from these standard results. Until 100% functional coverage can be achieved in commercially interesting timeframes, it remains necessary to sample the results. If every view examined indicates thorough exercise with no gaping holes, then the risk of an unexposed functional bug is low. If, on the other hand, one out of ten views reveals a coverage hole, then the target has probably not been exercised thoroughly.

Standard variables, and the measures and views of them, make it possible to have objective comparisons of projects and commercial IP. We will examine this more closely in the next chapter.

References

Keating M, Bricaud P (2002) Reuse Methodology Manual for System-on-a-Chip Designs, Third Edition. Kluwer Academic Publishers.
Piziali A (2004) Functional Verification Coverage Measurement and Analysis. Kluwer Academic Publishers.
Raynaud A (2002) Managing Coverage: A Perspective. Synopsys.
Tufte ER (2001) The Visual Display of Quantitative Information, Second Edition. Graphics Press.
Verisity Design, Inc. (2005) Coverage-Driven Functional Verification. Verisity Design, Inc.

Chapter 7 – Assessing Risk

So far in this book we have developed important new theory that explains what it means to achieve 100% functional coverage. This theory is based on a set of standard variables that account for the full range of variability within which our design must function. Standard results capture the values of these variables for analysis.

If we achieve functional closure, we can tape out without risk of a functional bug. But, until verification tools are available that can drive a system throughout its functional space, traversing every arc and visiting every point, we will have to live with a certain amount of risk.
In this chapter we will learn how to assess the risk of a functional bug based on the results of regression.

7.1 Making Decisions

Risk assessment is performed to facilitate good decision-making. This was first discussed in Chapter 1 and, before diving into the topic of risk assessment, a brief detour into decision-making will help place risk analysis in its proper context.

Conventional risk assessment has been well developed for industries in which safety is a consideration, primarily safety for people. Aircraft, nuclear power plants, and medical implants must be designed not only with functionality in mind, but also with failure in mind. Things wear out with use and must undergo timely maintenance to sustain the required level of safety. Risk assessment for functional verification is somewhat different, but concepts from conventional risk assessment provide insightful background. The interested reader is referred to the detailed treatment in the Fault Tree Handbook by Vesely et al. for a thorough discussion of this subject, but a couple of extended quotations from this publication will provide some useful background.

Fig. 7.1. Decision making based on quantified risk assessment

First, an important reality check:

    It is possible to postulate an imaginary world in which no decisions are made until all the relevant information is assembled. This is a far cry from the everyday world in which decisions are forced on us by time, and not by the degree of completeness of our knowledge. We all have deadlines to meet. Furthermore, because it is generally impossible to have all the relevant data at the time the decisions must be made, we simply cannot know all the consequences of electing to take a particular course of action.

On the difference between good decisions and correct decisions:

    The existence of the time constraint on the decision making process leads us to make a distinction between good decisions and correct decisions. We can classify a decision as good or bad whenever we have the advantage of retrospect. I make a decision to buy 1000 shares of XYZ Corporation. Six months later, I find that the stock has risen 20 points. My original decision can now be classified as good. If, however, the stock has plummeted 20 points in the interim, I would have to conclude that my original decision was bad. Nevertheless, that original decision could very well have been correct if all the information available at the time had indicated a rosy future for XYZ Corporation.

    We are concerned here with making correct decisions. To do this we require:

    1. The identification of that information (or those data) that would be pertinent to the anticipated decision.
    2. A systematic program for the acquisition of this pertinent information.
    3. A rational assessment or analysis of the data so acquired.

In the preceding chapters the standard framework for functional verification has been defined and described in detail. The information pertinent to the anticipated decision appears in the form of values of standard variables collected during the verification process. Collecting this information in such a manner that standard measures and views can be produced constitutes a systematic program for information acquisition. This places us in the position of now being able to form a rational assessment of risk based on analysis of the information.
7.2 Some Background on Risk Assessment

Conventional risk assessment is concerned with understanding the extent of possible loss. In this sense risk is formed of two components: 1) some unlikely event, and 2) some undesirable consequence. Mathematically this is usually expressed as

    risk = p(event) ⋅ c(event),    (7.1)

where p is the probability of the event and c is the consequence of the event.

If one purchases a lottery ticket for $1.00 and the probability of not winning is extremely high, then the risk is pretty nearly the $1.00 paid for the ticket. This example has an easily quantifiable consequence, but many consequences are not so simply described numerically. If one purchases cosmetic surgery for $10,000 but with an associated small risk of death or disability from surgical complications, the risk is considerably more than the original outlay of $10,000.

That leaves the problem of estimating the probability of this undesirable event. Estimating this probability is the concern of the remainder of this chapter.

In spite of the rather exact definition in Eq. 7.1 for risk as used by those concerned with product safety, traditional usage freely interchanges this term with "probability", and this book will honor this tradition. The "risk of a bug" and the "probability of a bug" will be taken to mean the same thing. Nevertheless, the consequences of a given bug will depend on how it might affect the product's success (see remarks about optional or opportunistic functionality in section 4.6).

Risk analysis typically progresses through several steps. The first step is identifying threats. Threats come in many forms, ranging from human (a lead engineer resigns) to operational (corporate re-organization delays progress by N weeks) to procedural (handoff of design data between engineering teams is not well coordinated) to technical (verification tools do not work as advertised) to natural (hurricanes and earthquakes, to name but a few). For the purposes of this book only one threat will be considered, that of an unexposed functional bug.
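As a concrete illustration of Eq. 7.1, the minimal Python sketch below computes risk as expected loss for the two purchase examples above. Only the dollar amounts for the ticket and the surgery come from the text; the probabilities and the dollar value placed on surgical complications are arbitrary placeholders chosen to make the arithmetic visible, not figures from the book.

    # Minimal illustration of Eq. 7.1: risk = p(event) * c(event).
    # Dollar amounts follow the examples in the text; the probabilities and
    # the value assigned to surgical complications are arbitrary placeholders.

    def risk(p_event: float, consequence: float) -> float:
        """Expected loss associated with an undesirable event (Eq. 7.1)."""
        return p_event * consequence

    # Lottery: the $1.00 ticket is almost certainly lost, so the risk is
    # very nearly the full price of the ticket.
    lottery_risk = risk(p_event=0.999999, consequence=1.00)

    # Cosmetic surgery: the $10,000 outlay is certain; a small assumed
    # probability of severe complications adds a much larger consequence.
    surgery_risk = 10_000.00 + risk(p_event=0.001, consequence=2_000_000.00)

    print(f"Lottery risk : ${lottery_risk:,.2f}")   # roughly $1.00
    print(f"Surgery risk : ${surgery_risk:,.2f}")   # considerably more than $10,000

The second calculation makes the point of the text numerically: once a hard-to-price consequence is assigned any large value, the risk dwarfs the original outlay.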
7.3 Successful Functional Verification

What constitutes a successful functional verification project? There are as many definitions of success as there are managers, and there is likely to be wide disagreement among the various definitions. A project that deploys a new set of verification tools without a hitch, or achieves code coverage goals previously unattained, or results in a number of new software inventions, might be considered wildly successful by some managers and dismissed as routine by others.

Failure, on the other hand, is usually much easier to pin down. If a product cannot be shipped due to functional bugs in the product, there will generally be strong agreement that the project to develop the product was a failure.

Vesely provides a handy formalism for clearly articulating risk (see Fig. 7.2).

Fig. 7.2. Success and failure spaces

This figure illustrates the complementary nature of success and failure. There are degrees of success that readily map to corresponding degrees of failure. At the two extremes we have total success and total failure. What lies between these two extremes is what is normally experienced in the real world.

The natural drive for engineers and managers in pursuit of professional excellence is for total success in what they do. However, experience shows that, despite best efforts, the most that can be expected is something a bit less than total success. This point is indicated on the continuum with the designation "maximum anticipated success." The other points on this continuum are likely to be unconsciously familiar to the reader.

In golfing terms, now and then one enjoys the pleasant surprise of hitting that hole-in-one (total success), but the best that one expects on a given hole might be a birdie (maximum anticipated success). At the very least one might expect to break 80 (minimum anticipated success) for 18 holes, but there are days when one just barely breaks 100 (minimum acceptable success). This level of success can be costly in terms of replacing damaged clubs.

Analogies are really not necessary here, however, because experienced verification engineers and managers will readily recognize degrees of success for a given verification project. Consider Fig. 7.3. Clearly, if the product ships with fully functional first silicon, the project can be considered a total success. But what might constitute intermediate levels of success?

Real-world projects are executed within real-world constraints of scope, schedule, and resources. Few corporations can fund a parade of tape-outs without obtaining silicon worthy of shipping as product. Some limit will be placed on the amount of money (or engineers, or simulation engines, etc.) spent on a project before abandoning it if it doesn't result in shippable product.

The example illustrated in Fig. 7.3 uses a simple budgetary constraint to define these intermediate levels of success. Assume that $3,000,000 has been budgeted for purchasing masks for fabricating the IC. Using the expense assumptions shown in Table 1.2, this budget is sufficient to pay for 3 full mask sets (at $1,000,000 each), or 2 full mask sets and 2 metal-only mask sets (at $500,000 each).

Fig. 7.3. Example of budgeting for success

The budgeting manager knows that the first $1,000,000 will be spent for tape-out 1.0. However, experience says that, despite best efforts on the part of everyone on the project, something always seems to require at least one metal turn. This is the maximum anticipated success, and it will cost the manager an additional $500,000 for the metal masks. At this point half of the budget ($1,500,000) will have been spent.

Experience also tells this manager that, at most, a second metal-only tape-out will be needed (based on prior projects). This is the minimum anticipated success, and it costs $2,000,000 to achieve.

The very least acceptable outcome is that a full spin is needed after these two metal-only tape-outs. This constitutes the minimum acceptable success, and it costs everything that has been budgeted for tape-outs: $3,000,000.

Note that this particular progression of tape-outs (from 1.0 to 1.1 to 1.2 to 2.0) is within the budget for mask sets, but the project might progress in other ways, based on decisions made along the way. The four possible progressions within this budget are:

• 1.0, 1.1, 1.2, 2.0 (shown in Fig. 7.3)
• 1.0, 1.1, 2.0, 2.1
• 1.0, 2.0, 2.1, 2.2
• 1.0, 2.0, 3.0

The manager responsible for approval of tape-out expenses will want to consider carefully what will be the best strategy for getting product to market. There is no budget for masks above and beyond what is listed above, so if a 3.1 or 4.0 should become necessary, this would be regarded as failure.
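To make the arithmetic behind Fig. 7.3 explicit, the short sketch below (mine, not the book's) prices each of the four progressions listed above, using the example's figures of $1,000,000 per full mask set and $500,000 per metal-only mask set.

    # Price the four tape-out progressions from the example against the
    # $3,000,000 mask budget of Fig. 7.3.  Costs follow the example:
    # $1,000,000 per full mask set, $500,000 per metal-only mask set.

    FULL_MASK_SET = 1_000_000   # x.0 tape-outs (1.0, 2.0, 3.0)
    METAL_ONLY = 500_000        # x.y tape-outs with y > 0 (1.1, 1.2, 2.1, ...)
    BUDGET = 3_000_000

    def mask_cost(tapeout: str) -> int:
        """A full spin resets the minor number to 0; a metal turn bumps it."""
        return FULL_MASK_SET if tapeout.endswith(".0") else METAL_ONLY

    progressions = [
        ["1.0", "1.1", "1.2", "2.0"],   # the progression shown in Fig. 7.3
        ["1.0", "1.1", "2.0", "2.1"],
        ["1.0", "2.0", "2.1", "2.2"],
        ["1.0", "2.0", "3.0"],
    ]

    for prog in progressions:
        total = sum(mask_cost(t) for t in prog)
        verdict = "within budget" if total <= BUDGET else "over budget"
        print(f"{' -> '.join(prog):26} ${total:>9,} ({verdict})")

Each sequence totals exactly $3,000,000, and appending any further tape-out, whether a metal turn or a full spin, pushes the total past the budget, which is why a 3.1 or 4.0 would signal failure under this constraint.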
Budget is only one of several considerations in setting goals for risk. Of the manager's triad of scope, schedule, and resources, any one of these might be regarded as inflexible while the remaining two are regarded as flexible. In Fig. 7.3 resources are treated as inflexible: when the money is spent, that's the end.[1] Many products, especially seasonal consumer products such as games and toys, have financially unforgiving deadlines, and schedule is inflexible. The budget column might instead be replaced with one showing schedule dates, or perhaps product features (functionality) that can be sacrificed to complete the project.

[1] Startups are particularly vulnerable to this constraint. When the Nth round of financing is denied, the project is pretty much over.

One further observation regarding Fig. 7.3 must be made, and that is that success is defined in terms of the manufactured article being shipped as a product (or as a component of a product). The manager responsible for approving expenditures for a tape-out will not be interested in a functionally correct and complete design that fails to work properly due to other reasons (excessive heat dissipation, for example), and this manager's budget and organizational responsibility will extend well beyond functional verification, one of several factors affecting the ship-worthiness of the manufactured article. These factors include:

• functional
• electrical
• thermal
• mechanical
• process yield

A complete risk analysis must comprise all factors, but only the functional factor lies within the scope of this book.

This same success/failure method could be used with IP vendors on whom the manager's project is dependent. Can your IP vendor meet the terms that are imposed on your own development? How does your vendor view success in its varying degrees? Is there agreement between vendor and consumer? Is their view compatible with your project?

7.4 Knowledge and Risk

Risk can be said to be a consequence of our lack of knowledge about something. The less we know about an undertaking, the greater the risk of an undesirable outcome. Functional verification addresses the risk, or probability, that a bug is present whose manifestations as faulty or undesired behavior limit the usefulness of the device in its intended applications.

Keating says (pp. 153, 156), "The verification goal must be for zero defects ideally achieving 100% confidence in the functionality of the design …" This is the correct goal for verification, but in the non-ideal world where ICs are developed and without true functional closure, risk will never actually reach zero. It approaches, but does not reach, zero.
