7.10 Evaluating Commercial IP

Standard variables allow the integrator to characterize their needs in terms of external (to the IP) connectivity and condition variables and then examine various offerings precisely in terms of those values. Generating standard results such that standard views and measures can be obtained gives IP developers and buyers a solid platform on which to grow so we can all get back to the business of inventing.

Standards bodies will eventually begin to define standard variables and their ranges. Then IP producers will be able to disclose what values or ranges are supported and the degree to which they have been verified. This will make it possible for prospective customers to examine standard results prior to licensing.

Fig. 7.10. Evaluating IP Quality

It is extremely valuable to conduct one's own risk assessment based on standard results provided by the vendors under consideration. And, if there are particular areas of concern, data-mining these results in greater detail will enable one to form a better assessment of risk for each vendor's IP.

If the IP is being acquired for broad deployment in a variety of applications, then the following measures and views are of interest. If the IP is being acquired for a single use, then the views and measures of unused connectivity are less relevant, except as an indication of the overall thoroughness of the verification of the IP.

Of course, there are considerations in the buying decision other than verification, such as gate count, price, and so forth, but here we will restrict our attention to quality of verification.

Chapter 7 – Assessing Risk

7.11 Evaluating IP for a Single Application

The evaluation of IP products from multiple producers is greatly simplified when standardized verification is in effect. If the IP is to be used in a single application, this corresponds to quadrant I in Fig. 6.4. In this case the specific measures listed in Table 6.1 apply.
Comparing one vendor's measures against another's (or against one's own internal measures) provides a valid indication of the level of risk (probability of a functional bug) entailed in using the product.

7.12 Nearest Neighbor Analysis

If a vendor's standard results do not contain any data that match the intended usage, then an analysis of nearest neighbors may provide some indication of risk.

A formal approach to comparing IP (on the basis of verification, that is) for a particular use considers the accumulated relevance of the standard results to those for your intended usage. This formal approach may prove to be of limited practical value, but it illustrates how bugs can remain hidden even in "pre-verified" or "silicon-proven" IP. We begin with some useful definitions from coding theory:

• Hamming weight: the number of positions in a vector with non-zero values. From the standpoint of verification, we will understand this definition to be the number of positions in a vector with non-null values. If a value has not been assigned to a variable in a given subspace, the position corresponding to that variable will have the null value.

• Hamming distance: the number of positions in which two vectors differ in value. The Hamming weight of a vector is also its distance from the null vector.

First, consider the particular instance of the IP that corresponds to your needs. Does your instance appear in the standard results? If so, then some or all of the standard results relate directly to the defined instance, because this exact instance produced some or all of the results. Formally, we can state that the relevance r_i = 1. In addition, if only some of the results were generated from your instance, this must also be noted. Call the count of relevant cycles n_i. Of course, if the instance comprises multiple clock domains, then this analysis must be performed separately for each clock domain.
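These two definitions can be sketched directly in code, treating Python's None as the null value for an unassigned variable. The variable vectors below (port count, port speed, and so on) are hypothetical examples, not values taken from any real standard:

```python
# Hamming weight and distance over vectors of standard-variable values.
# None stands in for the null value (variable not assigned in a subspace).

def hamming_weight(v):
    """Number of positions in a vector with non-null values."""
    return sum(1 for x in v if x is not None)

def hamming_distance(a, b):
    """Number of positions in which two equal-length vectors differ."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    return sum(1 for x, y in zip(a, b) if x != y)

# A vendor-verified configuration vs. our intended instance
# (hypothetical variables: ports, port speed, buffer depth, endianness).
verified = ("4_ports", "HS", "deep", None)
intended = ("4_ports", "FS", "deep", "little")

print(hamming_weight(verified))              # 3 non-null positions
print(hamming_distance(verified, intended))  # differs in 2 positions
```

Note that, as the text observes, a vector's Hamming weight equals its Hamming distance from the all-null vector.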
For the purposes of the present discussion, assume that the instance contains only one clock domain.

If your instance does not appear in the standard results, then determine the nearest neighbor within the subspace of internal connectivity and compute the Hamming distance d_i between your instance and this nearest neighbor. This distance is simply the number of positions in which the nearest neighbor differs from your instance. Formally, we could say that for this neighbor's results the relevance

r_i = 1 − (d_i / p_i),

where p_i is the number of positions in the vector of variables of internal connectivity.

However, not all variables are created equal. Some variables have much greater influence on the behavior of a digital system than others. Consider the variables of connectivity for a USB host controller. A variable corresponding to the number of ports for the controller will not affect the behavior of the logic quite as much as a variable corresponding to the speed (HS vs. FS vs. LS) of a given port.

A weighting factor for each variable could help give a better indication of the net relevance. In this case the resulting weighted distance d_i is the dot product of the difference vector (with a value of one for each position in which the two vectors differ and a zero for each position in which they are the same) with the weighting vector.²

² Existing standards bodies would bear the burden of assigning industry-standard verification weights to standard variables.

So, given a specific instance, we can determine the relevance r_i of the standard results to this instance as well as the number of cycles n_i in the standard results that are produced by this instance (or some chosen nearest neighbor).

Then, we continue the analysis by considering the context presented to the instance by our intended usage. If the results "contain" this context (i.e., some or all of the results were produced by the chosen instance or its neighbor in the chosen context), then the relevance r_k = 1 (we use k to represent context so that we can later use c to represent conditions). Otherwise, we find a neighboring context, compute its (weighted) distance from the chosen context, and compute the resulting relevance. Then we can compute the relevance of the standard results to our instance/context pair as

r = r_i · r_k.    (7.3)

We also note the number of cycles n_k, which, it will be observed, is less than or equal to the number of cycles n_i.

This analysis is then continued to consider the internal conditions and external conditions until we arrive at a final value for the relevance of the standard results, as well as the number of cycles in the standard results associated with our intended use of the IP. Unless the match is exact, the relevance will be less (perhaps much less) than 1.

In practice this analysis will most likely be a bit more complex than our example, especially when dependent variables enter the picture. For example, given an instance of a USB host controller, for each port accommodated by the controller a new set of variables springs into existence (such as a variable for the speed of the port). Computing the relevance can become a bit complicated as these various dependencies are handled.

The general analysis proceeds as shown in the following table.

Table 7.4. Relevance of vendor's standard results

subspace                  relevance   cycles                      net relevance        net relevant cycles
instance i                r_i         n_i                         r_i                  n_i
context k                 r_k         n_k ≤ n_i                   r_i·r_k              n_k
internal conditions ci    r_ci        n_ci ≤ n_k ≤ n_i            r_i·r_k·r_ci         n_ci
external conditions ce    r_ce        n_ce ≤ n_ci ≤ n_k ≤ n_i     r_i·r_k·r_ci·r_ce    n_ce

The preceding table does not account for variables of activation, but a similar analysis can be conducted to determine how relevant the activation testing is to the IP's intended usage.
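The accumulation in Table 7.4 can be sketched as follows. Net relevance is the running product of the per-subspace relevances, and the count of relevant cycles can only shrink at each level. All of the relevance values, cycle counts, and weights below are made-up illustrations, not data from any vendor:

```python
# Sketch of the Table 7.4 accumulation and of the weighted distance
# described in Sect. 7.12. Every number here is illustrative only.

def weighted_distance(a, b, w):
    """Dot product of the 0/1 difference vector with a weighting vector."""
    return sum(wi for x, y, wi in zip(a, b, w) if x != y)

def relevance(d, p):
    """r = 1 - d/p for a subspace whose vectors have p positions."""
    return 1.0 - d / p

# Weighted-distance example: the neighbor differs only in port speed,
# which we (arbitrarily) weight more heavily than port count.
d_i = weighted_distance(("4_ports", "HS"), ("4_ports", "FS"), (0.3, 1.0))
print(relevance(d_i, 2))             # 1 - 1.0/2 = 0.5

# Accumulate net relevance and relevant cycles down the table's rows:
# (subspace, relevance of nearest match, cycles produced at that level)
rows = [("instance i",             0.5,  500_000),
        ("context k",              0.9,  200_000),
        ("internal conditions ci", 0.75,  80_000),
        ("external conditions ce", 1.0,   80_000)]

net_r = 1.0
net_cycles = float("inf")
for name, r, n in rows:
    net_r *= r                       # net relevance: product of relevances
    net_cycles = min(net_cycles, n)  # relevant cycles can only decrease
    print(f"{name:24s} net relevance {net_r:.4f}  cycles {net_cycles}")
```

Even with fairly close neighbors at every level, the net relevance here falls to about a third, which is the chapter's point: small mismatches compound quickly.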
The table is largely intended to illustrate the relevance of the tests (sequences of stimuli applied to the instance in its logical context, i.e., the excitation).

The point of this analysis is not so much to compute a value for the net relevance and, correspondingly, the risk of an undiscovered bug. The point of this analysis is to realize just how quickly uncertainty and risk can accumulate when the many, many differences between instance, context, and conditions are scrutinized. Even "silicon-proven" IP can still contain bugs.

7.13 Summary

Standardized functional verification greatly simplifies the challenging task of developing bug-free digital systems. It provides a single framework (a linear algebra) for describing and comparing any digital system, from a single IC to a system containing multiple ICs. We started with one basic premise: From the standpoint of functional verification, the functionality of a digital system, including both the target of verification as well as its surrounding environment, is characterized completely by the variability inherent in the definition of the system.

The fundamental relations among the many values of variables in the functional space are defined as follows.

• Instantiation of a target in a context (i.e., a system) chooses a particular function point in the subspace of connectivity.

• Activation of a system is a particular functional trajectory from inactive to active through the subspace of activation, arriving at some (usually) persistent region of the subspace within which all subsequent activity takes place. Deactivation is a trajectory from active to inactive.

• Initialization of a system is a particular functional trajectory through the subspace of condition, also arriving at some (usually) persistent region of the subspace within which all subsequent activity takes place. Excitation drives the functional trajectory of the target's responses.
Some day it will be commonplace to write specifications for digital designs in terms of standard variables. Verification tools will make it easy to view and manipulate VTGs usefully for coverage analysis and risk assessment.

The set F of function points p, where p is a tuple of values of standard variables, is simply

F = {p | p ∈ (r(n), e(n), s(n), c(n), a(n), k(n))}.    (3.7)

The set of directed arcs connecting time-variant values is

A = {(p(n), p(n+1)) | p(n) ∈ F, p(n+1) ∈ F, and r(n+1) = f(r(n), e(n), s(n), c(n), a(n), k(n))}.    (3.8)

Faulty behavior is defined as

y(n) ∉ R, or (y(n−1), y(n)) ∉ A, or both.    (3.9)

The actual size S of the functional space of a given clock domain of a given instance in a given context is defined as

S = E(G).    (3.19)

The risk p of an unexposed functional bug for a target with D clock domains is

p_T = 1 − ((1 − p_1) · (1 − p_2) · … · (1 − p_D)).    (7.2)

Now we can finally talk about a cumulative probability density function like Fig. 7.11.

Fig. 7.11. Functional Closure and Risk

There remains much to be done by way of researching these new concepts more deeply. Some will be found to be of great value, and others are likely to be deserted over time as richer veins of theory are mined to our industry's advantage, especially those that give us the ability to achieve functional coverage for every tape-out.

Appendix – Functional Space of a Queue

This analysis applies to a queue of 8 entries with programmable high- and low-water marks and an internal indirect condition that gives priority to draining the queue.
When the number of entries is equal to the high-water mark, the drain priority is set to 1 and remains there until the number of entries is equal to the low-water mark, when the drain priority is reset to 0.

A.1 Basic 8-entry Queue

We begin by examining the operation of a simple 8-entry queue without high- and low-water marks to clarify where we are beginning our examination. The response variable ITEMS_IN_QUEUE has a range of [0 8]. That is, the queue can be empty or contain anywhere between 1 and 8 items. From any value except 8 an item can be added to the queue. Once the queue has 8 items, nothing more can be added. Likewise, from any value except 0 an item can be removed from the queue. At any point in time (that is to say, at any clock cycle) an item can be added to the queue (if it is not already full), an item can be removed from the queue (if it is not already empty), or nothing can happen (nothing is either added or removed).

Fig. A.1a illustrates the functional space of this queue in terms of the changes in value of the response variable ITEMS_IN_QUEUE. The value of this variable is shown within the ovals representing the function points of this space. This queue has 9 function points. It has 8 arcs representing the addition of an item to the queue, 8 arcs for the removal of an item from the queue, and 9 hold arcs for when nothing is added or removed. Therefore, this queue has a total of 25 arcs connecting the 9 function points in its functional space.

Of these 9 function points we can observe that 7 values are condensable into the condensed function point [1 7], because every adjacent point in this range has congruent arrival and departure arcs. Fig. A.1b illustrates the condensed functional space of this 8-entry queue. It has only 3 function points and only 7 arcs connecting them. For this particular target, condensation greatly simplifies the functional space.

Fig. A.1a. Functional space of an 8-entry queue

Fig. A.1b. Condensed functional space of an 8-entry queue
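The appendix's counts can be checked with a short sketch that enumerates the function points (values of ITEMS_IN_QUEUE) and the add/remove/hold arcs, plus the drain-priority rule described above. The function name and the particular watermark values (6 and 2) are our own illustrative choices, since the marks are programmable:

```python
# Enumerate the functional space of a basic 8-entry queue: function
# points are values of ITEMS_IN_QUEUE, arcs are the legal transitions.
CAPACITY = 8

points = list(range(CAPACITY + 1))   # 9 function points: 0..8 items

arcs = []
for n in points:
    arcs.append((n, n))              # hold arc: nothing added or removed
    if n < CAPACITY:
        arcs.append((n, n + 1))      # add an item (queue not full)
    if n > 0:
        arcs.append((n, n - 1))      # remove an item (queue not empty)

print(len(points), len(arcs))        # 9 function points, 25 arcs

# Drain-priority update per the appendix: set at the high-water mark,
# cleared at the low-water mark, otherwise held. Marks are illustrative.
def next_drain_priority(items, priority, high=6, low=2):
    if items == high:
        return 1
    if items == low:
        return 0
    return priority
```

This reproduces the 9 function points and 25 arcs (8 adds, 8 removes, 9 holds) cited for Fig. A.1a; condensing [1 7] into a single point would leave the 3 points and 7 arcs of Fig. A.1b.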