Reliability of Computer Systems and Networks, Part 5 (docx)



DOCUMENT INFORMATION

Pages: 81
Size: 439.1 KB

Contents

5 SOFTWARE RELIABILITY AND RECOVERY TECHNIQUES

Reliability of Computer Systems and Networks: Fault Tolerance, Analysis, and Design. Martin L. Shooman. Copyright © 2002 John Wiley & Sons, Inc. ISBNs: 0-471-29342-3 (Hardback); 0-471-22460-X (Electronic).

5.1 INTRODUCTION

The general approach in this book is to treat reliability as a system problem and to decompose the system into a hierarchy of related subsystems or components. The reliability of the entire system is related to the reliability of the components by some sort of structure function in which the components may fail independently or in a dependent manner. The discussion that follows will make it abundantly clear that software is a major "component" of the system reliability,¹ R. The reason that a separate chapter is devoted to software reliability is that the probabilistic models used for software differ from those used for hardware; moreover, hardware and software (and human) reliability can be combined only at a very high system level. (Section 5.8.5 discusses a macro-software reliability model that allows hardware and software to be combined at a lower level.) Specifically, if the hardware, software, and human failures are independent (often, this is not the case), one can express the system reliability, R_SY, as the product of the hardware reliability, R_H, the software reliability, R_S, and the human operator reliability, R_O. Thus, if independence holds, one can model the reliability of the various factors separately and combine them: R_SY = R_H × R_S × R_O [Shooman, 1983, pp. 351–353].

¹ Another important "component" of system reliability is human reliability, if an operator is involved in any control, monitoring, input, or similar task. A discussion of human reliability models is beyond the scope of this book; the reader is referred to Dougherty and Fragola [1988].
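As a minimal numerical sketch of this product model (the component values below are hypothetical, chosen only to illustrate the combination; they do not come from the book):

```python
def system_reliability(r_h: float, r_s: float, r_o: float) -> float:
    """R_SY = R_H * R_S * R_O, valid only under the independence assumption."""
    return r_h * r_s * r_o

# Hypothetical hardware, software, and operator reliabilities for one mission.
print(system_reliability(r_h=0.999, r_s=0.98, r_o=0.995))  # ~0.974
```

Note how the weakest factor (here the software) dominates the product.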
This chapter will develop models that can be used for the software reliability. These models are built upon the principles of continuous random variables developed in Appendix A, Sections A6 and A7, and Appendix B, Section B3; the reader may wish to review these concepts while reading this chapter.

Clearly, every system that involves a digital computer also includes a significant amount of software used to control system operation. It is hard to think of a modern business system, such as that used for information, transportation, communication, or government, that is not heavily computer-dependent. The microelectronics revolution has produced microprocessors and memory chips that are so cheap and powerful that they can be included in many commercial products. For example, a 1999 luxury car model contained 20–40 microprocessors (depending on which options were installed), and several models used local area networks to channel the data between sensors, microprocessors, displays, and target devices [New York Times, August 27, 1998]. Consumer products such as telephones, washing machines, and microwave ovens use a huge number of embedded microcomponents. In 1997, 100 million microprocessors were sold, but this was eclipsed by the sale of 4.6 billion embedded microcomponents. Associated with each microprocessor or microcomponent is memory, a set of instructions, and a set of programs [Pollack, 1999].

5.1.1 Definition of Software Reliability

One can define software engineering as the body of engineering and management technologies used to develop quality, cost-effective, schedule-meeting software. Software reliability measurement and estimation is one such technology, and it can be defined as the measurement and prediction of the probability that the software will perform its intended function (according to specifications) without error for a given period of time. Oftentimes, the design, programming, and testing techniques that contribute to high software reliability are included; however, we consider these techniques as part of the design process for the development of reliable software. Software reliability complements reliable software; both, in fact, are important topics within the discipline of software engineering. Software recovery is a set of fail-safe design techniques for ensuring that if some serious error should crash the program, the computer will automatically recover to reinitialize and restart its program. The software recovery succeeds if no crucial data is lost and no operational calamity occurs; the recovery transforms a total failure into a benign or, at most, a troubling but nonfatal "hiccup."

5.1.2 Probabilistic Nature of Software Reliability

On first consideration, it seems that the outcome of a computer program is a deterministic rather than a probabilistic event. Thus one might say that the output of a computer program is not a random result. In defining the concept of a random variable, Cramer [Chapter 13, 1991] talks about spinning a coin as an experiment and the outcome (heads or tails) as the event. If we can control all aspects of the spinning and repeat it each time, the result will always be the same; however, such control needs to be so precise that it is practically impossible to repeat the experiment in an identical manner. Thus the event (heads or tails) is a random variable. The remainder of this section develops a similar argument for software reliability, where the random element in the software is the changing set of inputs.

Our discussion of the probabilistic nature of software begins with an example. Suppose that we write a computer program to solve for the roots r1 and r2 of a quadratic equation, Ax² + Bx + C = 0. If we enter the values 1, 5, and 6 for A, B, and C, respectively, the roots will be r1 = −2 and r2 = −3. A single test of the software with these inputs confirms the expected results. Exact repetition of this experiment with the same values of A, B, and C will always yield the same results, r1 = −2 and r2 = −3, unless there is a hardware failure or an operating system problem. Thus, in the case of this computer program, we have defined a deterministic experiment. No matter how many times we repeat the computation with the same values of A, B, and C, we obtain the same result (assuming we exclude outside influences such as power failures, hardware problems, or operating system crashes unrelated to the present program). Of course, the real problem here is that after the first computation of r1 = −2 and r2 = −3, we do no useful work by repeating the same identical computation. To do useful work, we must vary the values of A, B, and C and compute the roots for other input values.
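The determinism of a fixed-input run is easy to demonstrate. Below is a minimal, hypothetical implementation of the roots program (the book shows no code); repeated execution with A = 1, B = 5, C = 6 prints the same roots every time:

```python
import math

def quadratic_roots(a: float, b: float, c: float) -> tuple[float, float]:
    # Naive version: assumes a != 0 and real roots, as in the text's first test.
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# Repeating the same inputs always yields the same answer: a deterministic experiment.
for _ in range(3):
    print(quadratic_roots(1, 5, 6))  # (-2.0, -3.0) every time
```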
Thus the probabilistic nature of the experiment, that is, the correctness of the values obtained from the program for r1 and r2, depends on the input values of A, B, and C in addition to the correctness of the computer program for this particular set of inputs.

The reader can readily appreciate that when we vary the values of A, B, and C over the range of possible values, either during test or operation, we would soon see whether the software developer had achieved an error-free program. For example, was the developer wise enough to treat the problem of imaginary roots? Did the developer use the quadratic formula to solve for the roots? How, then, was the case of A = 0 treated, where there is only one root and the quadratic formula "blows up" (i.e., leads to an exponential overflow error)? Clearly, we should test for all these values during development to ensure that there are no residual errors in the program, regardless of the input value. This leads to the concept of exhaustive testing, which is always infeasible in a practical problem.

Suppose in the quadratic equation example that the values of A, B, and C were restricted to integers between +1,000 and −1,000. There would then be 2,000 values of A and a like number of values of B and C. The possible input space for A, B, and C would therefore be (2,000)³ = 8 billion values.² Suppose that we solve for each pair of roots, substitute them into the original equation as a check, and print out a result only if the substituted roots fail to yield zero. If we could process 1,000 values per minute, the exhaustive test would require 8 million minutes, which is 5,556 days or 15.2 years. This is hardly a feasible procedure, and any such computation for a practical problem involves a much larger test space and a more difficult checking procedure; it is impossible in any practical sense. In the quadratic equation example, there was a ready means of checking the answers by substitution into the equation; however, if the purpose of the program is to calculate satellite orbits, and if 1 million combinations of input parameters are possible, then a person (or persons) or a computer must independently obtain the 1 million right answers and check them all! Thus the probabilistic nature of software reliability is based on the varying values of the input, the huge number of input cases, the initial system states, and the impossibility of exhaustive testing.

² In a real-time system, each set of input values enters when the computer is in a different "initial state," and all the initial states must also be considered. Suppose that a program is designed to sum the values of the inputs for a given period of time, print the sum, and reset. If there is a high partial sum and a set of inputs with large values occurs, overflow may be encountered. If the partial sum were smaller, this same set of inputs would cause no problems. Thus, in the general case, one must consider the input space to include all the various combinations of inputs and states of the system.
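A short script confirms the exhaustive-testing arithmetic above (all values are exactly those given in the text):

```python
input_space = 2_000 ** 3                  # (2,000)^3 possible (A, B, C) triples
minutes = input_space / 1_000             # at 1,000 value sets checked per minute
print(f"{input_space:,} cases, {minutes:,.0f} min "
      f"= {minutes / (60 * 24):,.0f} days = {minutes / (60 * 24 * 365):.1f} years")
# 8,000,000,000 cases, 8,000,000 min = 5,556 days = 15.2 years
```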
The basis for software reliability is quite different from the most common causes of hardware failures. Software development is quite different from hardware development, and the source of software errors (random discovery of latent design and coding defects) differs from the source of most hardware errors (equipment failures). Of course, some complex hardware does have latent design and assembly defects, but the dominant mode of hardware failure is equipment failure. Mechanical hardware can jam, break, and become worn out, and electrical hardware can burn out, leaving a short or open circuit or some other mode of failure. Many who criticize probabilistic modeling of software complain that instructions do not wear out. Although this is a true statement, the random discovery of latent software defects is just as damaging as equipment failure, even though it constitutes a different mode of failure.

The development of models for software reliability in this chapter begins with a study of the software development process in Section 5.3 and continues with the formulation of probabilistic models in Section 5.4.

5.2 THE MAGNITUDE OF THE PROBLEM

Modeling, predicting, and measuring software reliability is an important quantitative approach to achieving high-quality software and growth in reliability as a project progresses. It is an important management and engineering design metric; most software errors are at least troublesome, and some are very serious, so the major flaws, once detected, must be removed by localization, redesign, and retest.

The seriousness and cost of fixing some software problems can be appreciated if we examine the Year 2000 problem (Y2K). The largely overrated fears occurred because during the early days of the computer revolution, in the 1960s and 1970s, computer memory was so expensive that programmers used many tricks and shortcuts to save a little here and there to make their programs operate with smaller memory sizes. In 1965, magnetic-core computer memory was expensive, at about $1 per word, and used a significant operating current. (Presently, microelectronic memory sells for perhaps $1 per megabyte and draws only a small amount of current; assuming a 16-bit word, this cost has therefore been reduced by a factor of about 500,000!) To save memory, programmers reserved only 2 digits to represent the last 2 digits of the year. They did not anticipate that any of their programs would survive for more than 5–10 years; moreover, they did not contemplate the problem that for the year 2000, the digits "00" could instead represent the year 1900 in the software. The simplest solution was to replace the 2-digit year field with a 4-digit one. The problem was the vast amount of time required not only to search for the numerous instances in which the year was used as input or output data or used in intermediate calculations in existing software, but also to test that the changes had been successful and had not introduced any new errors. This problem was further exacerbated because many of these older software programs were poorly documented, and in many cases they had been translated from one version to another or from one language to another so they could be used in modern computers without the need to be rewritten. Although only minor problems occurred at the start of the new century, hundreds of millions of dollars had been expended to make changes that would have been trivial if the software programs had been originally designed to prevent the Y2K problem.
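The bug itself and the cheaper of the two standard repairs are easy to sketch (hypothetical code; the pivot value 70 is a common windowing convention, not something specified in the text):

```python
def naive_year(yy: int) -> int:
    # 1960s-style two-digit storage with a hard-wired century: "00" -> 1900.
    return 1900 + yy

def windowed_year(yy: int, pivot: int = 70) -> int:
    # "Windowing" repair: reinterpret low two-digit years as 20xx without
    # widening the stored field; the full fix is a 4-digit year field.
    return (2000 + yy) if yy < pivot else (1900 + yy)

print(naive_year(0), windowed_year(0), windowed_year(85))  # 1900 2000 1985
```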
Sometimes, however, efforts to avert Y2K software problems created problems themselves. One such case was that of the 7-Eleven convenience store chain. On January 1, 2001, the point-of-sale system used in the 7-Eleven stores read the year "2001" as "1901," which caused it to reject credit cards if they were used for automatic purchases (manual credit card purchases, in addition to cash and check purchases, were not affected). The problem was attributed to the system's software, even though it had been designed for the 5,200-store chain to be Y2K-compliant, had been subjected to 10,000 tests, and had worked fine during 2000. (The chain spent 8.8 million dollars, 0.1% of annual sales, on Y2K preparation from 1999 to 2000.) Fortunately, the bug was fixed within 1 day [The Associated Press, January 4, 2001].

Another case was that of Norway's national railway system. On the morning of December 31, 2000, none of the 16 new airport-express trains and 13 high-speed signature trains would start. Although the computer software had been checked thoroughly before the start of 2000, it still failed to recognize the correct date. The software was reset to read December 1, 2000, to give the German maker of the new trains 30 days to correct the problem. None of the older trains were affected by the problem [New York Times, January 3, 2001].

Before we leave the obvious aspects of the Y2K problem, we should consider how deeply entrenched some of these problems were in legacy software: old programs that are used in their original form or rejuvenated for extended use. Analysts found that some of the old IBM 9020 computers used in outmoded components of air traffic control systems contain an algorithm in their microcode for switching between the two redundant cooling pumps each month to even out the wear. (For a discussion of cooling pumps in typical IBM computers, see Siewiorek [1992, pp. 493, 504].) Nobody seemed to know how this calendar-sensitive algorithm would behave in the year 2000! The engineers and programmers who wrote the microcode for the 9020s had retired before 2000, and the obvious answer, replacing the 9020s with modern computers, proceeded slowly because of the cost. Although no major problems occurred, the scare did bring to the attention of many managers the potential problems associated with the use of legacy software.
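The 9020 pump-rotation microcode is not published; the fragment below is a purely hypothetical reconstruction of the kind of calendar-sensitive logic described, included only to show why a date rollover made its behavior hard to predict:

```python
def active_pump(two_digit_year: int, month: int) -> int:
    """Alternate monthly between cooling pump 0 and pump 1 to even out wear.
    Hypothetical logic; the baseline year 61 is an invented constant."""
    months_elapsed = (two_digit_year - 61) * 12 + (month - 1)
    return months_elapsed % 2

print(active_pump(99, 12))  # December 1999: 467 months elapsed -> pump 1
print(active_pump(0, 1))    # January "00": the elapsed count goes negative; how
                            # the real microcode's modulo handled that was unknown
```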
Software development is a lengthy, complex process, and before the focus of this chapter shifts to model building, the development process must be studied.

5.3 SOFTWARE DEVELOPMENT LIFE CYCLE

Our goal is to make a probabilistic model for software, and the first step in any modeling is to understand the process [Boehm, 2000; Brooks, 1995; Pfleeger, 1998; Schach, 1999; and Shooman, 1983]. A good approach to the study of the software development process is to define and discuss the various phases of the software development life cycle. A common partitioning of these phases is shown in Table 5.1. The life cycle phases given in this table apply directly to the technique of program design known as structured procedural programming (SPP). In general, they also apply, with some modification, to the newer approach known as object-oriented programming (OOP). The details of OOP, including the popular design diagrams used for OOP that constitute the unified modeling language (UML), are beyond the scope of this chapter; the reader is referred to the following references for more information: [Booch, 1999; Fowler, 1999; Pfleeger, 1998; Pooley, 1999; Pressman, 1997; and Schach, 1999]. The remainder of this section focuses on the SPP design technique.

5.3.1 Beginning and End

The beginning and end of the software development life cycle are the start of the project and the discard of the software. The start of a project is generally driven by some event; for example, the head of the Federal Aviation Administration (FAA) or of some congressional committee decides that the United States needs a new air traffic control system, or the director of marketing in a company proposes to a management committee that to keep the company's competitive edge, it must develop a new database system. Sometimes, a project starts with a written needs document, which could be an internal memorandum, a long-range plan, or a study of needed improvements in a particular field. The necessity is sometimes a business expansion or evolution; for example, a company buys a new subsidiary business and finds that its old payroll program will not support the new conglomeration, requiring an updated payroll program. The needs document generally specifies why new software is needed.

TABLE 5.1 Project Phases for the Software Development Life Cycle

Start of project: Initial decision or motivation for the project, including overall system parameters.
Needs: A study and statement of the need for the software and what it should accomplish.
Requirements: Algorithms or functions that must be performed, including functional parameters.
Specifications: Details of how the tasks and functions are to be performed.
Design of prototype: Construction of a prototype, including coding and testing.
Prototype: system test: Evaluation by both the developer and the customer of how well the prototype design meets the requirements.
Revision of specifications: Prototype system tests and other information may reveal needed changes.
Final design: Design changes in the prototype software in response to discovered deviations from the original specifications or the revised specifications, and changes to improve performance and reliability.
Code final design: The final implementation of the design.
Unit test: Each major unit (module) of the code is individually tested.
Integration test: Each module is successively inserted into the pretested control structure, and the composite is tested.
System test: Once all (or most) of the units have been integrated, the system operation is tested.
Acceptance test: The customer designs and witnesses a test of the system to see if it meets the requirements.
Field deployment: The software is placed into operational use.
Field maintenance: Errors found during operation must be fixed.
Redesign of the system: A new contract is negotiated after a number of years of operation to include changes and additional features. The aforementioned phases are repeated.
Software discard: Eventually, the software is no longer updated or corrected but discarded, perhaps to be replaced by new software.

Generally, old software is discarded once new, improved software is available. However, if one branch of an organization decides to buy new software and another branch wishes to continue with its present version, it may be difficult to define the end of the software's usage. Oftentimes, the discarding takes place many years beyond what was originally envisioned when the software was developed or purchased. (In many ways, this is why there was a Y2K problem: too few people ever thought that their software would last to the year 2000.)
5.3.2 Requirements

The project formally begins with the drafting of a requirements document for the system in response to the needs document or an equivalent document. Initially, the requirements constitute high-level system requirements encompassing both the hardware and the software. In a large project, as the requirements document "matures," it is expanded into separate hardware and software requirements; the requirements specify what needs to be done. For an air traffic control (ATC) system, the requirements would deal with the ATC centers that must be served, the present and expected future volume of traffic, the mix of aircraft, the types of radar and displays used, and the interfaces to other ATC centers and to the aircraft. Present travel patterns, expected growth, and expected changes in aircraft, airport, and airline operational characteristics would also be reflected in the requirements.

5.3.3 Specifications

The project specifications start with the requirements and detail how the software is to be designed to satisfy these requirements. Continuing with our air traffic control system example, there would be a hardware specifications document dealing with (a) what type of radar is used; (b) the kinds of displays and display computers that are used; (c) the distributed computers or microprocessors and memory systems; (d) the communications equipment; (e) the power supplies; and (f) any networks that are needed for the project. The software specifications document will delineate (a) what tracking algorithm to use; (b) how the display information for the aircraft will be handled; (c) how the system will calculate any potential collisions; (d) how the information will be displayed; and (e) how the air traffic controller will interact with both the system and the pilots. Also, the exact nature of any required records of a technical, managerial, or legal nature will be specified in detail, including how they will be computed and archived. Particular projects often use names different from requirements and specifications (e.g., system requirements versus software specifications, and high-level versus detailed specifications), but their content is essentially the same. A combined hardware–software specification might be used on a small project.

It is always a difficult task to define when requirements give way to specifications, and in the practical world some specifications are mixed into the requirements document while some sections of the specifications document actually read like requirements. In any event, it is important that the why, the what, and the how of the project be spelled out in a set of documents. The completeness of the set of documents is more important than exactly how the various ideas are partitioned between requirements and specifications.

Several researchers have outlined or developed experimental systems that use a formal language to write the specifications. Doing so introduces a formalism and precision that is often lacking in specifications. Furthermore, since the formal specification language would have a grammar, one could build an automated specification checker. With some additional work, one could also develop a simulator that would in some way synthetically execute the specifications. Doing so would be very helpful in many ways for uncovering missing specifications, incomplete specifications, and conflicting specifications. Moreover, in a very simple way, it would serve as a preliminary execution of the software. Unfortunately, however, such projects are only in the experimental or prototype stages [Wing, 1990].
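No such production system is described in the text, but a toy version conveys the idea. The one-line grammar and the checking rules below are invented for illustration:

```python
import re

# Toy specification grammar: "<function> shall complete within <N> ms"
SPEC = re.compile(r"^(?P<func>\w+) shall complete within (?P<ms>\d+) ms$")

def check_specs(lines: list[str]) -> list[str]:
    """Mechanically flag unparseable and mutually conflicting specifications."""
    problems, bounds = [], {}
    for line in lines:
        m = SPEC.match(line)
        if m is None:
            problems.append(f"unparseable or incomplete: {line!r}")
        elif bounds.setdefault(m["func"], m["ms"]) != m["ms"]:
            problems.append(f"conflict on {m['func']}: {bounds[m['func']]} ms vs {m['ms']} ms")
    return problems

print(check_specs(["track_update shall complete within 500 ms",
                   "track_update shall complete within 200 ms",  # conflict
                   "collision_check shall run often"]))          # unparseable
```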
5.3.4 Prototypes

Most innovative projects now begin with a prototype or rapid-prototype phase. The purpose of the prototype is multifaceted: developers have an opportunity to try out their design ideas, the difficult parts of the project become rapidly apparent, and there is an early (imperfect) working model that can be shown to the customer to help identify errors of omission and commission in the requirements and specification documents. In constructing the prototype, an initial control structure (the main program coordinating all the parts) is written and tested along with the interfaces to the various components (subroutines and modules). The various components are further decomposed into smaller subcomponents until the module level is reached, at which time programming or coding at the module level begins. The nature of a module is described in the paragraphs that follow.

A module is a block of code that performs a well-described function or procedure. The length of a module is a frequently debated issue. Initially, its length was defined as perhaps 50–200 source lines of code (SLOC). The SLOC length of a module is not absolute; it is based on the coder's "intellectual span of control." Since a program listing contains about 50 lines per page, this means that a module would be 1–4 pages long. The reasoning behind this is that it would be difficult to read, analyze, and trace the control structures of a program that extends beyond a few pages and keep all the logic of the program in mind; hence the term intellectual span of control. The concepts of a module, a module interface, and rough bounds on module size are more directly applicable to an SPP approach than to that of OOP; however, just as very large and complex modules are undesirable, so are very large and complex objects.
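The 50–200 SLOC guideline can even be checked mechanically. A crude sketch follows (the counting rule, skipping blank and comment-only lines, and the file name payroll.py are assumptions made for the example):

```python
import ast

def function_sloc(source: str) -> dict[str, int]:
    """Count non-blank, non-comment source lines in each top-level function."""
    lines = source.splitlines()
    return {
        node.name: sum(
            1
            for line in lines[node.lineno - 1 : node.end_lineno]
            if line.strip() and not line.strip().startswith("#")
        )
        for node in ast.parse(source).body
        if isinstance(node, ast.FunctionDef)
    }

for name, sloc in function_sloc(open("payroll.py").read()).items():
    if not 50 <= sloc <= 200:
        print(f"{name}: {sloc} SLOC is outside the 50-200 'span of control' range")
```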
Sometimes, the prototype progresses rapidly, since old code from related projects can be used for the subroutines and modules, or a "first draft" of the software can be written even if some of the more complex features are left out. If the old code actually survives to the final version of the program, we speak of such code as reused code or legacy code, and if such reuse is significant, the development life cycle will be shortened somewhat and the cost will be reduced. Of course, the prototype code must be tested, and oftentimes when a prototype is shown to the customer, the customer realizes that some features are not what he or she wanted. It is important to ascertain this as early as possible in the project so that revisions can be made in the specifications that will impact the final design. If these changes are delayed until late in the project, they can involve major changes in the code as well as significant redesign and extensive retesting of the software, for which large cost overruns and delays may be incurred. In some projects, the contracting is divided into two phases: delivery and evaluation of the prototype, followed by revisions in the requirements and specifications and a second contract for the delivered version of the software. Some managers complain that designing a prototype that is to be replaced by a final design is doing a job twice. Indeed it is; however, it is the best way to develop a large, complex project. (See Chapter 11, "Plan to Throw One Away," of Brooks [1995].) The cost of the prototype is not so large if one considers that much of the prototype code (especially the control structure) can be modified and reused for the final design and that the prototype test cases can be reused in testing the final design. It is likely that the same manager who objects to the use of prototype software would heartily endorse the use of a prototype board (breadboard), a mechanical model, or a computer simulation to "work out the bugs" of a hardware design, without realizing that the software prototype is the software analog of these well-tried hardware development techniques.

Finally, we should remark that not all projects need a prototype phase. Consider the design of a fourth payroll system for a customer. Assume that the development organization specializes in payroll software and had developed the last three payroll systems for the customer. It is unlikely that a prototype would be required by either the customer or the developer. More likely, the developer would have some experts with considerable experience study the present system, study the new requirements, and ask many probing questions of the knowledgeable personnel at the customer's site, after which they could write the specifications for the final software. However, this payroll example is not the usual case; in most cases, prototype software is generally valuable and should be considered.

5.3.5 Design

Design really begins with the needs, requirements, and specifications documents. Also, the design of a prototype system is a very important part of the design process. For discussion purposes, however, we will refer to the final design stage as program design. In the case of SPP, there are two basic design approaches: top-down and bottom-up. The top-down process begins with the complete system at level 0; then it decomposes this into a number of subsystems at level 1. This process continues to levels 2 and 3, then down to level n, where individual modules are encountered and coded as described in the following section. Such a decomposition can be modeled by a hierarchy diagram (H-diagram) such as that shown in Fig. 5.1(a). The diagram, which resembles an inverted tree, may be modeled as a mathematical graph where each "box" in the diagram represents a node in the graph and each line connecting the boxes represents a branch in the graph. A node at level k (the predecessor) has several successor nodes at level [...]
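Fig. 5.1 is not reproduced in this excerpt, but the graph interpretation of an H-diagram is easy to capture in a data structure (the subsystem names below are invented for illustration):

```python
# Successor lists: each node maps to its children one level down the H-diagram.
h_diagram = {
    "system": ["tracking", "display", "comms"],    # level 0 -> level 1
    "tracking": ["filter", "correlate"],           # level 1 -> level 2 (modules)
    "display": ["render", "alerts"],
    "comms": [], "filter": [], "correlate": [], "render": [], "alerts": [],
}

def level(node: str) -> int:
    """Level of a node in the inverted tree; the root 'system' is level 0."""
    for parent, successors in h_diagram.items():
        if node in successors:
            return level(parent) + 1
    return 0  # no predecessor found: node is the root

print(level("correlate"))  # 2 -- a leaf module reached by top-down decomposition
```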
... Generally, testing costs and schedules are also included. When a commercial software company is developing a product for sale to the general business and home community, the later phases of testing are often somewhat different, for which the terms alpha testing and beta testing are often used. Alpha testing means that a test group within the company evaluates the software before release, whereas beta testing ... releases of the software to help test and debug it. Some people feel that beta testing is just a way of reducing the cost of software development and that it is not a thorough way of testing the software, whereas others feel that the company still does adequate testing and that this is just a way of getting a lot of extra field testing done in a short period of time at little additional cost. During early field ...

... differentiation of both sides yields

dR(t)/dt = −f(t)    (5.11)

Substituting Eq. (5.11) into Eq. (5.10) and solving for z(t) yields

z(t) = −[dR(t)/dt] / R(t)    (5.12)

This differential equation can be solved by integrating both sides, yielding

ln R(t) = −∫ z(t) dt    (5.13a)

Eliminating the natural logarithmic function in this equation by exponentiating both sides yields

R(t) = e^(−∫ z(t) dt)    (5.13b)

which is the form ... change the test and correction process so that the situation of Fig. 5.5(a) or (b) ensues, and then continue testing. One could also return to an earlier saved release of the software where error generation was modest, change the test and correction process, and, starting with this baseline, return to testing. The last and most unpleasant choice is to discard the software and start again. (Quantitative error-generation ...

... dE_d(t)/dt = aE_r(t)    (5.25a)

Substituting for E_r(t) from Eq. (5.20) and letting E_d(t) = E_c(t) yields

dE_c(t)/dt = a[E_T − E_c(t)]    (5.25b)

Rearranging the differential equation given in Eq. (5.25b) yields

dE_c(t)/dt + aE_c(t) = aE_T    (5.25c)

To solve this differential equation, we obtain the homogeneous solution by setting the right-hand side equal to 0 and substituting the trial solution ... questions during product development: When should we stop testing? and Will the product function well and be considered reliable? Both are technical management questions; the former can be restated as follows: When are there few enough errors that the software can be released to the field (or at least to the last stage of testing)? To continue testing is costly, but to release a product with too many errors ... passed, the software is accepted and the developer is paid; however, if the test is failed, the developer resumes the testing and correcting of software errors (including those found during the acceptance test), and a new acceptance test date is scheduled. Sometimes, "third party" testing is used, in which the customer hires an outside organization to make up and administer integration, system, or acceptance ... Furthermore, errors found by code reading and testing at the middle (unit) code level (called module errors) are often not carefully kept. A change in the preliminary design and the occurrence of module test errors should both be carefully recorded. Oftentimes, the standard practice is not to start counting software errors ...³

³ The origin of the word "bug" is very interesting. In the early days of computers, many ...

... language, whether there is legacy code available, how well the operating system supports the language, whether the code modules are to be written so that they may be reused in the future, and so forth. Typical choices are C, Ada, and Visual Basic. In the case of OOP, the most common languages at present are C++ and Ada.

5.3.7 Testing

Testing is a complex process, and the exact nature of it depends on ... there may be more testing of interfaces, objects, and other structures within the OOP philosophy. If proof of program correctness is employed, there will be many additional layers added to the design process, involving the writing of proofs to ensure that the design will satisfy a mathematical representation of the program logic. These additional phases of design may replace some of the testing phases.
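The two derivations reconstructed above can be sanity-checked numerically. Eq. (5.20) is not visible in this excerpt; the sketch below assumes it reads E_r(t) = E_T − E_c(t), as Eq. (5.25b) implies, and uses arbitrary illustrative values for E_T, a, and the hazard z(t):

```python
import math

# Eq. (5.25b): dEc/dt = a * (E_T - Ec); for Ec(0) = 0 its closed form is
# Ec(t) = E_T * (1 - e^(-a t)), from the homogeneous-plus-particular solution
# the text sketches. E_T and a are illustrative, not values from the book.
E_T, a, dt, t_end = 100.0, 0.05, 0.01, 40.0

ec, t = 0.0, 0.0
while t < t_end:            # crude Euler integration of Eq. (5.25b)
    ec += a * (E_T - ec) * dt
    t += dt

print(ec)                                # ~86.5 errors corrected by t = 40
print(E_T * (1 - math.exp(-a * t_end)))  # closed form agrees: ~86.5

# Eq. (5.13b): R(t) = exp(-∫ z(t) dt). For a constant hazard z0 this reduces
# to the familiar exponential reliability function.
z0, t_mission = 1e-3, 500.0
print(math.exp(-z0 * t_mission))         # ~0.607
```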
Assuming the top-down structured approach, the first step in testing the code is to perform unit (module) testing. In general, ...
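As a small illustration of unit (module) testing, here is a hypothetical test for the quadratic-roots module sketched in Section 5.1.2 (it assumes the naive quadratic_roots function from that sketch is defined in the same file). Note how it probes exactly the special cases the text warned about:

```python
import unittest

class TestQuadraticRoots(unittest.TestCase):
    def test_two_real_roots(self):
        self.assertEqual(quadratic_roots(1, 5, 6), (-2.0, -3.0))

    def test_a_equal_zero(self):
        # Bx + C = 0 has one root; the naive module "blows up" instead.
        with self.assertRaises(ZeroDivisionError):
            quadratic_roots(0, 2, 4)

    def test_imaginary_roots(self):
        # b^2 - 4ac < 0; math.sqrt raises rather than returning complex roots.
        with self.assertRaises(ValueError):
            quadratic_roots(1, 0, 1)

if __name__ == "__main__":
    unittest.main()
```

Here the tests encode the naive module's known weaknesses; a robust final design would handle both special cases, and the tests would then be rewritten to check the correct values instead.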
