Software Testing
Cognizant Technology Solutions (Confidential)

Table of Contents

1 Introduction to Software
   1.1 Evolution of the Software Testing Discipline
   1.2 The Testing Process and the Software Testing Life Cycle
   1.3 Broad Categories of Testing
   1.4 Widely Employed Types of Testing
   1.5 The Testing Techniques
   1.6 Chapter Summary
2 Black Box and White Box Testing
   2.1 Introduction
   2.2 Black Box Testing
   2.3 Testing Strategies/Techniques
   2.4 Black Box Testing Methods
   2.5 Black Box vs. White Box
   2.6 White Box Testing
3 GUI Testing
   3.1 Section 1 - Windows Compliance Testing
   3.2 Section 2 - Screen Validation Checklist
   3.3 Specific Field Tests
   3.4 Validation Testing - Standard Actions
4 Regression Testing
   4.1 What Is Regression Testing
   4.2 Test Execution
   4.3 Change Request
   4.4 Bug Tracking
   4.5 Traceability Matrix
5 Phases of Testing
   5.1 Introduction
   5.2 Types and Phases of Testing
   5.3 The "V" Model
6 Integration Testing
   6.1 Generalization of Module Testing Criteria
7 Acceptance Testing
   7.1 Introduction - Acceptance Testing
   7.2 Factors Influencing Acceptance Testing
   7.3 Conclusion
8 System Testing
   8.1 Introduction to System Testing
   8.2 Need for System Testing
   8.3 System Testing Techniques
   8.4 Functional Techniques
   8.5 Conclusion
9 Unit Testing
   9.1 Introduction to Unit Testing
   9.2 Unit Testing - Flow (Results; Black Box Approach; White Box Approach; Field Level Checks; Field Level Validations; User Interface Checks)
   9.3 Execution of Unit Tests (Unit Testing Flow; Disadvantage of Unit Testing; Method for Statement Coverage; Race Coverage)
   9.4 Conclusion
10 Test Strategy
   10.1 Introduction
   10.2 Key Elements of Test Management
   10.3 Test Strategy Flow
   10.4 General Testing Strategies
   10.5 Need for Test Strategy
   10.6 Developing a Test Strategy
   10.7 Conclusion
11 Test Plan
   11.1 What Is a Test Plan? (Contents of a Test Plan)
   11.2 Contents (in Detail)
12 Test Data Preparation - Introduction
   12.1 Criteria for Test Data Collection
   12.2 Classification of Test Data Types
   12.3 Organizing the Data
   12.4 Data Load and Data Maintenance
   12.5 Testing the Data
   12.6 Conclusion
13 Test Logs - Introduction
   13.1 Factors Defining the Test Log Generation
   13.2 Collecting Status Data
14 Test Report
   14.1 Executive Summary
15 Defect Management
   15.1 Defect
   15.2 Defect Fundamentals
   15.3 Defect Tracking
   15.4 Defect Classification
   15.5 Defect Reporting Guidelines
16 Automation
   16.1 Why Automate the Testing Process?
   16.2 Automation Life Cycle
   16.3 Preparing the Test Environment
   16.4 Automation Methods
17 General Automation Tool Comparison
   17.1 Functional Test Tool Matrix
   17.2 Record and Playback
   17.3 Web Testing
   17.4 Database Tests
   17.5 Data Functions
   17.6 Object Mapping
   17.7 Image Testing
   17.8 Test/Error Recovery
   17.9 Object Name Map
   17.10 Object Identity Tool
   17.11 Extensible Language
   17.12 Environment Support
   17.13 Integration
   17.14 Cost
   17.15 Ease of Use
   17.16 Support
   17.17 Object Tests
   17.18 Matrix
   17.19 Matrix Score
18 Sample Test Automation Tool
   18.1 Rational Suite of Tools
   18.2 Rational Administrator
   18.3 Rational Robot
   18.4 Robot Login Window
   18.5 Rational Robot Main Window - GUI Script
   18.6 Record and Playback Options
   18.7 Verification Points
   18.8 About SQABasic Header Files
   18.9 Adding Declarations to the Global Header File
   18.10 Inserting a Comment into a GUI Script
   18.11 About Data Pools
   18.12 Debug Menu
   18.13 Compiling the Script
   18.14 Compilation Errors
19 Rational Test Manager
   19.1 Test Manager - Results Screen
20 Supported Environments
   20.1 Operating System
   20.2 Protocols
   20.3 Web Browsers
   20.4 Markup Languages
   20.5 Development Environments
21 Performance Testing
   21.1 What Is Performance Testing?
   21.2 Why Performance Testing?
   21.3 Performance Testing Objectives
   21.4 Pre-requisites for Performance Testing
   21.5 Performance Requirements
22 Performance Testing Process
   22.1 Phase 1 - Requirements Study
   22.2 Phase 2 - Test Plan
   22.3 Phase 3 - Test Design
   22.4 Phase 4 - Scripting
   22.5 Phase 5 - Test Execution
   22.6 Phase 6 - Test Analysis
   22.7 Phase 7 - Preparation of Reports
   22.8 Common Mistakes in Performance Testing
   22.9 Benchmarking Lessons
23 Tools
   23.1 LoadRunner 6.5
   23.2 WebLoad 4.5
   23.3 Architecture Benchmarking
   23.4 General Tests
24 Performance Metrics
   24.1 Client Side Statistics
   24.2 Server Side Statistics
   24.3 Network Statistics
   24.4 Conclusion
25 Load Testing
   25.1 Why Is Load Testing Important?
   25.2 When Should Load Testing Be Done?
26 Load Testing Process
   26.1 System Analysis
   26.2 User Scripts
   26.3 Settings
   26.4 Performance Monitoring
   26.5 Analyzing Results
   26.6 Conclusion
27 Stress Testing
   27.1 Introduction to Stress Testing
   27.2 Background to Automated Stress Testing
   27.3 Automated Stress Testing Implementation
   27.4 Programmable Interfaces
   27.5 Graphical User Interfaces
   27.6 Data Flow Diagram
   27.7 Techniques Used to Isolate Defects
28 Test Case Coverage
   28.1 Test Coverage
   28.2 Test Coverage Measures
   28.3 Procedure-Level Test Coverage
   28.4 Line-Level Test Coverage
   28.5 Condition Coverage and Other Measures
   28.6 How Test Coverage Tools Work
   28.7 Test Coverage Tools at a Glance
29 Test Case Points (TCP)
   29.1 What Is a Test Case Point (TCP)
   29.2 Calculating the Test Case Points
   29.3 Chapter Summary

1 Introduction to Software

1.1 Evolution of the Software Testing Discipline

The effective functioning of modern systems depends on our ability to produce software in a cost-effective way. The term "software engineering" was first used at a 1968 NATO workshop in West Germany, which focused on the growing software crisis. Thus we see that the software crisis, with its concerns about quality, reliability, and high costs, started way back, when most of today's software testers were not even born!

The attitude towards software testing has undergone a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing but debugging. In the 1960s, when compilers were developed, testing started to be considered a separate activity from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster, and more cost-effective software. There has also been a growing interest in software safety, protection, and security, and hence an increased acceptance of testing as a technical discipline and also as a career choice!
Now to answer "What is testing?", we can go by the famous definition of Myers: "Testing is the process of executing a program with the intent of finding errors."

1.2 The Testing Process and the Software Testing Life Cycle

Every testing project has to follow the waterfall model of the testing process, given below:

1. Test Strategy & Planning
2. Test Design
3. Test Environment Setup
4. Test Execution
5. Defect Analysis & Tracking
6. Final Reporting

The scope of testing can be tailored to the respective project, but the process mentioned above is common to any testing activity.

Software testing has been accepted as a separate discipline to the extent that there is a separate life cycle for the testing activity. Involving software testing in all phases of the software development life cycle has become a necessity as part of the software quality assurance process. Right from the requirements study through to implementation, testing needs to be done at every phase. The V-Model of the Software Testing Life Cycle, shown alongside the Software Development Life Cycle below, indicates the various phases or levels of testing.

(Figure: the SDLC - STLC V-Model, showing the development phases Requirement Study, High Level Design, and Low Level Design alongside the test levels Production Verification Testing, User Acceptance Testing, System Testing, Integration Testing, and Unit Testing.)

1.3 Broad Categories of Testing

Based on the V-Model mentioned above, we see that there are two categories of testing activities that can be done on software, namely:

- Static Testing
- Dynamic Testing

The kind of verification we do on the software work products before compilation and creation of an executable, that is, requirement reviews, design reviews, code reviews, walkthroughs, and audits, is called Static Testing. When we test the software by executing it and comparing the actual and expected results, it is called Dynamic Testing.

1.4 Widely Employed Types of Testing

From the V-Model, we see that there are various levels or phases of testing, namely unit testing, integration testing, system testing, user acceptance testing, etc. Let us see brief definitions of the widely employed types of testing.

Unit Testing: Testing done on a unit, the smallest piece of software, to verify that it satisfies its functional specification or its intended design structure.

Integration Testing: Testing which takes place as sub-elements are combined (i.e., integrated) to form higher-level elements.

Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have not caused unintended effects and that the system still complies with its specified requirements.

System Testing: Testing the software against the required specifications on the intended hardware.

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, enabling the customer to decide whether to accept the system.

Performance Testing: Evaluating the time taken, or the response time, of the system to perform its required functions, in comparison with its specified performance requirements.

Stress Testing: Evaluating a system beyond the limits of its specified requirements or system resources (such as disk space, memory, or processor utilization) to ensure the system does not break unexpectedly.

Load Testing: Load testing, a subset of stress testing, verifies that a web site can handle a particular number of concurrent users while maintaining acceptable response times.
Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer.

Beta Testing: Testing conducted at one or more customer sites by the end users of a delivered software product or system.

1.5 The Testing Techniques

To perform these types of testing, there are two widely used testing techniques; the testing types described above are performed based on them.

Black-box testing technique: testing based solely on analysis of the requirements (specification, user documentation, etc.). Also known as functional testing.

White-box testing technique: testing based on analysis of the internal logic (design, code, etc.), although expected results still come from the requirements. Also known as structural testing.

These topics will be elaborated in the coming chapters; a minimal illustration of the two techniques follows below.
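As a minimal illustration of the distinction, assuming a hypothetical routine and specification (not taken from this courseware): the black-box tests are derived from the stated behavior alone, while the white-box tests are chosen by reading the code so that both branches execute.

```python
def grade(score):
    """Return 'pass' for scores of 50 or more, else 'fail' (the spec)."""
    if score >= 50:
        return "pass"
    return "fail"

# Black-box tests: derived from the specification alone,
# probing the boundary around 50 without reading the code.
assert grade(50) == "pass"
assert grade(49) == "fail"

# White-box tests: derived from the code's structure, chosen
# so that both branches of the 'if' statement are executed.
assert grade(100) == "pass"   # covers the 'if' branch
assert grade(0) == "fail"     # covers the fall-through branch
```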
1.6 Chapter Summary

This chapter covered the introduction and basics of software testing:

- Evolution of software testing
- The testing process and life cycle
- Broad categories of testing
- Widely employed types of testing
- The testing techniques

27 Stress Testing

27.1 Introduction to Stress Testing

Testing is commonly accomplished through reviews (of product requirements, software functional requirements, software designs, code, test plans, etc.), unit testing, system testing (also known as functional testing), expert user testing (like beta testing but in-house), smoke tests, and so on. All these testing activities are important, and each plays an essential role in the overall effort, but none of them specifically looks for problems like memory and resource management. Further, these testing activities do little to quantify the robustness of the application or determine what may happen under abnormal circumstances. We try to fill this gap in testing by using stress testing.

Stress testing can imply many different types of testing depending upon the audience. Even in the literature on software testing, stress testing is often confused with load testing and/or volume testing. For our purposes, we define stress testing as performing random operational sequences at larger than normal volumes, at faster than normal speeds, and for longer than normal periods of time, as a method to accelerate the rate of finding defects and verify the robustness of our product.

Stress testing in its simplest form is any test that repeats a set of actions over and over with the purpose of "breaking the product". The system is put through its paces to find where it may fail. As a first step, you can take a common set of actions for your system and keep repeating them in an attempt to break the system. Adding some randomization to these steps will help find more defects. How long can your application stay functioning doing this operation repeatedly? (A minimal sketch of such a loop appears at the end of this section.)

To help you reproduce your failures, one of the most important things to remember is to log everything as you proceed. You need to know exactly what was happening when the system failed. Did the system lock up after 100 attempts or 100,000 attempts? [1]

Note that there are many other types of testing which have not been mentioned above, for example risk-based testing, random testing, security testing, etc. We have found, and it seems others agree, that it is best to review what needs to be tested, pick multiple testing types that will provide the best coverage for the product to be tested, and then master those testing types, rather than trying to implement every testing type.

Some of the defects that we have been able to catch with stress testing, and that have not been found in any other way, are memory leaks, deadlocks, software asserts, and configuration conflicts. For more details about these types of defects, or how we were able to detect them, refer to the section 'Typical Defects Found by Stress Testing'. Table 1 provides a summary of some of the strengths and weaknesses that we have found with stress testing.

Table 1: Stress Testing Strengths and Weaknesses

Strengths:
- Finds defects that no other type of test would find
- Randomization increases coverage
- Tests the robustness of the application
- Helpful at finding memory leaks, deadlocks, software asserts, and configuration conflicts

Weaknesses:
- Not a real-world situation
- Defects are not always reproducible; one sequence of operations may catch a problem right away, while another sequence may never find it
- Does not test the correctness of the system's response to user input
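The simplest form described above can be sketched in a few lines. This is a hypothetical illustration, not a tool from this courseware: it repeats a small set of actions in random order until an iteration limit is reached or the application under test raises an error, and the seed makes the run repeatable.

```python
import random

def stress(actions, iterations=100_000, seed=42):
    """Repeat a set of actions in random order, counting how far we get."""
    rng = random.Random(seed)          # seeded so the run can be repeated
    for i in range(1, iterations + 1):
        action = rng.choice(actions)   # randomization helps find more defects
        try:
            action()
        except Exception as exc:       # the application "broke"
            print(f"failed on iteration {i}: {exc}")
            return i
    return iterations

# Example call with two hypothetical actions:
# stress([open_close_file, resize_window], iterations=10_000, seed=7)
```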
27.2 Background to Automated Stress Testing

Stress testing can be done manually, which is often referred to as "monkey" testing. In this kind of stress testing, the tester uses the application "aimlessly", like a monkey: poking buttons, turning knobs, "banging" on the keyboard, etc., in order to find defects. One of the problems with monkey testing is reproducibility. In this kind of testing, where the tester uses no guide or script and no log is recorded, it is often impossible to repeat the steps executed before a problem occurred. Attempts have been made to use keyboard spyware, video recorders, and the like to capture user interactions, with varying (often poor) levels of success.

Our applications are required to operate for long periods of time with no significant loss of performance or reliability. We have found that stress testing a software application helps in assessing and increasing its robustness, and it has become a required activity before every software release. Performing stress testing manually is not feasible, and repeating the test for every software release is almost impossible, so this is a clear example of an area that benefits from automation: you get a return on your investment quickly, and it provides you with more than just a mirror of your manual test suite.

Previously, we had attempted to stress test our applications using manual techniques and found them lacking in several respects. Some of the weaknesses of manual stress testing we found were:

- Manual techniques cannot provide the kind of intense simulation of maximum user interaction over time; humans cannot keep the rate of interaction high enough for long enough.
- Manual testing does not provide the breadth of test coverage of the product features/commands that is needed; people tend to do the same things in the same way over and over, so some configuration transitions do not get tested.
- Manual testing generally does not allow for repeatability of command sequences, so reproducing failures is nearly impossible.
- Manual testing does not perform automatic recording of discrete values with each command sequence for tracking memory utilization over time, which is critical for detecting memory leaks.

With automated stress testing, the stress test is performed under computer control. The stress test tool is implemented to determine the application's configuration, to execute all valid command sequences in a random order, and to perform data logging. Since the stress test is automated, it becomes easy to execute multiple stress tests simultaneously, across more than one product at the same time.

Depending on how the stress inputs are configured, stress can cover both 'positive' and 'negative' testing. Positive testing is when only valid parameters are provided to the device under test, whereas negative testing provides both valid and invalid parameters to the device as a way of trying to break the system under abnormal circumstances. For example, if a valid input is 0 to 59 seconds, positive testing would test 0 to 59, and negative testing would also try -1, 60, etc. (A code sketch of this distinction appears at the end of this section.)

Even though there are clear advantages to automated stress testing, it still has its disadvantages. For example, we have found that each time the product application changes, we most likely need to change the stress tool (or, more commonly, commands need to be added to or deleted from the input command set). Also, if the input command set changes, then the output command sequence also changes, given the pseudo-randomization. Table 2 provides a summary of some of these advantages and disadvantages that we have found with automated stress testing.

Table 2: Automated Stress Testing Advantages and Disadvantages

Advantages:
- Performed under computer control
- Capable of testing all product application command sequences
- Multiple product applications can be supported by one stress tool
- Uses randomization to increase coverage; tests vary with new seed values
- Repeatability of commands and parameters helps reproduce problems or verify that existing problems have been resolved
- Informative log files facilitate investigation of problems

Disadvantages:
- Requires capital equipment and development of a stress test tool
- Requires maintenance of the tool as the product application changes
- Reproducible stress runs must use the same input command set
- Defects are not always reproducible, even with the same seed value
- Requires test application information to be kept and maintained
- May take a long time to execute

In summary, automated stress testing overcomes the major disadvantages of manual stress testing and finds defects that no other testing type can find. Automated stress testing exercises various features of the system at a rate exceeding that at which actual end users can be expected to operate, and for durations of time that exceed typical use. The automated stress test randomizes the order in which the product features are accessed; in this way, non-typical sequences of user interaction are tested in an attempt to find latent defects not detectable with other techniques.

To take advantage of automated stress testing, our challenge then was to create an automated stress test tool that would:

1. Simulate user interaction for long periods of time (since it is computer controlled, we can exercise the product more than a user can).
2. Provide as much randomization of command sequences to the product as possible, to improve test coverage over the entire set of possible features/commands.
3. Continuously log the sequence of events so that issues can be reliably reproduced after a system failure.
4. Record the memory in use over time to allow memory management analysis.
5. Stress the resource and memory management features of the system.
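The positive/negative distinction above can be made concrete with a parameter generator. This is a hypothetical sketch assuming a 'seconds' parameter whose valid range is 0 to 59, as in the example in the text; the out-of-range values are illustrative choices.

```python
import random

VALID_SECONDS = range(0, 60)   # the specified valid range: 0 to 59

def positive_value(rng: random.Random) -> int:
    """Positive testing: only valid parameters are sent to the device."""
    return rng.choice(VALID_SECONDS)

def negative_value(rng: random.Random) -> int:
    """Negative testing: mix valid values with out-of-range ones (-1, 60, ...)."""
    out_of_range = [-1, 60, 999, -100]
    return rng.choice(list(VALID_SECONDS) + out_of_range)
```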
27.3 Automated Stress Testing Implementation

Automated stress testing implementations will differ depending on the interface to the product application. The types of interfaces available to the product drive the design of the automated stress test tool. The interfaces fall into two main categories:

1) Programmable interfaces: interfaces such as command prompts, RS-232, Ethernet, General Purpose Interface Bus (GPIB), Universal Serial Bus (USB), etc., that accept strings representing command functions without regard to context or the current state of the device.

2) Graphical user interfaces (GUIs): interfaces that use the windowing model to give the user direct control over the device; individual windows and controls may or may not be visible and/or active depending on the state of the device.

27.4 Programmable Interfaces

These interfaces have allowed users to set up, control, and retrieve data in a variety of application areas, such as manufacturing, research and development, and service. To meet the needs of these customers, the products provide programmable interfaces, which generally support a large number of commands (1000+) and are required to operate for long periods of time, for example on a manufacturing line where the product is used 24 hours a day, 7 days a week. Testing all possible combinations of commands on these products is practically impossible using manual testing methods.

Programmable interface stress testing is performed by randomly selecting from a list of individual commands and then sending these commands to the device under test (DUT) through the interface. If a command has parameters, the parameters are also enumerated by randomly generating a unique command parameter. By using a pseudo-random number generator, each unique seed value will create the same sequence of commands with the same parameters each time the stress test is executed. Each command is also written to a log file, which can then be used later to reproduce any defects that were uncovered. (A sketch of such a command-stream driver follows at the end of this section.)

For additional complexity, other variations of the automated stress test can be performed. For example, the stress test can vary the rate at which commands are sent to the interface, it can send the commands across multiple interfaces simultaneously (if the product supports it), or it can send multiple commands at the same time.
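A minimal sketch of the programmable-interface driver just described, under stated assumptions: the command list, the send_to_dut transport, and the log format are hypothetical stand-ins, but the seeded pseudo-random generator and the command log mirror the reproducibility mechanism in the text.

```python
import random

COMMANDS = ["RESET", "MEASURE", "SET_RANGE", "READ", "SELF_TEST"]  # hypothetical DUT command set

def send_to_dut(command: str) -> None:
    """Stand-in for the real transport (RS-232, GPIB, USB, Ethernet, ...)."""
    pass

def stress_run(seed: int, count: int, log_path: str) -> None:
    """Send `count` randomly chosen commands; the seed makes the run repeatable."""
    rng = random.Random(seed)  # same seed => same command sequence and parameters
    with open(log_path, "w") as log:
        log.write(f"seed={seed}\n")
        for i in range(count):
            command = rng.choice(COMMANDS)
            if command == "SET_RANGE":                 # a command with a parameter
                command += f" {rng.randint(0, 59)}"    # randomly generated parameter
            log.write(f"{i}: {command}\n")             # log before sending, so a
            log.flush()                                # crash still leaves a record
            send_to_dut(command)

stress_run(seed=1234, count=500, log_path="stress.log")
```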
27.5 Graphical User Interfaces

In recent years, graphical user interfaces have become dominant, and it became clear that we needed a means to test these user interfaces analogous to that used for programmable interfaces. However, since accessing the GUI is not as simple as sending streams of command-line input to the product application, a new approach was needed. It is necessary to store not only the object recognition method for each control, but also information about its parent window and other information such as its expected state, certain property values, etc. An example would be a 'HELP' menu item. There may be multiple windows open with a 'HELP' menu item, so it is not sufficient to simply store "click the 'HELP' menu item"; you have to store "click the 'HELP' menu item for the particular window". With this information it is possible to uniquely define all the possible product application operations (i.e., each control can be uniquely identified).

Additionally, the flow of each operation can be important. Many controls are not visible until several levels of modal windows have been opened and/or closed. For example, a typical confirm-file-overwrite dialog box for a 'File->Save As...' filename operation is not available until the following sequence has been executed:

1. Set context to the main window.
2. Select 'File->Save As...'.
3. Select the target directory from the tree control.
4. Type a valid filename into the edit box.
5. Click the 'SAVE' button.
6. If the filename already exists, either confirm the overwrite by clicking the 'OK' button in the confirmation dialog or click the cancel button.

In this case, you need to group these six operations together as one "big" operation in order to correctly exercise this particular 'OK' button.

27.6 Data Flow Diagram

A stress test tool can have many different interactions and can be implemented in many different ways. Figure 1 shows a block diagram which illustrates some of the stress test tool interactions. The main interactions of the stress test tool are with an input file and the device under test (DUT). The input file is used to provide the stress test tool with a list of all the commands and interactions needed to test the DUT.

(Figure 1: Stress Test Tool Interactions. The stress test tool reads an input file, drives the DUT, logs the command sequence and test results, and is observed by a system resource monitor.)

Additionally, data logging (of commands and test results) and system resource monitoring are very beneficial in helping determine what the DUT was trying to do before it crashed, and how well it was able to manage its system resources. The basic control flow of an automated stress test tool is to set the DUT into a known state and then loop continuously, selecting a new random interaction, trying to execute that interaction, and logging the results. This loop continues until a set number of interactions have occurred or the DUT crashes. (The sketch below renders this control flow in code.)
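The control flow of Figure 1 can be sketched as follows. The interaction list, reset hook, and memory probe are hypothetical placeholders; the loop structure (known state, random selection, execution, logging, stop on crash or count) follows the description above.

```python
import random

def stress_tool(interactions, reset_dut, read_memory_in_use,
                max_interactions=10_000, seed=99, log_path="run.log"):
    """Drive the DUT with random interactions until a crash or the limit."""
    reset_dut()                        # put the DUT into a known state
    rng = random.Random(seed)
    with open(log_path, "w") as log:
        for i in range(max_interactions):
            interaction = rng.choice(interactions)
            try:
                result = interaction()
            except Exception as exc:   # the DUT "crashed": stop, keep the log
                log.write(f"{i} CRASH {interaction.__name__}: {exc}\n")
                raise
            # Record memory with every step: critical for spotting leaks.
            log.write(f"{i} OK {interaction.__name__} "
                      f"result={result} mem={read_memory_in_use()}\n")
```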
27.7 Techniques Used to Isolate Defects

Depending on the type of defect to be isolated, two different techniques are used:

System crashes (asserts and the like): do not try to run the full stress test from the beginning, unless it only takes a few minutes to produce the defect. Instead, back up and run the stress test from the last seed (for us this is normally just the last 500 commands). If the defect still occurs, continue to reduce the number of commands in the playback until the defect is isolated.

Diminishing resource issues (memory leaks and the like): these are usually limited to a single subsystem. To isolate the subsystem, start removing subsystems from the database and re-run the stress test while monitoring the system resources. Continue this process until the subsystem causing the reduction in resources is identified. This technique is most effective after full integration of multiple subsystems (or modules) has been achieved.

Some defects are just hard to reproduce, even with the same sequence of commands. These defects should still be logged into the defect tracking system. As the defect re-occurs, continue to add additional data to the defect description. Eventually, over time, you will be able to detect a pattern, isolate the root cause, and resolve the defect. Some defects just seem to be un-reproducible, especially those that reside around page faults; but overall, we know that the robustness of our applications increases proportionally with the amount of time that the stress test runs uninterrupted.

28 Test Case Coverage

28.1 Test Coverage

Test coverage is an important measure of quality for software systems. Test coverage analysis is the process of:

- finding areas of a program not exercised by a set of test cases,
- creating additional test cases to increase coverage, and
- determining a quantitative measure of code coverage, which is an indirect measure of quality.

An optional aspect of test coverage analysis is:

- identifying redundant test cases that do not increase coverage.

A test coverage analyzer automates this process. Test coverage analysis is sometimes called code coverage analysis; the two terms are synonymous. The academic world more often uses the term "test coverage", while practitioners more often use "code coverage".

Test coverage analysis can be used to assure the quality of the set of tests, not the quality of the actual product. Coverage analysis requires access to the test program's source code and often requires recompiling it with a special command.

Code coverage analysis is a structural testing technique (white-box testing). Structural testing compares test program behavior against the apparent intention of the source code. This contrasts with functional testing (black-box testing), which compares test program behavior against a requirements specification. Structural testing examines how the program works, taking into account possible pitfalls in the structure and logic. Functional testing examines what the program accomplishes, without regard to how it works internally.

28.2 Test Coverage Measures

A large variety of coverage measures exist. Here is a description of some fundamental measures and their strengths and weaknesses.

28.3 Procedure-Level Test Coverage

Probably the most basic form of test coverage is to measure which procedures were and were not executed during the test suite. This simple statistic is typically available from execution profiling tools, whose job is really to measure performance bottlenecks. If the execution time in some procedure is zero, you need to write new tests that hit that procedure. But this measure of test coverage is so coarse-grained that it is not very practical.

28.4 Line-Level Test Coverage

The basic measure of a dedicated test coverage tool is tracking which lines of code are executed and which are not. This result is often presented in a summary at the procedure, file, or project level, giving a percentage of the code that was executed. A large project that achieved 90% code coverage might be considered a well-tested product. Typically the line coverage information is also presented at the source code level, allowing you to see exactly which lines of code were executed and which were not. This, of course, is often the key to writing more tests that will increase coverage: by studying the unexecuted code, you can see exactly what functionality has not been tested. (A toy line-coverage summary is sketched below.)
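As a toy illustration of a line-level summary (hypothetical data, not any real tool's output): given the set of executable lines in a file and the set of lines that a test run actually hit, the percentage reported by coverage tools is a simple ratio.

```python
def line_coverage(executable_lines: set[int], executed_lines: set[int]) -> float:
    """Percentage of executable lines that the test run actually hit."""
    hit = executable_lines & executed_lines
    return 100.0 * len(hit) / len(executable_lines)

# Hypothetical file with 10 executable lines, 9 of them executed.
executable = set(range(1, 11))
executed = executable - {7}          # line 7 was never reached
print(f"{line_coverage(executable, executed):.0f}% line coverage")  # 90%
print("unexecuted lines:", sorted(executable - executed))           # [7]
```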
28.5 Condition Coverage and Other Measures

It is easy to find cases where line coverage does not really tell the whole story. For example, consider a block of code that is skipped under certain conditions (e.g., a statement in an if clause). If that code is shown as executed, you do not know whether you have tested the case when it is skipped. You need condition coverage to know.

There are many other test coverage measures. However, most available code coverage tools do not provide much beyond basic line coverage. In theory, you should have more; but in practice, if you achieve 95+% line coverage and still have time and budget to commit to further testing improvements, it is an enviable commitment to quality!

28.6 How Test Coverage Tools Work

To monitor execution, test coverage tools generally "instrument" the program by inserting "probes". How and when this instrumentation phase happens can vary greatly between different products. Adding probes to the program will make it bigger and slower. If the test suite is large and time-consuming, the performance factor may be significant.

28.6.1 Source-Level Instrumentation

Some products add probes at the source level. They analyze the source code as written and add additional code (such as calls to a code coverage runtime) that will record where the program reached. Such a tool may not actually generate new source files with the additional code; some products, for example, intercept the compiler after parsing but before code generation to insert the changes they need.

One drawback of this technique is the need to modify the build process: a separate code coverage version needs to be maintained in addition to other versions, such as debug (unoptimized) and release (optimized). Proponents claim this technique can provide higher levels of code coverage measurement (condition coverage, etc.) than other forms of instrumentation. This type of instrumentation is dependent on the programming language: the provider of the tool must explicitly choose which languages to support. But it can be somewhat independent of the operating environment (processor, OS, or virtual machine).

28.6.2 Executable Instrumentation

Probes can also be added to a completed executable file. The tool analyzes the existing executable and then creates a new, instrumented one. This type of instrumentation is independent of the programming language. However, it is dependent on the operating environment: the provider of the tool must explicitly choose which processors or virtual machines to support.

28.6.3 Runtime Instrumentation

Probes need not be added until the program is actually run. The probes exist only in the in-memory copy of the executable file; the file itself is not modified. The same executable file used for product release testing should be used for code coverage. Because the file is not modified in any way, just executing it will not automatically start code coverage (as it would with the other methods of instrumentation). Instead, the code coverage tool must start program execution directly or indirectly. Alternatively, the code coverage tool may add a tiny bit of instrumentation to the executable; this new code wakes up and connects to a waiting coverage tool whenever the program executes, does not affect the size or performance of the executable, and does nothing if the coverage tool is not waiting. Like executable instrumentation, runtime instrumentation is independent of the programming language but dependent on the operating environment. (A minimal runtime probe is sketched below.)
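Python's standard library makes the runtime-instrumentation idea easy to demonstrate: sys.settrace installs an in-memory probe that records each executed line, with no change to the program file. This is a minimal sketch of the concept, not how any particular commercial tool is built.

```python
import sys

executed: set[tuple[str, int]] = set()

def probe(frame, event, arg):
    """Trace hook: record (filename, line) each time a line executes."""
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return probe

def demo(x):
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(probe)      # turn the probe on (in memory only)
demo(1)                  # exercises the 'if' branch; the last line never runs
sys.settrace(None)       # turn the probe off

# Reports the lines of demo() that ran; the skipped return is absent.
print(f"{len(executed)} distinct lines executed")
```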
28.7 Test Coverage Tools at a Glance

There are lots of tools available for measuring test coverage:

Company               Product            OS            Languages
Bullseye              BullseyeCoverage   Win32, Unix   C/C++
CompuWare             DevPartner         Win32         C/C++, Java, VB
Rational (IBM)        PurifyPlus         Win32, Unix   C/C++, Java, VB
Software Research     TCAT               Win32, Unix   C/C++, Java
Testwell              CTC++              Win32, Unix   C/C++
Paterson Technology   LiveCoverage       Win32         C/C++, VB

Coverage analysis is a structural testing technique that helps eliminate gaps in a test suite. It helps most in the absence of a detailed, up-to-date requirements specification. Each project must choose a minimum percent coverage for its release criteria based on the available testing resources and the importance of preventing post-release failures. Clearly, safety-critical software should have a high goal. We must set a higher coverage goal for unit testing than for system testing, since a failure in lower-level code may affect multiple high-level callers.

29 Test Case Points (TCP)

29.1 What Is a Test Case Point (TCP)

A TCP is a measure for estimating the complexity of an application. It is also used as an estimation technique to calculate the size and effort of a testing project. TCP counting amounts to ranking the requirements, and the test cases to be written for those requirements, as simple, average, or complex, and quantifying this into a measure of complexity. In this courseware we give an overview of Test Case Points and do not elaborate on using TCP as an estimation technique.

29.2 Calculating the Test Case Points

Based on the Functional Requirements Document (FRD), the application is classified into various modules; for a web application, say, we can have 'Login and Authentication' as a module, and we rank that particular module as simple, average, or complex based on the number and complexity of the requirements for that module. A simple requirement is one which can be given a value on a scale of 1 to 3. An average requirement is ranked between 4 and 7. A complex requirement is ranked between 8 and 10.

Complexity of Requirements:

Requirement classification   Rank
Simple                       1-3
Average                      4-7
Complex                      > 8

The test cases for a particular requirement are classified as simple, average, or complex based on the following four factors:

- test case complexity for that requirement, OR
- interfaces with other test cases, OR
- number of verification points, OR
- baseline test data.

Refer to the test case classification table given below.

Test Case Classification:

Factor                            Simple             Average            Complex
Complexity of test case           < 3 transactions   3-6 transactions   > 6 transactions
Interface with other test cases   < 3                3                  > 3
Number of verification points     < 8                8                  > 8
Baseline test data                Not required       Required           Required

A sample guideline for the classification of test cases is given below:

- Any verification point containing a calculation is considered 'complex'.
- Any verification point which interfaces or interacts with another application is classified as 'complex'.
- Any verification point consisting of report verification is considered 'complex'.
- A verification point comprising search functionality may be classified as 'complex' or 'average' depending on the complexity.

Depending on the respective project, the complexity needs to be identified in a similar manner.

Based on the test case type, an adjustment factor is assigned for simple, average, and complex test cases. This adjustment factor has been calculated after a thorough study and analysis of many testing projects. The adjustment factor in the table below is pre-determined and must not be changed from project to project.

Test case type   Adjustment factor
Simple           2 (A)
Average          4 (B)
Complex          8 (C)

Test Case Points calculation:

Count                                            Result
No. of simple requirements in the project (R1)   number x adjustment factor A
No. of average requirements in the project (R2)  number x adjustment factor B
No. of complex requirements in the project (R3)  number x adjustment factor C
Total Test Case Points                           R1 + R2 + R3

From the breakdown of the complexity of requirements done in the first step, we get the number of simple, average, and complex test case types. By multiplying the number of requirements by its corresponding adjustment factor, we get the simple, average, and complex test case points. Summing up the three results, we arrive at the total Test Case Point count. (A worked example follows below.)
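A worked example with made-up counts, using the adjustment factors from the table above (A=2, B=4, C=8): a hypothetical project with 20 simple, 10 average, and 5 complex requirements would be sized as follows.

```python
# Pre-determined adjustment factors from the table above.
FACTORS = {"simple": 2, "average": 4, "complex": 8}

def test_case_points(counts: dict[str, int]) -> int:
    """Total TCP = sum over each class of (count x adjustment factor)."""
    return sum(counts[kind] * FACTORS[kind] for kind in FACTORS)

# Hypothetical project: 20 simple, 10 average, 5 complex requirements.
counts = {"simple": 20, "average": 10, "complex": 5}
print(test_case_points(counts))  # 20*2 + 10*4 + 5*8 = 120
```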
29.3 Chapter Summary

This chapter covered the basics of:

- what test coverage is
- test coverage measures
- how test coverage tools work
- a list of test coverage tools
- what TCP is, and how to calculate the Test Case Points for an application
