
xUnit Test Patterns: Refactoring Test Code (Part 4)




DOCUMENT INFORMATION

Structure

  • xUnit Test Patterns: Refactoring Test Code

    • Contents

    • Visual Summary of the Pattern Language

    • Foreword

    • Preface

    • Acknowledgments

    • Introduction

    • Refactoring a Test

    • PART I. The Narratives

      • Chapter 1. A Brief Tour

        • About This Chapter

        • The Simplest Test Automation Strategy That Could Possibly Work

        • What’s Next?

      • Chapter 2. Test Smells

        • About This Chapter

        • An Introduction to Test Smells

        • A Catalog of Smells

        • What’s Next?

      • Chapter 3. Goals of Test Automation

        • About This Chapter

        • Why Test?

        • Goals of Test Automation

        • What’s Next?

      • Chapter 4. Philosophy of Test Automation

        • About This Chapter

        • Why Is Philosophy Important?

        • Some Philosophical Differences

        • When Philosophies Differ

        • My Philosophy

        • What’s Next?

      • Chapter 5. Principles of Test Automation

        • About This Chapter

        • The Principles

        • What’s Next?

      • Chapter 6. Test Automation Strategy

        • About This Chapter

        • What’s Strategic?

        • Which Kinds of Tests Should We Automate?

        • Which Tools Do We Use to Automate Which Tests?

        • Which Test Fixture Strategy Do We Use?

        • How Do We Ensure Testability?

        • What’s Next?

      • Chapter 7. xUnit Basics

        • About This Chapter

        • An Introduction to xUnit

        • Common Features

        • The Bare Minimum

        • Under the xUnit Covers

        • xUnit in the Procedural World

        • What’s Next?

      • Chapter 8. Transient Fixture Management

        • About This Chapter

        • Test Fixture Terminology

        • Building Fresh Fixtures

        • Tearing Down Transient Fresh Fixtures

        • What’s Next?

      • Chapter 9. Persistent Fixture Management

        • About This Chapter

        • Managing Persistent Fresh Fixtures

        • Managing Shared Fixtures

        • What’s Next?

      • Chapter 10. Result Verification

        • About This Chapter

        • Making Tests Self-Checking

        • State Verification

        • Verifying Behavior

        • Reducing Test Code Duplication

        • Avoiding Conditional Test Logic

        • Other Techniques

        • Where to Put Reusable Verification Logic?

        • What’s Next?

      • Chapter 11. Using Test Doubles

        • About This Chapter

        • What Are Indirect Inputs and Outputs?

        • Testing with Doubles

        • Other Uses of Test Doubles

        • Other Considerations

        • What’s Next?

      • Chapter 12. Organizing Our Tests

        • About This Chapter

        • Basic xUnit Mechanisms

        • Right-Sizing Test Methods

        • Test Methods and Testcase Classes

        • Test Naming Conventions

        • Organizing Test Suites

        • Test Code Reuse

        • Test File Organization

        • What’s Next?

      • Chapter 13. Testing with Databases

        • About This Chapter

        • Testing with Databases

        • Testing without Databases

        • Testing the Database

        • Testing with Databases (Again!)

        • What’s Next?

      • Chapter 14. A Roadmap to Effective Test Automation

        • About This Chapter

        • Test Automation Difficulty

        • Roadmap to Highly Maintainable Automated Tests

        • What’s Next?

    • PART II. The Test Smells

      • Chapter 15. Code Smells

        • Obscure Test

        • Conditional Test Logic

        • Hard-to-Test Code

        • Test Code Duplication

        • Test Logic in Production

      • Chapter 16. Behavior Smells

        • Assertion Roulette

        • Erratic Test

        • Fragile Test

        • Frequent Debugging

        • Manual Intervention

        • Slow Tests

      • Chapter 17. Project Smells

        • Buggy Tests

        • Developers Not Writing Tests

        • High Test Maintenance Cost

        • Production Bugs

    • PART III. The Patterns

      • Chapter 18. Test Strategy Patterns

        • Recorded Test

        • Scripted Test

        • Data-Driven Test

        • Test Automation Framework

        • Minimal Fixture

        • Standard Fixture

        • Fresh Fixture

        • Shared Fixture

        • Back Door Manipulation

        • Layer Test

      • Chapter 19. xUnit Basics Patterns

        • Test Method

        • Four-Phase Test

        • Assertion Method

        • Assertion Message

        • Testcase Class

        • Test Runner

        • Testcase Object

        • Test Suite Object

        • Test Discovery

        • Test Enumeration

        • Test Selection

      • Chapter 20. Fixture Setup Patterns

        • In-line Setup

        • Delegated Setup

        • Creation Method

        • Implicit Setup

        • Prebuilt Fixture

        • Lazy Setup

        • Suite Fixture Setup

        • Setup Decorator

        • Chained Tests

      • Chapter 21. Result Verification Patterns

        • State Verification

        • Behavior Verification

        • Custom Assertion

        • Delta Assertion

        • Guard Assertion

        • Unfinished Test Assertion

      • Chapter 22. Fixture Teardown Patterns

        • Garbage-Collected Teardown

        • Automated Teardown

        • In-line Teardown

        • Implicit Teardown

      • Chapter 23. Test Double Patterns

        • Test Double

        • Test Stub

        • Test Spy

        • Mock Object

        • Fake Object

        • Configurable Test Double

        • Hard-Coded Test Double

        • Test-Specific Subclass

      • Chapter 24. Test Organization Patterns

        • Named Test Suite

        • Test Utility Method

        • Parameterized Test

        • Testcase Class per Class

        • Testcase Class per Feature

        • Testcase Class per Fixture

        • Testcase Superclass

        • Test Helper

      • Chapter 25. Database Patterns

        • Database Sandbox

        • Stored Procedure Test

        • Table Truncation Teardown

        • Transaction Rollback Teardown

      • Chapter 26. Design-for-Testability Patterns

        • Dependency Injection

        • Dependency Lookup

        • Humble Object

        • Test Hook

      • Chapter 27. Value Patterns

        • Literal Value

        • Derived Value

        • Generated Value

        • Dummy Object

    • PART IV. Appendixes

      • Appendix A. Test Refactorings

      • Appendix B. xUnit Terminology

      • Appendix C. xUnit Family Members

      • Appendix D. Tools

      • Appendix E. Goals and Principles

      • Appendix F. Smells, Aliases, and Causes

      • Appendix G. Patterns, Aliases, and Variations

    • Glossary

      • A

      • B

      • C

      • D

      • E

      • F

      • G

      • H

      • I

      • L

      • M

      • N

      • O

      • P

      • R

      • S

      • T

      • U

      • V

    • References

    • Index

      • A

      • B

      • C

      • D

      • E

      • F

      • G

      • H

      • I

      • J

      • K

      • L

      • M

      • N

      • O

      • P

      • Q

      • R

      • S

      • T

      • U

      • V

      • W

      • X

Content

Chapter 16. Behavior Smells

Smells in This Chapter

  • Assertion Roulette
  • Erratic Test
  • Fragile Test
  • Frequent Debugging
  • Manual Intervention
  • Slow Tests

Assertion Roulette

It is hard to tell which of several assertions within the same test method caused a test failure.

Symptoms

A test fails. Upon examining the output of the Test Runner (page 377), we cannot determine exactly which assertion failed.

Impact

When a test fails during an automated Integration Build [SCM], it may be hard to tell exactly which assertion failed. If the problem cannot be reproduced on a developer's machine (as may be the case if the problem is caused by environmental issues or Resource Optimism; see Erratic Test on page 228), fixing the problem may be difficult and time-consuming.

Causes

Cause: Eager Test

A single test verifies too much functionality.

Symptoms

A test exercises several methods of the SUT or calls the same method several times interspersed with fixture setup logic and assertions.

    public void testFlightMileage_asKm2() throws Exception {
        // set up fixture
        // exercise constructor
        Flight newFlight = new Flight(validFlightNumber);
        // verify constructed object
        assertEquals(validFlightNumber, newFlight.number);
        assertEquals("", newFlight.airlineCode);
        assertNull(newFlight.airline);
        // set up mileage
        newFlight.setMileage(1122);
        // exercise mileage translator
        int actualKilometres = newFlight.getMileageAsKm();
        // verify results
        int expectedKilometres = 1810;
        assertEquals(expectedKilometres, actualKilometres);
        // now try it with a canceled flight
        newFlight.cancel();
        try {
            newFlight.getMileageAsKm();
            fail("Expected exception");
        } catch (InvalidRequestException e) {
            assertEquals("Cannot get cancelled flight mileage",
                         e.getMessage());
        }
    }

Another possible symptom is that the test automater wants to modify the Test Automation Framework (page 298) to keep going after an assertion has failed so that the rest of the assertions can be executed.

Root Cause

An Eager Test is often caused by trying to minimize the number of unit tests (whether consciously or unconsciously) by verifying many test conditions in a single Test Method (page 348). While this is a good practice for manually executed tests that have "liveware" interpreting the results and adjusting the tests in real time, it just doesn't work very well for Fully Automated Tests (see page 26).

Another common cause of Eager Tests is using xUnit to automate customer tests that require many steps, thereby verifying many aspects of the SUT in each test. These tests are necessarily longer than unit tests, but care should be taken to keep them as short as possible (but no shorter!).

Possible Solution

For unit tests, we break up the test into a suite of Single-Condition Tests (see page 45) by teasing apart the Eager Test; a sketch of the result appears below.
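As a hedged sketch (reusing the Flight API from the example above; the new method names are illustrative), the Eager Test can be teased apart into three Single-Condition Tests:

    public void testConstructor_initializesFlight() throws Exception {
        // exercise constructor
        Flight newFlight = new Flight(validFlightNumber);
        // verify constructed object
        assertEquals(validFlightNumber, newFlight.number);
        assertEquals("", newFlight.airlineCode);
        assertNull(newFlight.airline);
    }

    public void testGetMileageAsKm_translatesMileage() throws Exception {
        // set up fixture
        Flight newFlight = new Flight(validFlightNumber);
        newFlight.setMileage(1122);
        // exercise mileage translator
        int actualKilometres = newFlight.getMileageAsKm();
        // verify results
        assertEquals(1810, actualKilometres);
    }

    public void testGetMileageAsKm_cancelledFlightFails() throws Exception {
        // set up fixture: a cancelled flight
        Flight newFlight = new Flight(validFlightNumber);
        newFlight.cancel();
        // exercise and verify: asking for mileage should be rejected
        try {
            newFlight.getMileageAsKm();
            fail("Expected exception");
        } catch (InvalidRequestException e) {
            assertEquals("Cannot get cancelled flight mileage",
                         e.getMessage());
        }
    }

Each test now verifies exactly one condition, so a red bar points directly at the behavior that broke.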
Teasing the test apart may be possible using one or more Extract Method [Fowler] refactorings to pull out independent pieces into their own Test Methods. Sometimes it is easier to clone the test once for each test condition and then clean up each Test Method by removing any code that is not required for that particular test condition. Any code required to set up the fixture or put the SUT into the correct starting state can be extracted into a Creation Method (page 415). A good IDE or compiler will then help us determine which variables are no longer being used.

If we are automating customer tests using xUnit, and this effort has resulted in many steps in each test because the workflows require complex fixture setup, we could consider using some other way to set up the fixture for the latter parts of the test. If we can use Back Door Setup (see Back Door Manipulation on page 327) to create the fixture for the last part of the test independently of the first part, we can break one test into two, thereby improving our Defect Localization (see Goals of Test Automation). We should repeat this process as many times as it takes to make the tests short enough to be readable at a single glance and to Communicate Intent (see page 41) clearly.

Cause: Missing Assertion Message

Symptoms

A test fails. Upon examining the output of the Test Runner, we cannot determine exactly which assertion failed.

Root Cause

This problem is caused by the use of Assertion Method (page 362) calls with identical or missing Assertion Messages (page 370). It is most commonly encountered when running tests using a Command-Line Test Runner (see Test Runner) or a Test Runner that is not integrated with the program text editor or development environment. In the following test, we have a number of Equality Assertions (see Assertion Method):

    public void testInvoice_addLineItem7() {
        LineItem expItem = new LineItem(inv, product, QUANTITY);
        // Exercise
        inv.addItemQuantity(product, QUANTITY);
        // Verify
        List lineItems = inv.getLineItems();
        LineItem actual = (LineItem)lineItems.get(0);
        assertEquals(expItem.getInv(), actual.getInv());
        assertEquals(expItem.getProd(), actual.getProd());
        assertEquals(expItem.getQuantity(), actual.getQuantity());
    }

When an assertion fails, will we know which one it was? An Equality Assertion typically prints out both the expected and the actual values, but it may prove difficult to tell which assertion failed if the expected values are similar or print out cryptically. A good rule of thumb is to include at least a minimal Assertion Message whenever we have more than one call to the same kind of Assertion Method.

Possible Solution

If the problem occurred while we were running a test using a Graphical Test Runner (see Test Runner) with IDE integration, we should be able to click on the appropriate line in the stack traceback to have the IDE highlight the failed assertion. Failing this, we can turn on the debugger and single-step through the test to see which assertion statement fails. If the problem occurred while we were running a test using a Command-Line Test Runner, we can try running the test from a Graphical Test Runner with IDE integration to determine the offending assertion.
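The simplest remedy, elaborated below, is to give each assertion its own message. A minimal sketch based on the Invoice test above (the message strings are illustrative):

    public void testInvoice_addLineItem7() {
        LineItem expItem = new LineItem(inv, product, QUANTITY);
        // Exercise
        inv.addItemQuantity(product, QUANTITY);
        // Verify
        List lineItems = inv.getLineItems();
        LineItem actual = (LineItem)lineItems.get(0);
        // Each assertion now names the property it verifies
        assertEquals("invoice", expItem.getInv(), actual.getInv());
        assertEquals("product", expItem.getProd(), actual.getProd());
        assertEquals("quantity", expItem.getQuantity(), actual.getQuantity());
    }

With unique messages, the Test Runner's output tells us directly which property did not match.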
If no Graphical Test Runner is available, we may have to resort to using line numbers (if available) or to a process of elimination, ruling out the assertions that could not have failed, to narrow down the possibilities. Of course, we could just bite the bullet and add a unique Assertion Message (even just a number!) to each call to an Assertion Method, as sketched above.

Further Reading

Assertion Roulette and Eager Test were first described in a paper presented at XP2001 called "Refactoring Test Code" [RTC].

Erratic Test

One or more tests behave erratically; sometimes they pass and sometimes they fail.

Symptoms

We have one or more tests that run but give different results depending on when they are run and who is running them. In some cases, the Erratic Test will consistently give the same results when run by one developer but fail when run by someone else or in a different environment. In other cases, the Erratic Test will give different results when run from the same Test Runner (page 377).

Impact

We may be tempted to remove the failing test from the suite to "keep the bar green," but this would result in an (intentional) Lost Test (see Production Bugs on page 268). If we choose to keep the Erratic Test in the test suite despite the failures, the known failure may obscure other problems, such as another issue detected by the same tests. Just having a test fail can cause us to miss additional failures, because it is much easier to see the change from a green bar to a red bar than to notice that two tests are failing instead of just the one we expected.

Troubleshooting Advice

Erratic Tests can be challenging to troubleshoot because so many potential causes exist. If the cause cannot be easily determined, it may be necessary to collect data systematically over a period of time. Where (in which environments) did the tests pass, and where did they fail? Were all the tests being run or just a subset of them? Did any change in behavior occur when the test suite was run several times in a row? Did any change in behavior occur when it was run from several Test Runners at the same time?

Once we have some data, it should be easier to match up the observed symptoms with those listed for each of the potential causes and to narrow the list of possibilities to a handful of candidates. Then we can collect some more data, focusing on differences in symptoms between the possible causes. Figure 16.1 summarizes the process for determining which cause of an Erratic Test we are dealing with.

[Figure 16.1: Troubleshooting an Erratic Test. A decision tree that narrows an Erratic Test down to Unrepeatable Test, Interacting Tests or Suites, Lonely Test, Resource Leakage, Resource Optimism, Test Run War, or Nondeterministic Test.]

Causes

Tests may behave erratically for a number of reasons. The underlying cause can usually be determined through some persistent sleuthing and by paying attention to patterns regarding how and when the tests fail. Some of the causes are common enough to warrant giving them names and specific advice for rectifying them.

Cause: Interacting Tests

Tests depend on other tests in some way. Note that Interacting Test Suites and Lonely Test are specific variations of Interacting Tests.

Symptoms

A test that works by itself suddenly fails in the following circumstances (a minimal sketch of such a pair of tests follows the list):

  • Another test is added to (or removed from) the suite.
  • Another test in the suite fails (or starts to pass).
  • The test (or another test) is renamed or moved in the source file.
  • A new version of the Test Runner is installed.
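As a hedged illustration (the Counter class and the test names are hypothetical), here is a pair of Interacting Tests that communicate through a static variable; the second test passes when run alone but fails whenever the first test runs before it:

    import junit.framework.TestCase;

    public class CounterTest extends TestCase {
        // Shared Fixture: static state that outlives each individual test
        private static Counter counter = new Counter();

        public void testIncrement_fromZero() {
            counter.increment();
            assertEquals(1, counter.value());
        }

        public void testIncrement_alsoAssumesZero() {
            // Sees 2 if the previous test ran first; sees 1 when run alone
            counter.increment();
            assertEquals(1, counter.value());
        }
    }

    // Hypothetical SUT used by the tests above
    class Counter {
        private int count = 0;
        public void increment() { count = count + 1; }
        public int value()      { return count; }
    }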
Root Cause

Interacting Tests usually arise when tests use a Shared Fixture (page 317), with one test depending in some way on the outcome of another test. The cause of Interacting Tests can be described from two perspectives:

  • The mechanism of interaction
  • The reason for interaction

The mechanism for interaction could be something blatantly obvious (for example, testing an SUT that includes a database) or it could be more subtle. Anything that outlives the lifetime of the test can lead to interactions; static variables can be depended on to cause Interacting Tests and, therefore, should be avoided in both the SUT and the Test Automation Framework (page 298)! See the sidebar "There's Always an Exception" on page 384 for an example of the latter problem. Singletons [GOF] and Registries [PEAA] are good examples of things to avoid in the SUT if at all possible. If we must use them, it is best to include a mechanism to reinitialize their static variables at the beginning of each test.

Tests may interact for a number of reasons, either by design or by accident:

  • Depending on the fixture constructed by the fixture setup phase of another test
  • Depending on the changes made to the SUT during the exercise SUT phase of another test
  • A collision caused by some mutually exclusive action (which may be either of the problems mentioned above) between two tests run in the same test run

The dependencies may suddenly cease to be satisfied if the depended-on test

  • is removed from the suite,
  • is modified to no longer change the state of the SUT,
  • fails in its attempt to change the state of the SUT, or
  • is run after the test in question (because it was renamed or moved to a different Testcase Class; see page 373).

Similarly, collisions may start occurring when the colliding test is

  • added to the suite,
  • passes for the first time, or
  • runs before the dependent test.

In many of these cases, multiple tests will fail. Some of the tests may fail for a good reason, namely that the SUT is not doing what it is supposed to do. Dependent tests may fail for the wrong reason: because they were coded to depend on other tests' success. As a result, they may be giving a "false-positive" (false-failure) indication. In general, depending on the order of test execution is not a wise approach because of the problems described above.
Most variants of the xUnit framework do not make any guarantees about the order of test execution within a test suite. (TestNG, however, promotes interdependencies between tests by providing features to manage the dependencies.)

Possible Solution

Using a Fresh Fixture (page 311) is the preferred solution for Interacting Tests; it is almost guaranteed to solve the problem. If we must use a Shared Fixture, we should consider using an Immutable Shared Fixture (see Shared Fixture) to prevent the tests from interacting with one another through changes in the fixture, by creating from scratch those parts of the fixture that they intend to modify.

If an unsatisfied dependency arises because another test does not create the expected objects or database data, we should consider using Lazy Setup (page 435) to create the objects or data in both tests. This approach ensures that the first test to execute creates the objects or data for both tests. We can put the fixture setup code into a Creation Method (page 415) to avoid Test Code Duplication (page 213). If the tests are on different Testcase Classes, we can move the fixture setup code to a Test Helper (page 643).

Sometimes the collision may be caused by objects or database data that are created in our test but not cleaned up afterward. In such a case, we should consider implementing Automated Fixture Teardown (see Automated Teardown on page 503) to remove them safely and efficiently.

A quick way to find out whether any tests depend on one another is to run the tests in a different order than the normal one. Running the entire test suite in reverse order, for example, would do the trick nicely. Doing so regularly would help avoid accidental introduction of Interacting Tests.

Cause: Interacting Test Suites

In this special case of Interacting Tests, the tests are in different test suites.

Symptoms

A test passes when it is run in its own test suite but fails when it is run within a Suite of Suites (see Test Suite Object on page 387).

    Suite1.run()                 > Green
    Suite2.run()                 > Green
    Suite(Suite1,Suite2).run()   > Test C in Suite2 fails

Root Cause

Interacting Test Suites usually occur when tests in separate test suites try to create the same resource. When they are run in the same suite, the first one succeeds but the second one fails while trying to create the resource. The nature of the problem may be obvious just by looking at the test failure or by reading the failed Test Method (page 348). If it is not, we can try removing other tests from the (nonfailing) test suite, one by one. When the failure stops occurring, we simply examine the last test we removed for behaviors that might cause the interactions with the other (failing) test. In particular, we need to look at anything that might involve a Shared Fixture, including all places where class variables are initialized. These locations may be within the Test Method itself, within a setUp method, or in any Test Utility Methods (page 599) that are called. Warning: There may be more than one pair of tests interacting in the same test suite! The interaction may also be caused by the Suite Fixture Setup (page 441) or Setup Decorator (page 447) of several Testcase Classes clashing, rather than by a conflict between the actual Test Methods! A sketch of such a clash appears below.
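As a hedged sketch of suite-level setups clashing (the Database helper and table name are hypothetical), each Testcase Class below wraps its suite in a Setup Decorator that creates the same table. Each suite passes alone, but when both run in one Suite of Suites, the second decorator's setUp fails because the table already exists:

    import junit.extensions.TestSetup;
    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    public class CustomerTest extends TestCase {
        public void testFindCustomer() { /* ... */ }

        public static Test suite() {
            // Setup Decorator: runs once before this class's suite
            return new TestSetup(new TestSuite(CustomerTest.class)) {
                protected void setUp() throws Exception {
                    Database.createTable("CUSTOMERS"); // throws if it exists
                }
                // No tearDown drops the table, so it leaks to later suites
            };
        }
    }

    public class OrderTest extends TestCase {
        public void testPlaceOrder() { /* ... */ }

        public static Test suite() {
            return new TestSetup(new TestSuite(OrderTest.class)) {
                protected void setUp() throws Exception {
                    Database.createTable("CUSTOMERS"); // same resource: clash!
                }
            };
        }
    }

Adding a tearDown to each decorator that drops the table, or switching to an Immutable Shared Fixture, removes the clash.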
Variants of xUnit that use Testcase Class Discovery (see Test Discovery on page 393), such as NUnit, may appear not to use test suites. In reality, they do; they just don't expect the test automaters to use a Test Suite Factory (see Test Enumeration on page 399) to identify the Test Suite Object to the Test Runner.

Possible Solution

We could, of course, eliminate this problem entirely by using a Fresh Fixture. If this solution isn't within our scope, we could try using an Immutable Shared Fixture to prevent the tests' interaction. If the problem is caused by leftover objects or database rows created by one test that conflict with the fixture being created by a later test, we should consider using Automated Teardown to eliminate the need to write error-prone cleanup code.

Cause: Lonely Test

A Lonely Test is a special case of Interacting Tests. In this case, a test can be run as part of a suite but cannot be run by itself, because it depends on something in a Shared Fixture that was created by another test (e.g., Chained Tests; see page 454) or by suite-level fixture setup logic (e.g., a Setup Decorator). We can address this problem by converting the test to use a Fresh Fixture or by adding Lazy Setup logic to the Lonely Test to allow it to run by itself; a sketch of the Lazy Setup remedy follows.
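A minimal sketch of the Lazy Setup remedy (the Customer and Order types and the fixture variable are hypothetical): the formerly Lonely Test builds the shared fixture itself if whatever normally creates it has not yet run:

    public class OrderProcessingTest extends TestCase {
        private static Customer sharedCustomer; // normally created elsewhere

        // Lazy Setup: build the fixture on first use so the test can run alone
        private Customer getSharedCustomer() {
            if (sharedCustomer == null) {
                sharedCustomer = new Customer("John", "Doe");
            }
            return sharedCustomer;
        }

        public void testPlaceOrder_noLongerLonely() {
            Customer customer = getSharedCustomer();
            Order order = new Order(customer);
            assertEquals(customer, order.getCustomer());
        }
    }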
teammates, we rule out a Test Run War either because we are the only person running tests or because the test fixture is not shared between users or computers As with an Unrepeatable Test, having multiple Nondeterministic Tests in the same test suite can make it more difficult to detect the failure/error pattern: It looks like different tests are failing rather than a single test producing different... fixture outlasts the test run The use of a Database Sandbox may isolate our tests from other developers’ tests but it won’t prevent the tests we run from colliding with themselves or with other tests we run from the same Test Runner The use of Lazy Setup to initialize a fixture holding class variable can result in the test fixture not being reinitialized on subsequent runs of the same test suite In effect,... in question, such a change in test results is expected When we don’t think the change should have affected the tests that are failing or we haven’t changed any production code or tests, we have a case of Fragile Tests Past efforts at automated testing have often run afoul of the “four sensitivities” of automated tests These sensitivities are what cause Fully Automated Tests (see page 26) that previously... http://www.simpopdf.com Are the Tests Compiling? No Probably Interface Sensitivity Yes Possibly Interface Sensitivity Yes Are the Tests Erroring? No Possibly Not Fragile Test Fragile Test Yes Have the Failing Tests Changed? No Probably Behavior Sensitivity Yes Has Some Code Changed? No Probably Data Sensitivity Yes Has the Test Data Changed? No Probably Context Sensitivity Figure 16.2 Troubleshooting a Fragile Test The general... each test and test run The other alternative is to implement Automated Teardown to remove all newly created objects and rows safely and efficiently Cause: Test Run War Test failures occur at random when several people are running tests simultaneously Erratic Test 236 Chapter 16 Behavior Smells Simpo PDF Merge and Split Unregistered Version - http://www.simpopdf.com Symptoms Erratic Test We are running tests . 44 1) or Setup Decorator (page 44 7) of several Testcase Classes clashing rather than by a confl ict between the actual Test Methods! Variants of xUnit that use Testcase Class Discovery (see Test. Interacting Tests. Cause: Interacting Test Suites In this special case of Interacting Tests, the tests are in different test suites. Symptoms A test passes when it is run in its own test suite. fi rst test to execute creates the objects or data for both tests. We can put the fi xture setup code into a Creation Method (page 41 5) to avoid Test Code Duplication (page 213). If the tests
