
xUnit Test Patterns: Refactoring Test Code (Part 5)




DOCUMENT INFORMATION

Structure

  • xUnit Test Patterns: Refactoring Test Code

    • Contents

    • Visual Summary of the Pattern Language

    • Foreword

    • Preface

    • Acknowledgments

    • Introduction

    • Refactoring a Test

    • PART I. The Narratives

      • Chapter 1. A Brief Tour

        • About This Chapter

        • The Simplest Test Automation Strategy That Could Possibly Work

        • What’s Next?

      • Chapter 2. Test Smells

        • About This Chapter

        • An Introduction to Test Smells

        • A Catalog of Smells

        • What’s Next?

      • Chapter 3. Goals of Test Automation

        • About This Chapter

        • Why Test?

        • Goals of Test Automation

        • What’s Next?

      • Chapter 4. Philosophy of Test Automation

        • About This Chapter

        • Why Is Philosophy Important?

        • Some Philosophical Differences

        • When Philosophies Differ

        • My Philosophy

        • What’s Next?

      • Chapter 5. Principles of Test Automation

        • About This Chapter

        • The Principles

        • What’s Next?

      • Chapter 6. Test Automation Strategy

        • About This Chapter

        • What’s Strategic?

        • Which Kinds of Tests Should We Automate?

        • Which Tools Do We Use to Automate Which Tests?

        • Which Test Fixture Strategy Do We Use?

        • How Do We Ensure Testability?

        • What’s Next?

      • Chapter 7. xUnit Basics

        • About This Chapter

        • An Introduction to xUnit

        • Common Features

        • The Bare Minimum

        • Under the xUnit Covers

        • xUnit in the Procedural World

        • What’s Next?

      • Chapter 8. Transient Fixture Management

        • About This Chapter

        • Test Fixture Terminology

        • Building Fresh Fixtures

        • Tearing Down Transient Fresh Fixtures

        • What’s Next?

      • Chapter 9. Persistent Fixture Management

        • About This Chapter

        • Managing Persistent Fresh Fixtures

        • Managing Shared Fixtures

        • What’s Next?

      • Chapter 10. Result Verification

        • About This Chapter

        • Making Tests Self-Checking

        • State Verification

        • Verifying Behavior

        • Reducing Test Code Duplication

        • Avoiding Conditional Test Logic

        • Other Techniques

        • Where to Put Reusable Verification Logic?

        • What’s Next?

      • Chapter 11. Using Test Doubles

        • About This Chapter

        • What Are Indirect Inputs and Outputs?

        • Testing with Doubles

        • Other Uses of Test Doubles

        • Other Considerations

        • What’s Next?

      • Chapter 12. Organizing Our Tests

        • About This Chapter

        • Basic xUnit Mechanisms

        • Right-Sizing Test Methods

        • Test Methods and Testcase Classes

        • Test Naming Conventions

        • Organizing Test Suites

        • Test Code Reuse

        • Test File Organization

        • What’s Next?

      • Chapter 13. Testing with Databases

        • About This Chapter

        • Testing with Databases

        • Testing without Databases

        • Testing the Database

        • Testing with Databases (Again!)

        • What’s Next?

      • Chapter 14. A Roadmap to Effective Test Automation

        • About This Chapter

        • Test Automation Difficulty

        • Roadmap to Highly Maintainable Automated Tests

        • What’s Next?

    • PART II. The Test Smells

      • Chapter 15. Code Smells

        • Obscure Test

        • Conditional Test Logic

        • Hard-to-Test Code

        • Test Code Duplication

        • Test Logic in Production

      • Chapter 16. Behavior Smells

        • Assertion Roulette

        • Erratic Test

        • Fragile Test

        • Frequent Debugging

        • Manual Intervention

        • Slow Tests

      • Chapter 17. Project Smells

        • Buggy Tests

        • Developers Not Writing Tests

        • High Test Maintenance Cost

        • Production Bugs

    • PART III. The Patterns

      • Chapter 18. Test Strategy Patterns

        • Recorded Test

        • Scripted Test

        • Data-Driven Test

        • Test Automation Framework

        • Minimal Fixture

        • Standard Fixture

        • Fresh Fixture

        • Shared Fixture

        • Back Door Manipulation

        • Layer Test

      • Chapter 19. xUnit Basics Patterns

        • Test Method

        • Four-Phase Test

        • Assertion Method

        • Assertion Message

        • Testcase Class

        • Test Runner

        • Testcase Object

        • Test Suite Object

        • Test Discovery

        • Test Enumeration

        • Test Selection

      • Chapter 20. Fixture Setup Patterns

        • In-line Setup

        • Delegated Setup

        • Creation Method

        • Implicit Setup

        • Prebuilt Fixture

        • Lazy Setup

        • Suite Fixture Setup

        • Setup Decorator

        • Chained Tests

      • Chapter 21. Result Verification Patterns

        • State Verification

        • Behavior Verification

        • Custom Assertion

        • Delta Assertion

        • Guard Assertion

        • Unfinished Test Assertion

      • Chapter 22. Fixture Teardown Patterns

        • Garbage-Collected Teardown

        • Automated Teardown

        • In-line Teardown

        • Implicit Teardown

      • Chapter 23. Test Double Patterns

        • Test Double

        • Test Stub

        • Test Spy

        • Mock Object

        • Fake Object

        • Configurable Test Double

        • Hard-Coded Test Double

        • Test-Specific Subclass

      • Chapter 24. Test Organization Patterns

        • Named Test Suite

        • Test Utility Method

        • Parameterized Test

        • Testcase Class per Class

        • Testcase Class per Feature

        • Testcase Class per Fixture

        • Testcase Superclass

        • Test Helper

      • Chapter 25. Database Patterns

        • Database Sandbox

        • Stored Procedure Test

        • Table Truncation Teardown

        • Transaction Rollback Teardown

      • Chapter 26. Design-for-Testability Patterns

        • Dependency Injection

        • Dependency Lookup

        • Humble Object

        • Test Hook

      • Chapter 27. Value Patterns

        • Literal Value

        • Derived Value

        • Generated Value

        • Dummy Object

    • PART IV. Appendixes

      • Appendix A. Test Refactorings

      • Appendix B. xUnit Terminology

      • Appendix C. xUnit Family Members

      • Appendix D. Tools

      • Appendix E. Goals and Principles

      • Appendix F. Smells, Aliases, and Causes

      • Appendix G. Patterns, Aliases, and Variations

    • Glossary

      • A

      • B

      • C

      • D

      • E

      • F

      • G

      • H

      • I

      • L

      • M

      • N

      • O

      • P

      • R

      • S

      • T

      • U

      • V

    • References

    • Index

      • A

      • B

      • C

      • D

      • E

      • F

      • G

      • H

      • I

      • J

      • K

      • L

      • M

      • N

      • O

      • P

      • Q

      • R

      • S

      • T

      • U

      • V

      • W

      • X

Content

When to Use It

Regardless of why we use them, Shared Fixtures come with some baggage that we should understand before we head down this path. The major issue with a Shared Fixture is that it can lead to interactions between tests, possibly resulting in Erratic Tests (page 228) if some tests depend on the outcomes of other tests. Another potential problem is that a fixture designed to serve many tests is bound to be much more complicated than the Minimal Fixture (page 302) needed for a single test. This greater complexity will typically take more effort to design and can lead to a Fragile Fixture (see Fragile Test on page 239) later on down the road when we need to modify the fixture. A Shared Fixture will often result in an Obscure Test (page 186) because the fixture is not constructed inside the test. This potential disadvantage can be mitigated by using Finder Methods (see Test Utility Method on page 599) with Intent-Revealing Names [SBPP] to access the relevant parts of the fixture.

There are some valid reasons for using a Shared Fixture and some misguided ones. Many of the variations have been devised primarily to mitigate the negative consequences of using a Shared Fixture. So, what are good reasons for using a Shared Fixture?

Variation: Slow Tests

We can use a Shared Fixture when we cannot afford to build a new Fresh Fixture for each test. Typically, this scenario will occur when it takes too much processing to build a new fixture for each test, which often leads to Slow Tests (page 253). It most commonly occurs when we are testing with real test databases due to the high cost of creating each of the records. This growth in overhead tends to be exacerbated when we use the API of the SUT to create the reference data, because the SUT often does a lot of input validation, which may involve reading some of the just-written records. A better solution is to make the tests run faster by not interacting with the database at all. For a more complete list of options, see the solutions to Slow Tests and the sidebar "Faster Tests Without Shared Fixtures" (page 319).

Sidebar: Faster Tests Without Shared Fixtures

The first reaction to Slow Tests (page 253) is often to switch to a Shared Fixture (page 317) approach. Several other solutions are available, however. This sidebar describes some experiences on several projects.

Fake Database

On one of our early XP projects, we wrote a lot of tests that accessed the database. At first we used a Shared Fixture. When we encountered Interacting Tests (see Erratic Test on page 228) and later Test Run Wars (see Erratic Test), however, we changed to a Fresh Fixture (page 311) approach. Because these tests needed a fair bit of reference data, they were taking a long time to run. On average, for every read or write the SUT did to or from the database, each test did several more. It was taking 15 minutes to run the full test suite of several hundred tests, which greatly impeded our ability to integrate our work quickly and often. At the time, we were using a data access layer to keep the SQL out of our code. We soon discovered that it allowed us to replace the real database with a functionally equivalent Fake Database (see Fake Object on page 551). We started out by using simple HashTables to store the objects against a key. This approach allowed us to run many of our simpler tests "in memory" rather than against the database.
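The excerpt doesn't show what such a fake looked like. As a rough illustration only (the Flight class and FlightRepository interface below are invented for this sketch, not taken from the book), a HashTable-style fake of a database-backed repository can be as simple as a map keyed by ID:

    import java.math.BigDecimal;
    import java.util.HashMap;
    import java.util.Map;

    // Invented domain class and repository interface for this sketch.
    class Flight { /* domain fields omitted */ }

    interface FlightRepository {
        void save(BigDecimal flightId, Flight flight);
        Flight findById(BigDecimal flightId);
    }

    // Fake Object: a functionally equivalent in-memory stand-in for the
    // real database-backed repository. No SQL, no I/O, no reference data
    // to load, which is where the speedup comes from.
    class InMemoryFlightRepository implements FlightRepository {
        private final Map<BigDecimal, Flight> flights =
            new HashMap<BigDecimal, Flight>();

        public void save(BigDecimal flightId, Flight flight) {
            flights.put(flightId, flight);
        }

        public Flight findById(BigDecimal flightId) {
            return flights.get(flightId);
        }
    }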
And that bought us a significant drop in test execution time. Our persistence framework supported an object query interface. We were able to build an interpreter of the object queries that ran against our HashTable database implementation and that allowed the majority of our tests to work entirely in memory. On average, our tests ran about 50 times faster in memory than with the database. For example, a test suite that took 10 minutes to run with the database took 10 seconds to run in memory. This approach was so successful that we have reused the same testing infrastructure on many of our subsequent projects. Using the faked-out persistence framework also means we don't have to bother with building a "real database" until our object models stabilize, which can be several months into the project.

Incremental Speedups

Ted O'Grady and Joseph King are agile team leads on a large (50-plus developers, subject matter experts, and testers) eXtreme Programming project. Like many project teams building database-centric applications, they suffered from Slow Tests. But they found a way around this problem: As of late 2005, their check-in test suite ran in less than 8 minutes, compared to 8 hours for a full test run against the database. That is a pretty impressive speed difference. Here is their story:

Currently we have about 6,700 tests that we run on a regular basis. We've actually tried a few things to speed up the tests and they've evolved over time.

In January 2004, we were running our tests directly against a database via Toplink.

In June 2004, we modified the application so we could run tests against an in-memory, in-process Java database (HSQL). This cut the time to run in half.

In August 2004, we created a test-only framework that allowed Toplink to work without a database at all. That cut the time to run all the tests by a factor of 10.

In July 2005, we built a shared "check-in" test execution server that allowed us to run tests remotely. This didn't save any time at first but it has proven to be quite useful nonetheless.

In July 2005, we also started using a clustering framework that allowed us to run tests distributed across a network. This cut the time to run the tests in half.

In August 2005, we removed the GUI and Master Data (reference data CRUD) tests from the "check-in suite" and ran them only from Cruise Control. This cut the time to run by approximately 15% to 20%.

Since May 2004, we have also had Cruise Control run all the tests against the database at regular intervals. The time it takes Cruise Control to complete [the build and run the tests] has grown with the number of tests, from an hour to nearly 8 hours now.

When a threshold has been met that prevents the developers from (a) running [the tests] frequently when developing and (b) creating long check-in queues as people wait for the token to check in, we have adapted by experimenting with new techniques. As a rule we try to keep the running of the tests under 5 minutes, with anything over 8 minutes being a trigger to try something new.
We have resisted thus far the temptation to run only a subset of the tests and instead focused on ways to speed up running all the tests—although as you can see, we have begun removing the tests developers must run continuously (e.g., Master Data and GUI test suites are not required to check in, as they are run by Cruise Control and are areas that change infrequently).

Two of the most interesting solutions recently (aside from the in-memory framework) are the test server and the clustering framework.

The test server (named the "check-in" box here) is actually quite useful and has proven to be reliable and robust. We bought an Opteron box that is roughly twice as fast as the development boxes (really, the fastest box we could find). The server has an account set up for each development machine in the pit. Using the UNIX tool rsync, the Eclipse workspace is synchronized with the user's corresponding server account file system. A series of shell scripts then recreates the database on the server for the remote account and runs all the development tests. When the tests have completed, a list of times to run each test is dumped to the console, along with a MyTestSuite.java class containing all the test failures, which the developer can use to run locally to fix any tests that have broken. The biggest advantage the remote server has provided is that it makes running a large number of tests feel fast again, because the developer can continue working while he or she waits for the results of the test server to come back.

The clustering framework (based on Condor) was quite fast but had the defect that it had to ship the entire workspace (11MB) to all the nodes on the network (×20), which had a significant cost, especially when a dozen pairs are using it. In comparison, the test server uses rsync, which copies only the files that are new or different in the developer's workspace. The clustering framework also proved to be less reliable than the server solution, frequently not returning any status of the test run. There were also some tests that would not run reliably on the framework. Since it gave us roughly the same performance as the "check-in" test server, we have put this solution on the back burner.

Further Reading

A more detailed description of the first experience can be found at http://FasterTestsPaper.gerardmeszaros.com.

Variation: Incremental Tests

We may also use Shared Fixtures when we have a long, complex sequence of actions, each of which depends on the previous actions. In customer tests, this may show up as a workflow; in unit tests, it may be a sequence of method calls on the same object. This case might be tested using a single Eager Test (see Assertion Roulette on page 224). The alternative is to put each distinct action into a separate Test Method (page 348) that builds upon the actions of a previous test operating on a Shared Fixture. This approach, which is an example of Chained Tests (page 454), is how testers in the "testing" (i.e., QA) community often operate: They set up a fixture and then run a sequence of tests, each of which builds upon the fixture.
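For illustration only, here is a minimal sketch of such a chain in JUnit 3 style; all of the names (Order, OrderStatus, and the test methods) are invented for this sketch rather than taken from the book:

    // Class variable: the Shared Fixture state handed from one test in the
    // chain to the next. Test execution order matters; that is the point
    // (and the risk) of Chained Tests.
    private static Order order;

    // Step 1 of the chain: create the order.
    public void testCreateOrder() throws Exception {
        order = new Order("customer-1");
        assertEquals(OrderStatus.NEW, order.getStatus());
    }

    // Step 2 builds on step 1's outcome instead of rebuilding the fixture.
    public void testSubmitCreatedOrder() throws Exception {
        // Guard Assertion: documents what this test assumes about the
        // fixture. If it fails, look at earlier tests or at the test
        // ordering, not at the SUT.
        assertNotNull("order created by earlier test in chain", order);
        order.submit();
        assertEquals(OrderStatus.SUBMITTED, order.getStatus());
    }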
The testers do have one significant advantage over our Fully Automated Tests (see page 26): When a test partway through the chain fails, they are available to make decisions about how to recover or whether it is worth proceeding at all. In contrast, our automated tests just keep running, and many of them will generate test failures or errors because they did not find the fixture as expected and, therefore, the SUT behaved (probably correctly) differently. The resulting test results can obscure the real cause of the failure in a sea of red. With some experience it is often possible to recognize the failure pattern and deduce the root cause, though it may not be as simple as looking at the first test that failed. This troubleshooting can be made simpler by starting each Test Method with one or more Guard Assertions (page 490) that document the assumptions the Test Method makes about the state of the fixture, as in the sketch above. When these assertions fail, they tell us to look elsewhere—either at tests that failed earlier in the test suite or at the order in which the tests were run.

Implementation Notes

A key implementation question with Shared Fixtures is, How do tests know about the objects in the Shared Fixture so they can (re)use them? Because the point of a Shared Fixture is to save execution time and effort by having multiple tests use the same instance of the test fixture, we'll need to keep a reference to the fixture we create. That way, we can find the fixture if it already exists and we can inform other tests that it now exists once we have constructed it. We have more choices available to us with Per-Run Fixtures because we can "remember" the fixture we set up in code more easily than a Prebuilt Fixture (page 429) set up by a different program.

Although we could just hard-code the identifiers (e.g., database keys) of the fixture objects into all our tests, that technique would result in a Fragile Fixture. To avoid this problem, we need to keep a reference to the fixture when we create it and we need to make it possible for all tests to access that reference.

Variation: Per-Run Fixture

The simplest form of Shared Fixture is the Per-Run Fixture, in which we set up the fixture at the beginning of a test run and allow it to be shared by the tests within the run. Ideally, the fixture won't outlive the test run and we don't have to worry about interactions between test runs such as Unrepeatable Tests (a cause of Erratic Tests). If the fixture is persistent, such as when it is stored in a database, we may need to do explicit fixture teardown.

If a Per-Run Fixture is shared only within a single Testcase Class (page 373), the simplest solution is to use a class variable for each fixture object we need to hold a reference to and then use either Lazy Setup (page 435) or Suite Fixture Setup (page 441) to initialize the objects just before we run the first test in the suite. If we want to share the test fixture between many Testcase Classes, we'll need to use a Setup Decorator (page 447) to hold the setUp and tearDown methods and a Test Fixture Registry (see Test Helper on page 643) (which could just be the test database) to access the fixture.
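The excerpt doesn't show a Setup Decorator in action. As a sketch of the mechanism only, JUnit 3 ships a decorator base class, junit.extensions.TestSetup, that runs its setUp and tearDown once around an entire decorated suite; every name below other than TestSetup is invented for this sketch:

    import junit.extensions.TestSetup;
    import junit.framework.Test;
    import junit.framework.TestSuite;

    // Setup Decorator: setUp runs once before the whole decorated suite,
    // tearDown once after it, instead of around each Test Method.
    public class SharedFixtureSetup extends TestSetup {
        public SharedFixtureSetup(Test decoratedSuite) {
            super(decoratedSuite);
        }

        protected void setUp() throws Exception {
            // build the Shared Fixture and register it where tests can
            // find it (e.g., in a Test Fixture Registry)
        }

        protected void tearDown() throws Exception {
            // tear the Shared Fixture down after the last test has run
        }
    }

    // Usage: wrap the suite(s) that need the fixture.
    //   public static Test suite() {
    //       return new SharedFixtureSetup(
    //           new TestSuite(FlightManagementTest.class));
    //   }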
Variation: Immutable Shared Fixture

The problem with Shared Fixtures is that they lead to Erratic Tests if tests modify the Shared Fixture (page 317). Shared Fixtures violate the Independent Test principle (see page 42). We can avoid this problem by making the Shared Fixture immutable; that is, we partition the fixture needed by tests into two logical parts. The first part is the stuff every test needs to have present but is never modified by any tests—that is, the Immutable Shared Fixture. The second part is the objects that any test needs to modify or delete; these objects should be built by each test as Fresh Fixtures.

The most difficult part of applying an Immutable Shared Fixture is deciding what constitutes a change to an object. The key guideline is this: If any test perceives something done by another test as a change to an object in the Immutable Shared Fixture, then that change shouldn't be allowed in any test with which it shares the fixture. Most commonly, the Immutable Shared Fixture consists of reference data that is needed by the actual per-test fixtures. The per-test fixtures can then be built as Fresh Fixtures on top of the Immutable Shared Fixture.

Motivating Example

The following example shows a Testcase Class setting up the test fixture via Implicit Setup (page 424). Each Test Method uses an instance variable to access the contents of the fixture.

    public void testGetFlightsByFromAirport_OneOutboundFlight()
            throws Exception {
        setupStandardAirportsAndFlights();
        FlightDto outboundFlight = findOneOutboundFlight();
        // Exercise System
        List flightsAtOrigin = facade.getFlightsByOriginAirport(
            outboundFlight.getOriginAirportId());
        // Verify Outcome
        assertOnly1FlightInDtoList(
            "Flights at origin", outboundFlight, flightsAtOrigin);
    }

    public void testGetFlightsByFromAirport_TwoOutboundFlights()
            throws Exception {
        setupStandardAirportsAndFlights();
        FlightDto[] outboundFlights =
            findTwoOutboundFlightsFromOneAirport();
        // Exercise System
        List flightsAtOrigin = facade.getFlightsByOriginAirport(
            outboundFlights[0].getOriginAirportId());
        // Verify Outcome
        assertExactly2FlightsInDtoList(
            "Flights at origin", outboundFlights, flightsAtOrigin);
    }

Note that the setUp method is run once for each Test Method. If the fixture setup is fairly complex and involves accessing a database, this approach could result in Slow Tests.

Refactoring Notes

To convert a Testcase Class from a Standard Fixture to a Shared Fixture, we simply convert the instance variables into class variables to make the fixture outlast the creating Testcase Object. We then need to initialize the class variables just once to avoid recreating them for each Test Method; Lazy Setup is an easy way to accomplish this task. Of course, other ways to set up the Shared Fixture are also possible, such as Setup Decorator or Suite Fixture Setup.

Example: Shared Fixture

This example shows the fixture converted to a Shared Fixture set up using Lazy Setup.

    protected void setUp() throws Exception {
        if (sharedFixtureInitialized) {
            return;
        }
        facade = new FlightMgmtFacadeImpl();
        setupStandardAirportsAndFlights();
        sharedFixtureInitialized = true;
    }

    protected void tearDown() throws Exception {
        // We cannot delete any objects because we don't know
        // whether this is the last test
    }

The Lazy Initialization [SBPP] logic in the setUp method ensures that the Shared Fixture is created whenever the class variable is uninitialized.
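The class-variable declarations these methods rely on are not shown in the excerpt; presumably they look something like the following sketch (the types are assumptions based on the code above):

    // Class variables, so the fixture outlives any single Testcase Object
    // and is visible to all Test Methods.
    private static boolean sharedFixtureInitialized = false;
    private static FlightMgmtFacadeImpl facade;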
The Test Methods have also been modified to use a Finder Method to access the contents of the fixture:

    public void testGetFlightsByFromAirport_OneOutboundFlight()
            throws Exception {
        FlightDto outboundFlight = findOneOutboundFlight();
        // Exercise System
        List flightsAtOrigin = facade.getFlightsByOriginAirport(
            outboundFlight.getOriginAirportId());
        // Verify Outcome
        assertOnly1FlightInDtoList(
            "Flights at origin", outboundFlight, flightsAtOrigin);
    }

    public void testGetFlightsByFromAirport_TwoOutboundFlights()
            throws Exception {
        FlightDto[] outboundFlights =
            findTwoOutboundFlightsFromOneAirport();
        // Exercise System
        List flightsAtOrigin = facade.getFlightsByOriginAirport(
            outboundFlights[0].getOriginAirportId());
        // Verify Outcome
        assertExactly2FlightsInDtoList(
            "Flights at origin", outboundFlights, flightsAtOrigin);
    }

The details of how the Test Utility Methods such as setupStandardAirportsAndFlights are implemented are not shown here, because they are not important for understanding this example. It should be enough to understand that these methods create the airports and flights and store references to them in static variables so that all Test Methods can access the same fixture either directly or via Test Utility Methods.

Example: Immutable Shared Fixture

Here's an example of Shared Fixture "pollution":

    public void testCancel_proposed_p() throws Exception {
        // shared fixture
        BigDecimal proposedFlightId = findProposedFlight();
        // exercise SUT
        facade.cancelFlight(proposedFlightId);
        // verify outcome
        try {
            assertEquals(FlightState.CANCELLED,
                         facade.findFlightById(proposedFlightId));
        } finally {
            // teardown
            // try to undo the damage; hope this works!
            facade.overrideStatus(proposedFlightId,
                                  FlightState.PROPOSED);
        }
    }

We can avoid this problem by making the Shared Fixture immutable; that is, we partition the fixture needed by tests into two logical parts. The first part is the stuff every test needs to have present but is never modified by any tests—that is, the Immutable Shared Fixture. The second part is the objects that any test needs to modify or delete; these objects should be built by each test as Fresh Fixtures.

Here's the same test modified to use an Immutable Shared Fixture. We simply created our own mutable flight within the test.

    public void testCancel_proposed() throws Exception {
        // fixture setup
        BigDecimal mutableFlightId =
            createFlightBetweenInsigificantAirports();
        // exercise SUT
        facade.cancelFlight(mutableFlightId);
        // verify outcome
        assertEquals(FlightState.CANCELLED,
                     facade.findFlightById(mutableFlightId));
        // teardown
        // None required because we let the SUT create
        // new IDs for each flight. We might need to clean out
        // the database eventually.
    }

Note that we don't need any fixture teardown logic in this version of the test because the SUT uses a Distinct Generated Value (see Generated Value on page 723)—that is, we do not supply a flight number. We also use the predefined dummyAirport1 and dummyAirport2 to avoid changing the number of flights for airports used by other tests. Therefore, the mutable flights can accumulate in the database trouble-free.
Back Door Manipulation

Also known as: Layer-Crossing Test

How can we verify logic independently when we cannot use a round-trip test? We set up the test fixture or verify the outcome by going through a back door (such as direct database access).

Every test requires a starting point (the test fixture) and an expected finishing point (the expected results). The "normal" approach is to set up the fixture and verify the outcome by using the API of the SUT itself. In some circumstances this is either not possible or not desirable. In some situations we can use Back Door Manipulation to set up the fixture and/or verify the SUT's state.

How It Works

The state of the SUT comes in many flavors. It can be stored in memory, on disk as files, in a database, or in other applications with which the SUT interacts. Whatever form it takes, the pre-conditions of a test typically require that the state of the SUT is not just known but is a specific state. Likewise, at the end of the test we often want to do State Verification (page 462) of the SUT's state. If we have access to the state of the SUT from outside the SUT, the test can set up the pre-test state of the SUT by bypassing the normal API of the SUT and interacting directly with whatever is holding that state via a "back door." When exercising of the SUT has been completed, the test can similarly access [...]

Layer Test

Also known as: Single Layer Test, Testing by Layers, Layered Test

How can we verify logic independently when it is part of a layered architecture? We write separate tests for each layer of the layered architecture. It is [...]

[...] involves replacing the DOC with a Test Double. One option is to use a Fake Object (page 551) that we have preloaded with some data as though the SUT had already been interacting with it; this strategy allows us to avoid using the SUT to set up the SUT's state. The other option is to use some kind of Configurable Test Double (page 558), such as a Mock Object (page 544) or a Test Stub (page 529). Either way, we can completely avoid Obscure Tests by making the state of the Test Double visible within the Test Method (page 348). When we want to perform Behavior Verification (page 468) of the calls made by the SUT to one or more DOCs, we can use a layer-crossing test that replaces the DOC with a Test Spy (page 538) or a Mock Object. When we want to verify that the SUT behaves a specific way when it receives indirect inputs from a DOC (or [...]
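The excerpt breaks off here. For illustration of the Test Spy idea only, a hand-rolled spy standing in for a DOC might look like the following; the AuditLog interface and the facade wiring are invented for this sketch:

    import java.util.ArrayList;
    import java.util.List;

    // The interface of the DOC that the SUT calls (invented for the sketch).
    interface AuditLog {
        void logMessage(String message);
    }

    // Test Spy: records the calls the SUT makes so the Test Method can
    // verify them afterward (Behavior Verification).
    class AuditLogSpy implements AuditLog {
        final List<String> loggedMessages = new ArrayList<String>();

        public void logMessage(String message) {
            loggedMessages.add(message); // just capture the indirect output
        }
    }

    // Inside a Test Method, the spy is injected in place of the real DOC,
    // the SUT is exercised, and the captured calls are asserted on:
    //   AuditLogSpy spy = new AuditLogSpy();
    //   facade.setAuditLog(spy);            // invented injection point
    //   facade.cancelFlight(flightId);
    //   assertEquals(1, spy.loggedMessages.size());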
Service Layer Test

The Service Layer is where most of our unit tests and component tests are traditionally concentrated. Testing the business logic using customer tests is a bit more challenging because testing the Service Layer via the presentation layer often involves Indirect Testing and Sensitive Equality (see Fragile Test on page 239), either of which can lead to Fragile Tests and High Test Maintenance Cost (page 265). Testing the Service Layer directly helps avoid these problems. To avoid Slow Tests (page 253), we usually replace the persistence layer with a Fake Database (see Fake Object on page 551) and then run the tests. In fact, most of the impetus behind a layered architecture is to isolate this code from the other, harder-to-test layers. Alistair Cockburn puts an interesting [...]

Chapter 19. xUnit Basics Patterns

Patterns in This Chapter

Test Definition

  • Test Method (page 348)
  • Four-Phase Test (page 358)
  • Assertion Method (page 362)
  • Assertion Message (page 370)
  • Testcase Class (page 373)

Test Execution

  • Test Runner (page 377)
  • Testcase Object (page 382)
  • Test Suite Object (page 387)
  • Test Discovery (page 393)
  • Test Enumeration (page 399)
  • Test Selection (page 403)

Test Method

Where do we put our test code? We encode each test as a single Test Method on some class.

Fully Automated Tests (see page 26) consist of test logic. That logic has [...] we have to encode the test logic somewhere. In the procedural world, we would encode each test as a test case procedure located in a file or module. In object-oriented programming languages, the preferred option is to encode them as methods on a suitable Testcase Class (page 373) and then to turn these Test Methods into Testcase Objects (page 382) at runtime using either Test Discovery (page 393) or Test Enumeration [...]

We define each test as a method, procedure, or function that implements the four phases (see Four-Phase Test on page 358) necessary to realize a Fully Automated Test. Most notably, the Test Method must include assertions if it is to be a Self-Checking Test (see page 26). We organize the test logic following one of the standard Test Method templates to make the type of test easily recognizable by test readers [...]
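The excerpt ends mid-sentence. As a final illustration, a Test Method organized along the Four-Phase Test template might look like this minimal sketch; the Account class and test name are invented:

    // Invented example class, just to give the four phases something to do.
    class Account {
        private int balance;
        Account(int openingBalance) { balance = openingBalance; }
        void withdraw(int amount) { balance -= amount; }
        int getBalance() { return balance; }
    }

    // A Test Method following the Four-Phase Test template:
    public void testWithdraw_reducesBalance() throws Exception {
        // Phase 1: fixture setup
        Account account = new Account(100);
        // Phase 2: exercise the SUT
        account.withdraw(20);
        // Phase 3: verify the outcome (assertions make it Self-Checking)
        assertEquals(80, account.getBalance());
        // Phase 4: fixture teardown -- nothing needed for this
        // garbage-collected, in-memory fixture
    }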

Posted: 14/08/2014, 01:20
