CHAPTER 3

Testing Database Routines

What defines a great developer? Is it the ability to code complex routines quickly and accurately? The ability to implement business requirements correctly, within budget, and on schedule? Or perhaps it can be defined by how quickly the developer can track down and fix bugs in the application—or the inverse, the lack of bugs in the developer's code?

All of these are certainly attributes of a great developer, but in most cases they don't manifest themselves merely due to raw skill. The hallmark of a truly great developer, and what allows these qualities to shine through, is a thorough understanding of the importance of testing. By creating unit tests early on in the development process, developers can continuously validate interfaces and test for exceptions and regressions. Carefully designed functional tests ensure compliance with business requirements. And performance testing—the kind of testing that always seems to get the most attention—can be used to find out whether the application can actually handle the anticipated amount of traffic.

Unfortunately, like various other practices that are better established in the application development community, testing hasn't yet caught on much with database professionals. Although some development shops performance test stored procedures and other database code, it is rare to see database developers writing data-specific unit tests. There is no good reason that database developers should not write just as many—or more—tests than their application developer counterparts. It makes little sense to test a data-dependent application without validating the data pieces that drive the application components!

This chapter provides a brief introduction to the world of software testing and how testing techniques can be applied in database development scenarios.
Software testing is a huge field, complete with much of its own lingo, so my intention is to concentrate only on those areas that I believe to be most important for database developers.

Approaches to Testing

There are a number of testing methodologies within the world of quality assurance, but in general, all types of software tests can be split into two groups:

• Black box testing refers to tests that make assumptions only about inputs and outputs of the module being tested, and as such do not validate intermediate conditions. The internal workings of the module are not exposed to (or required by) the tester—hence they are contained within a "black box."

• White box testing, on the other hand, includes any test in which the internal implementation of the routine or function being tested is known and validated by the tester. White box testing is also called "open-box" testing, as the tester is allowed to look inside the module to see how it operates, rather than just examining its inputs and outputs.

Within each of these broad divisions are a number of specific tests designed to target particular areas of the application in question. Examples of black box tests include unit tests, security tests, and basic performance tests such as stress tests and endurance tests. As the testing phase progresses, target areas are identified that require further testing, and the types of tests performed tend to shift from black box to white box in order to focus on specific internal elements.

From a database development perspective, examples of white box tests include functional tests that validate the internal workings of a module, tests that perform data validation, and cases in which performance tuning requires thorough knowledge of data access methods. For instance, retrieving and analyzing query plans during a performance test is an example of white box testing against a stored procedure.
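To make that last point concrete, a white box performance check might capture the actual execution plan while exercising a routine under test. The following is only a sketch; SomeProcedure and its parameter are stand-in names, not routines from this chapter:

```sql
-- Hedged sketch: capture the actual execution plan while exercising a
-- routine under test. SomeProcedure and @SomeParameter are hypothetical.
SET STATISTICS XML ON;

EXEC SomeProcedure @SomeParameter = 123;

-- The plan is returned as an additional XML result set, which a test
-- harness can inspect for table scans, implicit conversions, and other
-- internal details that a black box test would never see.
SET STATISTICS XML OFF;
```

Because the tester is reasoning about how the procedure does its work rather than what it returns, a check like this is firmly in white box territory.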
Unit and Functional Testing

Developing software with a specific concentration on the data tier can have a benefit when it comes to testing: there aren't too many types of tests that you need to be familiar with. Arguably, the two most important types of test are those that verify that the application behaves as it is meant to and returns the correct results. This is the purpose of unit tests and functional tests.

Unit tests are black box tests that verify the contracts exposed by interfaces. For instance, a unit test of a stored procedure should validate that, given a certain set of inputs, the stored procedure returns the correct set of output results, as defined by the interface of the stored procedure being tested. The term correct as used here is important to define carefully. It means "correct" only insofar as what is defined as the contract for the stored procedure; the actual data returned is not important. So, as long as the results represent valid values in the correct format and of the correct datatypes given the interface's contract, a unit test should pass. Phrased another way, unit tests test the ability of interfaces to communicate with the outside world exactly as their contracts say they will.

Functional tests, as their name implies, verify the functionality of whatever is being tested. In testing nomenclature, the term functional test has a much vaguer meaning than unit test. It can mean any kind of test, at any level of an application, that tests whether that piece of the application works properly—in other words, that it performs the appropriate sequence of operations to deliver the correct final result as expected. For a simple stored procedure that selects data from the database, this asks the question of whether the stored procedure is returning the correct data. Again, I will carefully define the term correct.
This time, correct means both the kind of validation done for a unit test (data must be in the correct format), as well as a deeper validation of the accuracy of the actual values returned. The logic required for this kind of validation means that a functional test is a white box test in the database world, compared to the black box of unit testing.

Let's take a look at an example to make these ideas a bit clearer. Consider the following stored procedure, which might be used for a banking application:

CREATE PROCEDURE GetAggregateTransactionHistory
    @CustomerId int
AS
BEGIN
    SET NOCOUNT ON;

    SELECT
        SUM
        (
            CASE TransactionType
                WHEN 'Deposit' THEN Amount
                ELSE 0
            END
        ) AS TotalDeposits,
        SUM
        (
            CASE TransactionType
                WHEN 'Withdrawal' THEN Amount
                ELSE 0
            END
        ) AS TotalWithdrawals
    FROM TransactionHistory
    WHERE CustomerId = @CustomerId;
END;

This stored procedure's implied contract states that, given the input of a customer ID into the @CustomerId parameter, a result set of two columns and zero or one rows will be output (the contract does not imply anything about invalid customer IDs or customers who have not made any transactions). The column names in the output result set will be TotalDeposits and TotalWithdrawals, and the datatypes of the columns will be the same as the datatype of the Amount column in the TransactionHistory table (we'll assume it's decimal).

What if the Customer Doesn't Exist?

The output of the GetAggregateTransactionHistory stored procedure will be the same whether you pass in a valid customer ID for a customer that happens to have had no transactions, or an invalid customer ID. Either way, the procedure will return no rows. Depending on the requirements of a particular situation, it might make sense to make the interface richer by changing the rules a bit, returning no rows only if an invalid customer ID is passed in.
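A minimal sketch of one such richer version follows. It is not part of the original design; in particular, the Customers table is an assumed part of the schema:

```sql
-- Hedged sketch of the richer contract: no rows means "invalid customer
-- ID," while a valid customer with no transactions gets a row of zeros.
-- The Customers table is a hypothetical part of the schema.
CREATE PROCEDURE GetAggregateTransactionHistoryChecked
    @CustomerId int
AS
BEGIN
    SET NOCOUNT ON;

    -- Return no rows only when the customer ID itself is unknown
    IF NOT EXISTS (SELECT 1 FROM Customers WHERE CustomerId = @CustomerId)
        RETURN;

    SELECT
        COALESCE(SUM(CASE TransactionType
                         WHEN 'Deposit' THEN Amount ELSE 0 END), 0)
            AS TotalDeposits,
        COALESCE(SUM(CASE TransactionType
                         WHEN 'Withdrawal' THEN Amount ELSE 0 END), 0)
            AS TotalWithdrawals
    FROM TransactionHistory
    WHERE CustomerId = @CustomerId;
END;
```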
That way, the caller will be able to identify invalid data and give the user an appropriate error message rather than implying that the nonexistent customer made no transactions.

A unit test against this stored procedure should do nothing more than validate the interface. A customer ID should be passed in, and the unit test should interrogate the output result set (or lack thereof) to ensure that there are two columns of the correct name and datatype and zero or one rows. No verification of data is necessary; it would be out of scope, for instance, to find out whether the aggregate information was accurate or not—that would be the job of a functional test.

The reason that we draw such a distinction between unit tests and functional tests is that when testing pure interface compliance, we want to put ourselves in the position of someone programming against the interface from a higher layer. Is the interface working as documented, providing the appropriate level of encapsulation and returning data in the correct format?

Each interface in the system will need one or more of these tests (see the "How Many Tests Are Needed?" section later in the chapter), so they need to be kept focused and lightweight. Programming full white box tests against every interface may not be feasible, and it might be simpler to test the validity of data at a higher layer, such as via the user interface itself. In the case of the GetAggregateTransactionHistory stored procedure, writing a functional test would essentially entail rewriting the entire stored procedure again—hardly a good use of developer time.

Unit Testing Frameworks

Unit testing is made easier through the use of unit testing frameworks, which provide structured programming interfaces designed to assist with quickly testing software. These frameworks generally make use of debug assertions, which allow the developer to specify those conditions that make a test true or false.
A debug assertion is a special kind of macro that is turned on only when a piece of software is compiled in debug mode. It accepts an expression as input and throws an exception if the expression is false; otherwise, it returns true (or void, in some languages). For instance, the following assertion would always throw an exception:

Assert(1 == 0);

Assertions allow a developer to self-document assumptions made by the code of a routine. If a routine expects that a variable is in a certain state at a certain time, an assertion can be used in order to help make sure that assumption is enforced as the code matures. If, at any time in the future, a change in the code invalidates that assumption, an exception will be thrown should the developer making the change hit the assertion during testing or debugging.

In unit testing, assertions serve much the same purpose. They allow the tester to control what conditions make the unit test return true or false. If any assertion throws an exception in a unit test, the entire test is considered to have failed.

Unit testing frameworks exist for virtually every language and platform, including T-SQL (for example, the TSQLUnit project available from http://sourceforge.net/projects/tsqlunit). Personally, I find unit testing in T-SQL to be cumbersome compared to other languages, and prefer to write my tests in a .NET language using the .NET unit testing framework NUnit (http://www.nunit.org).

Providing an in-depth guide to coding against unit testing frameworks is outside the scope of this book, but given that unit testing stored procedures is still somewhat of a mystery to many developers, I will provide a basic set of rules to follow. When writing stored procedure unit tests, the following basic steps can be followed:

1. First, determine what assumptions should be made about the stored procedure's interface. What are the result sets that will be returned? What are the datatypes of the columns, and how many columns will there be?
Does the contract make any guarantees about a certain number of rows?

2. Next, write the code necessary to execute the stored procedure to be tested. If you're using NUnit, I find that the easiest way of exposing the relevant output is to use ADO.NET to fill a DataSet with the result of the stored procedure, where it can subsequently be interrogated. Be careful at this stage; you want to test the stored procedure, not your data access framework. You might be tempted to call the stored procedure using the same method as in the application itself. However, this would be a mistake, as you would end up testing both the stored procedure and that method. Given that you only need to fill a DataSet, recoding the data access in the unit test should not be a major burden, and will keep you from testing parts of the code that you don't intend to.

3. Finally, use one assertion for each assumption you're making about the stored procedure; that means one assertion per column name, one per column datatype, one for the row count if necessary, and so on. Err on the side of using too many assertions—it's better to have to remove an assumption later because it turns out to be incorrect than to not have had an assumption there to begin with and have your unit test pass when the interface is actually not working correctly.
The following code listing gives an example of what an NUnit test of the GetAggregateTransactionHistory stored procedure might look like:

[Test]
public void TestAggregateTransactionHistory()
{
    // Set up a command object
    SqlCommand comm = new SqlCommand();

    // Set up the connection
    comm.Connection = new SqlConnection(
        @"server=serverName; trusted_connection=true;");

    // Define the procedure call
    comm.CommandText = "GetAggregateTransactionHistory";
    comm.CommandType = CommandType.StoredProcedure;
    comm.Parameters.AddWithValue("@CustomerId", 123);

    // Create a DataSet for the results
    DataSet ds = new DataSet();

    // Define a DataAdapter to fill the DataSet
    SqlDataAdapter adapter = new SqlDataAdapter();
    adapter.SelectCommand = comm;

    try
    {
        // Fill the dataset
        adapter.Fill(ds);
    }
    catch
    {
        Assert.Fail("Exception occurred!");
    }

    // Now we have the results -- validate them...

    // There must be exactly one returned result set
    Assert.IsTrue(
        ds.Tables.Count == 1,
        "Result set count != 1");

    DataTable dt = ds.Tables[0];

    // There must be exactly two columns returned
    Assert.IsTrue(
        dt.Columns.Count == 2,
        "Column count != 2");

    // There must be columns called TotalDeposits and TotalWithdrawals
    Assert.IsTrue(
        dt.Columns.IndexOf("TotalDeposits") > -1,
        "Column TotalDeposits does not exist");

    Assert.IsTrue(
        dt.Columns.IndexOf("TotalWithdrawals") > -1,
        "Column TotalWithdrawals does not exist");

    // Both columns must be decimal
    Assert.IsTrue(
        dt.Columns["TotalDeposits"].DataType == typeof(decimal),
        "TotalDeposits data type is incorrect");

    Assert.IsTrue(
        dt.Columns["TotalWithdrawals"].DataType == typeof(decimal),
        "TotalWithdrawals data type is incorrect");

    // There must be zero or one rows returned
    Assert.IsTrue(
        dt.Rows.Count <= 1,
        "Too many rows returned");
}

Although it might be disturbing to note that the unit test is over twice as long as the stored procedure it is testing, keep in mind that most of this code can be easily turned into a template for quick reuse. As noted before, you might be tempted to refactor common unit test code into a data access library, but be careful lest you end up testing your test framework instead of the actual routine you're attempting to test. Many hours can be wasted debugging working code trying to figure out why the unit test is failing, when it's actually the fault of some code the unit test is relying on to do its job.

Unit tests allow for quick, automated verification of interfaces. In essence, they help you as a developer to guarantee that in making changes to a system you didn't break anything obvious. In that way, they are invaluable. Developing against a system with a well-established set of unit tests is a joy, as each developer no longer needs to worry about breaking some other component due to an interface change. The unit tests will complain if anything needs to be fixed.

Regression Testing

As you build up a set of unit tests for a particular application, the tests will eventually come to serve as a regression suite, which will help to guard against regression bugs—bugs that occur when a developer breaks functionality that used to work. Any change to an interface—intentional or not—will cause unit tests to fail (assuming that the tests have been written correctly). For the intentional changes, the solution is to rewrite the unit test accordingly. But it is the unintentional changes for which we create unit tests, and which regression testing targets.

Experience has shown that fixing bugs in an application often introduces other bugs. It can be difficult to substantiate how often this happens in real development scenarios, but it has been suggested that figures as high as 50 percent can occur in some cases. By building a regression suite, the cost of fixing these "side effect" bugs is greatly reduced.
They can be discovered and mended during the development phase, instead of being reported by end users once the application has already been deployed.

Regression testing is also the key to some newer software development methodologies, such as agile development and extreme programming (XP). As these methodologies increase in popularity, and their adoption filters through to the database world, it can be expected that database developers will begin to adopt some of these techniques more readily.

Guidelines for Implementing Database Testing Processes and Procedures

Of all the possible elements that make up a testing strategy, there is really only one key to success: consistency. Tests must be repeatable, and must be run the same way every time, with only well-known (i.e., understood and documented) variables changed. Inconsistency, or a lack of knowledge concerning those variables that might have changed between tests, can mean that any problems identified during testing will be difficult to trace.

Development teams should strive to build a suite of tests that are run at least once for every release of the application, if not more often. These tests should be automated and easy to run. Preferably, the suite of tests should be modular, so that if a developer is working on one part of the application, the subset of tests that apply to only that section can be easily exercised in order to validate any changes.

Continuous Testing

Once you've built a set of automated tests, you're one step away from a fully automatic testing environment. Such an environment should retrieve the latest code from the source control repository, run appropriate build scripts to compile a working version of the application, and run through the entire test suite. Many software development shops use this technique to run their tests several times a day, throwing alerts almost instantly if problem code is checked in.
This kind of rigorous automated testing is called continuous integration, and it's a great way to take some of the testing burden out of the hands of developers while still making sure that all of the tests get run as often as (or even more often than) necessary. A great free tool to help set up continuous integration in .NET environments is CruiseControl.NET, available at http://sourceforge.net/projects/ccnet.

Testers must also pay particular attention to any data used to conduct the tests. It can often be beneficial to generate test data sets that include every possible case the application is likely to see. Such a set of data can guarantee consistency between test runs, as it can be restored to its original state. It can also guarantee that rare edge cases are tested that might otherwise not be seen.

It's also recommended that a copy of actual production data (if available) be used for testing near the end of any given test period, rather than relying on artificially generated test data. Oftentimes, generated sets can lack the realism needed to bring to light obscure issues that only real users can manage to bring out of an application.

Why Is Testing Important?

It can be argued that the only purpose of software is to be used by end users, and therefore the only purpose of testing is to make sure that those end users don't encounter issues. Thus, there are two important goals that testing hopes to achieve:

• Testing finds problems that need to be fixed.
• Testing ensures that no problems need to be fixed.

Eventually, all software must be tested. If not fully tested by developers or a quality assurance team, an application will be tested by the end users trying to use the software. Unfortunately, this is a great way to lose credibility; users are generally not pleased with buggy software. Testing by development and quality assurance teams validates the software.
Each kind of testing that is performed validates a specific piece of the puzzle, and if a complete test suite is used (and the tests are passed), the team can be fairly certain that the software has a minimal number of bugs, performance defects, and other issues. Since the database is an increasingly important component in most applications, testing the database makes sense; if the database has problems, they will propagate to the rest of the application.

What Kind of Testing Is Important?

From the perspective of a database developer, only a few types of tests are really necessary in the majority of cases. Databases should be tested for the following issues:

• Interface consistency should be validated in order to guarantee that applications have a stable structure for data access.

• Data availability and authorization tests are similar to interface consistency tests, but more focused on who can get data from the database than how the data should be retrieved.

• Authentication tests verify whether valid users can log in, and whether invalid users are refused access. These kinds of tests are only important if the database is being used for authenticating users.

• Performance tests are important for verifying that the user experience will be positive, and that users will not have to wait longer than necessary for data. Performance testing may involve load tests, which monitor the performance of the database under a given load; saturation tests, which attempt to overwhelm the system by constantly adding load and/or removing resources from it until it breaks; and endurance tests, which place a continuous demand on the database over a sustained period of time.

• Regression testing covers every other type of test, but generally focuses on uncovering issues that were previously fixed. A regression test is a test that validates that a fix still works.

How Many Tests Are Needed?
Although most development teams lack a sufficient number of tests to test the application thoroughly, in some cases the opposite is true. Too many tests can be just as much of a problem as not enough tests; writing tests can be time-consuming, and tests must be maintained along with the rest of the software whenever functionality changes. It's important to balance the need for thorough testing with the realities of time and monetary constraints.

A good starting point for database testing is to create one unit test per interface parameter "class," or group of inputs. For example, consider the following stored procedure interface:

CREATE PROCEDURE SearchProducts
    @SearchText varchar(100) = NULL,
    @PriceLessThan decimal = NULL,
    @ProductCategory int = NULL

This stored procedure returns data about products based on three parameters, each of which is optional, based on the following (documented) rules:

• A user can search for text in the product's description.
• A user can search for products where the price is less than a given input price.
• A user can combine a text search or price search with an additional filter on a certain product category, so that only results from that category are returned.
• A user cannot search on both text and price simultaneously. This condition should return an error.
• Any other combination of inputs should result in an error.

In order to validate the stored procedure's interface, one unit test is necessary for each of these conditions. The unit tests that pass in valid input arguments should verify that the stored procedure returns a valid output result set per its implied contract. The unit tests for the invalid combinations of arguments should verify that an error occurs when these combinations are used. Known errors are part of an interface's implied contract (see Chapter 4 for more information on this topic).
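As a rough sketch of what one of these tests might look like in plain T-SQL (rather than in any particular framework), the invalid text-plus-price combination could be exercised as follows; the test procedure name and its pass/fail conventions are hypothetical:

```sql
-- Hedged sketch of a single unit test: per the documented rules,
-- combining @SearchText and @PriceLessThan must raise an error.
-- The test name and harness conventions are hypothetical.
CREATE PROCEDURE TestSearchProductsTextAndPriceFails
AS
BEGIN
    DECLARE @ErrorRaised bit = 0;

    BEGIN TRY
        EXEC SearchProducts
            @SearchText = 'widget',
            @PriceLessThan = 10.0;
    END TRY
    BEGIN CATCH
        -- An error from SearchProducts is the expected outcome
        SET @ErrorRaised = 1;
    END CATCH;

    -- The test itself fails only if no error was raised
    IF @ErrorRaised = 0
        RAISERROR('Expected an error for a combined text and price search',
                  16, 1);
END;
```

Each of the other four conditions would get an analogous test, differing only in the parameters passed and in whether an error or a valid result set is expected.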
In addition to these unit tests, an additional regression test should be produced for each known issue that has been fixed within the stored procedure, in order to ensure that the procedure's functionality does not degenerate over time. Although this seems like a massive number of tests, keep in mind that these tests can—and should—share the same base code. The individual tests will have to do nothing more than pass the correct parameters to a parameterized base test.

Will Management Buy In?

It's an unfortunate fact that many management teams believe that testing is either an unnecessary waste of time or not something that should be a well-integrated part of the software development process at all. Many software shops, especially smaller ones, have no dedicated quality assurance staff, and such compressed development schedules that little testing gets done, making full functionality testing nearly impossible. Several companies I've done work for have been in this situation, and it never results in the time or money savings that management thinks it will. On the contrary, time and money are actually wasted by a lack of testing.

A test process that is well integrated into development finds most bugs up front, when they are created, rather than later on. A developer who is currently working on enhancing a given module has an in-depth understanding of the code at that moment. As soon as he or she moves on to another module, that knowledge will start to wane as focus moves on to other parts of the application. If defects are discovered and reported while the developer is still in the trenches, the developer will not need to relearn the code in order to fix the problem, thereby saving a lot of time. These time savings translate directly into increased productivity, as developers end up spending more time working on new features, and less on fixing defects.
If management teams refuse to listen to reason and allocate additional development time for proper testing, try doing it anyway. Methodologies such as test-driven development (TDD), in which you write the tests first and then create routines that pass those tests, can greatly enhance overall developer productivity. Adopting a testing strategy—with or without management approval—can mean better, faster output, which in the end will help to ensure success.

Performance Monitoring Tools

Verification using unit, functional, and regression tests is extremely important for thoroughly testing that an application behaves correctly, but it is performance testing that really gets the attention of most developers. Performance testing is imperative for ensuring a positive user experience. Users don't want to wait any longer than absolutely necessary for data.

Performance testing relies on collecting, reviewing, and analyzing performance data for different aspects of the system. Before going into details about how to analyze the performance of a system, it is therefore necessary to look at some of the tools that can be used to capture such data. SQL Server 2008 provides a number of built-in tools that allow DBAs and developers to store or view real-time information about activity taking place on the server, including the following:

• SQL Server Profiler
• Server-side traces
• System Monitor console
• Dynamic Management Views (DMVs)
• Extended Events
• Data Collector

There are also a number of third-party monitoring tools available that can measure, aggregate, and present performance data in different ways. In this section, I'll discuss some of the different methods of monitoring performance, and the types of situations in which they can be used.
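To give a flavor of the DMV approach before looking at the tools in detail, the following query (standard DMV usage, not a listing from this chapter) returns the ten statements in the plan cache that have consumed the most cumulative CPU time:

```sql
-- Example DMV query: top 10 cached statements by cumulative CPU time.
-- Requires VIEW SERVER STATE permission.
SELECT TOP (10)
    qs.total_worker_time,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)  -- -1 means "to end of batch"
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Queries like this are a cheap, always-available starting point for identifying which routines deserve more focused performance testing.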
[...]

Figure 3-1. The server activity collection report

Analyzing Performance Data

In the previous section, I discussed how to capture SQL Server performance data using a number of different tools and techniques. In this section, let's now consider what data you should monitor, and how to analyze it in order to build up a profile of how a database [...]

[...] Software testing is a complex field, but it is necessary that developers understand enough of it to make the development process more effective. By implementing testing processes during development, more robust software can be delivered with less expense. Database developers, like application developers, must learn to exploit unit tests in order to increase software project success rates. Database routines [...]

[...] performance issues, the baseline data can be analyzed against other test data in order to establish performance trends.

Big-Picture Analysis

Once you have set up performance counters and traces, you are ready to begin actual performance testing. But this raises the question, "Where to begin?" Especially in a large legacy application, running an end-to-end performance [...]

[...] components.

Granular Analysis

If the results of a big-picture test show that certain areas need work, a more granular investigation into specific routines will generally be necessary. Using aggregated trace data collected from a full system test, it's important to evaluate both individual queries and groups of queries that are long-running or resource intensive. While it [...]

Note: Access to performance monitoring tools in many organizations is restricted to database or system administrators. However, most of the tools described in this section allow for performance logs to be saved, so if you have insufficient [...]
[...] fires can then be delivered to a variety of synchronous and asynchronous targets. This flexible framework for handling extended events allows you to build a customized performance monitoring system, which collects very specific measurements and delivers them in a variety of formats to meet your monitoring requirements. One of the shortcomings of the monitoring tools [...]

[...] payload captured by the extended event session is saved to the file target in XML format. To analyze the data contained within this file, it can be loaded back into SQL Server using the sys.fn_xe_file_target_read_file method, and then queried using XQuery syntax, as shown in the following query:

SELECT xe_data.value('(/event/action[@name=''sql_text'']/value)[1]', 'varchar(max)') [...]

[...] identify possible queries that are making inefficient use of indexes and therefore may be causing I/O problems. The SP:Recompile event, on the other hand, indicates queries that are getting recompiled by the query optimizer, and may therefore be consuming larger-than-necessary amounts of CPU time.

Server-Side Traces

While SQL Server Profiler is a convenient and useful [...]

[...] imperative. I recommend using a test database that can be restored to its original state each time, as well as rebooting all servers involved in the test just before beginning a run, in order to make sure that the test starts with the same initial conditions each time. Another option that might be easier than backing up and restoring a test database is using SQL Server 2008's database snapshot feature, or [...]
[...] now is the time to execute your test suite. When you are done tracing, you must stop and close the trace by using the sp_trace_setstatus stored procedure, supplying the TraceID trace identifier returned when the trace was started. This is demonstrated in the following code listing (in this case the trace identifier is listed as 99):

EXEC sp_trace_setstatus @traceid = 99, @status = 0;  -- stop the trace
EXEC sp_trace_setstatus @traceid = 99, @status = 2;  -- close the trace and delete its definition

[...]