A functional system test checks that the application functions properly from end-to-end. Figure 1–7 shows the components
found in a production Web-enabled application data center.
Functional system tests check the entire application, from the client, which
is depicted in Figure 1–7 as a Web browser but could be any application that
speaks an open protocol over a network connection, to the database and every-
thing in between. Web-enabled application frameworks deploy Web browser
software, TCP/IP networking routers, bridges and switches, load-balancing
routers, Web servers, Web-enabled application software modules, and a data-
base. Additional systems may be deployed to provide directory service, media
servers to stream audio and video, and messaging services for email.
A common mistake of test professionals is to believe that they are conduct-
ing system tests while they are actually testing a single component of the sys-
tem. For example, checking that the Web server returns a page is not a
system test if the page is only static HTML. Instead, such a test
checks the Web server only—not all the components of the system.
Figure 1–7 Components of a Web-enabled application (browser, Internet, load balancer, Web server, application software modules, and database).
Scalability and Performance Testing
Scalability and performance testing is the way to understand how the system
will handle the load caused by many concurrent users. In a Web environment
concurrent use is measured as simply the number of users making requests at
the same time. One of the central points of this book is that the work to per-
form a functional system test can and should be leveraged to conduct a scal-
ability and performance test. The test tool you choose should be able to take
the functional system test and run it multiple times and concurrently to put
load on the server. This approach means the server will see load from the
tests that is closer to the real production environment than ever before.
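To make this concrete, here is a rough sketch in Python of what a test tool does when it reuses one functional test to generate load. The run_functional_test function is a placeholder for your real end-to-end test, and the user counts are arbitrary; a real load-testing tool handles this scheduling for you.

import time
from concurrent.futures import ThreadPoolExecutor

def run_functional_test():
    # Placeholder for the real functional system test steps.
    start = time.time()
    time.sleep(0.1)              # stands in for one user's request/response cycle
    return time.time() - start   # elapsed seconds for this simulated user

def run_load(concurrent_users):
    # Run the same functional test once per simulated user, concurrently.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(run_functional_test) for _ in range(concurrent_users)]
        return [f.result() for f in futures]

for users in (1, 50, 500):
    times = run_load(users)
    print(f"{users} concurrent users, slowest response: {max(times):.2f}s")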
Quality of Service Testing
Understanding the system’s ability to handle load from users is key to provisioning
a data center correctly; however, scalability and performance testing
does not show how the actual data center performs while in production. The
same functional system test from earlier in this chapter can, and should, be
reused to monitor a Web-enabled application. By running the functional sys-
tem test over long periods of time, the resulting logs are your proof of the
quality of service (QoS) delivered to your users. (They also make a good basis
for a recommendation of a raise when service levels stay high.)
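As a hedged illustration (not any particular product's approach; the interval and log format are arbitrary choices of mine), such a QoS monitor can be as simple as a loop that reruns the functional system test on a schedule and appends a timestamped pass/fail record to the log:

import time
from datetime import datetime

def run_functional_test():
    # Placeholder: return True when the full end-to-end test passes.
    return True

def monitor(interval_seconds=300, log_path="qos_log.txt"):
    # Rerun the functional system test indefinitely, logging every result.
    while True:
        started = datetime.now().isoformat(timespec="seconds")
        t0 = time.time()
        passed = run_functional_test()
        elapsed = time.time() - t0
        with open(log_path, "a") as log:
            log.write(f"{started}\t{'PASS' if passed else 'FAIL'}\t{elapsed:.2f}s\n")
        time.sleep(interval_seconds)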
This section showed definitions for the major types of testing and
started making the case for developers, QA technicians, and IT manag-
ers to leverage each other’s work when testing a system for functionality, scal-
ability, and performance.
Next we will see how the typical behavior of a user may be modeled into
an intelligent test agent. Test agents are key to automating unit tests, func-
tional system tests, scalability and performance tests, and quality-of-service
tests. The following sections delve into definitions for these testing methods.
Defining Test Agents
In my experience, functional system tests and quality of service tests are the
most difficult of all tests, because they require that the test know something of
the user’s goals. Translating user goals into test agent code can be challenging.
A Web-enabled application increases in value as the software enables a
user to achieve important goals, which will be different for each user. While
it may be possible to identify groups of users by their goals, understanding a
single user’s goals and determining how well the Web-enabled application
helped the user achieve those goals is the best way to build a test. Such a test
will better determine a Web-enabled application’s ability to perform and to
scale than a general test. This technique also helps the test professional trans-
late user goals into test agent code.
The formal way to perform system tests is to define a test agent that mod-
els an individual user’s operation of the Web-enabled application to achieve
particular goals. A test agent is composed of a checklist, a test process, and a
reporting method, as described in Table 1–1.
Suppose a Web-enabled application provides travel agents with an online
order-entry service to order travel brochures from a tour operator. The order-
entry service adds new brochures every spring and removes the prior season’s
brochures. A test agent for the order-entry service simulates a travel agent
ordering a current brochure and an outdated brochure. The test agent’s job is
to verify that each order succeeds or fails as expected.
The example travel agent test is implemented by identifying the checklist,
the test process, and the reporting method. The
checklist defines the conditions and states the Web-enabled application will
achieve. For example, the checklist for a shopping basket application to order
travel brochures might look like this:
1. View list of current brochures. How many brochures appear?
2. Order a current brochure. Does the service provide a confirma-
tion number?
3. Order an out-of-date brochure. Does the service indicate an
error?
Table 1–1 Components of an Intelligent Test Agent

Component           Description
Checklist           Defines conditions and states
Test process        Defines transactions needed to perform the checklist
Reporting method    Records results after the process and checklist are completed
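One way to picture Table 1–1 in code is the sketch below (Python; the class and method names are my own invention, not part of any particular tool), which keeps the checklist, test process, and reporting method as distinct parts of the brochure order-entry agent:

class BrochureOrderAgent:
    # Checklist: the conditions and states to verify.
    checklist = [
        "current brochures are listed",
        "current brochure order returns a confirmation number",
        "out-of-date brochure order returns an error",
    ]

    def run_process(self):
        # Test process: the transactions needed to work through the checklist.
        return {
            "current brochures are listed": self.list_brochures(),
            "current brochure order returns a confirmation number": self.order_current(),
            "out-of-date brochure order returns an error": self.order_outdated(),
        }

    def report(self, results, path="agent_results.txt"):
        # Reporting method: where and in what format the results are saved.
        with open(path, "a") as out:
            for item in self.checklist:
                out.write(f"{item}: {'PASS' if results.get(item) else 'FAIL'}\n")

    # The three methods below stand in for real transactions against the service.
    def list_brochures(self): return True
    def order_current(self):  return True
    def order_outdated(self): return True

agent = BrochureOrderAgent()
agent.report(agent.run_process())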
Checklists determine the desired Web-enabled application state. For
example, when the test orders a current brochure, the desired state is for the
application to hold that order; when an order for an out-of-date brochure
appears, the desired state is an error.
A test agent process defines the steps needed to initialize the Web-enabled
application and then to run the Web-enabled application through its paces,
including going through the checklist. The test agent process deals with
transactions. In the travel agent brochure order-entry system, the test agent
needs these transactions:
1. Initialize the order entry service.
2. Look up a brochure number.
3. Order a brochure.
The transactions require a number of individual steps. For example, trans-
action 2 requires that the test agent sign in to the order-entry service, post a
request to show the desired brochure number, confirm that the brochure
exists, post a request to order the brochure, and then sign out.
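Sketched in Python with the requests library, those steps for transaction 2 might look roughly like the following. The base URL, endpoints, form fields, and JSON shape are invented for illustration; a real order-entry service will differ.

import requests

BASE = "https://orders.example.com"   # hypothetical order-entry service

def look_up_and_order(brochure_number, username, password):
    # Sign in, show the desired brochure, confirm it exists, order it, sign out.
    session = requests.Session()
    session.post(f"{BASE}/login", data={"user": username, "password": password})

    found = session.get(f"{BASE}/brochures/{brochure_number}")
    if found.status_code != 200:
        session.post(f"{BASE}/logout")
        return None                    # brochure is out of date or unknown

    order = session.post(f"{BASE}/orders", data={"brochure": brochure_number})
    session.post(f"{BASE}/logout")
    return order.json().get("confirmation_number")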
Finally, a test agent must include a reporting method that defines where
and in what format the results of the process will be saved. The brochure
order-entry system test agent reports the number of brochures successfully
ordered and the outdated brochures ordered.
Test agents can be represented in a number of forms. A test agent may be
defined on paper and run by a person. Or a test agent may be a program that
drives a Web-enabled application. The test agent must define a repeatable
means to have a Web-enabled application produce a result. The more auto-
mated a test agent becomes, the better position a development manager will
be in to certify that a Web-enabled application is ready for users.
The test agent definition delivers these benefits:
• Regression tests become easier. For each new update or
maintenance change to the Web-enabled application software,
the test agent shows which functions still work and which
functions fail.
• Regression tests also indicate how close a Web-enabled
application is to being ready for users. The less regression,
the faster the development pace.
• Developing test agents also provides a faster path to scalability
and performance testing. Since a test agent models an
individual user’s use of a Web-enabled application, running
multiple copies of the same test agent concurrently makes
scalability and performance testing much simpler.
Scalability and Performance Testing with Test Agents
Testing Web-enabled applications is different from testing desktop software.
At any time, a medium-scale Web-enabled application handles 1 to 5,000
concurrent users. Learning the scalability and performance characteristics of
a Web-enabled application under the load of hundreds of users is important
to manage software development projects, to build sufficient data centers,
and to guarantee a good user experience. The interoperating modules of a
Web-enabled application often do not show their true nature until they’re
loaded with user activity.
You can analyze a Web-enabled application in two ways: by scalability and
performance. I have found that analyzing one without the other will often
result in meaningless answers. What good is it to learn of a Web-enabled
application’s ability to serve 5,000 users quickly if 500 of those users receive
error pages?
Scalability describes a Web-enabled application’s ability to serve users
under varying levels of load. To measure scalability, run a test agent and
measure its time. Then run the same test agent with 1, 50, 500, and 5,000
concurrent users. Scalability is derived from comparing these measurements:
it reflects the Web-enabled application’s ability to complete the test agent
under increasing conditions of load. Experience shows that a test agent should test 10 data
points to deliver meaningful results, but the number of tests ultimately
depends on the cost of running the test agent. Summarizing the measure-
ments enables a development manager to predict the Web-enabled applica-
tion’s ability to serve users under load conditions.
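A sketch of that summarizing step follows. It assumes a run_agent_under_load helper that returns one elapsed time per simulated user; the load levels and time buckets mirror Table 1–2.

def summarize(timings):
    # Bucket response times the way Table 1–2 does and return percentages.
    total = len(timings)
    under_1 = sum(1 for t in timings if t < 1)
    over_5 = sum(1 for t in timings if t > 5)
    middle = total - under_1 - over_5
    return (100 * under_1 // total, 100 * middle // total, 100 * over_5 // total)

def scalability_report(run_agent_under_load, levels=(1, 50, 500, 5000)):
    # Run the same test agent at each load level and print a Table 1–2 style row.
    print("Users   <1s   2-5s   >5s")
    for users in levels:
        fast, mid, slow = summarize(run_agent_under_load(users))
        print(f"{users:>5}  {fast:>3}%  {mid:>4}%  {slow:>3}%")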
Table 1–2 shows example scalability results from a test Web-enabled appli-
cation. The top line shows the results of running a test agent by one user. In
the table, 85% of the time the Web-enabled application completed the test
agent in less than 1 second; 10% of the time the test agent completed in
2 to 5 seconds; and 5% of the time the test agent took more than 5 seconds
to finish.
Notice in the table what happens when the Web-enabled application is put
under load. When 50 users concurrently run the same test agent, the Web-
enabled application does not perform as well as it does with a single user.
With 50 users, the same test agent is completed in less than 1 second only
75% of the time. The Web-enabled application begins to suffer when 5,000
users begin running the test agent. At 5,000 users, only 60% will complete
the test agent in less than 1 second.
One can extrapolate the scalability results for a Web-enabled application
after a minimum number of data points exists. If the scalability results contained tests
at only 1 and 50 users, for example, the scalability extrapolation for 5,000
users would be meaningless. Running the scalability tests with at least four
levels of load, however, provides meaningful data points from which extrapo-
lations will be valid.
Next we look at performance indexes. Performance is the other side of the
coin of scalability. Scalability measures a Web-enabled application’s ability to
serve users under conditions of increasing load, and the testing assumes that
all test agents completed correctly. Scalability can be blind to the user
experience. On the other hand, performance testing measures failures.
Performance testing evaluates a Web-enabled application’s ability to
deliver functions accurately. A performance test agent looks at the results of a
test agent to determine whether the Web-enabled application produced an
exceptional result. For example, for the scalability test shown in Table 1–2,
a performance test shows the count of error pages returned under the
various conditions of load, as listed in Table 1–3.
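In code, the performance side amounts to counting failed runs in each time bucket rather than timing them alone. In this sketch, results is assumed to be a list of (elapsed_seconds, had_error) pairs collected for one load level while the scalability test ran; the bucket boundaries are a simplification of the ones used in Tables 1–2 and 1–3.

def performance_row(results):
    # Count errors in each time bucket as a percentage of all runs at this load level.
    total = len(results)
    buckets = {"<1s": 0, "1-5s": 0, ">5s": 0}
    for elapsed, had_error in results:
        if not had_error:
            continue
        if elapsed < 1:
            buckets["<1s"] += 1
        elif elapsed <= 5:
            buckets["1-5s"] += 1
        else:
            buckets[">5s"] += 1
    row = {name: 100 * count // total for name, count in buckets.items()}
    row["total"] = sum(row.values())
    return row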
Table 1–3 shows the performance results of the example Web-
enabled application whose scalability was profiled in Table 1–2. The perfor-
mance results show a different picture of the same Web-enabled application.
At the 500 and 5,000 concurrent-user levels, a development manager looking
Table 1–2 Example Results Showing Scalability of a Web Service

Concurrent users    <1 second    2–5 seconds    >5 seconds
1                   85%          10%            5%
50                  75%          15%            10%
500                 70%          20%            10%
5,000               60%          25%            15%
solely at the scalability results might still decide to release a Web-enabled
application to users, even though Table 1–2 showed that at 500 concurrent
users 10% of the pages delivered had very slow response times—slow test
agents in this case are considered to take more than 5 seconds to complete.
Would the development manager still release the Web-enabled application
after looking at the performance test results?
Table 1–3 shows the Web-enabled application failed the test 15% of the
time when the test agent completed the test in less than 1 second while serv-
ing at the 5,000 concurrent user level. Add the 25% value for test agents that
complete in 2 to 5 seconds and the 40% for test agents that complete in 6
or more seconds, and the development manager has a good basis for expect-
ing that 80% of the users will encounter errors when 5,000 users concur-
rently load the Web-enabled application.
Both scalability and performance measures are needed to determine how
well a Web-enabled application will serve users in production environments.
Taken individually, the results of these two tests may not show the true
nature of the Web-enabled application. Or even worse, they may show mis-
leading results!
Taken together, scalability and performance testing shows the true nature
of a Web-enabled application.
Testing for the Single User
Many developers think testing is not complete until the tests cover a general
cross-section of the user community. Other developers believe high-quality
software is tested against the original design goals of a Web-enabled applica-
tion as defined by a product manager, project marketer, or lead developer.
Table 1–3 Example Performance Test Agent Results

Concurrent users    <1 second    2–5 seconds    >6 seconds    Total
1                   1%           5%             7%            13%
50                  2%           4%             10%           16%
500                 4%           9%             14%           27%
5,000               15%          25%            40%           80%
These approaches are all insufficient, however, because they test toward the
middle only.
Testing toward the middle makes large assumptions about how the aggregate
group of users will use the Web-enabled application and the steps they will
take as a group to accomplish common goals. But Web-enabled applications
simply are not used this way. In reality, each user has their own personal goal
and method for using a Web-enabled application.
Intuit, publisher of the popular Quicken personal finance management
software, recognized the distinctiveness of each user’s experience early on.
Intuit developed the “Follow me home” software testing method. Intuit
developers and product managers visited local software retail stores, waiting
in the aisles near the Intuit products and watching for a customer to pick up a
copy of Quicken. When the customer appeared ready to buy Quicken, the
Intuit managers introduced themselves and asked for permission to follow
the customer home to learn the user’s experience installing and using the
Quicken software.
Intuit testers could have stayed in their offices and made grand specula-
tions about the general types of Quicken users. Instead, they developed user
archetypes—prototypical Web-enabled application users based on the real
people they met and the experience these users had. The same power can be
applied to developing test agents. Using archetypes to describe a user is more
efficient and more accurate than making broad generalizations about the
nature of a Web-enabled application’s users. Archetypes make it easier to
develop test agents modeled after each user’s individual goals and methods of
using a Web-enabled application.
The best way to build an archetype test agent is to start with a single user.
Choose just one user, watch the user in front of the Web-enabled application,
and learn what steps the user expects to use. Then take this information and
model the archetype against the single user. The better an individual user’s
needs are understood, the more valuable your archetype will be.
Some developers have taken the archetypal user method to heart. They
name their archetypes and describe their background and habits. They give
depth to the archetype so the rest of the development team can better under-
stand the test agent.
For example, consider the archetypal users defined for the Web-enabled
application software of Inclusion Technologies, one of the companies I founded. In
1997, Inclusion developed a Web-enabled application to provide collabora-
tive messaging services to geographically dispersed teams in global corpora-
tions. Companies like BP, the energy company formed by the merger of British
Petroleum and Amoco, used the Inclusion Web-enabled application to build a
secure private extranet, where BP employees and contractors in the financial
auditing groups could exchange ideas and best practices while performing
their normal work.
Test agents for the BP extranet were designed around these archetypal
users:
• Jack, field auditor, 22 years old, recently joined BP from
Northwestern University, unmarried but has a steady girlfriend,
has been using spreadsheet software since high school, open to
using new technology if it gets his job done faster, loves
motocross and snow skiing.
• Madeline, central office manager, 42 years old, married 15 years
with two children, came up through the ranks at BP, worked in
IT group for three years before moving into management,
respects established process but will work the system to bring in
technology that improves team productivity.
• Lorette, IT support, 27 years old, wears two pagers and one
mobile phone, works long hours maintaining systems, does
system training for new employees, loves to go on training
seminars in exotic locations.
The test agents that modeled Jack’s goals concentrate on accessing and
manipulating data. Jack often needs to find previously stored spreadsheets.
In this case, a test agent signs in to the Web-enabled application and uses the
search functions to locate a document. The test agent modifies the document
and checks to make sure the modifications are stored correctly.
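A sketch of Jack's agent (again with invented URLs and field names; the real document store's API will differ) shows the shape of that round trip: find a stored spreadsheet, change it, and confirm the change persisted.

import requests

BASE = "https://extranet.example.com"   # hypothetical extranet address

def jack_edits_a_spreadsheet():
    # Sign in, search for a document, modify it, and verify the change was stored.
    session = requests.Session()
    session.post(f"{BASE}/login", data={"user": "jack", "password": "..."})

    hits = session.get(f"{BASE}/search", params={"q": "Q3 audit spreadsheet"}).json()
    doc_id = hits["results"][0]["id"]

    session.put(f"{BASE}/documents/{doc_id}", json={"note": "reviewed by Jack"})
    stored = session.get(f"{BASE}/documents/{doc_id}").json()
    assert stored["note"] == "reviewed by Jack"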
The test agent developed for Madeline concentrates on usage data. The
first test agent signs in to the Web-enabled application using Madeline’s high-
level security clearance. This gives permission to run usage reports to see
which of her team members is making the most use of the Web-enabled
application. That will be important to Madeline when performance reviews
are needed. The test agent will also try to sign in as Jack and access the same
reports. If the Web-enabled application performs correctly, only Madeline
has access to the reports.
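That permission check can be expressed as a small agent along these lines (the URLs and credentials are hypothetical; the point is that the agent asserts both outcomes, Madeline's access and Jack's refusal):

import requests

BASE = "https://extranet.example.com"   # hypothetical extranet address

def usage_report_permissions():
    # Madeline's usage report must be readable by her and refused to Jack.
    madeline = requests.Session()
    madeline.post(f"{BASE}/login", data={"user": "madeline", "password": "..."})
    assert madeline.get(f"{BASE}/reports/usage").status_code == 200

    jack = requests.Session()
    jack.post(f"{BASE}/login", data={"user": "jack", "password": "..."})
    assert jack.get(f"{BASE}/reports/usage").status_code in (401, 403)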
Test agents modeled after Lorette concentrate on accessing data. When
Lorette is away from the office on a training seminar, she still needs access to
the Web-enabled application as though she were in the office. The test agent
uses a remote login capability to access the needed data.
Understanding the archetypes is your key to making the test agents intelli-
gent. For example, a test agent for Lorette may behave more persistently
than a test agent for Madeline. If a test agent tries to make a remote connec-
tion that fails, the test agent for Lorette would try again and then switch to a
different access number.
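That persistence is easy to express in an agent. In this sketch the access numbers and the dial function are stand-ins for whatever remote-login mechanism the real system uses:

def connect_with_fallback(dial, access_numbers=("555-0100", "555-0199")):
    # Lorette's agent: retry a failed remote connection, then try another number.
    for number in access_numbers:
        for _attempt in range(2):         # try each access number twice
            if dial(number):
                return number             # connected
    raise ConnectionError("all access numbers failed")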
Creating Intelligent Test Agents
Developing test agents and archetypes is a fast, predictable way to build good
test data. The better the data, the more you know about a Web-enabled
application’s scalability and performance under load. Analyzing the data
shows scalability and performance indexes. Understanding scalability and
performance shows the expenses a business will undertake to develop and
operate a high-quality Web-enabled application.
In many respects, testing Web-enabled applications is similar to a doctor
treating patients. For example, an oncologist for a cancer patient never indi-
cates the number of days a patient has left to live. That’s simply not the
nature of oncology—the study and treatment of cancer. Oncology studies
cancer in terms of epidemiology, whereby individual patient tests are mean-
ingful only when considered in combination with a statistically sufficient num-
ber of other patients. If the doctor determines an individual patient falls
within a certain category of overall patients, the oncologist will advise the
patient what all the other patients in that category are facing. In the same
way, you can’t test a Web-enabled application without using the system, so
there is no way to guarantee a system will behave one way or the other.
Instead you can observe the results of a test agent and extrapolate the perfor-
mance and scalability to the production system.
Accountants and business managers often cite the law of diminishing
returns, where the effort to develop one more incremental addition to a
project provides less and less return against the cost of the addition. Some-
times this thinking creeps into software test projects.
You can ask yourself, at what point have enough test agents and archetypes
been used to make the results meaningful? In reality, you can never use