1 Sample Test Automation Tool
Rational offers the most complete lifecycle toolset (including testing) of these vendors for the Windows platform. When it comes to object-oriented development, they are the acknowledged leaders, with many of the leading OO experts working for them. Some of their products are worldwide leaders, e.g. Rational Robot, Rational Rose, ClearCase, RequisitePro. Their Unified Process, which I have been involved with, is a very good development model that allows mapping of requirements to use cases and test cases, with a whole set of tools to support the process.
1.1 Rational Suite of tools
Rational RequisitePro is a requirements management tool that helps project teams control the development process. RequisitePro organizes your requirements by linking Microsoft Word to a requirements repository and providing traceability and change management throughout the project lifecycle. A baseline version of RequisitePro is included with Rational Test Manager. When you define a test requirement in RequisitePro, you can access it in Test Manager.
Rational ClearQuest is a change-request management tool that tracks and manages defects and change requests throughout the development process. With ClearQuest, you can manage every type of change activity associated with software development, including enhancement requests, defect reports, and documentation modifications.
Rational Purify is a comprehensive C/C++ run-time error checking tool that automatically pinpoints run-time errors and memory leaks in all components of an application, including third-party libraries, ensuring that code is reliable.
Rational Quantify is an advanced performance profiler that provides application performance analysis, enabling developers to quickly find, prioritize, and eliminate performance bottlenecks within an application.
Rational PureCoverage is a customizable code coverage analysis tool that provides detailed application analysis and ensures that all code has been exercised, preventing untested code from reaching the end user.
Rational Suite Performance Studio is a sophisticated tool for automating performance tests on client/server systems. A client/server system includes client applications accessing a database or application server, and browsers accessing a Web server. Performance Studio includes Rational Robot and Rational Load Test. Use Robot to record client/server conversations and store them in scripts. Use Load Test to schedule and play back the scripts.
Rational Robot facilitates functional and performance testing by automating record and playback of test scripts. It allows you to write, organize, and run tests, and to capture and analyze the results.
Rational Test Factory automates testing by combining automatic test generation with source-code coverage analysis. It tests an entire application, including all GUI features and all lines of source code.
During playback, Rational Load Test can emulate hundreds, even thousands, of users placing
heavy loads and stress on your database and Web servers.
Rational Test categorizes test information within a repository by project.

The tools to be discussed here are:
Rational Administrator
Rational Robot
Rational Test Manager
1.2 Rational Administrator
What is a Rational Project?
A Rational project is a logical collection of databases and data stores that associates the data you use when working with Rational Suite. A Rational project is associated with one Rational Test data store, one RequisitePro database, one ClearQuest database, and multiple Rose models and RequisitePro projects, and optionally places them under configuration management.
The Rational Administrator is used to create and manage Rational repositories, users, and groups, and to manage security privileges.
How to create a new project?
Open the Rational Administrator and go to File -> New Project.
In the window that opens, enter details such as the project name and location.
Click Next.
In the corresponding window displayed, enter a password if you want to protect the project; the password is then required to connect to, configure, or delete the project.
Click Finish.
In the Configure Project window displayed, click the Create button.

Once the Create button in the Configure Project window is chosen, the Create Test Data Store window will be displayed. Accept the default path and click the OK button.
Once the confirmation window is displayed, the test data store has been successfully created.
Click OK in the Configure Project window, and your first Rational project is ready to play with…
1.3 Rational Robot
Use Rational Robot to develop three kinds of scripts: GUI scripts for functional testing, and VU and VB scripts for performance testing.
Robot can be used to:
• Perform full functional testing. Record and play back scripts that navigate through your application and test the state of objects through verification points.
• Perform full performance testing. Use Robot and TestManager together to record and play back scripts that help you determine whether a multi-client system is performing within user-defined standards under varying loads.
• Create and edit scripts using the SQABasic, VB, and VU scripting environments. The Robot editor provides color-coded commands with keyword Help for powerful integrated programming during script development.
• Test applications developed with IDEs such as Visual Basic, Oracle Forms, PowerBuilder, HTML, and Java. Test objects even if they are not visible in the application's interface.
• Collect diagnostic information about an application during script playback. Robot is integrated with Rational Purify, Quantify, and PureCoverage. You can play back scripts under a diagnostic tool and see the results in the log.
The Object Testing technology in Robot lets you test any object in the application-under-test, including the object's properties and data. You can test standard Windows objects and IDE-specific objects, whether they are visible in the interface or hidden.
1.4 Robot login window
Once logged in, you will see the Robot window. Go to File -> New -> Script.
In the screen displayed, enter the name of the script, say "First Script", by which the script will be referred to from now on, and an optional description. The type of the script is GUI for functional testing or VU for performance testing.
The GUI Script window (top pane) displays GUI scripts that you are currently recording, editing, or debugging. It has two panes:
Asset pane (left) – Lists the names of all verification points and low-level scripts for this script.
Script pane (right) – Displays the script.
The Output window (bottom pane) has two tabs:
Build – Displays compilation results for all scripts compiled in the last operation. Line numbers are enclosed in parentheses to indicate lines in the script with warnings and errors.
Console – Displays messages that you send with the SQAConsoleWrite command. Also displays certain system messages from Robot.
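For example, a script can report progress to the Console tab (a minimal sketch; the message text is illustrative):

    'Write a status message to the Console tab of the Output window
    SQAConsoleWrite "Login transaction completed"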
To display the Output window, click View -> Output.

How to record a playback script?
To record a script, go to Record -> Insert at Cursor. Then perform the navigation in the application to be tested; once done, stop the recording with Record -> Stop. A sketch of a recorded script follows.
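A recorded GUI script is plain SQABasic. The sketch below shows roughly what Robot might record for a simple login; the window captions, keystrokes, and button name are hypothetical, not taken from any real application:

    Sub Main
        Dim Result As Integer

        'Set context to the application's login window
        Window SetContext, "Caption=Order Entry - Login", ""

        'Recorded keystrokes: user name, tab to the password field, Enter
        InputKeys "tester{TAB}secret{ENTER}"

        'Continue in the application's main window
        Window SetContext, "Caption=Order Entry", ""
        PushButton Click, "Text=OK"
    End Sub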
1.6 Record and Playback options
General tab: Set general options such as identification of lists and menus, and the recording of think time.
Web Browser tab: Specify the browser type, IE or Netscape…
Robot Window tab: How Robot should be displayed during recording, and hotkey details…
Object Recognition Order tab: The order in which objects are recognized during recording.
For example: select a preference in the Object order preference list.
If you will be testing C++ applications, change the object order preference to C++ Recognition Order.
1.6.1 Playback options
These options help you handle unexpected windows during playback, set error recovery, specify the timeout period, and manage the log and log data.
1.7 Verification points
A verification point is a point in a script that you create to confirm the state of an object across builds of the application-under-test. During recording, the verification point captures object information (based on the type of verification point) and stores it in a baseline data file. The information in this file becomes the baseline of the expected state of the object during subsequent builds.
When you play back the script against a new build, Robot retrieves the information in the baseline file for each verification point and compares it to the state of the object in the new build. If the captured object does not match the baseline, Robot creates an actual data file. The information in this file shows the actual state of the object in the build.
After playback, the results of each verification point appear in the log in Test Manager. If a verification point fails (the baseline and actual data do not match), you can select the verification point in the log and click View -> Verification Point to open the appropriate Comparator. The Comparator displays the baseline and actual files so that you can compare them.
A verification point is stored in the project and is always associated with a script. When you create a verification point, its name appears in the Asset (left) pane of the Script window. The verification point script command, which always begins with Result =, appears in the Script (right) pane.

Because verification points are assets of a script, if you delete a script, Robot also deletes all of its associated verification points. An example command is sketched below.
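As an illustration, a Window Existence verification point recorded in a script might look like the line below (a sketch only; the caption, VP name, and wait parameters are hypothetical, and the exact arguments Robot records can vary):

    'Verify that the main window exists, retrying every 2 seconds for up to 30 seconds
    Result = WindowVP (Exists, "Caption=Order Entry", "VP=Main Window Check;Wait=2,30")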
1.7.1 List of Verification Points
The following table summarizes each Robot verification point.
Type – Description

Alphanumeric – Captures and compares alphabetic or numeric values.
Clipboard – Captures and compares alphanumeric data that has been copied to the Clipboard.
File Comparison – Compares the contents of two files.
File Existence – Checks for the existence of a specified file.
Menu – Captures and compares the text, accelerator keys, and state of menus. Captures up to five levels of sub-menus.
Module Existence – Checks whether a specified module is loaded into a specified context (process), or is loaded anywhere in memory.
Object Data – Captures and compares the data in objects.
Object Properties – Captures and compares the properties of objects.
Region Image – Captures and compares a region of the screen (as a bitmap).
Web Site Compare – Captures a baseline of a Web site and compares it to the Web site at another point in time.
Web Site Scan – Checks the content of a Web site with every revision and ensures that changes have not resulted in defects.
Window Existence – Checks that the specified window is displayed before continuing with the playback.
Window Image – Captures and compares the client area of a window as a bitmap (the menu, title bar, and border are not captured).
1.8 About SQABasic Header Files
SQABasic header files let you declare custom procedures, constants, and variables that you want to use with multiple scripts or SQABasic library source files.
SQABasic files are stored in the SQABas32 folder of the project, unless you specify another location. You can specify another location by clicking Tools -> General Options, clicking the Preferences tab, and using the Browse button under SQABasic path to find the location. Robot will check this location first; if the file is not there, it will look in the SQABas32 directory.
You can use Robot to create and edit SQABasic header files. They can be accessed by all modules within the project. SQABasic header files have the extension .sbh.
1.9 Adding Declarations to the Global Header File
For your convenience, Robot provides a blank header file called Global.sbh. Global.sbh is a project-wide header file stored in SQABas32 in the project. You can add declarations to this global header file and/or create your own.
To add declarations to Global.sbh:
1. Click File -> Open -> SQABasic File.
2. Set the file type to Header Files (*.sbh).
3. Select global.sbh, and then click Open.
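As a sketch of the kind of declarations a global header might hold (all names here are hypothetical):

    'Global.sbh - project-wide declarations
    Global gAppTitle As String          'shared variable, visible to all including scripts
    Const MAX_LOGIN_RETRIES = 3         'shared constant
    Declare Function IsLoggedIn BasicLib "AppUtils" () As Integer   'routine in an SQABasic library

A script then makes these declarations visible with SQABasic's include metacommand (syntax sketched from memory, so verify against your Robot documentation):

    '$Include "global.sbh"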
1.10 Inserting a Comment into a GUI Script
During recording or editing, you can insert lines of comment text into a GUI script. Comments are helpful for documenting and editing scripts. Robot ignores comments at compile time.
To insert a comment into a script during recording or editing:
1. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click the Comment button on the GUI Insert toolbar.
3. Type the comment (60 characters maximum).
4. Click OK to continue recording or editing.
Robot inserts the comment into the script (in green by default), preceded by a single quotation mark. For example:
' This is a comment in the script
To change lines of text into comments or to uncomment text:
1. Highlight the text.
2. Click Edit -> Comment Line or Edit -> Uncomment Line.
1.11 About Datapools
A datapool is a test dataset. It supplies data values to the variables in a script during script playback.
Datapools let you automatically pump test data to virtual testers under high-volume conditions that potentially involve hundreds of virtual testers performing thousands of transactions.
Typically, you use a datapool so that:
• Each virtual tester that runs the script can send realistic data (which can include unique data) to the server.
• A single virtual tester that performs the same transaction multiple times can send realistic data to the server in each transaction.
1.11.1 Using Datapools with GUI Scripts
A GUI script can access a datapool when it is played back in Robot, supplying different data values each time. Also, when a GUI script is played back in a TestManager suite, the GUI script can access the same datapool as other scripts.
There are differences in the way GUI scripts and sessions are set up for datapool access:
You must add datapool commands to GUI scripts manually while editing the script in Robot. Robot adds datapool commands to VU scripts automatically.
There is no DATAPOOL_CONFIG statement in a GUI script. The SQADatapoolOpen command defines the access method to use for the datapool, as sketched below.
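A minimal sketch of GUI-script datapool access follows. It assumes the SQADatapool command set that accompanies SQADatapoolOpen (SQADatapoolFetch, SQADatapoolValue, SQADatapoolClose); the datapool name and column index are hypothetical:

    Sub Main
        Dim dp As Long
        Dim custName As String

        'Open the datapool defined in TestManager
        dp = SQADatapoolOpen("Customers")

        'Advance to the next row of test data
        Call SQADatapoolFetch(dp)

        'Read the value of the first column into a variable
        Call SQADatapoolValue(dp, 1, custName)

        'Type the fetched value into the application under test
        InputKeys custName

        Call SQADatapoolClose(dp)
    End Sub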
Although there are differences in setting up datapool access in GUI scripts and sessions, you define a datapool for either type of script using TestManager in exactly the same way.

1.12 Debug menu

The Debug menu has the following commands:
Go
Go Until Cursor
Animate
Pause
Stop
Set or Clear Breakpoints
Clear All Breakpoints
Step Over
Step Into
Step Out
Note: The Debug menu commands are for use with GUI scripts only.
1.13 Compiling the script
When you play back a GUI script or VU script, or when you debug a GUI script, Robot compiles the script if it has been modified since it last ran. You can also compile scripts and SQABasic library source files manually.
To compile – Do this

The active script or library source file – Click File -> Compile.

All scripts and library source files in the current project – Click File -> Compile All. Use this if, for example, you have made changes to global definitions that may affect all of your SQABasic files.
During compilation, the Build tab in the Output window displays compilation results and error messages with line numbers for all compiled scripts and library source files.
2 Rational Test Manager
Test Manager is the open and extensible framework that unites all of the tools, assets, and data both related to and produced by the testing effort. Under this single framework, all participants in the testing effort can define and refine the quality goals they are working toward. It is where the team defines the plan it will implement to meet those goals. And, most importantly, it provides the entire team with one place to go to determine the state of the system at any time.
In Test Manager you can plan, design, implement, and execute tests, as well as evaluate results. With Test Manager you can:
• Create, manage, and run reports. The reporting tools help you track assets such as scripts, builds, and test documents, and track test coverage and progress.
• Create and manage builds, log folders, and logs.
• Create and manage datapools and data types.
2.1 Test Manager – Results screen
In the Results tab of Test Manager, you can see the stored results.
3 Supported environments

3.1 Operating systems
WinNT 4.0 with Service Pack 5
Win2000
WinXP (Rational 2002)
Win98
Win95 with Service Pack 1

3.2 Protocols
Oracle
SQL Server
HTTP
Sybase
Tuxedo
SAP
PeopleSoft

3.3 Web browsers
IE 4.0 or later
Netscape Navigator (limited support)

3.4 Markup languages
HTML and DHTML pages on IE 4.0 or later

3.5 Development environments
Visual Basic 4.0 or above
Visual C++
Java
Oracle Forms 4.5
Delphi
PowerBuilder 5.0 and above
4 Performance Testing
Performance testing measures the performance characteristics of an application. The main objective of performance testing is to demonstrate that the system functions to specification with acceptable response times while processing the required transaction volumes in real time against a production-size database. The objective of a performance test is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously. The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods.
4.1 What is Performance testing?
Performance testing of an application is basically the process of understanding how the web application and its operating environment respond at various user load levels. In general, we want to measure the latency, throughput, and utilization of the web site while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a web site with low latency, high throughput, and low utilization.
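These measures are connected by Little's Law, a standard queueing relation (an aside of ours, not from the original author): N = X x R, where N is the average number of requests in the system (roughly, the number of active virtual users), X is the throughput in transactions per second, and R is the average response time, plus any think time, in seconds. For example, a site sustaining X = 50 transactions per second at R = 2 seconds holds about N = 100 requests in flight at any instant.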
4.2 Why Performance testing?
Performance problems are usually the result of contention for, or exhaustion of, some system resource. When a system resource is exhausted, the system is unable to scale to higher levels of performance. Maintaining optimum Web application performance is a top priority for application developers and administrators.
Performance analysis is also carried out for various purposes, such as:
• During a design or redesign of a module or a part of the system, more than one alternative may present itself. In such cases, the evaluation of a design alternative is the prime mover for an analysis.
• Post-deployment realities create a need for tuning the existing system. A systematic approach like performance analysis is essential to extract maximum benefit from an existing system.
• Identification of bottlenecks in a system is more of an effort at troubleshooting. It helps to replace the right components and to focus efforts at improving overall system response.
• As the user base grows, the cost of failure becomes increasingly unbearable. To increase confidence and to provide advance warning of potential problems under load, analysis must be done to forecast performance under load.
Typically, to debug applications, developers execute them using different execution streams (i.e., completely exercising the application) in an attempt to find errors.
When looking for errors in the application, performance is a secondary issue to features;however, it is still an issue.
4.3 Performance Testing Objectives
The objective of a performance test is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously.
The performance testing goals are:
• End-to-end transaction response time measurements.
• Measure application server components' performance under various loads.
• Measure database components' performance under various loads.
• Monitor system resources under various loads.
• Measure the network delay between the server and clients.
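To make such goals concrete, it helps to translate business volumes into test parameters before scripting. An illustrative calculation (the numbers are invented): if the business expects 36,000 orders in a 4-hour peak window, the system must sustain 36,000 / 14,400 seconds = 2.5 transactions per second; if each virtual user completes one order every 80 seconds (response plus think time), Little's Law from section 4.1 gives 2.5 x 80 = 200 concurrent virtual users needed to drive the test.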
4.4 Pre-Requisites for Performance Testing
We can identify five pre-requisites for a performance test Not all of these need be in place prior to planning or preparing the test (although this might be helpful), but rather, the list defines what is required before a test can be executed.
First and foremost:

The design specification, or a separate performance requirements document, should:
• Define specific performance goals for each feature that is instrumented.
• Base performance goals on customer requirements.
• Define specific customer scenarios.
• State quantitative, relevant, measurable, realistic, achievable requirements.
As a foundation to all tests, performance requirements should be agreed prior to the test. This helps in determining whether or not the system meets the stated requirements. The following attributes will help to have a meaningful performance comparison.
• Quantitative – expressed in quantifiable terms, such that when response times are measured, a sensible comparison can be derived.
• Relevant – a response time must be relevant to a business process.
• Measurable – a response time should be defined such that it can be measured using a tool or stopwatch, and at reasonable cost.
• Realistic – response time requirements should be justifiable when compared with the durations of the activities within the business process the system supports.
• Achievable – response times should take some account of the cost of achieving them.
Stable system
A test team attempting to construct a performance test of a system whose software is of poor quality is unlikely to be successful. If the software crashes regularly, it will probably not withstand the relatively minor stress of repeated use. Testers may not be able to record scripts in the first instance, or may not be able to execute a test for a reasonable length of time before the software, middleware, or operating system crashes.
Realistic test environment
4.5 Performance Requirements
Performance requirements normally comprise three components:
• Response time requirements
• Transaction volumes, detailed in 'Load Profiles'
• Database volumes
Response time requirements
When asked to specify performance requirements, users normally focus attention on response times, and often wish to define requirements in terms of generic response times. A single response time requirement for all transactions might be simple to define from the user's point of view, but it is unreasonable. Some functions are critical and require short response times, but others are less critical and their response time requirements can be less stringent.
Load profiles
The second component of performance requirements is a schedule of load profiles. A load profile is the level of system loading expected to occur during a specific business scenario. Business scenarios might cover different situations when the users' organization has different levels of activity, or involve a varying mix of activities, which must be supported by the system.
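As a hypothetical illustration (not from the original document): a 'month-end closing' load profile might specify 150 order-entry users at 20 transactions per hour each, 25 reporting users at 4 reports per hour each, and a nightly batch feed of 10,000 records, all running concurrently against the same database.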
Database volumes
5.1 Phase 1 – Requirements Study
This activity is carried out during the business and technical requirements identification phase. The objective is to understand the performance test requirements, hardware and software components, and the usage model. It is important to understand as accurately and as objectively as possible the nature of the load that must be generated.
The following are the important performance test requirements that need to be captured during this phase:
• Response time
• Transactions per second
• Hits per second
• Workload
• No. of concurrent users
• Volume of data
• Data growth rate
• Resource usage
• Hardware and software configurations
Activity – Work items

Requirements study:
• Performance – Stress Test, Load Test, Volume Test, Spike Test, Endurance Test.
• Understand the system and application model.
• Server-side and client-side hardware and software requirements.
• Browser emulation and automation tool selection.
• Decide on the type and mode of testing.
• Operational inputs – time of testing, client- and server-side parameters.

5.1.1 Deliverables

Deliverable – Sample
Requirement Collection – RequirementCollection.doc
5.2 Phase 2 – Test Plan
The following configuration information will be identified as part of performance testing environment requirement identification.
Hardware platform:
• Processors
• Memory
• Disk storage
• Load machines configuration
• Network configuration

Software configuration:
• Operating system
• Server software
• Client machine software
• Applications
Activity – Work items

Test plan preparation:
• Hardware and software details.
• Test data.
• Transaction traversal that is to be tested, with sleep times.
• Periodic status updates to the client.
5.2.1 Deliverables

Deliverable – Sample
Test Plan – TestPlan.doc
5.3 Phase 3 – Test Design
Based on the test strategy, detailed test scenarios will be prepared. During the test design period, the following activities will be carried out:
• Scenario design
• Detailed test execution plan
• Dedicated test environment setup
• Script recording/programming
• Script customization (delays, checkpoints, synchronization points)
• Data generation
• Parameterization/datapooling
Activity – Work items

Test design generation:
• Hardware and software requirements, including the server components, the load generators used, etc.
• Setting up the monitoring servers.
• Setting up the data.
• Preparing all the necessary folders for saving the results once the test is over.
• Pre-test and post-test procedures.
5.3.1 Deliverables

Deliverable – Sample
Test Design – TestDesign.doc
5.4 Phase 4 – Scripting
Activity – Work items

Scripting:
• Browse through the application and record the transactions with the tool.
• Parameterization, error checks, and validations.
• Run the script for a single user to check the validity of the scripts.

5.4.1 Deliverables

Deliverable – Sample
Test Scripts – SampleScript.doc
5.5 Phase 5 – Test Execution
The test execution will follow the various types of test identified in the test plan. All the scenarios identified will be executed. Virtual user loads are simulated based on the usage pattern, and load levels are applied as stated in the performance test strategy.
The following artifacts will be produced during the test execution period:
• Test logs
• Test results
Activity – Work items

Test execution:
• Starting the pre-test procedure scripts, which include start scripts for server monitoring.
• Modification of automated scripts if necessary.
• Test result analysis.
• Report preparation for every cycle.
5.5.1 Deliverables

Deliverable – Sample
Test Execution – TimeSheet.doc, RunLogs.doc
5.6 Phase 6 – Test Analysis
Activity – Work items

Test analysis:
• Analyzing the run results and preparing a preliminary report.

5.6.1 Deliverables

Deliverable – Sample
Test Analysis – PreliminaryReport.doc
5.7 Phase 7 – Preparation of Reports
The test logs and results generated are analyzed based on performance under various loads, transactions per second, database throughput, network throughput, think time, network delay, resource usage, transaction distribution, and data handling. Manual and automated results analysis methods can be used for performance results analysis.
The following performance test reports/graphs can be generated as part of performance testing:
• Transaction response time
• Transactions per second
• Transaction summary graph
• Transaction performance summary graph
• Transaction response under load graph
• Virtual user summary graph
• Error statistics graph
• Hits per second graph
• Throughput graph
• Downloads per second graph
Based on the performance report analysis, suggestions on improvement or tuning will be provided to the design team:
• Performance improvements to application software, middleware, and database organization.
• Changes to server system parameters.
• Upgrades to client or server hardware, network capacity, or routing.
5.7.1 Deliverables

Deliverable – Sample
Final Report – FinalReport.doc

5.8 Common Mistakes in Performance Testing

• No goals
• No general-purpose model
• Goals => techniques, metrics, workload
• Not trivial
• Biased goals ("to show that OUR system is better than THEIRS")
• Analysts = jury
• Unsystematic approach
• Analysis without understanding the problem
• Incorrect performance metrics
• Unrepresentative workload
• Wrong evaluation technique
• Overlooking important parameters
• Ignoring significant factors
• Inappropriate experimental design
• Inappropriate level of detail
• No analysis
• Erroneous analysis
• No sensitivity analysis
• Ignoring errors in input
• Improper treatment of outliers
• Assuming no change in the future
• Ignoring variability
• Too complex analysis
• Improper presentation of results
• Ignoring social aspects
• Omitting assumptions and limitations
5.9 Benchmarking Lessons
Performance goals need to be enforced. If we decide to make performance a goal and a measure of the quality criteria for release, the management team must decide to enforce the goals. Establish incremental performance goals throughout the product development cycle. All the members of the team should agree that a performance issue is not just a bug; it is a software architectural problem.
Performance testing of Web services and applications is paramount to ensuring an excellent customer experience on the Internet. The Web Capacity Analysis (WebCAT) tool provides Web server performance analysis; the tool can also assess Internet Server Application Programming Interface and application server provider (ISAPI/ASP) applications.
Creating an automated test suite to measure performance is time-consuming and labor-intensive. Therefore, it is important to define concrete performance goals. Without defined performance goals or requirements, testers must guess, without a clear purpose, at how to instrument tests to best measure various response times.
The performance tests should not be used to find functionality-type bugs. Design the performance test suite to measure response times, not to identify bugs in the product. Design the build verification test (BVT) suite to ensure that no new bugs are injected into the build that would prevent the performance test suite from completing successfully.
The performance tests should be modified sparingly and consistently. Significant changes to the performance test suite skew or make obsolete all previous data. Therefore, keep the performance test suite fairly static throughout the product development cycle. If the design or requirements change and you must modify a test, perturb only one variable at a time for each build.
Strive to achieve the majority of the performance goals early in the product development cycle, because:
• Most performance issues require architectural change.
• Performance is known to degrade slightly during the stabilization phase of the development cycle.
Achieving performance goals early also helps to ensure that the ship date is met, because a product rarely ships if it does not meet performance goals. You should also reuse automated performance tests. Automated performance tests can often be reused in many other automated test suites; for example, incorporate the performance test suite into the stress test suite to validate stress scenarios and to identify potential performance issues under different stress conditions.
Avoid capturing secondary metrics. Tests capture secondary metrics when the instrumented tests have nothing to do with measuring clear and established performance goals. Although secondary metrics look good on wall charts and in reports, if the data is not going to be used in a meaningful way to make improvements in the engineering cycle, it is probably wasted data. Ensure that you know what you are measuring and why.