
(1)

SOFTWARE TESTING

“Testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements.”

Testing is a process used to help identify the correctness, completeness and quality of developed computer software.

On the whole, testing objectives can be summarized as:

· Testing is a process of executing a program with the intent of finding an error.

· A good test is one that has a high probability of finding an as yet undiscovered error.

(2)

Testing is required to ensure that the application meets the objectives related to its functionality, performance, reliability, flexibility, ease of use, and timeliness of delivery.

• Developers hide their mistakes
• To reduce the cost of rework by detecting defects at an early stage
• To avoid project overruns by following a defined test methodology
• To ensure the quality and reliability of the software for the users

(3)

• Test early and test often.
• Integrate the application development and testing life cycles; you'll get better results and you won't have to mediate between two armed camps in your IT shop.
• Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.
• Develop a comprehensive test plan; it forms the basis for the testing methodology.
• Use both static and dynamic testing.
• Define your expected results.
• Understand the business reason behind the application; you'll write a better application and better testing scripts.
• Use multiple levels and types of testing (regression, systems, integration, stress and load).
• Review and inspect the work; it will lower costs.
• Don't let your programmers check their own work; they'll miss their own errors.

(4)

 A good test engineer has a ‘test to break’ attitude.
 An ability to take the point of view of the customer.
 A strong desire for quality.
 Gives attention to minor details.
 Tact and diplomacy are useful in maintaining a co-operative relationship with developers.
 The ability to communicate with both technical and non-technical people is useful.
 Judgment skills are needed to assess the high-risk areas of an application on which to focus testing efforts when time is limited.

(5)

A project company survives on the number of contacts that it has and the number of projects that it gets from other firms, whereas a product company’s existence depends entirely on how its product does in the market.

A project company will have the specifications, made by the customer, as to how the application should be. Since a project company will be doing the same kind of project for other companies, it gets better over time, knows what the issues are, and can handle them.

A product company needs to develop its own specifications and make sure that they are generic. It also has to make sure that the application is compatible with other applications. In a product company, the application created will always be new in some way or other, making the application more vulnerable to bugs. When upgrades are made for different functionalities, care has to be taken that they do not cause any other module to stop functioning.

(6)

Automated vs Manual Testing

Manual Testing                           Automated Testing
 Prone to human errors                   More reliable
 Time consuming                          Time conserving
 Skilled manpower required               No human intervention required once started
 Tests have to be performed each time    Batch testing can be done

(7)

WHEN TO STOP TESTING

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

· Deadlines, e.g. release deadlines, testing deadlines;
· Test cases completed with a certain percentage passed;
· Test budget has been depleted;
· Coverage of code, functionality, or requirements reaches a specified point.

(8)

SOME BRANDED STANDARDS

• ISO – International Organization for Standardization
• SEI CMM – Software Engineering Institute Capability Maturity Model
• CMMI – Capability Maturity Model Integration
• TMM – Testing Maturity Model (testing dept.)
• PCMM – People Capability Maturity Model (HR dept.)
• SIX SIGMA – zero-defect oriented (at most 3.4 defects per million can be tolerated); presently in India, WIPRO holds the certification

(9)(10)

Software QA involves the entire software development PROCESS: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.

It is oriented to ‘prevention’.

In simple words, it is a review with the goal of improving the process as well as the deliverable.

QA: for the entire life cycle.

QC activities focus on finding defects in specific deliverables (e.g., are the defined requirements the right requirements?). Testing is one example of a QC activity.

QC is a corrective process.

QC: for the testing part of the SDLC.

(11)

Coherent sets of activities for specifying, designing, implementing and testing software systems.

Objectives

• To introduce software lifecycle models
• To describe a number of different lifecycle models and when they may be used
• To describe outline process models for requirements engineering, software development, testing and evolution

(12)

A project using the waterfall model moves down a series of steps, starting from an initial idea and ending with a final product. At the end of each step, the project team holds a review to determine whether it is ready to move to the next step. If the product isn’t ready to progress, it stays at that level until it is ready.

(13)

Notice three important things about the waterfall model:

· There’s no way to back up. As soon as you’re on a step, you need to complete the tasks for that step and then move on; you can’t go back.
· The steps are discrete; there’s no overlap.
· Development or coding is only a single block.

Disadvantages: rework and changes are costly if any error occurs; the time frame is long; more people are idle during the initial stages; and the inflexible partitioning of the project into distinct stages makes it difficult to respond to changing customer requirements.

(14)

SPIRAL MODEL

DEFINITION - The spiral model, also known as the spiral lifecycle model, is a systems development method (SDM) used in information technology (IT).

This model of development combines the features of the prototyping model and the waterfall model.

The spiral model is favored for large, expensive, and complicated projects.

ADVANTAGES:

Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses, because important issues are discovered earlier.

It is better able to cope with the (nearly inevitable) changes that software development generally entails.

(15)(16)

Each time around the spiral involves six steps:

1. Determine the objectives, alternatives and constraints
2. Identify and resolve risks
3. Evaluate alternatives
4. Develop and test the current level
5. Plan the next level

(17)

The V shows the typical sequence of development activities on the left-hand (downhill) side and the corresponding sequence of test execution activities on the right-hand (uphill) side.

In fact, the V model emerged in reaction to some waterfall models that showed testing as a single phase following the traditional development phases of requirements analysis, high-level design, detailed design and coding. The waterfall model did considerable damage by supporting the common impression that testing is merely a brief detour after most of the mileage has been gained by mainline development activities. Many managers still believe this, even though testing usually takes up half of the project time.

The V model describes how analysis, design, coding and testing are constructed together. Once coding finishes, the build goes to the tester to check for bugs; if problems are found, it goes back to the programmer, and once the tester gives the OK, the programmer can finish up by implementing the project.

(18)

This is the model used by most companies. The V model is a model in which testing is done in parallel with development; the left side of the V reflects the development inputs for the corresponding testing activities.

It is a parallel activity which gives the tester the domain knowledge to perform more value-added, high-quality testing with greater efficiency. It also reduces time, since test design proceeds in parallel with development.

(19)(20)

Extreme Programming

A new approach to development, based on the development and delivery of very small increments of functionality.

(21)(22)

Static Testing

Static testing is the review, inspection and validation of development requirements. It is the most effective and cost-efficient way of testing.

A structured approach to testing should use both dynamic and static testing techniques.

Dynamic Testing

Testing that is commonly assumed to mean executing software and finding errors is dynamic testing.

Two types: structural and functional testing.

(23)

Unit Testing

 Requires knowledge of the code
 High level of detail
 Delivers thoroughly tested components to integration

Stopping criteria

 Code coverage
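As a minimal sketch of the idea (the function name and values are illustrative, not from the slides), a unit test exercises a single function in isolation using Python's standard unittest module:

# Minimal unit-test sketch (illustrative names/values).
import unittest

def discount(amount: float) -> float:
    """Apply a 10% discount to orders of 100 or more (assumed rule)."""
    return amount * 0.9 if amount >= 100 else amount

class DiscountTest(unittest.TestCase):
    def test_below_threshold_unchanged(self):
        self.assertEqual(discount(99), 99)

    def test_at_threshold_discounted(self):
        self.assertAlmostEqual(discount(100), 90.0)

if __name__ == "__main__":
    unittest.main()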

(24)

Integration Testing

Strategies

 Bottom-up: start from the bottom and add one component at a time
 Top-down: start from the top and add one component at a time
 Big-bang: everything at once

Simulation of other components

 Stubs receive output from test objects
 Drivers generate input to test objects

(25)

Driver: a calling program. It provides the facility to invoke a sub-module instead of the main module.

Stub: a called program. This temporary program is called by the main module instead of the sub-module.

Top-down approach: MAIN calls Sub2 and a stub standing in for Sub1.

Bottom-up approach: a driver calls the real Sub1 instead of MAIN.
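To make the stub/driver idea concrete, here is a hedged Python sketch (the module and function names are invented for illustration): in top-down integration the real main calls a stub in place of the unfinished sub-module; in bottom-up integration a driver generates the input the main module would supply.

# Illustrative stub/driver sketch (names are invented).

# --- Top-down: real main(), stub in place of the unfinished sub-module ---
def sub1_stub(order_id):
    # Stub: a called program; returns a canned value instead of real logic.
    return {"order_id": order_id, "status": "OK"}

def main(order_id, sub1=sub1_stub):
    result = sub1(order_id)          # main module under test calls the stub
    return result["status"] == "OK"

# --- Bottom-up: real sub1(), driver supplies the input main() would give ---
def sub1(order_id):
    return {"order_id": order_id, "status": "OK"}

def driver():
    # Driver: a calling program; invokes the sub-module instead of main.
    for order_id in (1, 2, 3):
        assert sub1(order_id)["status"] == "OK"

driver()
print(main(42))  # True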

(26)

System Testing

Functional testing

 Tests end-to-end functionality; testing against the complete requirement
 Requirement focus
 Test cases derived from the specification

Use-case focus

 Test selection based on the user profile

(27)

Acceptance Testing

 User (or customer) involved
 Environment as close to field use as possible
 Focus on:
   Building confidence
   Compliance with the acceptance criteria defined in the contract

(28)

WHITE BOX TESTING TECHNIQUES

Statement coverage: executing each and every statement of the code is called statement coverage.

Decision coverage: execute each decision direction at least once.
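A small sketch of the difference (the example function and values are assumed for illustration): one test can execute every statement, yet decision coverage still needs a second test for the false outcome of the condition.

# Statement vs. decision coverage on a tiny function (illustrative).
def adjust(x):
    result = x
    if x < 5:
        result = x + 3   # the only conditional statement
    return result

# Statement coverage: x = 1 executes every statement (the if is taken).
assert adjust(1) == 4

# Decision coverage additionally requires the false outcome of x < 5.
assert adjust(7) == 7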

(29)

Statement Coverage

Definition

This technique is used to ensure that every statement / decision in the program is executed at least once.

Program sample

// statement 1
// statement 2
If ((A > 1) and (B = 0))
    // sub-statement 1
Else
    // sub-statement 2

Test conditions

1. (A > 1) and (B = 0)
2. (A <= 1) and (B NOT = 0)
3. (A <= 1) and (B = 0)
4. (A > 1) and (B NOT = 0)

Description

Statement coverage requires only that the if … else statement be executed once, not that the sub-statements be executed.

 Minimum level of structural coverage achieved
 Helps to identify unreachable code and remove it if required
 “Null else” problem: it does not ensure exercising the statements completely. Example: if x < 5 then x = x + 3; the x >= 5 decision is not enforced, so some paths are not covered.

(30)

Decision Coverage

Definition

A test case design technique in which test cases are designed to execute all the outcomes of every decision.

Program sample (the numeric constants were lost in extraction):

IF Y > 1 THEN
    IF Y > 9 THEN
        Y = Y + …
    ELSE
        Y = Y + …
    END
ELSE
    Y = Y + …
END

Number of paths = 3. Test cases:

1. (Y > 1) and (Y > 9)
2. (Y > 1) and (Y <= 9)
3. (Y <= 1)

[The slide also shows the corresponding control-flow graph, with T/F branches at the decisions Y > 1 and Y > 9.]
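A runnable version of the sample above, under the assumption that the increments lost in extraction are arbitrary constants (+1, +2, +3 here are placeholders), with one test per outcome of every decision:

# Runnable sketch of the nested-if decision-coverage sample.
# The increments are assumptions for illustration only.
def f(y):
    if y > 1:
        if y > 9:
            y = y + 1    # path 1: (Y > 1) and (Y > 9)
        else:
            y = y + 2    # path 2: (Y > 1) and (Y <= 9)
    else:
        y = y + 3        # path 3: (Y <= 1)
    return y

assert f(10) == 11   # (Y > 1) and (Y > 9)
assert f(5) == 7     # (Y > 1) and (Y <= 9)
assert f(0) == 3     # (Y <= 1)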

(31)

Condition Coverage - AND

Definition

 Both parts of the predicate are tested
 The program sample shows that all 4 test conditions are tested

Program sample

If ((A > 1) AND (B = 0))
    { // sub-statement }
Else
    { // sub-statement }

Test conditions

1. (A > 1) AND (B = 0)
2. (A <= 1) AND (B NOT = 0)
3. (A <= 1) AND (B = 0)
4. (A > 1) AND (B NOT = 0)

Conditions table

A > 1         B = 0        RESULT
TRUE   AND    TRUE         TRUE
TRUE   AND    FALSE        FALSE
FALSE  AND    FALSE        FALSE

(32)

Condition Coverage - OR

Definition

 Both parts of the predicate are tested
 The program sample shows that all 4 test conditions are tested

Program sample

If ((A > 1) OR (B = 0))
    { // sub-statement }
Else
    { // sub-statement }

Test conditions

1. (A > 1) OR (B = 0)
2. (A <= 1) OR (B NOT = 0)
3. (A <= 1) OR (B = 0)
4. (A > 1) OR (B NOT = 0)

Conditions table

A > 1         B = 0        RESULT
TRUE   OR     TRUE         TRUE
TRUE   OR     FALSE        TRUE
FALSE  OR     FALSE        FALSE

(33)

Loop Coverage

 Simple loops
 Nested loops
 Serial / concatenated loops
 Unstructured loops (goto)

Coverage

 Boundary value tests
 Cyclomatic complexity

[The slide shows the flow graph of a simple loop: I = 1; test I < N; the true branch prints and increments I; the false branch ends.]

Example of cyclomatic complexity:

for ( I = 1 ; I < n ; I++ )
    printf ("Simple Loop");
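A hedged sketch of loop boundary-value tests for the simple loop above: exercise zero, exactly one, and many iterations (the iteration counts are chosen for illustration).

# Loop-coverage sketch: test the simple loop at its boundaries.
def repeat_message(n):
    lines = []
    for i in range(1, n):        # mirrors: for (I = 1; I < n; I++)
        lines.append("Simple Loop")
    return lines

assert repeat_message(1) == []                # zero iterations (boundary)
assert repeat_message(2) == ["Simple Loop"]   # exactly one iteration
assert len(repeat_message(100)) == 99         # a typical many-iteration case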

(34)

Types of Testing

The different types of testing that can be implemented are listed below, followed by explanations of each:

 Black box testing
 White box testing
 Unit testing
 Incremental integration testing
 Integration testing
 Functional testing
 System testing
 End-to-end testing
 Sanity testing
 Regression testing
 Acceptance testing
 Load testing
 Stress testing
 Performance testing
 Usability testing
 Install / uninstall testing
 Recovery testing
 Security testing
 Compatibility testing
 Exploratory testing
 Ad-hoc testing
 Comparison testing
 Alpha testing
 Beta testing
 Mutation testing
 Conformance testing

(35)

Black Box Testing

 It can also be termed functional testing.
 Tests that examine the observable behavior of software as evidenced by its outputs, without reference to internal functions, are black box tests.
 It is not based on any knowledge of internal design or code; tests are based on requirements and functionality.
 As automatic code generation and code re-use become more prevalent in object-oriented programming environments, analysis of the source code itself becomes less important and functional tests become more important.

(36)

White Box Testing

 It can also be termed structural testing.
 Tests that verify the structure of the software and require complete access to the object’s source code are white box tests.
 It is known as white box testing because all internal workings of the code can be seen.
 White box tests make sure that the software structure itself contributes to proper and efficient program execution.
 It is based on knowledge of the internal logic of an application’s code; tests are based on coverage of code statements, branches, paths and conditions.

(37)

Unit Testing

 This is ‘micro’ scale testing that tests particular functions or code modules.
 It is always a combination of structural and functional tests, and is typically done by programmers rather than testers.
 It requires detailed knowledge of the internal program design and code, and may require test driver modules or test harnesses.
 Unit tests are not always done easily unless the application has a well-designed, modular structure.

(38)

Incremental Integration Testing

 This is continuous testing of an application as new functionality is added.
 These tests require that the various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed.

(39)

Integration Testing

 This is testing of combined parts of an application to ensure that they function together correctly.
 The parts can be code modules, individual applications, client and server applications on a network, etc.

(40)

Functional Testing

 This is black box testing geared to functional requirements, and should be done by testers.
 Testing done to ensure that the product functions the way it is designed to, according to the design specifications and documentation.
 This testing can involve testing of the product’s user interface, database management, and so on.

(41)

System Testing

 This is black box testing based on the overall requirements specifications.
 This testing begins once the modules are integrated enough to perform tests in a whole-system environment.

(42)

End-to-End Testing

 This is the ‘macro’ end of the test scale, similar to system testing.
 It involves testing a complete application environment as in real-world use.

(43)

Sanity Testing

Initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort.

(44)

Regression Testing

 This is re-testing of the product/software to ensure that all reported bugs have been fixed and that implementation of changes has not affected other functions.
 It is always difficult to determine the amount of re-testing required, especially when the software is at the end of the development cycle.
 These tests apply to all phases, wherever changes are being made.
 This testing also ensures reported product defects have been corrected for each new release.

(45)

Acceptance Testing

 This can be called the final testing, based on the specifications of the end-user or customer.
 It can also be based on use by end-users/customers over some limited period of time.
 This testing is often used in a Web environment, where “virtual clients” perform typical tasks such as browsing, purchasing items and searching databases contained within your web site.
 “Probing clients” record the exact server response times during this testing.

(46)

Load Testing

 Testing an application under heavy loads.
 For example, testing of a Web site under a range of loads to determine at what point the system’s response time degrades or fails.
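As a rough illustration only (the URL, thread count and request count are placeholders, and real load tests use dedicated tools such as LoadRunner), a load test drives many concurrent requests and watches response times:

# Minimal load-test sketch; target URL and load level are placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # placeholder target

def timed_request(_):
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

with ThreadPoolExecutor(max_workers=50) as pool:   # 50 concurrent "users"
    latencies = list(pool.map(timed_request, range(500)))

print(f"max latency: {max(latencies):.3f}s, "
      f"mean: {sum(latencies)/len(latencies):.3f}s")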

(47)

Stress Testing

 This term is often used interchangeably with ‘load’ and ‘performance’ testing.
 It is system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, or large complex queries to a database system.
 It is always aimed at finding the limits at which the system will fail through abnormal quantity or frequency of inputs. Examples could be:
   higher rates of inputs
   data rates an order of magnitude above ‘normal’
   test cases that require maximum memory or other resources
   test cases that cause ‘thrashing’ in a virtual operating system
   test cases that cause excessive ‘hunting’ for data on disk systems
 This testing can also attempt to determine combinations of otherwise normal inputs that cause failures.

(48)

Performance Testing

 This term is often used interchangeably with ‘stress’ and ‘load’ testing.
 This testing can be used to understand the application’s scalability, to benchmark performance in an environment, or to identify the bottlenecks in high hit-rate Web sites.
 It checks the run-time performance in the context of the integrated system.
 It may require special software instrumentation.
 Ideally, these types of testing are defined in requirements documentation or QA and test plans.

(49)

Usability Testing

 This is testing for ‘user-friendliness’.
 The target will always be the end-user or customer.
 Techniques such as interviews, surveys and video recording of user sessions can be used in this type of testing.

(50)

Install / Uninstall Testing

(51)

Recovery Testing

 Testing performed to determine how well a system recovers from crashes, hardware failures or other catastrophic problems.
 This is the forced failure of the software in a variety of ways to verify recovery.
 Systems need to be fault tolerant; at the same time, processing faults should not bring overall system operation to a halt.

(52)

Security Testing

 This testing is performed to determine how well the system protects against unauthorized internal or external access, willful damage, etc.; this can include:
   attempted penetration of the system by ‘outside’ individuals for fun or personal gain
   disgruntled or dishonest employees
 During this testing the tester plays the role of the individual trying to penetrate the system.
 A large range of methods can be used:
   attempt to acquire passwords through external clerical means
   use custom software to attack the system
   overwhelm the system with requests

(53)

Compatibility Testing

 Testing whether the software is compatible with a particular hardware / software / operating system / network environment.

(54)

Exploratory Testing

 Tests based on creativity: informal tests that are not based on any formal test plans or test cases; testers learn the software as they test it.

(55)

Ad-hoc Testing

 Similar to exploratory testing.
 The only difference is that these tests are taken to mean that the testers have significant understanding of the software before testing it.

(56)

Comparison Testing

 This testing compares software weaknesses and strengths to those of competing products.
 For some applications reliability is critical; redundant hardware and software may be used, and independent versions can be developed.
 Testing is conducted for each version with the same test data to ensure all provide identical output.
 All the versions are run with a real-time comparison of results.
 When the outputs of the versions differ, investigations are made to determine if there is a defect.

(57)

Alpha Testing

 This is testing of an application when development is nearing completion; mostly testing conducted at the developer’s site by a customer.
 The customer uses the software with the developer ‘looking over the shoulder’ and recording errors and usage problems.
 Testing is conducted in a controlled environment.
 Minor design changes can still be made as a result of this testing.

(58)

Beta Testing

 Testing conducted when development and testing are completed and bugs and problems need to be found before the final release.
 It is ‘live’ testing in an environment not controlled by the developer.
 The customer records the errors/problems and reports difficulties at regular intervals.
 Testing is conducted at one or more customer sites.

(59)

Mutation Testing

 A method of determining whether a set of test data or test cases is useful.
 Various code changes (‘bugs’) are deliberately introduced, and the program is retested with the original test data/cases to determine whether the bugs are detected.
 Proper implementation requires large computational resources.
 A mutated program differs from the original.
 The mutants are tested until their results differ from those obtained from the original program.
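A toy sketch of the idea (the function and the mutant are invented for illustration): deliberately change an operator and check whether the existing test data kills the mutant.

# Toy mutation-testing sketch: does our test data detect a seeded bug?
def max_of(a, b):            # original program
    return a if a > b else b

def max_of_mutant(a, b):     # mutant: '>' deliberately changed to '<'
    return a if a < b else b

test_data = [(1, 2), (2, 1), (3, 3)]

killed = any(max_of(a, b) != max_of_mutant(a, b) for a, b in test_data)
print("mutant killed" if killed else "test data is too weak")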

(60)

Conformance Testing

 Testing conducted to verify that the implementation conforms to industry standards.
 Producing tests for the behavior of an implementation to be sure that it provides the behavior the standard specifies.

(61)

Economics of Continuous Testing

Traditional Testing                    Continuous Testing
Accumulated    Errors                  Errors        Accumulated
Cost           Remaining               Remaining     Cost
$0             20                      10            $10
$0             40                      15            $25
$0             60                      18            $42
$480           12                      4             $182
$1690          0                       0             $582

(62)

Error:

An undesirable deviation from requirements.

Any problem, or cause of many problems, which stops the system from performing its functionality is referred to as an error.

Bug:

Any missing functionality, or any action performed by the system that is not supposed to be performed, is a bug.

A bug is an error found BEFORE the application goes into production. Any of the following may be the reason for the birth of a bug:

1. Wrong functionality
2. Missing functionality

(63)

Defect:

A defect is a variance from a desired attribute of a system or application: an error found AFTER the application goes into production.

Defects are commonly categorized into two types:

1. Variance from product specification
2. Variance from customer/user expectation

Failure:

Any expected action that is supposed to happen but does not can be referred to as a failure; in other words, the absence of the expected response to any request.

Fault:

A fault is the underlying cause of a failure: an incorrect step, process or data definition in the program.

(64)

http://www.exforsys.com
http://www.testing-post.com/testing/
http://en.wikipedia.org/wiki/Main_Page
www.sureshkumar.net
http://www.aptest.com/glossary.html
http://www.adstag.com/
http://www.softwaretestinghub.com
http://www.itquestionbank.com

(65)

STLC (Software Testing Life Cycle)

 Test Plan
 Test Design
 Test Execution
 Test Log
 Defect Tracking

(66)

Test Case

A set of test data and test programs (test scripts) and their expected results. A test case validates one or more system requirements and generates a pass or fail.

Test Scenario

A set of test cases that ensures that the business process flows are tested from end to end. They may be independent tests, or a series of tests that follow each other, each dependent on the output of the previous one.

(67)

Equivalence Partitioning: an approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single state value per class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any input class other than integer is provided, this would be considered a negative test assertion or condition.

E.g.: verify a credit limit within a given range (1,000 – 2,000). Here we can identify three conditions:

1. < 1,000
2. Between 1,000 and 2,000
3. > 2,000

Error Guessing

E.g.: date input – February 30, 2000; decimal digit – 1.99

Boundary Value Analysis

(68)

BVA (Boundary Value Analysis) - here we can define tests for the size and range of the Age field.

Size: three. Range:

Min      Pass
Min - 1  Fail
Min + 1  Pass
Max      Pass
Max - 1  Pass
Max + 1  Fail
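Putting the credit-limit example into code (a sketch; the validator itself is hypothetical): one representative value per equivalence class, plus the min/max boundary values from the table above.

# Equivalence-partitioning + boundary-value sketch for the credit-limit
# example (range 1,000 - 2,000). The validator is hypothetical.
def valid_credit_limit(amount):
    return 1000 <= amount <= 2000

# One representative per equivalence class:
assert not valid_credit_limit(500)    # class 1: < 1,000      -> fail
assert valid_credit_limit(1500)       # class 2: 1,000-2,000  -> pass
assert not valid_credit_limit(2500)   # class 3: > 2,000      -> fail

# Boundary values, as in the BVA table:
assert valid_credit_limit(1000)       # Min     -> pass
assert not valid_credit_limit(999)    # Min - 1 -> fail
assert valid_credit_limit(1001)       # Min + 1 -> pass
assert valid_credit_limit(2000)       # Max     -> pass
assert valid_credit_limit(1999)       # Max - 1 -> pass
assert not valid_credit_limit(2001)   # Max + 1 -> fail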

(69)

Test Scenarios - Sample

FS Reference: 3.2.1 Deposit

An order capture for a deposit contains fields like Client Name, Amount, Tenor and Interest for the deposit.

Business Rule:

If the tenor is greater than 10 months, the interest rate should be greater than 10%; otherwise a warning should be given by the application.

If the tenor is greater than 12 months, the order should not proceed.

Test Scenario ID   Client Name   Amount               Tenor          Interest                        Warning
Dep/01             123           >0                   12 months      0 < interest < 10%              Warning
Dep/02             abc           <0                   6 months       <0                              No go
Dep/03             12ab          with two decimals    11 months      with two decimals, rate 11%     No warning
Dep/04             Ab.Pvt        with four decimals   1.5 months     with four decimals              No warning
Dep/05             abc           character            blank          character                       No warning
Dep/06             abc           >0                   invalid date   >100                            No warning
Dep/07             abc           >0                   <system date   >0                              No warning

(70)

Test Cases

Test cases will be defined, and will form the basis for mapping the test cases to the actual transaction types that will be used for the integrated testing.

A test case gives values / qualifiers to the attributes that the test condition can have.

A test case is the end state of a test condition, i.e., it cannot be decomposed or broken down further.

Test cases contain the navigation steps, instructions, data and expected results required to execute the test case(s). They cover transfer of control between components, and transfer of data between components (in both directions).
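One way to picture a test case record holding those elements (the field names are illustrative, not a prescribed template):

# Illustrative test-case record; field names are assumptions.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    navigation_steps: list          # how to reach the function under test
    instructions: str               # what the tester should do
    data: dict                      # input values / qualifiers
    expected_result: str
    actual_result: str = ""
    status: str = "NOT RUN"         # PASS / FAIL after execution

tc = TestCase(
    case_id="Dep/01",
    navigation_steps=["Login", "Open deposit order screen"],
    instructions="Enter a 12-month tenor with interest below 10%",
    data={"client": "123", "tenor_months": 12, "interest": 0.08},
    expected_result="Application shows a warning",
)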

(71)

Test Data

Test data relates to both the inputs and the maintenance data required to execute the application. Data for executing the test scenarios should be clearly defined.

The test team can prepare this with the support of the database team and domain experts, or revamp the existing production data.

Example:

Business rule: if the interest to be paid is more than a given percentage and the tenor of the deposit exceeds one month, then the system should give a warning.

To populate the Interest to be Paid field of a deposit, we can give 9.5478 and make the tenor two months for a particular deposit. This will trigger the warning in the application.

(72)

Test Conditions

A test condition is any of the possible combinations and validations that can be attributed to a requirement in the specification. The importance of determining the conditions lies in:

1. Deciding on the architecture of the testing approach
2. Evolving the design of the test scenarios
3. Ensuring test coverage

The possible condition types that can be built are:

• Positive condition: the polarity of the value given for the test is to comply with the condition's existence
• Negative condition: the polarity of the value given for the test is not to comply with the condition's existence
• Boundary condition: the polarity of the value given for the test is to assess the extreme values of the condition
• User perspective condition: the polarity of the value given for the test is to reflect how an end user would exercise the condition

(73)

Software Defects

A defect is an improper program condition that is generally the result of an error. Not all errors produce program defects, as with incorrect comments or some documentation errors. Conversely, a defect could result from such non-programmer causes as improper program packaging or handling.

Defect Categories

Wrong: the specifications have been implemented incorrectly.
Missing: a specified requirement that is not in the built product.
Extra: a requirement incorporated into the product that was not specified.

(74)

Step 1: Identify the module to which the Use Case belongs.

Step 2: Identify the functionality of the Use Case with respect to the overall functionality of the system.

Step 3: Identify the actors involved in the Use Case.

Step 4: Identify the pre-conditions.

Step 5: Understand the business flow of the Use Case.

Step 6: Understand the alternate business flow of the Use Case.

Step 7: Identify any post-conditions and special requirements.

Step 8: Identify the test conditions from the Use Case / business rules and make a Test Condition Matrix document, module-wise, for each and every Use Case.

Step 9: Identify the main functionality of the module and document a complete test scenario document for the business flow (include any actions made in the alternate business flow if applicable).

Step 10: For every test scenario, formulate the test steps based on a navigational flow of the application, with the test condition matrix in a specific test case template.

(75)

Role of Documentation in Testing

 Testing practices should be documented so that they are repeatable.
 Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented.
 Change management for documentation should be used if possible.
 Ideally, a system should be developed for easily finding and obtaining documents.

(76)

Consider the situation where a bug report is marked invalid. The question is: what comments did the developer leave to indicate that it is invalid? If there are none, you need to discuss this with the developer. The reasons they may have are many:

1) You didn't understand the system under test correctly, because
   1a) the requirements have changed, or
   1b) you didn't get the whole picture.
2) You were testing against the wrong version of the software or configuration, or with the wrong OS or browser.
3) You made an assumption that was incorrect.
4) Your bug was not repeatable (in which case they may mark it as "works for me"), or it was repeatable only because memory was already corrupted after the first instance, and you can't reproduce it on a clean machine (again, this could be a "works for me" bug).

Just remember that a bug report isn't you writing a law that the developer must obey.

(77)

Traceability Matrix

A traceability matrix ensures that each requirement is traced to a specification in the use cases and functional specifications, to a test condition/case in the test scenario, and to the defects raised during test execution, thereby achieving one-to-one test coverage.

The entire process of traceability is time consuming. To simplify it, a tool such as Rational RequisitePro or Test Director can maintain the specifications of the documents, which are then mapped correspondingly. The specifications have to be loaded into the system by the user.

Even though it is a time-consuming process, it helps in finding the ‘ripple’ effect of altering a specification: the impacts on test conditions can be identified immediately using the trace matrix. A traceability matrix should be prepared between requirements and test cases.
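A minimal sketch of such a matrix as plain data (the requirement, test case and defect IDs are invented for illustration), which already supports the 'ripple effect' query described above:

# Minimal traceability-matrix sketch; all IDs are invented.
trace = {
    "REQ-001": {"test_cases": ["TC-01", "TC-02"], "defects": ["DEF-07"]},
    "REQ-002": {"test_cases": ["TC-03"],          "defects": []},
}

def impacted_tests(requirement_id):
    """Ripple effect: which test cases must be revisited if a spec changes?"""
    return trace.get(requirement_id, {}).get("test_cases", [])

print(impacted_tests("REQ-001"))   # ['TC-01', 'TC-02']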

(78)

What is Test Management?

Test management is a method of organizing application test assets and artifacts, such as:

Test requirements
Test plans
Test documentation
Test scripts
Test results

to enable easy accessibility and reusability. Its aim is to deliver quality applications in less time.

(79)

Test Strategy

Scope of Testing
Types of Testing
Levels of Testing
Test Methodology
Test Environment
Test Tools
Entry and Exit Criteria
Test Execution
Roles and Responsibilities
Risks and Contingencies
Defect Management

(80)

Test Requirements

The test team gathers the test requirements from the following baselined documents:

Customer Requirements Specification (CRS)
Functional Specification (FS) – Use Case, Business Rule, System Context
Non-Functional Requirements (NFR)
High Level Design Document (HLD)
Low Level Design Document (LLD)
System Architecture Document
Prototype of the application
Database Mapping Document
Interface Related Document
Other project-related documents such as e-mails and minutes of meetings
Knowledge Transfer sessions from the Development Team

(81)

Configuration Management

Software configuration management (SCM) is an umbrella activity that is applied throughout the software process. SCM identifies, controls, audits and reports modifications that invariably occur while software is being developed and after it has been released to a customer. All information produced as part of software engineering becomes part of the software configuration. The configuration is organized in a manner that enables orderly control of change.

The following is a sample list of software configuration items:

 Management plans (Project Plan, Test Plan, etc.)
 Specifications (Requirements, Design, Test Case, etc.)
 Customer documentation (Implementation Manuals, User Manuals, Operations Manuals, On-line Help Files)
 Source code (PL/1, Fortran, COBOL, Visual Basic, Visual C, etc.)
 Executable code (machine-readable object code, exe's, etc.)
 Libraries (Runtime Libraries, Procedures, %include Files, APIs, DLLs, etc.)
 Databases (data being processed, data a program requires, test data, regression test data, etc.)

(82)

Automated Testing Tools

 WinRunner, LoadRunner, Test Director from Mercury Interactive
 QARun, QALoad from Compuware
 Rational Robot, SiteLoad and SQA Manager from Rational
 SilkTest, SilkPerformer from Segue

(83)

Test Attributes

To different degrees, good tests have these attributes:

Power: when a problem exists, the test will reveal it.
Valid: when the test reveals a problem, it is a genuine problem.
Value: it reveals things your clients want to know about the product or project.
Credible: your client will believe that people will do the things that are done in this test.
Representative: of events most likely to be encountered by the user (cf. Musa's Software Reliability Engineering).
Non-redundant: this test represents a larger group that addresses the same risk.
Motivating: your client will want to fix the problem exposed by this test.
Performable: it can be performed as designed.
Maintainable: easy to revise in the face of product changes.
Repeatable: it is easy and inexpensive to reuse the test.
Pop (short for Karl Popper): it reveals things about our basic or critical assumptions.
Coverage: it exercises the product in a way that isn't already taken care of by other tests.
Easy to evaluate.
Supports troubleshooting: provides useful information for the debugging programmer.
Appropriately complex: as the program gets more stable, you can hit it with more complex tests and more closely simulate use by experienced users.
Accountable: you can explain, justify, and prove you ran it.
Cost: this includes time and effort, as well as direct costs.

(84)

Roles & Responsibilities

Test Project Manager
 Customer interface
 Master test plan
 Test strategy
 Project technical contact
 Interaction with development team
 Review test artifacts
 Defect management

Test Lead
 Module technical contact
 Test plan development
 Interaction with module team
 Review test artifacts
 Defect management
 Test execution summary
 Defect metrics reporting

Test Engineers
 Prepare test scenarios
 Develop test conditions/cases
 Prepare test scripts
 Test coverage matrix
 Execute tests as scheduled
 Defect log

Test Tool Specialist
 Prepare automation strategy
 Capture and playback scripts
 Run test scripts
 Defect log

Support Group for Testing
