
Quality Center Multiple Choice Questions-1


1) Test management with Quality Center involves …. phases.

A) Four
B) Five
C) Six
D) Seven

2) The phases of Test management with Quality Center in order are:

A) Specify Releases, Specify Requirements, Plan Tests, Execute Tests, Track Defects
B) Specify Requirements, Specify Releases, Plan Tests, Execute Tests, Track Defects
C) Specify Requirements, Plan Tests, Specify Releases, Execute Tests, Track Defects
D) Specify Releases, Specify Requirements, Plan Tests, Track Defects, Execute Tests

3) By creating a list of authorized users and assigning each user a password and user group, you control the kinds of additions and modifications each user makes to the project.

A) True
B) False

4) The …….determines the privileges that the user has within a project.

A) User ID
B) User Name
C) User group
D) User SID

5) You can import data from Word or Excel to a Quality Center project.

A) True
B) False

6) Quality Center 9.2 prompts you to install Microsoft .NET ...... if it is not already installed on your machine.

A) Framework 2.0
B) Framework 1.0
C) Framework 3.0
D) Framework 4.0

7) When you connect to a project, the Quality Center main window opens and displays the module in which you were last working.

A) True
B) False

8) The ….can change and override a user's properties or password.

A) Global Administrator
B) Site Administrator
C) Administrator
D) QC Administrator

9) You can filter Quality Center data to display only those records that meet the criteria that you define. How many filters can you define?

A) Single item as a filter
B) Two items as a filter
C) Multiple items as filter
D) Five items as filter

10) In the test plan tree, you can define the ……filter for associated test sets as "Open". This ensures that only tests that belong to an open test set are displayed.

A) Double
B) Twice
C) Cross
D) Multiple

11) By default, records appear in Quality Center in the

A) Increasing Order
B) Decreasing Order
C) Order in which they were added
D) None of these

12) You can save filter and sort information as a favorite view and then load it as needed.

A) True
B) False

13) You can copy and paste the filter, sort, and group by settings to another project.

A) True
B) False

14) You can also share the settings (filter, sort, and group by settings) with another user by pasting them to an e-mail or a text file.

A) True
B) False

15) When you use text search, the search ignores

A) articles (a, an, the)
B) coordinate conjunctions (and, but, for, nor, or);
C) Boolean operators (and, or, not, if, or, then).
D) All of above

16) You can export the data in a grid as a

A) text file,
B) Microsoft Excel worksheet,
C) Microsoft Word document, or
D) HTML document.
E) All of above

17) …….. alerting functions are available in Quality Center.

A) Three
B) Four
C) Two
D) Five

18) The alerting functions in Quality Center are:

A) Automatic notification alerts, Follow up alerts
B) Email alerts, Automatic notification alerts, Follow up alerts
C) Defect alerts, Email alerts, Automatic notification alerts, Follow up alerts
D) Final alerts, Email alerts, Automatic notification alerts, Follow up alerts

19) You can create traceability links between ……….. in the Requirements module.

A) Requirements
B) Test Cases
C) Tests
D) Defects

20) You can associate a test instance with a defect. This is performed by creating ……… the Test Plan module, or by adding a defect during a manual test run.

A) Linked Tests
B) Linked Defects
C) Linked Errors
D) Linked Data

Quality Center Multiple Choice Questions-2

21) You can view a list of alerts for a ………… entity.

A) A test in the test plan tree or Test Grid
B) A test instance in the Execution Grid
C) None of above
D) Both A) & B)

22) A red flag indicates that the alert is.........

A) New
B) Old
C) Follow up
D) Urgent

23) A gray flag indicates that the alert ………..

A) New
B) has already been read
C) Follow up
D) Urgent

24) A follow up flag is specific to your user login name. Anyone else viewing the record does not see your follow up alert.

A) True
B) False

25) Quality Center assigns the image a unique file name with a ..... extension.

A) .jpeg
B) .gif
C) .doc
D) .jpg

26) A ……… is a view of a Quality Center window with the settings you apply to it.

A) Personal View
B) My view
C) Favorite View
D) My QC View

27) You save favorite views in ........

A) Favorite folder
B) Personal folder
C) Both A) & B)
D) Public & Private folder

28) You can define the number of views displayed on the menu by setting the …………….. parameter in the Site Configuration tab in Site Administration.

A) favorites
B) favorites_Depth
C) favorites_view
D) favorites_Path

29) The requirements specification workflow consists of the following:

A) Define Testing Scope, Create Requirements, Detail Requirements, Assign to Releases, Analyze Requirements
B) Define Testing Scope, Detail Requirements, Create Requirements, Assign to Releases, Analyze Requirements
C) Define Testing Scope, Create Requirements, Detail Requirements, Analyze Requirements, Assign to Releases
D) Define Testing Scope, Create Requirements, Analyze Requirements, Detail Requirements, Assign to Releases

30) Requirement topics are recorded in the Requirements module by creating a

A) Requirements cycle
B) Requirements tree
C) Requirements plan
D) Requirements module


31) QA Manager changes a requirement from a ……….. status to a Reviewed status once it is approved.

A) Released
B) Tested
C) Not reviewed
D) None of these

32) You can also import requirements to your Quality Center project from Microsoft Word, Excel, or other third-party requirement management tools. To import requirements, you must first install the appropriate………

A) HP Third Party add-in.
B) HP Quality Center add-in.
C) HP Quality Center
D) HP Quality Center License

33) The Requirements Grid view enables you to display requirements in a …………….view.

A) Flat
B) Hierarchical
C) Flat-hierarchical
D) Flat non-hierarchical

34) The …………view enables you to analyze the breakdown of child requirements according to test coverage status.

A) Coverage Analysis
B) Coverage Requirements
C) Coverage
D) Coverage Tests


35) You can access the Requirements menu bar from the Requirements module by pressing the shortcut key …….

A) F1
B) F9
C) Ctrl + R
D) Alt + R


36) You can use the ………to restrict and dynamically change the fields and values in the Requirements module.

A) Script Edit
B) Scriptor Editor
C) Script Editor
D) Script Editing

37) The Requirements module enables you to define and manage your……...

A) requirements
B) All requirements
C) some requirements
D) Tedious requirements

38) You can rename or delete the Requirements root folder.

A) True
B) False

39) You can search for a particular requirement in the requirements tree or in the requirements grid using the …….command.

A) Search
B) Find
C) Search All
D) Find All

40) You can replace field values in the requirements tree or in the requirements grid using the …….command.

A) Replace
B) Replace All
C) Find & Replace

41) By default, Quality Center sends e-mail in HTML format. To send e-mail as plain text instead, edit the ……. parameter in the Site Configuration tab in Site Administration.


42) You can copy a requirement within the same project or between projects. Which of the below items are copied at the time of copying a requirement?

A) Test coverage.
B) defect linkage.
C) risk-based quality management data
D) All of above
E) None of above

43) You can also move a requirement to a new location in the requirements tree by dragging it.

A) True
B) False

44) You can delete a requirement from the Requirements module. Deleting a requirement does not delete its child requirements, test coverage, requirement traceability links, and defect linkage.

A) True
B) False

45) There are two methods you can use to create tests from requirements:

A) Convert Requirements to Tests & Generate a Test from Requirements
B) Convert Requirements to Tests & Convert a Test from Requirements
C) Convert Requirements to Tests & Generate a Requirement from Tests
D) Convert a Test from Requirements & Generate Requirements to Tests


46) When analyzing the impact of a change proposed in a specific requirement, the traceability links indicate the other ...... that the change might affect.

A) tests
B) requirements
C) tests & requirements
D) None

47) ...... links indicate requirements that affect a selected requirement. .....links indicate requirements that are affected by a selected requirement.

A) Trace from, Trace to
B) Trace to, Trace from
C) From trace, To trace
D) None of above

48) When a requirement changes, Quality Center alerts the affected requirements. The alerts can be seen by.......

A) Author of the requirement
B) users authorized by the Author of the requirement
C) all users
D) Administrator

49) While defining traceability relationships, you cannot add a requirement traceability link by dragging a requirement from the requirements tree to the appropriate grid.

A) True
B) False

50) While viewing Traceability Impact, the Impact Analysis tab helps you understand the many associations and dependencies that exist between the ........ by displaying them in a hierarchical tree structure.

A) Tests
B) Requirements
C) Both
D) None

51) Each requirement type with risk-based quality management enabled supports either:

A) risk analysis or risk assessment
B) risk breakdown or risk assessment
C) risk breakdown or risk review
D) risk analysis or risk evaluation

52) Performing a risk-based quality management analysis for an analysis requirement involves the following steps

A) Determine Risk groups, Define Testing Policy Settings, Finalize Testing Policy, Analyze Testing Strategy
B) Determine Risk Categories, Define Testing guidelines Settings, Finalize Testing Policy, Analyze Testing Strategy
C) Determine Risk Categories, Define Testing Policy Settings, Finalize Testing guidelines, Analyze Testing Strategy
D) Determine Risk Categories, Define Testing Policy Settings, Finalize Testing Policy, Analyze Testing Strategy

53) An ...... requirement is a requirement belonging to a type that represents higher levels in the requirements tree hierarchy, such as the Folder type.

A) analysis
B) assessment
C) Policy
D) Test

54) An ....... requirement is a requirement belonging to a type that represents requirements that are children of analysis requirements and at a lower level in the requirements tree hierarchy.

A) analysis
B) assessment
C) Policy
D) Test

55) For each assessment requirement under the analysis requirement, you determine the Risk Category. The Risk Category is composed of two factors.

A) Business Probability and Failure Criticality
B) Business Vitality and Failure Probability
C) Business Criticality and Failure Probability
D) Business Criticality and Failure Possibility

56) The Business Criticality of a requirement has three possible values:.....

A) Critical, Important, Nice to Have
B) Critical, Importance, Nice to Have
C) Critical, Important, Nice to Had
D) None of these

57) The Failure Probability of a requirement has three possible values:...

A) High, Standard, Low
B) High, Average, Low
C) High, Medium, Low
D) None of these

58) Quality Center defines four Testing Levels:

A) Full, Partial, Basic, and Low
B) Full, Partial, Basic, and None
C) Full, Half, Basic, and None
D) Full, Partial, Critical, and None

59) The Business Criticality of a requirement is a measure of how likely a test on the requirement is to fail, based on the technical complexity of the requirement's implementation, without consideration of the requirement's impact on the business.

A) True
B) False

60) The Failure Probability of a requirement is a measure of how important the requirement is to your business

A) True
B) False

Static Testing

Important Terms:
Static techniques and the test process
dynamic testing, static testing, static technique
Review process
entry criteria, formal review, informal review, inspection, metric, moderator/inspection leader, peer review, reviewer, scribe, technical review, walkthrough.
Static analysis by tools
Compiler, complexity, control flow, data flow, static analysis

I) Phases of a formal review

1) Planning
Selecting the personnel, allocating roles, defining entry and exit criteria for more formal reviews, etc.
2) Kick-off
Distributing documents, explaining the objectives, checking entry criteria etc.
3) Individual preparation
Work done by each of the participants on their own work before the review meeting, questions and comments etc.
4) Review meeting
Discussion or logging, make recommendations for handling the defects, or make decisions about the defects etc.
5) Rework
Fixing defects found, typically done by the author
6) Follow-up
Checking the defects have been addressed, gathering metrics and checking on exit criteria

II) Roles and responsibilities

Manager: decides on execution of reviews, allocates time in project schedules, and determines if the review objectives have been met.
Moderator: leads the review, including planning, running the meeting, and follow-up after the meeting.
Author: the writer or person with chief responsibility for the document(s) to be reviewed.
Reviewers: individuals with a specific technical or business background; identify defects and describe findings.
Scribe (recorder): documents all the issues, problems and open points identified during the meeting.

 III) Types of review

Informal review
No formal process, pair programming or a technical lead reviewing designs and code.
Main purpose: inexpensive way to get some benefit.
Walkthrough
Meeting led by the author, 'scenarios, dry runs, peer group', open-ended sessions.
Main purpose: learning, gaining understanding, defect finding
Technical review
Documented, defined defect detection process, ideally led by trained moderator, may be performed as a peer review, pre meeting preparation, involved by peers and technical experts
Main purpose: discuss, make decisions, find defects, solve technical problems and check conformance to specifications and standards

Inspection
Led by trained moderator (not the author), usually peer examination, defined roles, includes metrics, formal process, pre-meeting preparation, formal follow-up process.
Main purpose: find defects.

Note: walkthroughs, technical reviews and inspections can be performed within a peer group: colleagues at the same organizational level. This type of review is called a "peer review".

IV) Success factors for reviews

  • Each review has a clear predefined objective.
  • The right people for the review objectives are involved.
  • Defects found are welcomed, and expressed objectively.
  • People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author).
  • Review techniques are applied that are suitable to the type and level of software work products and reviewers.
  • Checklists or roles are used if appropriate to increase effectiveness of defect identification.
  • Training is given in review techniques, especially the more formal techniques, such as inspection.
  • Management supports a good review process (e.g. by incorporating adequate time for review activities in project schedules).
  • There is an emphasis on learning and process improvement.
V) Cyclomatic Complexity

The number of independent paths through a program

Cyclomatic Complexity is defined as: L – N + 2P

L = the number of edges/links in the graph
N = the number of nodes in the graph
P = the number of disconnected parts of the graph (connected components)

Alternatively, one may calculate Cyclomatic Complexity using the decision point rule:
number of decision points + 1

Cyclomatic Complexity and Risk Evaluation
1 to 10: a simple program, without very much risk
11 to 20: a complex program, moderate risk
21 to 50: a more complex program, high risk
> 50: an un-testable program (very high risk)
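The two calculation rules above can be checked against each other with a small sketch; the graph sizes below describe a hypothetical function containing one if and one while loop (two decision points):

```python
# Cyclomatic Complexity sketch: CC = L - N + 2P, where L = edges/links,
# N = nodes, P = connected components of the control-flow graph.
# The numbers describe a hypothetical function with one if and one while.

def cyclomatic_complexity(edges, nodes, parts=1):
    """Graph rule: CC = L - N + 2P."""
    return edges - nodes + 2 * parts

# Control-flow graph with 7 nodes and 8 edges in one connected component:
cc = cyclomatic_complexity(edges=8, nodes=7, parts=1)
print(cc)  # 3

# Decision-point rule: decision points + 1 (one if + one while = 2 decisions)
print(2 + 1)  # 3 -- both rules agree
```

Both rules give 3, putting the hypothetical function in the "simple program, without very much risk" band.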

Software Testing Standards

ISO - International Organization for Standardization (1947)
IEEE - Institute of Electrical and Electronics Engineers (1963)
IEC - International Electrotechnical Commission (1906)
BS (BSI) - British Standards (1901)
CMMI- Capability Maturity Model Integration
1) IEEE 610.12-1990
IEEE Standard Glossary of Software Engineering Terminology

2) IEEE 829-1998
IEEE Standard for Software Test Documentation
The Types of Document:
There are eight document types in the IEEE 829 standard, which can be used in three distinct phases of software testing:
Preparation of Tests
Test Plan: Plan how the testing will proceed.
Test Design Specification: Decide what needs to be tested.
Test Case Specification: Create the tests to be run.
Test Procedure: Describe how the tests are run.
Test Item Transmittal Report: Specify the items released for testing.
Running the Tests
Test Log: Record the details of tests in time order.
Test Incident Report: Record details of events that need to be investigated.
Completion of Testing
Test Summary Report: Summarise and evaluate tests.

3) IEEE Standard 1012-1998
IEEE Standard for Software Verification and Validation -Description

4) IEEE Standard 1028-1997
 IEEE Standard for Software Reviews -Description

5) IEEE 1008
IEEE Standard for Software Unit Testing

6) IEEE 1044-1993
IEEE Standard Classification for Software Anomalies  
7) IEEE 1219-1998
Standard for Software Maintenance
8) ISO/IEC 9126-1:2001
(Software engineering – Software product quality – Part 1)
Support for review, verification and validation, and a framework for quantitative quality evaluation, in the support process; support for setting organisational quality goals in the management process.

9) ISO/IEC 12207:2008
Systems and software engineering -- Software life cycle processes

10) ISO/IEC 14598-1:1999
Information technology -- Software product evaluation

11) ISO/IEC 2382-1:1993
Data Processing -- Vocabulary -- Part 1: Fundamental terms

12) ISO 9000:2000
Quality management systems – Fundamentals and vocabulary

13) BS 7925-2:1998
Software testing, Software component testing
(This standard defines the process for software component testing using specified test case design and measurement techniques. This will enable users of the standard to directly improve the quality of their software testing, and improve the quality of their software products)

14) DO-178B:1992
 – Software Considerations in Airborne Systems and Equipment

15) BS7925-1
The British software testing standard governing testing terminology

ISTQB Foundation Level Certification Guidelines

a) All questions must be written in US English.
b) Use good grammar, punctuation, and spelling.
c)  Questions must be based on the syllabus but should also be consistent with the 'real world'.
d) Type of the exam: Objective (Multiple choices)
e) Total no of Questions: 40
f)  Pass marks: 26 (65%), No negative marks
g)  Examination fees: 4000 (Through authorized organization like NRSTT- 3500 only)
h) Re-examination fees: 2500
i)  Exam duration: 75 Minutes
j) Result will be announced in 2 weeks, certificate will be sent in 8 weeks
k) ISTQB Foundation Level Certification validity: lifetime, whereas Advanced Certification validity is 3 years.
l) No pre-requisites for ISTQB Foundation Level Certification, whereas Advanced Certification requires 'Foundation Certification and 2 years of work experience'.
m) 100,000 ISTQB certifications by Sep 2008

Types of multiple-choice questions

Within the multiple-choice format, questions can be presented in different ways. For example, the amount of information presented in a question's stem can be limited or extensive. Also, a question writer can include written code within the stem, specifically for example, when writing questions to test knowledge of white box techniques.
Following are examples of the type of multiple-choice items to be used in any ISTQB qualification. Correct answers should always be the first option.

Basic Type Questions

The basic multiple-choice question has a short stem and a single correct response. A limited amount of information is presented in the stem, and a single set of response options is presented to the candidates. The following example of a basic multiple-choice question is targeted to assess knowledge of static testing at K1 cognitive level of application.

What does a tester do during "Static testing"?

a) Reviews requirements and compares them with the design
b) Runs the tests on the exact same setup each time
c) Executes test to check that all hardware has been set up correctly
d) Runs the same tests multiple times, and checks that the results are statistically meaningful

Roman Type Questions

Another variation of the basic multiple-choice question is the Roman type. In this format, the candidate is presented with several statements, each preceded by either a Roman numeral or a letter of the alphabet. This differs from the multiple-choice questions already discussed in that the response options may require the candidate to know or derive several pieces of related information. The task for the candidate is to select the option that represents the correct combination of statements, as shown in the following example:

Which of the following answers reflect when Regression testing should normally be performed?

A. Every week
B. After the software has changed
C. On the same day each year
D. When the environment has changed
E. Before the code has been written

1) B ,D are true, A, C, E are false
2) A ,B are true, C, D , E are false
3) B, C and D are true, A , E are false

4) B is true, A, C, D and E are false

Database Testing

1). What is database testing?

Testing the back-end databases, for example comparing the actual results with the expected results.

2). What is database testing and what do we test in database testing?

Database testing basically includes the following:
1) Data validity testing
2) Data integrity testing
3) Performance related to the database
4) Testing of procedures, triggers and functions
For data validity testing you should be good at SQL queries.
For data integrity testing you should know about referential integrity and the different constraints.
For performance-related testing you should have an idea about the table structure and design.
For testing procedures, triggers and functions you should be able to understand them.
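As a minimal sketch of a data-integrity check, the snippet below uses an in-memory SQLite database with a hypothetical employee/department schema and verifies that the database rejects a child row whose foreign key has no parent:

```python
import sqlite3

# Referential-integrity sketch (hypothetical employee/department schema):
# the test inserts a child row with no matching parent and expects the
# database to reject it with an integrity error.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE department (dept_id INTEGER PRIMARY KEY);
CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY,
    dept_id INTEGER NOT NULL REFERENCES department(dept_id)
);
INSERT INTO department VALUES (10);
""")

conn.execute("INSERT INTO employee VALUES (1, 10)")      # valid parent: accepted
try:
    conn.execute("INSERT INTO employee VALUES (2, 99)")  # orphan row: must fail
except sqlite3.IntegrityError as e:
    print("integrity enforced:", e)
```

The same idea carries over to any RDBMS: a data-integrity test deliberately violates each constraint and asserts that the violation is rejected.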

3). What do we normally check for in database testing?

Database testing involves some in-depth knowledge of the given application and requires a more defined plan of approach to test the data. Key issues include:
1) Data integrity
2) Data validity
3) Data manipulation and updates

The tester must be aware of the database design concepts and implementation rules.

4). How to test a database manually? Explain with an example.

Observe whether operations performed on the front end take effect on the back end or not. The approach is as follows: while adding a record through the front end, check on the back end whether the addition of the record has taken effect or not. Do the same for delete, update, and so on.

Ex: Enter an employee record into the database through the front end and check manually whether the record is added to the back end or not.
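The front-end/back-end check above can be sketched with an in-memory SQLite database standing in for the back end (the employee table and values are hypothetical):

```python
import sqlite3

# In-memory SQLite database stands in for the application's back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT)")

# Simulate the front-end operation: add an employee record.
conn.execute("INSERT INTO employee (emp_id, name) VALUES (?, ?)", (101, "Asha"))
conn.commit()

# Back-end check: the record added through the 'front end' must be present.
row = conn.execute("SELECT name FROM employee WHERE emp_id = 101").fetchone()
assert row == ("Asha",), "record was not reflected in the back end"
print("insert verified:", row[0])
```

Analogous checks for update and delete assert the changed value and the absence of the row, respectively.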

5). How to test a SQL query in WinRunner without using database checkpoints?

By writing a scripted procedure in TSL (WinRunner's Test Script Language) we can connect to the database and test the database and queries.

6). How do you test whether a database is updated when information is entered in the front end?

In WinRunner this is done with a database checkpoint. Manually, we enter some information through the front end, get the session names, and search for those session names in the back end; if the information is correct we will see it in the query results.
7). What are the different stages involved in database testing?

In DB testing we need to check for:
1. The field size validation
2. Check constraints
3. Whether indexes are created or not (for performance-related issues)
4. Stored procedures
5. Whether the field size defined in the application matches that in the DB

8). What SQL statements do we use in database testing?

DDL (Data Definition Language) statements, for example: CREATE, ALTER, DROP, TRUNCATE, COMMENT, RENAME.
DML (Data Manipulation Language) statements, for example: SELECT, INSERT, UPDATE, DELETE, MERGE, UPSERT, CALL, EXPLAIN PLAN, LOCK TABLE.
DCL (Data Control Language) statements, for example: GRANT, REVOKE, COMMIT, SAVEPOINT, ROLLBACK, SET TRANSACTION.
These are the database testing commands.
9). How to use SQL queries in WinRunner/QTP?

Using the output database checkpoint and the database checkpoint, select the SQL manual queries option and enter 'select' queries to retrieve data from the database, then compare the expected and actual results.

10). What steps does a tester take in testing stored procedures?

In my view, the tester has to go through the requirement as to why the particular stored procedure was written, and check whether all the required indexes, joins, updates and deletions are correct by comparing them with the tables mentioned in the stored procedure. He also has to ensure that the stored procedure follows the standard format: comments, updated by, etc.
11). How to check whether a trigger is fired or not while doing database testing?

It can be verified by querying the common audit log, where we are able to see the triggers fired.
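A sketch of this audit-log approach, using an in-memory SQLite database and a hypothetical trigger that logs UPDATEs on an employee table:

```python
import sqlite3

# Hypothetical schema: an AFTER UPDATE trigger writes a row into an audit
# table, so the test can confirm the trigger fired by querying the audit log.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, salary INTEGER);
CREATE TABLE audit_log (entity TEXT, action TEXT);
CREATE TRIGGER trg_salary_update AFTER UPDATE ON employee
BEGIN
    INSERT INTO audit_log (entity, action) VALUES ('employee', 'UPDATE');
END;
INSERT INTO employee VALUES (1, 50000);
""")

# Perform the operation that should fire the trigger, then check the log.
conn.execute("UPDATE employee SET salary = 55000 WHERE emp_id = 1")
fired = conn.execute(
    "SELECT COUNT(*) FROM audit_log WHERE entity='employee' AND action='UPDATE'"
).fetchone()[0]
assert fired == 1, "trigger did not fire"
print("trigger fired", fired, "time(s)")
```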

12). Is 'a fast database retrieval rate' a testable requirement?

No, because the requirement as stated is ambiguous. The SRS should clearly state the performance or transaction requirements, i.e. it should say something like 'a DB retrieval rate of 5 microseconds'.

13). How to test a DTS package created for data insert, update and delete? What should be considered while testing it? What conditions are to be checked if the data is inserted, updated or deleted using text files?

Data integrity checks should be performed. If the database schema is in third normal form, that should be maintained. Check whether any of the constraints have thrown an error. The most important command is DELETE: that is where things can go really wrong. Most of all, maintain a backup of the previous database.
14). How to test data loading in database testing?

Using the Query Analyzer, you have to do the following things while you are involved in data load testing:
1. You have to know about the source data (table(s), columns, data types and constraints).
2. You have to know about the target data (table(s), columns, data types and constraints).
3. You have to check the compatibility of source and target.
4. You have to open the corresponding DTS package in SQL Enterprise Manager and run the DTS package (if you are using SQL Server).
5. Then you should compare the column data of source and target.
6. You have to check the number of rows of source and target.
7. Then you have to update the data in the source and see whether the change is reflected in the target or not.
8. You have to check for junk characters and nulls.
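Steps 5, 6 and 8 above (comparing source and target) can be sketched like this; the src_orders/tgt_orders tables and the load step are hypothetical stand-ins for a real DTS job:

```python
import sqlite3

# Data-load verification sketch: after a (hypothetical) load job copies
# src_orders into tgt_orders, compare row counts and NULL counts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src_orders (order_id INTEGER, amount INTEGER);
CREATE TABLE tgt_orders (order_id INTEGER, amount INTEGER);
INSERT INTO src_orders VALUES (1, 100), (2, 200), (3, NULL);
INSERT INTO tgt_orders SELECT * FROM src_orders;  -- stands in for the load job
""")

src_count = conn.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
tgt_count = conn.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]
assert src_count == tgt_count, "row counts differ between source and target"

src_nulls = conn.execute(
    "SELECT COUNT(*) FROM src_orders WHERE amount IS NULL").fetchone()[0]
tgt_nulls = conn.execute(
    "SELECT COUNT(*) FROM tgt_orders WHERE amount IS NULL").fetchone()[0]
assert src_nulls == tgt_nulls, "NULL counts differ between source and target"
print("rows:", tgt_count, "nulls:", tgt_nulls)
```

Column-by-column value comparison (step 5) would extend this with a SELECT ... EXCEPT query between the two tables.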

15). What is the way of writing test cases for database testing?

You have to do the following for writing database test cases:
1. First of all, you have to understand the functional requirements of the application thoroughly.
2. Then you have to find out the back-end tables used, the joins used between the tables, the cursors used (if any), the triggers used (if any), the stored procedures used (if any), and the input and output parameters used for developing that requirement.
3. After knowing all these things, you have to write the test cases with different input values for checking all the paths of the stored procedure.
One thing: writing test cases for back-end testing is not like functional testing; you have to use white-box testing techniques.

Test Design Techniques

Important Terms:

The test development process

Test case specification, test design, test execution schedule, test procedure specification, test script, traceability.

Categories of test design techniques

Black-box test design technique, specification-based test design technique, white-box test design technique, structure-based test design technique, experience-based test design technique.

Specification-based or black box techniques

Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing.

Structure-based or white box techniques
Code coverage, decision coverage, statement coverage, structure-based testing.

Experience-based techniques

Exploratory testing, fault attack.

Test Design Techniques
o Specification-based/Black-box techniques
o Structure-based/White-box techniques
o Experience-based techniques

I) Specification-based/Black-box techniques
a) Equivalence partitioning
b) Boundary value analysis
c) Decision table testing
d) State transition testing
e) Use case testing

Equivalence partitioning

o Inputs to the software or system are divided into groups that are expected to exhibit similar behavior
o Equivalence partitions or classes can be found for both valid data and invalid data
o Partitions can also be identified for outputs, internal values, time-related values and for interface values.
o Equivalence partitioning is applicable at all levels of testing
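A minimal sketch of equivalence partitioning, assuming a hypothetical rule that ages 18 to 60 are valid: the input domain splits into three partitions, and one representative value per partition is enough:

```python
# Equivalence partitioning sketch for a hypothetical rule:
# ages 18..60 are accepted, anything else is rejected.

def is_valid_age(age):
    return 18 <= age <= 60

# Three partitions: invalid-low, valid, invalid-high.
# One representative test value per partition suffices.
representatives = {"invalid_low": 5, "valid": 40, "invalid_high": 75}

results = {name: is_valid_age(v) for name, v in representatives.items()}
print(results)  # {'invalid_low': False, 'valid': True, 'invalid_high': False}
```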

Boundary value analysis

o Behavior at the edge of each equivalence partition is more likely to be incorrect. The maximum and minimum values of a partition are its boundary values.
o A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value.
o Boundary value analysis can be applied at all test levels
o It is relatively easy to apply and its defect-finding capability is high
o This technique is often considered as an extension of equivalence partitioning.
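For the same hypothetical 18-to-60 rule, boundary value analysis tests the edges of the valid partition and the invalid values just outside them:

```python
# Boundary value analysis for a hypothetical valid partition of 18..60:
# test the minimum and maximum of the valid partition, plus the invalid
# values immediately outside it, where off-by-one defects tend to hide.

def is_valid_age(age):
    return 18 <= age <= 60

boundary_values = [17, 18, 60, 61]  # invalid/valid boundaries on each edge
print([(v, is_valid_age(v)) for v in boundary_values])
# [(17, False), (18, True), (60, True), (61, False)]
```

A faulty implementation such as `18 < age <= 60` passes the partition representatives but fails at the boundary value 18, which is exactly what this technique is designed to catch.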

Decision table testing

o In decision table testing, test cases are designed to execute combinations of inputs
o Decision tables are a good way to capture system requirements that contain logical conditions.
o The decision table contains triggering conditions, often combinations of true and false for all input conditions
o It may be applied to all situations when the action of the software depends on several logical decisions
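A sketch of decision table testing for a hypothetical discount rule with two conditions; each true/false combination in the table becomes one test case:

```python
# Decision table sketch for a hypothetical discount rule with two
# conditions (member? order over 100?) and the resulting action.
# Each column of the decision table becomes one test case.

def discount(is_member, order_total):
    if is_member and order_total > 100:
        return 20
    if is_member or order_total > 100:
        return 10
    return 0

# Rules: (member, total, expected discount), one per T/F combination.
table = [
    (True, 150, 20),   # T, T
    (True, 50, 10),    # T, F
    (False, 150, 10),  # F, T
    (False, 50, 0),    # F, F
]
for member, total, expected in table:
    assert discount(member, total) == expected
print("all", len(table), "decision-table cases pass")
```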

State transition testing

o In state transition testing, test cases are designed to execute valid and invalid state transitions
o A system may exhibit a different response depending on current conditions or previous history. In this case, that aspect of the system can be shown as a state transition diagram.
o State transition testing is much used in embedded software and technical automation.
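A sketch of state transition testing for a hypothetical card-session state machine; the explicit transition table makes both valid and invalid transitions easy to derive as test cases:

```python
# State transition sketch for a hypothetical ATM card session:
# valid transitions are listed explicitly; anything not in the table is
# invalid and should be rejected (a good source of negative test cases).

VALID = {
    ("idle", "insert_card"): "awaiting_pin",
    ("awaiting_pin", "correct_pin"): "ready",
    ("awaiting_pin", "wrong_pin"): "idle",
    ("ready", "eject_card"): "idle",
}

def next_state(state, event):
    key = (state, event)
    if key not in VALID:
        raise ValueError(f"invalid transition: {event} in state {state}")
    return VALID[key]

assert next_state("idle", "insert_card") == "awaiting_pin"  # valid transition
try:
    next_state("idle", "correct_pin")                       # invalid transition
except ValueError as e:
    print("rejected:", e)
```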

Use case testing

o In use case testing test cases are designed to execute user scenarios
o A use case describes interactions between actors, including users and the system
o Each use case has preconditions, which need to be met for a use case to work successfully.
o A use case usually has a mainstream scenario and sometimes alternative branches.
o Use cases, often referred to as scenarios, are very useful for designing acceptance tests with customer/user participation

II) Structure-based/White-box techniques
o Statement testing and coverage
o Decision testing and coverage
o Other structure-based techniques
  o condition coverage
  o multiple condition coverage

Statement testing and coverage:

Statement: An entity in a programming language, which is typically the smallest indivisible unit of execution

Statement coverage
The percentage of executable statements that have been exercised by a test suite

Statement testing
A white box test design technique in which test cases are designed to execute statements

Decision testing and coverage

Decision: A program point at which the control flow has two or more alternative routes

Branch: A node with two or more links to separate branches

Decision Coverage
The percentage of decision outcomes that have been exercised by a test suite

100% decision coverage implies both 100% branch coverage and 100% statement coverage

Decision testing
A white box test design technique in which test cases are designed to execute decision outcomes.
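To make the distinction concrete, here is a small illustrative function (hypothetical, not from the text) where a single test case reaches 100% statement coverage while exercising only half of the decision outcomes:

```python
def apply_discount(price, is_member):
    # One decision with two outcomes (True / False).
    if is_member:
        price = price * 0.9
    return price

# This one test case executes every statement, so by itself it
# achieves 100% statement coverage...
assert apply_discount(100, True) == 90.0

# ...but it covers only one of the two decision outcomes.
# 100% decision coverage additionally needs the False outcome:
assert apply_discount(100, False) == 100
```

This is why decision coverage subsumes statement coverage but not the other way around.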

Other structure-based techniques
Condition: A logical expression that can be evaluated as true or false

Condition coverage
The percentage of condition outcomes that have been exercised by a test suite

Condition testing
A white box test design technique in which test cases are designed to execute condition outcomes

Multiple condition testing
A white box test design technique in which test cases are designed to execute combinations of single condition outcomes
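The difference between condition coverage and multiple condition coverage can be sketched with a made-up two-condition decision (not from the text):

```python
from itertools import product

# A decision with two atomic conditions: (a and b).
def decide(a, b):
    return a and b

# Multiple condition testing: every combination of single condition
# outcomes, i.e. all 2**2 = 4 cases.
combos = list(product([True, False], repeat=2))
for a, b in combos:
    print(a, b, "->", decide(a, b))

# Condition coverage alone is weaker: it can be satisfied with only
# two of these cases, e.g. (True, True) and (False, False), which
# exercise each condition in both outcomes but cover just two of the
# four combinations.
```

The cost of multiple condition testing grows exponentially with the number of conditions, which is why it is reserved for the most critical decisions.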

III) Experience-based techniques
o Error guessing
o Exploratory testing

Error guessing
o Error guessing is a commonly used experience-based technique

o Testers generally anticipate defects based on experience; a list of these defects can be built from experience, available defect data, and common knowledge about why software fails.

Exploratory testing
o Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time boxes

o This approach is most useful where there are few or inadequate specifications and severe time pressure

Web Testing

Web Applications Testing


The instant worldwide audience of any Web Browser Enabled Application -- a Website -- makes its quality and reliability crucial factors in its success. Correspondingly, the nature of Websites and Web Applications pose unique software testing challenges. Webmasters, Web applications developers, and Website quality assurance managers need tools and methods that meet their specific needs. Mechanized testing via special purpose Web testing software offers the potential to meet these challenges. Our technical approach, based on existing Web browsers, offers a clear solution to most of the technical needs for assuring Website quality.

Websites impose some entirely new challenges in the world of software quality! Within minutes of going live, a Web application can have many thousands more users than a conventional, non-Web application. The immediacy of the Web creates immediate expectations of quality and rapid application delivery, but the technical complexities of a Website and variances in the browser make testing and quality control that much more difficult, and in some ways, more subtle, than "conventional" client/server or application testing. Automated testing of Websites is an opportunity and a challenge.

Dimensions of Quality

There are many dimensions of quality; each measure will pertain to a particular Website in varying degrees. Here are some common measures:
  • Timeliness: Websites change often and rapidly. How much has a WebSite changed since the last upgrade? How do you highlight the parts that have changed?
  • Structural Quality: How well do all of the parts of the WebSite hold together? Are all links inside and outside the WebSite working? Do all of the images work? Are there parts of the WebSite that are not connected?
  • Content: Does the content of critical pages match what is supposed to be there? Do key phrases exist continually in highly-changeable pages? Do critical pages maintain quality content from version to version? What about dynamically generated HTML (DHTML) pages?
  • Accuracy and Consistency: Are today's copies of the pages downloaded the same as yesterday's? Close enough? Is the data presented to the user accurate enough? How do you know?
  • Response Time and Latency: Does the WebSite server respond to a browser request within certain performance parameters? In an e-commerce context, how is the end-to-end response time after a SUBMIT? Are there parts of a site that are so slow the user discontinues working?
  • Performance: Is the Browser --> Web --> WebSite --> Web --> Browser connection quick enough? How does the performance vary by time of day, by load and usage? Is performance adequate for e-commerce applications? Taking 10 minutes -- or maybe even only 1 minute -- to respond to an e-commerce purchase may be unacceptable!

Impact of Quality

Quality is in the mind of the WebSite user. A poor quality WebSite, one with many broken pages and faulty images, with Cgi-Bin error messages, etc., may cost a lot in poor customer relations, lost corporate image, and even lost sales revenue. Very complex, disorganized WebSites can sometimes overload the user.

The combination of WebSite complexity and low quality is potentially lethal to Company goals. Unhappy users will quickly depart for a different site; and, they probably won't leave with a good impression.


The browser is the viewer of a WebSite and there are so many different browsers and browser options that a well-done WebSite is probably designed to look good on as many browsers as possible. This imposes a kind of de facto standard: the WebSite must use only those constructs that work with the majority of browsers. But this still leaves room for a lot of creativity, and a range of technical difficulties. And, multiple browsers' renderings and responses to a WebSite have to be checked.

Display Technologies:

What you see in your browser is actually composed from many sources:
  • HTML. There are various versions of HTML supported, and the WebSite ought to be built in a version of HTML that is compatible. This should be checkable.
  • Java, JavaScript, ActiveX. Obviously JavaScript and Java applets will be part of any serious WebSite, so the quality process must be able to support these. On the Windows side, ActiveX controls have to be handled well.
  • Cgi-Bin Scripts. This is a link from a user action of some kind (typically from a FORM passage or otherwise directly from the HTML, and possibly also from within a Java applet). All of the different types of Cgi-Bin Scripts (perl, awk, shell-scripts, etc.) need to be handled, and tests need to check "end to end" operation. This kind of "loop" check is crucial for e-commerce situations.
  • Database Access. In e-commerce applications you are either building data up or retrieving data from a database. How does that interaction perform in real-world use? If you give "correct" or "specified" input, does the result produce what you expect? Some access to information from the database may be appropriate, depending on the application, but this is typically found by other means.
  • Navigation: Users move to and from pages, click on links, click on images (thumbnails), etc. Navigation in a WebSite is often complex and has to be quick and error free.
  • Object Mode: The display you see changes dynamically; the only constants are the "objects" that make up the display. These aren't real objects in the OO sense, but they have to be treated that way. So the quality test tools have to be able to handle URL links, forms, tables, anchors, and buttons of all types in an "object like" manner, so that validations are independent of representation.
  • Server Response: How fast the WebSite host responds influences whether a user (i.e. someone on the browser) moves on or gives up. Obviously, Internet loading affects this too, but this factor is often outside the Webmaster's control, at least in terms of how the WebSite is written. Instead, it is more an issue of server hardware capacity and throughput. Yet, if a WebSite becomes very popular -- this can happen overnight! -- loading and tuning are real issues that are often imposed -- perhaps not fairly -- on the WebMaster.
  • Interaction & Feedback: For passive, content-only sites the only real quality issue is availability. For a WebSite that interacts with the user, the big factor is how fast and how reliable that interaction is.
  • Concurrent Users: Do multiple users interact on a WebSite? Can they get in each other's way? While WebSites often resemble client/server structures, with multiple users at multiple locations a WebSite can be much different, and much more complex, than a conventional client/server application.

Functionality Testing:

Test all the links in web pages, database connections, forms used in the web pages for submitting or getting information from the user, and cookies.

Check all the links:

  • Test the outgoing links from all the pages of the specific domain under test.
  • Test all internal links.
  • Test links jumping within the same page.
  • Test links used to send email to the admin or other users from web pages.
  • Test to check if there are any orphan pages.
  • Lastly in link checking, check for broken links in all the above-mentioned links.
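The first step of automating these checks is collecting every link on a page; a minimal sketch using Python's standard html.parser follows (the sample page is hypothetical). Actually requesting each URL and flagging non-200 responses as broken would be the next step.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every href on a page so each link can then be checked."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page under test:
page = '<a href="/home">Home</a> <a href="mailto:admin@example.com">Mail</a>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/home', 'mailto:admin@example.com']

# Internal links (starting with "/"), outgoing links, and mailto
# links can then be verified by the appropriate method for each.
```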

Test forms in all pages:

Forms are an integral part of any web site. Forms are used to get information from users and to keep interaction with them. So what should be checked on these forms?
  • First check all the validations on each field.
  • Check the default values of fields.
  • Check wrong inputs to the fields in the forms.
  • If there are options to create, delete, view or modify forms, test them.
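The field-validation checks above can be sketched as a small illustrative validator (the email and age rules are made-up examples, not from the text):

```python
import re

# Hypothetical per-field validations: each field maps to a check.
VALIDATORS = {
    "email": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "age":   lambda v: v.isdigit() and 18 <= int(v) <= 99,
}

def validate(form):
    """Return the list of field names that fail validation."""
    return [f for f, check in VALIDATORS.items() if not check(form.get(f, ""))]

# Valid input passes; wrong input to each field is reported.
assert validate({"email": "a@b.com", "age": "30"}) == []
assert validate({"email": "not-an-email", "age": "7"}) == ["email", "age"]
```

A form test suite would pair each field with its valid defaults, valid values, and deliberately wrong inputs.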

Cookies testing:

Cookies are small files stored on the user's machine. They are basically used to maintain sessions, mainly login sessions. Test the application by enabling or disabling cookies in your browser options. Test whether the cookies are encrypted before being written to the user's machine. If you are testing session cookies (i.e. cookies that expire after the session ends), check login sessions and user stats after the session ends. Check the effect on application security of deleting the cookies.
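Some of these cookie checks can be automated. For example, Python's standard http.cookies module can parse a Set-Cookie header so a test can assert on its security attributes (the session_id cookie below is a hypothetical example):

```python
from http.cookies import SimpleCookie

# Hypothetical session cookie as the server might set it.
header = "session_id=abc123; HttpOnly; Secure; Max-Age=0"
cookie = SimpleCookie()
cookie.load(header)
morsel = cookie["session_id"]

# Checks matching the points above: the cookie should not be readable
# by scripts (HttpOnly), should travel only over HTTPS (Secure), and
# Max-Age=0 means it is expired/deleted immediately.
assert morsel.value == "abc123"
assert morsel["httponly"]
assert morsel["secure"]
assert morsel["max-age"] == "0"
```

Note this only inspects the header; whether the cookie value itself is encrypted or merely encoded still needs a manual check.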

Validate your HTML/CSS:

If you are optimizing your site for search engines then HTML/CSS validation is very important. Mainly validate the site for HTML syntax errors. Check whether the site is crawlable by different search engines.

Database testing:

Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete, or modify forms, or do any DB-related functionality. Check that all database queries execute correctly and that data is retrieved and updated correctly. Database testing could also cover load on the DB; we address this in web load/performance testing below.

Usability Testing:

Test for navigation:

Navigation means how the user surfs the web pages: different controls like buttons and boxes, and how the user uses the links on the pages to surf different pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check whether the provided instructions are correct, i.e. whether they serve their purpose. The main menu should be provided on each page, and it should be consistent.

Content checking:

Content should be logical and easy to understand. Check for spelling errors. Dark colors annoy users and should not be used in the site theme. You can follow commonly accepted standards for web page and content building, like those mentioned above about annoying colors, fonts, frames etc.
Content should be meaningful. All the anchor text links should work properly. Images should be placed properly, with proper sizes.
These are some basic standards that should be followed in web development. Your task is to validate all of them as part of UI testing.

Other user information for user help:

This covers the search option, sitemap, help files etc. The sitemap should be present with all the links in the web site, with a proper tree view of navigation. Check all links on the sitemap.
A "search in the site" option helps users find the content pages they are looking for easily and quickly. These are all optional items, and if present they should be validated.

Interface Testing:

The main interfaces are:
Web server and application server interface
Application server and Database server interface.
Check that all interactions between these servers are executed properly and that errors are handled properly. If the database or web server returns an error message for any query by the application server, then the application server should catch it and display the error message appropriately to users. Check what happens if the user interrupts a transaction in between, and what happens if the connection to the web server is reset in between.

Compatibility Testing:

Compatibility of your web site is a very important testing aspect. The compatibility tests to be executed are:
  • Browser compatibility
  • Operating system compatibility
  • Mobile browsing
  • Printing options

Browser compatibility:

In my web-testing career I have experienced this as the most influential part of web site testing.
Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site coding should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI functionality, performing security checks or validations, then put more emphasis on browser compatibility testing of your web application.
Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari and Opera, with different versions.

OS compatibility:

Some functionality in your web application may not be compatible with all operating systems. New technologies used in web development, like graphic designs and interface calls such as different APIs, may not be available in all operating systems.
Test your web application on different operating systems like Windows, Unix, Mac, Linux and Solaris, with different OS flavors.

Mobile browsing:

Mobile browsing is growing rapidly, so test your web pages on mobile browsers as well; compatibility issues may appear on mobile.

Printing options:

If you provide page-printing options, make sure fonts, page alignment and page graphics get printed properly. Pages should fit the paper size, or the size mentioned in the printing option.

Performance testing:

The web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing
Test application performance on different internet connection speeds.
In web load testing, test whether many users can access or request the same page. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages, etc.
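A minimal sketch of the load-testing idea, using Python threads as stand-ins for simultaneous users. The handle_request function is a hypothetical placeholder; a real load test would issue HTTP requests against the site under test instead.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(page):
    """Stand-in for a page request against the site under test."""
    time.sleep(0.01)          # simulated server work
    return f"200 OK {page}"

# Simulate 50 users requesting the same page concurrently and check
# that every request completes successfully.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(handle_request, ["/home"] * 50))

print(sum(r.startswith("200") for r in results), "of",
      len(results), "requests succeeded")
```

Dedicated load tools add what this sketch omits: ramp-up profiles, response-time percentiles, and sustained load over time.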

Stress testing: Generally, stress means stretching the system beyond its specified limits. Web stress testing is performed to try to break the site by applying stress, then checking how the system reacts to the stress and how it recovers from crashes. Stress is generally applied to input fields, login and sign-up areas.
In web performance testing, web site functionality on different operating systems and different hardware platforms is checked for software and hardware memory leakage errors.

Security Testing:

Following are some tests for web security testing:
  • Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not open.
  • If you are logged in using a username and password and browsing internal pages, then try changing URL options directly. E.g. if you are checking some publisher site statistics with publisher site ID=123, try directly changing the URL's site ID parameter to a different site ID which is not related to the logged-in user. Access should be denied for this user to view other users' stats.
  • Try some invalid inputs in input fields like the login username, password and input text boxes. Check the system's reaction to all invalid inputs.
  • Web directories or files should not be accessible directly unless a download option is given.
  • Test the CAPTCHA against automated script logins.
  • Test whether SSL is used for security measures. If used, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
  • All transactions, error messages, and security breach attempts should get logged in log files somewhere on the web server.
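The URL-manipulation check above can be mirrored in an illustrative access-control model (the site IDs and user names are made up): changing the ID parameter in the URL must not expose another user's data.

```python
# Hypothetical ownership table: site_id -> owning user.
OWNERS = {123: "alice", 456: "bob"}

def view_stats(user, site_id):
    """Return site statistics only to the site's owner."""
    if OWNERS.get(site_id) != user:
        return "403 Forbidden"
    return f"stats for site {site_id}"

# The logged-in user can see their own stats:
assert view_stats("alice", 123) == "stats for site 123"

# Editing the site_id in the URL must not expose bob's stats:
assert view_stats("alice", 456) == "403 Forbidden"
```

In a real test this becomes: log in as one user, rewrite the ID parameter in the address bar, and assert the server responds with a denial rather than another user's data.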