Software testing is the process of executing a program or system with the intent of finding errors.
Software testing is the process used to help identify the correctness, completeness, security, and quality of the developed computer software.
(OR)
The process of evaluating a software application or program to find the differences between the actual results and the expected results.
Software testing serves three main purposes:
1. Verification
2. Validation
3. Defect finding
The verification process confirms that the software meets its technical specifications and user requirements. It is process-based.
A defect is a variance between the expected and actual result. The defect's ultimate source may be traced to a fault introduced in the specification, design, or development (coding) phase.
Verification is done through frequent evaluations and meetings to appraise the documents, policies, code, requirements, and specifications. This is done with checklists, walkthroughs, and inspection meetings.
Validation is done during actual testing, and it takes place after all the verification is complete.
Use cases are prepared by business analysts from the functional requirements specification (FRS) according to the user requirements.
Test cases are prepared by test engineers based on the use cases. A test case is a set of procedures that guides a tester through executing a test.
Testing Methodology?
The kind of approach followed while testing, e.g. functional testing, regression testing, retesting, confirmation testing.
Exploratory Testing:
Testing done without full knowledge of the requirements; the tester learns the application while testing it and designs tests on the fly.
Ad-Hoc testing:
Informal testing done without plans or documentation, by giving random inputs.
New: When the bug is posted for the first time, its state is "New".
Open: After the tester reports the bug, the lead checks whether it is genuine; if so, the state becomes "Open".
Assign: The lead then assigns the bug to a developer; that state is called "Assign".
Test: Before the developer releases the software with the bug fixed, he changes the state of the bug to "Test".
Fixed: When the developer has resolved the bug, the status is "Fixed".
Reopen: If the bug still exists even after it has been fixed by the developer, the tester changes the status to "Reopen".
Closed: If the bug no longer exists, the status is "Closed".
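The life cycle above can be sketched as a small state machine. The transition table below is a simplified interpretation of these states (real trackers such as Bugzilla define more states and transitions):

```python
# Simplified bug life cycle as a state machine (illustrative sketch).
VALID_TRANSITIONS = {
    "NEW": {"OPEN"},               # lead verifies the bug is genuine
    "OPEN": {"ASSIGN"},            # lead assigns it to a developer
    "ASSIGN": {"FIXED"},           # developer resolves the bug
    "FIXED": {"TEST"},             # build with the fix is released for retest
    "TEST": {"CLOSED", "REOPEN"},  # tester verifies the fix
    "REOPEN": {"ASSIGN"},          # bug still exists; back to a developer
    "CLOSED": set(),               # terminal state
}

class Bug:
    def __init__(self, bug_id):
        self.bug_id = bug_id
        self.state = "NEW"

    def move_to(self, new_state):
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

# A full pass through the cycle, including one reopen round.
bug = Bug(1)
for state in ["OPEN", "ASSIGN", "FIXED", "TEST", "REOPEN",
              "ASSIGN", "FIXED", "TEST", "CLOSED"]:
    bug.move_to(state)
```

Encoding the transitions explicitly makes illegal moves (e.g. closing a bug that was never tested) fail fast.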
V-Model:
The V-model is a model in which verification and validation run in parallel. As soon as the requirements are received from the customer, verification activities form the left side of the V and validation activities the right side.
For short-duration projects (around 6 months) the waterfall model is followed; for longer-duration projects the V-model is followed. The waterfall model is much simpler than the V-model.
Test plan:
SRS:
The software requirements specification (SRS) describes what the software will do and how it will be expected to perform.
Traceability Matrix:
It is the mapping between the customer requirements and the prepared test cases. It is used to find whether all the requirements are covered or not.
Levels of Testing:
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Unit testing is testing done on a unit, the smallest piece of software, to verify that it satisfies its functional specification or its intended design structure.
The tools used in unit testing are debuggers and tracers, and it is done by programmers.
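A minimal sketch of a unit test in Python's unittest framework, for a hypothetical `discount` function; it exercises a typical value, the boundaries, and an invalid input:

```python
import unittest

# Unit under test: a hypothetical function from the application.
def discount(price, percent):
    """Apply a percentage discount; rates outside 0-100 are rejected."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_boundaries(self):
        self.assertEqual(discount(80.0, 0), 80.0)    # no discount
        self.assertEqual(discount(80.0, 100), 0.0)   # full discount

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(80.0, 150)

# Run the suite without exiting the interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

Each test method verifies one aspect of the unit's functional specification, which is exactly the scope unit testing targets.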
Integration Testing
System Testing
System integration testing is the process of verifying the interaction between two or more software systems, and it can be performed after software system collaboration is completed.
User Acceptance Testing = (Testing done with the intent of confirming the readiness of the product and customer acceptance.)
Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to decide whether or not to accept the system. It is done against the requirements and is performed by actual users.
Acceptance Testing:
Formal testing conducted to determine whether or not a system satisfies its acceptance
criteria, which enables a customer to determine whether to accept the system or not.
Compatibility testing
Installation Testing
Functional Testing
It checks that the functional specifications are correctly implemented. It can also check whether non-functional behavior is as expected.
Stress testing
To evaluate a system beyond the limits of its specified requirements or system resources (such as disk space, memory, or processor utilization), to ensure the system does not break unexpectedly.
Load Testing
Load Testing, a subset of stress testing, verifies that a web site can handle a particular
number of concurrent users while maintaining acceptable response times.
Scalability testing is used to check whether the functionality and performance of a system can meet changes in volume and size as per the requirements.
Scalability testing can be done by running load tests with various software and hardware configurations, while the rest of the testing environment settings are kept unchanged.
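A simplified sketch of the load/stress idea in Python; `send_request` is a hypothetical stand-in (a real test would call the application, e.g. over HTTP), and the user counts step from the specified load up into stress territory:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real request to the system under test (hypothetical;
# in a real load test this would call the application, e.g. over HTTP).
def send_request(user_id):
    start = time.perf_counter()
    time.sleep(0.01)              # simulate server processing time
    return time.perf_counter() - start

def run_load(concurrent_users):
    """Fire one request per simulated concurrent user; collect timings."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(send_request, range(concurrent_users)))
    return max(times), sum(times) / len(times)

# Load-test at the specified limit, then step beyond it (stress);
# repeating under changed configurations would test scalability.
for users in (25, 50, 100):
    worst, avg = run_load(users)
    print(f"{users:3d} users: avg {avg * 1000:.1f} ms, worst {worst * 1000:.1f} ms")
```

The same harness serves load testing (at the limit), stress testing (beyond it), and scalability testing (rerun with different configurations).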
Regression Testing = (Testing the application to find whether a change in the code affects any other part of the application.)
Performance Testing
To evaluate the time taken, or response time, of the system to perform its required functions, in comparison with the requirements.
Alpha Testing: testing of a software product or system conducted at the developer's site by the customer.
Beta Testing: testing conducted at one or more customer sites by the end users of a delivered software product or system.
Usability Testing = (Testing the ease with which users can learn and use a product.)
Usability testing is a technique used to evaluate a product by testing it on users. This can be
seen as an irreplaceable usability practice, since it gives direct input on how real users use
the system. This is in contrast with usability inspection methods where experts use different
methods to evaluate a user interface without involving users.
OR
It evaluates the Human Computer Interface. Verifies for ease of use by end-users. Verifies
ease of learning the software, including user documentation. Checks how effectively the
software functions in supporting user tasks. Checks the ability to recover from user errors.
Data Flow Testing:
Selects test paths according to the locations of definitions and uses of variables.
Loop Testing:
Loops are fundamental to many algorithms. Loops can be classified as simple, concatenated, nested, and unstructured.
Note that unstructured loops are not to be tested; rather, they are redesigned.
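For a simple loop, a common guideline is to exercise it at zero, one, two, a typical number, and the maximum number of iterations. A sketch, with a hypothetical `running_total` function as the unit under test:

```python
# Unit under test: a simple loop (hypothetical example function).
def running_total(values):
    total = 0
    for v in values:       # simple loop: body executes len(values) times
        total += v
    return total

# Simple-loop test points: skip the loop entirely, one pass, two passes,
# a typical number of passes, and the maximum expected size.
assert running_total([]) == 0                      # zero iterations
assert running_total([7]) == 7                     # one iteration
assert running_total([3, 4]) == 7                  # two iterations
assert running_total(list(range(10))) == 45        # typical case
assert running_total(list(range(1000))) == 499500  # at the expected maximum
```

Nested and concatenated loops extend the same idea, fixing the outer loops at minimum values while the innermost loop is exercised at these points.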
Configuration Testing
It is used when software is meant for different types of users. It checks whether the software performs correctly for all of them.
Recovery Testing
Recovery testing is the activity of testing how well an application is able to recover from crashes, hardware failures, and other similar problems.
1. While an application is running, suddenly restart the computer, and afterwards check the validity of the application's data.
2. While an application is receiving data from a network, unplug the connecting cable.
After some time, plug the cable back in and analyze the application's ability to
continue receiving data from the point at which the network connection disappeared.
3. Restart the system while a browser has a definite number of sessions. Afterwards,
check that the browser is able to recover all of them.
Security Testing
Security testing is a process to determine that an information system protects data and
maintains functionality as intended.
OR
Security testing is the process that determines that confidential data stays confidential
OR
Testing how well the system protects against unauthorized internal or external access, willful damage, etc.
Test Plan: Test Plan is a document with information on Scope of the project,
Approach, Schedule of testing activities, Resources or Manpower required, Risk
Issues, Features to be tested and not to be tested, Test Tools and Environment
Requirements.
Test Strategy: Test Strategy is a document prepared by the Quality Assurance
Department with the details of testing approach to reach the Quality standards.
Test Scenario: Test Scenario is prepared based on the test cases and test scripts
with the sequence of execution.
Test Case: A test case is a document, normally prepared by the tester, with the sequence of steps to test the behavior of a feature/functionality/non-functionality of the application. A test case document consists of Test Case ID, Test Case Name, Conditions (pre- and post-conditions) or Actions, Environment, Expected Results, Actual Results, and Pass/Fail. Test cases can be broadly classified as user interface test cases, positive test cases, and negative test cases.
Test Script: Test Script is a program written to test the functionality of the
application. It is a set of system readable instructions to automate the testing with
the advantage of doing repeatable and regression testing easily.
Test Environment: It is the Hardware and Software Environment where the testing is
going to be done. It also explains whether the software under test interacts with
Stubs and Drivers.
Test Procedure: Test Procedure is a document with the detailed instruction for step
by step execution of one or more test cases. Test procedure is used in Test Scenario
and Test Scripts.
Test Log: Test Log contains the details of test case execution and the output
information.
Fuzz Testing:
Fuzz testing can be automated for maximum effect on large applications. This testing improves confidence that the application is safe and secure.
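A minimal fuzzer sketch, assuming a hypothetical `parse_age` function as the unit under test: random printable strings are generated, and the only properties checked are that the parser never crashes and never accepts an out-of-range value.

```python
import random
import string

# Unit under test: a naive parser (hypothetical) that should never crash,
# whatever input it receives; it may reject input, but must not raise.
def parse_age(text):
    text = text.strip()
    if not text.isdigit():
        return None          # reject non-numeric input
    age = int(text)
    return age if 0 <= age <= 150 else None

random.seed(0)               # reproducible fuzz run
alphabet = string.printable  # ASCII letters, digits, punctuation, whitespace
for _ in range(10_000):
    fuzz_input = "".join(random.choice(alphabet)
                         for _ in range(random.randint(0, 30)))
    # The fuzzer asserts only broad safety properties, not exact outputs.
    result = parse_age(fuzz_input)
    assert result is None or 0 <= result <= 150
```

Real fuzzers (coverage-guided tools, mutation of valid samples) are far more sophisticated, but the core loop is the same: generate unexpected input, feed it in, and watch for crashes or violated invariants.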
Testing strategy:
Black box testing (BBT) is testing of the application without knowledge of the code; it is also called functional testing.
White box testing (WBT) is also called structural or glass box testing.
White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations perform according to the specification and that all internal components have been adequately exercised.
Designing and Testing are two different phases in a software development process (SDLC).
1. Information Gathering
2. Analysis
3. Designing
4. Coding
5. Testing
6. Implementation and Maintenance
In testing terms (the STLC), designing tests includes preparing the test strategy, test plan, and test case documents, and testing means executing the test cases and generating test reports.
Designing the application as per the requirements means deriving the functional flow, the alternative flows, how many modules are being handled, the data flow, etc.
LLD - Low Level Design Document: This level deals with the lower-level modules. The diagram handled here is the Data Flow Diagram. Developers handle this level.
Here the design team divides the total application into modules and derives the logic for each module.
HLD - High Level Design Document: This level deals with the higher-level modules. The diagram handled here is the ER (Entity Relationship) diagram. Both developers and testers handle this level.
Here the design team prepares the functional architecture, i.e. the functional flow.
Coding: writing the source code as per the LLD to meet the customer requirements.
SMOKE TESTING:
A set of test cases executed against a new build of the application.
Smoke testing verifies whether the build is testable or not.
If it is not, testers can reject the build.
SANITY TESTING:
It is also a set of test cases, testing the major and critical functionality of the application.
It is a one-time testing process.
A quick-and-dirty test that the major functions of a piece of software work. The term originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Skim Testing:
A testing technique used to determine the fitness of a new build or release.
Branch Testing:
Testing in which all branches in the program source code are tested at least once.
Many test engineers are confused about the difference between software test efficiency and software test effectiveness. Below is a summary:
a. Efficiency is internal to the organization: how many resources were consumed and how well those resources were utilized.
b. Software test efficiency is the number of test cases executed divided by unit of time (generally per hour).
c. Test efficiency measures the amount of code and testing resources required by a program to perform a particular function.
Software test efficiency can be calculated with different formulas depending on the factor being measured.
Software test effectiveness judges the effect of the test environment on the application, and likewise has several formulas depending on the factor being measured.
Soak (Endurance) Testing:
Running a system at high load for a prolonged period of time; for example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify performance problems that appear only after a large number of transactions have been executed.
The software life cycle includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, and phase-out.
What are SDLC and STLC and the different phases of both?
SDLC
Requirement phase
Design phase (HLD, DLD (Program spec))
Coding
Testing
Release
Maintenance
STLC
System Study
Test planning
Writing Test case or scripts
Review the test case
Executing test case
Bug tracking
Report the defect
STLC
Every testing project has to follow the waterfall model of the testing process.
According to the respective projects, the scope of testing can be tailored, but the process
mentioned above is common to any testing activity.
Software Testing has been accepted as a separate discipline to the extent that there is a
separate life cycle for the testing activity. Involving software testing in all phases of the
software development life cycle has become a necessity as part of the software quality
assurance process. Right from the Requirements study till the implementation, there needs
to be testing done on every phase. The V-Model of the Software Testing Life Cycle along
with the Software Development Life cycle given below indicates the various phases or levels
of testing.
Project initiation
Requirement gathering and documenting
Designing
Coding and unit testing
Integration testing
System testing
Installation and acceptance testing
Support or maintenance
Waterfall Model
Requirement Analysis -> Design -> Coding and Unit testing -> Functional testing ->
Maintenance
A test bed is an execution environment configured for software testing. It consists of specific hardware, network topology, operating system, configuration of the product under test, system software, and other applications. The test plan for a project should be developed from the test beds to be used.
Test data is data that is run through a computer program to test the software. Test data can also be used to test compliance with effective controls in the software.
Changing requirements: there is a chance the end user does not understand the effects of changes, or understands them and requests them anyway, leading to redesign, rescheduling of engineers, effects on other projects, and work already completed having to be redone or thrown out.
Time pressure: estimating software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crisis comes, mistakes will be made.
Structural testing is a "white box" testing and it is based on the algorithm or code.
Functional testing is a "black box" (behavioral) testing where the tester verifies the
functional specification.
Re-test: retesting means testing only a certain part of an application again, without considering how it affects other parts or the whole application.
Regression Testing: testing the application after a change in a module or part of the application, to check whether the code change affects the rest of the application.
Load testing and performance testing are commonly described as positive testing, whereas stress testing is described as negative testing.
For example, suppose an application can handle 25 simultaneous user logins at a time. In load testing we test the application with 25 users and check how it behaves at this level; in performance testing we concentrate on the time taken to perform the operations. In stress testing we test with more than 25 users, keep increasing the number, and check where the application breaks.
The above six are called the Microsoft six rules standard for user interface testing. They are very important in GUI testing.
A security risk may be classified as a vulnerability. A vulnerability with one or more known instances of working, fully implemented attacks is classified as an exploit. The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software to when access was removed, a security fix was made available/deployed, or the attacker was disabled.
Functional Testing: testing the functionality of the application (for example, clicking the login button on the login screen should take the user to the next page).
GUI Functional Testing: Testing the GUI objects along with Functionality.
GUI testing or UI testing is user interface testing. That is, testing how the application and
the user interact. This includes how the application handles keyboard and mouse input and
how it displays screen text, images, buttons, menus, dialog boxes, icons, toolbars and
more.
Functional testing is done with the intent of identifying errors related to the functionality of the application under test; it checks whether all the functionalities are working properly. GUI testing, by contrast, can be summed up as checking the look and feel.
1. Encryption
2. Authentication
3. Authorization
Encryption: Encryption is the conversion of data into a form, called ciphertext, that cannot be easily understood by unauthorized people.
Localization testing and internationalization testing come under black box testing.
PENETRATION TESTING
Baseline Testing
A baseline is the point at which some deliverable produced during the software engineering process is put under formal change control.
Volume Testing
Volume Testing belongs to the group of non-functional tests, which are often misunderstood
and/or used interchangeably. Volume testing refers to testing a software application for a
certain data volume. This volume can in generic terms be the database size or it could also
be the size of an interface file that is the subject of volume testing. For example, if you want
to volume test your application with a specific database size, you will explode your database
to that size and then test the application's performance on it.
Another example could be when there is a requirement for your application to interact with
an interface file (could be any file such as .dat, .xml); this interaction could be reading
and/or writing on to/from the file. You will create a sample file of the size you want and
then test the application's functionality with that file to check performance.
WEB TESTING
Web applications are more popular because they support more clients, require no client-side installation, and are accessible from anywhere.
1. Web Sites
2. Web Portals
3. Web Applications
Browser
A browser is a software application that retrieves and presents information in different file formats such as text, image, and audio. The browser is the viewer of a web site.
Popular Browsers:
Web Technologies
1. Functionality Testing
2. Usability testing
3. Interface testing
4. Compatibility testing
5. Performance testing
6. Cookie Testing
7. Security Testing
Cookie Testing
What is a Cookie?
A cookie is a small piece of information stored in a text file by the web server. This information is later used by the web browser.
Generally a cookie contains personalized user data or information that is used to communicate between different web pages.
Cookies save the user's identity and are used to track where the user navigated throughout the web site's pages. The communication between web browser and web server is stateless.
Whenever the user visits a site or page, small code inside that HTML page writes a text file on the user's machine, called a cookie.
Example:
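A small sketch using Python's standard http.cookies module; the cookie names and values are hypothetical:

```python
from http.cookies import SimpleCookie

# Parsing cookie data with Python's standard library (the cookie names
# and values here are hypothetical, for illustration only).
raw = "session_id=abc123; user_pref=dark_mode"
cookie = SimpleCookie()
cookie.load(raw)
assert cookie["session_id"].value == "abc123"
assert cookie["user_pref"].value == "dark_mode"

# A persistent cookie carries an expiry so it outlives the browser
# session; a session cookie simply omits the expires/max-age attribute.
out = SimpleCookie()
out["session_id"] = "abc123"
out["session_id"]["max-age"] = 20 * 60   # expire after 20 minutes
print(out.output())   # the Set-Cookie header line a server would send
```

Inspecting cookies programmatically like this is also handy when automating the cookie test cases listed below.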
Types of Cookies
1. Session cookies: This cookie is active as long as the browser that invoked it is open. When we close the browser, the session cookie gets deleted. Sometimes a session of, say, 20 minutes can be set to expire the cookie.
2. Persistent cookies: Cookies that are written permanently on the user's machine and last for months or years.
Where cookies are stored?
The path where the cookies get stored depends on the browser.
The first obvious test case is to test if your application is writing cookies properly on disk.
1. As per cookie privacy policies, make sure that no personal or sensitive data is stored in the cookie.
2. If you have no option other than saving sensitive data in a cookie, make sure the data stored in the cookie is encrypted.
3. Make sure that there is no overuse of cookies on the site under test. Overuse of cookies will annoy users if the browser prompts for cookies too often, and this could result in loss of site traffic and eventually loss of business.
4. Disable cookies from your browser settings: If your site uses cookies, its major functionality will not work when cookies are disabled. Then try to access the web site under test and navigate through the site. See whether appropriate messages are displayed to the user, such as "For smooth functioning of this site, make sure that cookies are enabled on your browser." There should not be any page crash due to disabling the cookies.
5. Accepts/Reject some cookies: The best way to check web site functionality is, not to
accept all cookies. If you are writing 10 cookies in your web application then
randomly accept some cookies say accept 5 and reject 5 cookies. For executing this
test case you can set browser options to prompt whenever cookie is being written to
disk. On this prompt window you can either accept or reject cookie. Try to access
major functionality of web site. See if pages are getting crashed or data is getting
corrupted.
6. Delete cookie: Allow site to write the cookies and then close all browsers and
manually delete all cookies for web site under test. Access the web pages and check
the behavior of the pages.
7. Corrupt the cookies: Corrupting cookie is easy. You know where cookies are
stored. Manually edit the cookie in notepad and change the parameters to some
vague values. Like alter the cookie content, Name of the cookie or expiry date of the
cookie and see the site functionality. In some cases corrupted cookies allow to read
the data inside it for any other domain. This should not happen in case of your web
site cookies. Note that the cookies written by one domain say rediff.com can't be
accessed by other domain say yahoo.com unless and until the cookies are corrupted
and someone trying to hack the cookie data.
8. Checking the deletion of cookies from your web application page: Sometimes a cookie written by a domain, say rediff.com, may be deleted by the same domain but by a different page under that domain. This is the general case if you are testing some "action tracking" web portal. An action tracking or purchase tracking pixel is placed on the action web page, and when any action or purchase occurs, the cookie written on disk gets deleted to avoid multiple action logs from the same cookie. Check whether reaching your action or purchase page deletes the cookie properly and no more invalid actions or purchases get logged from the same user.
9. Cookie Testing on Multiple browsers: This is the important case to check if your web
application page is writing the cookies properly on different browsers as intended
and site works properly using these cookies. You can test your web application on
Major used browsers like Internet explorer (Various versions), Mozilla Firefox,
Netscape, Opera etc.
10. If your web application uses cookies to maintain the logged-in state of a user, log in to your web application using some username and password. In many cases you can see the logged-in user ID parameter directly in the browser address bar. Change this parameter to a different value: if the previous user ID is 100, make it 101 and press enter. A proper access-denied message should be displayed, and the user should not be able to see other users' accounts.
Security Testing
Security testing is the process that determines that confidential data stays confidential
What is Vulnerability?
This is a weakness in the web application. The cause of such a weakness can be bugs in the application, an injection (SQL/script code), or the presence of viruses.
What is URL Manipulation?
Some web applications communicate additional information between the client (browser) and the server in the URL. Changing some information in the URL may sometimes lead to unintended behavior by the server.
What is SQL Injection?
This is the process of inserting SQL statements through the web application user interface into some query that is then executed by the server.
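A minimal demonstration using an in-memory SQLite database (the schema, data, and login query are hypothetical); it contrasts a vulnerable concatenated query with the standard defense, a parameterized query:

```python
import sqlite3

# Demonstration with an in-memory SQLite database (the schema and the
# login query are hypothetical, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

# A classic injection payload typed into a login form's username field:
malicious_name = "' OR '1'='1"

# VULNERABLE: user input concatenated directly into the SQL statement.
query = "SELECT * FROM users WHERE name = '%s'" % malicious_name
vulnerable_rows = conn.execute(query).fetchall()
print(len(vulnerable_rows))   # 1 -- WHERE clause is always true: login bypassed

# SAFE: a parameterized query treats the input as data, not as SQL.
safe_rows = conn.execute("SELECT * FROM users WHERE name = ?",
                         (malicious_name,)).fetchall()
print(len(safe_rows))         # 0 -- no user is literally named that
```

A security test for SQL injection feeds payloads like this into every input field and URL parameter and checks that the application behaves as in the safe case.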
What is XSS (Cross-Site Scripting)?
When a user inserts HTML or client-side script in the user interface of a web application and this insertion is visible to other users, it is called XSS.
What is Spoofing?
Change and configuration management can be accessed by the development people to save their development deliverables; a tool like Visual SourceSafe (VSS) is used for version control. The test case database (TCDB) can be accessed by the testing people to store deliverables like the test plan, test case documents, test metrics, and other summary reports. The defect repository can be accessed by both testers and developers, for the required negotiation between them. Ex: Bugzilla, MS Excel sheet, problem reporting tools, etc.
WHAT IS A TEST LOG?
In either manual or automation testing, the test engineer runs test cases batch by batch, and within every batch, test by test. During this level-1 test execution, every test engineer prepares a "test log" document with the results. A test log is nothing but a document which maintains three types of test results: passed, failed, and blocked.
A simple format: S no, Test case id, Test case description, Status (passed, failed).
A detailed format: S no, Test case id, Test case description, input data, actual result, expected result, and Status (passed, failed).
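The detailed format can be produced programmatically; a sketch with illustrative data rows (test case IDs and results are hypothetical):

```python
import csv
import io
from collections import Counter

# Columns of the detailed test log format (data rows are illustrative).
FIELDS = ["S no", "Test case id", "Test case description",
          "input data", "actual result", "expected result", "Status"]
results = [
    [1, "TC_LOGIN_01", "Valid login", "alice/secret", "home page", "home page", "passed"],
    [2, "TC_LOGIN_02", "Wrong password", "alice/xxx", "no message", "error shown", "failed"],
    [3, "TC_LOGIN_03", "Locked account", "bob/secret", "-", "-", "blocked"],
]

# Write the log as CSV so it can be attached to a test report.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(FIELDS)
writer.writerows(results)
test_log = buf.getvalue()
print(test_log)

# The three result types a test log maintains: passed, failed, blocked.
summary = Counter(row[-1] for row in results)
print(dict(summary))   # {'passed': 1, 'failed': 1, 'blocked': 1}
```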
In functional testing:
After completion of successful functional testing on a software build, the test engineers concentrate on extra characteristics of the software, such as user interface testing, reliability testing, and configuration testing.
Reliability testing
The purpose of reliability testing is to discover potential problems with the design as early
as possible and, ultimately, provide confidence that the system meets its reliability
requirements.
If there are many test cases, how can you pick a selective test case?
If there are many test cases, we pick test cases in terms of functionality, i.e. priority (P0, P1, P2): P0 (high) for functional test cases, P1 (medium) for non-functional test cases except usability, and P2 (low) for usability test cases. This is according to the test case format (IEEE 829).
Yes, I have been involved in peer reviews, which are conducted after implementing the test cases for a given module. In these reviews we discuss whether the written test cases are sufficient to validate the functionality of the module.
Severity tells the seriousness/depth of the bug, whereas priority tells which bug should be rectified first.
What are the exact testing types you were involved in when testing web applications and client-server applications? Did you find differences in terms of testing?
1. Performance (Must)
2. Functionality
3. Data Driven.
4. Stress testing.
5. Load testing.
6. Performance testing.
7. Regression testing.
Tell me the test cases for search-and-replace functionality in a Microsoft Word document (.doc)?
Combinations of two or more points should be considered as test cases; use Ctrl+H for that.
Automation testing cannot test the entire application; only a part of the application can be automated. Automation is also costly to implement and maintain.
Test Drivers and Stubs
Bottom-up approach: Testing is conducted from the sub-modules up to the main module. If the main module is not yet developed, a temporary program called a DRIVER is used to simulate the main module.
Top-down approach: Testing is conducted from the main module down to the sub-modules. If a sub-module is not yet developed, a temporary program called a STUB is used to simulate the sub-module.
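A sketch of both approaches in Python, with a hypothetical tax-calculation sub-module; the stub and driver are deliberately trivial:

```python
# Top-down integration sketch: the main module is ready, a sub-module
# (tax calculation, hypothetical) is not, so a STUB stands in for it.
def tax_stub(amount):
    """Stub for the unfinished tax module: returns a canned value."""
    return 0.0

def checkout_total(amount, tax_fn):
    """Main module under test; the sub-module is injected."""
    return amount + tax_fn(amount)

assert checkout_total(100.0, tax_stub) == 100.0  # main-module logic verified

# Bottom-up sketch: the sub-module is ready, the main module is not,
# so a DRIVER calls it directly with test inputs.
def real_tax(amount):
    return round(amount * 0.18, 2)   # the finished sub-module

def tax_driver():
    """Driver for the unfinished main module: exercises the sub-module."""
    for amount, expected in [(100.0, 18.0), (0.0, 0.0), (9.99, 1.8)]:
        assert real_tax(amount) == expected

tax_driver()
```

Stubs and drivers are throwaway code: once the real main module and sub-modules exist, they are replaced by the genuine integration.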
Database testing covers:
1. Database connectivity
2. Domain constraints
3. Key constraints
4. Joins
What is the way of writing test cases for database testing? For writing test cases for database testing, one should first define the project name, then the module, bug number, objective, steps/actions undertaken, expected result, actual result, then status, priority, and severity.
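A sketch of database test cases for key and domain constraints, using an in-memory SQLite database with a hypothetical schema:

```python
import sqlite3

# Database test sketch: verify that key and domain constraints hold
# (illustrative schema; real tests would target the application's DB).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        acc_no  INTEGER PRIMARY KEY,                 -- key constraint
        balance REAL NOT NULL CHECK (balance >= 0)   -- domain constraint
    )
""")
conn.execute("INSERT INTO accounts VALUES (1, 500.0)")

# Expected result: a duplicate key must be rejected.
try:
    conn.execute("INSERT INTO accounts VALUES (1, 10.0)")
    key_status = "failed"
except sqlite3.IntegrityError:
    key_status = "passed"

# Expected result: a negative balance violates the domain constraint.
try:
    conn.execute("INSERT INTO accounts VALUES (2, -5.0)")
    domain_status = "failed"
except sqlite3.IntegrityError:
    domain_status = "passed"

print("key constraint:", key_status, "| domain constraint:", domain_status)
```

Each try/except pair is one test case: the objective, the action, the expected result, and the recorded status, matching the format described above.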
A test strategy is an outline that describes the testing portion of the software development
cycle. It is created to inform project managers, testers, and developers about some key
issues of the testing process. This includes the testing objective, methods of testing new
functions, total time and resources required for the project, and the testing environment.
The test strategy describes the test level to be performed. There are primarily three levels
of testing: unit testing, integration testing, and system testing. In most software
development organizations, the developers are responsible for unit testing. Individual
testers or test teams are responsible for integration and system testing.
Acceptance Testing
Level 1: is comprehensive testing
Level 2: is regression testing
Level 3: is Acceptance testing
Comprehensive testing: After level-0 testing and selection of possible test cases for automation, test engineers concentrate on the test suite and test set. Every test batch consists of a set of dependent test cases. During test batch execution, test engineers create a test log document with entries such as the number of test cases passed, failed, and blocked. During comprehensive test execution, test engineers report mismatches to developers as defects. After resolution of a bug, developers release a modified build to the testers. Testers re-execute their tests to ensure the bug fix works and to check for side effects.
The V-model is a software development model where testing is done in parallel with application development; i.e., while the development of the application is in process, the test engineers test every outcome document.
For example, consider a bank application consisting of three modules: admin, banker, and customer. The development team has completed the admin module and is working on the banker module. At this stage the testing team tests the admin module while the developers work on the banker module, i.e., before the whole application is complete. This is the V-model process. Changes can be incorporated into the application under the V-model.
The waterfall model is used when the requirements are clear and complete, and for small projects. Here we cannot incorporate new changes into the application.
Test Strategy: This is a high-level document which defines the approach for testing the overall product.
Test planning: The test plan defines specific information about how to drive, track, and record the test effort, along with entrance/exit criteria, resource planning, risks, and contingency plans. Test planning also defines the milestones and schedules needed to effectively manage the effort and performance.
Test suite (more formally known as a validation suite) is a collection of test cases that are
intended to be used as input to a software program to show that it has some specified set of
behaviors (i.e., the behaviors listed in its specification).
A test suite often also contains detailed instructions or goals for each collection of test cases
and information on the system configuration to be used during testing. A group of test cases
may also contain prerequisite states or steps, and descriptions of the following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They may also be
called a test script, or even a test scenario.
An executable test suite is a test suite that is ready to be executed. This usually means that
there exists a test harness that is integrated with the suite and such that the test suite and
the test harness together can work on a sufficiently detailed level to correctly communicate
with the system under test (SUT).
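As an illustration, a minimal executable test suite plus harness can be sketched with Python's unittest module. The `withdraw` function stands in for a hypothetical system under test; the unittest runner plays the role of the test harness.

```python
import unittest

# Hypothetical system under test (SUT): a simple withdraw function.
def withdraw(balance, amount):
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

class WithdrawSuite(unittest.TestCase):
    """A small validation suite: each method is one test case."""
    def test_normal_withdrawal(self):
        self.assertEqual(withdraw(100, 30), 70)
    def test_overdraw_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, 200)

# The unittest runner acts as the test harness: it collects the cases,
# executes them against the SUT, and reports pass/fail results.
suite = unittest.TestLoader().loadTestsFromTestCase(WithdrawSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Here the suite (the collection of cases) and the harness (the runner) together communicate with the SUT, which is exactly what makes the suite "executable".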
When requirements are traced to test cases and vice versa it is called bidirectional
traceability.
In the Requirements Management (REQM) process area, specific practice 1.4 states,
"Maintain bidirectional traceability among the requirements and work products."
Bidirectional traceability is the ability to trace both forward and backward (i.e., from
requirements to end products and from end product back to requirements).
Typically, traceability identifies the origin of items (e.g., customer needs) and follows these
same items as they travel through the hierarchy of the Work Breakdown Structure to the
project teams and eventually to the customer. When the requirements are managed well,
bidirectional traceability is achieved from the source requirements to lower-level
requirements and selected work products and verifications and then back to their source.
Such bidirectional traceability helps determine that all source requirements have been
completely addressed and that all lower level requirements and selected work products can
be traced to a valid source.
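The two traceability checks described above can be sketched in a few lines of Python. The requirement and test-case IDs are hypothetical; the point is that forward and backward traces are built from the same data and checked in both directions.

```python
# Hypothetical traceability data: requirement IDs mapped to the test
# cases that verify them (the forward trace).
req_to_tests = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-103"],
    "REQ-3": [],            # not yet covered by any test
}

# Build the backward trace (test case -> requirements) from the same data.
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

# Forward check: every source requirement is addressed by some test.
uncovered = [r for r, t in req_to_tests.items() if not t]
# Backward check: every test traces back to a valid source requirement.
orphans = [t for t, r in test_to_reqs.items() if not r]

print("uncovered requirements:", uncovered)
print("orphan test cases:", orphans)
```

A requirements traceability matrix in a spreadsheet encodes the same idea; the two list comprehensions are the "completely addressed" and "valid source" checks from the CMMI practice.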
Examples of system testing are provided in SP 1.1 of the Verification process area and SP
1.1 of the Validation process area. However, system testing is not a term used in CMMI,
since the terms "system" and "testing" can be interpreted in many ways.
The term "system" was not used in CMMI because of its multiple interpretations across
disciplines. Instead of "system," the term "product" and "product component" were used for
consistency and clarity. The terms "verification" or "validation" were used instead of
"testing" since (1) testing can be either part of verification or validation, and (2) testing is
only one method used for verification or validation.
1. The Test Script Modularity Framework
The test script modularity framework requires the creation of small, independent
scripts that represent modules, sections, and functions of the application-under-test.
These small scripts are then used in a hierarchical fashion to construct larger tests,
realizing a particular test case.
2. The Test Library Architecture Framework
The test library architecture framework is very similar to the test script modularity
framework and offers the same advantages, but it divides the application-under-test
into procedures and functions instead of scripts. This framework requires the
creation of library files (SQABasic libraries, APIs, DLLs, and such) that represent
modules, sections, and functions of the application-under-test. These library files are
then called directly from the test case script.
3. The Keyword-Driven or Table-Driven Testing Framework
Keyword-driven testing and table-driven testing are interchangeable terms that refer
to an application-independent automation framework. This framework requires the
development of data tables and keywords, independent of the test automation tool
used to execute them and the test script code that "drives" the application-under-
test and the data. Keyword-driven tests look very similar to manual test cases. In a
keyword-driven test, the functionality of the application-under-test is documented in
a table as well as in step-by-step instructions for each test.
If we were to map out the actions we perform with the mouse when we test our
Windows Calculator functions by hand, we could create the following table. The
"Window" column contains the name of the application window where we're
performing the mouse action (in this case, they all happen to be in the Calculator
window). The "Control" column names the type of control the mouse is clicking. The
"Action" column lists the action taken with the mouse (or by the tester) and the
"Arguments" column names a specific control (1, 2, 3, 5, +, -, and so on).
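The table-plus-driver idea can be sketched as follows. The rows mirror the (Window, Control, Action, Argument) columns described above; since a real GUI is out of reach here, the "driver" interprets the keywords against a toy calculator model rather than the actual Windows Calculator.

```python
# Keyword table for the Calculator example: each row is one manual step.
table = [
    ("Calculator", "Button", "Click", "2"),
    ("Calculator", "Button", "Click", "+"),
    ("Calculator", "Button", "Click", "3"),
    ("Calculator", "Button", "Click", "="),
    ("Calculator", "Display", "Verify", "5"),
]

def run(table):
    """Driver script: interprets keywords against a toy calculator."""
    entry = ""                   # keys pressed so far
    display = ""
    for window, control, action, arg in table:
        if action == "Click":
            if arg == "=":
                display = str(eval(entry))  # toy model only, not for real input
            else:
                entry += arg
        elif action == "Verify":
            assert display == arg, f"expected {arg}, got {display}"
    return display

result = run(table)   # walks the table through the toy calculator
```

Note how the test itself lives entirely in the table, which reads like a manual test case; the driver is application-independent apart from the keyword vocabulary.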
5. The Data-Driven Testing Framework
Data-driven testing is a framework where test input and output values are read from
data files (datapools, ODBC sources, cvs files, Excel files, DAO objects, ADO objects,
and such) and are loaded into variables in captured or manually coded scripts. In this
framework, variables are used for both input values and output verification values.
Navigation through the program, reading of the data files, and logging of test status
and information are all coded in the test script.
This is similar to table-driven testing in that the test case is contained in the data file
and not in the script; the script is just a "driver," or delivery mechanism, for the
data. Unlike in table-driven testing, though, the navigation data isn't contained in the
table structure. In data-driven testing, only test data is contained in the data files.
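A minimal data-driven sketch in Python: in a real framework the data file would be a .csv datapool on disk, but an in-memory file keeps the example self-contained, and the `login` function is a hypothetical stand-in for the application under test.

```python
import csv
import io

# Test cases live in the data file: input values plus expected output.
datafile = io.StringIO(
    "username,password,expected\n"
    "alice,secret1,ok\n"
    "bob,,error\n"
    ",secret2,error\n"
)

def login(username, password):
    """Hypothetical stand-in for the application under test."""
    return "ok" if username and password else "error"

# The script is only the driver: navigation, data reading, and logging
# are coded here, while the test cases themselves stay in the data file.
failures = []
for row in csv.DictReader(datafile):
    actual = login(row["username"], row["password"])
    status = "PASS" if actual == row["expected"] else "FAIL"
    if status == "FAIL":
        failures.append(row)
    print(status, row["username"] or "<empty>", "->", actual)
```

Adding a new test case means adding a row to the data file, with no change to the script, which is the main maintenance advantage of this framework.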
6. The Hybrid Test Automation Framework
The hybrid framework combines two or more of the frameworks above, drawing on the
strengths of each and mitigating their weaknesses.
Why do a test strategy? The test strategy is the plan for how to approach testing.
Answer:
A test strategy must address the risks and present a process that can reduce those risks.
a. Test factor: the risk or issue that needs to be addressed as part of the test
strategy. The factors to be addressed in testing a specific application system form
the test factors.
b. Test phase: The phase of the systems development life cycle in which testing will
occur.
When do you stop testing?
Answer:
Here are some points to be considered when you are in such a situation:
a. When all the requirements have been adequately and successfully executed through test cases
b. The bug-finding rate falls below a particular threshold
c. The test environment no longer exists for conducting testing
d. The scheduled time for testing is over
e. The budget allocated for testing is exhausted
Your company is about to roll out an E-Commerce application. It is not possible to test the
application on all types of browsers on all platforms and operating systems. What steps
would you take in the testing environment to reduce the business risks and commercial
risks?
Answer:
Compatibility testing should be done on all browsers (IE, Netscape, Mozilla etc.) across all
the operating systems (win 98/2K/NT/XP/ME/Unix etc.)
Your manager has taken you onboard as a test lead for testing a web-based application. He
wants to know what risks you would include in the Test plan. Explain each risk factor that
would be a part of your test plan.
Answer:
What is parallel testing and when do we use parallel testing? Explain with
example?
Answer:
Testing a new or an altered data processing system with the same source data that is used
in another system; the other system is considered the standard of comparison. In other
words, parallel testing requires the same input data to be run through two versions of
the same application.
Parallel testing should be used when there is uncertainty about the correctness of
processing in the new application and the old and new versions of the application are
meant to behave the same.
E.g.-
1. Operate the old and new version of the payroll system to determine that the
paychecks from both systems are reconcilable.
2. Run the old version of the application system to ensure that the operational status of
the old system has been maintained in the event that problems are encountered in
the new application.
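The payroll reconciliation in example 1 can be sketched as follows. Both payroll functions and the source data are hypothetical; the essence of parallel testing is just the final comparison of the two result sets.

```python
# Old and new versions of the same payroll calculation. In a real parallel
# test these would be two deployed systems fed the same source records.
def payroll_old(hours, rate):
    return round(hours * rate, 2)

def payroll_new(hours, rate):
    # Rewritten implementation; it must produce the same paychecks
    # as the old system for the same input data.
    return round(rate * hours, 2)

source_data = [(40, 15.50), (38.5, 22.00), (45, 18.25)]

# Reconcile: any record where the two versions disagree is a finding.
mismatches = [
    rec for rec in source_data
    if payroll_old(*rec) != payroll_new(*rec)
]
print("mismatched paychecks:", mismatches)
```

An empty mismatch list means the two runs reconcile; any entry in it points at input data the new system processes differently from the standard of comparison.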
What is the difference between testing Techniques and tools? Give examples.
Answer:
Testing technique :- a process for ensuring that some aspect of the application
system or unit functions properly. There may be few techniques but many tools.
Tool :- a vehicle for performing a test process. The tool is a resource to the
tester, but by itself is insufficient to conduct testing.
E.g. :- The swinging of hammer to drive the nail. The hammer is a tool, and
swinging the hammer is a technique. The concept of tools and technique is important
in the testing process. It is a combination of the two that enables the test process to
be performed. The tester should first understand the testing techniques and then
understand the tools that can be used with each of the technique.
Differentiate between Transaction flow modeling, Finite state modeling, Data flow
modeling and Timing modeling?
Answer:
Transaction Flow modeling :-The nodes represent the steps in transactions. The
links represent the logical connection between steps.
Finite state modeling :-The nodes represent the different user observable states of
the software. The links represent the transitions that occur to move from state to
state.
Data flow modeling :-The nodes represent the data objects. The links represent
the transformations that occur to translate one data object to another.
Timing Modeling :-The nodes are Program Objects. The links are sequential
connections between the program objects. The link weights are used to specify the
required execution times as program executes.
CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of
organizational 'maturity' that determine effectiveness in delivering quality software. It is
geared to large organizations such as large U.S. Defense Department contractors. However,
many of the QA processes involved are appropriate to any organization, and if reasonably
applied can be helpful. Organizations can receive CMM ratings by undergoing assessments
by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals
to successfully complete projects. Few if any processes in place; successes may not be
repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated
throughout an organization; training programs are used to ensure understanding and
compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes
and technologies can be predicted and effectively implemented when required.
ISO = 'International Organization for Standards' - The ISO 9001, 9002, and 9003 standards
concern quality systems that are assessed by outside auditors, and they apply to many
kinds of production and manufacturing organizations, not just software. The most
comprehensive is 9001, and this is the one most often used by software development
organizations. It covers documentation, design, development, production, testing,
installation, servicing, and other processes. ISO 9000-3 (not the same as 9003) is a
guideline for applying ISO 9001 to software development organizations. The U.S. version of
the ISO 9000 series standards is exactly the same as the international version, and is called
the ANSI/ASQ Q9000 series. The U.S. version can be purchased directly from the ASQ
(American Society for Quality) or the ANSI organizations. To be ISO 9001 certified, a third-
party auditor assesses an organization, and certification is typically good for about 3 years,
after which a complete reassessment is required. Note that ISO 9000 certification does not
necessarily indicate quality products - it indicates only that documented processes are
followed.
IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates
standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard
829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard
for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
ANSI = 'American National Standards Institute', the primary industrial standards body in
the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ
(American Society for Quality).
1. Check the limit of the username field, i.e. the data type and field size of this
column in the DB. Try adding more characters to this field than the field size
limit allows and see how the application responds.
2. Repeat the above case for number fields: insert a number beyond the field's storage
capacity. This is a typical boundary test.
3. For the username field, try adding numbers and special characters in various
combinations (characters like !@#$%^&*()_+}{":?><,./;'[]). If these are not allowed,
a specific message should be displayed to the user.
4. Try the above special-character combinations for all the input fields on your
sign-up page that have validations, such as the email address and URL fields.
5. Many applications crash on input fields containing ' (single quote) and " (double
quote), for example a value like "Vijay's web". Try it in all the input fields one by one.
6. Try adding only numbers to input fields that are validated to accept only characters,
and vice versa.
7. If URL validation is present, review the rules for URL validation and enter URLs that
do not fit the rules to observe the system behavior.
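Checks 1, 3, and 5 above can be turned into automated cases against a validator. The `validate_username` function below is a hypothetical example (max length 20, letters and digits only), not a real application's rule set.

```python
import re

# Hypothetical validation rule for a sign-up username field:
# at most MAX_LEN characters, letters and digits only.
MAX_LEN = 20

def validate_username(value):
    if len(value) > MAX_LEN:
        return "too long"
    if not re.fullmatch(r"[A-Za-z0-9]+", value):
        return "invalid characters"
    return "ok"

# Check 1: boundary on the field size.
assert validate_username("a" * MAX_LEN) == "ok"
assert validate_username("a" * (MAX_LEN + 1)) == "too long"
# Check 3: special characters must be rejected with a specific message.
assert validate_username("!@#$%^&*()") == "invalid characters"
# Check 5: single quotes are a classic crash input.
assert validate_username("Vijay's web") == "invalid characters"
```

The same pattern extends to the other checks: each list item above becomes one or two assertions against the field's validator.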
Code coverage:
An analysis method that determines which parts of the software have been executed
(covered) by the test case suite and which parts have not been executed and therefore may
require additional attention.
Flow graph: a notation for representing control flow, similar to flow charts and UML activity diagrams.
The cyclomatic complexity gives a quantitative measure of the logical complexity. This
value gives the number of independent paths in the basis set, and an upper bound for the
number of tests required to ensure that each statement is executed at least once. An
independent path is any path through a program that introduces at least one new set of
processing statements or a new condition (i.e., a new edge). Cyclomatic complexity thus
provides an upper bound on the number of tests required to guarantee coverage of all
program statements.
Condition testing aims to exercise all logical conditions in a program module. Conditions
may be simple Boolean variables, relational expressions, or compound conditions built from
them.
Data flow testing selects test paths according to the locations of definitions and uses of
variables.
Loop testing: loops are fundamental to many algorithms. Loops can be classified as simple,
concatenated, nested, and unstructured.
Note that unstructured loops are not to be tested; rather, they should be redesigned.
This approach tends to uncover bugs like variables that are used but not initialized, or
declared but not used, and so on.
Path Testing: Path testing is where all possible paths through the code are defined
and covered. It's a time consuming task.
Loop Testing: These strategies relate to testing single loops, concatenated loops,
and nested loops. Independent and dependent code loops and values are tested by
this approach.
To ensure:
that all independent paths within a module have been exercised at least once;
that all logical decisions are verified for their true and false values;
that all loops are executed at their boundaries and within their operational bounds; and
that internal data structures are valid.
We need to write test cases that ensure complete coverage of the program logic.
For this we need to know the program well, i.e. we should know the specification and the
code to be tested, and have knowledge of programming languages and logic.
Limitations of WBT:
It is not possible to test each and every path of the loops in a program, which means
exhaustive testing is impossible for large systems.
This does not mean that WBT is not effective: selecting important logical paths and data
structures for testing is practically possible and effective.
Black box testing treats the system as a "black-box", so it doesn't explicitly use Knowledge
of the internal structure or code. Or in other words the Test engineer need not know the
internal working of the "Black box" or application.
Main focus in black box testing is on functionality of the system as a whole. The term
'behavioral testing' is also used for black box testing and white box testing is also
sometimes called 'structural testing'. Behavioral test design is slightly different from black-
box test design because the use of internal knowledge isn't strictly forbidden, but it's still
discouraged.
Each testing method has its own advantages and disadvantages. There are some bugs that
cannot be found using only black box or only white box. Majority of the application are
tested by black box testing method. We need to cover majority of test cases so that most of
the bugs will get discovered by black box testing.
Black box testing occurs throughout the software development and Testing life cycle i.e. in
Unit, Integration, System, Acceptance and regression testing stages.
Tools used for Black Box testing:
Black box testing tools are mainly record and playback tools. These tools are used for
regression testing that to check whether new build has created any bug in previous working
application functionality. These record and playback tools records test cases in the form of
some scripts like TSL, VB script, Java script, Perl.
Graph-based testing methods:
Each and every application is built up of some objects. All such objects are identified and
a graph is prepared. From this object graph, each object relationship is identified and test
cases are written accordingly to discover the errors.
Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing
is the art of guessing where errors may be hidden. There are no specific tools for this
technique; the tester writes test cases aimed at the error-prone paths of the application.
Many systems have a tendency to fail on boundaries, so testing the boundary values of an
application is important. Boundary Value Analysis (BVA) is a functional (black box) testing
technique where the extreme boundary values are chosen. Boundary values include maximum,
minimum, just inside/outside boundaries, typical values, and error values.
BVA techniques:
1. Number of variables
For n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges
Robustness testing: Boundary Value Analysis plus values that go beyond the limits
(Min - 1, Min, Min + 1, Nom, Max - 1, Max, Max + 1). This forces attention to
exception handling.
Boundary value testing is efficient only for variables that have fixed boundaries.
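The 4n + 1 rule above can be sketched as a small generator: hold every variable but one at its nominal value, push the remaining one to min, min+1, max-1, and max, then add the single all-nominal case. The ranges used are made-up examples.

```python
def bva_cases(ranges):
    """ranges: list of (min, nom, max) tuples, one per input variable."""
    noms = [nom for _, nom, _ in ranges]
    cases = [tuple(noms)]                     # the "+1" all-nominal case
    for i, (lo, nom, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):    # 4 boundary values per variable
            case = noms[:]
            case[i] = v
            cases.append(tuple(case))
    return cases

# Hypothetical example: two variables, e.g. day in 1..100 and month in 1..12.
cases = bva_cases([(1, 50, 100), (1, 6, 12)])
print(len(cases))   # 4*2 + 1 = 9 test cases
```

Extending the inner tuple with `lo - 1` and `hi + 1` would give the robustness-testing variant, which adds the beyond-the-limit values.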
Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
1. If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
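Rule 1 above can be made concrete with a hypothetical input condition "age must be in the range 18-60": one valid and two invalid equivalence classes, each represented by a single test value.

```python
# One representative value per equivalence class (rule 1: a range gives
# one valid and two invalid classes).
classes = {
    "valid: 18 <= age <= 60": 35,
    "invalid: age < 18": 10,
    "invalid: age > 60": 75,
}

def accepts_age(age):
    """Hypothetical validator for the input condition under test."""
    return 18 <= age <= 60

results = {name: accepts_age(v) for name, v in classes.items()}
for name, ok in results.items():
    print(name, "->", "accepted" if ok else "rejected")
```

The point of the technique is that any value in a class is assumed to behave like its representative, so three test cases stand in for the whole input domain.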
Comparison Testing:
Different independent versions of same software are used to compare to each other for
testing in this method.
Impact analysis means that when we are doing regression testing, we check both that the
bug fixes are working properly and that, in fixing these bugs, the other components still
work as per their requirements and have not been disturbed.
Test strategy comes first and this is the high level document. And approach for the testing
starts from test strategy and then based on this the test lead prepares the test plan.
What is the difference between a web-based application and a client-server application from
a tester's point of view?
1. A web-based application (WBA) is a 3-tier application: browser, server, and back end.
A client-server application (CSA) is a 2-tier application: front end and back end.
2. In a WBA the tester tests for script errors, such as JavaScript or VBScript errors,
shown on the page. In a CSA the tester does not test for any script errors.
3. In a WBA, once a change is made it is reflected on every machine, so the tester has
less work. In a CSA the application needs to be installed on every client machine each
time, so some machines may have problems, and both hardware and software testing may be
needed.
Regression testing checks the bug fixes, and that a fix does not disturb other
functionality: it ensures that newly added functionality, modified existing functionality,
or a developer's bug fix does not introduce any new bug or other side effect, and that
already PASSED test cases do not now raise any new bug.
1. You can check the field width for minimum and maximum values.
2. If the field takes only numeric values, check that it accepts only numeric input and
no other type.
3. If it takes a date or time, check it with other formats.
4. In the same way as for numeric input, you can check it for character, alphanumeric,
and other values.
5. Most importantly, if you click and hit the Enter key, the page may sometimes give a
JavaScript error, which is a big fault on the page.
6. Check the field for a null value.
The date field can be checked in different ways. Positive testing: first we enter the date
in the given format.
Negative testing is testing aimed at showing that the software does not work; it is also
known as "test to fail". In negative testing, we check whether the application or system
handles exceptions properly or not. It is nothing but "test to break" testing.
Quality assurance is the process where the documents for the product to be tested are
verified with actual requirements of the customers. It includes inspection, auditing, code
review, meeting etc. Quality control is the process where the product is actually executed
and the expected behavior is verified by comparing with the actual behavior of the software
under test. All the testing types like black box testing, white box testing comes under
quality control. Quality assurance is done before quality control.
Smoke testing is non-exhaustive software testing, ascertaining that the most crucial
functions of a program work, without bothering with finer details. Sanity testing is
cursory testing, performed whenever cursory testing is sufficient to prove that the
application is functioning according to specifications. This level of testing is a subset
of regression testing. It normally includes a set of core tests of basic GUI functionality
to demonstrate connectivity to the database, application servers, printers, etc.
The difference between smoke and sanity testing is that in smoke testing the tester
concentrates on the core functionality of the application, checking whether it works at
all before further testing (e.g. build crashes, environment issues, networking issues).
In sanity testing, basic functionalities are tested, e.g. check boxes, radio buttons,
text boxes, and list boxes.
Beta testing: testing when development and testing are essentially completed and final
bugs and problems need to be found before the final release. Typically done by end-users
or others, not by programmers or testers.
Test bed: an execution environment configured for testing. It may consist of specific
hardware, OS, network topology, configuration of the product under test, other application
or system software, etc. The test plan for a project should enumerate the test bed(s) to
be used.
What is a scenario?
A scenario defines the events that occur during each testing session. For example, a
scenario defines and controls the number of users to emulate, the actions to be performed,
and the machines on which the virtual users run their emulations.
What is the difference between system testing and end-to-end testing?
System testing is done with respect to the application's functionality, considering the
system as an individual (its internal functional flow).
In end-to-end testing, we verify the application's end-to-end functional flow, considering
all the other integrated applications' functionality (including the upstream and
downstream systems connected to the particular application for which system testing, as
described above, has been completed).
GLOBALIZATION TESTING
The goal of globalization testing is to detect potential problems in application design that
could inhibit globalization. It makes sure that the code can handle all international support
without breaking functionality that would cause either data loss or display problems.
Globalization testing checks proper functionality of the product with any of the culture/locale
settings using every type of international input possible.
So, which operating system (OS) should you use for your international testing platform? The
first choice should be your local build of Windows 2000 with a language group installed. For
example, if you use the U.S. build of Windows 2000, install the East Asian language group.
MUI (Multilanguage User Interface) Windows 2000 - especially useful if your code
implements multilingual UI and it must adjust to the UI settings of the OS. This
approach is an easier implemented alternative to installing multiple localized versions
of the OS. To further enhance multilingual support, Microsoft offers a separate
Windows 2000 Multilanguage Version, which provides up to 24 localized language
versions of the Windows user interface.
Localized build of the target OS - German or Japanese are good choices. Remember
it might be harder to work with them if you do not know the operating system's UI
language. This approach does not have significant advantages over the solutions
above.
Execute tests
After the environment has been set for globalization testing, you must pay special
attention to potential globalization problems when you run your regular test cases:
Put greater importance on test cases that deal with the input/output of strings,
directly or indirectly.
Test data must contain mixed characters from East Asian languages, German,
Complex Script characters (Arabic, Hebrew, Thai), and optionally, English. In some
cases, there are limitations, such as the acceptance of characters that only match
the culture/locale. It might be difficult to manually enter all of these test inputs if
you do not know the languages in which you are preparing your test data. A simple
Unicode text generator may be very helpful at this step.
The most serious globalization problem is functionality loss, either immediately (when a
culture/locale is changed) or later when accessing input data (non-U.S. character input).
Question marks (?) appearing instead of displayed text indicate problems in Unicode-
to-ANSI conversion.
Random High ANSI characters (e.g., ¼, †, ‰, ‡, ¶) appearing instead of
readable text indicate problems in ANSI code using the wrong code page.
The appearance of boxes, vertical bars, or tildes (default glyphs: □, |, ~) indicates
that the selected font cannot display some of the characters.
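The first symptom, question marks from lossy Unicode-to-ANSI conversion, is easy to demonstrate. The snippet below encodes East Asian text to a hypothetical Western ANSI code page (cp1252) and shows that the characters, and hence the data, are lost.

```python
# East Asian input that a Western ANSI code page cannot represent.
text = "日本語 test"

# Lossy Unicode-to-ANSI conversion: unmappable characters become '?'.
ansi = text.encode("cp1252", errors="replace")
print(ansi)        # the CJK characters have been replaced

# Converting back cannot recover the original: the data is gone.
roundtrip = ansi.decode("cp1252")
print(roundtrip)
```

This is why globalization testing insists on mixed-script test data: only input that falls outside the local code page will expose conversions like this one.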
It might be difficult to find problems in display or print results that require shaping, layout,
or script knowledge. This test is language-specific and often cannot be executed without
language expertise. On the other hand, your test may be limited to code inspection. If
standard text-handling mechanisms are used to form and display output text, you may
consider this area safe.
Another area of potential problems is code that fails to follow local conventions as defined
by the current culture/locale. Make sure your application displays culture/locale-sensitive
data (e.g., numbers, dates, time, currency, and calendars) according to the current regional
settings of your computer.
LOCALIZATION TESTING
Localization translates the product UI and occasionally changes some initial settings to make
it suitable for another region. Localization testing checks the quality of a product's
localization for a particular target culture/locale. This test is based on the results of
globalization testing, which verifies the functional support for that particular culture/locale.
Localization testing can be executed only on the localized version of a product. Localizability
testing does not test for localization quality.
You can select any language version of Windows 2000 as a platform for the test.
However, you must install the target language support.
The localization testing of the user interface and linguistics should cover items such
as:
Validation of all application resources
Verification of linguistic accuracy and resource attributes
Typographical errors
Consistency checking of printed documentation, online help, messages, interface
resources, command-key sequences, etc.
Confirmation of adherence to system, input, and display environment standards
User interface usability
Assessment of cultural appropriateness
Checking for politically sensitive content
DATABASE TESTING
TRUNCATE removes all the rows from a table and cannot be rolled back, while DELETE
removes all or specific rows from a table and can be rolled back.
TRUNCATE also resets the high-water mark.
A common misconception is that they do the same thing. Not so. In fact, there are many
differences between the two. DELETE is a logged operation on a per row basis. This means
that the deletion of each row gets logged and physically deleted. You can DELETE any row
that will not violate a constraint, while leaving the foreign key or any other constraints in
place. TRUNCATE is also a logged operation, but in a different way. TRUNCATE logs the
deallocation of the data pages in which the data exists. The deallocation of data pages
means that your data rows still actually exist in the data pages, but the extents have been
marked as empty for reuse. This is what makes TRUNCATE a faster operation to perform
over DELETE. You cannot TRUNCATE a table that has any foreign key constraints. You will
have to remove the constraints, TRUNCATE the table, and reapply the constraints.
The difference between the two is that the TRUNCATE command is a DDL operation that just
moves the high-water mark and produces no rollback information. The DELETE command, on
the other hand, is a DML operation, which produces rollback information and thus takes
longer to complete.
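The rollback behavior of DELETE can be demonstrated with Python's sqlite3 module (SQLite has no TRUNCATE statement, so only the DELETE half of the comparison is shown; the table is a made-up example).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO emp (name) VALUES (?)",
                 [("a",), ("b",), ("c",)])
conn.commit()

# DELETE is a logged DML operation running inside a transaction...
conn.execute("DELETE FROM emp")
# ...so it can be rolled back, restoring every deleted row.
conn.rollback()

count = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
print(count)   # the rows came back after the rollback
```

A TRUNCATE in databases that support it (e.g. Oracle, SQL Server) would not behave this way: as a DDL operation it deallocates the pages and cannot be undone by a rollback.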
A desktop application runs on personal computers and workstations, so when you test a
desktop application you are focusing on a specific environment. You will test the complete
application broadly in categories like GUI, functionality, load, and the back end, i.e.
the DB.
In a client-server application you have two different components to test. The application
is loaded on the server machine, while the application exe is installed on every client
machine. You will test broadly in categories like GUI on both sides, functionality, load,
client-server interaction, and the back end. This environment is mostly used on intranet
networks, where you know the number of clients and servers and their locations in the
test scenario.
A web application is a bit different and more complex to test, as the tester doesn't have
that much control over the application. The application is loaded on a server whose
location may or may not be known, no exe is installed on the client machine, and it has
to be tested on different web browsers. Web applications are supposed to be tested on
different browsers and OS platforms, so broadly a web application is tested mainly for
browser compatibility and operating system compatibility, error handling, static pages,
back-end testing, and load testing.
Several standards suggest what a test plan should contain, including the IEEE (IEEE/ANSI
Standard 829 for software test documentation).
Good code is code that is:
1. bug free
2. reusable
3. independent
4. of low complexity
5. well documented
6. easy to change
How involved were you with your team lead in writing the test plan?
To my knowledge, team members are usually out of scope while preparing the test plan; the
test plan is a higher-level document for the testing team. A test plan includes purpose,
scope, customer/client scope, schedule, hardware, deliverables, test cases, etc.
The test plan is derived from the PMP (Project Management Plan). Team members just go
through the test plan to learn their responsibilities and the deliverables of their
modules.
The test plan is an input document for the whole testing team as well as the test lead.
Methodology
1. Spiral methodology
2. Waterfall methodology (these two are the older methods)
3. Rational Unified Process (RUP), from IBM
4. Rapid Application Development (RAD), from Microsoft
The goal of globalization testing is to detect potential problems in the application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality in a way that would cause data loss or display problems.
Data conversion testing is the testing of programs or procedures used to convert data from existing systems for use in replacement systems.
UAT stands for User Acceptance Testing. This testing is carried out from the user's perspective and is usually done before a release. It is performed by the end users along with testers to validate the functionality of the application. It is also called Pre-Production testing.
Black box testing treats the system as a "black box", so it does not explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal workings of the "black box", i.e. the application.
The main focus in black box testing is on the functionality of the system as a whole. The term "behavioral testing" is also used for black box testing, and white box testing is sometimes called "structural testing". Behavioral test design is slightly different from black-box test design because the use of internal knowledge is not strictly forbidden, but it is still discouraged.
Each testing method has its own advantages and disadvantages, and there are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box method; we need to cover the majority of test cases so that most of the bugs will be discovered through black box testing.
Black box testing occurs throughout the software development and testing life cycle, i.e. in the unit, integration, system, acceptance, and regression testing stages.
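As a concrete sketch of the black box method, the classic triangle-classification exercise can be tested purely through inputs and expected outputs derived from the specification. The function body below stands in for a hidden implementation and is only an assumption for the example; the tester works from the spec alone.

```python
# Black-box sketch: the tester sees only the specification, not the code.
# Hypothetical function under test (its body would be hidden from the tester):
def classify_triangle(a, b, c):
    """Spec: return 'invalid', 'equilateral', 'isosceles', or 'scalene'."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Test cases are derived from the specification alone, ignoring the internals:
cases = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 2), "isosceles"),
    ((3, 4, 5), "scalene"),
    ((1, 2, 3), "invalid"),   # violates the triangle inequality
]
for inputs, expected in cases:
    assert classify_triangle(*inputs) == expected
print("all black-box cases passed")
```

The same table of input/expected-output pairs could be run against any implementation of the spec, which is exactly the point of the black box approach.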
Error Guessing:
This technique is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases targeting the error-prone paths of the application.
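A minimal sketch of error guessing, assuming a hypothetical age-field validator (the function name and rules below are illustrative, not from any real system). The guessed inputs are the kinds of values an experienced tester expects to break naive input handling.

```python
# Hypothetical validator under test: accepts integer strings from 1 to 120.
def validate_age(text):
    if not text.strip().isdigit():
        return False
    return 1 <= int(text) <= 120

# Error guessing: inputs chosen from experience, not from the spec's happy path.
guessed_inputs = ["", "   ", "0", "-1", "121", "12.5", "abc", "1e2", "007"]
for value in guessed_inputs:
    print(repr(value), "->", validate_age(value))
```

Each rejected value here probes a different weak spot: empty and whitespace input, boundary violations, decimals, non-numeric text, and scientific notation.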
BVA techniques:
1. Number of variables
For n variables: BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of the variables.
Advantages of Boundary Value Analysis
1. Robustness testing: Boundary Value Analysis plus values that go beyond the limits
2. Test values: Min - 1, Min, Min + 1, Nom, Max - 1, Max, Max + 1
3. Forces attention to exception handling
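The 4n + 1 rule can be sketched in code: for each of the n variables take min, min + 1, max - 1, and max while the other variables stay at their nominal value, plus one all-nominal case. The helper below is illustrative, not a standard tool.

```python
# BVA sketch: generate the 4n + 1 basic boundary-value test cases for n
# variables, holding every other variable at its nominal value.
def bva_cases(ranges):
    """ranges: list of (min, max) tuples, one per variable."""
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominal)]                      # the single all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):        # 4 boundary values per variable
            case = list(nominal)
            case[i] = v
            cases.append(tuple(case))
    return cases

# Two variables, e.g. day (1-31) and month (1-12): 4*2 + 1 = 9 cases.
cases = bva_cases([(1, 31), (1, 12)])
print(len(cases))  # -> 9
```

For robustness testing, the inner tuple would be extended with `lo - 1` and `hi + 1`, the values that go beyond the limits.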
Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
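A minimal sketch of equivalence partitioning for a hypothetical age field accepting 1 to 120 (the field and ranges are illustrative). One representative value per class is assumed to stand for the whole class.

```python
# Equivalence partitioning sketch: the input domain splits into three classes,
# and one representative is tested per class.
partitions = {
    "below range (invalid)": -5,    # stands for any value < 1
    "in range (valid)": 60,         # stands for any value 1..120
    "above range (invalid)": 200,   # stands for any value > 120
}

def accepts_age(age):
    return 1 <= age <= 120

for name, representative in partitions.items():
    print(name, "->", accepts_age(representative))
```

Three test cases thus cover the whole input domain at the class level, instead of testing every possible value.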
In the previous Software Testing class I explained black box testing. In this section I introduce what white box testing is, what you verify in white box testing, white box testing techniques, the definition and types of white box testing, a white box testing example, and the advantages and disadvantages of white box testing.
White Box Testing (WBT) is also known as Code-Based Testing or Structural Testing. White box testing is the software testing method in which the internal structure is known to the tester who is going to test the software.
White box testing involves testing by looking at the internal structure of the code; when you are completely aware of the internal structure of the code, you can run your test cases and check whether the system meets the requirements mentioned in the specification document. Based on the derived test cases, the tester exercises the system by giving it input and checking the actual output against the expected output. In this testing method the tester has to go beyond the user interface to find out the correctness of the system.
Typically such methods are used at the unit testing level, but white box testing differs from unit testing in that unit testing is done by the developer while white box testing is done by testers; it involves learning the relevant part of the code and finding out the weaknesses in the software program under test.
To the tester, the software application under test is like a white/transparent box whose inside is clearly visible (the tester is aware of, and has access to, the internal structure of the code), so this method is called White Box Testing.
White-box testing is one of the best methods for finding errors in a software application at an early stage of the software development life cycle. In this process, deriving the test cases is the most important part. The test case design strategy is chosen so that all lines of the source code are executed at least once, or all available functions are executed, to achieve 100% code coverage.
In white box testing the following steps are executed to test the software code:
The above steps can be executed at each stage of the STLC, i.e. unit, integration, and system testing.
White box testing verifies the flow of the application. The pre-designed test cases are executed with the given input data, and the output is compared with the expected one; if you find a mismatch, you have found a bug.
Code coverage analysis is a major part of white box testing. It helps to identify the gaps in a test case suite: it allows you to find the areas of the code that are not executed by a given set of test cases. Upon identifying the gaps in the test case suite you can add the respective test cases, which helps to improve the quality of the software application.
Many white box testing tools are available on the market to perform code coverage analysis.
Here are some white box testing techniques:
Statement Coverage:
In this white box testing technique, you try to achieve 100% statement coverage of the code, meaning that every possible statement in the code is executed at least once during testing.
Tools: Cantata++ can be used as a white box testing tool for statement coverage.
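The idea can be illustrated without a commercial tool. In the hypothetical function below, a single test with a small total would leave the discount statement unexecuted, so a second case is needed to reach 100% statement coverage (the function and values are illustrative only).

```python
# Statement coverage sketch: every statement must execute at least once
# across the whole test suite.
def apply_discount(total):
    discount = 0
    if total > 100:
        discount = 10          # this statement only runs when total > 100
    return total - discount

# A suite with only totals <= 100 never executes the discount statement,
# so a second case is required for 100% statement coverage:
assert apply_discount(50) == 50     # covers the non-discount path
assert apply_discount(200) == 190   # covers the discount statement
print("100% statement coverage reached with two cases")
```

In Python, a tool such as coverage.py can report which statements a suite misses, playing the same role the text assigns to Cantata++ for C/C++.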
Decision Coverage:
In this white box testing technique, you try to achieve 100% decision coverage of the code, meaning that every possible decision construct, such as if-else, for loops, and other conditional branches, is exercised at least once during testing.
Tools: TCAT-PATH can be used for decision coverage testing; it supports C, C++, and Java applications.
Condition Coverage:
In this white box testing technique, you try to achieve 100% condition coverage of the code, meaning that every possible condition in the code is evaluated as both true and false at least once during testing.
Decision/Condition Coverage:
In this combined white box testing technique, you try to achieve 100% decision/condition coverage of the code, meaning that every possible decision and every possible condition in the code is exercised at least once during testing.
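The difference between decision coverage and condition coverage can be sketched on a single compound decision (a hypothetical free-shipping rule; the names and thresholds are illustrative). Note that Python's `and` short-circuits, so the second condition is only evaluated when the first is true.

```python
# Decision vs. condition coverage on one compound decision.
def ship_free(total, is_member):
    # decision: the whole "total > 50 and is_member" expression
    # conditions: "total > 50" and "is_member" individually
    if total > 50 and is_member:
        return True
    return False

# Decision coverage: the decision as a whole must evaluate True and False.
assert ship_free(60, True) is True     # decision True
assert ship_free(10, False) is False   # decision False

# Condition coverage: each condition must individually take both values.
assert ship_free(60, False) is False   # total > 50 is True, is_member is False
assert ship_free(10, True) is False    # total > 50 is False
print("decision and condition coverage both satisfied")
```

Two cases suffice for decision coverage alone, but exercising each sub-condition in both directions needs the larger suite, which is what the combined decision/condition criterion demands.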
In the actual development process, developers and testers make use of the combination of techniques that suits their software application. Using the white box testing techniques mentioned above, 80% to 90% code coverage can be achieved, which is usually sufficient for white box testing.
Let's take a simple website application. The end user simply accesses the website and logs in and out; this is a very simple, day-to-day example. From the end user's point of view, the website is accessed through the GUI, but inside there are lots of things going on, and to check whether those internal things are working correctly, the white box testing method is used. To explain this we have to divide it into two steps. All of this is done when the tester is testing the application using white box testing techniques.
To start testing the software there is no need to wait for the GUI; you can start with white box testing.
As it covers all possible paths of the code, white box testing is thorough.
It helps in removing dead code.
The tester can ask about the implementation of each section, so it may be possible to remove unused lines of code that might be introducing bugs.
Executing all the code paths helps to approximate equivalence partitioning.
As the tester is aware of the internal coding structure, it is easier to derive which type of input data is needed to test the software application effectively.
White box testing helps in code optimization.
A highly skilled resource with deep knowledge of the internal structure of the code is required to carry out the testing, which increases the cost.
Updating the test scripts is required if the implementation changes too frequently.
If the application under test is large in size, exhaustive testing is impossible.
It is not possible to test every path and condition of the software program, so defects in the code might be missed.
White box testing is a very expensive type of testing.
Analyzing the code line by line or path by path is nearly impossible work, which may introduce or miss defects in the code.
Testing each path or condition may require different input conditions, so to test the full application the tester needs to create a full range of inputs, which can be time consuming.
Priority means how fast the defect has to be fixed. Normally, "High Severity" bugs are marked as "High Priority" and should be resolved as early as possible, but this is not always the case. There can be exceptions to this rule, and depending on the nature of the application it can change from company to company. Priority determines which of all the open issues should be dealt with first, based on their urgency or importance to the application under test. Adding this field while reporting a bug helps in analyzing the bug report.
Severity:
Severity is assigned by the tester, based on the seriousness of the bug. It can be divided into four categories:
Let's discuss a few examples of priority and severity, from high to low:
High Severity and High Priority:
1. All show-stopper bugs fall under this category (the tester logs the Severity as High; setting the Priority to High is the project manager's call): bugs due to which the tester is not able to continue with software testing, i.e. blocker bugs.
2. For example, upon login to the system a "Run time error" is displayed on the page, due to which the tester is not able to proceed with further testing.
Low Severity and High Priority:
1. A spelling mistake in the company's name on the home page of the company's web site is surely a high-priority issue. In terms of functionality it is not breaking anything, so we can mark it as Low Severity, but it makes a bad impact on the reputation of the company's site, so fixing it is the highest priority.
High Severity and Low Priority:
1. The downloadable quarterly statement is not being generated correctly from the website, and the quarter's last month has only just begun. We can call such a bug High Severity, as it occurs while generating the quarterly report; but since the report is generated only at the end of the quarter, there is time to fix it, so the priority to fix the bug is Low.
2. The system crashes in a corner-case scenario. It impacts major functionality of the system, so the severity of the defect is high; but as it is a corner case that many users will never see, the project manager can mark it as Low Priority, since many other important bugs, ones visible to the client or end user first, are likely to be fixed before it.
Low Severity and Low Priority:
1. A spelling mistake in a confirmation message: "You have registered success" is written instead of "successfully".
2. The developer forgot to remove a cryptic debug shortcut used while developing the application, triggered by pressing LEFT_ALT+LEFT_CTRL+RIGHT_CTRL+RIGHT_ALT+F5+F10 for 1 minute (funny, no?).
It is a very rare scenario in which a user would hold the keys for such a long period of time, so the bug should be marked as low priority.
Answer:
“Priority” is associated with scheduling, and “severity” is associated with standards.
“Priority” means something is afforded or deserves prior attention; a precedence
established by order of importance (or urgency). “Severity” is the state or quality of
being severe; severe implies adherence to rigorous standards or high principles and
often suggests harshness; severe is marked by or requires strict adherence to
rigorous standards or high principles, e.g. a severe code of behavior. The words
priority and severity do come up in bug tracking. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed
input of software test engineers, give the team complete information so developers
can understand the bug, get an idea of its ‘severity’, reproduce it and fix it. The fixes
are based on project ‘priorities’ and ‘severity’ of bugs. The ‘severity’ of a problem is
defined in accordance to the customer’s risk assessment and recorded in their
selected tracking tool. A buggy software can ‘severely’ affect schedules, which, in
turn can lead to a reassessment and renegotiation of ‘priorities’.