Product (quality factors):
Safety
Security
Usability
Understandability
Modularity
Compatibility
Reliability
Testability
Adaptability
Performance
Software Development Life Cycle (SDLC): A framework that describes the activities performed during the development of a software application.
Prototype: A roughly and rapidly developed model that is demonstrated to the client in order to gather clear requirements and to win the customer's confidence.
II. Analysis Phase:
(a) Tasks:
1. Feasibility Study
2. Tentative planning
3. Technology Selection
4. Requirement Analysis
III. Design Phase:
Tasks:
1. High-level designing
2. Low-level designing
Roles: High-level designing is done by the Chief Architect; low-level designing is done by the Technical Lead.
Process: The Chief Architect draws diagrams using the Unified Modeling Language (UML) in order to divide the whole project into modules.
The Technical Lead also draws diagrams in order to divide the modules into sub-modules.
The Technical Lead also develops pseudo code so that developers are comfortable while writing the actual code.
Proof: The proof document of this phase is the Technical Design Document.
IV.
Coding phase:
V.
Testing Phase:
(a) Task: Testing
(b) Roles: Test engineers
Process:
1. First of all, the test engineers collect the requirements document and try to understand all the requirements.
2. If they get any doubts while understanding it, they list all of them in a review report.
3. They send the review report to the author of the requirements document for clarifications.
4. Once the clarifications are given and all the requirements are clearly understood, they take the test case template and write the test cases.
5. Once the first build is released, they execute the test cases.
6. If any defects are found, they list all of them in a defect profile template.
7. They send the Defect Profile Document to the development department and wait for the next build to be released.
8. Once the next build is released, they re-execute the test cases.
9. If any defects are found, they update the defect profile document, send it to the development department, and wait for the next build to be released.
10. This process continues until the product reaches the required quality.
Proof: The proof of the testing phase is a quality product.
Test Case: A test case is an idea of a test engineer, based on the customer's requirements, for testing a particular feature or function.
VI. Delivery Phase:
Process: The senior test engineer or deployment engineer goes to the client's place and installs the application in their environment with the help of the guidelines provided in the deployment document.
Maintenance:
After the software is delivered, if any problem occurs while it is being used, that problem becomes a task; based on that problem the corresponding roles are appointed, and those roles define the process and solve the problem.
Some clients may request continuous maintenance; in such situations a group of people from the software company works continuously at the client's place and takes care of the software.
SOFTWARE ARCHITECTURE
There are 4 types of architecture:
1) One-tier architecture
2) Two-tier architecture
3) Three-tier architecture
4) N-tier architecture
N-TIER ARCHITECTURE:
In this architecture the presentation layer is available at the client side, while the application layer, business layer, and database layer are available at the server side. The main difference between three-tier architecture and this distributed environment is that here multiple database layers are maintained for fast access.
1) Sequential model
2) Incremental model
SEQUENTIAL MODEL:
These models are best suited for developing small applications.
All requirements must be known before development begins.
It is of two types:
1. Waterfall model
2. V model
WATERFALL MODEL:
REQUIREMENT:
Defines the needed information, functions, behavior, performance, and installation.
DESIGN:
Data structures, software architecture, interface representations, algorithm details.
IMPLEMENTATION:
Source code, database, user documentation, testing.
ADVANTAGES OF WATERFALL MODEL:
DISADVANTAGES:
V MODEL:
Architecture of v model:
Process of v model:
1) Project and requirement planning:
Resource allocation is done, such as project manager, team leader, and team manager.
2) Product requirement and specification analysis:
In this step the entire system is analyzed.
3) Architecture and high-level design:
Design the application architecture (1-, 2-, 3-, or N-tier) and define the functionality of the application.
4) Low-level design:
In this step the sub-modules and algorithms for the entire architecture of the system (high and low level) are designed.
5) Coding: Developers transform the algorithms into code.
6) Unit testing: Checking or testing each developed unit.
7) Integration testing: Establishing connections between units and checking the relations among them.
8) System and acceptance testing: Checking the entire software system in the company environment and in the customer environment.
9) Production and maintenance: Deploying the application into the customer environment, supporting the customer, and gathering new requirements for enhancement.
ADVANTAGES OF V MODEL:
DISADVANTAGES:
Suitable when high reliability of the application is required (e.g., a hospital management application)
All requirements must be known upfront
The technology must be well understood
INCREMENTAL MODELS:
These models are best suited for developing big applications.
There is no need to know all requirements before development begins.
Requirements may not be stable.
Process:
ADVANTAGES:
The customer can see the system requirements as they are being gathered
The developer learns from the customer
A more accurate end product
Unexpected requirements are accommodated (dynamic)
Allows flexibility in design and development, with awareness of additionally needed functionality
DISADVANTAGES:
Bad reputation with customers for quick-and-dirty methods
Overall maintenance may become an overload
The customer may want the prototype delivered as the product
The process may continue forever
RAD MODEL:
Flow:-
PROCESS:
DISADVANTAGES:
SPIRAL MODEL:
Flow:-
ARCHITECTURE:
Functionality
s/w and h/w application
critical success factor areas
GUI etc.
ALTERNATIVES:
CONSTRAINTS:
COST
Time schedule
Interface (GUI)
PROCESS:
ADVANTAGES:
DISADVANTAGES:
Software testing:
VERIFICATION:
It is the process of verifying documents (by team members) and processes (by managers) to check whether we are developing the system in the right way.
VALIDATION:
Architecture:
The left-side activity is always the baseline for the corresponding right-side activity; for example, the system requirements are the baseline for system testing, and the FRS document is the baseline for acceptance testing.
Testing is conducted properly to identify defects as early as possible.
VERIFICATION - Quality Assurance:
Monitoring and measuring the strength of the development process is called quality assurance.
VALIDATION - Quality Control:
The verification of the software product (code-level validity and functional-level validity) is called quality control (QC).
Unconventional testing:
It is a process of testing conducted on documents and the company process; this testing is conducted by quality assurance people (PM, TM).
Conventional testing:
It is a process of testing conducted by the test engineers and developers to check whether the application is working properly or not.
QA and QC:
Testing Principles:
1) Early testing
2) Exhaustive testing
3) Pesticide paradox
4) Testing is context dependent
5) Presence of errors
6) Absence of errors
7) Defect clustering
Early testing:
Testing conducted in the initial stages of software development is called early testing. The main intention of early testing is to reduce the cost of fixing defects.
Exhaustive testing:
Testing a functionality in the system with all possible valid and invalid inputs is called exhaustive testing.
Ex: a first-name edit box accepting 4 to 20 characters.
Testing is context dependent:
We cannot implement the same testing activities for all applications; according to the application type, the implementation of testing changes.
Presence of errors:
As test engineers we need a test-to-break attitude: even if an application is already in production (live environment) and is given to us again, we are ready to find defects in it.
As test engineers we always need both a positive approach and a negative approach; only then can we find the maximum number of defects.
If we identify a large number of defects and they are fixed, the application automatically becomes a quality product.
Absence of errors:
When there are unused areas in the application, even if those areas contain defects we need not concentrate on them; this is called absence of errors.
Defect clustering:
When we identify a defect, we should test the related functionality as a group, since defects tend to cluster.
Software testing techniques / methodology:
1) Static testing
2) Dynamic testing
white box testing
Unit testing
Integration testing
Incorrect requirements
Wrong design
Poor coding
Complex business logics and complex technology
Incorrect functionality
Incorrect data edits
Poor usability
Poor performance
Incompatibility
Static testing:
It is a process in which we understand the company process and guidelines.
While conducting static testing we can identify which model the company is using.
Reviews:
Examining process-related work and document-related work is called a review.
Reviews are of different types.
They are:
Management review
Technical review
Formal review
Informal review
Management review:
This review is conducted by top-level or middle-level management to monitor the project status.
These reviews help the management take the necessary corrective actions if there are any slippages.
Corrective action:
If a role commits a repairable mistake, corrective action repairs that mistake.
Preventive action:
If a role commits a mistake that cannot be repaired, preventive action avoids such mistakes at least in the future.
Slippage:
The deviation between the planned effort and the actual effort is called slippage.
Note: daily or weekly project status meetings are called management reviews.
Formal review:
If a review meeting is conducted with a plan, documents, and a procedure, then those meetings are called formal review meetings.
Architecture:
1. Planning
2. Kick-off meeting (any start-up meeting)
3. Preparation
4. Review meeting
5. Rework
6. Follow-up
1. Planning: The first phase of the formal review is the Planning phase. In this phase the
review process begins with a request for review by the author to the moderator (or inspection
leader). A moderator has to take care of the scheduling like date, time, place and invitation of
the review.
The documents should not reveal a large number of major defects.
The documents should be cleaned up by running any automated checks that apply.
The author should feel confident about the quality of the document so that he can join the
review team with that document.
2. Kick-off: This kick-off meeting is an optional step in a review procedure. The goal of this
step is to give a short introduction on the objectives of the review and the documents to
everyone in the meeting.
3. Preparation: In this step the reviewers review the document individually using the related
documents, procedures, rules and checklists provided. Each participant while reviewing
individually identifies the defects, questions and comments according to their understanding
of the document and role.
Usually the checking rate is in the range of 5 to 10 pages per hour.
4. Review meeting: The review meeting consists of three phases:
Logging phase: In this phase the issues and the defects that have been identified during the
preparation step are logged page by page. The logging is basically done by the author or by
a scribe. Scribe is a separate person to do the logging and is especially useful for the formal
review types such as an inspection. Every defect and its severity should be logged in one of
the three severity classes given below:
Critical: The defect will cause downstream damage.
Major: The defect could cause downstream damage.
Minor: The defect is highly unlikely to cause downstream damage.
Discussion phase: If any issue needs discussion then the item is logged and then handled in the
discussion phase. As chairman of the discussion meeting, the moderator takes care of the people
issues and prevents discussion from getting too personal and calls for a break to cool down the
heated discussion. The outcome of the discussions is documented for the future reference.
Decision phase: At the end of the meeting a decision on the document under review has to be
made by the participants, sometimes based on formal exit criteria. Exit criteria are the average
number of critical and/or major defects found.
5. Rework: In this step, if the number of defects found per page exceeds a certain level, the document has to be reworked. Not every defect that is found leads to rework.
It is the author's responsibility to judge whether a defect has to be fixed.
If nothing can be done about an issue then at least it should be indicated that the author has
considered the issue.
6. Follow-up: In this step the moderator checks to make sure that the author has taken action on all known defects. If it is decided that all participants will check the updated documents, then the moderator takes care of the distribution and collects the feedback.
It is the responsibility of the moderator to ensure that the information is correct and stored for
future analysis.
Inspection and audits are the example of formal review.
Inspection: If a formal review is conducted while executing a task then it is called inspection.
Audit: If a formal review is conducted after completion of a task it is called audit
Technical review:
It is led by a trained moderator but can also be led by a technical expert.
Defects are found by experts (such as architects, designers, key users) who focus on the content of the document.
Informal reviews:
If a review is conducted without following any procedure or documentation, then it is called an informal review.
Peer reviews:
A review conducted among colleagues is called a peer review.
Walkthrough:
A step-by-step presentation given by a business analyst, domain experts, or subject matter experts.
Ex: a knowledge transfer (KT) session is the best example of a walkthrough.
Difference between static and dynamic testing:
Static testing:
1) Testing is done without executing the program and its functionality
2) This testing does verification
3) This testing aims to prevent defects
Dynamic testing:
1) Testing is done by executing the program and its functionality
2) This testing does validation
3) This testing aims at finding and fixing defects
Dynamic testing:
WHITE BOX TESTING
It is conducted by developers.
It is conducted on the source code of the application.
The main intention of WBT is to ensure 100% code coverage, i.e., to check whether the written code works according to the developer's expectations or not.
While conducting WBT, developers perform unit and integration testing.
This testing is also called clear box testing, structural testing, and glass box testing.
Need of WBT:
Unit testing:
A unit is nothing but a small testable program in the source code of the application, such as a class, function, object, method, or procedure.
If all these things work according to the design, that is called unit testing.
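The idea can be sketched with Python's built-in unittest module; the `word_count` function and its expected behaviour are hypothetical, not taken from the source:

```python
import unittest

# Hypothetical unit under test (illustrative only):
def word_count(text):
    """Return the number of whitespace-separated words in text."""
    return len(text.split())

class WordCountUnitTest(unittest.TestCase):
    """Each test checks the unit in isolation against its design."""

    def test_counts_words(self):
        self.assertEqual(word_count("unit testing is fun"), 4)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_spaces_are_ignored(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method exercises one small behaviour of the unit, which is what distinguishes unit testing from integration testing.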
WBT techniques:
1) Basis path testing
2) Program technique testing
3) Control structure testing
4) Mutation testing
Basis path testing:
In this technique developers check the completeness and correctness of the program by exercising its loops and conditions.
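A rough Python sketch of the idea, using a hypothetical function with one loop and one condition; each independent path through the code gets at least one test:

```python
# Hypothetical unit: one loop and one condition give several independent paths.
def grade_total(scores):
    total = 0
    for s in scores:          # loop: zero vs. one-or-more iterations
        total += s
    if total >= 50:           # condition: true vs. false branch
        return "pass"
    return "fail"

# One test input per independent path:
print(grade_total([]))        # loop not entered, condition false
print(grade_total([60]))      # loop entered once, condition true
print(grade_total([10, 20]))  # loop repeated, condition false
```

Covering every independent path like this is the goal of basis path testing, rather than trying every possible input.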
Mutation testing:
Making a change to the program and checking whether that change impacts the unchanged areas is called mutation testing.
Integration testing:
STUB:
A simulated program that replaces a called program is called a stub.
Bottom-up approach:
This approach is recommended when the programs at the top level are incomplete. In this approach the missing calling programs are replaced with drivers.
Driver:
A simulated program that replaces a calling program is called a driver.
Big bang testing:
This approach is recommended when all source code units are available and unit tested. In this approach all source code units are combined together into one large system, and then the integration among all of them is validated. It takes very little time to conduct integration testing this way, but if any defects are encountered, finding the root cause of a defect becomes a difficult task.
Sandwich approach:
This approach combines the top-down and bottom-up approaches of integration testing. In this approach the middle-level modules are tested using drivers and stubs.
Called: when a dummy program is being called by a developed program, the dummy program is a stub.
Calling: when a dummy program is calling a developed program, the dummy program is a driver.
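The stub/driver relationship can be sketched in Python; the tax-service names and the 10% rate are assumptions for illustration:

```python
# Developed module: a calling program that depends on a not-yet-built tax service.
def net_salary(gross, tax_service):
    return gross - tax_service(gross)

# STUB: simulated replacement for the called (not yet developed) tax program.
def tax_service_stub(gross):
    return gross * 0.1   # hard-coded simple behaviour, assumed rate

# DRIVER: simulated calling program used to exercise a developed unit
# when the real caller does not exist yet.
def driver():
    for gross in (10000, 20000):
        print(gross, "->", net_salary(gross, tax_service_stub))

driver()
```

Here `tax_service_stub` stands in for a called program (stub) and `driver` stands in for a calling program (driver), matching the definitions above.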
Note:
What is unit testing?
Does the piece work by itself? That is unit testing.
What is integration testing?
Do the different pieces work together? That is integration testing.
Positive testing:
Testing conducted on the application with a positive perspective, to check what the system is supposed to do, is called positive testing.
Ex: entering a valid username and a valid password and clicking the submit button, to determine what login is supposed to do, is positive testing.
The objective of positive testing is conformance to requirements.
Negative testing:
1) It means testing the application by giving invalid data.
2) In this testing, testers always check with invalid sets of values.
3) It is done from a negative perspective.
Ex: checking a mobile number field with invalid input such as 9440456abc.
4) It is always done to try to break the project or product with unknown sets of test conditions.
5) This testing checks how the product or project behaves (what defects appear) when invalid data is provided.
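A minimal Python sketch of positive versus negative testing, using the mobile-number example above; the 10-digit rule and the validator itself are assumptions:

```python
import re

# Hypothetical validator for a 10-digit mobile number field.
def is_valid_mobile(number):
    return bool(re.fullmatch(r"\d{10}", number))

# Positive testing: valid input, check what the system is supposed to do.
print(is_valid_mobile("9440456789"))   # accepted

# Negative testing: invalid inputs, check that the system rejects them.
print(is_valid_mobile("9440456abc"))   # letters mixed in, rejected
print(is_valid_mobile("944045"))       # too short, rejected
```

The positive case confirms conformance to the requirement; the negative cases try to break the field with invalid data.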
Equivalence Class Partitioning (ECP):
According to ECP, first analyze all possible valid and invalid inputs and divide them into groups. While making the groups, make sure that every input that belongs to a group produces the same output.
Because every input in a group produces the same output, every input has equal priority for testing. So we do not need to test with every input; consider one input from each class (preferably a middle value) for testing.
Ex: prepare input data using the ECP technique to check the above functionality, i.e., whether the system displays the appropriate message or not based on the type of character.
Ex 2: In a shopping mall application the employee salary validations are given below.
1. Salary: minimum 5000 to maximum 50000
2. Numeric only, mandatory. Prepare the test data for the given requirement.

Valid (salary between 5000 and 50000): 5000, 25000, 50000
Invalid (<5000): 4999, 4998, -1, -2, -3
Invalid (>50000): 50001, Infinity
Invalid (NULL): <BLANK>
Invalid (non-numeric): abc, abc123
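The partitions above can be expressed in Python; the `is_valid_salary` function is a hypothetical implementation of the stated rule (numeric, 5000 to 50000 inclusive):

```python
# Equivalence classes for the salary rule (values taken from the table above).
partitions = {
    "valid (5000-50000)":    [5000, 25000, 50000],
    "invalid (<5000)":       [4999, 4998, -1, -2, -3],
    "invalid (>50000)":      [50001],
    "invalid (null)":        [None],
    "invalid (non-numeric)": ["abc", "abc123"],
}

# Hypothetical implementation of the validation rule under test.
def is_valid_salary(value):
    return isinstance(value, (int, float)) and 5000 <= value <= 50000

# One representative per class is enough; running all members here just
# demonstrates that every input in a class produces the same outcome.
for name, values in partitions.items():
    outcomes = {is_valid_salary(v) for v in values}
    print(name, outcomes)
```

Each class prints a single outcome, which is exactly the property ECP relies on when it picks one representative value per class.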
Ex 3: In a bank application, the checks for a fund transaction are given below.
1)
2)
3)
4)
Boundary Value Analysis (BVA) tests clean cases (representative values within the allowable range) and dirty cases (the capability to handle worst-case boundary conditions).
Note: ECP and BVA together ensure 100% requirement coverage.
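A small Python helper can generate the classic BVA candidates for any inclusive range; this is a generic sketch of the usual min-1/min/min+1 ... max-1/max/max+1 pattern, not a prescribed formula:

```python
def boundary_values(lo, hi):
    """Classic BVA candidates for an inclusive [lo, hi] range:
    just below, on, and just above each boundary, plus a mid value."""
    return sorted({lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1})

# Boundary candidates for the salary range used earlier.
print(boundary_values(5000, 50000))
```

The values just below and just above each boundary are the "dirty" cases; the on-boundary and mid values are the "clean" cases.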
Ex: In a login page there are user id and password fields; the user id allows 6-18 uppercase characters, and the password allows 5-17 characters (alphanumeric and special characters). Prepare the test data.

User Name Table:
BVA - Valid: 6, 7, 13, 17, 18; Invalid: 5, 19
ECP - Valid: A-Z; Invalid: 0-9, lower case, alphanumeric, special characters

Password Table:
BVA - Valid: 5, 6, 11, 16, 17; Invalid: 4, 18
ECP - Valid: A-Z, lower case, alphanumeric, special characters; Invalid: null
Ex: In an insurance application visitors can see the insurance policy types upon entering their age (16 to 80 years). Prepare the test data.
BVA (length in digits) - Valid: 2; Invalid: 1, 3
ECP - Valid: 0-9; Invalid: null, A-Z, lower case, alphanumeric, special characters
Ex: In a shopping mall application the bill amount can be computed upon entering the quantity (up to 10). Prepare the test data.
BVA - Valid: 1, 2, 5, 9, 10; Invalid: 0, 11
ECP - Valid: 0-9; Invalid: null, A-Z, lower case, alphanumeric, special characters
Ex: In an e-bank application a customer can log in by entering a password (6-digit numeric). The phone number consists of an area code (3-digit number), a prefix (4-character alphanumeric), and a suffix (5-digit number that should not start with 0 or 1). Prepare the test data.

Password Table (6-digit numeric):
BVA - Valid: 6; Invalid: 5, 7
ECP - Valid: 0-9; Invalid: upper case, lower case, special characters, alphanumeric

Area Code Table (3-digit number):
BVA - Valid: 3; Invalid: 2, 4
ECP - Valid: 0-9; Invalid: upper case, lower case, special characters, alphanumeric

Prefix Table (4-character alphanumeric):
BVA - Valid: 4; Invalid: 3, 5
ECP - Valid: 0-9, upper case, lower case; Invalid: special characters

Suffix Table (5-digit number, not starting with 0 or 1):
BVA - Valid: 5; Invalid: 4, 6
ECP - Valid: 0-9 (should not start with 0 or 1); Invalid: upper case, lower case, special characters, alphanumeric
Most of the time test engineers give multiple sets of inputs to execute a functionality, and a number of test cases are written for it. Before writing the test cases we prepare a decision table, so that the test cases can be written in a simple format.

Test case id | Expected value
TC 01 | System should display the FR window
TC 02 | System should display a message that the AN must be 4 characters
TC 03 | System should display a message: please enter the AN
TC 04 | System should display a message: please enter a valid password
TC 05 | System should display a message: please enter the password
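The decision table above can be expressed directly as code; the validation rules (a 4-character account number and a mandatory alphanumeric password) are assumptions inferred from the expected messages, and "AN" is read here as the account-number field:

```python
# Hypothetical login validation, reconstructed from the expected messages.
def login_message(account_number, password):
    if not account_number:
        return "Please enter account number"
    if len(account_number) != 4:
        return "Account number must be 4 characters"
    if not password:
        return "Please enter password"
    if not password.isalnum():
        return "Please enter valid password"
    return "Funds-transfer window displayed"

# The decision table: (test case id, inputs, expected value).
decision_table = [
    ("TC01", ("1234", "pass1"), "Funds-transfer window displayed"),
    ("TC02", ("12345", "pass1"), "Account number must be 4 characters"),
    ("TC03", ("", "pass1"), "Please enter account number"),
    ("TC04", ("1234", "p@!"), "Please enter valid password"),
    ("TC05", ("1234", ""), "Please enter password"),
]

for tc_id, (an, pwd), expected in decision_table:
    actual = login_message(an, pwd)
    print(tc_id, "PASS" if actual == expected else "FAIL")
```

Driving the test cases from a table like this keeps each condition/action pair explicit, which is the whole point of decision-table testing.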
This is one type of security testing: checking whether the system limits the number of attempts for a transaction.
Ex: in a login window the user is allowed to attempt the operation up to 5 times; after completion of the 5th attempt the corresponding application session is terminated.
Most of the time, offshore testing teams depend upon the use case document for writing test cases.
During execution we check whether the application has been developed according to the use cases or not.
A use case is nothing but a user action and the corresponding system response.
Beta testing:
Testing conducted at the client's location.
Grey box testing (GBT):
In the above example, to check whether the login displays the right module to the right user, we need to interact with both the database and the application; this is called grey box testing (GBT).
Note:
Database testing is the best example of GBT.
Database testing:
Validating at the back end the various operations performed at the front end, validating at the front end the various operations performed at the back end, validating the database design (field data types, field sizes, constraints), and also validating SQL scripts such as stored procedures and triggers, is collectively called database testing.
Need for database testing: in general, a test engineer confirms a functionality by looking only at the application.
Ex: to check the above employee registration functionality, a test engineer inputs a valid employee number, employee name, designation, and salary, and clicks the submit button. If the application displays the message "employee created successfully", he assumes the functionality works. But the message box is produced by programming logic, not by a confirmation from the database.
So it does not guarantee that the data is really being stored in the database. In order to confirm that, database testing is required.
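A minimal sketch of the back-end check using Python's built-in sqlite3; the table schema, column names, and employee data are assumed for illustration:

```python
import sqlite3

# In-memory database stands in for the application's back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_no INTEGER PRIMARY KEY, name TEXT, salary REAL)")

def create_employee(emp_no, name, salary):
    """Stands in for the front-end submit action."""
    conn.execute("INSERT INTO employee VALUES (?, ?, ?)", (emp_no, name, salary))
    conn.commit()
    return "Employee created successfully"   # the message the UI shows

message = create_employee(101, "Ravi", 25000)

# Database testing: do not trust the UI message; query the table directly.
row = conn.execute("SELECT name, salary FROM employee WHERE emp_no = 101").fetchone()
print(message)
print(row)   # a matching row confirms the data actually reached the database
```

The point of the direct SELECT is exactly the one made above: the success message alone proves nothing about what the database stored.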
Difference between user interface testing and database testing:
User interface testing:
Database testing:
Phase | Role | Deliverable
1) Test planning | | Test strategy, test plan
2) Test analysis | Test engineer |
3) Test design | Test engineer |
4) Test execution | Test engineer |
5) Test closure | |
1) Test plan:
Test strategy:
It is a high-level document; it describes the testing approaches we need to implement while conducting testing in the organization.
The test plan document differs from application to application.
Test plan template:
Phase: Test plan
Entry criteria:
1) Requirement documents
2) Requirement traceability matrix
3) Test automation feasibility document
Activities:
1) Identify the various testing approaches
2) Select the best suitable approach for the project
3) Prepare the test plan for the various types of testing
4) Tool selection
5) Effort estimation
6) Resource allocation
Exit criteria:
1) Test plan document approved by the project manager, with client-side approvals
2) Approved effort estimation document
Deliverable: Test plan document
Section 2: Introduction
2.1 Purpose
2.2 Scope
2.3 Overview
2.4 Definition, Acronyms and abbreviation (terminology)
2.5 Reference
5.3 Escalations
5.3.1 Configuration management
5.3.2 Test phase
5.3.3 Test activity
Phase: Test analysis
Entry criteria:
1) Requirement documents must be available
2) Acceptance criteria defined
3) Application document must be available
Activities:
1) Understand the functional and non-functional requirements of the system
2) Identify the navigation in the modules and the user properties
3) Gather task flow diagram information
4) Identify the tests to be performed
5) Gather details about requirement priority
6) Identify test environment details
7) Identify automatable areas
Exit criteria:
1) RTM signed off
2) Automation areas approved and signed off by the client
Deliverable: RTM and automation feasibility report (if applicable)
FRS document
1.1.0
1.1.1
1.1.2
1.1.3
1.1.4
1.1.5
Input validation and error status: It describes what inputs we need to give for the corresponding functionality, and the data structure as well.
Task flow diagram: It describes the corresponding user flow in the application.
Use case diagram: It describes the user action and system response.
1) The execution status is also displayed. During execution, it gives a consolidated snapshot
of how work is progressing.
2) Defects: When this column is used to establish backward traceability, we can tell that the New User functionality is the most flawed. Instead of reporting that such-and-such test cases failed, the TM provides transparency back to the business requirement that has the most defects, thus showcasing quality in terms of what the client desires.
3) As a further step, you can color code the defect ID to represent their states. For example,
defect ID in red can mean it is still Open, in green can mean it is closed. When this is done,
the TM works as a health check report displaying the status of the defects corresponding to a
certain BRD or FSD functionality is being open or closed.
4) If there is a technical design document or use cases or any other artifacts that you would
like to track you can always expand the above created document to suit your needs by adding
additional columns.
1. Ensuring 100% test coverage
2. Showing requirement/document inconsistencies
3. Displaying the overall defect/execution status with focus on business requirements.
4. If a certain business and/or functional requirement were to change, a TM helps estimate
or analyze the impact on the QA teams work in terms of revisiting/reworking on the test
cases.
Additionally,
1. A TM is not a manual testing specific tool, it can be used for automation projects as well.
For an automation project, the test case ID can indicate the automation test script name.
2. It is also not a tool that can be used just by the QAs. The development team can use the
same to map BRD/FSD requirements to blocks/units/conditions of code created to make
sure all the requirements are developed.
3. Test management tools like HP ALM come with the inbuilt traceability feature.
An important point to note is that the way you maintain and update your Traceability Matrix determines the effectiveness of its use. If not updated often, or updated incorrectly, the tool becomes a burden instead of a help and creates the impression that the tool itself is not worth using.
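A traceability matrix can be kept even as a simple data structure; this sketch uses hypothetical requirement and defect IDs to show the coverage and backward-traceability checks described above:

```python
# Minimal RTM sketch: requirements mapped to test case IDs and defect IDs (assumed IDs).
rtm = {
    "BR-01 Login":         {"test_cases": ["TC01", "TC02"], "defects": []},
    "BR-02 New user":      {"test_cases": ["TC03"],         "defects": ["D-11", "D-12"]},
    "BR-03 Fund transfer": {"test_cases": [],               "defects": []},
}

# 1) Coverage check: every requirement must map to at least one test case.
uncovered = [req for req, row in rtm.items() if not row["test_cases"]]
print("Uncovered requirements:", uncovered)

# 2) Backward traceability: which requirement has the most defects?
most_flawed = max(rtm, key=lambda req: len(rtm[req]["defects"]))
print("Most defects:", most_flawed)
```

The same two queries are what a spreadsheet RTM answers by eye: gaps in the test-case column, and the requirement rows with the most defect IDs.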
RTM Template:
3) Test designing:
Test scenario:
A scenario is nothing but a situation or a plan for what is to be tested in the application,
(or)
a scenario is nothing but an item or functionality to be tested in the application.
While writing scenarios we should use only the words "verify" or "check".
We should not use "enter", "click", "select", etc.
A scenario is nothing but a response from the application.
Phase: Test design
Entry criteria:
1) Requirement documents must be available
2) RTM and test plan
3) Automation analysis report
Activities:
1) Prepare test scenarios
2) Prepare test cases
3) List out the information in the RTM
4) Review the test cases (peer review)
5) Prepare automation test scripts
6) Prepare test data
Exit criteria:
1) Test cases approved by the test manager
Deliverable: Test cases
3. Test if the mouth of the bottle is not too small to pour water. [Usability Testing!]
4. Fill the bottle with water and keep it on a smooth dry surface. See if it leaks.
[Usability Testing!]
5. Fill the bottle with water, seal it with the cap and see if water leaks when the
bottle is tilted, inverted, squeezed (in case of plastic made bottle)! [Usability
Testing!]
6. Take water in the bottle and keep it in the refrigerator for cooling. See what
happens. [Usability Testing!]
7. Keep a water-filled bottle in the refrigerator for a very long time (say a week). See
what happens to the water and/or bottle. [Stress Testing!]
8. Keep a water-filled bottle under freezing condition. See if the bottle expands (if
plastic made) or breaks (if glass made). [Stress Testing!]
9. Try to heat (boil!) water by keeping the bottle in a microwave oven! [Stress
Testing!]
10. Pour some hot (boiling!) water into the bottle and see the effect. [Stress
Testing!]
11. Keep a dry bottle for a very long time. See what happens. See if any physical or
chemical deformation occurs to the bottle.
12. Test the water after keeping it in the bottle and see if there is any chemical
change. See if it is safe to be consumed as drinking water.
13. Keep water in the bottle for some time. And see if the smell of water changes.
14. Try using the bottle with different types of water (like hard and soft water).
[Compatibility Testing!]
15. Try to drink water directly from the bottle and see if it is comfortable to use. Or
water gets spilled while doing so. [Usability Testing!]
16. Test if the bottle is ergonomically designed and if it is comfortable to hold. Also
see if the center of gravity of the bottle stays low (both when empty and when filled
with water) and it does not topple down easily.
17. Drop the bottle from a reasonable height (may be height of a dining table) and
see if it breaks (both with plastic and glass model). If it is a glass bottle then in most
cases it may break. See if it breaks into tiny little pieces (which are often difficult to
clean) or breaks into nice large pieces (which could be cleaned without much
difficulty). [Stress Testing!] [Usability Testing!]
18. Test the above test idea with empty bottles and bottles filled with water. [Stress
Testing!]
19. Test if the bottle is made up of material, which is recyclable. In case of plastic
made bottle test if it is easily crushable.
20. Test if the bottle can also be used to hold other common household things like
honey, fruit juice, fuel, paint, turpentine, liquid wax etc. [Capability Testing!]
4. Verify that validation message gets displayed in case user leaves username
or password field as blank.
5. Verify that validation message is displayed in case user exceeds the
character limit of the user name and password fields.
6. Verify that there is reset button to clear the field's text.
7. Verify if there is checkbox with label "remember password" in the login page.
8. Verify that the password is in encrypted form when entered.
9. Verify that there is limit on the total number of unsuccessful attempts.
10.For security reasons, in case of incorrect credentials the user should be
displayed a message like "incorrect username or password" instead of an exact
message pointing at the field that is incorrect, since a message like
"incorrect username" will aid a hacker in brute-forcing the fields one by one.
11.Verify the timeout of the login session.
12.Verify if the password can be copy-pasted or not.
13.Verify that once logged in, clicking back button doesn't logout user.
14.Verify if SQL Injection attacks works on login page.
15.Verify if XSS vulnerability work on login page.
19.Verify that in case capacity limit is reached users are prompted with warning
alert- audio/visual.
20.Verify that inside lift users are prompted with current floor and direction
information the lift is moving towards- audio/visual prompt.
39.Keep the air pressure different in all four tyres and then drive the car.
40.Use the hand brake while driving the car.
41.Try to start the car with some other key.
42.Check the condition of the tyres on filling them to a pressure higher than
prescribed.
43.Check the condition, speed and fuel consumption of the car on filling the
tyres to a pressure lower than prescribed.
44.Check the car's speed, performance and fuel consumption on driving the car on
roads not conducive for driving.
20.Verify that for a keyword, some related search terms are also displayed to
aid the user's search.
21.Verify that when the number of results exceeds the limit for a single page,
pagination is present, clicking on which the user can navigate to
subsequent pages of results.
22.Verify Google's advanced search options, like searching within a website and
searching for files of a specific extension.
23.Verify whether the search is case-insensitive or not.
24.Verify the functionality of the "I'm Feeling Lucky" search: the top-most
search result should be returned directly (but as of now the Google Doodle
page link is displayed).
25.Front End - UI Test Cases of Google Search.
26.Verify that Google Logo is present and centre aligned.
27.Verify that the search textbox is centre aligned and editable.
28.Verify that the search request is submitted by clicking the search button or
hitting Enter after typing the search term.
29.Verify that in the search result the webpage's title, URL and description are
present.
30.Verify that clicking a search result leads to the corresponding web page.
31.Verify that pagination is present in case the number of results is greater
than the maximum number of results allowed on a page.
32.Verify that the user can navigate to a page number directly or move to the
previous or next page using the links present.
33.Verify that different language links are present and get applied on clicking
them.
34.Verify that the total number of results for the keyword is displayed.
35.Verify that the time taken to fetch the result is displayed.
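The pagination checks above (cases 21, 31, 32) reduce to a simple calculation: the number of pages is the total result count divided by the page size, rounded up. A sketch, with an assumed page size of 10 results:

```python
import math

def page_count(total_results, per_page=10):
    """Number of result pages a paginated search UI would need."""
    return math.ceil(total_results / per_page) if total_results else 0

# Pagination appears only when results exceed one page (case 31).
assert page_count(7) == 1    # fits on one page, no pagination needed
assert page_count(10) == 1
assert page_count(11) == 2   # pagination links required
assert page_count(0) == 0
```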
TEST CASES:
A test case is a detailed description of what to test and how to test; in other
words, a set of preconditions, input values, steps and expected results is
called a test case.
Types of test cases:
There are 3 types of test cases:
1) Positive test case:
Test cases written to check the business flow of the application are called
positive test cases.
In these test cases we need to prepare valid data only, following BVA
(Boundary Value Analysis) and ECP (Equivalence Class Partitioning) conditions.
While writing these test cases we need to have an end-user mindset.
2) Negative test case:
Test cases written to identify defects in the application are called negative
test cases.
3) GUI test case:
Test cases prepared to check the look and feel of the application are called
GUI test cases.
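How BVA and ECP produce positive and negative data can be sketched for a hypothetical input field; the field ("age", accepting 18 to 60 inclusive) and its range are assumptions for illustration only:

```python
# Hypothetical field under test: "age" accepts 18..60 inclusive (assumed spec).
def accepts_age(age):
    return 18 <= age <= 60

# Positive test data: BVA boundaries plus a valid ECP mid-range value.
for valid in (18, 19, 59, 60, 35):
    assert accepts_age(valid)

# Negative test data: values just outside the boundaries and an invalid class.
for invalid in (17, 61, -1, 0):
    assert not accepts_age(invalid)
```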
Principles of test cases:
1)
2)
3)
4)
5)
6)
7)
Test execution:
In this phase we need to concentrate on the test execution process, the test
environment setup, and the deliverables we need to check.
Entry criteria:
1) System design and architecture document available
2) Environment plan is available

Activities:
1) Understand the requirement architecture to set up the environment
2) Prepare the hardware and software requirements list
3) Finalize all environment and setup requirements
4) Set up the environment and test data
5) Perform smoke testing
6) Accept or reject the build depending on the smoke testing
7) Identify SRN, QRN, BRN documents

Exit criteria:
1) Environment setup working as per the plan and guidelines
2) Smoke tests pass successfully

Deliverables:
1) Deployment document
2) Smoke test result
Smoke Testing:
The main intention of conducting smoke testing is to check whether the released
application is eligible for detailed testing or not.
Smoke testing is used to check the below conditions:
1)
2)
3)
4)
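The accept-or-reject decision that smoke testing feeds (activity 6 in the setup table above) can be sketched as a gate over a handful of critical checks; the check names below are illustrative assumptions, not a real build's checks:

```python
def smoke_test(build_checks):
    """Accept the build only if every critical check passes.

    `build_checks` maps a check name to a zero-argument callable that
    returns True on success.
    """
    failures = [name for name, check in build_checks.items() if not check()]
    return ("accept", []) if not failures else ("reject", failures)

status, failed = smoke_test({
    "application launches": lambda: True,
    "login page loads": lambda: True,
    "database reachable": lambda: False,   # simulated failure
})
assert status == "reject" and failed == ["database reachable"]
```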
Test execution:
Phase: Test execution

Exit criteria:
1) All test cases must be executed
2) All defects must be fixed and closed

Deliverables:
1) Complete RTM with execution status
2) Test cases updated with results
3) Defect report with status as closed
Sanity testing:
It is a detailed testing conducted on each and every functionality to check
whether the functionality satisfies all the customer requirements or not.
In this type of testing we test the application functionality in a detailed
manner by providing different inputs to the functionality.
Characteristics of sanity testing:
1) It is a detailed testing to check the stability of the functionality in a
detailed manner.
2) It is conducted on new features and after bug fixes.
3) This testing is performed by testers.
4) It is usually not documented; we conduct this testing based on test cases.
5) It is a subset of acceptance testing.
6) It exercises only a particular functionality in the system.
7) It is like a detailed test drive (mileage, engine check-up, brakes, etc.).
Re testing:
Testing the same functionality again and again (with multiple sets of test data)
is called re-testing.
Re-testing is conducted on bug-fixed areas to check whether the bug is properly
fixed or not.
Regression testing:
This testing is conducted on already working functionality to check whether the
changed areas are impacting the unchanged areas, i.e. whether already working
functionality got affected or not.
This testing is carried out on new or modified functionality.
Difference between re testing and regression testing:
Re testing
1) Re-testing is done to make sure that the test cases which failed earlier pass
after the modification; for that we conduct re-testing.
2) It is carried out based on defect fixes.
3) Defect verification comes under this testing.
4) In this testing we can include failed test cases.
5) Re-testing is unplanned testing.
6) Re-testing cannot be automated.
7) Re-testing has higher priority than regression testing.
Regression testing
1) In this type of testing we check whether software functionality that was
enhanced or modified is showing any effect on already working functionality.
2) It is carried out based on defect fixes or enhancements.
3) Defect verification is not a part of regression testing.
4) In this we can include passed test cases.
5) Regression testing is planned testing; to conduct it we need to understand
all the functional requirements and the related modules, and we carry it out
with passed test cases.
6) Automation testing tools are designed mainly for regression testing.
7) Regression testing is conducted based on the availability of resources; when
resources are short we can go for re-testing alone.
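The key difference in point 4 of each column (re-testing re-runs failed cases, regression re-runs passed cases) can be sketched as a suite-selection helper; the test-case names and statuses below are illustrative assumptions:

```python
def pick_suites(test_cases):
    """Split executed test cases into re-test and regression candidates.

    Re-testing re-runs previously FAILED cases (defect verification);
    regression re-runs previously PASSED cases around the changed areas.
    """
    retest = [t for t, status in test_cases.items() if status == "failed"]
    regression = [t for t, status in test_cases.items() if status == "passed"]
    return retest, regression

retest, regression = pick_suites({
    "TC01 login": "passed",
    "TC02 transfer": "failed",
    "TC03 logout": "passed",
})
assert retest == ["TC02 transfer"]
assert regression == ["TC01 login", "TC03 logout"]
```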
Adhoc testing:
It is also called random testing. In this type of testing we test the
application as per our own interest; the main intention of adhoc testing is to
find tricky defects in the application.
This testing is conducted when there is a lack of time or a lack of resources,
and it has the following types:
1) Monkey testing
2) Buddy testing
3) Pair testing
4) Exploratory testing
1) Monkey testing:
In this type of testing we concentrate only on the main functionality; when
there is a lack of time we perform this testing.
2) Buddy testing:
In this type of testing a developer and a test engineer are grouped together as
buddies and test the application jointly. This is mostly recommended for
incremental modules when there is less time.
3) Pair testing:
In this, junior test engineers are grouped with seniors and test the application
together, when there is a lack of knowledge about the application.
4) Exploratory testing:
This testing is conducted by domain-experienced people, i.e. persons who have
the relevant domain knowledge or knowledge of the entire application.
Explore meaning: having some basic knowledge, performing some operations on the
application and learning information from the application as you go.
Note: In situations where there is a lack of documentation we conduct
exploratory testing.
End to End testing:
This testing is conducted before releasing the application to the customer.
In this type of testing a simulated environment (similar to the exact customer
environment) is created in the organization, and end-to-end test cases are
executed to check the entire functionality of the system.
Difference between System and End to End Testing:
System testing
1) It is conducted to check all the functional requirements of the system.
2) Both functional and non-functional testing are carried out in this testing.
3) It is used to check whether the functionality is working according to the
customer requirement (MCR) or not.
End to End testing
1) It is conducted after system testing.
2) It is conducted to check the external interfaces of the system, which cannot
be automated; hence manual testing is performed.
Scalability - Determines the maximum user load the software application can handle.
Load testing - checks the application's ability to perform under anticipated user loads.
The objective is to identify performance bottlenecks before the software application goes
live.
Stress testing - involves testing an application under extreme workloads to see
how it handles high traffic or data processing. The objective is to identify the
breaking point of an application.
Endurance testing - is done to make sure the software can handle the expected load over a
long period of time.
Spike testing - tests the software's reaction to sudden large spikes in the load generated by
users.
Volume testing - Under volume testing a large amount of data is populated in the
database and the overall software system's behavior is monitored. The objective
is to check the software application's performance under varying database
volumes.
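A minimal load-test sketch of the idea behind load testing: fire requests from several concurrent "users" and record response times. The request function here is a stand-in with an assumed ~10 ms service time; a real load test would call the application under test instead.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real HTTP call (assumption: ~10 ms service time)."""
    time.sleep(0.01)
    return 200

def load_test(users, requests_per_user):
    """Fire requests from `users` concurrent workers and time each one."""
    timings = []  # list.append is thread-safe in CPython

    def one_user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            fake_request()
            timings.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(one_user)
    # Context-manager exit waits for all workers to finish.
    return max(timings), sum(timings) / len(timings)

worst, average = load_test(users=5, requests_per_user=3)
assert worst >= average > 0   # no response can take negative time
```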
Defect Reporting
Status:
New: Whenever the defect is found for the first time, the test engineer sets the
status to New.
Open: Whenever the developer accepts the raised defect, he sets the status to
Open.
Fixed (for verification) or Rectified: Whenever the developer rectifies the
raised defect, he changes the status to Fixed.
Reopen and Closed: Whenever the defects are rectified and the next build is
released to the testing department, the test engineers check whether the defects
are rectified properly or not. If a defect is rectified properly, the test
engineer sets the status to Closed; if not, the test engineer sets the status to
Reopen.
Hold: Whenever the developer is confused about whether to accept or reject the
defect, he sets the status of the defect to Hold.
Tester's Error, Tester's Mistake or Rejected: Whenever the developer confirms it
is not at all a defect, he sets the status of the defect to Rejected.
As Per Design: - Whenever the test engineer is not aware of new requirements and if he raises
defects related to the new features then the developer will set the status As Per Design.
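The statuses above form a small state machine. The sketch below encodes the allowed transitions; the exact transition set is an assumption read off the descriptions above, not a specific defect tool's workflow:

```python
# Allowed defect status transitions (assumed from the descriptions above).
TRANSITIONS = {
    "New":    {"Open", "Hold", "Rejected", "As Per Design"},
    "Open":   {"Fixed"},
    "Fixed":  {"Closed", "Reopen"},
    "Reopen": {"Fixed"},
    "Hold":   {"Open", "Rejected"},
}

def move(status, new_status):
    """Apply one transition, rejecting anything the workflow forbids."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

s = move("New", "Open")    # developer accepts the raised defect
s = move(s, "Fixed")       # developer rectifies it
s = move(s, "Reopen")      # tester finds it not properly fixed
s = move(s, "Fixed")       # developer fixes it again
s = move(s, "Closed")      # tester verifies the fix
assert s == "Closed"
```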
Severity: Severity defines the seriousness of the defect. It is classified into
four types:
1. Fatal ------(Sev1) or S1 or 1
2. Major ------(Sev2) or S2 or 2
3. Minor ------(Sev3) or S3 or 3
4. Suggestion -(Sev4) or S4 or 4
1. Fatal: - All Run time errors, Show Stopper Defects
Priority: Priority defines the sequence in which the defects have to be
rectified. It is classified into four types:
1. Critical ---(Pri1) or P1 or 1
2. High -------(Pri2) or P2 or 2
3. Medium -----(Pri3) or P3 or 3
4. Low --------(Pri4) or P4 or 4
Usually the Fatal defects are given critical priority, Major defects are given High priority,
Minor defects are given Medium Priority and suggestions are given Low Priority, But depending
up on the situations the priority will be changing.
I - Case: Low severity, high priority
Upon a customer visit to the company, all the look-and-feel defects are given
the highest priority.
II - Case: High severity, low priority
Whenever 80% of the application is released to the testing department with 20%
missing, the test engineers will treat the missing features as Fatal defects,
but the development lead will give those defects the least priority, as the
features are still under development.
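The usual severity-to-priority mapping and both override cases can be sketched as a lookup with a situational override; the mapping follows the text above and the names are illustrative:

```python
DEFAULT_PRIORITY = {  # usual mapping; overridden depending on the situation
    "Fatal": "Critical",
    "Major": "High",
    "Minor": "Medium",
    "Suggestion": "Low",
}

def priority(severity, override=None):
    """Default priority from severity, with an optional situational override."""
    return override or DEFAULT_PRIORITY[severity]

# Case I: low severity, high priority (look-and-feel before a customer visit)
assert priority("Minor", override="Critical") == "Critical"
# Case II: high severity, low priority (feature still under development)
assert priority("Fatal", override="Low") == "Low"
# Usual mapping otherwise
assert priority("Major") == "High"
```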
TEST CLOSURE
After completion of a reasonable number of test execution cycles, the test lead
concentrates on test closure to estimate the completeness and correctness of
test execution and bug resolution. In the review meeting, the test lead
considers the following factors to review the testing team's responsibility:
1.
2. Defect density:
   Module name          No. of defects
                        20%
                        20%
                        20%
3. Analysis of deferred (postponed) defects: whether the deferred defects can
really be postponed or not.
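Defect density here is each module's share of the total defect count; high-density modules are the ones selected for extra regression attention during closure. A sketch, with hypothetical module names and counts:

```python
def defect_density(defects_by_module):
    """Each module's share of the total defect count."""
    total = sum(defects_by_module.values())
    return {m: n / total for m, n in defects_by_module.items()}

density = defect_density({"login": 4, "payments": 10, "reports": 6})
assert density["payments"] == 0.5
# Pick the highest-density module for focused regression testing.
assert max(density, key=density.get) == "payments"
```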
After completion of the closure review, the testing team concentrates on
postmortem testing or final regression testing or pre-acceptance testing, if
required.
[Flow: select the high-defect-density module → test reporting → effort
estimation → plan regression → regression testing]
4. User acceptance testing: After completion of testing and the reviews, project
management concentrates on user acceptance testing to collect feedback from real
or model customers. There are two ways to conduct UAT: alpha testing and beta
testing.
5. Sign off: After completion of user acceptance testing and modifications,
project management declares a release team and a CCB (Change Control Board). In
both teams a few developers and test engineers are involved along with the
project manager. In the sign-off stage the testing team submits all the prepared
testing documents to the project manager:
Test strategy
Test plan
Test case title/ test scenario
Test case document
Test log
Test defect reports
The combination of all the above documents is also known as the Final Test
Summary Report (FTSR).
Sl.No   Informal Review                        Formal Review
1       Conducted on an as-needed basis,       Conducted at the end of each life
        i.e. with an informal agenda           cycle phase, i.e. with a formal
                                               agenda