
1. Can you explain the PDCA cycle and where testing fits in?

Software testing is an important part of the software development process. In normal software development there are four
important steps, also referred to, in short, as the PDCA (Plan, Do, Check, Act) cycle.

Let's review the four steps in detail.
1. Plan: Define the goal and the plan for achieving that goal.
2. Do/Execute: Execute according to the strategy and plan decided during the plan stage.
3. Check: Check/Test to ensure that we are moving according to plan and are getting the desired results.
4. Act: If the check stage uncovers any issues, take appropriate corrective action and revise the plan
again.

So developers and other stakeholders of the project do the "planning and building," while testers do the check part of the cycle.
Therefore, software testing is done in the check part of the PDCA cycle.
2. What is the difference between white box, black box, and gray box testing?
Black box testing is a testing strategy based solely on requirements and specifications. Black box testing requires no knowledge of
internal paths, structures, or implementation of the software being tested.

White box testing is a testing strategy based on internal paths, code structures, and implementation of the software being tested.
White box testing generally requires detailed programming skills.

There is one more type of testing called gray box testing. In this we look into the "box" being tested just long enough to
understand how it has been implemented. Then we close up the box and use our knowledge to choose more effective black box
tests.



The above figure shows how both types of testers view an accounting application during testing. Black box testers see only the
external behavior of the accounting application, while white box testers know the internal structure of the application. In most scenarios
white box testing is done by developers, as they know the internals of the application. In black box testing we check the overall
functionality of the application, while in white box testing we do code reviews, review the architecture, remove bad code practices, and
do component-level testing.
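The distinction can be made concrete with a small sketch (the function and its discount rule are hypothetical, purely for illustration): a black box test is derived from the specification alone, while a white box test targets a branch visible only in the code.

```python
# Hypothetical unit under test: "members get 10% off on orders of 100 or more".
def apply_discount(amount: float, is_member: bool) -> float:
    if is_member and amount >= 100:   # internal branch a white box tester can see
        return amount * 0.9
    return amount

# Black box tests: written from the specification alone, with no
# knowledge of the code's internal paths.
assert apply_discount(200, is_member=True) == 180
assert apply_discount(200, is_member=False) == 200

# White box tests: written from the code, exercising both sides of the
# boundary of the internal condition `amount >= 100`.
assert apply_discount(100, is_member=True) == 90       # branch taken
assert apply_discount(99.99, is_member=True) == 99.99  # branch not taken
```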
3. Can you explain usability testing?
Usability testing is a testing methodology in which end customers are asked to use the software to see whether the product is easy to
use, and to gauge the customer's perception and task completion time. The best way to capture the customer's point of view on usability
is to use prototype or mock-up software during the initial stages. By giving the customer the prototype before development starts, we
confirm that we are not missing anything from the user's point of view.


4. What are the categories of defects?
There are three main categories of defects:

1. Wrong: The requirements have been implemented incorrectly. This defect is a variance from the given specification.
2. Missing: There was a requirement given by the customer and it was not done. This is a variance from the specifications,
an indication that a specification was not implemented, or a requirement of the customer was not noted properly.
3. Extra: A requirement incorporated into the product that was not given by the end customer. This is always a variance
from the specification, but may be an attribute desired by the user of the product. However, it is considered a defect
because it's a variance from the existing requirements.
5. How do you define a testing policy?
The following are the important steps used to define a testing policy in general. But it can change according to your organization.
Let's discuss in detail the steps of implementing a testing policy in an organization.


Definition: The first step any organization needs to take is to define one unique definition of testing within the organization
so that everyone is of the same mindset.
How to achieve: How are we going to achieve our objective? Will there be a testing committee? Will there be
compulsory test plans which need to be executed, etc.?
Evaluate: After testing is implemented in a project, how do we evaluate it? Are we going to derive metrics of defects per
phase, per programmer, etc.? Finally, it's important to let everyone know how testing has added value to the project.
Standards: Finally, what are the standards we want to achieve by testing? For instance, we can say that more than 20
defects per KLOC will be considered below standard and code review should be done for it.
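The defects-per-KLOC standard above can be checked mechanically; the figures here are hypothetical, purely for illustration:

```python
# Hypothetical figures for one module: 45 defects found in 2,000 lines of code.
defects = 45
kloc = 2.0                     # size in thousands of lines of code
defects_per_kloc = defects / kloc

# Apply the example standard from the policy: more than 20 defects per
# KLOC is below standard and triggers a code review.
needs_code_review = defects_per_kloc > 20
print(defects_per_kloc, needs_code_review)   # 22.5 True
```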
6. On what basis is the acceptance plan prepared?
In any project the acceptance document is normally prepared using the following inputs. This can vary from company to company
and from project to project.
1. Requirement document: This document specifies what exactly is needed in the project from the customer's perspective.
2. Input from customer: This can be discussions, informal talks, emails, etc.
3. Project plan: The project plan prepared by the project manager also serves as good input to finalize your acceptance
test.

The following diagram shows the most common inputs used to prepare acceptance test plans.




7. What is configuration management?
Configuration management is the detailed recording and updating of information for hardware and software components. When we
say components we not only mean source code. It can be tracking of changes for software documents such as requirement, design,
test cases, etc.

When changes are made in an ad hoc and uncontrolled manner, chaotic situations can arise and more defects are injected. So
whenever changes are made they should be done in a controlled fashion and with proper versioning. At any moment in time we should
be able to revert to the old version. The main intention of configuration management is to track our changes in case we have issues
with the current system. Configuration management is done using baselines.
8. How does a coverage tool work?
While testing is done on the actual product, the code coverage tool runs simultaneously. While the testing is going on, the
code coverage tool monitors the executed statements of the source code. When the final testing is completed we get a complete
report of the statements that were never executed, along with the coverage percentage.
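A rough sketch of the mechanism, using Python's tracing hook (real coverage tools such as coverage.py are far more sophisticated; this only illustrates the "monitor executed statements" idea):

```python
import sys

def trace_coverage(func, *args):
    """Record the line numbers executed while the code under test runs."""
    executed = set()
    target = func.__code__.co_filename

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_filename == target:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)          # start monitoring, like a coverage tool does
    try:
        func(*args)
    finally:
        sys.settrace(None)        # stop monitoring
    return executed

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# One test input exercises only one branch; lines never recorded here
# would show up in the final report as unexecuted statements.
covered = trace_coverage(classify, -5)
```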


9. Which is the best testing model?
In real projects, tailored models prove to be the best, because they combine features of the waterfall, iterative, and evolutionary
models, among others, and can be fitted to real-life projects. Tailored models are the most productive and beneficial for many
organizations. If it's a pure testing project, then the V model is the best.
10. What is the difference between a defect and a failure?
When a defect reaches the end customer it is called a failure; if the defect is detected internally and resolved, it is called a defect.


11. Should testing be done only after the build and execution phases are complete?
In traditional testing methodology testing is always done after the build and execution phases.

But that's a wrong way of thinking because the earlier we catch a defect, the more cost effective it is. For instance, fixing a defect
in maintenance is ten times more costly than fixing it during execution.

In the requirement phase we can verify if the requirements are met according to the customer needs. During design we can check
whether the design document covers all the requirements. In this stage we can also generate rough functional data. We can also
review the design document from the architecture and the correctness perspectives. In the build and execution phase we can
execute unit test cases and generate structural and functional data. And finally comes the testing phase done in the traditional way.
i.e., run the system test cases and see if the system works according to the requirements. During installation we need to see if the
system is compatible with the installation environment. Finally, during the maintenance phase, when any fixes are made, we can retest
the fixes and perform regression testing.

Therefore, testing should occur in conjunction with each phase of software development.
12. Are there more defects in the design phase or in the coding phase?
The design phase is more error prone than the execution phase. One of the most frequent defects that occurs during design is that
the product does not cover the complete requirements of the customer. Second, wrong or bad architectural and technical decisions
make the next phase, execution, more prone to defects. Because the design phase drives the execution phase, it's the most critical
phase to test. Testing of the design phase can be done through good reviews. On average, 60% of defects occur during design and
40% during the execution phase.



13. What group of teams can do software testing?
When it comes to testing, everyone from the developer to the project manager to the customer can be involved.
Below are the different types of team groups which can be present in a project:
Isolated test team
Outsourced test team - we can hire external testing resources to do testing for our project
Inside test team
Developers as testers
QA/QC team
14. What impact ratings have you used in your projects?
Normally, the impact ratings for defects are classified into three types:


Minor: Very low impact; does not affect operations on a large scale.
Major: Affects operations on a very large scale.
Critical: Brings the system to a halt and stops the show.
15. Does an increase in testing always improve the project?
No, an increase in testing does not always mean improvement of the product, company, or project. In real test scenarios only 20%
of test plans are critical from a business angle. Running those critical test plans assures that the testing is properly done. The
following graph explains the impact of under testing and over testing. If you under test a system the number of defects will
increase, but if you over test a system your cost of testing will increase. Even if your defect count comes down, your cost of testing
goes up.
16. What's the relationship between environment reality and test phases?
Environment reality becomes more important as test phases start moving ahead. For instance, during unit testing you need the
environment to be partly real, but at the acceptance phase you should have a 100% real environment, or we can say it should be
the actual real environment. The following graph shows how with every phase the environment reality should also increase and
finally during acceptance it should be 100% real.


17. What are different types of verifications?
Verification is a static type of software testing: the code is not executed. The product is evaluated by going through the code. The types
of verification are:
1. Walkthrough: Walkthroughs are informal, initiated by the author of the software product, who presents it to a colleague for
assistance in locating defects or suggestions for improvement. They are usually unplanned. The author explains the product,
the colleague comes up with observations, and the author notes down relevant points and takes corrective actions.
2. Inspection: An inspection is a thorough word-by-word check of a software product with the intention of locating defects,
confirming traceability to relevant requirements, etc.
18. How do test documents in a project span across the software development lifecycle?
The following figure shows pictorially how test documents span across the software development lifecycle. The following discusses
the specific testing documents in the lifecycle:


Central/Project test plan: This is the main test plan which outlines the complete test strategy of the software project.
This document should be prepared before the start of the project and is used until the end of the software development
lifecycle.
Acceptance test plan: This test plan is normally prepared with the end customer. This document commences during the
requirement phase and is completed at final delivery.
System test plan: This test plan starts during the design phase and proceeds until the end of the project.
Integration and unit test plan: Both of these test plans start during the execution phase and continue until the final
delivery.
19. Which test cases are written first: white boxes or black boxes?
Normally black box test cases are written first and white box test cases later. To write black box test cases we
need the requirement document and the design or project plan. All these documents are easily available at the start of
the project. White box test cases cannot be started in the initial phase of the project because they need more
architectural clarity, which is not available at the start of the project. So normally white box test cases are written after
black box test cases.

Black box test cases do not require deep system understanding, but white box testing needs more structural understanding.
And structural understanding is clearer in the later part of the project, i.e., while executing or designing. For black box
testing you only need to analyze from the functional perspective, which is easily available from a simple requirement
document.


20. Explain Unit Testing, Integration Tests, System Testing and Acceptance Testing?
Unit testing - Testing performed on a single, stand-alone module or unit of code.

Integration Tests - Testing performed on groups of modules to ensure that data and control are passed properly between
modules.

System testing - Executing a predetermined combination of tests that, when run successfully, demonstrates that the system meets its requirements.

Acceptance testing - Testing to ensure that the system meets the needs of the organization and the end user or customer
(i.e., validates that the right system was built).
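A minimal unit-test sketch using Python's standard unittest module (the word_count function is hypothetical): the unit is tested stand-alone, with no other modules involved.

```python
import unittest

# Hypothetical stand-alone unit under test.
def word_count(text: str) -> int:
    return len(text.split())

class WordCountUnitTest(unittest.TestCase):
    """Unit test: exercises one module in complete isolation."""

    def test_simple_sentence(self):
        self.assertEqual(word_count("software testing matters"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)
```

Run with `python -m unittest`; an integration test would instead wire `word_count` together with the modules that feed it data and consume its result.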
21. What is a test log?
The IEEE Std. 829-1998 defines a test log as a chronological record of relevant details about the execution of test cases.
It's a detailed view of activity and events given in chronological manner.

The following figure shows a test log and is followed by a sample test log.


22. Can you explain requirement traceability and its importance?
In most organizations testing only starts after the execution/coding phase of the project. But if the organization wants to
really benefit from testing, then testers should get involved right from the requirement phase.

If the tester gets involved right from the requirement phase then requirement traceability is one of the important reports
that can detail what kind of test coverage the test cases have.
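In its simplest form a traceability report is just a mapping from requirement IDs to the test cases that cover them; the IDs below are invented for illustration:

```python
# Hypothetical traceability matrix: requirement ID -> covering test cases.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-201"],
    "REQ-003": [],                # no test case yet: a coverage gap
}

# The traceability report flags requirements with no test coverage.
uncovered = [req for req, tests in traceability.items() if not tests]
print("Requirements without test coverage:", uncovered)
```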
23. What does entry and exit criteria mean in a project?
Entry and exit criteria are a must for the success of any project. If you do not know where to start and where to finish
then your goals are not clear. By defining exit and entry criteria you define your boundaries.

For instance, you can define entry criteria that the customer should provide the requirement document or acceptance
plan. If this entry criteria is not met then you will not start the project. On the other end, you can also define exit criteria
for your project. For instance, one of the common exit criteria in projects is that the customer has successfully executed
the acceptance test plan.


24. What is the difference between verification and validation?
Verification is a review without actually executing the process while validation is checking the product with actual
execution. For instance, code review and syntax check is verification while actually running the product and checking the
results is validation.
25. What is the difference between latent and masked defects?
A latent defect is an existing defect that has not yet caused a failure because the sets of conditions were never met.

A masked defect is an existing defect that hasn't yet caused a failure just because another defect has prevented that part of the
code from being executed.
26. Can you explain calibration?
It includes tracking the accuracy of the devices used in production, development, and testing. Devices used must be maintained
and calibrated to ensure that they are working in good order.
27. What's the difference between alpha and beta testing?


Alpha and beta testing mean different things to different people. Alpha testing is the acceptance testing done at the development
site. Some organizations have a different view of alpha testing: they consider it testing conducted on early, unstable versions of
software. Beta testing, in contrast, is acceptance testing conducted at the customer's end.

In short, the difference between beta testing and alpha testing is the location where the tests are done.
28. How does testing affect risk?
A risk is a condition that can result in a loss. Risk can be controlled in various scenarios but never eliminated completely. An
unaddressed defect normally converts into a risk.


29. What is coverage and what are the different types of coverage techniques?
Coverage is a measurement used in software testing to describe the degree to which the source code is tested. There are three
basic types of coverage techniques as shown in the following figure:


Statement coverage: This coverage ensures that each line of source code has been executed and tested.
Decision coverage: This coverage ensures that every decision (true/false) in the source code has been executed and
tested.
Path coverage: In this coverage we ensure that every possible route through a given part of code is executed and
tested.
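The difference between statement and decision coverage shows up even in a tiny hypothetical function: one test can execute every statement yet still leave a decision outcome untested.

```python
def safe_divide(a, b):
    result = None
    if b != 0:
        result = a / b
    return result

# This single test executes every statement (100% statement coverage) ...
assert safe_divide(10, 2) == 5

# ... but the False outcome of `if b != 0` was never taken, so decision
# coverage needs a second test; path coverage would require every route.
assert safe_divide(10, 0) is None
```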
30. A defect which could have been removed during the initial stage is removed in a later stage. How does this affect
cost?
If a defect is known at the initial stage then it should be removed during that stage/phase itself rather than at some later stage. It's
a recorded fact that if a defect is delayed for later phases it proves more costly. The following figure shows how a defect is costly as
the phases move forward. A defect if identified and removed during the requirement and design phase is the most cost effective,
while a defect removed during maintenance is 20 times costlier than during the requirement and design phases.



For instance, if a defect is identified during requirement and design we only need to change the documentation, but if identified
during the maintenance phase we not only need to fix the defect, but also change our test plans, do regression testing, and change
all documentation. This is why a defect should be identified/removed in earlier phases and the testing department should be
involved right from the requirement phase and not after the execution phase.
31. What kind of input do we need from the end user to begin proper testing?
The product ultimately has to be used by the user, who is the most important stakeholder, as he has more interest in the project than anyone else.



From the user we need the following data:
The first thing we need is the acceptance test plan from the end user. The acceptance test defines the entire test which
the product has to pass so that it can go into production.
We also need the requirement document from the customer. In normal scenarios the customer never writes a formal
document until he is really sure of his requirements. But at some point the customer should sign off, confirming that this is
what he wants.
The customer should also define the risky sections of the project. For instance, in a normal accounting project if a
voucher entry screen does not work that will stop the accounting functionality completely. But if reports are not derived
the accounting department can use it for some time. The customer is the right person to say which section will affect him
the most. With this feedback the testers can prepare a proper test plan for those areas and test it thoroughly.
The customer should also provide proper data for testing. Feeding proper data during testing is very important. In many
scenarios testers key in wrong data and expect results which are of no interest to the customer.
32. Can you explain the workbench concept?
In order to understand testing methodology we need to understand the workbench concept. A workbench is a way of documenting
how a specific activity has to be performed. A workbench is broken down into phases, steps, and tasks, as shown in the following figure.



There are five tasks for every workbench:
Input: Every task needs defined input and entrance criteria. So for every workbench we need defined inputs. Input
forms the first step of the workbench.
Execute: This is the main task of the workbench, which transforms the input into the expected output.
Check: Check steps ensure that the output after execution meets the desired result.
Production output: If the check passes, the production output forms the exit criteria of the workbench.
Rework: If, during the check step, the output is not as desired, we start again from the execute step.
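The five tasks above can be sketched as a small control loop (the names and helper are illustrative, not from any standard):

```python
def run_workbench(task_input, execute, check, max_rework=3):
    """Input -> Execute -> Check; rework until the check passes,
    then release the production output (the exit criteria)."""
    for _ in range(max_rework + 1):
        output = execute(task_input)   # Execute step
        if check(output):              # Check step
            return output              # Production output
        # Check failed: Rework, i.e., go back to the execute step
    raise RuntimeError("rework limit reached; exit criteria not met")

# Trivial usage: a strip-whitespace "execute" with a matching "check".
result = run_workbench("  draft  ", execute=str.strip,
                       check=lambda s: s == s.strip())
```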


33. Can you explain the concept of defect cascading?
Defect cascading occurs when one defect is caused by another: one defect triggers the next. For instance, in the accounting
application shown here there is a defect which leads to negative taxation. The negative taxation defect affects the ledger, which
in turn affects four other modules.


34. Can you explain cohabiting software?
When we install an application at the end client's site, it is very possible that other applications also exist on the same PC, and that
those applications share common DLLs, resources, etc., with our application. In such situations there is a strong chance that our
changes will affect the cohabiting software. So the best practice is, after installing the application or making any changes, to ask the
other application owners to run a test cycle on their applications.


35. What is the difference between pilot and beta testing?
The difference between pilot and beta testing is that in pilot testing a limited set of users actually uses the product with real data,
while in beta testing we do not input real data: the product is installed at the end customer's site to validate that it can be used in
production.


36. What are the different strategies for rollout to end users?
There are four major ways of rolling out any project:


Pilot: The actual production system is installed at a single or limited number of users. Pilot basically means that the
product is actually rolled out to limited users for real work.
Gradual Implementation: In this implementation we ship the entire product to a limited set of users or all users at the
customer end. Here, the developers get instant feedback from the recipients, which allows them to make changes before
the product is widely available. The downside is that developers and testers must maintain more than one version at a time.
Phased Implementation: In this implementation the product is rolled out to all users incrementally. That means
each successive rollout has some added functionality. So as new functionality comes in, new installations occur and the
customer tests them progressively. The benefit of this kind of rollout is that customers can start using the functionality
and provide valuable feedback progressively. The only issue is that with each rollout and added functionality the
integration becomes more complicated.
Parallel Implementation: In these types of rollouts the existing application is run side by side with the new application.
If there are any issues with the new application we again move back to the old application. One of the biggest problems
with parallel implementation is we need extra hardware, software, and resources.
37. What's the difference between System testing and Acceptance testing?
Acceptance testing checks the system against the "Requirements." It is similar to System testing in that the whole
system is checked but the important difference is the change in focus:

System testing checks that the system that was specified has been delivered. Acceptance testing checks that the system
will deliver what was requested. The customer should always do Acceptance testing and not the developer.

The customer knows what is required from the system to achieve value in the business and is the only person qualified to
make that judgement. This testing is more about ensuring that the software is delivered as defined by the customer. It's
like getting a green light from the customer that the software meets expectations and is ready to be used.
38. Can you explain regression testing and confirmation testing?
Regression testing is used for regression defects. Regression defects are defects that occur when functionality which was
once working normally stops working, probably because of changes made to the program or the
environment. To uncover such defects, regression testing is conducted.

The following figure shows the difference between regression and confirmation testing.



If we fix a defect in an existing application we use confirmation testing to test if the defect is removed. It's very possible
because of this defect or changes to the application that other sections of the application are affected. So to ensure that
no other section is affected we can use regression testing to confirm this.
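A small sketch (the tax function and the defect scenario are hypothetical): confirmation testing re-runs the exact scenario the defect broke, while regression testing re-runs checks for behaviour that already worked.

```python
# Hypothetical fixed function: say a rounding defect in it was just repaired.
def total_with_tax(amount: float, rate: float) -> float:
    return round(amount * (1 + rate), 2)

# Confirmation test: re-check the exact case the defect originally broke.
assert total_with_tax(19.99, 0.1) == 21.99

# Regression tests: confirm the fix did not break behaviour that was
# already working before the change.
assert total_with_tax(100, 0.1) == 110.0
assert total_with_tax(0, 0.1) == 0.0
```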

Q. 21: What is the difference between Alpha Testing and Beta Testing?
Typically a software product passes through two stages of testing before it is considered
final. The first stage is known as Alpha Testing. It is often performed by potential users/customers
or an independent test team at the developer's site. It is usually done when
development of the software product is nearing completion; minor design changes may still
be made as a result of alpha testing.
The second stage coming after alpha testing is known as Beta Testing. Versions of the
software, known as beta versions, are released to a limited audience outside of the
programming team so that further evaluation by the users can reveal more faults or bugs in
the product. Sometimes, beta versions are made available to the open public to increase the
feedback field to a maximum number of future users.
<<<<<< =================== >>>>>>
Q. 22: What is the difference between Static Testing and Dynamic Testing?
Static Testing involves testing activities performed without actually running the software.
It includes Document review, code inspections, walkthroughs and desk checks etc.
Dynamic Testing, on the other hand, describes testing of the dynamic behavior of the
software code. It involves actually compiling and running the software, giving it input
values and checking whether the output is as expected. It is the validation portion of Verification
and Validation.
<<<<<< =================== >>>>>>
Q. 23: What is the difference between Smoke Testing and Sanity Testing?
The general term of Smoke Testing has come from leakage testing of sewers & drain lines
involving blowing smoke into various parts of the sewer and drain lines to detect sources of
unwanted leaks and sources of sewer odors. In the software testing field, smoke testing is
non-exhaustive testing that ascertains that the most crucial functions of
the program work well, without bothering with its finer details.
Sanity Testing, on the other hand, is an initial testing effort to find out whether the new software version is
performing well enough to accept it for a major testing effort. For example, if the new
software is crashing the systems every 5 minutes, bogging the systems down to a crawl, or
destroying databases, then it can be concluded that the software may not be in a sane
enough condition to warrant further testing in its current state.
<<<<<< =================== >>>>>>
Q. 24: What is the difference between Stress Testing and Load Testing?
Stress Testing is subjecting a system to an unreasonable load while denying it the
resources (e.g., RAM, disc space, MIPS, interrupts) required to process that load. The idea
is to stress the system to its breaking point in order to
find bugs that would make the break potentially harmful. The system is not expected to
process the overload without adequate resources, but to fail in a decent manner (e.g.,
failure without corrupting or losing data). In stress testing the load (incoming transaction
stream) is often deliberately distorted so as to force the system into resource depletion.
Whereas Load Testing is a test performed with an objective to determine the maximum
sustainable load which the system can handle. Load is varied from a minimum (zero) to the
maximum level the system can sustain without running out of resources or causing
excessive delay in transactions.
<<<<<< =================== >>>>>>
Q. 25: What is the difference between Black Box Testing & White Box Testing?
First of all, Black-Box and White-Box are both test design methods.
Black-Box test design treats the system as a black box (the tester can't see
what is inside the box). Hence we design the test cases in such a way that we pour
input into one end of the box and expect a certain specific output from the other end of
the box. To run these test cases, the tester need not know how the input gets
transformed into output inside the box. Black-Box is also known as Behavioral-Box or
Functional-Box or Opaque-Box or Closed-Box test design.
White-Box test design, by contrast, treats the system as a transparent box, which allows
the tester to see inside. In White-Box testing the tester can see the process of
transformation of an input into an output inside the box. Hence we design the test
cases with a view to testing the internal logic, paths, or branches of the box. White-Box is also
known as Structural-Box or Glass-Box or Clear-Box or Translucent-Box test design.
<<<<<< =================== >>>>>>
Q. 26: What is Quality?
Quality software is software that is reasonably bug-free, delivered on time and within
budget, meets requirements and expectations, and is maintainable. However, quality is a
subjective term. Quality depends on who the customer is and their overall influence in the
scheme of things.
Customers of a software development project include end-users, customer acceptance test
engineers, customer contract officers, customer management, the development
organization's management, test engineers, testers, salespeople, software engineers,
stockholders, and accountants. Each type of customer will have his or her own slant on
quality. The accounting department might define quality in terms of profits, while an end-user
might define quality as "user friendly and bug free".
<<<<<< =================== >>>>>>
Q. 27: What is an Inspection?
An inspection is a formal meeting, more formalized than a walkthrough and typically
consists of 3-10 people including a moderator, reader (the author of whatever is being
reviewed) and a recorder (to make notes in the document). The subject of the inspection is
typically a document, such as a requirements document or a test plan. The purpose of an
inspection is to find problems and see what is missing, not to fix anything. The result of the
meeting is documented in a written report. Attendees should prepare for this type of
meeting by reading through the document, before the meeting starts; most problems are
found during this preparation. Preparation for inspections is difficult, but is one of the most
cost-effective methods of ensuring quality, since bug prevention is more cost effective than
bug detection.
<<<<<< =================== >>>>>>
Q. 28: What is Good Design?
Design could mean many things, but it often refers to functional design or internal design.
Good functional design is indicated by software whose functionality can be traced back to customer
and end-user requirements. Good internal design is indicated by software code whose
overall structure is clear, understandable, easily modifiable, and maintainable; that is robust, with
sufficient error handling and status-logging capability; and that works correctly when
implemented.
<<<<<< =================== >>>>>>
Q. 29: What is Six Sigma?
Six Sigma means six standard deviations from the mean. It is a methodology aimed at
reducing defect levels below 3.4 Defects Per Million Opportunities (DPMO). The Six Sigma approach
improves process performance, decreases variation, and maintains consistent quality of
the process output. This leads to defect reduction and improvement in profits, product
quality and customer satisfaction.
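The DPMO figure above is simple arithmetic: defects divided by total defect opportunities, scaled to one million. A minimal sketch (the numbers in the example are made up for illustration):

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects Per Million Opportunities = defects / total opportunities * 1,000,000."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical example: 17 defects found in 5,000 units,
# each unit having 10 opportunities for a defect.
print(dpmo(17, 5000, 10))  # 340.0 -- well above the Six Sigma target of 3.4
```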
<<<<<< =================== >>>>>>
Q. 30: What is difference between CMM and CMMI?
CMM means Capability Maturity Model, developed by the Software Engineering Institute
(SEI). It is a process capability maturity model which aids in the definition and
understanding of an organization's processes. CMM is intended as a tool for objectively
assessing the ability of government contractors' processes to perform a contracted software
project.
Whereas CMMI means Capability Maturity Model Integration, and it has superseded CMM.
The old CMM has been renamed the Software Engineering CMM (SE-CMM).
<<<<<< =================== >>>>>>
Q. 31: What is Verification?
Verification ensures the product is designed to deliver all functionality to the customer; it
typically involves reviews and meetings to evaluate documents, plans, code, requirements
and specifications; this can be done with checklists, issues lists, walkthroughs and
inspection meetings.
<<<<<< =================== >>>>>>
Q. 32: What is Validation?
Validation ensures that functionality, as defined in requirements, is the intended behavior of
the product; validation typically involves actual testing and takes place after verifications
are completed.
<<<<<< =================== >>>>>>
Q. 33: What is a Test Plan?
A software project test plan is a document that describes the objectives, scope, approach
and focus of a software testing effort. The process of preparing a test plan is a useful way to
think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the why and how of
product validation. It should be thorough enough to be useful, but not so thorough that
no one outside the test group will be able to read it.
<<<<<< =================== >>>>>>
Q. 34: What is a Walkthrough?
A walkthrough is an informal meeting for evaluation or informational purposes. A
walkthrough is also a process at an abstract level: the process of inspecting software
code by following paths through the code (as determined by input conditions and choices
made along the way). The purpose of code walkthroughs is to ensure the code fits its
purpose. Walkthroughs also offer opportunities to assess an individual's or team's
competency.
<<<<<< =================== >>>>>>
Q. 35: What is Software Life Cycle?
The software life cycle begins when a software product is first conceived and ends when it is no
longer in use. It includes phases like initial concept, requirements analysis, functional
design, internal design, documentation planning, test planning, coding, document
preparation, integration, testing, maintenance, updates, re-testing and phase-out.
<<<<<< =================== >>>>>>
Q. 36: What is the Difference between STLC & SDLC?
STLC means Software Testing Life Cycle. It consists of activities like:
1) Preparation of Requirements Document
2) Preparation of Test Plan
3) Preparation of Test Cases
4) Execution of Test Cases
5) Analysis of Bugs
6) Reporting of Bugs
7) Tracking of Bugs till closure
Whereas SDLC means Software Development Life Cycle. It is a software development
process, used by a systems analyst to develop an information system, and consists of
activities like:
1) Project Initiation
2) Requirement Gathering and Documenting
3) Designing
4) Coding and Unit Testing
5) Integration Testing
6) System Testing
7) Installation and Acceptance Testing
8) Support or Maintenance
<<<<<< =================== >>>>>>
Q. 37: What are the various components of STLC?
Various components of the Software Testing Life Cycle are:
1) Requirements Document
2) Preparation of Test Plan
3) Preparation of Test Cases
4) Execution of Test Cases
5) Analysis of Bugs
6) Reporting of Bugs
7) Tracking of Bugs till closure
<<<<<< =================== >>>>>>
Q. 38: What is the Difference between Project and Product Testing?
If an organization develops an application according to a client's specification, it is
called a project. Accordingly, its testing is known as project testing.
Whereas if an organization develops an application and markets it, it is called a
product. Hence its testing is known as product testing.
<<<<<< =================== >>>>>>
Q. 39: What are the Testing Types & Techniques?
Black box and white box are the most popular types of software testing; they are testing
types, not stand-alone techniques.
Testing techniques falling under the Black-Box type are:
1) Equivalence Partitioning
2) Boundary Value Analysis
3) Cause-Effect Graphing
4) Error-Guessing etc.
Whereas testing techniques falling under the White-Box type are:
1) Statement coverage
2) Decision coverage
3) Condition coverage
4) Decision-condition coverage
5) Multiple condition coverage
6) Basis Path Testing
7) Loop testing
8) Data flow testing etc.
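As a concrete illustration of the first two black-box techniques, here is a minimal sketch. The `is_valid_age` function and its valid range of 18-60 are hypothetical, invented only to show how partition and boundary cases are chosen:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical rule under test: valid ages are 18..60 inclusive."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value from each class
# (below the range, inside it, above it).
partition_cases = [(10, False), (35, True), (70, False)]

# Boundary value analysis: values at and just around each boundary.
boundary_cases = [(17, False), (18, True), (19, True),
                  (59, True), (60, True), (61, False)]

for age, expected in partition_cases + boundary_cases:
    assert is_valid_age(age) == expected, f"failed for age={age}"
print("all black-box cases passed")
```

Note how nine targeted cases stand in for the whole integer input space; that reduction is the point of both techniques.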
<<<<<< =================== >>>>>>
Q. 40: How do you introduce a new software QA process?
It depends on the size of the organization and the risks involved. For large organizations
with high-risk projects, serious management buy-in and a formalized QA process are
necessary. For medium-sized organizations with lower-risk projects, management and
organizational buy-in and a slower, step-by-step process are required.
Generally speaking, QA processes should be balanced with productivity, in order to keep
any bureaucracy from getting out of hand. For smaller groups or projects, an ad-hoc
process is more appropriate. A lot depends on team leads and managers; feedback to
developers and good communication among customers, managers, developers, test
engineers and testers are essential. Regardless of the size of the company, the greatest
value for effort is in managing requirement processes, where the goal is requirements that
are clear, complete and testable.

Tips for QA Analyst Interview Questions
Q. 101: What is the difference between a Software Tester & a Testing Analyst?
Testing analysts are more commonly involved with tasks at a higher level of abstraction,
such as test process design, test planning, and test case design.
Whereas software testers may be involved with test case design and test procedure
construction, and interact with the actual software systems.
<<<<<< =================== >>>>>>
Q. 102: What are Software Testing Specialities?
Testing specialties include test automation, load testing, usability testing, testing
methodology, software inspections, industry or application expertise, test metrics, test
management, white box testing & security testing etc.
<<<<<< =================== >>>>>>
Q. 103: What can be the various Job Levels in the Software Testing Domain in a
Company?
Various job levels within the testing domain can include the tester, test analyst, test
manager or test specialist, test consultant or Test executive.
<<<<<< =================== >>>>>>
Q. 104: What is a Test Suite?
A collection of test cases is called a test suite.
It contains more detailed instructions or goals for each collection of test cases. It contains a
section where the tester identifies the system configuration used during testing. It may also
contain prerequisite states or steps, and descriptions of the tests as well.
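A minimal sketch of a test suite using Python's standard `unittest` module; the `LoginTests` class and its stubbed `check_login` system under test are hypothetical, invented only to show how related test cases are collected and run as one unit:

```python
import unittest

class LoginTests(unittest.TestCase):
    """A small test suite: related test cases grouped and run together."""

    @staticmethod
    def check_login(user: str, password: str) -> bool:
        # Hypothetical system under test, stubbed here for illustration.
        return user == "alice" and password == "secret"

    def test_valid_credentials(self):
        self.assertTrue(self.check_login("alice", "secret"))

    def test_invalid_password(self):
        self.assertFalse(self.check_login("alice", "wrong"))

# A unittest TestSuite collects the cases so they can be executed as one unit.
suite = unittest.TestLoader().loadTestsFromTestCase(LoginTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "failures:", len(result.failures))
```

In a real suite, the prerequisite states mentioned above would live in `setUp`/`tearDown` methods, and the system-configuration notes would go in the suite's documentation.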
<<<<<< =================== >>>>>>
Q. 105: What is a scenario test?
This is a test based on a hypothetical story used to help a person think through a complex
problem or system.
Generally a scenario test has the following five key characteristics:
1) A story
2) Which is motivating
3) Which is credible
4) Which is complex
5) Which is easy to evaluate.
Scenario tests are different from test cases in a way that test cases cover single steps
whereas scenarios cover a number of steps. Test suites and scenarios can be used together
for a complete system test.
<<<<<< =================== >>>>>>
Q. 106: What is Defect Tracking?
In engineering practice, defect tracking is the process of finding defects in a product
through inspection, testing, or feedback from customers, and tracking them till their
closure.
In software engineering, defect tracking is of significant importance, since complex software
systems may contain thousands of defects, making their management, evaluation and
prioritization a difficult task. Hence defect tracking systems in software engineering are
computer database systems which store defects and help people manage them.
<<<<<< =================== >>>>>>
Q. 107: What is Formal Verification in context with Software & Hardware systems?
Formal verification is the process of proving or disproving the correctness of a system with
respect to a certain formal specification or property, with the help of formal methods.
Generally, formal verification is carried out algorithmically.
Approaches to implementing formal verification are:
1) State space enumeration
2) Symbolic state space enumeration
3) Abstract interpretation
4) Abstraction refinement
5) Process-algebraic methods
6) Reasoning with the help of automatic theorem provers like HOL or Isabelle.
<<<<<< =================== >>>>>>
Q. 108: What is the concept of Fuzz Testing?
Fuzz testing is a software testing technique that involves attaching the inputs of a program
to a source of random data. The main advantage of fuzz testing is that the test design is
extremely simple and remains free of preconceptions about system behavior.
Fuzz testing is generally used in large software development projects which use black box
testing. Fuzz testing provides a high benefit to cost ratio.
The fuzz testing technique is also used to measure the quality of large software
systems; the advantage is that the cost of generating tests is relatively low.
Fuzz testing helps to enhance software security and software safety because it often
finds odd oversights and defects which normal human testers would fail to find, and even
the most careful human test designers would fail to create tests for.
Fuzz testing is not a substitute for exhaustive testing or formal methods; it can only provide
a random sample of the system's behavior. Passing a fuzz test may only indicate that a
particular piece of software is capable of handling exceptions without crashing; it may not
indicate its correct behavior.
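The idea can be sketched in a few lines. This is a toy fuzzer, not a production tool; the `parse_date` target function is hypothetical, invented only to have something to feed random data into:

```python
import random
import string

def parse_date(text: str) -> tuple:
    """Hypothetical function under test: parses a 'YYYY-MM-DD' string."""
    year, month, day = text.split("-")
    return int(year), int(month), int(day)

random.seed(0)  # fixed seed so the fuzz run is reproducible
crashes = 0
for _ in range(1000):
    # Simple fuzz: feed random printable strings of random length.
    fuzz_input = "".join(random.choices(string.printable,
                                        k=random.randint(0, 20)))
    try:
        parse_date(fuzz_input)
    except ValueError:
        pass          # input rejected cleanly -- acceptable behavior
    except Exception:
        crashes += 1  # any other exception is a defect worth reporting
print("unexpected crashes:", crashes)
```

Note the distinction in the `except` clauses: rejecting malformed input with a documented error is fine; any other uncaught exception is exactly the kind of oversight fuzzing exists to surface.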
<<<<<< =================== >>>>>>
Q. 109: What are the different forms of fuzz testing?
1) Valid fuzz testing assures that the random input is reasonable, or conforms to actual
production data.
2) Simple fuzz testing usually uses a pseudo-random number generator to provide input.
3) A combined approach uses valid test data with some proportion of totally random input
injected.
By using the above techniques in combination, fuzz-generated randomness can test the
un-designed behavior surrounding a wider range of designed system states.
<<<<<< =================== >>>>>>
Q. 110: What is a Web Application & How does it look like?
A web application is an internet-based application consisting of a set of scripts, which
are normally stored on a web server and interact with databases or other similar
sources of dynamic content.
Web applications present an interactive form to the user, wherein the user feeds inputs
into the fields provided in the form and then clicks a button like Submit or OK to store
the inputs in the database, perform a set of calculations and present back the desired
information.
Web applications are becoming popular since they are a medium for exchange of
information between various service providers and their customers across the internet.
These web applications are by and large not dependent on any platform. Popular examples
of web applications are Google, Yahoo and similar search engines; internet banking
websites of several banks; e-mail sites like Gmail, Yahoo Mail and Rediff Mail; and sale &
purchase sites like eBay.

Latest Quality Assurance Interview Questions With
Solutions
Q. 81: What is configuration Management?
Configuration Management (or CM) is the process of controlling, coordinating and tracking
the standards and procedures for managing changes in an evolving software product.
Configuration testing, by contrast, is the process of checking the operation of the software
being tested on various types of hardware.
<<<<<< =================== >>>>>>
Q. 82: What is the role of QA in a software producing company?
Quality Assurance is responsible for managing, implementing, maintaining and continuously
improving the processes in the company, enabling internal projects to move towards
process maturity and facilitating process improvements and innovations in the organization.
A tester is responsible for carrying out the testing efforts in the company.
In many companies one QA person is responsible for both roles: testing as well as creating
and improving the processes.
<<<<<< =================== >>>>>>
Q. 83: What is Fuzz Testing?
Fuzz testing is a technique of testing an application by feeding it random inputs.
<<<<<< =================== >>>>>>
Q. 84: What is Failure Mode and Effect Analysis (FMEA)?
Failure Mode and Effect Analysis is a systematic approach to risk identification and
analysis: identifying possible modes of failure and attempting to prevent their occurrence.
<<<<<< =================== >>>>>>
Q. 85: What is Path Testing?
Path Testing or Path Coverage is a white box method of testing which satisfies coverage
criteria by testing the program across each logical path. Usually, paths through
the program are grouped into a finite set of classes and one path out of every class is
tested.
In path coverage, the flow of execution is followed from the start of a method to its exit.
Path coverage ensures that we test all decision outcomes independently of one another.
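A minimal sketch of what "all paths" means in practice. The `grade` function is hypothetical, chosen because its two independent decisions yield exactly four logical paths:

```python
def grade(score: int, bonus: bool) -> str:
    """Two independent decisions -> 2 x 2 = 4 logical paths."""
    if bonus:          # decision 1
        score += 5
    if score >= 50:    # decision 2
        return "pass"
    return "fail"

# One test per path, combining each outcome of decision 1
# with each outcome of decision 2:
assert grade(60, True)  == "pass"   # bonus taken,  pass branch
assert grade(60, False) == "pass"   # bonus skipped, pass branch
assert grade(30, True)  == "fail"   # bonus taken,  fail branch
assert grade(30, False) == "fail"   # bonus skipped, fail branch
print("all 4 paths exercised")
```

With n independent decisions the path count grows as 2^n, which is why paths are grouped into classes and one representative per class is tested, as described above.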
<<<<<< =================== >>>>>>
Q. 86: What is Test Maturity Model or TMM?
Test Maturity Model or TMM is a five level staged framework for test process improvement,
related to the Capability Maturity Model (CMM) that describes the key elements of an
effective test process.
<<<<<< =================== >>>>>>
Q. 87: What is Back-To-Back Testing?
Back-to-back testing refers to the testing process in which two or more variants of a
component or system are executed with the same inputs, their outputs compared, and any
discrepancies analyzed.
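A minimal sketch of the idea: two variants of the same component (here, two hypothetical summation routines, one hand-rolled and one using the built-in) driven with identical random inputs and compared:

```python
import random

def sum_iterative(xs):
    """Variant 1: hand-rolled accumulation loop."""
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    """Variant 2: delegates to Python's built-in sum."""
    return sum(xs)

# Back-to-back: execute both variants on the same inputs and
# flag any discrepancy for analysis.
random.seed(1)  # reproducible input generation
for _ in range(100):
    data = [random.randint(-1000, 1000)
            for _ in range(random.randint(0, 50))]
    a, b = sum_iterative(data), sum_builtin(data)
    assert a == b, f"discrepancy for {data}: {a} != {b}"
print("variants agree on all 100 inputs")
```

The technique is most valuable when one variant is a trusted reference (an old release, a simpler model) against which a new implementation is checked.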
<<<<<< =================== >>>>>>
Q. 88: What is a Blocked Test Case?
Blocked Test Case refers to the test case, which cannot be executed because the
preconditions for its execution are not fulfilled.
<<<<<< =================== >>>>>>
Q. 89: What is the difference between API & ABI?
Application Programming Interface (API) is a formalized set of software calls and routines
that can be referenced by an application program in order to access supporting system or
network services.
Whereas Application Binary Interface (ABI) is a specification defining requirements for
portability of applications in binary forms across different system platforms and
environments.
<<<<<< =================== >>>>>>
Q. 90: What is I V & V?
I V & V means Independent Verification and Validation.
Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. Verification can be done with the help of checklists, issues
lists, walkthroughs, and inspection meetings.
Whereas Validation typically involves actual testing and takes place after verifications are
completed.