K.Venkata SubbaReddy, Assistant Professor, Department of CSE, Muffakham Jah College of Engineering and Technology, Hyderabad
This first step is a prerequisite to building the VV&T Plan used to evaluate the implemented
software solution. During this step, testers challenge the completeness and correctness of the
development plan. Based on the extensiveness and completeness of the Project Plan, the testers
can estimate the amount of resources they will need to test the implemented software solution.
Forming the plan for testing will follow the same pattern as any software planning process. The
structure of all plans should be the same, but the content will vary based on the degree of risk the
testers perceive as associated with the software being developed.
Incomplete, inaccurate, or inconsistent requirements lead to most software failures. The inability
to get requirements right during the requirements-gathering phase can also significantly increase
the cost of implementation. Testers, through verification, must determine that the requirements
are accurate, complete, and free of conflicts with one another.
This step tests both the external and internal design, primarily through verification techniques.
The testers are concerned that the design will achieve the objectives of the requirements, and
that the design will be effective and efficient on the designated hardware.
The method chosen to build the software from the internal design document will determine the
type and extensiveness of tests needed. As the construction becomes more automated, less
testing will be required during this phase. However, if software is constructed using the waterfall
process, it is subject to error and should be verified. Experience has shown that it is significantly
cheaper to identify defects during the construction phase than through dynamic testing during
the test execution step.
This involves the testing of code in a dynamic state. The approach, methods, and tools specified
in the test plan will be used to validate that the executable code in fact meets the stated software
requirements, and the structural specifications of the design.
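The idea of validating executable code against stated requirements can be illustrated with a small sketch (the discount requirement and the function below are hypothetical, not from the text): a dynamic test executes the code and compares its actual behavior with what the requirement states.

```python
# Hypothetical requirement (invented for illustration): an order total
# must apply a 10% discount to orders of 100 units or more.

def order_total(unit_price: float, quantity: int) -> float:
    """Compute an order total, applying the bulk discount per the requirement."""
    total = unit_price * quantity
    if quantity >= 100:
        total *= 0.90  # 10% bulk discount
    return total

# Dynamic tests: execute the code and validate it against the requirement.
def test_no_discount_below_threshold():
    assert order_total(2.0, 99) == 198.0

def test_discount_at_threshold():
    assert order_total(2.0, 100) == 180.0

if __name__ == "__main__":
    test_no_discount_below_threshold()
    test_discount_at_threshold()
    print("all requirement checks passed")
```

Note that each test traces back to a stated requirement; a test with no corresponding requirement is itself a signal that the requirements are incomplete.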
Acceptance testing enables users to evaluate the applicability and usability of the software in
performing their day-to-day job functions. It tests what the user believes the software should
do, as opposed to what the documented requirements state the software should do.
Test reporting is a continuous process. It may be both oral and written. It is important that
defects and concerns be reported to the appropriate parties as early as possible, so that
corrections can be made at the lowest possible cost.
Once the test team has confirmed that the software is ready for production use, the ability to
execute that software in a production environment should be tested. This tests the interface to
operating software, related software, and operating procedures.
While this is shown as Step 10, in the context of performing maintenance after the software is
implemented, the concept is also applicable to changes throughout the implementation process.
Whenever requirements change, the test plan must change, and the impact of that change on
software systems must be tested and evaluated.
Testing improvement can best be achieved by evaluating the effectiveness of testing at the end of
each software test assignment. While this assessment is primarily performed by the testers, it
should involve the developers, users of the software, and quality assurance professionals if the
function exists in the IT organization.
Overview:
Figure 15-1 shows a simplified client/server architecture. There are many possible variations of the
client/server architecture, but for illustration purposes, this is representative. In this example,
application software resides on the client workstations. The application server handles processing
requests. The back-end processing (typically a mainframe or super-minicomputer) handles
processing such as batch transactions that are accumulated and processed together at one time on
a regular basis. The important distinction to note is that application software resides on the client
workstation.
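The division of labor described above can be sketched with a toy client and server (the port selection, message format, and processing shown are invented purely for illustration): the client workstation sends a request, and the server performs the processing and returns the result.

```python
import socket
import threading

# A toy "application server". The message format is invented for illustration.
def serve_once(server_sock: socket.socket) -> None:
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        # The server, not the client workstation, performs the processing.
        response = f"processed:{request.upper()}"
        conn.sendall(response.encode())

# The client workstation's side: send a request, wait for the result.
def client_request(port: int, message: str) -> str:
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(message.encode())
        return sock.recv(1024).decode()

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))  # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    t = threading.Thread(target=serve_once, args=(server,))
    t.start()
    print(client_request(port, "order-42"))  # -> processed:ORDER-42
    t.join()
    server.close()
```

In a real client/server system the client side would also hold application software of its own, which is exactly the shift in processing control that the testing concerns below address.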
Figure 15-1 shows the key distinction between workstations connected to the mainframe and
workstations that contain the software used for client processing. This distinction represents a
major change in processing control. For this reason, client/server testing must first evaluate the
organization’s readiness to make this control change, and then evaluate the key components of
the client/server system prior to conducting tests.
This chapter will provide the material on assessing readiness and key components. The actual
testing of client/server systems will be achieved using the seven-step testing process.
Concerns:
The concerns about client/server systems reside in the area of control. The testers need to
determine that adequate controls are in place to ensure accurate, complete, timely, and secure
processing of client/server software systems.
1. Organizational readiness. The concern is whether the organization is prepared for the shift of
processing control to the client workstation.
2. Client installation. The concern is that the appropriate hardware and software will be in place
to enable processing that will meet client needs.
3. Security. There is a need for protection of both the hardware, including residence software,
and the data that is processed using that hardware and software. Security must address threats
from employees, outsiders, and acts of nature.
4. Client data. Controls must be in place to ensure that data is not lost, incorrectly
processed, or processed differently on a client workstation than in other areas of the organization.
5. Client/server standards. Standards must exist to ensure that all client workstations operate
under the same set of rules.
Workbench:
Figure 15-2 provides a workbench for testing client/server systems. This workbench can be used
incrementally as the client/server system is developed, or in its entirety after the client/server
system has been developed. The workbench shows four steps, as well as the quality control procedures
necessary to ensure that those four steps are performed correctly. The output will be any
identified weaknesses uncovered during testing.
Input:
The input to this test process will be the client/server system. This will include the server
technology and capabilities, the communication network, and the client workstations that will be
incorporated into the test. Because both the client and the server components will include
software capabilities, the materials should provide a description of the client software, and any
test results on that client software should be input to this test process.
Do Procedures:
Testing client/server software involves the following three tasks:
■■ Assess readiness
■■ Assess key components
■■ Assess client needs
Overview:
This testing process lists the more common concerns associated with the data warehouse
concept. It also explains the more common activities performed as part of a data warehouse.
Testing begins by determining the appropriateness of those concerns to the data warehouse
process under test. If appropriate, the severity of the concerns must be determined. This is
accomplished by relating those high-severity concerns to the data warehouse activity controls. If
in place and working, the controls should minimize the concerns.
Concerns:
The following are the concerns most commonly associated with a data warehouse:
■■ Losing an update to a single data item. One or more updates to a single data item
can be lost because of inadequate concurrent update procedures.
■■ Inadequate service level. Multiple users vying for the same resources may degrade
the service to all because of excessive demand or inadequate resources.
■■ Placing data in the wrong calendar period. Identifying transactions with the proper
calendar period is more difficult in some online data warehouse environments than in
others.
■■ Improper use of data. Systems that control resources are always subject to misuse
and abuse.
■■ Lack of skilled independent data warehouse reviewers. Most reviewers are not
skilled in data warehouse technology and, thus, have not evaluated data warehouse
installations.
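The lost-update concern in the list above can be made concrete with a small sketch (the data item, values, and version-checking scheme are invented for illustration): two users read the same item, and without an adequate concurrent update control the second write would silently erase the first.

```python
import threading

class DataItem:
    """A toy data item protected by optimistic version checking (illustrative only)."""
    def __init__(self, value: int):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()

    def read(self):
        """Return (value, version) as seen by this reader."""
        with self._lock:
            return self.value, self.version

    def write(self, new_value: int, expected_version: int) -> bool:
        """Apply the update only if no one else has updated since our read."""
        with self._lock:
            if self.version != expected_version:
                return False  # concurrent update detected; caller must retry
            self.value = new_value
            self.version += 1
            return True

item = DataItem(100)

# Two users read the same value concurrently...
v1, ver1 = item.read()
v2, ver2 = item.read()

# ...and both try to apply an update based on their (now stale) reads.
assert item.write(v1 + 10, ver1)        # first update succeeds
assert not item.write(v2 + 25, ver2)    # second is rejected, not lost silently

# The rejected writer re-reads and retries, so neither update is lost.
v3, ver3 = item.read()
assert item.write(v3 + 25, ver3)
print(item.value)  # 135
```

Without the version check, the final value would be 125 and the first user's update would be lost, which is precisely the failure the concern describes.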
Workbench:
Figure 21-1 illustrates the workbench for testing the adequacy of the data warehouse activity.
The workbench is a three-task process that measures the magnitude of the concerns, identifies
the data warehouse activity processes, and then selects the tests necessary to determine
whether the high-magnitude concerns have been adequately addressed. Those performing the test
must be familiar with the data warehouse activity processes. The end result of the test is an
assessment of the adequacy of those processes to minimize the high-magnitude concerns.
Input:
Organizations implementing the data warehouse activity need to establish processes to manage,
operate, and control that activity. The input to this test process is knowledge of those data
warehouse activity processes. If the test team does not have that knowledge, it should be
supplemented with one or more individuals who possess a detailed knowledge of the data
warehouse activity processes.
Enterprise-wide requirements are data requirements that apply to all software systems
and their users. Whenever anyone accesses or updates a data warehouse, that process is subject
to the enterprise-wide requirements. They are called enterprise-wide requirements because they
are defined once for all software systems and users. Each organization must define its own
enterprise-wide controls. However, many IT organizations do not define enterprise-wide
requirements, so testers need to be aware that there may be inconsistencies between software
systems and/or users. For example, if there are no security requirements applicable
enterprise-wide, each software system may have different security procedures.
Enterprise-wide requirements applicable to the data warehouse include but are not limited to the
following:
■■ Data accessibility. Who has access to the data warehouse, and any constraints or
limitations placed on that access.
■■ Update controls. Who can change data within the data warehouse as well as the
sequence in which data may be changed in the data warehouse.
■■ Date controls. The date that the data is applicable for different types of processes.
For example, with accounting data it is the date that the data is officially recorded on the
books of the organization.
■■ Usage controls. How data can be used by the users of the data warehouse, including
any restrictions on users forwarding data to other potential users.
■■ Documentation controls. How the data within the data warehouse is to be described
to users.
Do Procedures:
To test a data warehouse, testers should perform the following three tasks:
1. Measure the magnitude of data warehouse concerns.
2. Identify data warehouse activities to test.
3. Test the adequacy of data warehouse activity processes.
Output:
The output from the data warehouse test process is an assessment of the adequacy of the data
warehouse activity processes. The assessment report should indicate the concerns addressed by
the test team, the processes in place in the data warehouse activity, and the adequacy of those
processes.
Testing
Web-Based Systems:
Web-based systems are those systems that use the Internet, intranets, and extranets. The Internet
is a worldwide collection of interconnected networks. An intranet is a private network inside a
company that uses web-based applications, but only within the organization. An extranet is a
private network that allows external access by customers and suppliers using web-based
applications.
Overview:
For web-based systems, the browsers reside on client workstations. These client workstations are
networked to a web server, either through a remote connection or through a network such as a
local area network (LAN) or wide area network (WAN).
As the web server receives and processes requests from the client workstation, requests may be
sent to the application server to perform actions such as data queries, electronic commerce
transactions, and so forth.
The back-end processing works in the background to perform batch processing and handle
high-volume transactions. The back-end processing can also pass transactions to other
systems in the organization. For example, when an online banking transaction is processed over
the Internet, the transaction is eventually posted to the customer’s account and shown on a
statement in a back-end process.
Concerns:
Testers should have the following concerns when conducting web-based testing:
■■ Integration. Testers should validate the integration between browsers and servers,
applications and data, and hardware and software.
■■ Usability. Testers should validate the overall usability of a web page or a web
application, including appearance, clarity, and navigation.
■■ Security. Testers should validate the adequacy and correctness of security controls,
including access control and authorizations.
■■ Performance. Testers should validate the performance of the web application under
load.
■■ Verification of code. Testers should validate that the code used in building the web
application (HTML, Java, and so on) has been used in a correct manner. For example, no
nonstandard coding practices should be used that would cause an application to function
incorrectly in some environments.
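As a small illustration of the code-verification concern, the following sketch scans an HTML fragment for unclosed tags using Python's standard library (the page content is invented; a real project would use a full markup validator):

```python
from html.parser import HTMLParser

# Elements that are legally self-closing in HTML and need no end tag.
VOID_ELEMENTS = {"br", "img", "hr", "meta", "link", "input"}

class TagBalanceChecker(HTMLParser):
    """A rough check for unclosed tags (illustrative; not a full validator)."""
    def __init__(self):
        super().__init__()
        self.stack = []
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_ELEMENTS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            # Anything still open above the matching tag was never closed.
            while self.stack[-1] != tag:
                self.problems.append(f"<{self.stack.pop()}> never closed")
            self.stack.pop()
        else:
            self.problems.append(f"unexpected </{tag}>")

    def unclosed(self):
        return list(self.stack)

# An invented page fragment with a deliberately unclosed <div>.
page = "<html><body><div><p>Hello</p><img src='x.png'></body></html>"

checker = TagBalanceChecker()
checker.feed(page)
print(checker.problems)  # -> ['<div> never closed']
```

A browser may silently tolerate the unclosed tag, which is exactly why such nonstandard coding can work in one environment and fail in another.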
Workbench:
Figure 22-1 illustrates the workbench for web-based testing. The input to the workbench is the
hardware and software that will be incorporated in the web-based system to be tested. The first
three tasks of the workbench are primarily involved in web-based test planning. The fourth task
is traditional software testing. The output from the workbench is a report of what works and what
does not work, as well as any concerns over the use of web technology.
Input:
The input to this test process is the description of web-based technology used in the systems
being tested. The following list shows how web-based systems differ from other technologies.
The description of the web-based systems under testing should address these differences:
■■ Security issues. Protection is needed from unauthorized access that can corrupt
applications and/or data. Another security risk is that of access to confidential
information.
■■ New terminology and skill sets. Just as in making the transition to client/server, new
skills are needed to develop, test, and use web-based technology effectively.
Do Procedures:
Web-based testing should address the following factors:
■■ Security
■■ Performance
■■ Correctness
■■ Compatibility (configuration)
■■ Reliability
■■ Data integrity
■■ Usability
■■ Recoverability
The effectiveness of security testing can be improved by focusing on the points where security has
the highest probability of being compromised. A testing tool that has proved effective in
identifying these points is the penetration-point matrix. The security-testing process described in
this chapter focuses primarily on developing the penetration-point matrix, as opposed to the
detailed testing of those identified points.
Overview:
This test process provides two resources: a security baseline and an identification of the points in
an information system that have a high risk of being penetrated. Neither resource is statistically
perfect, but both have proven to be highly reliable when used by individuals knowledgeable in
the area that may be penetrated. The penetration-point tool involves building a matrix. In one
dimension are the activities that may need security controls; in the other dimension are potential
points of penetration. Developers of the matrix assess the probability of penetration at various
points in the software system at the points of intersection. By identifying the points with the
highest probability of penetration, organizations gain insight as to where information systems
risk penetration the most.
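The matrix idea can be sketched as follows (the activities, penetration points, and probability ratings below are invented purely for illustration): one dimension lists activities that may need security controls, the other lists potential points of penetration, and each intersection holds the team's assessed probability of penetration.

```python
# A toy penetration-point matrix. All names and ratings are invented.
activities = ["data entry", "report distribution", "file storage"]
points = ["terminal", "network line", "operator console"]

# Each intersection holds the team's assessed probability of
# penetration (0.0 = no risk ... 1.0 = certain penetration).
ratings = {
    ("data entry", "terminal"): 0.8,
    ("data entry", "network line"): 0.4,
    ("data entry", "operator console"): 0.1,
    ("report distribution", "terminal"): 0.2,
    ("report distribution", "network line"): 0.7,
    ("report distribution", "operator console"): 0.1,
    ("file storage", "terminal"): 0.3,
    ("file storage", "network line"): 0.5,
    ("file storage", "operator console"): 0.6,
}

# Print the matrix, one row per activity.
header = "".join(f"{p:>18}" for p in points)
print(f"{'activity':20}{header}")
for a in activities:
    row = "".join(f"{ratings[(a, p)]:>18.1f}" for p in points)
    print(f"{a:20}{row}")

# Flag the intersections with the highest probability of penetration;
# these are the points that warrant the most detailed security testing.
THRESHOLD = 0.6
high_risk = sorted(
    (cell for cell, prob in ratings.items() if prob >= THRESHOLD),
    key=lambda cell: -ratings[cell],
)
for activity, point in high_risk:
    print(f"high risk: {activity} via {point} ({ratings[(activity, point)]:.1f})")
```

The value of the exercise lies less in the numbers themselves than in forcing the team to discuss every intersection; the flagged cells then receive the detailed security testing.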
Concerns:
There are two major security concerns: Security risks must be identified, and adequate controls
must be installed to minimize these risks.
Workbench:
This workbench assumes a team that is knowledgeable about the information system to be secured
and about the ways in which that system might be penetrated.
The workbench provides three tasks for building and using a penetration-point matrix (see Figure
20-1). The security-testing techniques used in this workbench are the security baseline and the
penetration-point matrix. The prime purpose of the matrix is to focus discussion on high-risk
points of potential penetration and to assist in determining which points require the most
attention. These techniques can be used by project teams, special teams convened to identify
security risks, or by quality assurance/quality-control personnel to assess the adequacy of
security systems.
Input:
The input to this test process is a team that is knowledgeable about the information system to be
protected and about how to achieve security for a software system. The reliability of the results
will depend heavily on the team’s knowledge of the information system and of the types of
individuals who are likely to penetrate it at the risk points. The security-testing techniques
presented in this chapter are simple enough that the team should not require prior training in the
use of the security test tools.
Do Procedures:
This test process involves performing the following three tasks:
1. Establish a security baseline.
2. Build a penetration-point matrix.
3. Analyze the results of security testing.
Check Procedures:
The check procedures for this test process should focus on the completeness and competency of
the team using the security baseline process and the penetration-point matrix, as well as the
completeness of the list of potential perpetrators and potential points of penetration. The analysis
should also be challenged.
Output:
The output from this test process is a security baseline, the penetration-point matrix identifying
the high-risk points of penetration, and a security assessment.
Overview:
The test plan describes how testing will be accomplished. Its creation is essential to effective
testing and should take about one-third of the total testing effort. If you develop the plan
carefully, test execution, analysis, and reporting will flow smoothly. Consider the test plan as an
evolving document. As the developmental effort changes in scope, the test plan must change
accordingly. It is important to keep the test plan current and to follow it, for it is the execution of
the test plan that management must rely on to ensure that testing is effective; and it is from this
plan that testers will ascertain the status of the test effort and base opinions on its results.
Objective:
The objective of a test plan is to describe all testing that is to be accomplished, together with the
resources and schedule necessary for completion. The test plan should provide background
information on the software being tested, test objectives and risks, and specific tests to be
performed. Properly constructed, the test plan is a contract between the testers and the project
team/users. Thus, status reports and final reports will be based on that contract.
Concerns:
The concerns testers face in ensuring that the test plan will be complete include the following:
Not enough training. The majority of IT personnel have not been formally trained in
testing, and only about half of full-time independent testing personnel have been trained
in testing techniques. This causes a great deal of misunderstanding and misapplication of
testing techniques.
Us-versus-them mentality. This common problem arises when developers and testers
are on opposite sides of the testing issue. Often, the political infighting takes up energy,
sidetracks the project, and negatively impacts relationships.
Lack of testing tools. IT management may consider testing tools to be a luxury. Manual
testing can be an overwhelming task. Although more than just tools are needed, trying to
test effectively without tools is like trying to dig a trench with a spoon.
Lack of customer and user involvement. Users and customers may be shut out of the
testing process, or perhaps they don’t want to be involved. Users and customers play one
of the most critical roles in testing: making sure the software works from a business
perspective.
Not enough time for testing. This is a common complaint. The challenge is to prioritize
the plan to test the right things in the given time.
Over-reliance on independent testers. Sometimes called the “throw it over the wall”
syndrome. Developers know that independent testers will check their work, so they focus
on coding and let the testers do the testing. Unfortunately, this results in higher defect
levels and longer testing times.
Testers are in a lose-lose situation. On the one hand, if the testers report too many
defects, they are blamed for delaying the project. Conversely, if the testers do not find the
critical defects, they are blamed for poor quality.
Having to say no. The single toughest dilemma for testers is to have to say, “No, the
software is not ready for production.” Nobody on the project likes to hear that, and
frequently, testers succumb to the pressures of schedule and cost.
Input:
Accurate and complete inputs are critical for developing an effective test plan. The following
two inputs are used in developing the test plan:
Project plan. This plan should address the totality of activities required to implement
the project and control that implementation. The plan should also include testing.
Project plan assessment and status report. This report (developed from Step 1 of the
seven-step process) evaluates the completeness and reasonableness of the project plan. It
also indicates the status of the plan as well as the method for reporting status throughout
the development effort.
Do Procedures:
The following six tasks should be completed during the execution of this step:
1. Profile the software project.
2. Understand the software project’s risks.
3. Select a testing technique.
4. Plan unit testing and analysis.
5. Build the test plan.
6. Inspect the test plan.
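Task 2 above is often supported by a simple risk calculation. In this sketch (the project areas, likelihoods, and impacts are invented for illustration), each area's risk is estimated as likelihood times impact, and testing is ordered by descending risk so that the highest-risk areas are tested first when time is limited:

```python
# Hypothetical project areas with the team's estimates of failure
# likelihood and business impact (1 = low ... 5 = high).
areas = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "report formatting",  "likelihood": 3, "impact": 2},
    {"name": "user login",         "likelihood": 2, "impact": 5},
    {"name": "help screens",       "likelihood": 2, "impact": 1},
]

# Risk exposure = likelihood x impact; test the highest-risk areas first.
for area in areas:
    area["risk"] = area["likelihood"] * area["impact"]

test_order = sorted(areas, key=lambda a: a["risk"], reverse=True)
for area in test_order:
    print(f'{area["risk"]:>2}  {area["name"]}')
```

The resulting order feeds directly into building the test plan: when schedule pressure forces cuts, the low-risk areas at the bottom of the list are the ones to trim.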