SQT UNIT-V LECTURE NOTES

The Eleven-Step Testing Process:


The software testing process example, as illustrated in Figure 19, is an 11-step testing
process that follows the “V” concept of testing. The “V” represents both the software
development process and the 11-step software testing process. The first five steps use
verification as the primary means to evaluate the correctness of the interim development
deliverables. Validation is used to test the software in an executable mode. The results of both
verification and validation should be documented. Both verification and validation are also used
to test the installation of the software as well as changes to the software. The final step of the
“V” process represents the development and test teams jointly evaluating the effectiveness of
testing.

Step 1: Assess Development Plan and Status

Step 2: Develop the Test Plan

Step 3: Test Software Requirements

Step 4: Test Software Design

Step 5: Test Software Construction

Step 6: Execute Tests

Step 7: Acceptance Test

Step 8: Report Test Results

Step 9: Test Software Installation

Step 10: Test Software Changes

Step 11: Evaluate Test Effectiveness


Step 1: Assess Development Plan and Status

This first step is a prerequisite to building the VV&T Plan used to evaluate the implemented
software solution. During this step, testers challenge the completeness and correctness of the
development plan. Based on the extensiveness and completeness of the Project Plan, the testers
can estimate the amount of resources they will need to test the implemented software solution.

Step 2: Develop the Test Plan

Forming the plan for testing follows the same pattern as any software planning process. The
structure of all test plans should be the same, but the content will vary with the degree of risk
the testers perceive in the software being developed.

Step 3: Test Software Requirements

Incomplete, inaccurate, or inconsistent requirements lead to most software failures. The inability
to get requirements right during the requirements-gathering phase can also increase the cost of
implementation significantly. Through verification, testers must determine that the requirements
are accurate, complete, and free of conflicts with one another.
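
As an illustration of how part of this verification can be automated, the following is a minimal sketch that checks a set of requirements records for completeness and duplicate IDs. The field names and sample requirements are hypothetical, and genuine conflict detection between requirements still requires human review.

```python
# A minimal sketch of automated requirements verification, assuming
# requirements are kept as structured records. Field names and sample
# data are hypothetical, not from the lecture notes.

REQUIRED_FIELDS = {"id", "description", "priority"}

requirements = [
    {"id": "R1", "description": "Response time under 2 seconds", "priority": "high"},
    {"id": "R2", "description": "Response time under 5 seconds", "priority": "low"},
    {"id": "R3", "description": ""},  # incomplete: empty description, no priority
]

def check_requirements(reqs):
    """Flag incomplete records and duplicate IDs."""
    problems = []
    seen_ids = set()
    for req in reqs:
        missing = REQUIRED_FIELDS - req.keys()
        if missing:
            problems.append(f"{req.get('id', '?')}: missing fields {sorted(missing)}")
        if not req.get("description"):
            problems.append(f"{req.get('id', '?')}: empty description")
        if req.get("id") in seen_ids:
            problems.append(f"{req['id']}: duplicate requirement ID")
        seen_ids.add(req.get("id"))
    return problems

for problem in check_requirements(requirements):
    print(problem)
```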

Step 4: Test Software Design

This step tests both external and internal design, primarily through verification techniques. The
testers are concerned that the design will achieve the objectives of the requirements and that it
will be effective and efficient on the designated hardware.

Step 5: Test Software Construction (Program/Build Phase)

The method chosen to build the software from the internal design document will determine the
type and extensiveness of the tests needed. As construction becomes more automated, less
testing is required during this phase. However, if software is constructed using the waterfall
process, it is subject to error and should be verified. Experience has shown that it is significantly
cheaper to identify defects during the construction phase than through dynamic testing during
the test execution step.

Step 6: Execute and Record Results

This step involves testing the code in a dynamic state. The approach, methods, and tools specified
in the test plan are used to validate that the executable code in fact meets the stated software
requirements and the structural specifications of the design.
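
The following is a minimal sketch of validation in an executable mode, using Python's unittest module. The requirement ("discounts of more than 50 percent are rejected") and the function under test are hypothetical; the point is that each test case traces back to a stated requirement.

```python
# A minimal sketch of dynamic (execution-based) validation against a
# hypothetical requirement. The function under test is illustrative.
import unittest

def apply_discount(price, percent):
    # Requirement (assumed): discounts above 50 percent must be rejected.
    if not 0 <= percent <= 50:
        raise ValueError("discount must be between 0 and 50 percent")
    return round(price * (1 - percent / 100), 2)

class TestDiscountRequirement(unittest.TestCase):
    def test_valid_discount_matches_requirement(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_out_of_range_discount_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 60)

if __name__ == "__main__":
    unittest.main()
```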


Step 7: Acceptance Test

Acceptance testing enables users to evaluate the applicability and usability of the software in
performing their day-to-day job functions. It tests what the user believes the software should
do, as opposed to what the documented requirements state it should do.

Step 8: Report Test Results

Test reporting is a continuous process. It may be both oral and written. It is important that
defects and concerns be reported to the appropriate parties as early as possible, so that
corrections can be made at the lowest possible cost.
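
A simple way to support continuous reporting is to capture each defect as a structured record the moment it is found and escalate the most severe ones immediately. The sketch below assumes hypothetical field names and severity levels.

```python
# A minimal sketch of continuous defect reporting, assuming defects are
# recorded as found and routed by severity. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    defect_id: str
    summary: str
    severity: str           # e.g. "critical", "major", "minor"
    found_in_step: str      # which of the 11 steps surfaced the defect
    reported_on: date = field(default_factory=date.today)

def report(defect: DefectReport):
    """Route critical defects immediately; queue the rest for the written report."""
    if defect.severity == "critical":
        print(f"ESCALATE NOW: {defect.defect_id} - {defect.summary}")
    else:
        print(f"Queued for status report: {defect.defect_id} ({defect.severity})")

report(DefectReport("D-101", "Install script overwrites config", "critical", "Step 9"))
```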

Step 9: Test Software Installation

Once the test team has confirmed that the software is ready for production use, the ability to
execute that software in a production environment should be tested. This tests the interface to
operating software, related software, and operating procedures.

Step 10: Test Software Changes

While this is shown as Step 10, in the context of performing maintenance after the software is
implemented, the concept is also applicable to changes throughout the implementation process.
Whenever requirements change, the test plan must change, and the impact of that change on
software systems must be tested and evaluated.

Step 11: Evaluate Test Effectiveness

Testing improvement can best be achieved by evaluating the effectiveness of testing at the end of
each software test assignment. While this assessment is primarily performed by the testers, it
should involve the developers, the users of the software, and quality assurance professionals, if
that function exists in the IT organization.


Testing Client/Server Systems:


The success of a client/server program depends heavily on both the readiness of an organization
to use the technology effectively and its ability to provide clients with the information and
capabilities that meet their needs. If an organization is not ready to move to client/server
technology, it is far better to work on bringing the organization to a ready status than on
installing client/server technology. Preparing the organization for client/server technology is an
important component of a successful program, regardless of whether it is an organization-wide
client/server technology or just a small program. If the organization is ready, the client/server
approach should be evaluated prior to testing the client systems.

Overview:
Figure 15-1 shows simplified client/server architecture. There are many possible variations of the
client/server architecture, but for illustration purposes, this is representative. In this example,
application software resides on the client workstations. The application server handles processing
requests. The back-end processing (typically a mainframe or super-minicomputer) handles
processing such as batch transactions that are accumulated and processed together at one time on
a regular basis. The important distinction to note is that application software resides on the client
workstation.

Figure 15-1 shows the key distinction between workstations connected to the mainframe and
workstations that contain the software used for client processing. This distinction represents a
major change in processing control. For this reason, client/server testing must first evaluate the
organization’s readiness to make this control change, and then evaluate the key components of
the client/server system prior to conducting tests.

This chapter will provide the material on assessing readiness and key components. The actual
testing of client/server systems will be achieved using the seven-step testing process.

Concerns:

The concerns about client/server systems reside in the area of control. The testers need to
determine that adequate controls are in place to ensure accurate, complete, timely, and secure
processing of client/server software systems.

The testers must address the following five concerns:

1. Organizational readiness. The culture must be adequately prepared to process data using
client/server technology. Readiness must be evaluated in the areas of management, client
installation, and server support.

2. Client installation. The concern is that the appropriate hardware and software will be in place
to enable processing that will meet client needs.


3. Security. There is a need for protection of both the hardware, including resident software,
and the data that is processed using that hardware and software. Security must address threats
from employees, outsiders, and acts of nature.

4. Client data. Controls must be in place to ensure that data is not lost, incorrectly
processed, or processed differently on a client workstation than in other areas of the organization.

5. Client/server standards. Standards must exist to ensure that all client workstations operate
under the same set of rules.

Workbench:
Figure 15-2 provides a workbench for testing client/server systems. This workbench can be used
in steps as the client/server system is developed, or after the client/server system has been
developed. The workbench shows four steps, as well as the quality control procedures
necessary to ensure that those four steps are performed correctly. The output is any
identified weaknesses uncovered during testing.


Input:
The input to this test process will be the client/server system. This will include the server
technology and capabilities, the communication network, and the client work stations that will be
incorporated into the test. Because both the client and the server components will include
software capabilities, the materials should provide a description of the client software, and any
test results on that client software should be input
to this test process.

Do Procedures:
Testing client/server software involves the following three tasks:
■■ Assess readiness (see the sketch after this list)
■■ Assess key components
■■ Assess client needs
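
The first task, assessing readiness, can be supported by a simple scoring scheme over the three readiness areas named earlier (management, client installation, and server support). The ratings and threshold in this sketch are hypothetical assumptions, not prescribed values.

```python
# A minimal sketch of scoring organizational readiness, assuming each
# readiness area is rated 1 (not ready) to 5 (fully ready). The scores
# and the threshold are hypothetical.

readiness_scores = {
    "management": 4,
    "client installation": 2,
    "server support": 3,
}

READY_THRESHOLD = 3  # assumed minimum acceptable rating per area

not_ready = [area for area, score in readiness_scores.items()
             if score < READY_THRESHOLD]

if not_ready:
    print("Work on readiness before installing client/server technology:")
    for area in not_ready:
        print(f"  - {area} (scored {readiness_scores[area]})")
else:
    print("Organization appears ready; proceed to assess key components.")
```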


Testing a Data Warehouse:


A data warehouse is a central repository of data made available to users. The centralized storage
of data provides significant processing advantages but at the same time raises concerns about the
data’s security, accessibility, and integrity. This chapter focuses on where testing would be most
effective in determining the risks associated with those concerns.

Overview:
This testing process lists the more common concerns associated with the data warehouse
concept. It also explains the more common activities performed as part of a data warehouse.
Testing begins by determining the applicability of those concerns to the data warehouse
process under test. Where a concern applies, its severity must be determined. This is
accomplished by relating the high-severity concerns to the data warehouse activity controls: if
those controls are in place and working, they should minimize the concerns.

Concerns:

The following are the concerns most commonly associated with a data warehouse:

■■ Inadequate assignment of responsibilities. There is inappropriate segregation of
duties or failure to recognize placement of responsibility.

■■ Inaccurate or incomplete data in a data warehouse. The integrity of data entered
in the data warehouse is lost because of inadvertent or intentional acts.

■■ Losing an update to a single data item. One or more updates to a single data item
can be lost because of inadequate concurrent update procedures (see the sketch after this list).

■■ Inadequate audit trail to reconstruct transactions. The use of data by multiple
applications may split the audit trail among those applications and the data warehouse
software audit trail.

■■ Unauthorized access to data in a data warehouse. The concentration of data may
make sensitive data available to anyone who gains access to the data warehouse.

■■ Inadequate service level. Multiple users vying for the same resources may degrade
the service to all because of excessive demand or inadequate resources.

■■ Placing data in the wrong calendar period. Identifying transactions with the proper
calendar period is more difficult in some online data warehouse environments than in
others.


■■ Failure of data warehouse software to function as specified. Vendors provide most
data warehouse software, making the data warehouse administrator dependent on the
vendor to ensure the proper functioning of the software.

■■ Improper use of data. Systems that control resources are always subject to misuse
and abuse.

■■ Lack of skilled independent data warehouse reviewers. Most reviewers are not
skilled in data warehouse technology and, thus, have not evaluated data warehouse
installations.

■■ Inadequate documentation. Documentation of data warehouse technology is needed
to ensure consistency of understanding and use by multiple users.

■■ Loss of continuity of processing. Many organizations rely heavily on data
warehouse technology for the performance of their day-to-day processing.

■■ Lack of criteria to evaluate. Without established performance criteria, an
organization cannot be assured that it is achieving its data warehouse goals.

■■ Lack of management support. Without adequate resources and “clout,” the
advantages of data warehouse technology may not be achieved.
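
The lost-update concern noted in the list above can be demonstrated in a few lines of code. In this sketch, two threads perform a read-modify-write sequence on the same data item; the lock stands in for an adequate concurrent update procedure, and removing it re-creates the risk. The account-balance scenario is hypothetical.

```python
# A minimal sketch of the "losing an update" concern: two concurrent
# writers update the same data item. The Lock makes the read-modify-write
# sequence atomic; without it, updates can be silently lost.
import threading

balance = 0
lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with lock:                        # remove this to re-create the risk
            current = balance             # read
            balance = current + amount    # modify and write

threads = [threading.Thread(target=deposit, args=(1, 100_000)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 200000 with the lock; often less without it
```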

Workbench:
Figure 21-1 illustrates the workbench for testing the adequacy of the data warehouse activity.
The workbench is a three-task process that measures the magnitude of the concerns, identifies
the data warehouse activity processes, and then selects the tests necessary to determine
whether the high-magnitude concerns have been adequately addressed. Those performing the test
must be familiar with the data warehouse activity processes. The end result of the test is an
assessment of the adequacy of those processes to minimize the high-magnitude concerns.


Input:
Organizations implementing the data warehouse activity need to establish processes to manage,
operate, and control that activity. The input to this test process is knowledge of those data
warehouse activity processes. If the test team does not have that knowledge, it should be
supplemented with one or more individuals who possess a detailed knowledge of the data
warehouse activity processes.

Enterprise-wide requirements are data requirements that apply to all software systems
and their users. Whenever anyone accesses or updates a data warehouse, that process is subject
to the enterprise-wide requirements. They are called enterprise-wide requirements because they
are defined once for all software systems and users. Each organization must define its own
enterprise-wide controls. However, testers should be aware that many IT organizations do not
define enterprise-wide requirements, and that there may therefore be inconsistencies between
software systems and/or users. For example, if there are no security requirements applicable
enterprise-wide, each software system may have different security procedures.

Enterprise-wide requirements applicable to the data warehouse include but are not limited to the
following:
■■ Data accessibility. Who has access to the data warehouse, and any constraints or
limitations placed on that access.
■■ Update controls. Who can change data within the data warehouse as well as the
sequence in which data may be changed in the data warehouse.
■■ Date controls. The date that the data is applicable for different types of processes.
For example, with accounting data it is the date that the data is officially recorded on the
books of the organization.
■■ Usage controls. How data can be used by the users of the data warehouse, including
any restrictions on users forwarding data to other potential users.
■■ Documentation controls. How the data within the data warehouse is to be described
to users.

Do Procedures:
To test a data warehouse, testers should perform the following three tasks:
1. Measure the magnitude of data warehouse concerns.
2. Identify data warehouse activities to test.
3. Test the adequacy of data warehouse activity processes (a sketch of one such test follows).
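
As one example of task 3, the sketch below reconciles a source extract against the rows loaded into the warehouse using row counts and control totals, a common check on the "inaccurate or incomplete data" concern. The table contents and column names are hypothetical; a real test would query the actual data stores.

```python
# A minimal sketch of one adequacy test: reconciling a source extract
# against what was loaded into the warehouse. Data is hypothetical.

source_rows = [
    {"order_id": 1, "amount": 120.00},
    {"order_id": 2, "amount": 75.50},
    {"order_id": 3, "amount": 310.25},
]
warehouse_rows = [
    {"order_id": 1, "amount": 120.00},
    {"order_id": 3, "amount": 310.25},
]

def reconcile(source, warehouse):
    """Compare row counts and control totals; mismatches suggest lost or altered data."""
    findings = []
    if len(source) != len(warehouse):
        findings.append(f"row count mismatch: {len(source)} vs {len(warehouse)}")
    src_total = sum(r["amount"] for r in source)
    wh_total = sum(r["amount"] for r in warehouse)
    if src_total != wh_total:
        findings.append(f"control total mismatch: {src_total} vs {wh_total}")
    return findings or ["source and warehouse reconcile"]

for finding in reconcile(source_rows, warehouse_rows):
    print(finding)
```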

Output:
The output from the data warehouse test process is an assessment of the adequacy of the data
warehouse activity processes. The assessment report should indicate the concerns addressed by
the test team, the processes in place in the data warehouse activity, and the adequacy of those
processes.


Testing Web-Based Systems:
Web-based systems are those systems that use the Internet, intranets, and extranets. The Internet
is a worldwide collection of interconnected networks. An intranet is a private network inside a
company that uses web-based applications, but only for use within the organization. An extranet
is a private network that allows external access by customers and suppliers using web-based
applications.

Overview:

Web-based architecture is an extension of client/server architecture. The following paragraphs
describe the difference between client/server architecture and web-based architecture.

In a client/server architecture, as discussed in Chapter 15, application software resides on the
client workstations. The application server handles processing requests. The back-end processing
(typically a mainframe or super-minicomputer) handles processing such as batch transactions
that are accumulated and processed together at one time on a regular basis. The important
distinction to note is that application software resides on the client workstation.

For web-based systems, the browsers reside on client workstations. These client workstations are
networked to a web server, either through a remote connection or through a network such as a
local area network (LAN) or wide area network (WAN).

As the web server receives and processes requests from the client workstation, requests may be
sent to the application server to perform actions such as data queries, electronic commerce
transactions, and so forth.

The back-end processing works in the background to perform batch processing and handle
high-volume transactions. The back-end processing can also interface with transactions to other
systems in the organization. For example, when an online banking transaction is processed over
the Internet, the transaction is eventually posted to the customer’s account and shown on a
statement by a back-end process.

Concerns:

Testers should have the following concerns when conducting web-based testing:

■■ Browser compatibility. Testers should validate consistent application performance
on a variety of browser types and configurations.


■■ Functional correctness. Testers should validate that the application functions
correctly. This includes validating links, calculations, displays of information, and
navigation.

■■ Integration. Testers should validate the integration between browsers and servers,
applications and data, and hardware and software.

■■ Usability. Testers should validate the overall usability of a web page or a web
application, including appearance, clarity, and navigation.

■■ Security. Testers should validate the adequacy and correctness of security controls,
including access control and authorizations.

■■ Performance. Testers should validate the performance of the web application under
load (see the sketch after this list).

■■ Verification of code. Testers should validate that the code used in building the web
application (HTML, Java, and so on) has been used in a correct manner. For example, no
nonstandard coding practices should be used that would cause an application to function
incorrectly in some environments.
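
The performance concern above can be checked with a simple measurement harness. The sketch below issues concurrent requests and compares the slowest response against a target; the URL, user count, and 2-second target are all hypothetical assumptions, and a production load test would normally use a dedicated tool.

```python
# A minimal sketch of validating performance under load. The URL and the
# response-time target are hypothetical placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # hypothetical application under test
TARGET_SECONDS = 2.0             # assumed service-level target

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:   # simulate 20 concurrent users
    durations = list(pool.map(timed_request, range(100)))

slowest = max(durations)
print(f"slowest of {len(durations)} requests: {slowest:.2f}s")
print("PASS" if slowest <= TARGET_SECONDS else "FAIL: target exceeded under load")
```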

Workbench:

Figure 22-1 illustrates the workbench for web-based testing. The input to the workbench is the
hardware and software that will be incorporated in the web-based system to be tested. The first
three tasks of the workbench are primarily involved in web-based test planning. The fourth task
is traditional software testing. The output from the workbench is a report of what works and
what does not, as well as any concerns over the use of web technology.

Input:
The input to this test process is a description of the web-based technology used in the systems
being tested. The following list shows how web-based systems differ from other technologies.
The description of the web-based systems under test should address these differences:

■■ Uncontrolled user interfaces (browsers). Because of the variety of web browsers
available, a web page must be functional on those browsers that you expect to be used in
accessing your web applications. Furthermore, as new releases of browsers emerge, your
web applications will need to keep up with compatibility issues.

■■ Complex distributed systems. In addition to being complex and distributed,
web-based applications are also remotely accessed, which adds even more concerns to the
testing effort. While some applications may be less complex than others, it is safe to say
that the trend in web applications is to become more complex rather than less.

■■ Security issues. Protection is needed from unauthorized access that can corrupt
applications and/or data. Another security risk is that of access to confidential
information.

■■ Multiple layers in architecture. These layers include application servers, web
servers, back-end processing, data warehouses, and secure servers for electronic
commerce.

■■ New terminology and skill sets. Just as in making the transition to client/server, new
skills are needed to develop, test, and use web-based technology effectively.

■■ Object-oriented. Object-oriented languages such as Java are the mainstay of web
development.

Do Procedures:

Task 1: Select Web-Based Risks to Include in the Test Plan

 Security
 Performance
 Correctness
 Compatibility (configuration)
 Reliability
 Data integrity
 Usability
 Recoverability


Task 2: Select Web-Based Tests


 Unit or Component
 Integration
 System
 User Acceptance
 Performance
 Load
 Regression

Task 3: Select Web-Based Test Tools
 HTML tools
 Site validation tools
 Load/stress testing tools
 Test case generators

Task 4: Test Web-Based Systems
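
As one small example of the kind of test performed in this task, the sketch below validates that the links on a page resolve, supporting the functional-correctness concern. The base URL is hypothetical, and dedicated site-validation tools perform this check at much larger scale.

```python
# A minimal sketch of link validation for a web page under test.
# The base URL is a hypothetical placeholder.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects the href targets of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

BASE = "http://localhost:8080/"  # hypothetical page under test

with urllib.request.urlopen(BASE, timeout=10) as page:
    collector = LinkCollector()
    collector.feed(page.read().decode("utf-8", errors="replace"))

for link in collector.links:
    target = urljoin(BASE, link)
    try:
        with urllib.request.urlopen(target, timeout=10) as resp:
            print(f"OK  {target} ({resp.status})")
    except Exception as exc:
        print(f"BAD {target}: {exc}")
```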


Testing Software System Security:


Testing software system security is a complex and costly activity. Performing comprehensive
security testing is generally not practical. What is practical is to establish a security baseline to
determine the current level of security and to measure improvements.

Effectiveness of security testing can be improved by focusing on the points where security has
the highest probability of being compromised. A testing tool that has proved effective in
identifying these points is the penetration-point matrix. The security-testing process described in
this chapter focuses primarily on developing the penetration-point matrix, as opposed to the
detailed testing of those identified points.

Overview:

This test process provides two resources: a security baseline and an identification of the points in
an information system that have a high risk of being penetrated. Neither resource is statistically
perfect, but both have proven highly reliable when used by individuals knowledgeable about
the area that may be penetrated. The penetration-point tool involves building a matrix. One
dimension lists the activities that may need security controls; the other lists potential
points of penetration. Developers of the matrix assess the probability of penetration at each
point of intersection. By identifying the intersections with the highest probability of penetration,
organizations gain insight into where their information systems are most at risk.
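
A minimal sketch of such a matrix follows, assuming the team rates each intersection from 0 (no chance of penetration) to 3 (high probability). The activities, penetration points, and ratings shown are hypothetical placeholders for the team's own assessments.

```python
# A minimal sketch of a penetration-point matrix. Activities, points,
# and probability ratings (0-3) are hypothetical placeholders.

activities = ["data entry", "report distribution", "file storage"]
penetration_points = ["employees", "outsiders", "interfaced systems"]

# matrix[activity][point] = assessed probability of penetration (0-3)
matrix = {
    "data entry":          {"employees": 3, "outsiders": 1, "interfaced systems": 2},
    "report distribution": {"employees": 2, "outsiders": 3, "interfaced systems": 1},
    "file storage":        {"employees": 1, "outsiders": 2, "interfaced systems": 3},
}

HIGH_RISK = 3  # focus testing where the assessed probability is highest

for activity in activities:
    for point in penetration_points:
        if matrix[activity][point] >= HIGH_RISK:
            print(f"High-risk intersection: {activity} x {point}")
```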

Concerns:
There are two major security concerns: Security risks must be identified, and adequate controls
must be installed to minimize these risks.

Workbench:

This workbench assumes a team knowledgeable about the information system to be secured. This
team must be knowledgeable about the following:

■■ Communication networks in use
■■ Who has access to those networks
■■ Data or processes that require protection
■■ Value of information or processes that require protection
■■ Processing flow of the software system
■■ Security systems and concepts
■■ Security-penetration methods

The workbench provides three tasks for building and using a penetration-point matrix (see Figure
20-1). The security-testing techniques used in this workbench are the security baseline and the
penetration-point matrix. The prime purpose of the matrix is to focus discussion on high-risk
points of potential penetration and to assist in determining which points require the most
attention. These techniques can be used by project teams, by special teams convened to identify
security risks, or by quality assurance/quality control personnel to assess the adequacy of
security systems.

Input:

The input to this test process is a team that is knowledgeable about the information system to be
protected and about how to achieve security for a software system. The reliability of the results
will depend heavily on the knowledge of the individuals involved with the information system
and the specific types of individuals who are likely to penetrate the system at risk points. The
security-testing techniques presented in this chapter are simple enough that the team should not
require prior training in the use of the security test tools.

Do Procedures:
This test process involves performing the following three tasks:
1. Establish a security baseline.
2. Build a penetration-point matrix.
3. Analyze the results of security testing.


Check Procedures:
The check procedures for this test process should focus on the completeness and competency of
the team using the security baseline process and the penetration-point matrix, as well as the
completeness of the list of potential perpetrators and potential points of penetration. The analysis
itself should also be challenged.

Output:
The output from this test process is a security baseline, the penetration-point matrix identifying
the high-risk points of penetration, and a security assessment.


Developing the Test Plan:


The scope of the effort to determine whether software is ready to be placed into production
should be defined in a test plan. Expending resources on testing without a plan will almost
certainly lead to waste and to an inability to evaluate the status of corrections prior to installation.
The test planning effort should follow the normal test planning process, although the content will
vary because it will involve not only in-house developed software but also vendor-developed
software and software embedded in computer chips.

Overview:
The test plan describes how testing will be accomplished. Its creation is essential to effective
testing and should consume about one-third of the total testing effort. If you develop the plan
carefully, test execution, analysis, and reporting will flow smoothly. Consider the test plan an
evolving document: as the development effort changes in scope, the test plan must change
accordingly. It is important to keep the test plan current and to follow it, for it is the execution of
the test plan that management must rely on to ensure that testing is effective, and it is from this
plan that testers ascertain the status of the test effort and base opinions on its results.

Objective:
The objective of a test plan is to describe all testing that is to be accomplished, together with the
resources and schedule necessary for completion. The test plan should provide background
information on the software being tested, test objectives and risks, and specific tests to be
performed. Properly constructed, the test plan is a contract between the testers and the project
team/users. Thus, status reports and final reports will be based on that contract.
Concerns:
The concerns testers face in ensuring that the test plan will be complete include the following:

 Not enough training. The majority of IT personnel have not been formally trained in
testing, and only about half of full-time independent testing personnel have been trained
in testing techniques. This causes a great deal of misunderstanding and misapplication of
testing techniques.

 Us-versus-them mentality. This common problem arises when developers and testers
are on opposite sides of the testing issue. Often, the political infighting takes up energy,
sidetracks the project, and negatively impacts relationships.

 Lack of testing tools. IT management may consider testing tools to be a luxury. Manual
testing can be an overwhelming task. Although more than just tools are needed, trying to
test effectively without tools is like trying to dig a trench with a spoon.

 Lack of management understanding/support of testing. If support for testing does not
come from the top, staff will not take the job seriously and testers’ morale will suffer.
Management support goes beyond financial provisions; management must also make the
tough calls, such as whether to deliver the software on time with defects or take a little
longer and do the job right.


 Lack of customer and user involvement. Users and customers may be shut out of the
testing process, or perhaps they don’t want to be involved. Users and customers play one
of the most critical roles in testing: making sure the software works from a business
perspective.

 Not enough time for testing. This is a common complaint. The challenge is to prioritize
the plan to test the right things in the given time.

 Over-reliance on independent testers. Sometimes called the “throw it over the wall”
syndrome. Developers know that independent testers will check their work, so they focus
on coding and let the testers do the testing. Unfortunately, this results in higher defect
levels and longer testing times.

 Rapid change. In some technologies, especially rapid application development (RAD),
the software is created and/or modified faster than the testers can test it. This highlights
the need for automation, but also for version and release management.

 Testers are in a lose-lose situation. On the one hand, if the testers report too many
defects, they are blamed for delaying the project. On the other hand, if they do not find
the critical defects, they are blamed for poor quality.

 Having to say no. The single toughest dilemma for testers is to have to say, “No, the
software is not ready for production.” Nobody on the project likes to hear that, and
frequently, testers succumb to the pressures of schedule and cost.

Input:
Accurate and complete inputs are critical for developing an effective test plan. The following
two inputs are used in developing the test plan:

 Project plan. This plan should address the totality of activities required to implement
the project and control that implementation. The plan should also include testing.

 Project plan assessment and status report. This report (developed from Step 1 of the
seven-step process) evaluates the completeness and reasonableness of the project plan. It
also indicates the status of the plan as well as the method for reporting status throughout
the development effort.

Do Procedures:
The following six tasks should be completed during the execution of this step:
1. Profile the software project.
2. Understand the software project’s risks (see the sketch after this list).
3. Select a testing technique.
4. Plan unit testing and analysis.
5. Build the test plan.
6. Inspect the test plan.
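
Task 2, understanding the project's risks, is often supported by a simple exposure calculation: likelihood times impact, with tests planned for the highest exposures first. The risks and scores in this sketch are hypothetical.

```python
# A minimal sketch of risk-based test prioritization, assuming each
# identified risk is scored for likelihood and impact on a 1-5 scale.
# The risks and scores are hypothetical.

risks = [
    {"risk": "incorrect interest calculation", "likelihood": 3, "impact": 5},
    {"risk": "slow response under load",       "likelihood": 4, "impact": 3},
    {"risk": "report layout errors",           "likelihood": 2, "impact": 1},
]

# Exposure = likelihood x impact; plan tests for the highest exposure first.
for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]

for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f"exposure {r['exposure']:>2}: {r['risk']}")
```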
