
Module 5: Systems implementation, testing, and support

Overview
When an organization decides that it is time to replace its current information system, whether through a buy or a
build process, a number of tasks must be completed. In Module 4, you learned that the first two steps in systems
development (systems analysis and design) were crucial to determining the functional specifications of the new
system (exactly what the system must do and how it will achieve its goals). Using requirements gathering techniques (interviews,
questionnaires, and observation) and logical modelling diagrams (DFDs, ER diagrams, decision tables, use case diagrams), the
specifications for a new system are written and ready for the implementation phase of systems development.
Programming, testing, conversion, installation, documentation, and training are the tasks that take place during
systems implementation. (As we described, these activities are represented by the programming, testing, and
conversion steps of systems development in Exhibit 3.2-1.) The purpose of implementation is to convert the
system design into a dependable, functioning system and introduce it to everyday use. It is also at this stage
that any errors or omissions from the analysis and design stages present themselves and must be corrected.
In this module, you look at the different tasks associated with systems implementation, including the various
conversion schemes and the importance of testing and continued maintenance/support. Two of the most
critical aspects of systems implementation, training and change management, are presented in Module 10.
Maintenance and support is the final phase of systems development. It covers the period from the time the
system becomes operational to the time it is replaced. You study the various activities that make up this stage.
The text combines maintenance and support with systems implementation, but it is more common to see them
as separate phases, as they are presented in this module.
You will improve your competence to advise on the design, development, and implementation of IT projects,
including specific applications software. You will develop your ability to identify, analyze, and evaluate
enterprise risk factors, and to generate and evaluate alternative solutions.
5.1 Systems installation and conversion
5.2 Testing in systems implementation
5.3 Quality assurance in system development
5.4 Systems maintenance
5.5 Systems enhancement and reengineering
5.6 Legacy system issues
5.7 Measuring system benefits
Module summary
5.1 Systems installation and conversion
Learning objectives
Recommend and defend a strategy for converting from an existing system to a new system,
based on situational factors. (Level 1)
Assess the advantages and disadvantages of the four different conversion strategies. (Level 1)
Required reading
Review Chapter 9, Section 9.2, Overview of Systems Development (subsections Conversion up
to and including Documentation)
Module scenario: A conversion is not based on hope
It is Friday afternoon after a long week. You are in your office, going over the conversion plans once again to
make sure that everything is in place for Monday morning. Your boss comes by and asks how everything is
going. "Fine, Carl," you reply, "just a last check before Monday." "That's what I wanted to ask you about," says
Carl, and he sits down. "So," he says, "we have spent all this time, effort, and money to choose a system that
is newer, web-enabled, integrated, all those things you told us. It's ready to go, but we are only going to
pilot inventory management beginning next week, and the rest of the conversion will happen over the next
couple of months." He pauses, and narrows his eyes. "Why don't we just convert everything over this weekend,
and next week we'll be live on the new system and away we go? We're right there. Even you said we've done
everything right. So why are we holding up 50 yards from the finish line?"
You carefully choose your words. "Well, it's because we are so close that we don't want anything to go wrong
in those last 50 yards. Yes, we have a great new system, and we are completely behind it, and can't wait to
have everyone on it. But we also need our current system to keep working until we are done. Remember how I
said that system implementation is like rebuilding a plane while it's airborne? That's my concern. We are going
to pilot inventory management, rebuild the tail section of the plane, and make sure it still flies before we
commit to the whole thing. Maybe I'm extra cautious, but I'm not going to risk our business to save us maybe
two months of testing." Then, after a pause, you say: "Besides, running a pilot system gives us a chance to
benchmark any performance improvements in this key module before converting the whole system."
LEVEL 1
This topic focuses on the following systems implementation activities:
documenting IS projects
programming and coding
conversion
installation
Documenting IS projects
There is no magic answer to IS project documentation. It takes commitment, it takes time, and there are many
software applications to choose from. For example, Basecamp is a SaaS solution that can act as a repository for
all project communications. Many companies also use Evernote, a web-based suite of software and services
designed for note-taking and archiving, as a documentation archive. For a more formal project management
environment, there are applications such as OpenText, a more integrated, content management system. And
for organizations that already use Microsoft Project there is Microsoft Project Server, an application that
connects MS-Project and MS-SharePoint to create a project repository that uses a SQL Server database.
Responsibility should be assigned at the beginning of the project, and not just left to team members to figure
out as the project goes along. The following steps can help establish the importance of documentation. Not all items
listed may be necessary for a given project, but it is important to present a thorough list from which to choose:
1. Assign one team member as the document manager. Their role is to gather, classify, co-ordinate,
and disseminate the project documents into a series of complete and current reference manuals.
If the task is too big for one person, let them choose a partner.
2. Create an internal website or portal where all documentation related to the project can be stored.
Or choose one of the software applications previously described. If handwritten documents or
drawings are important, scan them.
3. Different users of the system require different documents. There is user documentation and
system documentation. For a purchased system, much of the user documentation will already be
included. The minimum that should be created includes:
User documentation
Functional description: introductory document that describes what the system does
Installation document: how to install the system hardware and software
Introductory manual: describes how to use the system
System reference guide: the types of error messages that arise and how to correct them
System documentation
Requirements document: describes the system as a logical model
Systems architecture: how to connect and relate the system models and programs
Source code list: a list of programs, tables and fields
In Module 4 you learned how unified modelling language (UML) could be used at the software development
stage to specify, visualize, modify, construct, and document the software architecture of systems. Specifically,
the use case diagram in UML was introduced, but you also learned that there are many other modelling
diagrams available. These diagrams help align the development and documentation processes with
each other, and therefore both are more likely to be done properly.
Programming and coding
Programming and coding is the process of converting the physical design, or system blueprint, into software.
During this phase, programmers convert the inputs, outputs, and processes described and logically diagrammed
in the design phase into actual software applications.
The programmers may be full-time staff of the organization, or they may be consultants or contract workers.
When systems are purchased externally, the main programming of the system has already taken place, but
customization of the system to the organization may be required. Many systems remain "vanilla" installations.
The vendor code is untouched; if modifications are required, they are completed outside of the code, and then
combined with the application/program when it is called. This way application updates or fixes from the vendor
can be more easily applied, since the code is unmodified.
Business managers and end-users are not typically involved in this process. However, it is important for
managers to monitor progress during this phase to ensure that the overall project schedule is maintained.
Liaising with the programming team or the external consultants thus remains an important role for managers
at this stage.
Conversion
Conversion and installation is the process of upgrading or replacing the existing system with the new system.
This includes not only the software and procedures of the new system, but also any changes or improvements
to the IT infrastructure, which may include network or hardware improvements/additions. These would all have
been accounted for during requirements gathering in the analysis phase. During the conversion and installation
process, all the requirements must be taken into account.
There are four approaches to system conversion:
Parallel
Pilot
Phased
Direct cutover
Each approach has its own strengths and weaknesses. The decision on which approach to adopt is an
important one, and the trade-offs between cost and risk must be carefully considered. Exhibit 5.1-1 shows how
each approach handles conversion from the existing system to a new system.
Exhibit 5.1-1
Parallel conversion
In a parallel conversion, the old system continues to be used at the same time as the new system is
introduced. Both systems run in parallel for a predetermined amount of time; if a parallel conversion goes
on for too long, the new system runs the risk of being rejected by the users, often over issues of trust. People use both
systems, gradually increasing the amount of time they spend on the new system until it has been adopted
for the majority of their work. Then the old system is discontinued.
A parallel conversion allows for a comparison of the new system to the old so that you can benchmark and
quantify its effectiveness. A parallel conversion also minimizes the risks of operational and data-processing
failures because the old system continues to function alongside the new system. However, this can incur additional
costs in resources, since both systems continue to run in parallel until the conversion is complete.
The challenges of a parallel conversion revolve around the management and costs of running two separate
systems. The duplication of effort associated with running two systems can be costly. With large, complex
systems, this cost would be prohibitive. If the old system remains an option, some users will not use the new system and
will continue to access the old one. The benefits of the new system will be delayed as long as the old system
continues to be used.
The parallel approach is considered the least risky conversion approach. Nevertheless, the cost and potential
confusion of running two systems at once make it a poor choice for large, complex systems. It is a good
choice for smaller systems that will use the existing infrastructure.
Pilot conversion
In a pilot conversion, the new system is introduced in a single unit or location for a set period before it is
installed in other parts of the organization. The pilot conversion allows an organization to test out a new
system in a controlled way. It limits the amount of disruption and harm a new system can produce in an
organization. By concentrating on one site or department, you can work out all of the details and potential
problems before the new system is fully introduced to the organization.
The success of a pilot installation can be used to overcome user resistance, and sell the new system to the rest
of the business. You can use the experience of a pilot installation to decide whether or not to continue with the
system deployment and what approach to use.
With a pilot conversion, the selection of the unit or location is critical. If you select the easiest site with very
positive conditions for adoption of the system, you may hide some potential problems. The inverse is also true:
the most difficult site may discourage acceptance of the new system.
A pilot conversion creates additional burdens for the IS staff in the maintenance and support of two different
systems that may or may not be able to effectively communicate with each other. This approach also runs the
risk of delaying the full implementation of a new system because the pilot is constantly being improved.
The pilot approach to systems conversion is a middle-of-the-road method designed to minimize risk. The
challenge is that you must still select a method of installing the new system in the rest of the organization.
Pilot installations work well when there is some uncertainty regarding how the system will work and when there
is time to work out any potential problems, without jeopardizing the benefits to the company.
Phased conversion
In a phased conversion, a new information system is broken down into smaller functional components that can
be brought into operation one at a time, with each one adding more improvements and functionality to the
overall system. A phased installation is gradual, incremental, and easier to manage than the other installation
approaches. A phased approach keeps the risk fairly low by spreading the conversion out over time.
A phased conversion can also work with a phased systems development process. This slow and steady
approach limits the potential of system errors and the costs associated with system failures. With a phased
conversion, a completion point can be difficult to define because it takes place over such a long period of time.
The old and the new systems must be able to work together seamlessly, which may require additional
programming and development, adding cost and resource demands to the project.
Phased installation works well with systems that are building on existing systems or enhancing those systems.
This approach manages the risks and costs more easily in the short term, but it can become a never-ending
project; by the time all the components have been introduced, it is time to install a new system. Therefore, the
installation time frame of the system must be closely managed.
Direct cutover conversion
In a direct cutover conversion, the old system is discarded and the new system takes over all at once. Also
known as the "plunge," "abrupt cutover," or "big bang" approach, it is essentially turning the old system off
and turning the new system on. This approach can be the least expensive of the different methods and can
occur in the quickest time. Users and management have a high interest in making the new system work
because, by design, there is no turning back. A direct cutover conversion may be the only option if the old and
new systems cannot co-exist in any form. The greatest risk is the impact that errors and failures would have on
the organization. The timing of this type of conversion is a key element of its success.
Although it is the riskiest strategy for new systems installation, the direct cutover conversion can be low cost, and the
benefits of the new system can be realized without delay. A direct cutover conversion approach works well
with immovable deadlines, such as was the case with Y2K or a response to regulatory changes like the re-
implementation of provincial sales tax (PST) in British Columbia. Direct cutover, despite its risks, is also the
method most used for large implementation projects like ERP systems.
Activity 5.1-1: Judgment call
This activity presents a scenario for you to work through to test your understanding of conversion strategies.
Installation
Regardless of the conversion strategy adopted, you must still plan and execute the physical installation of the
new system. If the system has been purchased, the programming phase is replaced by a series of procedures
that describe how to configure the new system. You must include any new hardware and operating systems
that are required by the new system, and you must have provisions for selecting and purchasing new
computers and other related hardware. When and where the new hardware will be installed depends on the
type of new system being installed.
The choice of conversion strategy will dictate how you install new hardware. If you are using a direct cutover
conversion, all the necessary hardware and software must be in place. If you are using a pilot conversion, only
the pilot site will have the needed hardware, temporarily delaying costs. The pilot approach also allows you to
test the procedures for installing and operating new hardware in a single, controlled location. This spreads out
the workload of a new hardware installation over a longer period, which makes it easier to manage the
installation.
5.2 Testing in systems implementation
Learning objective
Distinguish between the different methods of testing in systems implementation and formulate
strategies to mitigate the limitations of testing. (Level 1)
Required reading
Review Chapter 9, Section 9.2, Overview of System Development (subsection on "Testing")
LEVEL 1
As part of the systems implementation process, the new system will undergo testing to ensure two things: (1)
that it meets the stated goals of the system as agreed by the users and application owners, and (2) that it
works the way it is supposed to. Testing should be done in simulation on the most current hardware/software
that reflects the new environment. Also, testing should adhere to the following steps:
set out the conditions of the test
walk through the conditions in the simulation
create test data in the simulation
perform the tests
evaluate the results
There are two basic phases of testing:
Software or program testing (which includes unit, integration, and system testing)
Acceptance testing
Software testing is focused on ensuring that the programs work as intended and produce correct results in a
controlled environment. Acceptance testing is focused on the use of the system by the intended users, such
as whether it continues to work as intended when users operate it. Certain aspects of the interface design may
confuse users and lead them to misinterpret the results or misuse functions. Such errors result in an overall
failure of the system to produce the results to meet system goals, and they must be addressed before the
system is rolled out.
Software testing
The steps grouped in software testing include three test levels: unit testing, integration testing, and system
testing.
In unit testing, the individual modules of the system are tested for any potential errors in the code. In
essence, the unit test is a functional test. It is done in a simulation or test environment, and involves both
users and IT staff (developers or support), regardless of whether the system is purchased or programmed. The
goal of unit testing is to verify the functionality of the code; does it perform the functions as they are
described? For example, does entering a purchase order into the new system update all the necessary fields,
perform the necessary calculations, and reflect the proper costs in the accounting system?
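The purchase-order example above can be expressed as a simple unit test. The function, field names, and data structures below are hypothetical, a sketch of the kind of functional check a unit test performs rather than code from any real system:

```python
# Hypothetical module under test: posts a purchase order and returns
# the updated inventory and accounting records.
def post_purchase_order(inventory, ledger, sku, qty, unit_cost):
    """Update on-hand quantity and record the cost in the ledger."""
    inventory = dict(inventory)  # copy so the originals are untouched
    ledger = dict(ledger)
    inventory[sku] = inventory.get(sku, 0) + qty
    ledger["inventory_asset"] = ledger.get("inventory_asset", 0.0) + qty * unit_cost
    return inventory, ledger

# Unit test: does entering a purchase order update all the necessary
# fields and reflect the proper costs in the accounting records?
def test_post_purchase_order():
    inv, led = post_purchase_order({"A100": 5}, {"inventory_asset": 100.0},
                                   sku="A100", qty=10, unit_cost=2.5)
    assert inv["A100"] == 15                 # on-hand quantity updated
    assert led["inventory_asset"] == 125.0   # 100.0 + 10 * 2.5 posted

test_post_purchase_order()
print("unit test passed")
```

In practice, both users (who know what the correct result should be) and IT staff (who can automate checks like this) contribute to defining these test cases.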
Because the modules must work together, integration testing must also be performed. The modules are
grouped together gradually, with testing occurring as each module is added. In a typical top-down approach
(where the top modules are tested first, followed by the branch modules, until all testing is complete), this
testing compares the program outputs to the inputs for consistency. The goal of integration testing is to expose
any defects in the interfaces and integration of the modules, and to ensure that modules satisfy the system
requirements and goals. For purchased systems, the design usually centres on a core of business modules.
Organizations often purchase systems rather than build them because integration between the modules is assured. If legacy
systems are required to integrate with the new system, then testing for this integration must also be done.
Both purchased and developed systems support the use of application program interfaces (APIs): formally
defined code through which software components provide a method of connecting systems together. Middleware (a
software product that often supports many APIs) may also be purchased to help with the integration with
legacy systems.
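As a sketch of the API/middleware idea just described, the adapter below translates records from a hypothetical legacy system into the structures a new system expects. The class names and the record format are invented for illustration, not taken from any real product:

```python
# Hypothetical legacy system: exposes records as fixed-format strings.
class LegacyInventory:
    def fetch(self):
        return ["A100|0015", "B200|0007"]  # format: sku|quantity on hand

# Adapter playing the API/middleware role: translates legacy records
# into the dictionary structure the new system works with.
class LegacyAdapter:
    def __init__(self, legacy):
        self.legacy = legacy

    def on_hand(self):
        out = {}
        for rec in self.legacy.fetch():
            sku, qty = rec.split("|")
            out[sku] = int(qty)
        return out

# Integration test: the new system reads legacy data only through the adapter.
adapter = LegacyAdapter(LegacyInventory())
assert adapter.on_hand() == {"A100": 15, "B200": 7}
print("integration test passed")
```

Because the new system depends only on the adapter's interface, the vendor's legacy code stays untouched, which is the same reasoning behind keeping purchased systems "vanilla."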
The goal of system testing is to see how all the components of the new system work under various
conditions, including normal loads and peak loads. Most of this testing is in the form of simulations and provides only
a guideline, not an absolute answer. Still, it is important to test for attributes like quality, reliability, and scalability.
System throughput is an important measurement that looks at a specific performance instance, for example, the total
number of invoices or inventory transactions performed in an hour. With test data, the system performance
variation between peak and normal performance is measured and benchmarked, to ensure that system
response and user experience are delivered as specified in the system requirements.
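Throughput as described above can be measured with a small timing harness. This is a minimal sketch; the transaction function is a stand-in for a real invoice or inventory posting:

```python
import time

def process_transaction(i):
    # Stand-in workload for a real invoice or inventory transaction.
    return sum(range(1_000))

def throughput(workload, n):
    """Run n transactions and return transactions per second."""
    start = time.perf_counter()
    for i in range(n):
        workload(i)
    elapsed = time.perf_counter() - start
    return n / elapsed

# Benchmark a "normal" sample and a larger "peak" sample, as system
# testing compares performance between normal and peak loads.
normal = throughput(process_transaction, 1_000)
peak = throughput(process_transaction, 10_000)
print(f"normal sample: {normal:,.0f} tx/s; peak sample: {peak:,.0f} tx/s")
```

In a real system test, the workload would drive the actual application, and the measured rates would be benchmarked against the response times specified in the system requirements.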
Acceptance testing
Acceptance testing is the last phase of testing. Acceptance testing involves the actual users of the system, and
measures how well the new system meets their expectations and requirements. There are two approaches to
acceptance testing. Alpha testing uses simulated data. During this phase, users test system performance.
Procedures for security and recovery are also tested during this time. Beta testing uses the actual data of the
completed system and is the final stage before installation. There can be an intangible element to acceptance
testing. Test scenarios are developed and run, with the results verified to show proper functioning of the
system. However, users get a feel for the system during acceptance testing. In a built system, changes may
be required to win user support; in a purchased system, users must adapt to the design.
Various types of tests are conducted during this phase. Both users and system developers verify the system
through the following tests:
1. Recovery testing examines how the system works when it has been forced to fail. For example,
does it restart properly, refresh data correctly, and provide information about the status of
transactions that were in progress when the system failed?
2. Security testing focuses on whether the security policies have been implemented as intended in
the final system, and whether they create any unforeseen problems. For example, for privacy
reasons, a bookstore may want to give access to customer information only to store managers.
This security measure means that employees would not have access to customer information
when a manager is not present in the store. This access limitation would make it impossible to
update customer information unless the manager is present. In the wake of the Sarbanes-Oxley
Act (SOX), security testing of new systems has taken on greater urgency. An element of SOX
compliance includes the ability to justify the module/application/program security of a system, in
that no unauthorized user could access or tamper with information that flows into the financial
statements.
3. Stress testing tries to break the system by not following the established rules and procedures.
At this level, it tests the soundness of the error handling routines built into the system. What
happens if you don't follow the correct sequence of steps or if you try to submit incomplete
records? A system that crashes when users make simple errors such as these would not be very
useful. Such testing is important, as shown in the following example about system performance
as it relates to user volume.
Stress testing also considers the performance of the system under heavy use conditions. You may
have experienced situations where a program works well when there is only one user, but slows
to a crawl when others are also active. Such an example occurred in a university computing lab in
the early 1990s when MS Office Professional was installed for the first time. The testing phase of
the project was inadequate, and stress testing was not considered. Most of the applications ran
well in this environment, though a little slowly when there were multiple users.
MS Access, which was originally designed to produce databases for a single user (as opposed to
its current form, which supports a limited number of concurrent users), was a different story. Access was being
taught as part of an introductory computer course in one-hour labs. On the day of the first
Access lab, when 30 students were asked to launch the software at approximately the same time,
the network choked. It took about 20 minutes for the application to start. Simple queries took
two to three minutes to process. Also, given the extreme stress on the network, it was not
uncommon for the whole classroom of computers to lock up and need to be restarted. Imagine if you
had been in this class as either the student or teacher! Stress testing would have discovered this
issue in advance.
4. Performance testing examines the use of the system in different environments. Suppose an
organization has multiple networks and workstations running different operating systems. It is
important to test the use of the system in these different environments and configurations to
make sure it works adequately under all of the required conditions. Not only is testing carried out
in multiple environments, but on multiple platforms too. PCs, smartphones, tablets, and hand
scanners: any device today that requires access to system data must be tested for compliance
with the entire list of test scenarios. The complexity of enterprise systems today demands
multiple configurations and multiple means of access, all tested and able to synchronize with
the demands of the business. Still, a test is just a test.
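The lab anecdote above is, in effect, an unplanned stress test: many users launching the same application at once. An automated version can be sketched with threads against a shared resource. The class, connection limit, and timings here are all invented for illustration:

```python
import threading
import time

class SharedDatabase:
    """Hypothetical shared resource with a fixed connection limit."""
    MAX_CONNECTIONS = 10

    def __init__(self):
        self.lock = threading.Lock()
        self.active = 0
        self.rejected = 0

    def query(self):
        with self.lock:
            if self.active >= self.MAX_CONNECTIONS:
                self.rejected += 1   # over the limit: request is refused
                return False
            self.active += 1
        time.sleep(0.05)             # simulate the query doing real work
        with self.lock:
            self.active -= 1
        return True

# Stress test: 30 "users" hit the resource at approximately the same time,
# like the 30 students launching Access in the anecdote above.
db = SharedDatabase()
threads = [threading.Thread(target=db.query) for _ in range(30)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"requests rejected under load: {db.rejected}")
```

Running such a test before rollout would have surfaced the lab's concurrency problem in advance, and would also verify that the system degrades gracefully (rejecting requests) rather than locking up.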
Limitations of testing
While testing is a critical activity in implementing any new system, it is important to have a realistic view of
testing and its limitations. It is impossible to guarantee that once a system is tested, it is bug free. This sounds
like an excuse made up by lazy developers who don't want to work hard at testing their own products. But the
reality of most software is that its complexity precludes exhaustive testing of all aspects. A test is a subset of
the real environment in which a system will be measured. Example 5.2-1 illustrates this idea.
Example 5.2-1: Pressman's example of testing
Suppose you were to obtain a magic testing program. This program would examine a piece of software and
iterate through every possible path along which the software could be executed. You could feed a computer
program into it and let it run until it had finished testing all the permutations and combinations. Then you would
know the program was perfect.
Now take a very simple program, written in the C programming language. This program is short, containing
100 lines and two nested loops. You might have a user entering the numbers of products in an inventory
database and then, for each product, recording each of the transactions that makes changes to inventory on
hand. Each loop executes from 1 to 20 times. Again, this is very short. Many companies have hundreds, or
even thousands, of product codes and thousands of transactions a day; we are only allowing 20. Within the
loops, there are four IF-THEN-ELSE blocks, which means that you are checking for different conditions to do
different things under those conditions.
You need to check to see whether the transaction was an order received (increase in inventory) or a shipment
(decrease in inventory). When you receive inventory, you need to check if there are customers waiting for it
right now (back orders) and be ready to ship the product out again. You can probably think of a lot of other
conditions you would want to check about inventory. Again, the point is that this is a pretty trivial program.
Given this program and the magic testing program working 24 hours a day, seven days a week, how long do
you think it would take to test the outcome of every possible combination of actions? A day? A month? How
about 3,170 years? When you work out the number of possible combinations and the different things you
would need to test, it would actually take thousands of years to exhaustively test this program.
Although this calculation is based on 1997 computer hardware, which is obviously much slower than what is
available today, you get the general idea: testing for 100 percent assurance is not practically feasible.
Source: R.S. Pressman, Software Engineering: A Practitioner's Approach. New York: McGraw-Hill (1997).
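To make the arithmetic concrete: the quoted figure works out if the trivial program has on the order of 10^14 distinct execution paths and the magic tester evaluates one path per millisecond. Both numbers are illustrative approximations used to reconstruct the calculation, not figures stated in the text above:

```python
# Reconstructing the back-of-envelope calculation behind "3,170 years".
paths = 10 ** 14            # assumed distinct execution paths (approximate)
tests_per_second = 1_000    # assumed rate: one path tested per millisecond
seconds_per_year = 365.25 * 24 * 3600

years = paths / tests_per_second / seconds_per_year
print(f"exhaustive testing would take about {years:,.0f} years")
```

The result comes out near the roughly 3,170 years quoted above, which shows how quickly two small nested loops and a handful of branches make exhaustive testing infeasible.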
Challenges in testing
It is important to recognize that testing is not a perfect science. Software testing and acceptance testing look
for the major activities of the system and the primary conditions of its use. However, no system is ever bug
free. As the system is rolled out to users and used over time, new problems will be encountered. Many
variations, from installed software to hardware chipsets to operating system fix levels, can impact the user
experience of a new system. In a live environment, dozens of connected devices can perform flawlessly, while
dozens more fail. Testing for all the possible complexities is impossible. Having post-implementation
contingency plans and the staff to support them is the crucial third leg of support in system testing, after
software and acceptance. It is also important to ensure that you source not only the best software/hardware
package but also a vendor with reputable customer service. For example, consider that you may have sourced
the best training software that will solve all your organizational and educational needs. But if that software's
customer support reviews express nothing but dissatisfaction and negative comments, you may want to rethink
your choice of vendor.
As a manager, you cannot expect perfection. But you can expect these two other, more important
consequences:
Since testing is not perfect, developers (internal to your firm, external consultants hired by your
firm, or commercial software companies) need to recognize that the testing process and the
process of correcting problems continues throughout the life of a system. The ability to quickly
isolate a problem and work with vendor application support staff to solve the issue in a
reasonable time frame is how vendors maintain the confidence of their users. Prompt,
post-implementation service is key to an ongoing relationship with both new and existing
customers.
Since testing (quality control) cannot be assured to find all problems, processes to try to reduce
or avoid problems (quality assurance) are necessary. The activities that comprise quality
assurance in software development will be explained in the next topic.
How do you make the decision to release a system if testing is never complete? When is it time to turn the
system over to the users? You can answer these questions using various statistical models that compare the
probability of detecting additional errors against the costs of additional testing time.
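One simple way to frame that release decision is a marginal-cost comparison: stop testing when the expected cost of the defects another week of testing would catch falls below the cost of that week. The figures below are purely illustrative assumptions, not values from any real project or from a specific statistical model in the text:

```python
# Illustrative assumptions (hypothetical numbers):
defects_remaining = 50            # estimated defects still in the system
detection_rate = 0.30             # fraction of remaining defects found per week
cost_per_week = 20_000            # cost of one more week of testing
cost_per_escaped_defect = 5_000   # expected cost if a defect reaches users

week = 0
while True:
    found_this_week = defects_remaining * detection_rate
    value_of_testing = found_this_week * cost_per_escaped_defect
    if value_of_testing < cost_per_week:
        break  # further testing costs more than the defects it would prevent
    defects_remaining -= found_this_week
    week += 1

print(f"release after week {week}, ~{defects_remaining:.1f} defects remaining")
```

Real statistical release models are more sophisticated, but they follow the same logic: as the probability of detecting additional errors declines, the cost of additional testing eventually outweighs its benefit.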
5.3 Quality assurance in system development
Learning objectives
Identify the different characteristics of systems quality. (Level 1)
Distinguish between quality assurance and quality control. (Level 1)
Required reading
Reading 5-1: Quality Control versus Quality Assurance
LEVEL 1
The objective of quality assurance is to ensure that the system, developed and delivered through the project
phases, meets design specifications and fulfills user requirements. This is an important objective for systems
implementation as it allows you to deliver a new system that is as close to error-free as possible. Quality
assurance procedures can be as simple as clearly identifying testing procedures conducted by programmers and
systems analysts. For large and complex systems, often an independent quality assurance team that is not part
of the systems development team is used. There are a couple of reasons for this. First, the quality assurance
team usually includes analysts and users who are either trained to perform testing or have quality assurance
skills. Someone with good programming or systems analysis skills does not necessarily have good testing skills.
Second, an independent testing team that is responsible for conducting thorough and complete testing may be
more effective at finding problems than the team that developed the system. Purchased system software, on
the other hand, will have been put through extensive assurance testing at the vendor's site before being
offered to the business community. Whether programmed in-house or purchased, new software must pass a
minimum set of quality tests, but these tests cannot guarantee an absolutely error-free product. Still, a good
measure of quality can be obtained by assessing certain characteristics of a system and tabulating the results.
Characteristics of system quality
The quality of a system is measured by
Completeness: The system meets all the performance specifications and provides all the
functions specified.
Correctness: The system performs the required functions accurately.
Reliability: The system can handle normal and peak loads.
Consistency: Operations are carried out in the same manner each time they are requested.
Efficiency: The amount of computer resources required by the system is reasonable.
Integrity: Access to software or data by unauthorized persons can be controlled.
Testability: The system is designed so that it can be tested to ensure it performs the intended
functions.
User-friendliness: The system is easy to use, users can understand the interactive dialogue, and
the interface is clean, self-directed, and intuitive.
Maintainability: Programming is structured and well documented, changes can be made easily,
and errors can be corrected with relative ease.
Scalability: The system is able to expand automatically to accept more users or greater
throughput without intervention, and with little to no effect on user performance.
Distinguishing between quality assurance and quality control
Quality control is concerned with the quality of the products themselves (in this case, information systems),
whereas quality assurance is concerned with the quality of the environment in which products (information
systems) are developed.
Reading 5-1 provides a succinct account of the concerns regarding quality control and quality assurance in
information systems, and the costs of quality within the IS department and user departments. It emphasizes
that quality assurance is preferred over quality control and concludes that not only is high quality critically
important in information systems, it can also be free because the cost is more than offset by the savings in
enhancements and other costs.
Manager support in quality assurance
Information systems are of strategic importance to modern businesses. Many organizations are totally
dependent on the proper operation of their information systems. Yet quality in information systems has been
an elusive goal for management, users, and systems professionals. Everyone involved in the design,
construction, and use of information systems should emphasize quality in information systems.
As a manager, you can support quality assurance in at least two ways. First, you can ensure higher quality
systems by participating actively in the requirements analysis. Whether you participate directly, or whether you
send a staff member to participate (depending on the system involved), you need to ensure that the task is
given enough attention. The requirements analysis determines the design of the system; it makes strategic
sense to ensure that time and effort are spent on this phase.
By ensuring that the right person is involved and has sufficient time to devote to requirements analysis
(perhaps by reducing other activities), you can enhance the quality of systems being developed to support your
business processes. Although you may lose a good staff member to the project for a period, the long-term
benefits are high. The right person will justify the time by delivering thorough, well-developed
functional specifications.
Second, you must avoid the temptation to rush testing as the project comes to its conclusion. Because many
systems development projects are challenged in terms of schedule (that is, they come in late and over budget),
it is tempting to rush the final phases in order to meet the schedule. Managers who stress the importance of
the schedule over the completion of necessary tasks add to this tendency. Managers must ensure that in their
zeal to get a project completed on time, they do not compromise on the testing phases. Such a compromise
will only move the responsibility for finding problems from the system designers to the system users, in effect
reducing released code to beta code. The same is true for documentation and user training. Rushing to wrap
up a project and failing to adequately document the processes and systems will only lead to more difficulties in
the future. And reducing the amount of user training simply to bring the system delivery in on time will also
affect users' acceptance of the new system and their ability to use it correctly.
5.4 Systems maintenance
Learning objectives
Explain the basic tasks of systems maintenance. (Level 2)
Defend the need for sufficient systems maintenance. (Level 2)
Required reading
Review Chapter 9, Section 9.2, Overview of Systems Development (subsection on Production
and Maintenance)
LEVEL 2
After an information system has been implemented and accepted by the users, it moves into the ongoing
maintenance phase. The maintenance phase includes two related activities: systems maintenance and systems
enhancement. There is a distinction between these two systems support activities. In this topic, you will study
the various issues surrounding systems maintenance. In Topic 5.5, you will study the issues surrounding
systems enhancement.
The moment a new system is implemented, its maintenance begins. Systems maintenance is the
process of fixing errors and adapting the system to changing requirements. It can also entail answering
user questions on process, screens, reports, and other concerns. In a well-designed system, the systems
analyst has already taken into consideration the ease of continued maintenance and improvements and has, to
a certain extent, designed the system to be able to adapt to the changing needs of the business. Some people
distinguish between corrective action (bug fixing) and adapting to new requirements, calling the former systems
maintenance and the latter systems enhancement.
Maintenance, excluding enhancement activities, includes a broad range of corrections to:
Program bugs
Design bugs
Documentation errors
Operating-procedure errors
User interface errors
Test-data errors
For example, if a point-of-sale system was built before the harmonized sales tax was implemented, it would
need to be modified to handle this new tax. In this case, the request would be classified as a systems
enhancement; the system would have to be improved to handle the new requirement. However, if an error is
found with the calculation of the harmonized sales tax for sales returns (and the error was not detected during
testing), the modifications required would be classified as systems maintenance.
During the testing phase, users will often request what they foresee as small changes to the screen design or
report calculations that would greatly improve the usability of the system in their eyes. Some requests may be
legitimate. However, taken in total, all these small requests would severely jeopardize the completion of the
project. The project manager, developers, and end users categorize requests as enhancements, maintenance,
or nice-to-haves. Usually, the enhancement and nice-to-have requests are documented for inclusion at some
later time, while maintenance requests due to incorrect calculations in a program must be fixed.
Systems maintenance activities
The systems maintenance process consists of five steps:
1. Define and validate the problems.
2. Benchmark the programs and applications.
3. Understand the application and its programs.
4. Edit and test the programs.
5. Update documentation.
Define and validate the problems
Before a systems maintenance project is started, it is essential that the exact nature of the problem be
identified. Errors reported by users of the system can be caused by real bugs, misuse of the system, or errors
in the documentation. The most difficult bugs to identify are intermittent errors. Intermittent errors are hard to
replicate and can be caused by software faults in other running programs, hardware errors, user errors, or
combinations of all three. At this stage, it is important not to jump to conclusions and solve the wrong problem.
The focus is on fixing and validating the programs in the installation, not on unnecessarily redesigning part of
the system.
Benchmark the programs and applications
Depending on the type of error encountered, before making changes the system should be tested to establish a
benchmark. This is particularly important for measuring the performance of the system (not as much when the
error is a simple calculation). One way to benchmark is to rerun the test cases, adding new cases where
necessary, and recording the performance results. Without establishing a benchmark, it is difficult to determine
the effect of the fixes later on, especially with performance issues. Establishing benchmarks is particularly
important for online systems, such as e-commerce systems that may be sensitive to the timing of events, and
to account for performance degradation. Benchmarks should be taken whenever a system/program fix is
applied, or a program improvement/enhancement is made.
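The rerun-and-record approach described above can be sketched in a few lines. The example below is a minimal Python illustration, not a production harness; the function names, the result format, and the 25 percent slowdown tolerance are assumptions made for the sketch.

```python
import time

def benchmark(label, test_cases, run):
    """Run each (name, args, expected) case against the function under
    test, recording correctness and elapsed wall-clock time."""
    results = {}
    for name, args, expected in test_cases:
        start = time.perf_counter()
        actual = run(*args)
        elapsed = time.perf_counter() - start
        results[name] = {"ok": actual == expected, "seconds": elapsed}
    print(f"benchmark '{label}': {len(results)} case(s) recorded")
    return results

def compare(before, after, tolerance=0.25):
    """Flag cases that regressed after a fix: a wrong answer, or a run
    slower than the baseline by more than `tolerance` (25% here)."""
    regressions = []
    for name, base in before.items():
        result = after[name]
        if not result["ok"] or result["seconds"] > base["seconds"] * (1 + tolerance):
            regressions.append(name)
    return regressions
```

Recording a baseline before the fix, rerunning the same cases afterward, and passing both result sets to `compare` makes the performance impact of a change visible rather than anecdotal.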
Understand the application and its programs
Before you can make any changes to correct bugs, you must study the programs to understand how they
interact. If the system was a build project, the project documentation, including the process specifications
and DFDs in the project repositories, is a primary source of information, as are the actual computer
programs. A purchased system will already have documentation on CD or installed with the system; ensuring
this availability would have been part of the requirements gathering. If there is inadequate information in the
project repositories for built systems, use any available software tool to aid in the maintenance.
For example, for user-developed applications in Microsoft Access, the Documenter tool (in the Analyze
section of Database Tools) can be used to document all the objects in an Access database. The output from
Documenter includes the properties, relationships, code, and user permissions of all the objects. With this
output, you should be able to understand the logic and structure of even a poorly designed Access system. In
Excel, the Ctrl+` shortcut key displays (or prints) formulas rather than results. For larger integrated business
system documentation projects in SQL, Red Gate offers a tool that automatically generates documentation of
the database; other resources, such as online documents, blogs, and customer accounts, can also aid in
understanding the system.
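The common idea behind these tools, reading a system's own metadata to produce a record of its structure, can be shown with a short script. The sketch below uses Python's built-in sqlite3 module purely as a stand-in database; it is a minimal illustration of tool-assisted documentation, and the output format is an assumption, not any vendor's.

```python
import sqlite3

def document_schema(db_path):
    """Write a plain-text record of every table and its columns by
    querying the database's own catalogue, a minimal stand-in for
    tools such as Access's Documenter."""
    conn = sqlite3.connect(db_path)
    lines = []
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()
    for (table,) in tables:
        lines.append(f"Table: {table}")
        # PRAGMA table_info returns one row per column:
        # (cid, name, type, notnull, default_value, primary_key)
        for _cid, name, ctype, _notnull, _default, pk in conn.execute(
            f"PRAGMA table_info({table})"
        ):
            flags = " PRIMARY KEY" if pk else ""
            lines.append(f"  {name} {ctype}{flags}")
    conn.close()
    return "\n".join(lines)
```

Even this much output gives a maintainer a map of a system whose documentation has gone missing; real documentation generators add relationships, code, and permissions on top of the same idea.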
Edit and test the programs
After you fully understand the corrective actions required, you make changes to the appropriate programs. The
most critical aspect of this activity is testing. For business systems, testing is usually performed offline, in a
specially created mock or test environment. The test environment is usually a scaled-down version of the
live/production environment, with a partial load of test data but with system software at the same fix level as
the live environment. This way a programmer can test changes safely and perform multiple iterations without
any impact on the live system until a solution is devised. This matters because computer systems often
behave like dominoes: changing one aspect of a system can have totally unexpected repercussions on other
parts. In a closed test environment, such a result should be discovered before the solution is ported to the
live environment.
You should rerun test cases in the live environment after the changes are made and compare them to the
established benchmarks to determine the impact of the changes, if any, on system performance.
Update documentation
The last step, but by no means the least, is to update the documentation so that someone else can maintain
the system in the future. Documentation provides a record of the issues raised and what changes were made
to the system. It provides an important source of information when planning future revisions, or in the event
that incompatibilities arise. System incompatibilities don't always appear in the most immediately revised
version of the system. They may take several iterations to become evident, so having an audit trail of changes
can be particularly useful. For audit compliance with SOX, audit trails with appropriate management sign-offs
are a necessity, as are change management systems that record software version levels so that program
changes can be easily discovered, printed, and verified at audit time. Good documentation has been a
necessity for years, but it still requires management to stress its importance and ensure adherence.
The high cost of maintenance
Systems maintenance consumes up to 80 percent of some IS development budgets because the current
production systems must be maintained and adapted to changing business needs. As users become familiar
with a new system, support and maintenance costs/resources should decrease, providing IS with more ability to
focus on new development. Still, system support activities are given higher priority than the development of
new systems because business operations depend on the current production systems.
Many organizations do not have separate maintenance and development budgets. Because there are usually
more systems currently running (and therefore needing support) than new systems under development, it is not
unusual to find the majority of the budget devoted to systems maintenance. This can be problematic, because
both the maintenance of existing systems and the development of new systems are essential for the long-term
viability of the organization.
However, the build costs involved in creating large-scale business systems today favour the purchase model. In
this sense, an organization (through user and license fees) backs the development of the software vendor, and
the directions it is taking the software in. Both Oracle and SAP have developed and released SOA versions of
their business suites, and both have also developed cloud-based solutions. Most organizations do not have the
in-house expertise or budget to drive such system developments. By purchasing a vendor suite, an
organization makes a strategic decision not only on where it wants to position itself with information
technologies, but it also buys into the future vision of that vendor. Purchasing should reduce overall system
maintenance costs in both short- and long-term measures, but whether an organization decides to invest those
savings in custom development is a governance decision.
5.5 Systems enhancement and reengineering
Learning objective
Assess a non-IT managers role in evaluating systems enhancements. (Level 2)
No required reading
LEVEL 2
Systems enhancements represent modification or expansion of the system in response to changing
organizational or business requirements.
There are three types of systems enhancements:
Mandatory
Those prompted by strategic analysis of the business environment
Nice-to-haves
The first two are usually dictated by forces beyond your control, such as business environment changes,
pressure from competitors, or changes imposed by statutory requirements. The "nice-to-haves" are under
management's control.
Systems enhancements include program transformation, where an existing program (in one programming
language) is used to generate another program (in another programming language), and virtualization, where a
virtual hardware platform, operating system, or storage device exists on a machine other than the one that is
currently running it. For example, when users place their daily files on C: drive, which is actually a virtual driver
on a physical server and not the user machine.
Systems reengineering refers to an organization's ability to adapt the system to major technological
changes, or to reorganize the applications to make them less likely to break or easier to maintain. The systems
enhancement and systems reengineering processes are similar.
Your role in systems enhancement and reengineering
Assess the implications
As a non-IT manager, you play an important role in assessing both mandatory and business enhancements,
prompted by a strategic analysis of the business environment. A common example is dealing with payroll
changes required by the Canada Revenue Agency (CRA). As a manager who understands IS issues, you would
work with systems developers to assess the implications of a rule change, which processes must be completed
or what data must be maintained, and the implications of these process or data changes to supporting
systems. Many organizations outsource payroll, but the changes must still be understood and incorporated into
the accounting system. U.S.-based software vendors do not always recognize Canadian enhancements,
something that a proper requirements analysis would uncover. In such cases, third-party software vendors will
often offer solutions, or organizations develop their own workarounds for adopting mandatory enhancements.
Anticipate changes
You may also be able to anticipate changes necessary to maintain a competitive position by analyzing the
organization's environment and evaluating the results of operations. Systems analysts generally do not have
such business acumen, and so tend to react to, rather than anticipate, changes in the business climate.
For example, the entry of a new competitor like Amazon.ca into the Canadian retail book business in 2002
demanded new attention to customer needs. The range of materials stocked, the ability to recommend other
titles based on past buying or browsing behaviour, and an accompanying decrease in prices were all strategies
that had to be matched. Once this need was identified, you would have worked with the systems department to
evaluate the benefits of the proposed changes and their costs, and decide whether and how to proceed.
Technology scanning, discussed in Module 2 as a way for IS departments to keep informed of trends and
directions, is necessary for strategy in other areas of the business as well.
Make cost-benefit assessments
Even with nice-to-have systems enhancements, your understanding of the operations can provide your
organization with better cost-benefit assessments. For example, changes to an organization's intranet
(discussed in Module 6) are proposed by users who find a need for new information as they carry out their
day-to-day work. The IS department may have hundreds of requests for enhancements. Which should be
implemented? Should any? Where is the priority? As a manager, you play a critical role in assessing these
decisions from a financial/strategic perspective or as part of a steering committee or governance team.
How to implement systems enhancements
The process of implementing systems enhancements is similar to the systems development process as a whole.
The basic steps include
1. Analyze enhancement requests.
2. Design and code the new programs.
3. Restructure files or databases.
4. Analyze program library and maintenance costs.
5. Reengineer, test programs, and document enhancements.
The first step parallels the analysis phase of the SDLC. Steps 2 through 4 combine the design-construction-
implementation activities that are undertaken for any new system. For system enhancements, they are simply
applied on a smaller scale. Finally, Step 5 parallels the testing that takes place during implementation of any
system. From a managerial standpoint, the process of enhancing a system should not be treated differently
from the process of building a new system. Issues of risk and project management must be attended to, along
with justification of the investment and quality assurance and control. After the successful introduction of a
system enhancement, documentation must also be updated to reflect the change.
All the systems analysis and design techniques covered in this course can be applied to the maintenance of
information systems, though on a smaller scale. IS is driven by technology standards, and it must also apply
standards to the way system maintenance and enhancements are managed.
5.6 Legacy system issues
Learning objective
Relate the issues that surround legacy systems to the challenges of replacing them. (Level 1)
Required reading
Chapter 5, Section 5.2, Infrastructure Components (subsection on Consulting and System
Integration Services), and Section 5.4, Contemporary Software Platform Trends
LEVEL 1
Legacy systems and the many problems they present to organizations have become one of the key
management challenges facing large and small IS departments. Legacy systems refer to the older systems in
place in companies, often but not restricted to mainframe environments, and using older procedural
programming languages such as COBOL (Common Business Oriented Language) and RPG (Report Program
Generator).
Legacy system problems
When systems run for a long time in an organization, a number of things happen. Sometimes maintenance and
development end. The focus switches to new developments, and older working systems are simply left to run
on their own and print their reports. When new systems are purchased or developed, older systems are often
connected to them data-wise with makeshift programs (in-house or third-party middleware). The problem is
that the complexity of maintaining connections between legacy and new systems grows over time as users
continue to rely on data from both the old and new systems. Also, the vendor of the legacy system could have
changed hands, gone out of business, or simply stopped supporting an old product. But its software filled a
niche in your business, one you always meant to replace but never got around to replacing. When it finally
comes time to replace the legacy system, because it is preventing your organization from adopting new
strategic software, or because IS simply cannot support it any longer, you can find that replacing it is more
expensive than anticipated.
Obsolete programming language
First, the languages in which legacy systems are written often go out of fashion. If this occurs, it is harder to
find staff who can work in that language, which makes modifications (either maintenance or enhancements)
difficult. The laws of supply and demand apply to programming languages just as they do to consumer
products and services: when there are fewer programmers to support a language, the support costs for the
legacy system written in that language increase.
Poor documentation
Second, as modifications are made, it is not uncommon for the documentation of changes to be poor.
Developers tend not to like documenting their work. It is difficult to view one's own writing from a reader's
perspective and see what needs further documentation. In the same way that many writers are puzzled by
what their readers don't immediately understand from their text, developers often feel their programs are
self-evident and don't need further documentation.
Anyone who has ever gone back to code they previously wrote knows it can be difficult to figure out what was
intended. Since developers don't always document systems as thoroughly or clearly as they should, it
becomes increasingly difficult to understand a system's logic as time passes. When this occurs, the risk and
cost of upgrading a system increase dramatically, because pre-development time must be spent carefully
stepping through each program to understand its influence.
Lost program documentation
Third, changes in the hardware and software environment of the firm can make accessing program
documentation difficult. Suppose the project repository for a system is maintained on a proprietary workstation,
using specific software designed solely for this task. If the system is never replaced, eventually the
workstation will wear out, and a new one, likely with a new operating system, will be required. The newer
operating system may not support the repository software, rendering the documentation effectively lost. New
systems and software are only backward compatible to a finite degree.
Often, companies with legacy systems face the possibility of little or no system documentation. When replacing
or revising systems in this environment, companies will incur more costs because of the increased amount of
time and resources needed to rewrite any programs without documentation. Firms that lack the source code
(for older systems, the base code where changes are made and then compiled into system programs) may be
completely unable to update their systems. For purchased systems, this is why vendor contracts need to
include a statement about holding source code in escrow, to help protect the customer in case the vendor is no
longer in business.
The challenge of replacing legacy systems
It is the responsibility of IS to ensure that systems do not get out of date and are consistently documented.
Organizations should develop operating standards to replace systems as they age to avoid the risks of having
inadequately documented systems that no one knows how to maintain. This makes sense. The issue is that in
many cases, these legacy systems are still running well. They do the tasks that need to be done, and users
know how to operate them. They become forgotten until they pose a problem, at which point replacing them
(especially given the problems of missing documentation) becomes a risky proposition.
Given what you know about the failure rates of IS projects, do you want to take a system that works well and
try to replace it? What if the new system doesn't work as well? Even if it does, what will be the cost-benefit
trade-off? In the simplest case, where you do a one-to-one conversion of the old system to the new (not
changing functionality, just updating the environment), there are no real benefits in terms of system
performance from the user's perspective. But strategically, with the ability to integrate with new technologies
and platforms,
the costs may be worth it.
The challenge of justifying legacy system replacements, and the risk associated with taking something that
works and replacing it with an unknown quantity, are the reasons why so many legacy systems persist in
organizations.
Rather than address the legacy issue, some organizations opt to install screen-scraping front ends to old
systems, which address none of the underlying legacy issues but provide a clean face for the user interface.
While such approaches do provide visual benefits, they add to the complexity of the application environment.
Performance will not be as good as with a redesigned application, and security may be a consideration. When
errors occur, there will be additional sources to consider. The use of a new front end does not result in better
integration; it is a face-lift and not a redesign, and will not provide the long-run flexibility that comes with more
up-to-date platforms.
Implications for management
Managers, especially those outside of IS, must recognize the need for some ongoing investment in keeping
information systems current. The key is to view the IS investment as a whole, and not focus on their own
particular area of interest. IS solutions are organizational strategies, not departmental. Enterprise systems
address the company position in relation to its customers, and all managers must reflect on this need to
maintain flexibility in an increasingly fast-changing IS landscape. Getting too far behind the curve in terms of
applications will increase maintenance costs, decrease quality, and may eventually result in the need to adapt a
system under crisis conditions. Finding the level of investment that keeps you current without overspending on
too-frequent changes is not easy. Working through this task is best dealt with as part of the ongoing IS
planning process (described in Module 2).
5.7 Measuring system benefits
Learning objective
Assess the possible benefits of a new system and evaluate some strategies for measuring those
benefits. (Level 1)
No required reading
LEVEL 1
Module 1 presented the variety of strategic benefits that can be obtained from implementing new information
systems. Using Porter's five forces model and Barney's resource-based view as guides, information systems can
decrease costs, increase customer lock-in, expand markets, and create new products.
Consider how these benefits are derived. What are the things that allow a company to reduce costs? Process
improvements are one of the main areas. When information technology is applied to manual processes within a
company, tremendous savings can be realized. From shop floor automation, to integrated systems where all
information is accessible regardless of geographical location, to real-time data collection that feeds decision
support systems, information systems are necessary to an organization's growth.
This topic considers some of the specific benefits derived from new information systems and the importance of
finding ways to measure those benefits.
Assessing the benefits of a system
Information systems cross boundaries and industries. There are information systems in business, healthcare,
education, government, retail the list goes on. Not all systems directly relate to one another, and where
industries once saw walls or silos separating systems, today with technologies such as cloud computing, big
data, predictive analytics, and the Internet of Things, we are beginning to see benefits not measured by
independence but by connections. Tangible and intangible benefits are still valid for financial measures, but
connectivity is more concerned with structured versus unstructured data. Connectivity between systems, across
platforms, and between devices, cars, smartphones, homes, networks, and so on, all provide a glimpse into the
benefits of analytics. Not only can they help predict consumer behaviour, but they can also determine patterns
of behaviours fed by our business, social, and personal networks. Information is more than a report of what
has already happened; it can amalgamate various data sets to predict what will.
The greatest benefits of such information systems are still in the future (although technology is quickly moving
us toward this future); today, the advantages are a little more modest. In manufacturing, retail, and many
other businesses, enterprise systems still run the back offices and produce the financial statements. Data both
within a company and outside its walls, whether produced through human transactions or the transactions of
Things, represents a new model of analysis. Web analytics, business intelligence and digital analytics will
combine information for decision making beyond the single area analysis we have enjoyed until now.
Practically, this is still hard to quantify, but we are already feeling its impact in targeted web advertising,
purchase recommendations based on trends and patterns, and in digital/social footprints that can combine all
our digital movements.
The need to run the business will always exist. Enterprise systems will continue to collect and tabulate the daily
needs of the business, and the benefits of such concrete systems cannot be overstated.
In a 1999 article entitled "ERP Users Say Payback Is Passé," Rick Mullin wrote about the benefits of
implementing new ERP systems. Exhibit 5.7-1 shows a long list of these benefits, as presented by Mullin. These
are the kinds of benefits that can be achieved through implementing information systems more broadly. The
list is not meant to be exhaustive; nonetheless, it shows the kinds of benefits to expect when assessing a
large-scale enterprise information system.
Exhibit 5.7-1: Benefits of implementing ERP systems (and enterprise systems in general)
Inventory reduction
Personnel reduction
Productivity improvement
Order management improvement
Financial close cycle reduction
Procurement cost reductions
Cash management improvements
Revenue increases
Transportation/logistics cost reduction
Maintenance reduction
On-time delivery improvements
Information visibility
Improved processes
Customer responsiveness
Cost reduction
Integration
Standardization
Flexibility
Globalization
Improved business performance
Supply chain management
Source: Rick Mullin, "ERP Users Say Payback Is Passé," Chemical Week (Feb. 24, 1999).
This list remains applicable today and represents many of the reasons companies invest so heavily in
information systems.
However, the difficulty with all predicted benefits of a system is in how they are measured. How do we assess
and identify the revenue increases that accrue from a new system? In Module 2, you learned about the
difficulty of estimating the indirect benefits expected from an IS. But if no attempt is made to determine the
effects of the new system, it will never be possible to establish the system's importance to the
organization through facts rather than anecdotes.
Measuring the benefits of a new system
The critical issue for managers is to recognize the importance of measuring benefits, the possible strategies for
measuring benefits, and the need to look for multiple measures. Planning what will be measured makes it
much easier to carry out this task. Part of any systems development plan must be the development of concrete
ways to measure the benefits of the new system. Large system implementations typically carry large entry
and infrastructure costs, and the ROI used as a basis to justify the initial investment must be verified
before, during, and after implementation to ensure that the investment remains sound.
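As a simple sketch of this verification, realized ROI can be recomputed after go-live and compared against the planning-stage estimate. All figures and names below are hypothetical, chosen only to illustrate the comparison:

```python
def roi(total_benefits, total_costs):
    """Return on investment, expressed as a percentage."""
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical figures: the estimate used to justify the project versus
# the values actually measured after implementation.
estimated = roi(total_benefits=1_500_000, total_costs=1_000_000)
realized = roi(total_benefits=1_200_000, total_costs=1_100_000)

print(f"Estimated ROI: {estimated:.1f}%")
print(f"Realized ROI:  {realized:.1f}%")
print(f"Shortfall:     {estimated - realized:.1f} percentage points")
```

Repeating this comparison before, during, and after implementation flags early whether the investment is still tracking to its original justification.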
As a measurement example, if a new system is justified at the planning stage based on its ability to reduce
inventory, then a means of assessing whether inventory has been reduced through the use of the system is
essential. Some of the information necessary for this assessment will come from financial statements. Balance
sheets record inventory on hand, and you would expect to see inventory reduced following the introduction of
a new system. This is, however, not a perfect measure. Changes in other business conditions may have made it
easier or harder to reduce inventory. Attributing the benefits of such gross measures to a specific systems
implementation is problematic, because in business, changes rarely, if ever, occur in a vacuum.
Here are some ways that inventory reduction benefits could be assessed and made directly attributable to the
system:
Use the parallel implementation process
One way of assessing benefits is to use a parallel implementation process as a way of gauging how
decisions would differ between the two systems. In the inventory example, the change to the new system
might result in a change to ordering practices due to shorter waiting times in replenishing stock. The
ordering decisions that would have been made by the old system can be compared to those made by the
new system, and the differences can be measured. Because the two systems act on the same data, the
results cannot be attributed to changes in business conditions. This process clearly shows how the two
systems would diverge.
Obtain information from managers
Another much less systematic means of assessing benefits is to interview inventory managers, looking for
specific recollections of ways in which the system contributed to lowering inventory. While not systematic, such
anecdotal evidence may provide information about things to look for in more systematic inquiries that would
not have been thought of otherwise.
Obtain customer response and observe customer behaviour
One benefit of ERP systems is customer responsiveness. Customer responsiveness could be measured in
terms of time to ship orders (in hours or days), number of customer complaints, repeat business, and a range
of other activities. For each benefit, there may be multiple metrics that attest to whether that goal is being
achieved.
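As an illustration, two such metrics (time to ship and complaint rate) could be computed from order records along the following lines; the records and monthly counts are hypothetical:

```python
from datetime import datetime

# Hypothetical order records: (order placed, order shipped)
orders = [
    (datetime(2024, 3, 1, 9), datetime(2024, 3, 1, 17)),
    (datetime(2024, 3, 2, 10), datetime(2024, 3, 3, 12)),
    (datetime(2024, 3, 2, 14), datetime(2024, 3, 4, 9)),
]

# Elapsed hours between order placement and shipment, per order
hours_to_ship = [(shipped - placed).total_seconds() / 3600
                 for placed, shipped in orders]
avg_hours = sum(hours_to_ship) / len(hours_to_ship)

complaints, shipments = 4, 250  # hypothetical monthly counts
complaint_rate = complaints / shipments * 100

print(f"Average time to ship: {avg_hours:.1f} hours")
print(f"Complaint rate: {complaint_rate:.1f}%")
```

Tracking these figures before and after go-live gives a concrete basis for judging whether customer responsiveness has actually improved.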
Intangible benefits
Recall from Topic 2.4 the issues involved in the estimation of intangible benefits, and review the means by
which intangible benefits can be assessed.
There are undoubtedly other ways that benefits can be assessed. New hardware systems are environmentally
greener: blade servers and modern desktop computers require less energy and less physical space, a
measurable benefit. Cloud services also require less internal hardware investment and, therefore, less
energy consumption.
More efficient communications are an intangible benefit, but a highly important one. A company that introduces
cell phones or Integrated Digital Enhanced Network (iDEN) telecommunications where none existed before will
immediately experience the benefits of mobile communications between its employees. Where problems around
finding people to answer critical questions would have resulted in delays, mobile communications now allow for
immediate answers and highlight the importance of connectivity on a local scale.
Assessing the intangible benefits of customer service delivered through social media connections such as
LinkedIn, Facebook, and Twitter is also difficult. Aside from the ability to communicate directly with
customers on a one-to-one and a one-to-many basis, social media provides insights into a company's social
standing with its customers. For example, using tools such as TweetDeck or Seesmic, a company can search
Twitter for mentions of its name or brand, good or bad, and respond accordingly. With Facebook chat,
companies can also connect with customers and provide support through direct communications and postings.
These kinds of communications allow companies to manage the perception of themselves (their branding) in
ways that were never available before.
The benefits described in this topic show the range of highly systematic to more informal means of measuring
that can be considered for IS investments.
Module 5 self-test
1. Explain why the testing stage of systems development is so important. Name and describe the three
stages of testing for an information system.
Source: Kenneth C. Laudon, Jane P. Laudon, and Mary Elizabeth Brabston, Management
Information Systems: Managing the Digital Firm, Fifth Canadian Edition (Toronto: Pearson
Canada, 2011), page 305. Reproduced with permission from Pearson Canada.
Solution
2. Describe the four approaches to systems conversion. Under what circumstances would you
recommend or not recommend each approach?
Solution
3. Identify the characteristics of system quality and explain the importance of each.
Solution
4. What is the difference between quality control (QC) and quality assurance (QA)? Why is QA
arguably more important than QC?
Solution
5. Systems maintenance and enhancement consume the majority of IS development budgets. What
are the reasons for this, and do you think it is appropriate?
Solution
6. You are discussing the implementation and operation plan for a new order processing system
with a senior manager. As part of your plan, you have included funds to develop measures of the
effectiveness of the new system. The senior manager argues with you about these funds:
Why on earth should I pay to count the benefits after the fact? The project plan estimated the
benefits, and I used that information to make the decision to go ahead with this system. All I will
do with your measurement plan is spend more money figuring out whether the estimates were
right or wrong, and it's too late now to change things if the figures are incorrect. I could put that
money to better use elsewhere in the organization.
How would you respond to this manager?
Solution
7. Work through the following scenario on conversion options that will test your understanding of
conversion strategies.
Module 5 self-test solution
Question 1 solution
Testing is critical to the success of a system because it is the only way to ascertain whether the system will
produce the right results. The three stages of information system testing are as follows:
Unit testing: refers to separately testing or checking the individual programs.
System testing: the entire system is tested to determine whether program modules are
interacting as planned.
Acceptance testing: the system undergoes final certification by end users to ensure it meets
established requirements and that it's ready for installation.
Source: Adapted from Dale Foster, Instructor's Manual to accompany Management Information Systems:
Managing the Digital Firm, Fifth Canadian edition, Pearson Canada, 2011, Chapter 9, page 343. Reproduced
with the permission of Pearson Canada.
Module 5 self-test solution
Question 2 solution
Four approaches to system conversion

Parallel: The old and new systems are run simultaneously for a period of time.
Recommended:
- when the testing of old versus new outputs is critical, if this cannot be done in other ways
- when the likelihood of a failure of the new system is high and continuance of the system is critical
Not recommended:
- when the amount of work to duplicate processes would overwhelm staff

Pilot: The new system is run in a specific area of the business for a short period of time as a test,
then rolled out to the rest of the organization.
Recommended:
- when you want to test implementation plans
- when you want a means to measure benefits (The pilot can be used as a research site to assess
changes in comparison to the rest of the organization.)
Not recommended:
- when integration between sites is high (This makes it difficult to find a site that can operate
differently during the pilot.)

Phased: Different parts of the new system are rolled out at different points in time (in phases).
Recommended:
- when the system itself is naturally divided into modules
Not recommended:
- when you want to minimize the time for implementation (The phased approach carries on the change
for a long period of time.)

Direct cutover: The old system is turned off and the new system turned on at the same time.
Recommended:
- when infrastructure differences make it impossible to run both together (for example, different
hardware)
- for a small system where it would be easy to roll back if there were problems
- when time is a critical factor in the implementation
- when you have a fixed or immoveable deadline
Not recommended:
- when implementation plans need to be developed and tested out (a pilot should be used instead)
Module 5 self-test solution
Question 3 solution
The characteristics of system quality and the importance of each are as follows:
Completeness: If the system does not do everything it is supposed to (it lacks functionality), then
it is not meeting its objectives.
Correctness: If the system provides wrong information, bad decision making will result.
Reliability: It is not sufficient for the system to work "most of the time." It must be able to
handle peak loads as well.
Consistency: If processes are handled in different ways at different times, information outputs
cannot be compared and support for decision making will be poor.
Efficiency: Using computer resources efficiently means we are not paying for things we don't
need.
Integrity: Being able to control access to the software and data is important to ensure that we
are meeting our societal responsibilities to protect privacy and that we are safeguarding an
important organizational resource.
Testability: Without the ability to prove the system is functioning correctly, the other
characteristics cannot be assessed.
User-friendliness: Systems are only effective when they are used well; user-friendliness is a key
driver of this.
Maintainability: Business needs change over time; if a system cannot be effectively maintained, it
will have limited usefulness to the firm.
Scalability: The system is able to expand automatically to accept more users or throughput
without intervention, and with little to no effect on user performance.
Module 5 self-test solution
Question 4 solution
Quality control is concerned with finding and correcting problems that have occurred. Quality assurance (QA) is
an ongoing process of analyzing the environment to determine the causes of problems and altering the
environment to minimize future problems. Because QA is concerned with the overall development environment
and aims to prevent problems before they occur, it can be argued to be more important.
Module 5 self-test solution
Question 5 solution
Over the life of a system, business needs will change; the system must be changed to deal with
them.
Testing is imperfect (by definition); maintenance is necessary to address problems when they are
found.
Resource limitations sometimes preclude doing everything you would like at one time, creating
requests for enhancement.
Maintenance is often an urgent requirement: if the system breaks, it is more important to get it
up and running again, which can take away from the resources available for development.
Users learn about the system and discover new possibilities, creating requests for enhancement.
System quality is not always good; when quality is poor it will be necessary to spend more money
on maintenance.
This is not an unreasonable state of affairs. Only the last reason above relates to a failure of IS to do a good
job (which would mean that maintenance could be avoided). For the long-term health of the organization,
however, it is important to ensure that maintenance activities, and the need to direct resources there, do not
preclude new development. For this reason, it is often recommended that maintenance and development have
separate budgets, so that development is not overlooked. However, in the case of a purchased enterprise
system, development is the responsibility of the vendor; the organization must ensure that it remains flexible in
its system choices to easily adapt to any vendor development initiatives.
Module 5 self-test solution
Question 6 solution
You should disagree with this manager's view, for two principal reasons:
1. Measuring the benefits provides essential data to guide maintenance and enhancement. If a
system is supposed to decrease inventory, for example, and does not, then perhaps some aspect
of the system is not working well and needs to be adjusted. Or perhaps users are not sufficiently
trained to generate the benefits from the system. Measuring the benefits provides diagnostics to
guide ongoing actions to attain them.
2. Measuring the benefits makes it possible to learn about the process by which we estimate system
benefits and thus make better decisions in the future. Too many estimates of benefits (especially
the more intangible benefits) are based on guesswork. However, if we stop to assess the
outcomes, we would have more data on which to base future estimates. This learning will result
in better decisions about what systems to adopt in the future.
Judgment Call: Conversion Strategies
You are a business analyst for Dynaheir National Bank. Over the past several years, client service staff
have become increasingly vocal about not being able to reverse charges to client accounts without the
branch manager's authorization. The bank has issued a new policy that authorizes all client service staff
to refund any bank charge of less than $5 if the staff member agrees that the fee was charged in
error. The new policy goes into effect on August 1, 20XX, about two months from today, and the IS
department suggests that the reprogramming requirements of the system will be relatively minor. During a
meeting with senior finance and administration managers, you are asked what conversion strategy you
would recommend.

Options

What conversion strategy would you recommend?
a. I think that we should implement the system change as soon as the programming changes are
ready. For the first month, however, the manual authorization of all refunds by the branch
manager should be continued, to allow the system change to be monitored.
b. I think it's best that we only implement the system change at one or two branches beginning on
August 1, 20XX, to ensure that the system is functioning properly. After a satisfactory testing
period, the system change could be implemented nationally.
c. I think that we should first configure the system change to allow all client service staff to process
fee refunds for those fee disputes that are determined to be caused by a clear error on the
bank's part. After a successful period of testing, the refund authorization of the client service staff
could then be broadened to more subjective cases.
d. I think that we should have the system change implementation ready in advance of August 1,
20XX. On that day, we would allow all client service staff to have full authorization to process
refunds in accordance with the new policy. Any testing of the system would be done using
dummy data on a closed system before the conversion takes place.



If user responds a.
In response to your recommendation, the bank's CFO says:
But that's going to put a heavy strain on the staff who need to both complete the paperwork for
manager authorization, and process the refund electronically. I'd be inclined to simply switch over to
the new system on August 1, 20XX, and be done with it.

How do you respond to the CFO's comments:
1. Although it's some extra work for the staff and managers for the first month, the level of risk
created by this system change is too high for us to switch over completely right away. We should
run both systems in parallel for the first month and see what happens.
2. That's a good point. Directly cutting over to the new system probably makes sense given that the
bank policy has changed and the risk is relatively low. If something doesn't work, we can always
continue using the manual system, since the paperwork will still be in the branch and the staff are
familiar with it.

If user responds 1 or 2:
Suggested solution:
Under the circumstances, given that the policy is changing on a particular date and the relative
risk to the bank is low (because the transactions are initiated by trusted staff members and are for
low dollar amounts, a robust system of internal controls likely already exists in the bank, the IS
department suggests that the reprogramming requirements are minor, and the manual system
can be used as an immediate back-up if needed), a direct cutover conversion strategy is
probably the most appropriate. This is, however, not the only solution, and arguments could be
made for alternate strategies. Parallel conversion is probably the second choice.



If user responds b.
In response to your recommendation, the bank's CFO says:
But the policy goes into effect on August 1, 20XX. If only one or two branches are up and running
this way, it will be a public relations nightmare. Surely we're more confident in our systems
implementation capabilities than to have to do that? Why don't we simply switch over to the new
system on August 1, 20XX and be done with it?

How do you respond to the CFO's comments:
1. It's not a matter of public relations; it's a matter of risk. Although it's not consistent
with the new policy right away, it's more important that the system not fail. I still
think we should run a pilot at one or two branches first.
2. That's true. The IS department did suggest that this would be a relatively straightforward job for
them, and when the new policy is implemented, our staff will expect it to be in effect at all
branches. A direct cutover conversion strategy is a good choice.

If user responds 1 or 2:
Suggested solution:
Under the circumstances, given that the policy is changing on a particular date and the relative
risk to the bank is low (because the transactions are initiated by trusted staff members and are for
low dollar amounts, a robust system of internal controls likely already exists in the bank, the IS
department suggests that the reprogramming requirements are minor, and the manual system
can be used as an immediate back-up if needed), a direct cutover conversion strategy is
probably the most appropriate. This is, however, not the only solution, and arguments could be
made for alternate strategies. Parallel conversion is probably the second choice.




If user responds c.
In response to your recommendation, the bank's CFO says:
First of all, I have trouble defining what a "clear error" is. Secondly, the policy in place doesn't make
any distinction about this. The purpose of the policy is to empower our client service staff to make
these decisions. I think it's best to simply switch over to the new system on August 1, 20XX and be
done with it.

How do you respond to the CFO's comments:
1. Good point. It might be optimistic to assume that we can define a "clear error," and you're right
that this isn't the intent of the policy anyway. We should be able to cut over directly to the new
system after doing the necessary testing before it goes live.
2. A direct switch over to the new system is too risky for this change. I think we should sit down with
some branch managers and try to determine some errors that would certainly not need their
authorization to refund. Even though it's not in the policy, it's safer than simply allowing the staff
to suddenly start processing refunds.

If user responds 1 or 2:
Suggested solution:
Under the circumstances, given that the policy is changing on a particular date and the relative
risk to the bank is low (because the transactions are initiated by trusted staff members and are for
low dollar amounts, a robust system of internal controls likely already exists in the bank, the IS
department suggests that the reprogramming requirements are minor, and the manual system
can be used as an immediate back-up if needed), a direct cutover conversion strategy is
probably the most appropriate. This is, however, not the only solution, and arguments could be
made for alternate strategies. Parallel conversion is probably the second choice.



If user responds d.
In response to your recommendation, the bank's CFO says:
That sounds very risky to me. If the system doesn't work properly, the bank will look bad and our
branch staff will be upset. I think we should consider running a pilot in one or two branches until
adequate testing is done, just to be sure.

How do you respond to the CFO's comments:
1. I disagree with running a pilot because once the policy is in effect, the system should be
consistent across all of the branches. The IS department suggested that the change is relatively
straightforward, and if something doesn't work, we could always continue using the manual
system while the situation is corrected. I still think that a direct cutover conversion strategy is the
best option.
2. That's true; the potential impact if the direct cutover doesn't go well would be quite significant.
We probably should run a pilot in one or two branches first to get adequate testing done.

If user responds 1 or 2:
Suggested solution:
Under the circumstances, given that the policy is changing on a particular date and the relative
risk to the bank is low (because the transactions are initiated by trusted staff members and are for
low dollar amounts, a robust system of internal controls likely already exists in the bank, the IS
department suggests that the reprogramming requirements are minor, and the manual system
can be used as an immediate back-up if needed), a direct cutover conversion strategy is
probably the most appropriate. This is, however, not the only solution, and arguments could be
made for alternate strategies. Parallel conversion is probably the second choice.


Module 5 summary
Systems implementation, testing, and support
Recommend and defend a strategy for converting from an existing system to a new system,
based on situational factors.
Different users of the system require different documents: there is user documentation and
system documentation. For a purchased system, much of the user documentation will already be
included.
User documentation:
Functional description: an introductory document that describes what the
system does
Installation document: how to install the system hardware and
software
Introductory manual: how to use the system
System reference guide: the types of error messages that may arise and
how to correct them
System documentation:
Requirements document: describes the system in logically modelled
terms
Systems architecture: how the system modules and programs connect
and relate
Source code list: programs, tables, and field listings
Systems conversion and installation is the process of upgrading or replacing the existing system
with the new system.
There are four approaches for system conversion: parallel, pilot, phased, and direct cutover. See
Exhibit 5.1-1.
In parallel conversion the old system continues to be used at the same time the new system is
introduced. Both systems run in parallel for a predetermined amount of time. In a parallel
conversion, people use both systems but gradually increase the amount of time that they use the
new system until it is in use the majority of the time. Then the old system is discontinued.
In a pilot conversion, the new system is introduced in a single unit or location for a set period of
time before it is installed in other parts of the organization. The pilot conversion allows an
organization to test out a new system in a controlled way.
During phased conversion, a new information system is broken down into smaller functional
components that can be brought into operation one at a time, with each one adding more
improvements and functionality to the overall system. A phased installation will be gradual,
incremental, and easier to manage than the other installation approaches.
In a direct cutover conversion, the old system is discarded and the new system takes over all at
once. It is essentially turning the old system off and turning the new system on.
Assess the advantages and disadvantages of the four different conversion strategies.
Parallel conversion
Parallel conversion allows for a comparison between the new system and the old so
that you can benchmark and quantify its effectiveness.
It minimizes the risks of operational and data processing failures because the old
system continues to function along with the new system.
The duplication of effort associated with running two systems is very costly.
Some users will continue to insist on access to the old system. This will delay the
benefits of the new system.
Parallel is considered the least risky conversion approach.
Pilot conversion
Pilot conversion allows an organization to test out a new system in a controlled
way.
It limits the amount of disruption and harm a new system can produce in an
organization.
The success of a pilot installation can be used to overcome user resistance and sell
the new system to the rest of the business.
If you select the easiest site with very positive conditions for adoption of the
system, you may hide some potential problems. The inverse is also true.
A pilot conversion creates additional burdens for the IS staff in the maintenance
and support of two different systems.
This approach also runs the risk of delaying the full implementation of a new
system because the pilot is constantly being improved.
The pilot approach to systems conversion is a middle-of-the-road method designed
to minimize risk.
Phased conversion
Phased conversion is easier to manage than the other installation approaches. A
phased approach will keep the risk fairly low by spreading the conversion out over
time.
With a phased conversion, a completion point can be difficult to define since it
takes place over such a long period of time.
The old and the new systems must be able to work together seamlessly.
With a phased approach, it is easier to manage the risks and costs in the short
term, but it can become a never-ending project.
Direct-cutover conversion
Direct cutover can be the least expensive of the different methods and can occur in
the shortest time.
With this approach, the users and management have a high interest in making the
new system work because by design there is no turning back. A direct-cutover
conversion may be the only option if the old and new systems cannot co-exist in
any form.
The greatest risk is the impact that errors and failures would have on the
organization.
The riskiest strategy for new systems installation, direct-cutover conversion can be
low in cost and the benefits of the new system can be realized without delay.
Distinguish between the different methods of testing in systems implementation and formulate
strategies to mitigate the limitations of testing.
There are two basic phases of testing: software testing and acceptance testing.
Software testing includes unit testing, integration testing, and system testing.
With unit testing, the individual modules of the system are tested for any potential
errors in the code.
Integration testing is used to check interoperability.
The goal of system testing is to see how all of the components of the new systems
will work under various conditions, including normal loads and peak loads.
Acceptance testing involves the actual users of the completed system and how well it meets their
expectations and requirements. There are two phases: alpha testing, where simulated data is
used, and beta testing, where the actual data of the completed system is used. This is the final
stage before installation.
Acceptance testing includes
recovery testing, which examines how the system responds when it is forced to
fail
security testing, which focuses on whether the security policies have been
implemented as intended in the final system
stress testing, which tries to break the system by deliberately not following the
rules and procedures as laid out; stress testing also considers the performance of
the system under heavy usage conditions
performance testing, which examines the use of the system in different
environments
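Security testing, for instance, checks each stated policy against the system's actual behaviour. The sketch below uses an invented role-based access function and policy table (not from the text) to show how policy statements become executable test cases.

```python
# Hypothetical access-control policy: which actions each role may perform.
ROLE_PERMISSIONS = {
    "clerk": {"read"},
    "manager": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def is_allowed(role, action):
    """Return True only if the role's policy grants the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Security test cases: each tuple is (role, action, expected result).
cases = [
    ("clerk", "read", True),
    ("clerk", "delete", False),   # policy says clerks must NOT delete
    ("manager", "write", True),
    ("intruder", "read", False),  # unknown roles get no access at all
]

for role, action, expected in cases:
    assert is_allowed(role, action) == expected, (role, action)
print("all security policy checks passed")
```

A failing assertion would indicate that a policy was not implemented as intended in the final system.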
There are limitations to testing, including the fact that it is impossible to test a program for every
conceivable condition. See Example 5.2-1.
No matter how exhaustive your testing, you cannot guarantee a bug-free system.
Testing can be a never-ending proposition, so any testing program needs a defined "good
enough" point.
Identify the different characteristics of systems quality.
Completeness: The system meets all the performance specifications and provides all the functions
specified.
Correctness: The system performs the required functions accurately.
Reliability: The system can handle normal and peak loads.
Consistency: Operations are carried out in the same manner each time they are requested.
Efficiency: The amount of computer resources required by the system is reasonable.
Integrity: Access to software or data by unauthorized persons can be controlled.
Testability: The system is designed so that it can be tested to ensure it performs the intended
functions.
User-friendliness: The system is easy to use, and users can understand interactive dialogue.
Maintainability: Programming is structured and well documented, changes can be made easily,
and errors can be corrected with relative ease.
Scalability: The system can automatically expand to accept more users or higher throughput
without intervention, and with little to no effect on user performance.
Distinguish between quality assurance and quality control.
Quality assurance is concerned with the quality of the environment in which products
(information systems) are developed.
Quality control is concerned with the quality of the products (information systems) themselves.
Explain the basic tasks of systems maintenance.
Systems maintenance includes corrections to things such as program bugs, design bugs,
documentation errors, operating procedure errors, and test data errors.
Systems maintenance can consume about 80% of the systems development budget.
Systems maintenance consists of five activities:
Define and validate the problem. The exact nature of the problem must be
identified.
Benchmark the programs and applications. The system should be tested to establish
a benchmark.
Understand the application and its programs. You must study the programs to
understand how they interact.
Edit and test the programs. Once you fully understand the corrective actions
required, changes are made to the appropriate programs.
Update documentation. It is important to document your changes so that someone
else can maintain the system in the future. Documentation provides a record of the
issues raised and what changes were made to the system.
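The benchmarking step above can be sketched in code. This is a minimal illustration, assuming a hypothetical transaction-posting routine: the maintainer records a timing baseline before editing the programs, so that the same measurement after the change shows whether performance held or improved.

```python
import time

def benchmark(func, *args, repeats=5):
    """Time repeated runs of a routine and return the best wall-clock
    time, giving a baseline to compare against after maintenance edits."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical routine under maintenance: totalling a batch of transactions.
def post_transactions(amounts):
    return sum(amounts)

baseline = benchmark(post_transactions, list(range(100_000)))
print(f"baseline: {baseline:.6f} s")  # record this before making changes
```

After the edit-and-test activity, rerunning the same benchmark against the modified program completes the comparison, and the before/after figures belong in the updated documentation.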
Defend the need for sufficient systems maintenance.
Sufficient systems maintenance is needed to debug and correct errors that were not detected
during testing, and to fine-tune systems and improve efficiency.
Maintenance activities provide ongoing support for users and can help them adapt to changing
needs.
Assess a non-IT manager's role in evaluating systems enhancements.
Assess the implications especially around business rules.
Anticipate changes.
Make cost-benefit assessments.
Relate the issues that surround legacy systems to the challenges of replacing them.
Problems with legacy systems include
obsolete programming languages
poor system documentation
lost program documentation
risk of replacing a working system with a new system
interfaces from new systems to legacy systems
Assess the possible benefits of a new system and evaluate some strategies for measuring
those benefits.
inventory reduction
personnel reduction
productivity improvement
order management improvement
financial close cycle reduction
IT cost reduction
procurement cost reductions
cash management improvements
revenue increases
transportation/logistics cost reduction
maintenance reduction
on-time delivery improvements
information visibility
process improvement
customer responsiveness
cost reduction
integration
standardization
flexibility
globalization
business performance
supply chain management
Some strategies for identifying benefits include
Parallel implementation
Obtaining information from managers
Obtaining customer response and observing customer behaviour
Intangible benefits