We are all involved in decision making in our day-to-day life, whether in the office, at
home or elsewhere. We speak well of those who can take quick decisions; we call them
efficient and effective. One may feel it is futile to discuss this topic, arguing that this trait is
an innate part of our character and that nothing much can be done about it. But people are trainable,
aren't they?
We see some people taking decisions and moving forward while others stay put with the
current situation due to indecision. Some make bold decisions, take risks and jump far ahead and
out of reach of others, while some exercise extreme caution and make little progress. Is this a
quality that is inherent in a person, or is it one that a person develops over time? We can perhaps
leave this matter to be debated by philosophers and psychologists, and move ahead to discuss
the consequences of poor decision making.
Taking a stand
In our day-to-day professional life we are confronted with various situations where we are
required to take a stand. Our decisions may relate to choosing a technology solution or a
partner, fixing something that has gone wrong in the datacenter, resolving unreasonable user
demands and many such things, but these decision points are a critical and essential part of our
work. If we keep taking decisions, work moves on and we embark on the next important matter.
However, if we are stuck in indecision, matters come to a standstill and our
productivity and efficiency suffer.
I am in no way hinting at hurried decision making or taking matters casually. Decisions need to
be taken with utmost care; they need to be well considered and sound. The process may take
longer in cases which require analysis and research, but it is important to take a call after a
reasonable time has elapsed.
Not all decisions we take may be correct and perfect; we may make mistakes at times, and that is
human. A corollary would be to say that we would never make a mistake if we never got into
decision making at all. I am reminded of one of our Prime Ministers, who had adopted a unique
approach of postponing decisions, hoping that circumstances would take care of the problem on
their own. To quote John Henry Newman, "Nothing would be done at all if a man waited till he
could do it so well that no one could find fault with it."
Effects of indecision
The consequences of indecision or delayed decisions are enormous. First, you lose time; things
that need to be done today get done tomorrow or the day after. Any work stretched
over days leads to inefficiency and higher costs. If work does not proceed as scheduled, the
resources put on the job are simply wasted. Both our assistants and the affected end users
develop a wrong impression and lose confidence in us. By not acting on time we may lose out on an
opportunity to make use of a situation, or miss making a difference to the environment
when it was most essential.
On occasions when we stretch the time for making a decision, the situation changes and the
decision loses its relevance. Delay in decision could also alter your position from a winner to a
loser. I would like to share the example of my son who is a brilliant amateur chess player. He
makes well considered moves and often surprises opponents ranked higher than him, but loses a
few games due to time constraints. He considers all possibilities to choose the correct move and
hence consumes more time. The lesson he has to learn is that if he develops the habit of faster
decision making, he could win many more games and the championship. Rather than looking for
the perfect move, he can move faster with quicker decisions and not mind the rare
move that turns out to be incorrect.
Decisions are taken easily by people who have courage; as they proceed ahead, their
confidence grows. If they make mistakes, they consider these as opportunities to learn and move
on.
Introductory Definitions
[Figure: the basic communication model - Information Source → Transmitter → signal (subject to Noise) → Receiver → Destination]
Source: develops information in the form of a message.
Destination: accepts the delivered message.
Problem Definition
Data Collection / Gap Assessment (information is received from the IS in
this stage)
Monitoring System
reductionist by design
In MIS, the information is recognized as a major resource like capital and time. If this resource
has to be managed well, it calls upon the management to plan for it and control it, so that the
information becomes a vital resource for the system.
Quantity, content and context of information - how much information and exactly what
should it describe.
Nature of analysis and presentation - comprehensibility of information.
Availability of information - frequency, contemporariness, on-demand or routine,
periodic or occasional, one-time info or repetitive in nature and so on
Accuracy of information.
Reliability of information.
Security and Authentication of the system.
Planning for MIS
MIS design and development process has to address the following issues successfully:
There should be effective communication between the developers and users of the
system.
There should be synchronization in understanding of management, processes and IT
among the users as well as the developers.
Understanding of the information needs of managers from different functional areas
and combining these needs into a single integrated system.
Creating a unified MIS covering the entire organization will lead to a more economical,
faster and more integrated system; however, it will increase design complexity
manifold.
The MIS has to interact with a complex environment comprising all the other sub-
systems in the organization's overall information system. So it is extremely
necessary to understand and define the requirements of the MIS in the context of the
organization.
It should keep pace with changes in environment, changing demands of the customers
and growing competition.
It should utilize fast-developing IT capabilities in the best possible ways.
The cost and time of installing such advanced IT-based systems are high, so there should not
be a need for frequent and major modifications.
It should take care of not only the users i.e., the managers but also other stakeholders
like employees, customers and suppliers.
Once the organizational planning stage is over, the designer of the system should take the
following strategic decisions for the achievement of MIS goals and objectives:
Problem Definition
Feasibility Study
Systems Analysis
System Design
Detailed System Design
Implementation
Maintenance
Devices
Data center systems - It is the environment that provides processing, storage,
networking, management and the distribution of data within an enterprise.
Enterprise software - These are software systems like ERP, SCM, Human Resource
Management, etc. that fulfill the needs and objectives of organizations.
IT services - This refers to the implementation and management of quality IT services by IT
service providers through people, processes and information technology. It often includes
various process-improvement frameworks and methodologies such as Six Sigma, TQM, and
so on.
Telecom services
Purpose
Definition
test inputs
detailed specification of test procedure
details of expected outputs
Each sub-system and all of its components should be tested using various test procedures and
data to ensure that each component works as intended.
The testing must include the users of the system, both to identify errors and to gather feedback.
System Operation
Before the system is in operation, the following issues should be taken care of:
Once the system is fully operational, it should be maintained throughout its working life to
resolve any glitches or difficulties faced in operation, and minor modifications may be made to
overcome such situations.
Factors for Success and Failure
MIS development projects are high-risk, high-return projects. Following could be stated as
critical factors for success and failure in MIS development:
An information system architecture is a formal definition of the business processes and rules,
systems structure, technical framework, and product technologies for a business or organizational
information system.
The architecture should document: what data is stored; how the system functions; where
components are located; when activities and events occur in the system; and why the
system exists.
Information system architecture is a blueprint used to develop, implement and maintain the elements
in the organization.
Business architecture is a blueprint of the enterprise that provides a common understanding of the
organization and is used to align objectives and demands.
A system architecture is the conceptual model that defines the structure, behavior and other views
of a system. Here we set the conventions, rules and standards employed in the system.
A technical architecture defines all the technical requirements, such as operational and performance requirements.
Quantitative techniques may be defined as those techniques which provide the decision
maker with a systematic and powerful means of analysis based on quantitative data. They constitute a scientific method
employed for problem solving and decision making by management. With the help of quantitative
techniques, the decision maker is able to explore policies for attaining the predetermined objectives. In
short, quantitative techniques are indispensable in the decision-making process.
Classification of Quantitative Techniques:
There are different types of quantitative techniques. We can classify them into three
categories.
They are: mathematical quantitative techniques, statistical techniques and programming techniques.
A technique in which quantitative data are used along with the principles of mathematics is known as
mathematical quantitative techniques.
1. Permutation and Combination:
Permutation means arrangement of objects in a definite order. The number of arrangements depends
upon the total number of objects and the number of objects taken at a time for arrangement; it is
given by nPr = n!/(n-r)!.
Combination means selection or grouping of objects without considering their order. The number of
combinations is calculated using the formula nCr = n!/(r!(n-r)!).
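As an illustration, Python's standard math module provides both counts directly (math.perm and math.comb, available since Python 3.8); the numbers used here are arbitrary:

```python
from math import comb, factorial, perm

# Arrangements of 2 objects chosen from 5, order mattering: nPr = n!/(n-r)!
arrangements = perm(5, 2)   # 5!/3! = 20

# Selections of 2 objects from 5, order ignored: nCr = n!/(r!(n-r)!)
selections = comb(5, 2)     # 5!/(2! * 3!) = 10

# The two counts are related by nPr = nCr * r!
assert arrangements == selections * factorial(2)
```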
2. Set Theory:
Set theory is a modern mathematical device which solves various types of critical problems.
3.Matrix Algebra:
Matrix is an orderly arrangement of certain given numbers or symbols in rows and columns. It is a
mathematical device of finding out the results of different types of algebraic operations on the basis of
the relevant matrices.
4.Determinants:
It is a powerful device developed over the matrix algebra. This device is used for finding out values of
different variables connected with a number of simultaneous equations.
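One standard use of determinants is Cramer's rule for solving simultaneous equations; a minimal sketch in plain Python, with made-up coefficients:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

# Solve the pair of equations: 2x + 3y = 8 and x + 2y = 5.
D  = det2(2, 3, 1, 2)   # determinant of the coefficient matrix
Dx = det2(8, 3, 5, 2)   # x-column replaced by the constants
Dy = det2(2, 8, 1, 5)   # y-column replaced by the constants

x, y = Dx / D, Dy / D   # Cramer's rule gives x = 1, y = 2
```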
5. Differentiation:
It is a mathematical process of finding the change in the dependent variable with reference to a
small change in the independent variable.
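Numerically, the derivative can be approximated by a central difference over a small step; a sketch (the function and step size are arbitrary choices):

```python
def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# For f(x) = x**2 the exact derivative is 2x, so f'(3) should be close to 6.
slope = derivative(lambda x: x ** 2, 3.0)
```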
6. Integration:
Integration is the reverse process of differentiation. It is used to find a total or aggregate value, for
example the area under a curve.
7.Differential Equation:
It is a mathematical equation which involves the differential coefficients of the dependent variables.
Statistical techniques are those techniques which are used in conducting statistical enquiries
concerning certain phenomena. They include all the statistical methods, beginning from the
collection of data up to the interpretation of the collected data.
1.Collection of data:
One of the important statistical methods is collection of data. There are different methods for
collecting primary and secondary data.
2. Measures of Central Tendency, Dispersion, Skewness and Kurtosis:
Measures of central tendency are used for finding the average of a series, while measures of
dispersion are used for finding the variability in a series. Measures of skewness measure the asymmetry of
a distribution, while measures of kurtosis measure the flatness or peakedness of a distribution.
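Measures of central tendency and dispersion are one-liners with Python's built-in statistics module (skewness and kurtosis need a library such as scipy.stats); a sketch on a made-up series:

```python
import statistics as st

data = [2, 4, 4, 4, 5, 5, 7, 9]

mean = st.mean(data)       # central tendency: arithmetic average = 5
median = st.median(data)   # central tendency: middle value = 4.5
mode = st.mode(data)       # central tendency: most frequent value = 4
spread = st.pstdev(data)   # dispersion: population standard deviation = 2
```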
3. Correlation and Regression Analysis:
Correlation is used to study the degree of relationship among two or more variables. The regression
technique, on the other hand, is used to estimate the value of one variable for a given value of another.
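Both quantities can be computed from first principles; a sketch using a made-up, perfectly linear series so the expected results are easy to verify:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient: degree of linear relationship."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def regression_line(xs, ys):
    """Least-squares estimates (intercept a, slope b) for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

xs, ys = [1, 2, 3, 4], [2, 4, 6, 8]   # y is exactly 2x
r = pearson_r(xs, ys)                 # perfect positive correlation: r = 1
a, b = regression_line(xs, ys)        # fitted line: y = 0 + 2x
```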
4. Index Numbers:
Index numbers measure fluctuations in various phenomena, such as price and production, over a period
of time. They are described as economic barometers.
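For instance, the simple aggregate price index compares the total cost of a basket of commodities in the current year with its cost in the base year; the prices here are hypothetical:

```python
def simple_aggregate_index(base_prices, current_prices):
    """Simple aggregate price index: (sum of current prices / sum of base prices) * 100."""
    return 100 * sum(current_prices) / sum(base_prices)

base_year = [10, 20]      # base-year prices of two commodities
current_year = [12, 24]   # current-year prices of the same commodities
index = simple_aggregate_index(base_year, current_year)   # 120.0, i.e. a 20% rise
```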
5. Time Series Analysis:
Analysis of time series helps us to know the effect of the factors which are responsible for changes over time.
6. Interpolation and Extrapolation:
Interpolation is the statistical technique of estimating, under certain assumptions, missing figures
which fall within the range of the given figures. Extrapolation provides estimated figures outside the
range of the given data.
7. Statistical Quality Control:
Statistical quality control is used for ensuring the quality of manufactured items. The variations in
quality due to assignable causes and chance causes can be identified with the help of this tool.
Different control charts are used in controlling the quality of products.
8. Ratio Analysis:
Ratio analysis is used for analyzing the financial statements of any business or industrial concern,
which helps in taking appropriate decisions.
9. Probability Theory:
The theory of probability provides numerical values for the likelihood of the occurrence of events.
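A small worked example: the probability of getting at least one six in four throws of a fair die follows from the complement rule.

```python
# P(at least one six in 4 throws) = 1 - P(no six in any throw) = 1 - (5/6)**4
p_no_six = (5 / 6) ** 4          # probability that all four throws avoid a six
p_at_least_one = 1 - p_no_six    # about 0.518, slightly better than even odds
```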
Programming Techniques:
Programming techniques are also called operations research techniques. Programming techniques are
model building techniques used by decision makers in modern times. Programming techniques involve:
1. Linear Programming:
Linear programming technique is used in finding a solution for optimizing a given objective under certain
constraints.
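Real linear programs are solved with the simplex method or an off-the-shelf solver (e.g. scipy.optimize.linprog); the toy product-mix problem below, with made-up profits and capacities, is small enough to solve by enumerating integer candidates:

```python
# Maximize profit 3x + 2y subject to: x + y <= 4, x <= 2, and x, y >= 0.
best = max(
    (3 * x + 2 * y, x, y)
    for x in range(5)
    for y in range(5)
    if x + y <= 4 and x <= 2
)
profit, x, y = best   # optimum: produce x = 2 and y = 2 for a profit of 10
```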
2. Queuing Theory:
Queuing theory deals with mathematical study of queues. It aims at minimizing cost of both servicing
and waiting.
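The standard single-server (M/M/1) results make this trade-off concrete; a sketch with made-up arrival and service rates:

```python
lam, mu = 2.0, 5.0          # arrival rate and service rate (customers per hour)
rho = lam / mu              # server utilisation; must be < 1 for a stable queue

L = rho / (1 - rho)         # average number of customers in the system
W = 1 / (mu - lam)          # average time a customer spends in the system
Lq = rho ** 2 / (1 - rho)   # average number waiting in the queue
Wq = rho / (mu - lam)       # average time spent waiting before service
```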
3. Game Theory:
Game theory deals with decision making in competitive situations, where the outcome depends on
the actions of two or more players.
4. Decision Theory:
This is concerned with making sound decisions under conditions of certainty, risk and uncertainty.
5. Inventory Theory:
Inventory theory helps in optimizing inventory levels. It focuses on minimizing the costs associated
with holding inventories.
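The classic result here is the economic order quantity (EOQ), which balances ordering cost against holding cost; the figures below are hypothetical:

```python
from math import sqrt

def eoq(annual_demand, cost_per_order, holding_cost_per_unit):
    """Economic order quantity: sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * cost_per_order / holding_cost_per_unit)

# 1000 units/year demand, 50 per order placed, 4 per unit per year to hold stock.
order_size = eoq(1000, 50, 4)   # about 158 units per order
```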
6. Network Analysis (PERT/CPM):
It is a technique of planning, scheduling, controlling, monitoring and co-ordinating large and complex
projects comprising a number of activities and events. It serves as an instrument for resource
allocation and for adjustment of time and cost up to the optimum level. It includes CPM, PERT, etc.
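The core computation behind CPM is finding the longest (critical) path through the activity network; a sketch on a made-up four-activity project:

```python
from functools import lru_cache

# Hypothetical project: activity -> (duration, list of predecessor activities).
activities = {
    "A": (3, []),
    "B": (2, []),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    """Earliest finish = own duration + latest finish among predecessors."""
    duration, predecessors = activities[name]
    return duration + max((earliest_finish(p) for p in predecessors), default=0)

# The project duration is the length of the critical path: A -> C -> D = 3 + 4 + 1 = 8.
project_duration = max(earliest_finish(a) for a in activities)
```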
7. Simulation:
It is a technique of testing a model which resembles a real-life situation.
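A minimal Monte Carlo sketch: estimating pi by simulating random points in a unit square and counting how many fall inside the quarter circle (the seed and sample size are arbitrary):

```python
import random

random.seed(42)   # fix the seed so the simulated run is repeatable
n = 100_000
hits = sum(
    1
    for _ in range(n)
    if random.random() ** 2 + random.random() ** 2 <= 1.0   # inside quarter circle
)
pi_estimate = 4 * hits / n   # area ratio approximates pi/4, so scale by 4
```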
8. Replacement Theory:
It is concerned with problems of replacement of machines, etc. due to their deteriorating efficiency
or breakdown. It helps to determine the most economic replacement policy.
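A common formulation: replace the machine in the year where the average annual cost (capital cost plus cumulative maintenance, spread over the years of use) reaches its minimum. The figures below are made up, and resale value is ignored for simplicity.

```python
purchase_cost = 3000
maintenance = [800, 900, 1100, 1400, 1900, 2500]   # rising cost in year 1, 2, ...

best_year, best_avg = None, float("inf")
total_cost = purchase_cost
for year, cost in enumerate(maintenance, start=1):
    total_cost += cost
    average_annual_cost = total_cost / year   # total outlay spread over years of use
    if average_annual_cost < best_avg:
        best_year, best_avg = year, average_annual_cost

# The average annual cost bottoms out in year 4, the economic replacement point.
```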
9. Non-Linear Programming:
It is a programming technique which involves finding an optimum solution to a problem in which some
or all of the variables are non-linear.
10. Sequencing:
The sequencing tool is used to determine the sequence in which given jobs should be performed so as
to minimize the total effort.
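For a single machine, minimizing total completion time is achieved by the shortest-processing-time (SPT) order; the brute-force check below, over made-up job durations, confirms it:

```python
from itertools import permutations

processing_times = [4, 1, 3]   # hypothetical job durations on one machine

def total_completion_time(order):
    """Sum of each job's completion time when jobs run back to back."""
    elapsed, total = 0, 0
    for t in order:
        elapsed += t
        total += elapsed
    return total

best_order = min(permutations(processing_times), key=total_completion_time)
best_total = total_completion_time(best_order)   # SPT order (1, 3, 4): 1 + 4 + 8 = 13
```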
It is a recently developed technique, designed to solve combinatorial problems of decision
making where there are a large number of feasible solutions. Problems of plant location, problems of
determining the minimum cost of production, etc. are examples of combinatorial problems.
6. To help in minimizing the total processing time required for performing a set of jobs
Quantitative techniques render valuable services in the field of business and industry. Today, many
decisions in business and industry are made with the help of quantitative techniques.
Some important uses of quantitative techniques in the field of business and industry are given below:
1. The quantitative technique of linear programming is used for the optimal allocation of scarce resources,
as in the problem of determining product mix.
2. Inventory control techniques are useful in deciding when and how many items are to be purchased, so
as to maintain a balance between the cost of holding and the cost of ordering inventory.
3. Quantitative techniques such as CPM and PERT help in determining the earliest and latest times for the
events and activities of a project. This helps the management in the proper deployment of resources.
4. Decision tree analysis and simulation technique help the management in taking the best possible
course of action under the conditions of risks and uncertainty.
5. Queuing theory is used to minimize the cost of waiting and servicing of the customers in queues.
6. Replacement theory helps the management in determining the most economic replacement policy
for equipment.
Even though quantitative techniques are indispensable in the decision-making process, they are not free
from shortcomings. The following are the important limitations of quantitative techniques:
1. Quantitative techniques involve mathematical models, equations and other mathematical
expressions, which an average decision maker may find difficult to understand and apply.
2. Quantitative techniques are based on a number of assumptions. Therefore, due care must be taken
while using them; otherwise they may lead to wrong conclusions.
3. Quantitative techniques are very expensive.
4. Quantitative techniques do not take into consideration intangible facts like skill, attitude etc.
5. Quantitative techniques are only tools for analysis and decision-making; they are not decisions in themselves.
RAOS MANAGEMENT STUDIES
UNIT-II
BASIC STRUCTURAL CONCEPTS
The architecture of an enterprise information system determines the relation of its individual parts
to one another and to the surrounding environment. Effective use of an information system requires,
among other things, its painless integration into the enterprise management system. An information
system architecture can be viewed from different angles. The structural design of an information
system may be closely interconnected with the enterprise's organizational architecture.
There are certain basic concepts relating to the structure of a management
information system. They are:
FORMAL AND INFORMAL INFORMATION SYSTEM
The information processing systems in an organization can be divided into two categories, namely
public systems and private systems.
A public information system is a part of an organization and is used by the relevant persons in that
organization. Such information can be utilized by all personnel who have the authority to access it.
Private information systems, on the other hand, are kept by individuals. Private systems may help
and supplement the public system. There are both formal and informal systems within the public
and private systems.
Formal information systems are represented by records and documents. There are certain well-
prescribed rules and procedures to be followed in a formal information system.
Informal information systems also process information needed for organizational functioning,
but such systems are not represented by any records or documents.
A management information system, with its specified rules and procedures, is part of the formal
public system.
In addition to the formal public system, there also exist informal public systems, which provide
benefits to all persons in the organization who really need such information; strict rules and
procedures may not be found in such systems.
Besides these formal and informal public information systems, there are also formal and informal
private systems in an organization. Many individuals maintain their own private informal
information systems to discharge their duties more effectively and efficiently.
Computers are used to execute a variety of office operations, such as word processing,
accounting, and e-mail. Office automation almost always implies a network of computers with a
variety of available programs.
Office automation refers to the varied Computer machinery and Software used to digitally
create, collect, store, manipulate, and relay office information needed for accomplishing basic
tasks. Raw data storage, electronic transfer, and the management of electronic business
information comprise the basic activities of an office automation system. Office automation
helps in optimizing or automating existing office procedures. The backbone of office automation
is a LAN, which allows users to transfer data, mail and even voice across the network. All office
functions, including dictation, typing, filing, copying, fax, Telex, microfilm and records
management, telephone and telephone switchboard operations, fall into this category. Office
automation was a popular term in the 1970s and 1980s as the desktop computer exploded onto
the scene.
Advantages are:
5. Businesses can easily purchase and stock their wares with the aid of technology. Many of the
manual tasks that used to be done by hand can now be done through hand held devices and UPC
and SKU coding. In the retail setting, automation also increases choice. Customers can easily
process their payments through automated credit card machines and no longer have to wait in
line for an employee to process and manually type in the credit card numbers.
6. Office payrolls have been automated which means no one has to manually cut checks, and those
checks that are cut can be printed through computer programs. Direct deposit can be
automatically set up, and this further reduces the manual process; most employees who
participate in direct deposit find their paychecks come earlier than if they had to wait for
their checks to be written and then cleared by the bank.
7. Other ways automation has reduced employee manpower on tasks is automated voice direction.
Through the use of prompts, automated phone menus and directed calls, the need for employees
to be dedicated to answer the phones has been reduced, and in some cases, eliminated.
The term office automation refers to all tools and methods that are applied to office activities
which make it possible to process written, visual, and sound data in a computer-aided manner.
Office automation is intended to provide elements which make it possible to simplify,
improve, and automate the organization of the activities of a company or a group of people
(management of administrative data, synchronization of meetings, etc.).
Considering that company organizations require increased communication, today, office
automation is no longer limited to simply capturing handwritten notes. In particular, it also
includes the following activities:
exchange of information
management of administrative documents
handling of numerical data
meeting planning and management of work schedules
The term "office suite" refers to all software programs which make it possible to meet office
needs. In particular, an office suite therefore includes the following software programs:
word processing
a Spread sheet
a presentation tool
a Database
a scheduler
Microsoft Office
Sun Star Office
Example:
Security control: This function controls which users have access to which information.
Any system that you use must be able to protect not-public records as defined by the
MGDPA.
Version Control: The EDMS should allow users to add documents to the system and
designate a document as an official record. It should also automatically assign the correct
version designation.
Metadata Capture: The EDMS should allow you to capture and use the metadata
appropriate for your agency.
Decision support systems (DSS) are interactive software-based systems intended to help
managers in decision-making by accessing large volumes of information generated from various
related information systems involved in organizational business processes, such as office
automation system, transaction processing system, etc.
A DSS presents summary information, exceptions, patterns, and trends derived using analytical models.
A decision support system helps in decision-making but does not necessarily give a decision
itself. The decision makers compile useful information from raw data, documents, personal
knowledge, and/or business models to identify and solve problems and make decisions.
Programmed decisions are basically automated processes, general routine work, where:
Decision support systems generally involve non-programmed decisions. Therefore, there will be
no exact report, content, or format for these systems. Reports are generated on the fly.
Attributes of a DSS
Characteristics of a DSS
Benefits of DSS
Components of a DSS
Database Management System (DBMS): To solve a problem the necessary data may
come from internal or external database. In an organization, internal data are generated
by a system such as TPS and MIS. External data come from a variety of sources such as
newspapers, online data services, databases (financial, marketing, human resources).
Model Management System: It stores and accesses models that managers use to make
decisions. Such models are used for designing manufacturing facility, analyzing the
financial health of an organization, forecasting demand of a product or service, etc.
Support Tools: Support tools like online help, pull-down menus, user interfaces,
graphical analysis, and error-correction mechanisms facilitate the user's interaction with the
system.
Classification of DSS
There are several ways to classify DSS. Holsapple and Whinston classify DSS as follows:
Text Oriented DSS: It contains textually represented information that could have a
bearing on decision. It allows documents to be electronically created, revised and viewed
as needed.
Database Oriented DSS: Database plays a major role here; it contains organized and
highly structured data.
Spreadsheet Oriented DSS: It contains information in spreadsheets, which allows the user to create,
view, and modify procedural knowledge and to instruct the system to execute self-contained
instructions. The most popular tools are Excel and Lotus 1-2-3.
Solver Oriented DSS: It is based on a solver, which is an algorithm or procedure written
for performing certain calculations and particular program type.
Rules Oriented DSS: It follows certain procedures adopted as rules; an expert system is an
example.
Compound DSS: It is built by using two or more of the five structures explained above.
Types of DSS
Status Inquiry System: It helps in taking operational, management-level, or middle-level
management decisions, for example daily schedules of jobs to machines or of machines to
operators.
Data Analysis System: It needs comparative analysis and makes use of formula or an
algorithm, for example cash flow analysis, inventory analysis etc.
Information Analysis System: In this system data is analyzed and the information report
is generated. For example, sales analysis, accounts receivable systems, market analysis
etc.
Accounting System: It keeps track of accounting and finance related information, for
example, final account, accounts receivables, accounts payables, etc. that keep track of
the major aspects of the business.
Model Based System: Simulation models or optimization models used for decision-
making; such models are used infrequently and create general guidelines for operation or
management.
Definition of KMS
A knowledge management system comprises a range of practices used in an organization to identify,
create, represent, distribute, and enable the adoption of insights and experience. Such insights and
experience comprise knowledge, either embodied in individuals or embedded in organizational processes
and practices.
Purpose of KMS
Improved performance
Competitive advantage
Innovation
Sharing of knowledge
Integration
Continuous improvement by:
o Driving strategy
o Starting new lines of business
o Solving problems faster
o Developing professional skills
o Recruiting and retaining talent
All the systems we are discussing here come under knowledge management category. A
knowledge management system is not radically different from all these information systems, but
it just extends the already existing systems by assimilating more information.
As we have seen, data is raw facts, information is processed and/or interpreted data, and
knowledge is personalized information.
What is Knowledge?
Personalized information
State of knowing and understanding
An object to be stored and manipulated
A process of applying expertise
A condition of access to information
Potential to influence action
Intranet
Data warehouses and knowledge repositories
Decision support tools
Groupware for supporting collaboration
Networks of knowledge workers
Internal expertise
Start with the business problem and the business value to be delivered first.
Identify what kind of strategy to pursue to deliver this value and address the KM
problem.
Think about the system required from a people and process point of view.
Finally, think about what kind of technical infrastructure is required to support the
people and processes.
Implement system and processes with appropriate change management and iterative
staged release.
Artificial Intelligence
A branch of computer science, Artificial Intelligence pursues creating computers or
machines as intelligent as human beings.
According to the father of Artificial Intelligence, John McCarthy, it is "the science and
engineering of making intelligent machines, especially intelligent computer programs."
AI is accomplished by studying how the human brain thinks and how humans learn, decide, and
work while trying to solve a problem, and then using the outcomes of this study as a basis for
developing intelligent software and systems.
Philosophy of AI
While exploiting the power of computer systems, human curiosity led us to wonder,
"Can a machine think and behave as humans do?"
Thus, the development of AI started with the intention of creating similar intelligence in
machines to the intelligence we find, and regard highly, in humans.
Goals of AI
To Create Expert Systems: systems which exhibit intelligent behavior, learn,
demonstrate, explain, and advise their users.
To Implement Human Intelligence in Machines: creating systems that understand,
think, learn, and behave like humans.
Out of the following areas, one or multiple areas can contribute to build an intelligent system.
A computer program without AI can answer only the specific questions it is meant to solve, while a
computer program with AI can answer the generic questions it is meant to solve.
What is AI Technique?
An AI technique is a manner of organizing and using knowledge efficiently, in such a way that it is
perceivable by the people who provide it, easily modifiable to correct errors, and useful in many
situations even though it may be incomplete or inaccurate. AI techniques elevate the speed of
execution of the complex programs they are equipped with.
Applications of AI
Gaming: AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc.,
where a machine can think of a large number of possible positions based on heuristic
knowledge.
Natural Language Processing: It is possible to interact with a computer that
understands natural language spoken by humans.
Expert Systems: There are some applications which integrate machine, software, and
special information to impart reasoning and advising. They provide explanations and
advice to users.
Vision Systems: These systems understand, interpret, and comprehend visual input on
the computer. For example:
o A spying aeroplane takes photographs, which are used to figure out spatial
information or a map of the area.
o Doctors use a clinical expert system to diagnose patients.
o Police use computer software that can recognize the face of a criminal from a
stored portrait made by a forensic artist.
Speech Recognition: Some intelligent systems are capable of hearing and
comprehending language in terms of sentences and their meanings while a human
talks to them. They can handle different accents, slang words, noise in the background,
changes in a human's voice due to cold, etc.
Handwriting Recognition: Handwriting recognition software reads text written on
paper with a pen or on a screen with a stylus. It can recognize the shapes of the letters and
convert them into editable text.
Intelligent Robots: Robots are able to perform the tasks given by a human. They have
sensors to detect physical data from the real world such as light, heat, temperature,
movement, sound, bumps, and pressure. They have efficient processors, multiple sensors
and large memory to exhibit intelligence. In addition, they are capable of learning from
their mistakes and can adapt to a new environment.
Consider a decision that requires input from a number of different units within the organization,
such as marketing, engineering, and manufacturing. Let's say the CEO of the company has set up
a task force to develop a recommendation. Each unit in the organization is represented by one of
its managers. How is the task force going to work together to come up with the best decision?
There are a number of ways for the group members to collaborate. They can hold meetings to
share information and discuss the decisions that need to be made. If meeting face-to-face is not
practical, they can use technology such as videoconferencing. They can also communicate with
each other by e-mail to share ideas and provide updates.
While these approaches can be productive, many decisions in today's world are very complex
and require a lot of different considerations. Having access to the same information can
contribute to better decision making. However, this can quickly become overwhelming, and not
all participants may have the time, skill or interest to analyze all this information. Imagine
having to read through hundreds of pages of a document just to prepare for a meeting.
One strategy for avoiding complexity and information overload is to use computer-based tools
for group decision making. A Group Decision Support System, or GDSS, consists of interactive
software that allows a group of participants to reach decisions together. The goal of a GDSS is to
improve the productivity of a group in coming to a decision. A GDSS is sometimes also referred
to as a 'computerized collaborative work system.'
Characteristics of a GDSS
The most important characteristic of a GDSS is that it provides support for a group to come to a
decision. A number of different approaches can be used.
The group consensus approach forces members to come to a unanimous decision. This is sort
of like locking a team up in a room, and they can't leave before a decision is reached - but the
room could be virtual, and the communications could all be electronic.
The nominal group technique gives each participant an equal voice, and the final decision is
reached by voting. Contrary to regular voting, however, the group comes up with a number of
different solutions, and these are ranked by using a voting process. Whatever the specific
decision-making strategy employed, a GDSS is designed to facilitate this process.
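One concrete way to implement the ranking vote of the nominal group technique is a Borda count, sketched below; the solution names and ballots are invented for illustration, and a real GDSS could use any ranking rule.

```python
from collections import defaultdict

def borda_rank(ballots):
    """Aggregate ranked ballots: the option listed first on a ballot of n
    options earns n-1 points, the next n-2, and so on."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for position, option in enumerate(ballot):
            scores[option] += n - 1 - position
    # The option with the highest total becomes the group decision.
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical ballots from four task-force members ranking three solutions.
ballots = [
    ["outsource", "build in-house", "buy off-the-shelf"],
    ["build in-house", "outsource", "buy off-the-shelf"],
    ["build in-house", "buy off-the-shelf", "outsource"],
    ["outsource", "build in-house", "buy off-the-shelf"],
]
print(borda_rank(ballots))   # "build in-house" wins with 6 points
```

Because every participant's ranking carries the same weight, this gives each member the equal voice the technique calls for.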
UNIT-III
SYSTEM DEVELOPMENT METHODOLOGIES
INTRODUCTION
Different types of system development methodologies are used in designing information systems.
Depending upon the actual requirements of the system, different approaches to data processing
are adopted. Some system groups recommend a centralized data processing system, while others
may go in for a distributed data processing system. In centralized data processing, one or more
centralized computers are used for processing, and information is retrieved from them. A
distributed processing system involves a number of computers located remotely in the
branches/departments of the organization. Client/server technology is also gaining popularity
these days.
OBJECTIVES
After going through this section, you should be able to:
l describe the various approaches to the information system
l explain the networking environment
l explain the meaning of client/server technology
DATA PROCESSING SYSTEM
Data processing techniques are very much dependent on the kind of applications and the
working environment. The activities involved in data processing are along departmental lines
and application based such as Store Management, Production Planning & Control, Sales
Accounting, Financial Accounting, Student Information System, etc. The basic input data are the
real resource of data processing. With the progress in technology, the concept of integrated data
processing has also come into being. In integrated data processing, the output data of one
application can be used as the input of another application.
There are two approaches to data processing: centralized data processing and decentralized data
processing.
MULTI-USER ENVIRONMENT
The necessity of sharing data and information gave rise to the multi-user environment. In a
multi-user environment, there is a concept of a file server and user nodes or user terminals
connected to the file server. There are various ways of developing a multi-user environment,
depending upon the connectivity. There is the local area network (LAN), where nodes are
connected to the file server with cables, through which data and information are transferred
from the file server to the different nodes and vice-versa. In a Wide Area Network (WAN), the
nodes are connected through a modem or through satellite.
NETWORKING/FILESERVER SYSTEM
In a Local Area Network, all the data and programme files are stored in a file server. A file
server is the central node in the network. All the users connected to the file server through
different nodes can access the data and information stored in the file server simultaneously.
The systems development life cycle (SDLC) is a conceptual model used in project management
that describes the stages involved in an information system development project, from an initial
feasibility study through maintenance of the completed application.
Various SDLC methodologies have been developed to guide the processes involved, including
the waterfall model (which was the original SDLC method); rapid application development
(RAD); joint application development (JAD); the fountain model; the spiral model; build and fix;
and synchronize-and-stabilize. Frequently, several models are combined into some sort of hybrid
methodology. Documentation is crucial regardless of the type of model chosen or devised for
any application, and is usually done in parallel with the development process. Some methods
work better for specific types of projects, but in the final analysis, the most important factor for
the success of a project may be how closely the particular plan was followed.
1. The existing system is evaluated. Deficiencies are identified. This can be done by
interviewing users of the system and consulting with support personnel.
2. The new system requirements are defined. In particular, the deficiencies in the existing
system must be addressed with specific proposals for improvement.
3. The proposed system is designed. Plans are laid out concerning the physical construction,
hardware, operating systems, programming, communications, and security issues.
4. The new system is developed. The new components and programs must be obtained and
installed. Users of the system must be trained in its use, and all aspects of performance
must be tested. If necessary, adjustments must be made at this stage.
5. The system is put into use. This can be done in various ways. The new system can be phased
in, according to application or location, and the old system gradually replaced. In some
cases, it may be more cost-effective to shut down the old system and implement the new
system all at once.
6. Once the new system is up and running for a while, it should be exhaustively evaluated.
Maintenance must be kept up rigorously at all times. Users of the system should be kept
up-to-date concerning the latest modifications and procedures.
Prototypes are instruments used within the software development process, and different kinds of
prototypes are employed to achieve different goals. The product prototype has been variously
defined within the prototyping literature. An early definition is that of Naumann and Jenkins
(1982), who consider an information systems prototype to be:
... a system that captures the essential features of a later system. A prototype system,
intentionally incomplete, is to be modified, supplemented, or supplanted.
Budde et al (1992) classify prototypes along two different vectors: according to the purpose they
fulfil and according to the manner of their construction. Four types of prototypes, namely
presentation prototypes, prototypes proper, breadboard prototypes and pilot system prototypes,
are identified according to the different tasks they accomplish.
A presentation prototype is shown to clients by a software manufacturer in order to convince
them of the feasibility of a new project. As such, it gives them their first impression of the future
system.
The prototype proper is constructed and tested, to clarify user needs, while the actual information
system is under construction.
The breadboard prototype is used mainly by development staff to ascertain the feasibility of
certain technical aspects of the system.
A pilot system prototype constitutes the core of an application system. After one or more
iterations of evolutionary prototyping, a pilot system prototype reaches enough sophistication to
become the final system.
Systems development is a systematic process which includes phases such as planning, analysis,
design, deployment, and maintenance. Here, in this tutorial, we will primarily focus on:
Systems analysis
Systems design
Systems Analysis
It is a process of collecting and interpreting facts, identifying the problems, and decomposition of
a system into its components.
System analysis is conducted for the purpose of studying a system or its parts in order to identify
its objectives. It is a problem solving technique that improves the system and ensures that all the
components of the system work efficiently to accomplish their purpose.
Systems Design
It is a process of planning a new business system or replacing an existing system by defining its
components or modules to satisfy the specific requirements. Before planning, you need to
understand the old system thoroughly and determine how computers can best be used in order to
operate efficiently.
Systems analysis and design mainly deals with:
Systems
Processes
Technology
What is a System?
The word System is derived from the Greek word Systema, which means an organized
relationship between any set of components to achieve some common cause or objective.
Constraints of a System
A system must have some structure and behavior which is designed to achieve a
predefined objective.
Interconnectivity and interdependence must exist among the system components.
The objectives of the organization have a higher priority than the objectives of its
subsystems.
For example: a traffic management system, a payroll system, an automatic library system, or a
human resources information system.
Properties of a System
Organization
Organization implies structure and order. It is the arrangement of components that helps to
achieve predetermined objectives.
Interaction
It is defined by the manner in which the components operate with each other.
Interdependence
Interdependence means how the components of a system depend on one another. For proper
functioning, the components are coordinated and linked together according to a specified plan.
The output of one subsystem is required by another subsystem as input.
Integration
Integration is concerned with how the components of a system are connected together. It means
that the parts of the system work together within the system even if each part performs a unique
function.
Central Objective
The objective of a system must be central. It may be real or stated. It is not uncommon for an
organization to state an objective and operate to achieve another.
The users must know the main objective of a computer application early in the analysis for a
successful design and conversion.
The main aim of a system is to produce an output which is useful for its user.
Inputs are the information that enters the system for processing.
Output is the outcome of processing.
Processor(s)
The processor is the element of a system that involves the actual transformation of input
into output.
It is the operational component of a system. Processors may modify the input either
totally or partially, depending on the output specification.
As the output specifications change, so does the processing. In some cases, the input is also
modified to enable the processor to handle the transformation.
Control
The behavior of a computer system is controlled by the operating system and software.
In order to keep the system in balance, what and how much input is needed is determined by the
output specifications.
Feedback
Environment
A system should be defined by its boundaries. Boundaries are the limits that identify its
components, processes, and interrelationships when it interfaces with another system.
Each system has boundaries that determine its sphere of influence and control.
The knowledge of the boundaries of a given system is crucial in determining the nature of
its interface with other systems for successful design.
Types of Systems
Physical systems are tangible entities. We can touch and feel them.
Physical System may be static or dynamic in nature. For example, desks and chairs are
the physical parts of computer center which are static. A programmed computer is a
dynamic system in which programs, data, and applications can change according to the
user's needs.
Abstract systems are non-physical or conceptual entities; they may be formulas,
representations or models of a real system.
An open system must interact with its environment. It receives inputs from and delivers
outputs to the outside of the system. For example, an information system which must
adapt to the changing environmental conditions.
A closed system does not interact with its environment. It is isolated from environmental
influences. A completely closed system is rare in reality.
An Adaptive System responds to changes in the environment in a way that improves its
performance and helps it survive. For example, human beings and animals.
A Non-Adaptive System does not respond to the environment. For example, machines.
A Permanent System persists for a long time. For example, business policies.
A Temporary System is made for a specified time and is then dismantled. For example, a DJ
system is set up for a program and is disassembled after the program.
Natural systems are created by nature. For example, the solar system and seasonal systems.
A Manufactured System is a man-made system. For example, rockets, dams, and trains.
UNIT-IV
PLANNING
COMMUNICATION
SDLC
DEPLOYMENT
MODELLING
CONSTRUCTION
Listen.
Prepare before you communicate.
Face-to-face communication is best.
If something is unclear, draw a picture.
Take notes and document decisions.
MODELING: Create models to gain a better understanding of the software to be built. Two types
of models are used in this activity. Those are:
1. Analysis modeling
2. Design modeling
CONSTRUCTION: This activity mainly concentrates on the coding and testing principles of the
product.
Select data structures that will meet the needs of the design.
Keep conditional logic as simple as possible.
Create nested loops in a way that makes them easy to test.
Testing principles: testing is the process of executing a program with the intent of finding errors.
DEPLOYMENT: This activity consists of three actions: delivery, support, and feedback.
In the context of testing, Verification and Validation are very widely and commonly used
terms. Most of the time we consider the two terms the same, but actually they are quite different.
The producer's view of quality, in simpler terms, means the developer's perception of the final
product.
The consumer's view of quality means the user's perception of the final product.
When we carry out the V&V tasks, we have to concentrate on both of these views of quality.
To begin, let's try to understand the terms and explore them with different standards.
What is Verification?
Verification is the process of evaluating the intermediary work products of the software
development life cycle to check whether we are on the right track to create the final product.
Now the question here is: what are the intermediary products? These can include the documents
produced during the development phases, such as the requirements specification, design
documents, database table designs, ER diagrams, test cases, the traceability matrix, etc. We
sometimes tend to neglect the importance of reviewing these documents, but we should
understand that reviewing itself can find many hidden anomalies which, if found or fixed in a
later phase of the development cycle, can be very costly.
In other words, we can also state that verification is a process to evaluate the intermediary
products of software to check whether the products satisfy the conditions imposed during the
beginning of the phase.
What is Validation?
Validation is the process of evaluating the final product to check whether the software meets the
business needs. In simple words, the test execution which we do in our day-to-day life is actually
the validation activity, which includes smoke testing, functional testing, regression testing,
system testing, etc.
Verification:
- Evaluates the intermediary products to check whether they meet the specific requirements of
the particular phase.
- Checks whether the product is built as per the specified requirement and design specification.
- Asks: are we building the product right?
- Is done without executing the software.
- Involves all the static testing techniques.
Validation:
- Evaluates the final product to check whether it meets the business needs.
- Determines whether the software is fit for use and satisfies the business need.
- Asks: are we building the right product?
- Is done by executing the software.
- Includes all the dynamic testing techniques.
Verification activities:
- Requirement verification: review of the requirements.
- Design verification: reviews of all the design documents, including the HLD and LDD.
- Code verification: code review.
- Documentation verification: verification of user manuals and other related documents.
Validation activities:
- Prepare the test requirements documents, test cases, and other test specifications to analyze
the test results.
- Evaluate that these test requirements, test cases and other specifications reflect the
requirements and are fit for use.
- Test for boundary values, stress, and the functionalities.
- Test for error messages and, in case of any error, that the application terminates gracefully.
- Test that the software meets the business requirements and is fit for use.
Security testing is a testing technique to determine if an information system protects data and
maintains functionality as intended. It also aims at verifying 6 basic principles as listed below:
Confidentiality
Integrity
Authentication
Authorization
Availability
Non-repudiation
Security testing also targets common classes of vulnerabilities, such as those catalogued in the
OWASP Top 10:
Injection
Broken Authentication and Session Management
Cross-Site Scripting (XSS)
Insecure Direct Object References
Security Misconfiguration
Sensitive Data Exposure
Missing Function Level Access Control
Cross-Site Request Forgery (CSRF)
Using Components with Known Vulnerabilities
Unvalidated Redirects and Forwards
What is Simulation?
A simulation is a computer model that mimics the operation of a real or proposed system and it is
time based and takes into account all the resources and constraints involved.
The main advantages of simulation relate to cost, repeatability, and time.
Example:
A mobile simulator, also known as an emulator, is software that can be installed on a normal
desktop to create a virtual-machine version of a mobile device, such as a mobile phone, iPhone
or other smartphone, within the system.
A mobile simulator allows the user to execute the application under test on their computer.
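As a toy illustration of a time-based model, the sketch below steps a single-server queue minute by minute; the arrival times and service rate are made-up parameters, not data from any real system.

```python
def simulate_queue(arrivals, service_time, total_minutes):
    """Minute-by-minute simulation of one server.
    arrivals: set of minutes at which a customer arrives.
    service_time: whole minutes needed to serve one customer."""
    queue = 0
    remaining = 0          # minutes left on the customer being served
    served = 0
    for minute in range(total_minutes):
        if minute in arrivals:
            queue += 1
        if remaining == 0 and queue > 0:
            queue -= 1     # next customer steps up to the server
            remaining = service_time
        if remaining > 0:
            remaining -= 1
            if remaining == 0:
                served += 1
    return served

# Customers arrive at minutes 0, 1 and 2; each takes 3 minutes to serve.
print(simulate_queue({0, 1, 2}, service_time=3, total_minutes=10))  # 3
```

Resources (one server) and constraints (service time) are explicit in the model, so changing them and re-running is cheap and repeatable, which is exactly the appeal of simulation noted above.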
Smoke Testing is a testing technique inspired by hardware testing, which checks for smoke
from the hardware components once the hardware's power is switched on. Similarly, in the
software testing context, smoke testing refers to testing the basic functionality of the build.
If the test fails, the build is declared unstable and it is NOT tested any further until the smoke
test of the build passes.
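A smoke test can be as small as a script that exercises the most basic functionality of a build and reports pass/fail; in this sketch the `divide` function is a stand-in for a real build artifact.

```python
def divide(a, b):
    """Stand-in for a function shipped in the build under test."""
    return a / b

def smoke_test():
    """Run a handful of basic checks; any failure marks the build unstable."""
    checks = [
        ("divides evenly", lambda: divide(10, 2) == 5),
        ("handles negatives", lambda: divide(-9, 3) == -3),
    ]
    failures = [name for name, check in checks if not check()]
    return "stable" if not failures else "unstable: " + ", ".join(failures)

print(smoke_test())   # a failing check would block further testing
```

Only when this script reports "stable" would the build proceed to the deeper functional and regression testing described earlier.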
Electronic devices feature a large number of small components, each piece being highly
important to their overall functionality and safety. The responsibility for ensuring that each
minute component is performing adequately rests firmly on the shoulders of the device and
component manufacturers. However, their jobs are being made more difficult by the growing
issue of counterfeiting in the electrical market.
Printed codes and marks enable manufacturers to track and trace products throughout the supply
chain. These also, importantly, help distinguish between genuine and counterfeited components.
However, counterfeiters are becoming increasingly sophisticated and have developed capabilities
to replicate codes. In order to counter this problem, manufacturers find it necessary to employ
more sophisticated coding methods such as smart code techniques.
There are numerous smart coding techniques that component manufacturers can employ to
enhance their basic lot/batch codes. Each technique (explained below) varies in complexity and
application:
Self-verifying codes.
These are one of the most basic ways to identify the authenticity of products with a visual check.
With this type of marking, a pre-determined pattern is set for the code; for example, all digits add
up to a certain number.
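A self-verifying code of the "digits add up to a certain number" kind can be checked in a few lines; the target sum of 20 and the sample codes are arbitrary examples, not an industry convention.

```python
def is_authentic(code, target=20):
    """A code passes the check if its digits sum to the agreed target."""
    return sum(int(ch) for ch in code if ch.isdigit()) == target

print(is_authentic("7-4-5-4"))   # True: 7+4+5+4 = 20
print(is_authentic("7-4-5-3"))   # False: 19, likely a counterfeit mark
```

The pattern is deliberately simple enough for a visual check by supply chain partners, while remaining non-obvious to a counterfeiter who does not know the target.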
Interleaved marking.
With interleaved marking, characters within a code are partially overlapped. While this overlay
is visible to the naked eye, it is highly difficult to replicate this character formulation.
Dynamically-altered font.
Varied numbers and letters within a code can have minute sections of characters missing in order
to create distinct codes. The advantage of this code is that, visually, it is difficult to recognise
the small differences; only supply chain partners who have a trained understanding of the code
formation can observe the differences.
Verifiable codes.
Covert codes.
Ultraviolet and infrared inks can be utilised to create covert codes that are not visible to the
naked eye. Codes are only visible under ultraviolet or other high-frequency lighting, and these
are inconspicuous methods of tracking products throughout the supply chain.
Increased productivity and maximised profitability are the main objectives of any production
facility, and manufacturers ultimately need to ensure their production line is streamlined in order
to safeguard these objectives.
What is Error?
Error is a condition when the output information does not match the input information. During
transmission, digital signals suffer from noise that can introduce errors in the binary bits
travelling from one system to another. That means a 0 bit may change to 1 or a 1 bit may change
to 0.
Error-Detecting codes
Whenever a message is transmitted, it may get scrambled by noise or data may get corrupted. To
avoid this, we use error-detecting codes which are additional data added to a given digital
message to help us detect if an error occurred during transmission of the message. A simple
example of error-detecting code is parity check.
Error-Correcting codes
Along with error-detecting code, we can also pass some data to figure out the original message
from the corrupt message that we received. This type of code is called an error-correcting code.
Error-correcting codes also deploy the same strategy as error-detecting codes but additionally,
such codes also detect the exact location of the corrupt bit.
In error-correcting codes, a parity check provides a simple way to detect errors, along with a
more sophisticated mechanism to determine the corrupt bit's location. Once the corrupt bit is
located, its value is flipped (from 0 to 1, or from 1 to 0) to recover the original message.
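The simplest error-correcting scheme is a triple-repetition code: each bit is sent three times and the receiver takes a majority vote, which both detects and locates a single corrupted copy. This sketch illustrates the idea; practical codes such as Hamming codes achieve the same effect with far fewer extra bits.

```python
def encode(bits):
    return [b for b in bits for _ in range(3)]   # repeat every bit 3 times

def decode(received):
    """Majority vote over each group of three copies corrects any single
    flipped copy per group."""
    corrected = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        corrected.append(1 if sum(group) >= 2 else 0)
    return corrected

message = [1, 0, 1]
sent = encode(message)        # [1,1,1, 0,0,0, 1,1,1]
sent[4] = 1                   # noise flips one copy of the second bit
print(decode(sent))           # [1, 0, 1]: the error is corrected
```

The position of the disagreeing copy is exactly the "location of the corrupt bit" the text refers to, and the majority vote is the value flip that repairs it.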
To detect and correct the errors, additional bits are added to the data bits at the time of
transmission.
The additional bits are called parity bits. They allow detection or correction of the errors.
The data bits along with the parity bits form a code word.
It is the simplest technique for detecting and correcting errors. The MSB of an 8-bit word is
used as the parity bit and the remaining 7 bits are used as data or message bits. The parity of the
8-bit transmitted word can be either even parity or odd parity.
Even parity -- Even parity means the number of 1's in the given word including the parity bit
should be even (2,4,6,....).
Odd parity -- Odd parity means the number of 1's in the given word including the parity bit
should be odd (1,3,5,....).
The parity bit can be set to 0 and 1 depending on the type of the parity required.
For even parity, this bit is set to 1 or 0 such that the number of 1-bits in the entire word is
even, as shown in fig. (a).
For odd parity, this bit is set to 1 or 0 such that the number of 1-bits in the entire word is
odd, as shown in fig. (b).
Parity checking at the receiver can detect the presence of an error if the parity of the received
signal differs from the expected parity. That means, if it is known that the parity of the
transmitted signal is always going to be "even" and the received signal has odd parity, then the
receiver can conclude that the received signal is not correct. If an error is detected, the receiver
will ignore the received byte and request retransmission of the same byte from the transmitter.
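The even-parity scheme just described can be sketched directly: the sender sets the parity bit so that the total number of 1s is even, and the receiver flags any mismatch. For simplicity the parity bit is appended at the end rather than placed in the MSB position, which does not change the principle.

```python
def add_even_parity(data_bits):
    """Append a parity bit so the code word has an even number of 1s."""
    parity = sum(data_bits) % 2
    return data_bits + [parity]

def check_even_parity(word):
    """True if the received word still has even parity (no error detected)."""
    return sum(word) % 2 == 0

word = add_even_parity([1, 0, 1, 1, 0, 0, 1])   # 7 data bits + 1 parity bit
print(check_even_parity(word))                   # True: no error detected
word[2] ^= 1                                     # noise flips one bit in transit
print(check_even_parity(word))                   # False: request retransmission
```

Note that flipping two bits would restore even parity, which is why a single parity bit detects only an odd number of bit errors.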
Validation
Data validation is the process of ensuring that a program operates on clean, correct and useful
data. It uses routines, often called "validation rules", "validation constraints" or "check
routines", that check for the correctness, meaningfulness, and security of data that are input to
the system. The rules may be implemented through the automated facilities of a data dictionary,
or by the inclusion of explicit application program validation logic.
In evaluating the basics of data validation, generalizations can be made regarding the different
types of validation, according to the scope, complexity, and purpose of the various validation
operations to be carried out.
For example:
Data-type validation
Data type validation is customarily carried out on one or more simple data fields.
The simplest kind of data type validation verifies that the individual characters provided through
user input are consistent with the expected characters of one or more known primitive data types;
as defined in a programming language or data storage and retrieval mechanism.
For example, many database systems allow the specification of the following primitive data
types: 1) integer; 2) float (decimal); or 3) string.
For example, a data field for a telephone number might only permit the digits 0-9 and the
characters +, - and ( ) (plus, minus, and parentheses). A more sophisticated data validation
routine would check to see that the user had entered a valid country code, i.e., that the number
of digits entered matched the convention for the country or area specified.
A validation process involves two distinct steps: (a) Validation Check and (b) Post-Check action.
The check step uses one or more computational rules (see section below) to determine if the data
is valid. The Post-validation action sends feedback to help enforce validation.
Simple range and constraint validation may examine user input for consistency with a
minimum/maximum range, or consistency with a test for evaluating a sequence of characters,
such as one or more tests against regular expressions.
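Range checks and regular-expression checks of the kind just described look like this in practice; the field names, limits, and code format are invented purely for illustration.

```python
import re

def validate_age(value):
    """Range/constraint check: age must be an integer between 0 and 130."""
    return isinstance(value, int) and 0 <= value <= 130

def validate_product_code(value):
    """Regex check: three uppercase letters, a dash, then four digits."""
    return re.fullmatch(r"[A-Z]{3}-\d{4}", value) is not None

print(validate_age(42))                   # True: within the allowed range
print(validate_product_code("ABC-1234"))  # True: matches the pattern
print(validate_product_code("abc-12"))    # False: fails the constraint
```

These are the "validation check" step; a post-check action would then report the failing field back to the user so the input can be corrected.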
Code and cross-reference validation includes tests for data type validation, combined with one or
more operations to verify that the user-supplied data is consistent with one or more external
rules, requirements, or validity constraints relevant to a particular organization, context or set of
underlying assumptions. These additional validity constraints may involve cross-referencing
supplied data with a known look-up table or directory information service such as LDAP.
For example, an experienced user may enter a well-formed string that matches the specification
for a valid e-mail address, as defined in RFC 5322, but that well-formed string might not
actually correspond to a resolvable domain connected to an active e-mail account.
Structured validation
Structured validation allows for the combination of any of various basic data type validation
steps, along with more complex processing. Such complex processing may include the testing of
conditional constraints for an entire complex data object or set of process operations within a
system.
A cost-benefit analysis is a process by which business decisions are analyzed. The benefits of a
given situation or business-related action are summed, and then the costs associated with taking
that action are subtracted. Some consultants or analysts also build the model to put a dollar value
on intangible items, such as the benefits and costs associated with living in a certain town, and
most analysts will also factor opportunity cost into such equations.
The first step in the process is to compile a comprehensive list of all the costs and benefits
associated with the project or decision. Costs should include direct and indirect costs, intangible
costs, opportunity costs and the cost of potential risks. Benefits should include all direct and
indirect revenues and intangible benefits, such as increased production from improved employee
safety and morale, or increased sales from customer goodwill. A common unit of monetary
measurement should then be applied to all items on the list. Care should be taken to not
underestimate costs or overestimate benefits. A conservative approach with a conscious effort to
avoid any subjective tendencies when calculating estimates is best suited when assigning value
to both costs and benefits for the purpose of a cost-benefit analysis.
The final step is to quantitatively compare the aggregate costs and benefits to determine whether
the benefits outweigh the costs. If so, the rational decision is to go forward with the project. If
not, a review of the project is warranted to see if adjustments can be made to either increase
benefits and/or decrease costs to make the project viable. If that is not possible, the project may
be abandoned.
For projects that involve small to mid-level capital expenditures and are short to intermediate in
terms of time to completion, an in-depth cost-benefit analysis may be sufficient to make a
well-informed rational decision. For very large projects with a long-term time horizon,
cost-benefit analysis typically fails to take into account important financial concerns such as
inflation, interest rates, varying cash flows and the present value of money. Alternative capital
budgeting analysis methods, including net present value (NPV) or internal rate of return (IRR),
are more appropriate for these situations.
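The comparison step, and the net-present-value alternative mentioned for long-horizon projects, can both be sketched numerically; the cash flows and the 8% discount rate below are made-up figures for illustration.

```python
def simple_cba(benefits, costs):
    """Plain cost-benefit test: go forward only if benefits exceed costs."""
    return sum(benefits) > sum(costs)

def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received t years from now."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

print(simple_cba(benefits=[120_000, 90_000], costs=[150_000]))  # True
# Initial outlay of 100,000 followed by three annual returns of 40,000,
# discounted at 8% per year; a positive NPV favours the project.
print(round(npv(0.08, [-100_000, 40_000, 40_000, 40_000]), 2))
```

Unlike the plain comparison, the NPV figure shrinks the later benefits, which is exactly how it accounts for interest rates and the present value of money.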
Shadow Pricing
Shadow pricing is used to refer to either one of two things: the actual market value of a money
market fund share, or more commonly, the assignment of a dollar value to an abstract
commodity that is not ordinarily quantifiable as having a market price, but needs to be assigned a
valuation to conduct a cost-benefit analysis. In the latter instance, a shadow price is assigned to
goods that are not generally bought and sold as separate assets in a marketplace, such as
production costs or intangible assets.
Shadow pricing as it relates to money market funds refers to the practice of accounting for the
price of securities based on amortized cost rather than on their market value. Money market
fund shares are always assigned a nominal net asset value (NAV) of $1, even though the actual
NAV is usually slightly more or less than $1. Such funds are required by law to disclose the
actual NAV, the shadow share price, to show the fund's performance to investors more
accurately.
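A minimal numeric illustration of the nominal versus shadow NAV, with assumed figures for the fund's assets and shares outstanding:

```python
# Hypothetical money market fund: shares are carried at a nominal $1.00
# NAV, while the shadow price reflects the market value of the portfolio.
total_market_value = 50_020_000   # assumed market value of fund assets ($)
shares_outstanding = 50_000_000

nominal_nav = 1.00
shadow_nav = total_market_value / shares_outstanding

print(f"Nominal NAV: ${nominal_nav:.2f}, shadow NAV: ${shadow_nav:.4f}")
```

Here the shadow NAV works out slightly above the nominal $1, the kind of small deviation the disclosure requirement is meant to reveal.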
Risk Management Information Systems (RMIS)
The management of risk data and information is key to the success of any risk management
effort regardless of an organization's size or industry sector. Risk management information
systems/services (RMIS) are used to support expert advice and cost-effective information
management solutions around key processes such as:
Typically, RMIS facilitates the consolidation of information related to insurance, such as claims
from multiple sources, property values, policy information, and exposure information, into one
system. Often, RMIS applies primarily to casualty claims/loss data systems. Such casualty
coverages include auto liability, auto physical damage, workers' compensation, general liability
and products liability.
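The consolidation idea can be sketched as follows. The sources, coverage lines and amounts are hypothetical; a real RMIS would ingest feeds from carriers, TPAs and internal systems.

```python
from collections import defaultdict

# Claim records from two hypothetical sources.
carrier_claims = [
    {"coverage": "auto liability", "incurred": 12_000},
    {"coverage": "general liability", "incurred": 45_000},
]
tpa_claims = [
    {"coverage": "workers' compensation", "incurred": 30_000},
    {"coverage": "auto liability", "incurred": 8_000},
]

# Merge both sources into one view keyed by coverage line, so losses
# are reported per coverage rather than per policy or per source.
consolidated = defaultdict(lambda: {"count": 0, "incurred": 0})
for claim in carrier_claims + tpa_claims:
    line = consolidated[claim["coverage"]]
    line["count"] += 1
    line["incurred"] += claim["incurred"]

for coverage, totals in sorted(consolidated.items()):
    print(coverage, totals)
```

Grouping by coverage line rather than by policy number is the shift the RMIS approach introduced: the consolidated view describes the customer's loss experience, not the insurer's paperwork.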
RMIS products are designed to provide their insured organizations and their brokers with basic
policy and claim information via electronic access, and most recently, via the Internet. This
information is essential for managing individual claims, identifying trends, marketing an
insurance program, loss forecasting, actuarial studies and internal loss data communication
within a client organization. They may also provide the tracking and management reporting
capabilities to enable one to monitor and control overall cost of risk in an efficient and cost-
effective manner.
In the context of the acronym RMIS, the word risk pertains to an insured or self-insured
organization. This is important because prior to the advent of RMIS, insurance company loss
information reporting typically organized loss data around insurance policy numbers. The
historical focus on insurance policies detracted from a clear, coherent and consolidated picture of
a single customer's loss experience. The first PC- and UNIX-based standalone RMIS appeared
in 1982, developed by Mark Dorn under the trade name RISKMASTER. This was a
breakthrough in the insurance industry's evolution toward a persistent and focused
understanding of end-customer needs. Typically, the best solution for an organization
depends on whether it is enhancing an existing RMIS system, ensuring the highest level of data
quality, or designing and implementing a new system while maintaining a focus on state-of-the-
art technology.
Most major insurance companies (carriers), broker/agents, and third-party administrators
(TPAs) offer at least one external RMIS product to their insureds (clients) and any
brokers involved in the insurance program. Most commonly, RMIS products allow individual
claim detail look-up, basic trend report production, policy summaries and ad hoc queries. The
resulting information can then be shared throughout the client's organization, usually for
insurance program cost allocation, loss prevention and effective claim management at the local
level. More advanced products allow multiple claim data sources to be consolidated into one
Master RMIS, which is essential for most large client organizations with complex insurance
programs.
Insurance companies normally use a different version of externally provided RMIS for internal
use, such as by underwriting and loss control personnel. Occasionally, there could be timing or
other differences that could cause data discrepancies between the internal system and externally
provided RMIS.
Insurance brokers have a similar need for access to their insured client's claim data. Brokers are
normally added as an additional user to the RMIS product provided to their clients by the
insurance carrier and TPAs. The information available from RMIS is critical to the broker for
interfacing effectively with their counterparts in the insurance carrier and TPAs. Additionally,
effectively presented RMIS information that shows trends and analysis is essential to
successfully marketing their clients' insurance programs.
Insurance carrier and TPA claim adjusters traditionally use claims management systems to
collect and manage claim information and to administer claims. Some client organizations,
however, may choose to manage certain types of claims or those within a loss retention layer and
thus use this type of system as well.
Typically, the claims management system provides the primary data to RMIS products. RMIS
products in turn provide an externally accessed view into the client's claims data. RMIS products
are commonly available directly from larger insurance carriers and TPAs, but the most advanced
systems are often offered by independent RMIS vendors. Independent RMIS vendor systems are
most desirable when a client organization needs to consolidate claims data from multiple current
insurance programs and/or past programs with current program information.
Along with insurance carriers, broker/agents and TPAs that offer their own proprietary systems,
there are a variety of direct RMIS technology companies who sell to direct insureds and even the
carriers, broker/agents and TPAs themselves.
UNIT-V
1. Emphasis on Clerical System: Simply taking over an existing clerical system and
automating it, without upgrading it into a management system, does not help. Yet
computers have often been put to work on those tasks that are best understood, easily
structured and require little management involvement.
3. Lack of a Master Plan: A systematic long range plan/planned approach is necessary for
establishing an effective Management Information System. Increased focus on the area
of problem definition is required in systems analysis. Dramatic changes in business
strategy, together with changes in top management personnel and organisation
structure, call for a thorough plan.
8. Voluminous and Unstructured Nature of Data: Sometimes the volume of data itself can
be a hurdle unless careful sifting is done. On the other hand, it may also be difficult to
locate and retrieve relevant data. Often, the data required by top management is
unstructured, non-programmed, future-oriented, inexact and external, and hence difficult
to capture.
9. Limited Use of Management Science and Operations Research Techniques: Some of the
ways of increasing the effectiveness of Management Information System include motivating managers to
participate and get involved in Management Information System, establishing consistent
performance and work criteria for Management Information System, maintaining
simplicity and ease of use, training systems analysts and careful consideration of basic
computer feasibility criteria like volume and repetitive nature of transactions, degree
of mathematical processing, quick turnaround time, accuracy and validity of data,
common source documents and well understood processing logic.
10. Enormous Time, Effort and Resources Required: MIS budget includes data processing
costs, hardware costs, personnel costs, supplies, services, etc.
Defining the Problem
This is the step in which the problem has to be defined. Sometimes one may mistake a
symptom, the outward exhibition of a behaviour, for the problem itself, when it is in fact
only a sign of a larger malaise. It is vital to drill deep into an issue and clearly understand
the problem rather than settle for a superficial understanding of it. One must appreciate
that this is the initial stage of problem solving, and if the problem itself is not correctly
diagnosed then the solution will obviously be wrong. The systems approach is therefore
used to understand the problem in granular detail and to establish requirements and
objectives in depth.
Generating Alternative Solutions
This is the logical next step in the systems approach to problem solving. In this stage
alternative solutions are generated, which requires creativity and innovation: the analyst
uses creativity to come up with possible solutions to the problem. Typically, only the
outlines of solutions are generated at this stage rather than the actual solutions.
Selecting a Solution
In this step, the solution that suits the requirements and objectives in the most
comprehensive manner is selected as the 'best' solution. This is done by evaluating all
the possible solutions and then comparing them to find the most suitable one. A lot of
mathematical, financial and technical models are used to select the most appropriate
solution.
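One simple model of the kind mentioned is a weighted-scoring matrix. The criteria, weights and ratings below are hypothetical; in practice they would come from the requirements established earlier.

```python
# Assumed evaluation criteria and their relative weights (summing to 1.0).
criteria_weights = {"cost": 0.4, "fit": 0.4, "risk": 0.2}

# Hypothetical candidate solutions rated 1-10 against each criterion.
candidates = {
    "solution A": {"cost": 7, "fit": 8, "risk": 6},
    "solution B": {"cost": 9, "fit": 6, "risk": 7},
    "solution C": {"cost": 6, "fit": 9, "risk": 8},
}

def score(ratings):
    # Weighted sum of the ratings for one candidate.
    return sum(criteria_weights[c] * r for c, r in ratings.items())

best = max(candidates, key=lambda name: score(candidates[name]))
print(best, round(score(candidates[best]), 2))
```

The 'best' solution is simply the one with the highest weighted score; the value of the model lies in forcing an explicit, comparable evaluation of every alternative.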
Designing the Solution
Once the most appropriate solution is chosen, it is made into a design document to give it
the shape of an actionable solution, since in the evaluation stage only the outline of the
solution was used. At this stage the details of the solution are worked out to create the
blueprint for the solution, and several design diagrams are used to prepare the design
document. The requirement specifications are again compared with the solution design to
double-check the suitability of the solution for the problem.
Implementing the Solution
This is the next step in the process. The solution that has been designed is implemented as
per the specifications laid down in the design document. During implementation care is
taken to ensure that there are no deviations from the design.
Reviewing the Solution
This is the final step in the problem-solving process, in which the impact of the solution is
reviewed to find out whether the desired result that was set out has been achieved.
Let us assume that A is the coach of the Indian cricket team. Let us also assume that the
objective that A has been entrusted with is to secure a win over the touring Australian
cricket team. The coach uses a systems approach to attain this objective. He starts by
gathering information about his own team.
Through systems approach he views his own Indian team as a system whose
environment would include the other team in the competition, umpires, regulators,
crowd and media. His system, i.e., the team itself, may be conceptualized as having two
subsystems, i.e., players and supporting staff. Each subsystem would have its
own set of components/entities like the player subsystem will have openers, middle
order batsmen, fast bowlers, wicket keeper, etc. The supporting staff subsystem would
include bowling coach, batting coach, physiotherapist, psychologist, etc. All these
entities would indeed have a bearing on the actual outcome of the game. The coach
adopts a systems approach to determine the playing strategy that he will adopt to ensure
that the Indian side wins. He analyses the issue in a stepwise manner as given below:
Step 1: Defining the problem-In this stage the coach tries to understand the past
performance of his team and that of the other team in the competition. His objective is
to defeat the competing team. He realizes that the problem he faces is that of losing the
game. This is his main problem.
Step 2: Collecting data-The coach employs his supporting staff to gather data on the
skills and physical condition of the players in the competing team by analyzing past
performance data, viewing television footage of previous games, making psychological
profiles of each player. The support staff analyses the data and comes up with the
following observations:
1. Both teams use an aggressive strategy during the period of power play. The competing
Australian team uses the opening players to spearhead this attack. However, recently the
openers have had a personal fight and are facing interpersonal problems.
2. The game is being played in Mumbai and the local crowd support is estimated to be of
some value amounting to around fifty runs. Also the crowd has come to watch the Indian
team win. A loss here would cost the team in terms of morale.
3. The umpires are neutral and are not intimidated by large crowd support but are lenient
towards sledging.
Step 3: Identifying alternatives-Based on the collected data the coach generates the
following alternative strategies:
1. Play upon the minds of the opening players of the competitors by highlighting their
personal differences using sledging alone.
2. Employ defensive tactics during power play when the openers are most aggressive and
not using sledging.
3. Keep close in fielders who would sledge and employ the best attacking bowlers of the
Indian team during the power play.
Step 5: Selecting the best alternative-The coach selects the third alternative as it
provides him with the opportunity of neutralizing the aggressive playing strategy of the
openers as well as increases the chances of getting breakthrough wickets.
The goal of software engineering is, of course, to design and develop better software. However,
what exactly does "better software" mean? In order to answer this question, this lesson
introduces some common software quality characteristics. Six of the most important quality
characteristics are maintainability, correctness, reusability, reliability, portability, and
efficiency. Efficiency often involves a tradeoff between memory use and running time:
using more memory can make a program faster, and vice versa. This relationship is known
as the space-time tradeoff. When it is not possible to design a software product that is
efficient in every aspect, the most important resources of the software are given priority.
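A classic small example of the space-time tradeoff is memoization, which spends memory on cached results to save recomputation time:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each computed value is cached; without the cache this recursion
    # would take exponential time, but here each n is computed once.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # fast with memoization; impractically slow without it
```

The cache trades space (one stored entry per distinct argument) for time (each subproblem is solved only once), a direct instance of prioritizing one resource over another.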