
1

2
3
4
5
Execution Is a Discipline. “Execution – The Discipline of Getting Things Done”. Larry Bossidy and Ram
Charan
People think of execution as the tactical side of business. That’s the first big mistake. Tactics are central to
execution, but execution is not tactics. Execution is fundamental to strategy and has to shape it. No worthwhile
strategy can be planned without taking into account the organization’s ability to execute it. If you’re talking
about the smaller specifics of getting things done, call the process implementation, or sweating the details, or
whatever you want to. But don’t confuse execution with tactics.

Execution is a systematic process of rigorously discussing hows and whats, questioning, tenaciously following
through, and ensuring accountability. It includes making assumptions about the business environment, assessing
the organization’s capabilities, linking strategy to operations and the people who are going to implement the
strategy, synchronizing those people and their various disciplines, and linking rewards to outcomes. It also
includes mechanisms for changing assumptions as the environment changes and upgrading the company’s
capabilities to meet the challenges of an ambitious strategy.

In its most fundamental sense, execution is a systematic way of exposing reality and acting on it. Most
companies don’t face reality very well. As we shall see, that’s the basic reason they can’t execute. Much has
been written about Jack Welch’s style of management - especially his toughness and bluntness, which some
people call ruthlessness. We would argue that the core of his management legacy is that he forced realism into
all of GE's management processes, making it a model of an execution culture.

The heart of execution lies in the three core processes: the people process, the strategy process, and the
operations process. Every business and company uses these processes in one form or the other. But more often
than not they stand apart from one another like silos. People perform them by rote and as quickly as possible, so
they can get back to their perceived work. Typically the CEO and his senior leadership team allot less than half a
day each year to review the plans - people, strategy, and operations. Typically too the reviews are not
particularly interactive. People sit passively watching PowerPoint presentations. They don’t ask questions.

6
In short, a foundation for execution is the IT infrastructure and digitized business processes
automating a company's core capabilities. As with human
development, a company's foundation for
execution evolves - usually beginning with a few basic infrastructure services (e.g., employee hiring
and recruiting, purchasing, desktop support, and telecommunications), then encompassing basic
transaction processes (sales, accounts payable), and eventually including unique and distinguishing
business capabilities. Building a foundation doesn’t focus only on competitively distinctive
capabilities - it also requires rationalizing and digitizing the mundane, everyday processes that a
company has to get right to stay in business.

Paradoxically, digitizing core business processes makes the individual processes less flexible while
making a company more agile. To return to the human analogy, a great athlete will have muscles,
reflexes, and skills that are not easily changed. But these capabilities give athletes a tremendous
ability to react, improvise, and innovate in their chosen sport. Similarly, digitizing business processes
requires making clear decisions about what capabilities are needed to succeed. And once these new
processes are installed, they free up management attention from fighting fires on lower value
activities, giving them more time to focus on how to increase profits and growth. Digitized processes
also provide better information on customers and product sales, providing ideas for new products and
services. The foundation for execution provides a platform for innovation.

7
The foundation for execution results from carefully selecting which processes and IT systems to
standardize and integrate. Just as humans must learn how to ride a bicycle (and think hard about what
they are doing while they are learning), the processes built into a foundation for execution require a
great deal of concentration - for a while. Eventually routine business activities - just like bicycle
riding—become automatic. Outcomes become predictable. The foundation for execution takes on
another layer. A company’s identity becomes clearer, and executives can focus their attention on the
future.

To build an effective foundation for execution, companies must master three key disciplines:
1. Operating model. The operating model is the necessary level of business process integration and
standardization for delivering goods and services to customers. Different companies have
different levels of process integration across their business units (i.e., the extent to which
business units share data). Integration enables end-to-end processing and a single face to the
customer, but it forces a common understanding of data across diverse business units. Thus,
companies need to make overt decisions about the importance of process integration.
Management also must decide on the appropriate level of business process standardization (i.e.,
the extent to which business units will perform the same processes the same way). Process
standardization creates efficiencies across business units but limits opportunities to customize
services. The operating model involves a commitment to how the company will operate.

2. Enterprise architecture. The enterprise architecture is the organizing logic for business processes
and IT infrastructure, reflecting the integration and standardization requirements of the
company’s operating model. The enterprise architecture provides a long-term view of a
company's processes, systems, and technologies so that individual projects can build
capabilities—not just fulfill immediate needs. Companies go through four stages in learning how
to take an enterprise architecture approach to designing business processes: Business Silos,
Standardized Technology, Optimized Core, and Business Modularity. As a company advances
through the stages, its foundation for execution takes on increased strategic importance.

3. IT engagement model. The IT engagement model is the system of governance mechanisms that ensure business and IT projects achieve both local and companywide objectives.

8
This slide illustrates how companies apply these three disciplines to create and exploit their
foundation for execution. Based on the vision of how the company will operate (the operating
model), business and IT leaders define key architectural requirements of the foundation for execution
(the enterprise architecture). Then, as business leaders identify business initiatives, the IT
engagement model specifies how each project benefits from, and contributes to, the foundation for
execution.

Business agility is becoming a strategic necessity. Greater globalization, increasing regulation, and
faster cycle times all demand an ability to quickly change organizational processes.
Managers cannot
predict what will change, but they can predict some things that won’t change. And if they digitize
what is not changing, they can focus on what is changing. In this way the foundation for execution
becomes a foundation for agility.

9
An operating model has two dimensions: business process standardization and integration. Although
we often think of standardization and integration as two sides of the same coin, they impose different
demands. Executives need to recognize standardization and integration as two separate decisions.

Standardization of business processes and related systems means defining exactly how a process will
be executed regardless of who is performing the process or where it is completed. Process
standardization delivers efficiency and predictability across the company. For example, using a
standard process for selling products or buying supplies allows the activities of different business
units to be measured, compared, and improved. The result of standardization - a reduction in
variability - can be dramatic increases in throughput and efficiency.

Integration links the efforts of organizational units through shared data. This sharing of data can be
between processes to enable end-to-end transaction processing, or across processes to allow the
company to present a single face to customers. For example, an automobile manufacturer may decide
to integrate processes so that when a sale is recorded, the car is reserved from among the cars
currently in production. By seamlessly sharing data between the order management and
manufacturing scheduling processes, the company improves its internal integration and,
consequently, its customer service. In financial services, sharing data across processes enables a loan
officer to review a customer’s checking, savings, and brokerage accounts with the bank, providing
better information about the customer’s financial situation and enabling better risk assessments for
loans.
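
To make the two dimensions concrete, the sketch below is an illustrative Python fragment (not from the source material) that maps a company's choices on standardization and integration onto the four operating models described on the following slides; the function name and boolean simplification are assumptions for illustration.

```python
# Illustrative sketch: the two operating-model dimensions and the four
# resulting models, consistent with the descriptions on the following slides.

def operating_model(process_standardization: bool, data_integration: bool) -> str:
    """Map the two yes/no decisions onto an operating model name."""
    if process_standardization and data_integration:
        return "Unification"        # standardized processes plus shared data
    if data_integration:
        return "Coordination"       # shared data, locally unique processes
    if process_standardization:
        return "Replication"        # standardized processes, local data
    return "Diversification"        # independent businesses, little sharing

if __name__ == "__main__":
    # Example: standardized processes and shared order data, as in the
    # automobile manufacturer above.
    print(operating_model(process_standardization=True, data_integration=True))
```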

10
Focusing on the operating model rather than on individual business strategies gives a company better
guidance for developing IT and business process capabilities. This stable foundation enables IT to become a proactive rather
than reactive force in identifying future strategic initiatives. In selecting an operating model, management defines the role of business
process standardization and integration in the company’s daily decisions and tasks.

The operating model concept requires that management put a stake in the ground and declare which
business processes will distinguish a company from its competitors. A poor choice of operating
model - one that is not viable in a given market - will have dire consequences. But not choosing an
operating model is just as risky. Without a clear operating model, management careens from one
market opportunity to the next, unable to leverage reusable capabilities. With a declared operating
model, management builds capabilities that can drive profitable growth.

Because the choice of an operating model guides development of business and IT capabilities, it
determines which strategic opportunities the company should and should not seize.
In other words,
the operating model, once in place, becomes a driver of business strategy. In addition, the required
architecture as well as the management thinking, practices, policies, and processes characteristic of
each operating model is different from one operating model to another. As a result, the operating
model could be a key driver of the design of separate organizational units.

11
Distinguishing Between Enterprise Architecture and IT Architecture
The IT unit typically addresses four levels of architecture below the enterprise architecture: business
process architecture (the activities or tasks composing major business processes identified by the
business process owners); data or information architecture (shared data definitions); applications
architecture (individual applications and their interfaces); and technology architecture (infrastructure
services and the technology standards they are built on).

The term enterprise architecture can be confusing because the unit in some companies refers to one
of these architectures - or the set of all four architectures - as the enterprise architecture. Our use of
the term refers to the high-level logic for business processes and IT capabilities. For the most part,
non-IT people need not be involved in the development of the detailed technical and applications
architectures guiding development of technical capabilities. However, they need to provide enough
detail on how they will execute processes, and what data those processes depend on, for the IT unit
to develop current solutions meeting long-term needs.

A high-level enterprise architecture creates shared understanding of how a company will operate,
but the convergence of people, process, and technology necessary to implement that architecture
demands shared understanding of processes and data at a more detailed level. The IT unit will
develop far more detailed architectures of applications, data and information, and technology. When
these drawings elaborate on enterprise architecture, they have considerable long-term value because
they provide the long-term context for immediate solutions.

12
While the architecture for a new building is captured in blueprints, enterprise architecture is often
represented in principles, policies, and technology choices. Thus, the concept can be difficult for
managers to get their arms around. We have found that a simple picture, which we refer to as the
“core diagram,” helps managers debate and eventually come to understand their company’s
enterprise architecture. This simple one-page picture is a high-level view of the processes, data, and
technologies constituting the desired foundation for execution.

The core diagram provides a rallying point for managers responsible for building out and exploiting
the enterprise architecture. It also has implications for the design of organizational roles and
structures. Although these structural requirements are not usually captured in the core diagram, roles
and reporting relationships also need to be aligned with the enterprise architecture.

All companies have entrenched legacy systems that are the accumulation of years of IT-enabled
business projects. Intentionally or not, the resulting capability locks in assumptions about internal
and external relationships and business process definitions. The role of the one-page core diagram is
to help facilitate discussions between business and IT managers to clarify requirements for the
company’s foundation for execution and then communicate the vision.

13
Enterprise Architecture for a Unification Model
In a unification operating model, both integration and standardization of business processes are
required to serve different key customer types. Technology is used to link as well as to automate
processes.

The top half of the slide identifies the process for designing the enterprise architecture core diagram
of a Unification company. For the core diagram of a Unification company, three elements are
required. Start by identifying the key customers (i.e. segments and/or channels) the company serves.
Next, list the key processes to be standardized and integrated. Then identify the shared data needed
to better integrate processes and serve customers. Finally, if there are key technologies that either
automate or link processes, these can be shown as well (optional elements are identified with a
dashed outline in the slide).

The bottom half of the slide presents the enterprise architecture core diagram of a company with a
Unification operating model. The diagram reflects a highly standardized and integrated environment
with standard processes accessing shared data to make products and services available to customers.
The core diagram may or may not show key technologies, depending on the significance of any
particular technology to the management vision.
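
The core diagram itself can be thought of as a small, structured inventory of the elements just listed. The sketch below is a hypothetical Python representation of a Unification core diagram (key customers, standardized and integrated processes, shared data, optional technologies); the field names and sample values are illustrative assumptions, not a published artifact.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CoreDiagram:
    """One-page view of the foundation for execution (Unification model).

    Elements follow the design steps described above; 'technologies' is
    optional, mirroring the dashed outline in the slide.
    """
    key_customers: List[str]                  # segments and/or channels served
    standardized_processes: List[str]         # processes standardized and integrated
    shared_data: List[str]                    # data shared across processes and units
    technologies: List[str] = field(default_factory=list)  # optional linking/automating technology

# Hypothetical example for a Unification company
diagram = CoreDiagram(
    key_customers=["Retail customers", "Dealers"],
    standardized_processes=["Order-to-cash", "Procure-to-pay"],
    shared_data=["Customer", "Product", "Order"],
    technologies=["Shared ERP platform"],
)
print(diagram)
```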

14
Enterprise Architecture for a Diversification Model
The Diversification operating model is the opposite of the Unification model and entails both low
integration and low standardization. Each business is run more or less independently, although there
can be opportunities for shared services across the company.

More often, companies adopting the Diversification operating model establish economies of scale
through a shared technology platform. These shared technologies are the key element of the
enterprise architecture core diagram. Shared technologies and services often include data centers, the
telecommunications network, offshore systems development and maintenance capability, centralized
vendor negotiations, and help desks. Diversification companies that value other shared services
might also represent some standard processes or even shared data in their core diagram, particularly
if a subset of business units is sharing data but hasn’t created a formal structure to manage it.

When designing a Diversification model core diagram, start with the technologies that can be shared
to provide economies of scale, standardization, or other benefits. Incorporate the remaining elements
- key customer types, business processes, and data - only when needed for the operating model. For
example, some Diversification companies require a standardized process and data for financial
reporting, risk management, and compliance across their business units. Providing a single interface
to common customers in a Diversification company, however, is rare.

15
Enterprise Architecture for a Coordination Model
The Coordination model provides integrated service to each key customer group. The integration
results from sharing key data across the business units to present a common face to the customer.
Because of their wide range of distinctive products, many large banks and insurance companies have
adopted a Coordination model.

The enterprise architecture core diagram for the Coordination operating model encapsulates a
company's integration emphasis and thus focuses on shared data (see this slide). Often, the core diagram
will also highlight important technology that depicts how stakeholders can access that data. Because
most processes in a Coordination model are unique, it is less important to show them on the core
diagram. However, it can be useful to show at least some of the processes to be coordinated.

When designing a Coordination model core diagram, start with key customers (e.g., segments and
channels) to be shared across business units. Next, identify the subset of the company data that must be
shared across the business units to serve key customers. Then, identify any technology that is key to the
data integration. While it is not essential to reflect the technology, it is usually helpful for business and IT
managers to understand, at a high level, the key to data integration. Finally, consider whether to
include business process elements.

16
Enterprise Architecture for a Replication Model
Replication operating models are successful when key processes are standardized across the
company and supported by automating technology. This Replication allows rapid expansion and
scalability of the business.

Replication operating models revolve around standardized processes. Thus an enterprise architecture
core diagram will show standard processes and, in most cases, the key technologies enabling those
processes. Data rarely appears in the core diagram because Replication companies don't typically
share data across business units. To improve the economics of Replication, these companies automate
key processes, often creating reusable business modules (shown as business processes surrounded by
technology in the slide). The enterprise architecture core diagram also typically shows shared
technologies linking the standardized processes.

When designing a Replication model core diagram, start with the key processes to be standardized
and replicated across the business units. Next, identify the technologies automating those key
processes. Then consider what linking technologies, if any, can be shared across the business units. It
is not usually necessary in a Replication model to share data or identify key customers. Instead, each
business unit makes those decisions locally.

17
Stage 1: Business Silos
In the Business Silos stage, companies focus their IT investments on delivering solutions for local business
problems and opportunities. These companies may take advantage of opportunities for shared infrastructure
services like a data center, but such shared services accommodate the unique needs of the local business units.
Companies in this stage do not rely on an established set of technology standards. The role of IT in the Business
Silos stage is to automate specific business processes.

Stage 2: Standardized Technology


In the Standardized Technology stage, companies shift some of their IT investments from local applications to
shared infrastructure (see slide). In this stage, companies establish technology standards intended to decrease the
number of platforms they manage. Fewer platforms mean lower cost. In some studies, Standardized Technology
companies had IT budgets that were 15 percent lower than Business Silos companies. But fewer platforms also
mean fewer choices for IT solutions. Companies are increasingly willing to accept this trade-off.

Stage 3: Optimized Core


In the Optimized Core stage, companies move from a local view of data and applications to an enterprise view. IT
staff eliminate data redundancy by extracting transaction data from individual applications and making it
accessible to all appropriate processes. In this stage companies are also developing interfaces to critical corporate
data and, if appropriate, standardizing business processes and IT applications. Thus, IT investments shift from
local applications and shared infrastructure to enterprise systems and shared data. The role of IT in the Optimized
Core stage is to facilitate achievement of company objectives by building reusable data and business process
platforms.

Stage 4: Business Modularity


The Business Modularity architecture enables strategic agility through customized or reusable modules. These
modules extend the essence of the business built into the infrastructure in the Optimized Core stage. Few
companies have reached the Business Modularity stage - so it is difficult to assess how IT investment patterns
change as companies move from the third to the fourth stage. In either case, the role of IT in a Business
Modularity architecture is to provide seamless linkages between business process modules. Modularity does not
reduce the need for standardization. Individual process modules build on the standard core and link to other internal modules.

18
The Engagement Model
We define the engagement model as the system of governance mechanisms assuring that business and IT projects
achieve both local and company-wide objectives.

At top performing companies the engagement model has three main ingredients:
1. Company wide IT governance: decision rights and accountability framework to encourage desirable behavior
in the use of IT
2. Project management: formalized project methodology, with clear deliverables and regular checkpoints
3. Linking mechanisms: processes and decision-making bodies that align incentives and connect the project-
level activities to the overall IT governance

In large companies, the IT engagement model contains six key stakeholder groups: the overall company
management, business management, and line or project management—each of which exists on both the IT and
business sides of the company. The different perspectives, objectives, and incentives of these groups create two
challenges: coordination and alignment.
At the company level,
senior leaders set direction,
create a climate for
success, and design incentives to meet company wide goals. Business unit leaders focus on the performance of
their business unit. Project leaders are typically entirely focused on the success of their projects, garnering all the
company resources they find, beg, borrow, or steal to get the job done. The IT engagement model coordinates
these three different levels: company, business unit, and project. The IT governance establishes high-level goals
and incentives. Project management applies best practices and company-specific project management tools and
techniques to every major project, ensuring local project success. Linking mechanisms ensure that, as projects
move forward, they reflect and inform the goals and priorities of all parties.

19
IT Governance
IT governance is the decision rights and accountability framework for encouraging desirable behaviors in the use
of IT. IT governance reflects broader corporate governance principles while focusing on the management and use
of IT to achieve corporate performance goals. IT governance shouldn’t be considered in isolation because IT is
linked to other key company assets (i.e., financial, human, know- how/intellectual property, physical, and
relational assets). Thus, IT governance might share mechanisms, such as executive committees and budget
processes, with other asset-governance processes, thereby aligning companywide decision-making processes.

IT governance encompasses five major decision areas related to the management and use of IT in a firm, all of
which should be driven by the operating model:
1. IT principles: high-level decisions about the strategic role of IT in the business
2. Enterprise architecture: the organizing logic for business processes and IT infrastructure
3. IT infrastructure: centrally coordinated, shared IT services providing part of the foundation for execution
4. Business application needs: business requirements for purchased or internally developed IT applications.
5. Prioritization and investment: decisions about how much and where to invest in IT, including project approval
and justification techniques

Each of these decisions can be made by corporate, business or functional managers - or some combination - with
the operating model as a guide. Thus, the first step in designing IT governance is to determine who should make,
and be held accountable for, each decision area.
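
That first design step can be captured as a simple decision-rights table. The sketch below is a hypothetical Python fragment; the role assignments are placeholders for illustration, not recommendations from the source.

```python
# Hypothetical sketch of the first IT governance design step: recording who
# makes, and who is accountable for, each of the five decision areas.

decision_rights = {
    "IT principles":                 {"decides": "Executive committee", "accountable": "CIO"},
    "Enterprise architecture":       {"decides": "IT leadership",       "accountable": "Chief architect"},
    "IT infrastructure":             {"decides": "IT leadership",       "accountable": "Infrastructure head"},
    "Business application needs":    {"decides": "Business unit leads", "accountable": "Business unit CIO"},
    "Prioritization and investment": {"decides": "Executive committee", "accountable": "CFO and CIO"},
}

for area, rights in decision_rights.items():
    print(f"{area:32s} decided by {rights['decides']:22s} accountable: {rights['accountable']}")
```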

20
Project Management
Project management has emerged as a critical competence in many, if not most, companies. Increasingly,
companies are adopting standardized project methodologies - either homegrown or industry-developed
approaches. A good project management methodology has well-defined process steps with clear deliverables to be
reviewed at regular checkpoints, often called "gates." Many companies design metrics for assessing project
performance and conduct implementation reviews to improve project managers' skills and the company's
methodology.

IT-related projects have long been guided by a project life cycle. Variations of the life cycle define a set of four
to eight project phases (e.g., proposal, requirements, specification, development, implementation, and change
management), each with a specific set of objectives, deliverables, and metrics. Good project management
establishes a set of gates that check on projects’ progress and assess their chances for meeting their goals.
Companies may have as many as twelve to fifteen gates during a project. Disciplined project management
processes are a necessary condition for good engagement. They ensure that all projects execute certain tasks at
certain times.
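
A minimal sketch of the gate idea follows; it is not any particular methodology, and the phase names and deliverables are assumptions taken loosely from the example phases listed above.

```python
# Minimal sketch of gate checks over a project life cycle: each gate verifies
# that the phase's deliverables exist before the project may proceed.

PHASES = ["proposal", "requirements", "specification",
          "development", "implementation", "change management"]

REQUIRED = {                                  # deliverables assumed for illustration
    "proposal": ["business case"],
    "requirements": ["requirements document"],
    "specification": ["solution design"],
    "development": ["tested build"],
    "implementation": ["deployment sign-off"],
    "change management": ["training plan"],
}

def pass_gate(phase: str, deliverables_done: dict) -> bool:
    """A gate passes only when every deliverable registered for the phase is complete."""
    return all(deliverables_done.get(d, False) for d in REQUIRED[phase])

status = {"business case": True, "requirements document": False}
for phase in PHASES:
    if not pass_gate(phase, status):
        print(f"Project held at gate: {phase}")
        break
```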

21
Linking Mechanisms
Companies with effective IT governance and disciplined project management can still have ineffective IT
engagement. The third essential ingredient of the IT engagement model is the linking mechanisms connecting
companywide governance and projects.

Good IT governance ensures that there’s clear direction on how to evolve the company’s foundation.

Good project management ensures that projects are implemented effectively, efficiently, and in a consistent
manner to maximize learning.

Good linking mechanisms ensure that projects incrementally build the company’s foundation and that the design
of the company’s foundation (its operating model and enterprise architecture) is informed by projects.

22
Types of Linking Mechanisms
This slide describes three important types of linking mechanisms for any IT engagement model: architecture
linkage, business linkage, and alignment linkage. These three types of linking mechanisms address the key
alignment and coordination concerns of a company as long as key stakeholders take responsibility for them - and IT
governance and project management are effective.

Architecture linkage establishes and updates standards, reviews projects for compliance, and approves exceptions.
Architecture linkage connects the IT governance decisions about architecture with project design decisions. For
example, a company working to increase integration may have a mechanism for insisting that a supply chain
project - rather than focus narrowly on its own data needs - restructure an inventory database so that it facilitates
anticipated future uses of the inventory data.

Business linkage ensures that business goals are translated effectively into project goals. Business linkage
coordinates projects, connects them to larger transformation efforts, and focuses projects on attacking specific
problems in the best possible way. For example, a key linking mechanism for companies pursuing companywide
standardized processes is the use of process owners with primary responsibility for designing and updating
processes.

Alignment linkage mechanisms ensure communication and negotiation between IT and business concerns.
Business-relationship managers or business unit CIOs are typically a critical linkage for translating back and forth
between business goals and IT constraints. Other mechanisms in this category include a project management
office, training and certification of project managers, and metrics for assessing projects.

23
A primary objective of the business viewpoint is to understand the processes of the business and the relationships
with other entities. One useful vehicle to promote this understanding is the Enterprise Value Network model,
which identifies the core value-adding processes executed by the enterprise and the relationships of the enterprise
with strategic partners. Every business has an enterprise value network; most successful companies use effective
partnerships to leverage minimal capital investments.

Their value is not based primarily on their products and services, but on the network of partnerships they have
created. The rapid creation and dissolution of partnerships is key to their agility. The dominant characteristic of
leading companies is the management of their value networks.
Therefore, analysis of the value network is a
required activity for the development of an enterprise architecture, since the architecture must support the value
network from a process, information and technology perspective.

The model must be objectively accurate, and is not a picture of the organization chart. The purpose of developing
an Enterprise Value Network model is to clarify the requirements that the value network places on the
architecture.

24
Service
A service is a means of delivering value to customers by facilitating outcomes customers want to achieve without
the ownership of specific costs and risks.

Services are a means of delivering value to customers by facilitating outcomes customers want to
achieve without the ownership of specific costs and risks. Outcomes are possible from the performance
of tasks and are limited by the presence of certain constraints. Broadly speaking, services facilitate
outcomes by enhancing the performance and by reducing the grip of constraints. The result is an
increase in the possibility of desired outcomes. While some services enhance performance of tasks,
others have a more direct impact. They perform the task itself.

Value composition
From the customer’s perspective, value consists of two primary elements: utility or fitness for purpose
and warranty or fitness for use.

Utility is perceived by the customer from the attributes of the service that have a positive effect on the
performance of tasks associated with desired outcomes. Removal or relaxation of constraints on
performance is also perceived as a positive effect.

Warranty is derived from the positive effect being available when needed, in sufficient capacity or
magnitude, and dependably in terms of continuity and security.

Utility is what the customer gets, and warranty is how it is delivered.

Customers cannot benefit from something that is fit for purpose but not fit for use, and vice versa.
25
A business process or business method is a collection of related, structured activities or tasks that produce a
specific service or product (serve a particular goal) for a particular customer or customers.

There are three types of business processes:


1. Management or governing processes, the processes that govern the operation of a system. Typical
management processes include "Corporate Governance" and "Strategic Management".
2. Operational or core processes, processes that constitute the core business and create the primary value
stream. Typical operational processes are Purchasing, Manufacturing, Marketing and Sales.
3. Supporting processes, which support the core processes. Examples include Accounting, Recruitment,
Technical support.

A business process begins with a customer’s need and ends with a customer’s need fulfillment. Process oriented
organizations break down the barriers of structural departments and try to avoid functional silos.

A business process can be decomposed into several sub-processes, which have their own attributes, but also
contribute to achieving the goal of the super-process. The analysis of business processes typically includes the
mapping of processes and sub-processes down to activity level.

Business Processes are designed to add value for the customer and should not include unnecessary activities. The
outcome of a well-designed business process is increased effectiveness (value for the customer) and increased
efficiency (lower costs for the company).
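
The decomposition from process to sub-process to activity can be represented as a small tree. The sketch below is an illustrative Python fragment with hypothetical process names; it simply prints a process map down to activity level.

```python
# Illustrative sketch of decomposing a business process into sub-processes
# and activities, as described above. Names are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Process:
    name: str
    kind: str = "core"                        # "management", "core" or "supporting"
    children: List["Process"] = field(default_factory=list)

    def walk(self, depth: int = 0) -> None:
        """Print the process map down to activity level."""
        print("  " * depth + self.name)
        for child in self.children:
            child.walk(depth + 1)

order_fulfilment = Process("Order fulfilment", children=[
    Process("Capture order", children=[Process("Validate customer data")]),
    Process("Ship goods", children=[Process("Pick and pack"), Process("Arrange carrier")]),
    Process("Invoice customer"),
])
order_fulfilment.walk()
```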

26
Governance for process improvement:
1. Executive Team: Sets and communicates strategic direction, develops process goals, responds to
recommendations and requests from Process Owner, includes process performance as part of operations
review.
2. Process Owner: Serves as executive champion/advocate for the process, serves as “voice of the customer” for
the process, chairs Process Management Team, sponsors improvement projects and best practices,
coordinates process improvement activities, helps resolve “white space” disputes, communicates process
performance information, monitors for process suboptimization and ensures process documentation is
maintained.
3. Design Team or Process Improvement Team (including Process Owner and Team Leader): Validates process
goals, develops, communicates, and oversees the implementation of process plans, ensures appropriate
measures are in place, monitors and takes action to improve process performance, evaluates and responds to
changes that could affect process performance, initiates and serves as Steering Team for process
improvements, determines need and secures resources for the process, develops guidelines/process
requirements for functional plans, evaluates and rewards performers based on their contribution to the
process.
4. Process Improvement Facilitator: Possesses knowledge and control of the process improvement
methodology, documentation and standards (international and regulatory), facilitates key meetings, and
advises Process Improvement Teams and Process Owners.
5. Functional Managers or Leaders: Report to Process Owner on process matters, set function goals to support
process goals, develop function plans to fulfill process requirements, allocate required resources to the
process, ensures functional core competencies provide competitive advantages for the process, ensure
performers have necessary skills, knowledge, and tools, ensures that functional tasks are executed to process
standards, monitor appropriate process measures and ensure performers receive appropriate feedback.

27
28
29
30
31
32
Boiling the Ocean. "Boiling the Ocean" by Martin Kihn, 2007.
The phrase in question gets its own special berth in ex-consultant Louis Gerstner's memoir of his years at IBM,
Who Says Elephants Can't Dance? (Harper Business, 2002): "Boil the Ocean -- to use all means and options
available to get something done." And that's just a drop in the bucket: An A.T. Kearney consultant told author
Betty Vandenbosch his firm's approach was for "when you don't have time to do all the boil-the-ocean analyses."
And after WorldCom settled its case with the SEC last year, a board member chastised critics, saying, "You
could boil the ocean and not satisfy people."

Does this expression hold water? Smelling something fishy, the CDU fired up its sonar and charted a course for
the truth, asking: How many consultants would it take to boil the ocean? And what would happen if they did?

Our first port of call was the National Oceanic and Atmospheric Administration (NOAA). Experts there helped
us figure that the world's oceans consist of 275 million cubic miles -- enough to fill 1 trillion boardrooms. Okay,
but can you boil it? "I don't think so," said NOAA's Carmeyia Gillis. "Maybe a little part of it, if you're right on
top of an active volcano or something." The problem is energy -- getting enough of it.

To find out how much, we consulted Michio Kaku, professor of theoretical physics at City University of New
York. He confirmed our worst fears: "It would take a lot of energy" -- 4.7 x 10^26 joules, give or take. "It would
probably require more energy than all the fuel on earth." Could a particularly powerful consulting firm do it?
"They wouldn't even know where to start," he snorted.

By our calculations, one day of "heavy consulting" involves about 1 x 10^7 joules of energy. Assuming no
vacations, this means every single person on earth would have to consult for more than 26 million years to
actually "boil the ocean."
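
As a quick check of the arithmetic, the small Python fragment below reproduces the figure; the world population used is an assumption chosen to match the article's "more than 26 million years" (with roughly 6.5 billion people the result is closer to 20 million years, the same order of magnitude).

```python
# Back-of-the-envelope check of the article's figure.
energy_needed_joules = 4.7e26        # Kaku's estimate for boiling the ocean
joules_per_consultant_day = 1e7      # one day of "heavy consulting"
population = 5e9                     # assumed number of consultants (see note above)

consultant_days = energy_needed_joules / joules_per_consultant_day
years = consultant_days / population / 365.25
print(f"about {years / 1e6:.0f} million years")   # roughly 26 million years
```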

Not to mention what would happen if they succeeded. Our final ahoy was to Jeffrey Chanton, professor of
oceanography at Florida State University. What would happen if consultants could boil the ocean? Chanton was
not encouraging: "It would mean the end of the life we know on earth. It is a terrible idea."

The phrase was popularized by Will Rogers, who was asked what could be done about U-boats. "Boil the
ocean," he suggested. When pressed for exactly how, he is supposed to have said: "I'm just the idea man here.
Get someone else to work out the details." Now that's true consulting-speak.

Martin Kihn is a management consultant and author of House of Lies: How Management Consultants Steal Your
Watch and Then Tell You the Time (Warner Books, March 2005).

33
34
NGOSS or "New Generation Operations Systems and Software" is the TeleManagement Forum’s programme to
provide ways to help Communication Service Providers to manage their business. NGOSS includes a set of
principles and technical deliverables.

NGOSS is based around 5 key principles:


1. Separation of Business Process from Component Implementation. When Operations Support Systems
(OSSs) are linked together, the business processes they support become distributed across the IT estate. In
effect the situation is reached where a process starts with application A, which processes some data and
then knows that it must call application B, which also does some processing and then calls C, etc., etc. The
result of this is that it's extremely difficult to understand where any of these flows actually are (e.g. if the
process flow is one intended to take a customer order, is it Application A or B or C that’s currently
handling that order?) and it's even more difficult to change the process owing to its distributed nature.
NGOSS proposes that the process is managed as part of the centralized infrastructure, using a workflow
engine that is responsible for controlling the flow of the business process between the applications.
Therefore, the workflow engine would initiate a process on application A, which would then return control
to the workflow engine, which would then call application B, and so on. In this way it's always possible to find
out where an individual process flow is, since it’s controlled by the central workflow engine, and process
modifications can be made using the engine’s process definition tools. Clearly some lower level process
flows will be embedded in the individual applications, but this should be below the level of business-
significant processing (i.e. below the level at which business policy and rules are implemented).
2. Loosely Coupled Distributed System. "Loosely coupled" means that each application is relatively
independent of the other applications in the overall system. Therefore, in a loosely coupled environment,
one application can be altered without the alteration necessarily affecting others. Taken to the extreme, this can
sometimes be viewed as producing the ability to "plug and play" applications, where they are so
independent that they can be changed without affecting the overall system behavior. That extreme is
considered an unlikely nirvana at the present time. The "distributed system" is emphasizing that NGOSS is
not based on a Communication Service Provider (CSP) using a single monolithic application to manage all
its activities, but is instead using a set of integrated and co-operating applications.
3. Shared Information Model. Integrating OSSs means that data must be shared between the applications.
For this to be effective, either each application must understand how every other application
understands/interprets that part of the data that is shared, or there must be a common model of the shared
data. To understand this, consider an order handling application which has gone through a process to enter
a customer order and where it now needs to send out a bill using application B (a billing system).
Application A will have a record of the customer address and it therefore needs to ensure that application B
sends the bill to this address. Passing this data between the systems simply requires a common format for
the address information – each system needs to expect the same number of address lines, with each line
being the same length. That’s fairly straightforward. But imagine the difficulty that would occur if the
ordering application worked on products that consist of bundles of sub-products (e.g. a broadband access
product made from a copper line, a modem, a set of filters and a broadband conversion), whereas the billing
application only expected single product/order lines. A single information model for data that is shared
between applications in this way provides a solution to this problem. The TMF solution to this is called the
Shared Information/Data Model (SID).
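
To illustrate the shared-model idea in principle 3, the sketch below is a hypothetical Python fragment: a single product definition that both the ordering and the billing application interpret the same way, so a bundle need not be flattened into single order lines. The class names are illustrative, not the TM Forum SID.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Address:
    lines: List[str]          # same number and length conventions for every system

@dataclass
class Product:
    name: str
    monthly_charge: float = 0.0
    components: List["Product"] = field(default_factory=list)   # bundles contain sub-products

    def total_charge(self) -> float:
        return self.monthly_charge + sum(c.total_charge() for c in self.components)

broadband = Product("Broadband access", components=[
    Product("Copper line", 10.0),
    Product("Modem", 5.0),
    Product("Filter set", 1.0),
    Product("Broadband conversion", 14.0),
])

# Both order handling and billing work from the same shared structure.
bill_to = Address(lines=["1 High Street", "Anytown"])
print(f"Bill {broadband.total_charge():.2f} to {', '.join(bill_to.lines)}")
```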
35
4. Common Communications Infrastructure. Originally (typically in the mid 1980s), computer-based OSSs
were developed as stand-alone applications. However, during the early 1990s it became apparent that
employing these as purely isolated applications was highly inefficient, since it led to a situation where, for
example, orders would be taken on one system but the details would then need to be re-keyed into another
in order to configure the relevant network equipment. Major efficiency gains were shown to be available
from linking the standalone OSSs together, to allow such features as "Flow-through provisioning", where
an order could be placed online and automatically result in equipment being provisioned, without any
human intervention. However, for large operators with many 100’s of separate OSSs, the proliferation of
interfaces became a serious problem. Each OSS needed to "talk to" many others, leading to the number of
interfaces increasing with the square of the number of OSSs. NGOSS describes the use of a Common
Communications Infrastructure (CCI). In this model, OSSs interface with the CCI rather than directly with
each other. The CCI thus allows applications to work together using the CCI to link them together. In this
way, each application only requires one interface (to the CCI) rather than many (to other applications). The
complexity is therefore reduced to one of order n, rather than n^2. The CCI may also provide other services,
including security, data translation, etc.

5. Contract defined interfaces. Given the description above of how applications interface to the CCI, it’s
clear that we need a way of documenting those interfaces, both in terms of the technology employed (e.g. is
it Java/JMS or Web services/SOAP?) but also the functionality of the application, the data used, the pre-
and post-conditions, etc. The NGOSS contract specification provides a means to document these interfaces,
and these are therefore contract defined interfaces. NGOSS contracts can be seen as extensions of
Application Programming Interface (API) specifications.
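
A loose Python analogue of a contract-defined interface follows: the contract records the functionality, data used and pre-/post-conditions alongside the callable interface. This mirrors the idea only; it is not the TM Forum NGOSS contract specification format, and the class and method names are assumptions.

```python
from typing import Protocol

class BillingContract(Protocol):
    """Contract: produce a bill for a customer order.

    Technology:     e.g. Web services/SOAP or Java/JMS (deployment choice)
    Data used:      shared Customer and Product entities
    Pre-condition:  the order has been accepted and priced
    Post-condition: a bill is recorded against the customer account
    """
    def issue_bill(self, customer_id: str, amount: float) -> str: ...

class SimpleBillingSystem:
    def issue_bill(self, customer_id: str, amount: float) -> str:
        # Placeholder implementation honouring the contract's post-condition.
        return f"Bill of {amount:.2f} issued to customer {customer_id}"

billing: BillingContract = SimpleBillingSystem()
print(billing.issue_bill("C-42", 30.0))
```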

NGOSS Technical Deliverables


1. Process Model. The eTOM (enhanced Telecom Operations Map) is the NGOSS business process
framework.
2. Shared Information Model. The NGOSS information model is the Shared Information/Data Model (SID).
3. Lifecycle Model. The NGOSS lifecycle model is aimed at defining the use and deployment of NGOSS
within an organization, and provides a framework for using the SID, eTOM and the NGOSS architecture.
The model is based on considerable earlier work, including Zachman Framework, Kernighan, Yourdon,
and the Object Management Group's Model Driven Architecture. The NGOSS lifecycle divides systems
development into 4 stages: requirements, system design, implementation and operation.
4. Contract Specifications. As stated earlier, the NGOSS Contract is the fundamental unit of interoperability
in an NGOSS system. Interoperability is important for each of the four views defined by the NGOSS
Lifecycle. For example, the Contract is used to define the service to be delivered, as well as to specify
information and code that implement the service. The Contract is also used to monitor, administer and
maintain the service and ensure that any external obligations of the contract (e.g., from an SLA) are met
and to define what measures to take if they are violated in some way.
5. Telecom Application Map. The Telecom Application Map (TAM) links process views and
data/information views to describe IT-type applications that service providers can procure.
36
Business View
This view is about "the identification of the business need". The purpose of the view is to document the business
requirements and all associated business activities that help to define the business requirements such as process
definition, policies, stakeholders, resources, etc.

In this view there are items relating to the business context, business process (eTOM “verbs”), business entities
(SID “nouns” and “verbs”) and the business flow. Together these make up a vocabulary for expressing the
business requirements. They will be expressed using an NGOSS Use Case styled narrative and captured in the
Contract consistent with reuse in later Views;
• SID business view definition and description;
• NGOSS Input: NGOSS Lifecycle Use Case for the Business View;
• NGOSS Output: NGOSS Contract from the Business View.

System View
This is about "modeling the system solution". In this View there is formal information modeling of business
needs and the desired system solution, done using a “grey box” perspective that places a focus on the points of
interoperability (interactions). There are items relating to system Contracts, COTS capabilities and policy,
process flow in terms of systems, information model, data specification, built s/w components and built COTS
components.

NGOSS Artifacts used/produced:


• SID system view definition and description;
• Distributed computing architecture (TNA);
• NGOSS Input: NGOSS Lifecycle Use Case for the System View + Business View Artifacts;
• NGOSS Output: NGOSS Contract for the System View + Business View Artifacts.

37
Implementation View
This is about "validating the proposed solution". The Implementation View maps the System View solution
models onto target technologies, potentially including a COTS component base assembly. There are items
relating to Contract implementations, Class instance diagrams, Data models, Execution environments, Trial/pilot
of system solution and Technology Neutral guidelines.

NGOSS Artifacts used/produced:


• Proposed solution model (built using the SID) and proposed distributed computing harness (specified using
the TNA);
• NGOSS Input: NGOSS Lifecycle Use Case for the Implementation View plus the Artifacts produced in the
Business and System Views;
• NGOSS Output: NGOSS Contract for the Implementation View + Artifacts from the Business View +
System View + NGOSS components built from the Business View and System View contracts and SID.

Deployment View
This is about "realizing the solution". Here there is the observable behavior of the solution operating in the ‘real
world’. There are items relating to Contract Instances, Components, full-scale run-time solution and technology
specific guidelines.

NGOSS Artifacts used/produced:


• NGOSS Input: NGOSS Lifecycle Use Case for the Deployment View and Artifacts produced from the
Business, System and Implementation Views;
• NGOSS Output: NGOSS Deployment View Contracts + NGOSS Components built from the Business
View + System View + Implementation View + Deployment View Contracts and SID.

NGOSS is based on 4 key toolsets that form the NGOSS Toolkit:


1. Business Process Framework – the eTOM
2. Enterprise wide information framework – the SID
3. Systems integration framework – the TNA
4. Applications Framework –TAM

38
The eTOM (enhanced Telecom Operations Map), published by the TM Forum, is a guidebook, the most widely
used and accepted standard for business processes in the telecommunications industry. The eTOM model
describes the full scope of business processes required by a service provider and defines key elements and how
they interact.

The eTOM is a business process framework, i.e. a reference framework or model for categorizing all the
business activities that a service provider will use. It is NOT a service provider business model. In other words,
it does not address the strategic issues or questions of who a service provider’s target customers should be, what
market segments should the service provider serve, what are a service provider’s vision, mission, etc. A business
process framework is one part of the strategic business model and plan for a service provider.

The eTOM model consists of Level-0, Level-1, Level-2 and Level-3 processes. Each level drills down to the
more specific processes.

The graphic representation of an eTOM model consists of rows and columns. The intersections of these rows
and columns point out to specific processes as specified by eTOM. The topmost row denotes the customer
facing activity i.e. marketing while the bottom most row indicates the supplier facing activity and the support
activities. In this manner the eTOM map indicates the whole value chain. The map thus also gives a good
indication of the interaction between the processes.

The broadly classified sections are Strategy, Infrastructure & Product, and Operations.

39
Description
This grouping focuses on the knowledge of running and developing the Core Business for an ICSP Enterprise.
It
includes functionalities necessary for defining strategies, developing new products, managing existing products
and implementing marketing and offering strategies especially suitable for information and communications
products and services. Marketing and offer management are well known business processes, especially in the
more competitive ebusiness environment, where the rate of innovation and brand recognition determine success.
Although most companies carry out all these activities, depending upon the size of the company, they are
combined in a variety of ways. These processes are enabling processes, but also the key processes that are
accountable for commitment to the enterprise for revenue, overall product performance and profit and loss.
These processes deal with product, markets and channels; they manage market and product strategies, pricing,
sales, channels, new product development (and retirement), marketing communications and promotion.

40
The Shared Information/Data (SID) business view model can be viewed as a companion model to the eTOM, in
that it provides an information/data reference model and a common information/data vocabulary from a business
entity perspective. The business view model uses the concepts of domains and aggregate business entities (or
sub-domains) to categorize business entities, so as to reduce duplication and overlap. Based on data affinity
concepts, the categorization scheme is necessarily layered, with each layer identifying in more detail the
“things” associated with the immediate parent layer. This partitioning of the SID business view model also
allows distributed work teams to build out the model definitions while minimizing the flow-on impacts across
the model.

Teamed with the eTOM, the SID model provides enterprises with not only a process view of their business but
also an entity view. That is to say, the SID provides the definition of the ‘things’ that are to be affected by the
business processes defined in the eTOM. The SID and eTOM in combination offer a way to explain ‘how’
things are intended to fit together to meet a given business need. It should be noted that while both the eTOM
process framework and the SID business view model are layered, there is not necessarily a one-one relationship
between the layers in each model.

The primary relationship identifies the element of the eTOM process framework, which is responsible for
creating instances of the SID business entity. The presumption is that only one primary relationship exists
between any SID business entity and an eTOM process element. In other words an enterprise should only use
one process to create and delete instances of a specific business entity,
to reduce the risks of misaligned and/or
non-unique information within the enterprise. This relationship underpins the concept of the single database of
record or master database.

The SID business view addresses the information and communication service industry’s need for shared
information/data definitions and models. The definitions in the business view focus on business entity
definitions and associated attribute definitions. A business entity is a thing of interest to the business, while its
attributes are facts that further describe the entity. Together the definitions provide a business oriented
perspective of the information and data.
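
The "single primary relationship" rule described above can be illustrated with a small sketch: each business entity type has exactly one owning process allowed to create or delete its instances. The registry and names below are hypothetical, not part of the SID or eTOM.

```python
# Illustrative sketch of the single-creator rule underpinning the
# "single database of record" concept.

entity_owner = {}   # entity type -> the one eTOM process element that creates it

def register_owner(entity: str, process: str) -> None:
    if entity in entity_owner and entity_owner[entity] != process:
        raise ValueError(f"{entity} already owned by {entity_owner[entity]}")
    entity_owner[entity] = process

def create_instance(entity: str, process: str) -> str:
    if entity_owner.get(entity) != process:
        raise PermissionError(f"{process} is not the owning process for {entity}")
    return f"{entity} instance created by {process}"

register_owner("CustomerOrder", "Order Handling")
print(create_instance("CustomerOrder", "Order Handling"))   # allowed
# create_instance("CustomerOrder", "Billing")               # would raise PermissionError
```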

41
Customer Billing Credit

The creation of CustomerBillingCredit instances is usually governed by ProductOfferingPriceRules. The
ProductOfferingPriceRule or some other business logic can grant a certain credit to a customer account. When a
part of the credit is applied during the billing process, the available credit is reduced. The CustomerBillingCredit
and related business entities keep track of the credit. Because the balance of the CustomerBillingCredit changes
over the time, the CustomerBillingCreditBalance entity represents the balance valid for a certain time period.

The CustomerDiscount business entity is a type of the CustomerBillingCredit. It keeps track of the available
discount quantity for the associated DiscountProductPriceAlteration. For example, the
DiscountProductPriceAlteration grants $100 discount on the Product charge. As the customer uses the service,
the billing process is creating AppliedCustomerBillingCharge instances during each billing cycle, and the
available discount amount is reduced. The billing process utilizes the CustomerDiscountBalance to keep track of
the available discount (that has not been used yet).

The CustomerAllowanceBalance business entity similarly keeps track of the available allowance (that has not
been used yet).

This slide merely serves as an example of possible types of CustomerBillingCredit; the model can easily be
extended to keep track of other credits.
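The sketch below uses simplified, hypothetical stand-ins for the entities described above (not the normative SID definitions) to show how a granted discount balance might be drawn down as AppliedCustomerBillingCharge instances are created across billing cycles.

from dataclasses import dataclass

# Hypothetical, simplified stand-ins for the SID entities discussed above.
@dataclass
class CustomerDiscount:                 # a type of CustomerBillingCredit
    granted_amount: float               # e.g. $100 granted by a price alteration

@dataclass
class CustomerDiscountBalance:          # balance valid for a certain time period
    remaining: float

@dataclass
class AppliedCustomerBillingCharge:
    amount: float
    discount_applied: float = 0.0

def run_billing_cycle(charge_amount, balance):
    """Apply as much of the remaining discount as possible to one charge."""
    discount = min(charge_amount, balance.remaining)
    balance.remaining -= discount       # the available (unused) discount is reduced
    return AppliedCustomerBillingCharge(amount=charge_amount - discount,
                                        discount_applied=discount)

discount = CustomerDiscount(granted_amount=100.0)
balance = CustomerDiscountBalance(remaining=discount.granted_amount)
for usage in (30.0, 50.0, 40.0):        # three billing cycles of usage charges
    charge = run_billing_cycle(usage, balance)
    print(charge, "remaining discount:", balance.remaining)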

42
The Telecom Applications Map provides the bridge between the NGOSS framework building blocks (eTOM
and SID) and real, deployable, potentially procurable applications by grouping together process functions and
information data into recognized OSS and BSS applications or services.

The Telecom Applications Map has been designed to be of use to the entire spectrum of players in the telecom
software value chain. It may be used for a variety of functions and allows both the operator and supplier
communities worldwide to have a common frame of reference in describing both their current and future needs
and intentions. For example, an operator could use the Map to model its current (as-is) OSS applications in a
structured format, as well as to develop a (to-be) future model and derive a clear gap analysis. By using this
common layout and nomenclature, consultants, suppliers, and system integrators can much more easily
understand the current and future landscape and the associated requirements.
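Purely by way of example, the sketch below represents the as-is and to-be views as mappings from TAM-style application areas to deployed systems and derives the gaps between them; the area and system names are invented for the illustration, not an extract of the Telecom Applications Map.

# Illustrative as-is / to-be gap analysis over TAM-style application areas.
as_is = {
    "Campaign Management": {"LegacyCampaignTool"},
    "Order Management":    {"OrderSysFixed", "OrderSysMobile"},
    "Billing":             {"BillingSuite"},
}
to_be = {
    "Campaign Management": {"UnifiedCampaignSuite"},
    "Order Management":    {"ConvergedOrderMgr"},
    "Billing":             {"BillingSuite"},
    "Revenue Assurance":   {"RAPlatform"},
}

def gap_analysis(current, target):
    for area in sorted(set(current) | set(target)):
        cur, tgt = current.get(area, set()), target.get(area, set())
        retire, introduce = cur - tgt, tgt - cur
        if retire or introduce:
            print(f"{area}: retire {sorted(retire)}, introduce {sorted(introduce)}")
        else:
            print(f"{area}: no change")

gap_analysis(as_is, to_be)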

Alternatively, a supplier may wish to use the Map to highlight the systems that they supply and the systems that
they partner with other companies to deliver. It may be used to show both current and future portfolios. Investors
or financial analysts may find the Map useful to describe the OSS market in terms of its growth, value, etc.
Others may find the Map a useful starting point in assembling directories of suppliers active in each segment of
the Map.

43
Campaign Management
Application Identifier: 3.1

Overview
The Campaign Management applications are responsible for managing the lifecycle of marketing campaigns,
sometimes referred to as "closed loop marketing". Service Provider marketers need to respond to changing
market environments with marketing initiatives that push highly targeted messages to increasingly focused
segments. Marketers need an adaptable and flexible campaign management application that can adjust to
evolving customer lifecycles with corresponding targeted marketing strategies. Marketers need to deliver
coordinated outbound and inbound campaigns across all points of interaction, focusing marketing resources
where the greatest potential value exists. The campaign management application needs to:
1. Leverage a single, consistent view of customer data.
2. Be highly usable, which increases marketing productivity and effectiveness.
3. Provide valuable insight into marketing performance through analytics that enable marketers to continually
adjust and improve marketing investments.

Functionality
The campaign management applications have the following capabilities:
1. Campaign Analytics
2. Campaign Design
3. Lead Generation
4. Campaign Execution & Refinement
5. Performance Tracking

Supported Contracts
1. Manage Business Intelligence
2. Manage Dashboard
3. Manage Predictive Analytics
4. Send Recommendation(s)
5. Consume Recommendation Success

44
A Contract specifies the requirements for a business interaction between two systems. It does this by considering
a managed entity, or entities, over an extended period, such as the lifetime of that entity (but not necessarily so).

Contracts specify the relationship between telecom applications. TAM provides the application map and
Contracts provide the relationship between the applications. The Contracts are specified with a data model that
conforms to SID.

At its most general, an NGOSS Contract is designed to have a similar role to a real world contract in that it
relates two trading entities. This may be customer and service provider, service provider and vendor or trading
entities that are internal to the service provider (SP). This brings out the additional value of a contract over
traditional interfaces, as it describes not only functions over an interface but also a trading relationship with
associated service level agreements (SLAs) and lifecycle.
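A minimal sketch of that idea follows; the field layout, the provider name and the example SLA are assumptions made for illustration and are not the normative NGOSS Contract specification.

from dataclasses import dataclass, field
from typing import List

# Illustrative only: a contract is more than an interface signature; it also captures
# the trading relationship (parties), SLAs and a lifecycle state.
@dataclass
class ServiceLevelAgreement:
    metric: str          # e.g. "response time"
    target: str          # e.g. "< 2 seconds for 95% of requests"

@dataclass
class Contract:
    name: str
    consumer: str                      # trading entity using the capability
    provider: str                      # trading entity offering the capability
    operations: List[str]              # the exposed interactions
    slas: List[ServiceLevelAgreement] = field(default_factory=list)
    lifecycle_state: str = "defined"   # e.g. defined -> deployed -> retired

send_recommendation = Contract(
    name="Send Recommendation(s)",     # one of the Campaign Management contracts listed above
    consumer="Campaign Management",
    provider="Recommendation Engine",  # hypothetical provider for the example
    operations=["sendRecommendation"],
    slas=[ServiceLevelAgreement("response time", "< 2 s for 95% of requests")],
)
print(send_recommendation)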

45
46
What an Iterative Methodology Provides
• A way of defining a solution from your business problem through to operation
• A systematic and formalized approach to specification and development
• Final solution built through an iterative process
• A balanced handling of all stakeholder concerns
• Concurrent involvement from all stakeholders
• Local flavors to global solutions

Benefits
1. Ensures the needs of all stakeholders are managed/controlled
2. Traceability throughout solution lifecycle
3. Change requirements can be easily incorporated and their impact assessed
4. Promotes progressive and collective understanding
5. All stakeholders win
6. Clear roles for all stakeholders

47
SANRR Methodology
• SCOPE: Define Solution Boundary including Solution Mission, Goals, and High-Level Use Cases
• ANALYZE: Document existing (legacy) and desired environments with detailed Use Cases, Process Maps,
Activities and Policy Lists
• NORMALIZE: Map current view onto common vocabulary to achieve a “single unified model” (using SID)
• RATIONALIZE: Examine normalized model for needed changes (Gap Analysis, Replication Analysis,
Conflict Analysis). Terminate when no more changes needed
• RECTIFY: Modify, delete or add functionality (Contractually Specified) to resolve needed changes
identified in Step 4. Once complete, cycle to Step 3.

48
SANRR Methodology
• SCOPE: Define Solution Boundary including Solution Mission, Goals, and High-Level Use Cases
• ANALYZE: Document existing (legacy) and desired environments with detailed Use Cases, Process Maps,
Activities and Policy Lists
• NORMALIZE: Map current view onto common vocabulary to achieve a “single unified model” (using SID)
• RATIONALIZE: Examine normalized model for needed changes (Gap Analysis, Replication Analysis,
Conflict Analysis). Terminate when no more changes needed
• RECTIFY: Modify, delete or add functionality (Contractually Specified) to resolve needed changes
identified in Step 4. Once complete, cycle to Step 3.

49
What Reusability Provides
• Leverages existing industry & corporate knowledge and functionality
• Supports the identification and use of applications in multiple contexts

Benefits
1. Build from global experience
2. Draw from global knowledge to create focused local solutions
3. Reduce integration cost & time to market
4. Reduce business and project risks

50
What Model Based Provides
• A unifying information framework
• A definition of all assets and concerns, and their relationships in a formal way
• A framework for capturing the information needs from all stakeholders
• A set of guidelines for transformations that can:
– Generate data models from the NGOSS common information model
– Translate and map high-level business and regulatory rules into deployable, low-level network and
system policies and processes

Benefits
1. Provides a common language for use across the business
2. Ensures traceability between business requirements and outcomes
3. Simplifies interoperability

51
What Federation Provides
• Links distributed & diverse information into a common structure
• A holistic governance framework for controlling distributed problem solving
• Support for interoperability across organizational, corporate and regulatory boundaries

Benefits
1. Allows problem solving across organizational boundaries
2. Draws on knowledge from other industry associations, standards bodies and authorities
3. Recognizes expertise from other industries
4. Enables a “Divide & Conquer” approach to problem solving

52
What Policy Enabled Provides
• A mechanism for linking system control to business objectives
• Visibility and access to business rules throughout the systems

Benefits
1. Ability to modify process and system behavior based on changing business circumstances
2. Allows policy makers at all levels to govern the behavior of the systems
3. Allows tuning of systems to improve organizational effectiveness
4. Reduces time to make system changes

53
What Process Provides
• An industry-wide business process framework with a common modeling approach
• A mechanism for process control & coordination

Benefits
1. Provides a common process language
2. Facilitates chaining of processes and end-to-end visibility of process flows
3. Improves the ability to re-use existing process implementations by other applications
4. Facilitates the extension of business processes across organizational and company boundaries

54
What Contract Provides
• A de-coupling of solution specification from implementation technologies
• Solutions are delivered as a set of structured capabilities
• A mechanism to fully describe the exposed interactions between the delivered capabilities
• A mechanism to identify and locate offered capabilities
• A contractual obligation between capability user and provider

Benefits
1. Eases extension and scaling of already delivered solutions
2. Simplifies the orchestration of processes, policies and capabilities to meet changing business needs

55
NGOSS Conclusions
• Builds on industry best practices and foundations already in place,
• Organizes relevant information so that it is useful to all stakeholders,
• Maintains integrity of the solution throughout its lifecycle allowing “what-if” analysis across the lifecycle,
• Is collaborative and balanced across all stakeholder ecosystems,
• Puts policy and process control in the hands of the business.

56
The Process Framework and Information Framework are naturally related. Processes act on entities defined
within the Information Framework. The Integration Framework defines the interaction between processes and
entities in more detail by describing the interaction in terms of details that characterize the entities, contained
within Business Service (aka Contract) and interface specifications and their implementations. The Application
Framework, composed of application areas, describes the processes and entities supported by the application
areas and also serves as an application oriented catalog of Business Services, also referred to as Services in this
guide book, and interfaces.

There are various entry points into the frameworks, based upon the focus and needs of the frameworks' user. For
example, if the focus is on a reusable set of web services, the entry point would lead into the Application
Framework and/or the Integration Framework.

57
Standards need to help companies at whatever evolutionary phase they are in. The four phases are:
• Application Silos
• Application Standardization
• Process and Information Standardization
• Service Oriented Enterprise

If a company is still treating architecture as a procurement checklist, then something like the Application
Framework is a useful thing. Companies which start to launch process harmonization initiatives should be able
to turn to something like the Process Framework. IT departments which find the courage to tackle data diversity
will train their solution designers in the Information Framework. These early-phase approaches to architecture,
which progressively use all or parts of the frameworks in a phased implementation, do generate significant
benefit – reducing complexity, increasing speed, and starting to codify process within the organization so that
organizational learning is streamlined.

The use of these standards is shown in this slide - Leveraging Industry Frameworks and Guides.

58
Service providers/operators often follow the approach to implementation planning described in the following
example.
Immediate uses identified during the implementation planning workshop included:
1. IMS (IP Multimedia Subsystem) planning
2. Next generation network planning
3. New service planning
4. Fixed-mobile convergence planning.

IMS planning, next generation network planning, and new service planning can be facilitated by employing both
the Process Framework and the Information Framework. Applicable Level 3 processes within the Product
Lifecycle Management, Infrastructure Lifecycle Management, and Operations Support & Readiness and their
descriptions will be used as a checklist in the planning process to assist the IMS planning and to ensure that all
processes are enabled for the new networks and/or services. Similarly, the Information Framework that supports
these Level 1 vertical processes will be used to ensure that all information areas are considered during the
planning process and that information is available to support operational processes.

During the discussions surrounding the frameworks' support of the planning process, it is often found that a
service provider does not have a complete process model that could be used to support planning.

As the first step, the TM Forum Application Framework (TAM) can be used as a reference map for mapping the
enterprise application inventory. The mapping process surveys the deployed product and solutions functionality
against the functionality description in the Application Framework. This step allows the discovery of overlapped
functionality between deployed applications or domain-complementary applications. In our example of fixed and
mobile line-of-business convergence, it is likely that we will find two applications handling product inventory.
One application is used for mobile product inventory management and another application for fixed product
inventory management. Function-wise, the two applications carry out similar processes for inventory
management on two domains. The next step would be to harmonize the information models in support of the
two domains.

The Information Framework's Unified Modeling Language (UML) model can be used to support the
convergence of fixed and mobile lines of business from a database convergence perspective. One example used
during the planning workshop was the use of the Information Framework to support the convergence of fixed
and mobile product inventories.
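The sketch below is one purely illustrative way to picture that convergence step: two line-of-business inventory records with different field layouts are mapped onto a single common, SID-style product shape (all field names on both sides are invented for the example).

# Illustrative sketch: harmonizing fixed and mobile product inventory records
# onto one common, SID-style representation. All field names are invented.
def from_fixed(record):
    """Map a fixed-line inventory record to the common product shape."""
    return {"product_id": record["circuit_ref"], "customer_id": record["account_no"],
            "offering": record["package"], "line_of_business": "fixed"}

def from_mobile(record):
    """Map a mobile inventory record to the same common product shape."""
    return {"product_id": record["msisdn"], "customer_id": record["subscriber_id"],
            "offering": record["tariff_plan"], "line_of_business": "mobile"}

converged_inventory = [
    from_fixed({"circuit_ref": "FX-001", "account_no": "A42", "package": "Broadband 100"}),
    from_mobile({"msisdn": "447700900123", "subscriber_id": "A42", "tariff_plan": "Unlimited"}),
]
for product in converged_inventory:
    print(product)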

59
Short term (three to six month project duration) uses identified during the implementation planning workshop
included:
1. Application inventory – Process and Information frameworks
2. Process mapping – Process Framework
3. Process, Information, and Service frameworks for new process automation projects.

An inventory of applications mapped to the Process and Information frameworks can be used to identify
possible duplicate functionality and can be used to perform impact analysis when making changes to related
applications. It can also be used to identify possible areas for process automation.
Mapping currently documented processes to the Process Framework can be used to show gaps in current
process documentation, where the framework's processes can be used as a starting point.

Using the frameworks for new process automation projects probably has the largest benefit. Rather than starting
from scratch, applicable frameworks can be used as a starting point for projects. This use of the frameworks is
also easy to justify quantitatively by estimating the cost of developing what already exists and can be reused.

60
Long term (six month plus project duration) uses identified during the implementation planning workshop
included:
1. Revenue Assurance, along with applicable contents of the Process Framework and the Information Framework
2. Data warehouse for Customer, Product, and Billing - use of Information Framework and model
3. Converged business lines and applications, for example fixed and mobile
4. Integration Framework (APIs from the Integration Framework) combined with an Information Framework
based integration hub.

Revenue Assurance is of great interest to the service provider. There is a wealth of information available from
the TM Forum regarding this topic. Rather than starting this effort from scratch, the service provider should use
existing TM Forum documentation.

The convergence of business lines and their associated applications is also of interest to the service provider. An
example of how the Information Framework can be used to support this was provided in the “Immediate Uses”
section of this chapter. In addition to the Information Framework, the Process and Integration frameworks can
be used to support this effort. Quantitative benefits can easily be calculated based on the amount of time it
would take for the service provider to begin developing converged information and process models from scratch.

As new applications are developed and existing applications are enhanced, the service provider should consider
using the Information Framework as part of an application integration framework.
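As a rough illustration of what such an integration framework might do, the sketch below translates application-specific payloads into one canonical, Information Framework-style customer record before passing them on; the payload layouts and adapter names are assumptions made for the example.

# Illustrative integration-hub sketch: normalize application-specific payloads
# into one canonical customer shape. Payload layouts are invented for the example.
def crm_adapter(payload):
    return {"customer_id": payload["crmId"], "name": payload["fullName"]}

def billing_adapter(payload):
    return {"customer_id": payload["acct"], "name": payload["acct_name"]}

ADAPTERS = {"crm": crm_adapter, "billing": billing_adapter}

def to_canonical(source, payload):
    """Translate a source-specific payload into the canonical customer record."""
    adapter = ADAPTERS.get(source)
    if adapter is None:
        raise ValueError(f"no adapter registered for source '{source}'")
    return adapter(payload)

print(to_canonical("crm", {"crmId": "C-7", "fullName": "Acme Telecom Ltd"}))
print(to_canonical("billing", {"acct": "C-7", "acct_name": "Acme Telecom Ltd"}))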

61
Long term (six month plus project duration) uses identified during the implementation planning workshop
included:
1. Revenue Assurance, along with applicable contents of the Process Framework and the Information Framework
2. Data warehouse for Customer, Product, and Billing - use of Information Framework and model
3. Converged business lines and applications, for example fixed and mobile
4. Integration Framework (APIs from the Integration Framework) combined with an Information Framework
based integration hub.

Revenue Assurance is of great interest to the service provider. There is a wealth of information available from
the TM Forum regarding this topic. Rather than starting this effort from scratch, the service provider should use
existing TM Forum documentation.

The convergence of business lines and their associated applications is also of interest to the service provider. An
example of how the Information Framework can be used to support this was provided in the “Immediate Uses”
section of this chapter. In addition to the Information Framework, the Process and Integration frameworks can
be used to support this effort. Quantitative benefits can easily be calculated based on the amount of time it
would take for the service provider to begin developing converged information and process models from scratch.

As new applications are developed and existing applications are enhanced, the service provider should consider
using the Information Framework as part of an application integration framework.

62
63
64
65
66
67
Process requires three separate but interdependent efforts. First, there’s the need for planning. Next comes
performance, or execution. The third effort involves measurement and management support. The power of the
overall process depends on a healthy balance between all three.

Historically, most of the effort invested in process improvement has been spent on component #2: perform.
Organizations polish their process execution to a high sheen. They eliminate unnecessary steps. They reduce the
number of handoffs. They minimize the amount of time wasted on low-value-adding work.

Eventually the process itself becomes a work of art. Problem is, this bright and shiny process that’s receiving so
much attention is more or less an orphan. Both the front-end effort (planning) and the back-end work
(measurement and management support) are missing to a large degree. As a result, the process is misdirected...
disconnected from the company strategy... or impotent due to a lack of follow-through.

This tendency to over-focus on process execution is unfortunate. Organizations spend roughly 80 to 90 percent
of their time there. But our experience suggests that the greatest opportunities for gain exist in the other two
areas.

Take planning. Why invest more time and effort there? Well, this increases the odds that the process will be
pointed in the right direction. It’s an alignment issue. Good planning ensures that the process is linked to the
organization’s overall business strategy. That’s essential. Because even an outstanding process can still be of
very questionable value if it's off target in serving the company's higher-level goals. Likewise, careful planning
is necessary to link the goals of your process to other core processes in the organization. No process exists in
isolation; it exists within a system of processes.

Planning also helps sort out precisely what you want a process to do. Set clear performance expectations up
front, and you’re in a position to measure outcomes. You need to know exactly what you’re shooting for, and
communicate it very clearly to all the people involved. Without that, any attempt at measurement later on is a
rather worthless endeavor.

On the heels of planning and execution, the process needs measurement and management support. You analyze
to improve the process's effectiveness going forward. The idea here is to dispassionately critique process
outcomes, proceeding according to sound logic instead of managing by gut feel. No more going on guesswork in
trying to improve the process. You rely on disciplined measurement to remove the emotion from the situation.
Instead of being subjective, or playing a hunch, you can react to real problems.

68
69
Activity-based process modeling
Here the overall process is decomposed into tasks that are ordered based on the dependencies among
them. The fundamental entity of a business process for the Activity-based approach is the unit of
work and a business process is considered to be a succession of activities, or units of work, following
a specific control flow.

Definition: Activity
An activity represents a unit of work performed by a party or system. Activities transform inputs into
outputs and are associated with triggers and outcomes (pre- and post-conditions).
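To make the activity-based view concrete, here is a minimal, illustrative sketch (not drawn from any standard): activities are units of work with pre-conditions, executed in a simple sequential control flow that transforms a shared state.

from dataclasses import dataclass
from typing import Callable, List

# Minimal illustrative sketch of activity-based modeling: a process is a succession
# of activities (units of work) executed in control-flow order.
@dataclass
class Activity:
    name: str
    perform: Callable[[dict], dict]                       # transforms inputs into outputs
    precondition: Callable[[dict], bool] = lambda state: True

def run_process(activities: List[Activity], state: dict) -> dict:
    for activity in activities:                           # simple sequential control flow
        if not activity.precondition(state):
            raise RuntimeError(f"precondition failed before '{activity.name}'")
        state = activity.perform(state)
        print(f"completed: {activity.name} -> {state}")
    return state

order_process = [
    Activity("Capture order", lambda s: {**s, "order": "captured"}),
    Activity("Validate order", lambda s: {**s, "valid": True},
             precondition=lambda s: s.get("order") == "captured"),
    Activity("Schedule delivery", lambda s: {**s, "delivery": "scheduled"},
             precondition=lambda s: s.get("valid", False)),
]
run_process(order_process, {})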

Communication-based Process Modeling


In this approach, an action in a process flow is represented by the communication between a
consumer and a provider. In the communication-based approach the communication is the message.
So a business process can be expressed as an exchange of messages, or transaction, between two or
more roles, and every state change within a company can be associated with the processing of a
message.

Artifact-based Process Modeling


In the artifact-based approach objects, or artifacts, are created, modified and used during the process
and thus the model is based on work products and their paths through a series of workflow activities.

Hybrid approach to Process Modeling
The hybrid approach uses a combination of these general approaches to produce a set of models for
an organization’s processes. Typical models might be based on an information flow model (from the
communication-based approach), a capabilities model (from the artifact-based approach) and a
process-model (activity-based approach).

70
71
72
73
74
75
Note:
An exercise is underway to identify the SID ABEs (Aggregate Business Entities) which are
associated with eTOM Process Elements.

76
77
Notes:
1. The definition of the eTOM Process Elements themselves does not address these types of flow.
However the eTOM does include in Addendum F sample process flows and depictions of
process interaction in swimlanes. These are examples of control flow.
2. Traceability also applies to swimlanes in eTOM process flows. (See Principle eTOM.08)

78
Notes:
1. The definition of the eTOM Process Elements themselves does not address these types of flow.
However the eTOM does include in Addendum F sample process flows and depictions of
process interaction in swimlanes. These are examples of control flow.
2. Traceability also applies to swimlanes in eTOM process flows. (See Principle eTOM.08)

79
The eTOM Process Elements and example process flows are a process view of the enterprise
behavior, based on sequences of activity. However, there are also dynamic aspects pertaining to the
processes and their interaction. These are considered below.
1. Temporal aspects. There may be time-based requirements in the triggering of processes,
triggering frequencies and possible delays between process steps. Process step durations
(minimum, maximum, average durations) can also be indicated.
2. Co-operative activities. In practice, it is common that two or more activities of two different
processes must work co-operatively, e.g. to exchange messages or objects. Methods include
message passing and patterns.
3. Process communication. In the case where processes must communicate, this means that some
activities of one process must interact with activities of other processes. The previous
mechanisms for co-operative activities can be used.
4. Process synchronization. Process synchronization can happen in three different forms: (1)
synchronization by events, (2) synchronization by messages and (3) synchronization by object
flows.
5. Exception handling mechanisms. Process models often only model the ideal structure of a
business process. Real-world situations mostly consist of dealing with exceptions. Exceptions
can either be predictable or unpredictable.

80
The eTOM Process Elements and example process flows are a process view of the enterprise
behavior, based on sequences of activity. However, there are also dynamic aspects pertaining to the
processes and their interaction. These are considered below.
1. Temporal aspects. There may be time-based requirements in the triggering of processes,
triggering frequencies and possible delays between process steps. Process step durations
(minimum, maximum, average durations) can also be indicated.
2. Co-operative activities. In practice, it is common that two or more activities of two different
processes must work co-operatively, e.g. to exchange messages or objects. Methods include
message passing and patterns.
3. Process communication. In the case where processes must communicate, this means that some
activities of one process must interact with activities of other processes. The previous
mechanisms for co-operative activities can be used.
4. Process synchronization. Process synchronization can happen in three different forms: (1)
synchronization by events, (2) synchronization by messages and (3) synchronization by object
flows.
5. Exception handling mechanisms. Process models often only model the ideal structure of a
business process. Real-world situations mostly consist of dealing with exceptions. Exceptions
can either be predictable or unpredictable.

81
A layered approach to the handling of responsibilities and information is taken in the eTOM.
Responsibility for association / translation between layers is generally positioned at the lower layer.
For example, the Customer Relationship Management (CRM) layer manages Customer Problems
and the Service Management & Operations (SM&O) layer manages the Service Problems that may
be associated, but it is the responsibility of the SM&O processes to map between the Customer Problems and
these Service Problems.

Thus CRM provides the Customer Problem (or some appropriate information from this) to SM&O,
which must then associate the one (or more) Service Problems that derive from this Customer
Problem. Any ongoing interaction between Customer and Service layers is therefore in terms of
Customer Problems (or information based on these) and not Service Problems, which are managed
wholly within the Service layer.
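Purely as an illustration of where that mapping responsibility sits, the sketch below lets a stand-in for the SM&O processes record which Service Problems derive from a Customer Problem handed down from CRM; the class and field names are invented for the example.

from dataclasses import dataclass
from typing import Dict, List

# Illustrative only: the lower (Service) layer owns the association between Customer
# Problems received from CRM and the Service Problems it manages.
@dataclass
class CustomerProblem:
    problem_id: str
    description: str

@dataclass
class ServiceProblem:
    problem_id: str
    service: str

class ServiceProblemManagement:            # stands in for the SM&O processes
    def __init__(self):
        self._by_customer_problem: Dict[str, List[ServiceProblem]] = {}

    def handle(self, customer_problem, service_problems):
        # SM&O, not CRM, records which Service Problems derive from the Customer
        # Problem; CRM continues to talk only in terms of Customer Problems.
        self._by_customer_problem[customer_problem.problem_id] = service_problems

    def status_for_crm(self, customer_problem_id):
        related = self._by_customer_problem.get(customer_problem_id, [])
        return f"{len(related)} underlying service problem(s) being worked"

smo = ServiceProblemManagement()
cp = CustomerProblem("CP-1", "Broadband service down")
smo.handle(cp, [ServiceProblem("SP-10", "DSL access"), ServiceProblem("SP-11", "IP session")])
print(smo.status_for_crm("CP-1"))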

82
The process which is managing data creation, update, etc. has a prime responsibility for ensuring that
the results of data which it is manipulating via the process are appropriately stored.

Principle eTOM.17
A process has prime responsibility for ensuring that the results of data manipulation are stored
appropriately.

Consequently, the Manage Resource Inventory processes have no processes to create or update the
data elements maintained in the repository.

Principle eTOM.18
The Manage Resource Inventory processes have no processes to create or update the data elements
maintained in the repository.
The only exception to this Principle is the aspect associated with data quality. In the inventory
processes there are processes associated with discovery, i.e. looking at comparing what is maintained
in the inventory with what actually exists on the ground. The results of any inventory differences
found would take the form of a report, which could be used by process quality
processes to review and fix any processes which are leading to bad data in the inventory. Note: there
is no need for any “informing” of the original process as to data change.

83
The process which is managing data creation, update, etc. has a prime responsibility for ensuring that
the results of data which it is manipulating via the process are appropriately stored.

Principle eTOM.17
A process has prime responsibility for ensuring that the results of data manipulation are stored
appropriately.

Consequently, the Manage Resource Inventory processes have no processes to create or update the
data elements maintained in the repository.

Principle eTOM.18
The Manage Resource Inventory processes have no processes to create or update the data elements
maintained in the repository.
The only exception to this Principle is the aspect associated with data quality. In the inventory
processes there are processes associated with discovery, i.e. looking at comparing what is maintained
in the inventory with what actually exists on the ground. The results of any inventory differences
found would take the form of a report, which could be used by process quality
processes to review and fix any processes which are leading to bad data in the inventory. Note: there
is no need for any “informing” of the original process as to data change.

84
85
86
87
88
89
90
91
92
93
The eTOM Framework shows seven end-to-end vertical process groupings, which are the end-to-end
processes that are required to support customers and to manage the business. Amongst these End-end
Vertical Process Groupings, the focal point of the eTOM framework is on the core customer
operations processes of Fulfillment, Assurance and Billing (FAB). Operations Support & Readiness
(OSR) is differentiated from FAB real-time processes to highlight the focus on enabling support and
automation in FAB, i.e. on-line and immediate support of customers, with OSR ensuring that the
operational environment is in place to let the FAB processes do their job. Outside of the Operations
process area - in the Strategy, Infrastructure & Product (SIP) process area - the Strategy & Commit
vertical, as well as the two Lifecycle Management verticals, are differentiated. These are distinct
because, unlike Operations, they do not directly support the customer, are intrinsically different from
the Operations processes and work on different business time cycles.

The Framework also includes views of functionality as they span horizontally across an enterprise’s
internal organizations. The horizontal functional process groupings in Figure 1 distinguish functional
operations processes and other types of business functional processes, e.g., Marketing versus Selling,
Service Development versus Service Configuration, etc. Amongst these Horizontal Functional
Process Groupings, those on the left (that cross the Strategy & Commit, Infrastructure Lifecycle
Management and Product Lifecycle Management vertical process groupings) enable, support and
direct the work in the Operations process area.

So, eTOM is structured in three main areas (known as Level 0 processes): Operations (OPS),
Strategy, Infrastructure & Product (SIP), and Enterprise Management (EM). Each contains more
detailed process components at Level 1, Level 2, etc. as the processes are decomposed. This
hierarchical decomposition enables detail to be defined in a structured way and also allows the
eTOM Framework to be adopted at varying levels and/or for different processes. The Level number
is an indication of the degree of detail revealed at that level - the higher the number, the more
detailed are the process elements described there.
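A toy sketch of that hierarchical decomposition follows; it uses a small, invented fragment of an eTOM-like structure (not the full framework) simply to show how process elements can be listed by level.

# Toy sketch of Level 0 -> Level 1 -> Level 2 decomposition over an invented fragment.
framework = {
    "Operations": {                                          # Level 0
        "Customer Relationship Management": [                # Level 1
            "Order Handling", "Problem Handling"],           # Level 2
        "Service Management & Operations": [
            "Service Configuration & Activation"],
    },
    "Strategy, Infrastructure & Product": {
        "Marketing & Offer Management": ["Product & Offer Development"],
    },
}

def list_level(level):
    """Return the process element names found at the requested level."""
    if level == 0:
        return list(framework)
    if level == 1:
        return [l1 for l0 in framework.values() for l1 in l0]
    if level == 2:
        return [l2 for l0 in framework.values() for l1 in l0.values() for l2 in l1]
    raise ValueError("this toy fragment only goes down to Level 2")

for level in range(3):
    print(f"Level {level}: {list_level(level)}")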

94
Operations is the heart of eTOM and much of the original TOM work has carried through into OPS.
The “FAB” processes (Fulfillment, Assurance, Billing) provide the core of the Operations area. The
vertical Level 1 processes in FAB represent a view of flow-through of activity, whereas the
horizontal Level 1 processes (CRM, SM&O, RM&O, S/PRM) represent functionally-related activity.
Both views are valid and the model supports both to accommodate different uses made of the
processes. As a separate issue, OSR (Operations Support & Readiness) has been separated from FAB
to reflect the separation between “front-office” real-time operations (in FAB) from “back-office”
near real-time or even off-line support processes. This split may not apply in all organizations (in
which case, the OSR and FAB processes can be merged) but is necessary to allow for the important
situation where they are handled separately.

95
The OPS area is shown with Level 2 processes visible. Note, in general, a Level 2 process is part of a
vertical, and also a horizontal, Level 1 process. Hence, Level 2 processes can be reached in the
process hierarchy by either path (to reflect the different interests and concerns of users). However,
whichever path is used, as shown here, there is a single, common set of Level 2 processes. In some
cases, a Level 2 process is “stretched” across several vertical Level 1s (e.g. Resource Data
Collection, Analysis and Control in RM&O). This is because the process concerned is needed in
several vertical Level 1s (e.g. for Resource Data Collection, Analysis and Control, the data collected
from the network (say) can represent usage data for Billing but can also support fault handling or
performance assessment in Assurance).

96
97
98
Strategy, Infrastructure & Product has a similar structure to OPS with corresponding vertical and
horizontal Level 1 processes. In the verticals, Strategy & Commit covers the processes involved in
forming and deciding company strategy and gaining commitment from the business for this.
Infrastructure Lifecycle Management covers control of the infrastructures used in the business – the
network is the most obvious, but also IT infrastructure and even the human resources of the
company. Product Lifecycle Management covers the products themselves – note that eTOM
distinguishes Product (as sold to Customers) from Service (used internally to represent the
“technical” part of the product, i.e. excluding commercial aspects such as tariffing, T&Cs, support,
etc) and Resource (physical and non-physical components used to support Service).

The horizontal functional groupings in SIP are aligned with those in OPS, so that if desired the
processes included can be considered to link across smoothly from the SIP domain to the OPS
domain, if this is relevant to some aspects of business behavior in enterprises.

99
Strategy, Infrastructure & Product has a similar structure to OPS with corresponding vertical and
horizontal Level 1 processes. In the verticals, Strategy & Commit covers the processes involved in
forming and deciding company strategy and gaining commitment from the business for this.
Infrastructure Lifecycle Management covers control of the infrastructures used in the business – the
network is the most obvious, but also IT infrastructure and even the human resources of the
company. Product Lifecycle Management covers the products themselves – note that eTOM
distinguishes Product (as sold to Customers) from Service (used internally to represent the
“technical” part of the product, i.e. excluding commercial aspects such as tariffing, T&Cs, support,
etc) and Resource (physical and non-physical components used to support Service).

The horizontal functional groupings in SIP are aligned with those in OPS, so that if desired the
processes included can be considered to link across smoothly from the SIP domain to the OPS
domain, if this is relevant to some aspects of business behavior in enterprises.

100
Enterprise Management is shown in a different view – this is a typical hierarchy diagram as provided
from process analysis and modeling tools used for eTOM. The top box is EM itself (Level 0), the
next horizontal row shows the Level 1 processes in EM, and the columns below each Level 1 box
show Level 2 processes within that Level 1 process.

Now, with this overall view of the process structure to Level 2 (and descriptions for all these process
elements, as well as for Level 3 process elements), it is important to note that this view of the
processes provides very little insight into how the processes interact. To gain this valuable
additional perspective, we must look to process flows.

101
Process decompositions provide an essential insight into the process definition and content. To
understand further how the processes behave, process flows can be developed that examine how
some or all of the processes support some larger, “end-to-end” or “through” process view across the
enterprise. Such process flows are not constrained to bridge across the entire enterprise – they can
have any scope that is considered meaningful and helpful to analyze - but typically such process
flows involve a broad area of the enterprise processes, and thus of the eTOM framework.

Thus, process flows examine some specific scenario in which the processes achieve an overall
business purpose. To begin with, though, this slide shows only a fragment of a process flow, where
several eTOM Level 2 OPS processes can be recognized, and labeled linkages between these
indicate the nature of the transfer that arises in operation. In this case, we can see that part of
handling a customer order is shown.

The process flow approach has these general characteristics:


1. It analyzes a typical (specific) scenario
2. It provides insight into the behavior and interaction amongst processes
3. It chooses to model the flow at an appropriate level of process detail
4. It can use process decompositions (and vice versa) to enhance/refine detail
5. The aim is to provide only an example of the process flows - i.e. only some of the possible
interactions are described in each scenario
6. Thus, it typically provides a partial view of process behavior (because flows are based on
specific scenarios)
7. It represents a dynamic perspective of process

102
103
This diagram type is developed directly from a process analysis and modeling tool (rather than a general
drawing software). Here we are working with Level 2 process elements but other Levels can
be used depending on the detail required. This diagram type positions the eTOM processes in
relatively the same way that they can be seen on the eTOM model diagrams, which assists with
recognition and avoids confusion. Each process only appears once, and so sequencing of the
interactions is not explicit in this diagram (it is on the process dynamics diagrams later).

An important element in flow diagrams of this kind is that of “swimlanes”. These are areas in the
process flow diagram, typically containing several process elements that contribute to the overall
process flow, which scope a useful area of attention to assist the user. In this example, the swimlanes
have been drawn to represent the four horizontal functional process groupings of the Operations area
of the eTOM Framework, since the scenario involved is focused in the Operations domain. In this
arrangement, all the process elements in a specific swimlane in the diagram (e.g. in the lower set
swimlane for Supplier/Partner Management & Operations) are components of that horizontal
functional process grouping. It should be noted that swimlanes (despite their name) need not be only
horizontal, although this is a common choice for clarity, and is the approach used in eTOM process
flow diagrams.

This diagram is for the main Ordering phase of Fulfillment. It kicks off with the Customer placing an
order, and then tracks through Selling, Order handling, and the service and resource layer processes
that actually configure the product instance. As the product instance is brought into service, there are
external interactions with Billing to set up charging for this.
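To give a feel for how such a flow can be represented outside a modeling tool, here is a deliberately simplified, illustrative sketch of an ordering-flow fragment, with each step tagged by the swimlane (horizontal functional grouping) it sits in; the step sequence and process element names are an example, not the normative eTOM flow.

from collections import namedtuple

# Simplified, illustrative ordering-flow fragment tagged by swimlane.
Step = namedtuple("Step", ["swimlane", "process_element", "action"])

ordering_flow = [
    Step("Customer Relationship Management", "Selling", "capture the customer order"),
    Step("Customer Relationship Management", "Order Handling", "issue the service order"),
    Step("Service Management & Operations", "Service Configuration & Activation",
         "configure the product instance"),
    Step("Resource Management & Operations", "Resource Provisioning",
         "allocate and configure resources"),
    Step("Customer Relationship Management", "Billing & Collections Management",
         "set up charging for the product instance"),
]

def print_by_swimlane(flow):
    """Group the steps by swimlane, mimicking a swimlane diagram in text form."""
    lanes = {}
    for step in flow:
        lanes.setdefault(step.swimlane, []).append(step)
    for lane, steps in lanes.items():
        print(lane)
        for step in steps:
            print(f"  {step.process_element}: {step.action}")

print_by_swimlane(ordering_flow)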

104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
The reconciliation of business and IT views is not a unique issue for our industry, as the same challenge faces
other industries too, and we are better placed than some in having a strong business perspective through eTOM
and NGOSS to build upon. It is not always recognized that ITIL does not itself seek directly to establish such a
business perspective, possibly because it is applied in so many different businesses. ITIL instead looks to
provide more of a view of how services (and that overloaded term will be one of the difficulties, as we will see) are
provided through IT systems and the company departments that support them.

We can thus visualize the problem of reconciling ITIL and eTOM (and NGOSS) in terms of reconciling these
different perspectives. Each has validity, and each is an aspect of the overall reality, but neither on its own
explores all the areas of interest and concern about how the enterprise works. Because of this, we can benefit
from seeking to build on both frameworks, rather than seeing the situation in terms of “either/or”. Much of this
document explores this and shows how the insights of both frameworks together can add value.

Taking this line, in a very real sense, both frameworks are needed to get a rounded picture. If either is used in
isolation, the result is that the missing elements will still need to be filled in somehow and, arguably, trying to do
this without taking advantage of the insights provided by existing work, just makes the problems worse. If a
company applies eTOM without ITIL, it would still have to build a bridge to its IT environment, but would now
have to do this on its own. Equally, using ITIL without eTOM would mean mapping into the business without
the support and structure that eTOM provides, making this a much more difficult task.

124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
