
Using ODI 11g with Hyperion Planning and Essbase 11.1.2.2 Part 1
Abstract
This technical article demonstrates the use of Oracle Data Integrator (ODI) 11g in
conjunction with Hyperion Planning and Essbase 11.1.2.2. Part 1 of this series covers
an architecture overview of ODI 11g and illustrates the data integration capabilities of
ODI 11g with Hyperion Planning 11.1.2.2 through six typical ETL scenarios. The
scenarios discussed can be used as re-usable guidelines for implementing more
advanced cases of the same in a real-world data integration project involving Hyperion and ODI.

Saptarshi Bose
Oracle EPM Consultant, Wipro Technologies

Disclaimer: This article draws references from Oracle's ODI 11g documentation. At certain points, further-reading
references are provided as URLs into the ODI online documentation library. It also draws on certain websites
covering ODI 11g and 10g.


Contents
Introduction
Where and How Does ODI Fit in with Hyperion Modules?
Overview of ODI Architecture and Graphical Modules
    Repositories
    Studio
        Topology Navigator (TN)
        Designer Navigator (DN)
        Operator Navigator (ON)
        Security Navigator (SN)
    Console
    ODI Architecture: How Does It Work at Runtime?
    ODI Knowledge Modules (KMs)
    ODI Knowledge Modules (KMs) for Hyperion Modules
        Hyperion Planning KMs
        Essbase KMs
        HFM KMs
Generic Flow Diagram to Put Together ODI Components for Data Integration
Hyperion Planning 11.1.2.2 and ODI 11g Data Integration POC Cases
    High Level Data Flow Model for the POC Cases
    Setting up the Topology
        Topology Setup: File Technology
        Topology Setup: Oracle Database Technology
        Topology Setup: Hyperion Planning Technology
        Topology Setup: Hyperion Essbase Technology
    Enter the Designer: Create Project and Import Hyperion Planning and Essbase KMs
    Case 1: Load a Member Hierarchy (Metadata) from a Flat File Source to a Planning Application Target - Simple Load
    Case 2: Load a Member Hierarchy (Metadata) from a Flat File Source to a Planning Application Target - Advanced Load with Source File Manipulation/Transformation Techniques
    Case 3: Load Member Formula from a Flat File Source to a Planning Application Target
    Case 4: Delete Member(s) from Hyperion Planning
    Case 5: Load a Member Hierarchy (Metadata) from a Relational Database Source to a Planning Application Target


    Case 6: Create an ODI File Watch Package to Load Metadata from a Flat File Source to a Planning Application Target, Demonstrating File Watch, Error Log Capture, and E-mail Sending Features
Overview of Topics to Be Covered in Part-2


Introduction
This technical article demonstrates the use of Oracle Data Integrator (ODI) 11g in
conjunction with Hyperion Planning and Essbase 11.1.2.2 for typical ETL (Extract-Transform-Load)
scenarios encountered in a Hyperion Planning and Essbase project.
Part 1 of this series covers an architecture overview of ODI 11g and illustrates the data integration
capabilities of ODI 11g when used with Hyperion Planning 11.1.2.2 through six typical use
cases. The cases discussed can be used as re-usable guidelines for implementing more advanced
versions of the same in real project scenarios.
A key purpose of this article is to document the implementation steps of a set of POC (Proof of
Concept) scenarios that demonstrate ODI 11g data integration features and capabilities with
Hyperion Planning and Essbase 11.1.2.2 for one of the top US banks. Demonstrating the POC
scenarios to client management is intended to expand the scope of the current Hyperion
engagement at the bank in the forthcoming 2015 calendar year.

Where and How Does ODI fit in with Hyperion Modules?


ODI adapters for the Hyperion platform enable ODI to connect to and integrate Hyperion applications (refer
to Fig. 1) with any database.
Each adapter provides a set of Oracle Data Integrator Knowledge Modules (KMs) for loading
metadata and data into Planning, Oracle's Hyperion Workforce Planning, Oracle's Hyperion
Capital Expense Planning, Essbase, and Hyperion Financial Management (HFM) applications.
Prior to the Oracle acquisition, a number of different routes were available to Essbase and
Hyperion Planning customers looking to load their databases or applications. At the most basic level,
rules files can define SQL queries (when the source is a view or table in a relational
database) or basic transformations (when the source is a flat file) to load data into
Essbase outlines and databases. But rules files don't provide the flexibility to include complex ETL
logic and transformations. Over the past few years, Oracle has positioned Oracle Data Integrator as
the strategic ETL option for loading Essbase databases and Planning applications. As shown in the
diagram, ODI 11g can be used to load Essbase databases and Hyperion Planning and Hyperion
Financial Management (HFM) applications through their own individual APIs, with ODI providing a
set of code-template Knowledge Modules that allow developers to write to, and read from,
abstracted data objects while ODI, under the covers, issues the correct API calls to load these
various applications.


[Figure omitted: the diagram shows Oracle Hyperion Planning, Oracle Hyperion Financial Management, and Oracle Hyperion Essbase connected through their respective APIs (Planning API, HFM API, Essbase API) to the Oracle Hyperion Application Adapters and the ODI Knowledge Module framework within the Oracle Data Integration Suite, alongside other sources such as Oracle EBS, SAP R/3, PeopleSoft, message queues, and data warehouses. A companion flow shows the Essbase KMs extracting data and dimension members, the Planning KMs loading data and dimension members followed by a cube refresh, and consolidate and calculate operations on the HFM and Essbase side.]

Figure 1: ODI Positioning with Hyperion Applications

Overview of ODI Architecture and Graphical Modules


Repositories
The central component of the ODI architecture is the Oracle Data Integrator Repository (refer to Fig. 2).
The repository is stored in a relational database accessible in client/server mode from the different
Oracle Data Integrator modules.

Figure 2: ODI Repositories


Master Repository

- Stores security information, including users, profiles, and rights for the ODI platform.
- Stores topology information, including technologies, server definitions, schemas, contexts, languages, and so forth.
- Stores versioned and archived objects.
- Normally one ODI installation has a single Master Repository. However, certain scenarios might require multiple Master Repositories: a) project construction over several sites not linked by a high-speed network (off-site development, for example); b) the need to clearly separate the interfaces' operating environments (development, test, production), including the database containing the Master Repository. This may be the case if these environments are on several sites.

Work Repository(s)

- The work repository contains the actual developed objects.
- Several work repositories may coexist in the same ODI installation (for example, to maintain separate environments or to match a particular versioning life cycle).
- Stores Models, including schema definitions, datastore structures and metadata, field and column definitions, data quality constraints, cross references, data lineage, and so forth.
- Stores Projects, including business rules, packages, procedures, folders, Knowledge Modules, variables, and so forth.
- Stores scenario execution information, including scenarios, scheduling information, and logs.
- When a Work Repository contains only execution information (typically for production purposes), it is called an Execution Repository.
The database that hosts the repository does not have to be on the same hardware as the ODI
Studio. Multiple developers share the same repository when projects are developed, so it is
convenient to install the repository in a central location.
The Studio accesses the repository very frequently. From that perspective, the Studio and
the repository have to be on the same LAN (and since distance adds to latency, they should
preferably be within a reasonable distance of each other, not in a different country or continent, for instance).
The repository will use a few gigabytes of disk space to store the metadata, the transformation rules,
and (mostly) the logs. Make sure that you have enough disk space in the database. A good starting
point for the size of the repository is 1 GB each for the Master and Work repositories.
Each repository (Master or Work) is typically installed in a dedicated schema. The privileges required
by ODI are "Connect" and "Resource" on an Oracle database, but it is important to note that the
installation program has more stringent requirements (the RCU utility requires SYSDBA
privileges to be able to create the repositories).

Studio
The ODI Studio is the graphical interface provided to all users to interact with ODI (refer to Fig. 3).
People who need to use the Studio usually install the software on their own machines and connect to
a shared repository. The only exception is when the repository is not on the same LAN as the
Studio; in that case, most customers use remote terminal service technologies to keep the
Studio local to the repository (same LAN), so that only the actual display is sent over the WAN.

Figure 3: ODI Studio Interactions with Repositories

Topology Navigator (TN)


This navigator is usually restricted to DBAs and system administrators. Through this interface, they
declare the systems where the data resides (sources, targets, references, and so on), along with the
credentials that ODI will use to connect to these systems. Developers and operators leverage
the information stored in the repository, but do not necessarily have the right to modify, or even view,
that information. They are provided with a name for each connection, and that is all they need.
In a nutshell, the Topology Navigator contains the following:


Physical Architecture:
o Data Servers and Physical Schemas - The physical architecture defines the different
elements of the information system, as well as their characteristics as taken into
account by Oracle Data Integrator. Each type of database (Oracle, DB2, etc.), file
format (XML, flat file), or application software is represented in Oracle Data
Integrator by a technology.
o A technology handles formatted data. Therefore, each technology is associated with
one or more data types that allow Oracle Data Integrator to generate data-handling
scripts.
o The physical components that store and expose structured data are defined as data
servers. A data server is always linked to a single technology. A data server stores
information according to a specific technical logic, which is declared in physical
schemas attached to that data server. Every database server, JMS message file,


group of flat files, and so forth that is used in Oracle Data Integrator must be
declared as a data server. Every schema, database, JMS topic, etc. used in Oracle
Data Integrator must be declared as a physical schema.
o Finally, the physical architecture includes the definition of the Physical Agents. These
are the Java software components that run Oracle Data Integrator jobs.
Contexts:
o Contexts bring together components of the physical architecture (the real
architecture) of the information system with components of the Oracle Data
Integrator logical architecture (the architecture on which the user works).
o For example, contexts may correspond to different execution environments
(Development, Test, and Production) or different execution locations (Boston site,
New York site, and so forth) where similar physical resources exist.
Note: During installation, the default GLOBAL context is created.
Logical Architecture:
o The logical architecture allows a user to identify, as a single logical schema, a group
of similar physical schemas - that is, schemas containing datastores that are structurally
identical but located in different physical locations. Logical schemas, like their
physical counterparts, are attached to a technology.
o Contexts resolve logical schemas into physical schemas: in a given context,
one logical schema resolves to a single physical schema.
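Outside ODI, the resolution rule above can be sketched as a plain lookup: a context maps each logical schema name to exactly one physical schema. All server and schema names in this sketch are hypothetical, not taken from an actual repository.

```python
# Conceptual sketch of ODI context resolution: each context maps a
# logical schema to exactly one physical schema. Names are hypothetical.
CONTEXTS = {
    "DEVELOPMENT": {
        "LOG_STAGING": "dev_db.STG_SCHEMA",
        "LOG_PLANNING": "dev_planning.PLN_Samp",
    },
    "PRODUCTION": {
        "LOG_STAGING": "prod_db.STG_SCHEMA",
        "LOG_PLANNING": "prod_planning.PLN_Samp",
    },
}

def resolve(context, logical_schema):
    """Resolve a logical schema to its physical schema in a given context."""
    try:
        return CONTEXTS[context][logical_schema]
    except KeyError:
        raise LookupError(
            f"{logical_schema!r} is not mapped in context {context!r}")
```

The same interface, built only against a logical schema such as LOG_STAGING, can therefore execute against development or production simply by switching the context.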
Agents:
o At design time, developers generate scenarios from the business rules that they
have designed. The code of these scenarios is then retrieved from the repository by
the run-time agent. This agent connects to the data servers and orchestrates
the code execution on those servers. It stores the return codes and messages for
the execution - as well as additional logging information such as the number of
processed records, execution time, and so forth - in the repository.
o The agent comes in two flavors:
- The Java EE agent can be deployed as a web application and benefits from
application server features such as high availability and connection pooling.
- The standalone agents are very lightweight and can easily be installed on
any platform. They are small Java applications that do not require a server.
o Both agent flavors are multithreaded Java programs that support load balancing and can
be distributed across the information system. An agent holds its own execution
schedule, which can be defined in Oracle Data Integrator, and can also be called from
an external scheduler, invoked from a Java API, or called through a web service
interface.
o A common configuration is to use a Java EE agent as a "master" agent whose sole
purpose is to distribute execution requests across several child agents. These
children can very well be standalone agents. The master agent knows at all times
which children are up or down, and balances the load across all child agents.
o In a pure standalone environment, the agent is often installed on the target server.
Agents are also often installed on file servers, where they can leverage database
loading utilities to bulk-load data into the target systems. Load balancing can also be
done with a standalone master agent. Multiple standalone agents can run on the
same server, as long as each has a dedicated port. This port number is
defined in the Topology Navigator, where the agent is defined.
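The master/child load-balancing idea can be illustrated with a minimal round-robin sketch. This is purely conceptual: real ODI agents are Java processes reached over the ports defined in the Topology Navigator, and a real master agent also tracks child health and load; the agent names and ports below are invented.

```python
from itertools import cycle

# Conceptual sketch of a "master" agent distributing execution requests
# round-robin across child agents. Names and ports are hypothetical.
class MasterAgent:
    def __init__(self, child_agents):
        if not child_agents:
            raise ValueError("at least one child agent is required")
        self.children = list(child_agents)
        self._next = cycle(self.children)

    def dispatch(self, scenario_name):
        """Pick the next child agent and hand it the scenario to execute."""
        child = next(self._next)
        return (child, scenario_name)
```

Dispatching three scenarios against two children assigns them to the first child, the second, then the first again.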

Designer Navigator (DN)


This navigator is used by developers and data custodians alike. Metadata is imported and enriched
through this navigator. The metadata is then used to define transformations in objects called
Interfaces. The Interfaces are finally orchestrated in workflows called Packages.
Key activities include:
o Models and Datastores: A model is a set of datastores corresponding to data
structures contained in a physical schema. Within a model, RE (reverse engineering) is
performed to import metadata into the ODI work repository from either source or target.
Metadata can also be created directly inside a model by creating a datastore
manually.
o Projects: Once metadata for sources and targets is available, projects can be built
in the ODI Designer Navigator. A project is a group of objects developed using ODI; the
components of a project are:
- Interface: An interface is a set of rules that define the loading of a target
datastore from one or more source datastores; an example of an interface is
loading members into a Planning dimension.
- Procedure: A procedure is a re-usable component that groups similar
operations that do not fit into the interface framework. Procedures should be
considered only when an operation can't be achieved by an interface. In
that case, rather than writing an external program or script, the code is
included in Oracle Data Integrator and executed from a package.
Procedures require you to develop all the code manually, as opposed to
interfaces. For example, a specific index on a database cannot be dropped
through an interface, but by writing a simple DROP INDEX
<<INDEX_NAME>> command in a procedure, the index can be dropped by ODI.
- Packages: A package is a sequence of steps organized into a flow diagram.
For example, if there is a case to wait for a file to arrive, then load it into a
Planning dimension, push it to Essbase, and finally send a success or
failure e-mail, the package is the place to implement it.
- Variable: A variable is a value stored in ODI; this value can be set or
evaluated to create conditions in packages.
- Sequence: A sequence is a variable that is automatically incremented when used.
- Knowledge Modules: Discussed later in detail.

Operator Navigator (ON)


This navigator is used by developers and operators. In a development environment, developers
use the Operator views to check the code generated by ODI, to debug their transformations, and
to validate and understand the performance of their developments. In a production environment,
operators use this same navigator to view which processes are running, to check whether processes
are successful, and to check the performance of the processes that are running.


Security Navigator (SN)


This navigator is typically used by system administrators, security administrators, and DBAs.
Through this interface, they assign roles and privileges to the different users, making sure that
users can only view and modify objects that they are allowed to handle. A user having only the
designer profile can access only the Designer Navigator. Generally, a developer has the designer
and operator profiles, while an administrator has the topology and security profiles. A business
user can be tagged with only an operator profile to check logs.

Console
The Console is an HTML interface to the repository. The Console is installed on a WebLogic Server
(other application servers will be supported with later releases of the product). The Console can be
used to browse the repository, but no new developments can be created through this interface. The
Console is useful for viewing lineage and impact analysis without having the full Studio installed on a
machine. Operators can also perform most of the tasks they would perform with the Studio, including
starting or restarting processes. The exact information that is available in the Operator Navigator of
the Studio will be found in the matching view of the Console: generated code, execution statistics,
and statuses of executed processes are all available.

ODI Architecture: How Does It Work at Runtime?


Oracle Data Integrator is organized around a modular repository that is accessed by Java graphical
modules and scheduling agents (refer to Fig. 4 & 5). The graphical modules are used to design and
build the integration processes, with agents being used to schedule and coordinate the integration
tasks. When Oracle Data Integrator projects are moved into production, data stewards can use the
Web-based Metadata Navigator application to report on metadata in the repository. Out-of-the-box
Knowledge Modules extract and load data across heterogeneous platforms, using platform-specific
code and utilities.

Figure 4: ODI Run Time Architecture All Components Orchestrated


Figure 5: Working of ODI Components in a Nutshell

ODI Knowledge Modules (KMs)


Knowledge Modules (KMs) are 'plug-ins' (.xml files) that allow ODI to generate the
relevant execution code across various technology areas. They are code templates. Each KM is
dedicated to an individual task in the overall data integration process. The code in a KM appears
in nearly the form in which it will be executed, except that it includes Oracle Data Integrator (ODI)
substitution methods, enabling it to be used generically by many different integration jobs. The code
that is generated and executed is derived from the declarative rules and metadata defined in the ODI
Designer module.

- A KM is reused across several interfaces or models. To modify the behavior of hundreds of
jobs built on hand-coded scripts and procedures, developers would need to modify each script or
procedure. In contrast, the benefit of Knowledge Modules is that developers make a change
once and it is instantly propagated to hundreds of transformations. KMs are based on the logical
tasks to be performed. They don't contain references to physical objects (datastores,
columns, physical paths, etc.).

- KMs can be analyzed for impact analysis.

- KMs can't be executed standalone. They require metadata from interfaces, datastores, and
models.

- Knowledge Modules are fully extensible. Their code is open and can be edited through a
graphical user interface by technical experts willing to implement new integration methods or
best practices (for example, for higher performance or to comply with regulations and corporate
standards). Developers without the skills of those technical experts can then use these custom
Knowledge Modules in their integration processes.

- Knowledge Modules are named after the specific database for which they have been optimized,
the utilities that they leverage, and the technique that they implement. For instance, an IKM
Teradata to File (TTU) moves data from Teradata into a flat file and leverages the TTU utilities
for that operation, while an LKM File to Oracle (EXTERNAL TABLE) exposes a flat file as an
external table for Oracle. Similarly, an IKM Oracle Slowly Changing Dimension will generate


code optimized for the Oracle database which implements a Slowly Changing Dimension (Type
2) type of integration.
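The naming convention just described can be read mechanically. The following sketch (an illustration only, not part of ODI) splits a KM name into its type, description, and optional utility suffix, using the example names from the text:

```python
import re

# Sketch of the KM naming convention: <TYPE> <description> [(UTILITY)].
# The parser itself is illustrative only.
KM_TYPES = {"RKM", "CKM", "LKM", "IKM", "JKM", "SKM"}

def parse_km(name):
    m = re.match(r"^(\w{3})\s+(.*?)(?:\s+\(([^)]+)\))?$", name)
    if not m or m.group(1) not in KM_TYPES:
        raise ValueError(f"not a recognized KM name: {name!r}")
    return {"type": m.group(1), "description": m.group(2),
            "utility": m.group(3)}
```

For example, `parse_km("IKM Teradata to File (TTU)")` separates the KM type `IKM`, the description `Teradata to File`, and the leveraged utility `TTU`.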

Knowledge Module types, descriptions, and usage:

- Reverse-engineering KM (RKM): Retrieves metadata into the Oracle Data Integrator work repository. The RKM is in charge of connecting to the application or metadata provider, then transforming and writing the resulting metadata into Oracle Data Integrator's repository. Usage: in models, to perform a customized reverse-engineering.
- Check KM (CKM): Checks consistency of data against constraints. Usage: in models, sub-models, and datastores for data integrity audit; in interfaces for flow control or static control.
- Loading KM (LKM): Loads heterogeneous data to a staging area. Usage: in interfaces with heterogeneous sources.
- Integration KM (IKM): Integrates data from the staging area to a target. Usage: in interfaces.
- Journalizing KM (JKM): Creates the Change Data Capture framework objects in the source staging area. Usage: in models, sub-models, and datastores to create, start, and stop journals and to register subscribers.
- Service KM (SKM): Generates data manipulation web services. Usage: in models and datastores.

ODI Knowledge Modules (KMs) for Hyperion Modules


Hyperion Planning KMs

- RKM Hyperion Planning: Reverse-engineers Planning applications and creates data models to use as targets in Oracle Data Integrator interfaces. Each dimension (standard dimension and attribute dimension) is reversed as a datastore with the same name as the dimension, with appropriate columns. Also creates a datastore named "UDA" for loading UDAs.
- IKM SQL to Hyperion Planning: Loads metadata and data into Planning applications.

Essbase KMs

- RKM Hyperion Essbase: Reverse-engineers Essbase applications and creates data models to use as targets or sources in Oracle Data Integrator interfaces.
- IKM SQL to Hyperion Essbase (DATA): Integrates data into Essbase applications.
- IKM SQL to Hyperion Essbase (METADATA): Integrates metadata into Essbase applications.
- LKM Hyperion Essbase DATA to SQL: Loads data from an Essbase application to any SQL-compliant database used as a staging area.
- LKM Hyperion Essbase METADATA to SQL: Loads metadata from an Essbase application to any SQL-compliant database used as a staging area.

HFM KMs

- RKM Hyperion Financial Management: Reverse-engineers Financial Management applications and creates data models to use as targets or sources in Oracle Data Integrator interfaces.
- IKM SQL to Hyperion Financial Management Data: Integrates data into Financial Management applications.
- IKM SQL to Hyperion Financial Management Dimension: Integrates metadata into Financial Management applications.
- LKM Hyperion Financial Management Data to SQL: Loads data from a Financial Management application to any SQL-compliant database used as a staging area. This knowledge module will not work if you change the column names of the HFM datastore reverse-engineered by the RKM Hyperion Financial Management knowledge module.
- LKM Hyperion Financial Management Members To SQL: Loads member lists from a Financial Management application to any SQL-compliant database used as a staging area.

ODI 11g architecture details can be read at:


http://docs.oracle.com/cd/E28280_01/integrate.1111/e12641/overview.htm#sthref4


Generic Flow Diagram to Put Together ODI Components for Data Integration
The generic sequence, with the navigator used at each step (TN = Topology Navigator, DN = Designer Navigator, ON = Operator Navigator):

1. Configure the Master and Work Repositories.
2. Define Technologies and Data Servers, with Physical and Logical Schemas (TN).
3. Define a Context, or use the Global Context (TN).
4. Create a Project and import the KMs required for the integration project (DN).
5. Create Folders and Models; create or import (reverse-engineer) source and target datastores (DN).
6. Create Interfaces, Procedures, Packages, Variables, etc.; define mappings and flows, and configure KMs and KM parameters (DN).
7. Execute the Package or Interface (DN).
8. View logs through the Operator, or configure e-mail to receive logs attached from the Operator base tables (ON).

Figure 6: ODI Components Generic Flow Diagram for Data Integration

Hyperion Planning 11.1.2.2 and ODI 11g Data Integration POC Cases

High Level Data Flow Model for the POC Cases

Figure 7: Data Integration Generic Flow

Setting up the Topology

The following technologies are used to implement the data integration cases:

- File System
- Oracle Database
- Hyperion Planning
- Hyperion Essbase

These technologies are first set up in the TN. Once set up, they can be accessed by any Work
Repository that is tagged to the Master Repository of a particular installation.
Key steps followed are:
Define Data Server and Properties -> Add Physical Schema -> Add Context and Logical Schema

Figure 8: TN Generic Setup Steps

Topology Setup: File Technology

Add a new
server

Provide Server
Definition


Provide JDBC
Definition

Add Physical
Schema


Define Physical
Location of
Source Files

Add Context and Logical Schema

Final State for File Technology:

Physical Model
for File System

Logical Model
for File System


Topology Setup: Oracle Database Technology

Add a new
server
Here the Oracle database ODIDB is connected with a user DBF_HYP_POC_USER specifically
set up to simulate a staging area for the POC cases. The Data Server name at the ODI level can be
anything meaningful.

Provide Server
Definition and
Credentials

Provide JDBC
connection
details

Next, database connectivity is checked using Test Connection.
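The JDBC details entered in this screen follow the standard Oracle thin-driver URL formats. A small helper (with hypothetical host, port, SID, and service values) shows the two shapes commonly used:

```python
# Sketch of the Oracle thin-driver JDBC URL formats that a Data Server
# definition expects. Host/SID/service values below are hypothetical.
def oracle_thin_url(host, port=1521, sid=None, service=None):
    if sid:
        # Classic SID form: jdbc:oracle:thin:@host:port:SID
        return f"jdbc:oracle:thin:@{host}:{port}:{sid}"
    if service:
        # Service-name form: jdbc:oracle:thin:@//host:port/service
        return f"jdbc:oracle:thin:@//{host}:{port}/{service}"
    raise ValueError("provide either a SID or a service name")
```

For example, `oracle_thin_url("odidb-host", 1521, sid="ODIDB")` yields the SID-style URL that would be pasted into the JDBC URL field.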


Add New
Physical
Schema

Add Physical Schema definition, choosing the Schema from the dropdown.

Add Context and Logical Schema


Final State for Oracle Database Technology:

Physical Model
for Oracle
Database

Logical Model
for Oracle
Database


Topology Setup: Hyperion Planning Technology

Add a new
server

Provide Server
Definition and
Credentials.
Note: The port used
to connect ODI to
Hyperion Planning is
the RMI Port
configured during
Hyperion
installations.

Connections cannot be tested for Hyperion Planning. Therefore, it is automatically greyed out. This
is kind of a drawback at Topology level as to understand whether connections are proper or not, one
has to wait till RE step in the DN.

Add New
Physical
Schema

pg. 23

Using ODI 11g with Hyperion Planning and Essbase 11.1.2.2 Part1

Add Physical Schema


definition.
Note: Dropdowns are
not populated
automatically. Need to
type in the application
name to connect. In this
case the application is
Sample Planning
application PLN_Samp

Add Context and


Logical Schema

pg. 24

Using ODI 11g with Hyperion Planning and Essbase 11.1.2.2 Part1

Final State for Hyperion Planning:

Physical Model
for Hyperion
Planning

Logical Model
for Hyperion
Planning

pg. 25

Using ODI 11g with Hyperion Planning and Essbase 11.1.2.2 Part1

Topology Setup - Hyperion Essbase Technology

Similar steps are followed for Essbase as for Planning, and the final state looks like the following:

Note: The only difference for Essbase is that, while defining the Physical Schema, both the Application and Database names are required. In this example, Sample.Basic is chosen.


Enter the Designer: Create Project and Import Hyperion Planning and Essbase KMs

In the DN, a new project ODI_POC_PROJECT is created to host the interfaces, procedures, variables, packages etc. for the POC scenarios being set up.

Next, the KMs required for the POC scenarios, covering Hyperion Planning and Essbase (discussed in earlier sections), the File technology and the Oracle Database technology, are imported within the project.


Case 1: Load a Member Hierarchy (Metadata) from a Flat File Source to a Planning Application Target - Simple Load
Source: Flat File to Load Account dimension hierarchy in PLN_Samp Hyperion application.
File Format:

This file is meant to load three child members (513000, 514000, 516000) under the parent
member 500000 within the Account dimension of the target Planning application. The name of the
file used in the POC scenario is ACC_METADATA_LOAD_ODI_POC_1.csv, and it is placed in the
folder defined in the File system physical schema.
Target: Account dimension hierarchy in PLN_Samp Hyperion application.

The objective is to add the three members shown in the source file under the member 500000
(Operating Expenses).
Next, in the designer a new Model Folder is created for the Files to be used for the POC scenarios.


A new model is added within the folder created.

The Name and Alias are entered, and the source file is selected as the Resource Name.


File Format and other details are fed to ODI.

Note: These settings vary from source file to source file and with the platform being operated on.

Next, RE is performed on the file source. This type of RE for File technology is known as Delimited
Files Reverse-Engineering.
Note: Once RE is complete, the Parent and Account data types are changed to String to avoid
errors in future steps.


Therefore, the source datastore is now configured.


Next, the RKM for Hyperion Planning is used to RE Hyperion Planning Model and create the
dimension level datastores.
The model is hosted within the folder ODI_PLANNING_POC_MODELS.

A Customized RE is required to import the metadata for Hyperion Planning inside ODI repository.
The KM used is RKM Hyperion Planning which is already tagged to the ODI_POC_PROJECT.


Details of the RE process can be viewed in the ON.

Note: Successful completion of the RE suggests that the connectivity of ODI with the Hyperion Planning
server is correct; this is the only, indirect, way to find out connectivity issues with the Hyperion
technology, unlike the Oracle Database technology, where connectivity can be tested directly at the TN level.
Once the RE is completed, the dimension-level datastores are created within the Planning model.
Each dimension (standard dimension and attribute dimension) is reversed as a datastore with the
same name as the dimension, with appropriate columns. RE also creates a datastore named "UDA"
for loading UDAs.

Expanding the Account datastore, all the properties are seen. This datastore will need to be
mapped to the source, as the POC scenario is intended to load metadata under the Account dimension
in the PLN_Samp Hyperion application.


Therefore, the target datastore is now configured.


Details on this topic can be read at:


http://docs.oracle.com/cd/E23943_01/integrate.1111/e12644/hyperion_plan.htm#ODIKM660
Next, an interface named POC_ACC_DIM_LOAD_FF2 is created within
ODI_POC_PROJECT under the First Folder, to create the source and target mapping.

Note: The Staging Area Different from Target check box is checked and the In-Memory
Engine: SUNOPSIS_MEMORY_ENGINE is selected. This allows any transformations
between the source and target to be handled by the memory engine; for example, if
there is a column header named TB in the source file that maps to Time Balance in the
Planning target, the memory engine will handle this mapping.
On the Mapping tab, source and target datastores are dragged and dropped and automatic
mapping is performed. Because of the same names in source and target, automatic mapping happens
between the Parent and Account fields; the rest of the source fields are dragged over to the target and
mapped.


The following operations are possible at the target (note the Operation field in the target datastore):

Update - This is the default and is used if not populated; it adds, updates, or moves the member being loaded.
Delete Level0 - Deletes the member being loaded if it has no children.
Delete IDescendants - Deletes the member being loaded and all of its descendants.
Delete Descendants - Deletes the descendants of the member being loaded, but does not delete the member itself.

In this example, the default Update operation is used to add the metadata elements. For all
the fields, the Staging Area option is chosen for mapping purposes.
Next, Flow tab settings are done. On the Flow tab, ODI creates a logical data flow automatically as
shown below:


The end-to-end data flow for POC scenarios where metadata is loaded to Hyperion Planning from a flat
file can be summarized as:

Flat File --(LKM File to SQL)--> ODI Staging Layer or Sunopsis Memory Engine --(IKM SQL to Hyperion Planning)--> Hyperion Planning

Figure 9: Data Flow - Load Metadata from Flat File to Hyperion Planning

LKM File to SQL: LKM will load the source file into the staging area, and all transformations will
take place in the staging area afterwards.
IKM SQL to Hyperion Planning: Loads metadata and data into Hyperion planning applications.
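This two-step LKM/IKM flow can be sketched in Python, using an in-memory SQLite table as a stand-in for the ODI staging area. The table and column names below are illustrative, not the actual work-table names ODI generates, and a plain list stands in for the Hyperion Planning adapter call:

```python
import csv
import io
import sqlite3

# Simulated source file (LKM File to SQL would read the real CSV from disk)
source_csv = "Parent,Account\n500000,513000\n500000,514000\n500000,516000\n"

# Step 1 (LKM role): load the file into a staging table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg_account (parent TEXT, account TEXT)")
reader = csv.DictReader(io.StringIO(source_csv))
conn.executemany(
    "INSERT INTO stg_account VALUES (?, ?)",
    [(r["Parent"], r["Account"]) for r in reader],
)

# Step 2 (IKM role): read from staging and hand records to the target load;
# here a list of dicts stands in for the Planning load call
loaded = [
    {"parent": p, "member": a}
    for p, a in conn.execute("SELECT parent, account FROM stg_account")
]
```

Transformations, if any, would be applied as SQL over the staging table between the two steps, which is exactly what the Sunopsis Memory Engine does in the interface.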

In the Flow tab, the following settings are done to set the IKM Load Options on the target. The IKM SQL to Hyperion Planning load options and their possible values are:

LOAD_ORDER_BY_INPUT - Possible values: Yes or No; default: No. If set to Yes, members are loaded in the same order as in the input records.

SORT_PARENT_CHILD - Possible values: Yes or No; default: No. If set to Yes, incoming records are sorted so that all parents are inserted before children.

LOG_ENABLED - Possible values: Yes or No; default: No. If set to Yes, logging is done during the load process to the file specified by the LOG_FILE_NAME option.

LOG_FILE_NAME - The name of the file where logs are saved; default value: Java temp folder/dimension.log.

MAXIMUM_ERRORS_ALLOWED - Maximum number of errors before the load process is stopped; default value: 0. If set to 0 or a negative number, the load process is not stopped regardless of the number of errors.

LOG_ERRORS - Possible values: Yes or No; default: No. If set to Yes, error records are logged to the file specified by the ERROR_LOG_FILE property.

ERROR_LOG_FILE - The name of the file where error records are logged; default value: Java temp folder/dimension.err.

ERR_COL_DELIMITER - The column delimiter used for the error record file; default value: comma (,).

ERR_ROW_DELIMITER - The row delimiter used for the error record file; default value: \r\n. Note: Row and column delimiter values can also be specified in hexadecimal. A value that starts with 0x is treated as hexadecimal; for example, 0x0041 is treated as the letter A.

ERR_TEXT_DELIMITER - The text delimiter to be used for the column values in the error record file.

ERR_LOG_HEADER_ROW - Possible values: Yes or No; default: Yes. If set to Yes, the row header (with all column names) is logged in the error records file.

REFRESH_DATABASE - Possible values: Yes or No; default: No. If set to Yes, completion of the load operation invokes a cube refresh.
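As an illustration of what SORT_PARENT_CHILD promises, the sketch below reorders load records so that every parent row precedes its children. This is an assumption-level model of the option's behavior, not the KM's actual code:

```python
def sort_parent_child(records):
    """Order (parent, member) load records so every parent precedes its children.

    Illustrative sketch of the SORT_PARENT_CHILD idea: a record is emitted once
    its parent has already been emitted, or its parent is not part of this load.
    """
    members = {member for _, member in records}
    placed, ordered, pending = set(), [], list(records)
    while pending:
        progressed = False
        for rec in list(pending):
            parent, member = rec
            if parent not in members or parent in placed:
                ordered.append(rec)
                placed.add(member)
                pending.remove(rec)
                progressed = True
        if not progressed:
            # Cycle or orphan guard: emit the remainder unchanged
            ordered.extend(pending)
            break
    return ordered

# "Account" is the dimension root here, so 500000 must be inserted first
recs = [("500000", "513000"), ("Account", "500000"), ("500000", "514000")]
sorted_recs = sort_parent_child(recs)
```

With this ordering, 500000 is guaranteed to exist in the target before its children 513000 and 514000 are loaded, which is exactly why the option matters for unsorted input files.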

Following settings are done for the POC scenario:


Note: IKM Bug - There is a bug in ODI 11.1.1.6 when using IKM SQL to Hyperion Planning. While
preparing this POC, it was observed that the IKM was not visible in the IKM selector on the Flow tab
even after setting Staging Area Different from Target. The mechanisms that define the KMs
ODI offers in the drop-down are tied to the technologies of the interface's source, staging area and
target. Checking the IKM in the raw format in which it was imported showed the following:

ODI is very strict about the selected technologies, and since the datastore being used was of
technology Oracle, the KM was not visible on the interface. To resolve this, the source technology was
set to <undefined>, and the IKM then became visible in the interface.


Once the Load Option settings are done, the interface is saved and executed. Note the red box
around the green triangular button, the execution icon.


Next, the session logs are checked via ON.

3 rows were successfully processed.


Next, Hyperion Planning is accessed to check the addition of the metadata elements from the source
file. It is observed that the 3 source file accounts were added successfully to the metadata in Hyperion
Planning.

Next, Essbase is accessed via the EAS console to check whether the Cube Refresh property set
in the IKM Load Options properly refreshed the cube. It is observed that the refresh was successful and
the 3 accounts were refreshed to Essbase from the Planning repository.


Case 2: Load a Member Hierarchy (Metadata) from a Flat File Source to a Planning Application Target - Advanced Load with Source File Manipulation/Transformation Techniques

Source: Flat file to load the Segment dimension hierarchy in the PLN_Samp Hyperion application after
source file manipulation.
File Format:

Target: The objective is to have the following hierarchy loaded under a Parent member called
Product in the Segment Dimension using ODI file manipulation techniques on the source file.
As-Is State:


To-Be State:

For this POC scenario, other setups like creating a model, RE etc. are all similar to Case-1.
Therefore, those initial set-ups are skipped.
To create the mapping, an interface POC_SEG_DIM_LOAD_FILE_MANIPULATION is created.

The source and target datastores are dragged and dropped in the mapping area.


Following manipulations are done on the target datastore using the Expression Editor under the
Mapping Properties to map the source file columns. The transformation codes are self-explanatory.
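Since the actual Expression Editor code lives in the screenshots, the following Python sketch only illustrates the kind of transformation such mappings perform. The combined-code input format and the splitting rule are purely hypothetical:

```python
def derive_segment_columns(raw_code, parent="Product"):
    """Split a hypothetical combined source code such as "P100-Television"
    into the target Segment columns.

    The ODI expressions would do equivalent work with SUBSTR/INSTR-style
    functions in the staging area; this input format is an assumption.
    """
    member, _, alias = raw_code.partition("-")
    member = member.strip()
    alias = alias.strip() or member  # fall back to the member code if no alias part
    return {"Parent": parent, "Segments": member, "Alias: Default": alias}

row = derive_segment_columns("P100-Television")
```

The point is only that each target column (Parent, Segments, Alias) gets its own expression over the raw source columns, which is what the screenshots show per field.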


Segments:

Parent:


Alias:


Next, similar settings (as done for Case-1) are done in the Flow tab and the interface is executed.

Once the session is successfully complete, the Planning application is accessed and the hierarchy is
checked to confirm the metadata elements were added successfully.

The same can be checked via the EAS console as well, as shown in Case-1.

Case 3: Load Member Formula from a Flat File Source to a Planning Application Target

Source: The following flat file, named ACC_METADATA_FORMULA_LOAD_POC.csv, is intended
to load the member formula in column K of the flat file for the existing member 514000 under the
Planning Account dimension in the PLN_Samp application.


Target: Account member 514000 under parent member 500000 in the Planning Account dimension
within the PLN_Samp application. The member formula is intended to be loaded for the account
514000.
Following similar steps as in Case-1, source file RE is done and the corresponding datastore is created.


Note the column Member_Formula being reversed; this would be the key column in this POC
case. In case the size of the formula column is large, the physical and logical lengths can be
adjusted at the ODI level.

The source datastore is ready to be plugged into a new interface to simulate this case once RE is
done successfully.

The target datastore was already reverse-engineered during Case-1.

Next, under the project ODI_POC_PROJECT, a new interface is created, named
ODI_ACC_DIM_FORMULA_LOAD_FF4.

In the Mapping tab, source and target datastores are dragged and dropped and mappings are
performed.


Next, IKM level settings are done in the Flow tab; the interface is saved and executed.

In the ON, the session logs and details are subsequently viewed.


1 row was successfully processed.


Next, the PLN_Samp application is accessed and it is confirmed that the member formula was
successfully loaded to the application as intended.

The same is then verified from the EAS console to check the proper cube refresh from the IKM.


Case 4: Delete Member(s) from Hyperion Planning

To delete a member or members from Hyperion Planning, the Operation column in the target
datastore is used. This setting is done in the Mapping window. The following values can be used on
the Operation column:

Delete Level0 - Deletes the member being loaded if it has no children. Removes security attached to the member as well.
Delete IDescendants - Deletes the member being loaded and all of its descendants.
Delete Descendants - Deletes the descendants of the member being loaded, but does not delete the member itself.

In this POC case, a level0 member is deleted from Hyperion Planning PLN_Samp application using
ODI.
Source: Flat file ACC_METADATA_LOAD_POC_2.csv contains a member 516000 that needs
to be deleted from the Account dimension of the Planning application PLN_Samp.

Target: Account member 516000 under Account dimension of PLN_Samp application. This
account would be deleted in this POC scenario using ODI.
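The role of the Operation column in this case can be sketched as follows. The record layout and the helper function are hypothetical, but the four operation values are the ones the Planning target datastore recognizes:

```python
# Operation values accepted on the Planning target datastore's Operation column
VALID_OPERATIONS = {"Update", "Delete Level0", "Delete IDescendants", "Delete Descendants"}

def make_load_record(parent, member, operation="Update"):
    """Build a hypothetical load record; Operation defaults to Update,
    and deletes are driven purely by the Operation value."""
    if operation not in VALID_OPERATIONS:
        raise ValueError(f"Unsupported operation: {operation}")
    return {"Parent": parent, "Account": member, "Operation": operation}

# The Case-4 record: delete leaf member 516000 under parent 500000
rec = make_load_record("500000", "516000", "Delete Level0")
```

The same interface used for loading thus performs deletes, with only the Operation value changing per record.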


At the interface Mapping window, the Delete Level0 setting is done on the Operation column:

Other settings are all the same as in previous POC cases.

After running the interface, execution details and logs are observed in the ON.


One row is successfully processed.


Next, the PLN_Samp application is accessed to check whether member 516000 was deleted from the
application. It is observed that the member has been deleted from the Planning application.


Case 5: Load a Member Hierarchy (Metadata) from a Relational Database Source to a Planning Application Target

This POC case is very common in real-time projects involving an ERP system like the Oracle E-Business
Suite and Hyperion Planning, where metadata elements are refreshed in Hyperion Planning from the
interface tables of the transactional systems. For example, Chart of Accounts elements of Oracle General
Ledger (Accounts, Cost Centers, Payroll Salary Elements, etc.) required to carry out planning and
budgeting in an organization are pulled from standard or open interface tables via ODI into Hyperion
Planning to build and refresh the various dimensions.
In this POC scenario, metadata elements are added under Segments dimension in PLN_Samp
Hyperion Planning application from Oracle database table.
Source: The source table ODI_POC_PLN_HIER_MEMBERS is hosted under a custom schema
named DBF_HYP_POC_USER in Oracle database. Snapshot below shows the metadata element
to be loaded in the Segment dimension of PLN_Samp application.

Target: Within Segment dimension in PLN_Samp application, under parent member HA a child
element PTAS would be added using ODI from the source table.
In the DN, a new model folder is created to host Oracle Database related models and datastores.

Next, a new model ODI_RDBMS_SOURCE_PLN is created with the details as shown below,
followed by RE of the table ODI_POC_PLN_HIER_MEMBERS


A standard RE is done in this case.


Once RE completes successfully, a datastore for the source table is imported into the ODI
repository under the model created. Therefore, the source datastore is ready to be used in the mapping
interface.

Reversed columns of the Oracle table

The target datastore was already reverse-engineered in previous examples and is ready to be plugged into an interface.

Reversed columns of the Segment dimension under the PLN_Samp Planning application


Next, an interface named POC_SEG_DIM_LOAD_RDBMS is created.

On the Mapping window, source and target datastores are dragged and automatic mapping is
done. Fields not mapped automatically are mapped by dragging the source fields directly onto the
target fields.

Before moving to the Flow tab settings, it is important to note the data flow for scenarios where
metadata is loaded to Hyperion Planning from an RDBMS source:

Oracle Database Table --(LKM SQL to SQL)--> ODI Staging Layer or Sunopsis Memory Engine --(IKM SQL to Hyperion Planning)--> Hyperion Planning

Figure 10: Data Flow - Load Metadata from RDBMS to Hyperion Planning

LKM SQL to SQL: LKM will load the source database table into the staging area, and all
transformations will take place in the staging area afterwards. This is a generic KM.
IKM SQL to Hyperion Planning: Loads metadata and data into Hyperion planning applications.


Note: There is another KM, LKM SQL to Oracle, for when the staging area is hosted specifically on an
Oracle database. Though LKM SQL to SQL and LKM SQL to Oracle do the same job, for
large data volumes it is better to use LKM SQL to Oracle.
Details about loading strategies can be read at:
http://docs.oracle.com/cd/E23943_01/integrate.1111/e12645/lkm.htm
Next, Load Option settings are done on Planning IKM at Target on the Flow tab of the interface.

The interface is then saved and executed. Execution details and logs are observed from the ON.


1 row was successfully processed.


Next, by accessing the PLN_Samp application it is verified that the member PTAS was added
successfully.

The same is verified via Essbase EAS as well.


Case 6: Create a ODI File Watch Package to Load Metadata from Flat File
Source to a Planning Application Target Demonstrating File Watch, Error
Log Capture and E-mail Sending Features
This POC scenario is about creating an ODI package in which a file watch mechanism is implemented.
The idea is that the metadata load from a flat file named ACC_METADATA_LOAD_ODI_POC.csv to
the PLN_Samp application happens only if a 0 kb file named AccLoad.txt is present in a designated
folder. Success or failure of the package execution is notified to the intended recipients via e-mail; failure mails include the failure reason logs as attachments.
Source: Source file format is as shown below:

Target: Account dimension in PLN_Samp planning application, where the ODI package would load
the account POC_Account.
The interface POC_ACC_DIM_LOAD_FF1 is built as follows (the steps for RE and building the
interface are similar to those discussed in previous POC cases).


Definition

Mapping


Flow

Once the interface is ready, a package POC_FILE_WAIT_ATTACH_LOGS is created within the
project ODI_POC_PROJECT:


On the Diagram tab, the following ODI Tool elements are dragged and dropped to create the workflow.

The flow goes like the following:

1. The process waits via the file watch tool OdiFileWait with the settings shown below. The Tool
Properties contain the 0 kb file definition and the path where the package has to look for it.

2. Next, the interface POC_ACC_DIM_LOAD_FF1 is called.

3. From Step-2, in case the interface has failed, the process routes to a failure ODI tool,
OdiOutFile, that is intended to create a log file. Properties are as shown below. The error
log path and the text to append are provided in the definition. The text uses an ODI API,
getPrevStepLog, to get the log message from the previous step, as illustrated.


4. Next, a failure mail is sent attaching the logs using OdiSendMail tool. Settings are shown
below:

5. From Step-2, in case the interface execution is a success, the process routes to an ODI tool,
OdiSendMail, that sends a success mail to the intended users. Settings are:


6. The final step is to delete the 0 kb file, which is achieved using the ODI tool OdiFileDelete.
Settings are shown below.

Success Case: On running the package and viewing the session details from ON, following step
executions are observed.


Success Email is received as shown:

Accessing PLN_Samp shows that the member was added correctly.


Failure Case: To make the interface fail, the source file is removed from the source folder.

Next, the package is run. As expected the interface step has failed.

Failure mail is received. Error log file is attached.


Having a look at the error logs shows the reason for failure:

In case the 0 kb file is not present in the source folder and the package is executed, the following
happens:

AccLoad.txt
not present

The session keeps waiting for the file to arrive, up to the time delay defined in the properties. As
soon as it arrives, the process kicks off.

AccLoad.txt
has arrived.


The process completes and the success mail arrives.

This suggests that the package can be scheduled during the business day to monitor the source
folder, using the file watch mechanism to look for the 0 kb flag file before loading data from the source
file to the target.
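The package flow above can be sketched in plain Python. Here load_fn and notify_fn are hypothetical stand-ins for the interface execution and OdiSendMail, and the polling loop is only an assumption about how OdiFileWait behaves:

```python
import os
import time

def file_watch_load(flag_path, load_fn, notify_fn,
                    poll_seconds=1, timeout_seconds=10):
    """Sketch of the package: wait for a 0 kb flag file, run the load,
    mail the outcome, then delete the flag.

    load_fn stands in for the interface execution; notify_fn(status, body)
    stands in for OdiSendMail (and OdiOutFile on failure)."""
    waited = 0
    while not os.path.exists(flag_path):          # OdiFileWait role
        if waited >= timeout_seconds:
            return "timeout"
        time.sleep(poll_seconds)
        waited += poll_seconds
    try:
        load_fn()                                 # interface execution
        notify_fn("SUCCESS", "")                  # success OdiSendMail
        status = "success"
    except Exception as exc:
        notify_fn("FAILURE", str(exc))            # error log + failure mail
        status = "failure"
    finally:
        os.remove(flag_path)                      # OdiFileDelete role
    return status
```

Scheduling this function periodically mimics the scheduled package: it sits on the folder until the flag file appears, runs the load once, notifies, and removes the flag so the next drop can trigger a fresh run.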

Overview of Topics to be Covered in Part-2

Part-2 of this series will concentrate on using ODI with Essbase to simulate the following POC
cases:

Data loading to an Essbase cube (valid for both a native Essbase cube and a Planning Essbase cube)
Outline extraction
Loading SQL metadata to Essbase
Loading file metadata to Essbase
Writing from Essbase to a flat file - Data Export
Writing from Essbase to an RDBMS - Data Export

-------------------------------
