
INFORMATICA

POWERCENTER 8.1.0
DESIGNER

Content

WORKING WITH POWERCENTER 8 DESIGNER

integration * intelligence * insight

Designer Overview
Designer
The Designer is used to create mappings that contain transformation instructions for the
Integration Service. The Designer provides the following tools to analyze sources, design
target schemas, and build source-to-target mappings.
Source Analyzer
Imports or creates source definitions.
Target Designer
Imports or creates target definitions.
Transformation Developer
Develops transformations to use in mappings. We can also develop user-defined
functions to use in expressions.
Mapplet Designer
Creates sets of transformations (mapplets) to use in mappings.
Mapping Designer
Creates mappings that the Integration Service uses to extract, transform, and load
data.


Designer Overview
The Designer displays the following windows:
Navigator
Connect to repositories and open folders within the Navigator. We can also copy
objects and create shortcuts within the Navigator.
Workspace
Open different tools in this window to create and edit repository objects, such as
sources, targets, mapplets, transformations, and mappings.
Output
View details about tasks we perform, such as saving work or validating a mapping.
Designer Windows

Designer Overview
Status bar
Displays the status of the operation we perform.
Overview
An optional window that simplifies viewing a workspace containing a large mapping
or many objects. It outlines the visible area of the workspace and highlights selected
objects in color.
Instance data
View transformation data while running the Debugger to debug a mapping.
Target data
View target data while running the Debugger to debug a mapping.
Designer Windows

Designer Overview
Informatica PowerCenter 8 can access the following data sources and load the data into
the following targets.

Sources

Relational
Oracle
Sybase ASE
Informix
IBM DB2
Microsoft SQL Server
Teradata

File
Flat file
COBOL file
Microsoft Excel
XML file

Application
Hyperion Essbase
IBM MQSeries
IBM DB2 OLAP Server
JMS
Microsoft Message Queue
PeopleSoft
SAP NetWeaver
SAS
Siebel
TIBCO
WebMethods

Mainframe
Adabas
Datacom
IBM DB2 OS/390
IBM DB2 OS/400
IDMS
IDMS-X
IMS
VSAM

Other
Microsoft Access
Web log
External web services
Designer Overview
Targets

Relational
Oracle
Sybase ASE
Informix
IBM DB2
Microsoft SQL Server
Teradata

File
Flat file
XML file

Application
Hyperion Essbase
IBM MQSeries
IBM DB2 OLAP Server
JMS
Microsoft Message Queue
mySAP
PeopleSoft EPM
SAP BW
SAS
Siebel
TIBCO
WebMethods

Mainframe
IBM DB2 OS/390
IBM DB2 OS/400
VSAM

Other
Microsoft Access
External web services

About Transformation
A transformation is a repository object that generates, modifies, or passes data.
We configure logic in a transformation that the Integration Service uses to transform data.
The Designer provides a set of transformations that perform specific functions.
Transformations in a mapping represent the operations the Integration Service performs on the
data. Data passes into and out of transformations through ports that we link in a mapping or
mapplet.
Transformations can be active or passive. An active transformation can change the number of
rows that pass through it. A passive transformation does not change the number of rows that
pass through it.
Transformations can also be connected or unconnected. A connected transformation is linked
to other transformations in the data flow. An unconnected transformation is not connected to
other transformations in the mapping. It is called within another transformation, and returns a
value to that transformation.


About Transformation
Designer Transformations

Tasks to incorporate a transformation into a mapping:
Create the transformation.
Configure the transformation.
Link the transformation to other transformations and target definitions.

Aggregator - to perform aggregate calculations such as "group by".
Expression - to use various expressions.
Filter - to filter data with a single condition.
Joiner - to make joins between separate databases, files, or ODBC sources.
Lookup - to look up values in a local copy of the data.
Normalizer - to transform denormalized data into normalized data.
Rank - to select only the top (or bottom) ranked data.
Sequence Generator - to generate unique IDs for target tables.
Source Qualifier - to filter sources (SQL, select distinct, join, etc.).
Stored Procedure - to run stored procedures in the database and capture their
returned values.
Update Strategy - to flag records in the target for insert, delete, or update (defined
inside a mapping).
Router - same as Filter, but with multiple conditions.
Java Transformation - provides a simple native programming interface to define
transformation functionality with the Java programming language.
Reusable transformation - a transformation that can be used in multiple mappings.

Transformations can be created in the Mapping Designer, the Transformation Developer,
or the Mapplet Designer.

Types Of Transformation
Active Transformations

Filter
Router
Update Strategy
Aggregator
Sorter
Rank
Joiner
Normalizer

Passive Transformations

Sequence Generator
Stored Procedure
Expression
Lookup


Aggregator Transformation
The Aggregator is an active transformation.
The Aggregator transformation allows us to perform aggregate calculations, such as averages
and sums.
The Aggregator transformation is unlike the Expression transformation in that we can use the
Aggregator transformation to perform calculations on groups.
The Expression transformation permits calculations on a row-by-row basis only.
We can use conditional clauses to filter rows, providing more flexibility than the SQL language.
The Integration Service performs aggregate calculations as it reads, and stores the necessary
group and row data in an aggregate cache.
Components of the Aggregator Transformation
Aggregate expression
Group by port
Sorted input
Aggregate cache
Aggregate Expression
An aggregate expression can include conditional clauses and non-aggregate functions. It can
also include one aggregate function nested within another aggregate function, such as:
MAX( COUNT( ITEM ))
Aggregate Functions
The aggregate functions can be used within an Aggregator transformation.
We can nest one aggregate function within another aggregate function.
AVG
COUNT

Aggregator Transformation
Aggregate Functions

FIRST
LAST
MEDIAN
MAX
MIN
STDDEV
PERCENTILE
SUM
VARIANCE

Conditional Clauses
We use conditional clauses in the aggregate expression to reduce the number of rows used
in the aggregation. The conditional clause can be any clause that evaluates to TRUE or
FALSE.
Null Values in Aggregate Functions
When we configure the Integration Service, we can choose how we want the Integration
Service to handle null values in aggregate functions. We can choose to treat null values in
aggregate functions as NULL or zero. By default, the Integration Service treats null values
as NULL in aggregate functions.
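The behavior above can be illustrated with a minimal Python sketch (not PowerCenter syntax): a grouped SUM with a conditional clause, similar to SUM(SALARY, SALARY > 2000) grouped by DEPT. The column names and data are hypothetical, chosen only for illustration.

```python
from collections import defaultdict

# Sketch of a grouped aggregate with a conditional clause.
rows = [
    {"DEPT": "A", "SALARY": 1000},
    {"DEPT": "A", "SALARY": 3000},
    {"DEPT": "B", "SALARY": 2500},
    {"DEPT": "B", "SALARY": None},  # NULL: skipped, matching the default NULL handling
]

totals = defaultdict(int)
for row in rows:
    salary = row["SALARY"]
    # Conditional clause: only rows where SALARY > 2000 enter the aggregate.
    if salary is not None and salary > 2000:
        totals[row["DEPT"]] += salary

print(dict(totals))  # → {'A': 3000, 'B': 2500}
```

The 1000 row fails the condition and the NULL row is ignored, so each group sums only the qualifying values.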


Creating Aggregator Transformation


In the Mapping Designer, click Transformation > Create.
Select the Aggregator transformation. Enter a name for the
Aggregator, click Create. Then click Done.
The Designer creates the Aggregator transformation.
Drag the ports to the Aggregator transformation.
The Designer creates input/output ports for each port we
include.
Double-click the title bar of the transformation to open the
Edit Transformations dialog box .
Select the Ports tab.
Click the group by option for each column you want the
Aggregator to use in creating groups.
Click Add and enter a name and data type for the
aggregate expression port. Make the port an output port
by clearing Input (I). Click in the right corner of the
Expression field to open the Expression Editor. Enter the
aggregate expression, click Validate, and click OK.
Add default values for specific ports.
Select the Properties tab. Enter settings as necessary.
Click OK.
Choose Repository-Save.

Expression Transformation
We can use the Expression transformation to calculate values in a single row before writing
to the target.
We can use the Expression transformation to test conditional statements.
To perform calculations involving multiple rows, such as sums or averages, we use the
Aggregator transformation.
We can use the Expression transformation to perform any non-aggregate calculations.
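A minimal Python sketch (not PowerCenter syntax) of the row-by-row behavior described above: an output port computed from input ports within a single row. The port names QUANTITY, UNIT_PRICE, and TOTAL are hypothetical.

```python
# Sketch of an Expression transformation: compute an output-port value per row.
def expression_transform(row):
    # Output port TOTAL = QUANTITY * UNIT_PRICE (a non-aggregate calculation).
    return {**row, "TOTAL": row["QUANTITY"] * row["UNIT_PRICE"]}

rows = [{"QUANTITY": 2, "UNIT_PRICE": 5.0}, {"QUANTITY": 3, "UNIT_PRICE": 1.5}]
print([expression_transform(r)["TOTAL"] for r in rows])  # → [10.0, 4.5]
```

Each row is handled independently, which is why sums and averages across rows need the Aggregator instead.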
Creating an Expression Transformation
In the Mapping Designer, click Transformation > Create.
Select the Expression transformation and click OK.
The naming convention for Expression transformations is
EXP_TransformationName
Create the input ports
If we have the input transformation available, we can
select Link Columns from the Layout menu and then
drag each port used in the calculation into the
Expression transformation or we can open the
transformation and create each port manually.
Repeat the previous step for each input port we want to
add to the expression
Create the output ports we need

Expression Transformation
Setting the Expression in the Expression
Transformation
Enter the expression in the Expression
Editor for an output port (the Input
option must be cleared for that port).
Check the expression syntax by
clicking Validate.
Connect to Next Transformation
Connect the output ports to the
next transformation or target.
Select a Tracing Level on the
Properties Tab
Select a tracing level on the
Properties tab to determine the
amount of transaction detail
reported in the session log file.
Choose Repository-Save.


Filter Transformation
A Filter transformation is an Active Transformation.
We can filter rows in a mapping with a Filter transformation.
We pass all the rows from a source transformation through the Filter transformation, and
then enter a filter condition for the transformation.
All ports in a Filter transformation are input/output, and only rows that meet the condition
pass through the Filter transformation.
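A minimal Python sketch (not PowerCenter syntax) of the single-condition filtering described above. The condition SALARY > 30000 and the row data are hypothetical.

```python
# Sketch of a Filter transformation: only rows meeting the condition pass through.
rows = [{"NAME": "a", "SALARY": 40000}, {"NAME": "b", "SALARY": 20000}]

passed = [r for r in rows if r["SALARY"] > 30000]  # the filter condition
print([r["NAME"] for r in passed])  # → ['a']
```

Rows failing the condition are simply dropped from the pipeline; nothing is written for them downstream.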
Creating a Filter Transformation
In the Mapping Designer, click Transformation > Create.
Select the Filter transformation. Enter a name, and click
OK.
The naming convention for Filter transformations is
FIL_TransformationName.
Select and drag all the ports from a source qualifier or other
transformation to add them to the Filter transformation.
After we select and drag ports, copies of these ports
appear in the Filter transformation. Each column has both
an input and an output port.
Double-click the title bar of the filter transformation to edit
transformation properties.

Filter Transformation

Click the Value section of the condition, and then click the Open button.
The Expression Editor appears.
Enter the filter condition we want to apply.
Use values from one of the input ports in the transformation as part of this condition.
However, we can also use values from output ports in other transformations.
We may have to fix syntax errors before continuing.
Click OK.
Select the Tracing Level, and click OK to return to the Mapping Designer.
Choose Repository-Save.

Filter Transformation Tips


Use the Filter transformation early in the mapping.
Use the Source Qualifier transformation to filter.


Joiner Transformation
A Joiner transformation is an active transformation.
The Joiner transformation is used to join source data from two related heterogeneous
sources residing in different locations or file systems.
We can also join data from the same source.
The Joiner transformation joins sources with at least one matching column.
The Joiner transformation uses a condition that matches one or more pairs of columns
between the two sources.
We can join the following sources:

Two relational tables existing in separate databases.


Two flat files in potentially different file systems.
Two different ODBC sources.
A relational table and an XML source.
A relational table and a flat file source.
Two instances of the same XML source.

Creating a Joiner Transformation


In the Mapping Designer, click Transformation > Create. Select the Joiner transformation.
Enter a name, and click OK.
The naming convention for Joiner transformations is JNR_TransformationName.

Joiner Transformation
Drag all the input/output ports from the first source into the Joiner transformation.
The Designer creates input/output ports for the source fields in the Joiner transformation as
detail fields by default. We can edit this property later.
Select and drag all the input/output ports from the second source into the Joiner
transformation.
The Designer configures the second set of source fields as master fields by default.
Edit Transformation
Double-click the title bar of the Joiner transformation to open the Edit Transformations
dialog box.
Select the Ports tab.
Add default values for specific ports as necessary.
Setting the Condition
Select the Condition tab and set the condition.
Click the Add button to add a condition.
Click the Properties tab and configure properties for the transformation.
Click OK.

Joiner Transformation
Defining the Join Type
Join is a relational operator that combines data from multiple tables into a single result set.
We define the join type on the Properties tab in the transformation.
The Joiner transformation supports the following types of joins.

Normal
Master Outer
Detail Outer
Full Outer
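The master/detail behavior behind these join types can be sketched in a few lines of Python (not PowerCenter syntax). The keys and values are hypothetical; the Joiner reads the master source into memory and streams detail rows against it.

```python
# Sketch of master/detail joining.
master = {1: "Oracle", 2: "Sybase"}              # master rows keyed by the join column
detail = [(1, "row-a"), (1, "row-b"), (3, "row-c")]

# Normal join: only detail rows with a matching master key survive.
normal = [(k, v, master[k]) for k, v in detail if k in master]

# Master outer join: keep all detail rows, NULL-filling missing master values.
master_outer = [(k, v, master.get(k)) for k, v in detail]

print(normal)        # → [(1, 'row-a', 'Oracle'), (1, 'row-b', 'Oracle')]
print(master_outer)  # → [(1, 'row-a', 'Oracle'), (1, 'row-b', 'Oracle'), (3, 'row-c', None)]
```

A detail outer join reverses the roles (all master rows are kept), and a full outer join keeps unmatched rows from both sides.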

Joiner Transformation Tips


Perform joins in a database when possible.
Join sorted data when possible.
For an unsorted Joiner transformation, designate the source with fewer rows as the master
source.
For a sorted Joiner transformation, designate the source with fewer duplicate key values as
the master source.


Lookup Transformation
A Lookup transformation is a passive transformation.
Use a Lookup transformation in a mapping to look up data in a flat file or a relational table,
view, or synonym.
We can import a lookup definition from any flat file or relational database to which both the
PowerCenter Client and Integration Service can connect.
We can use multiple Lookup transformations in a mapping.
The Integration Service queries the lookup source based on the lookup ports in the
transformation.
It compares Lookup transformation port values to lookup source column values based on
the lookup condition.
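The comparison described above can be sketched in Python (not PowerCenter syntax): the lookup source behaves like an in-memory table keyed by the condition column. The ITEM_ID/ITEM_NAME names are hypothetical.

```python
# Sketch of a lookup: compare an input value to lookup source column values.
lookup_source = {101: "Widget", 102: "Gadget"}   # ITEM_ID -> ITEM_NAME

def lookup_item_name(item_id, default=None):
    # Lookup condition: ITEM_ID = IN_ITEM_ID; no match returns the default.
    return lookup_source.get(item_id, default)

print(lookup_item_name(101))             # → Widget
print(lookup_item_name(999, "UNKNOWN"))  # → UNKNOWN
```

Whether a missed lookup yields a default value or NULL depends on whether the transformation is connected or unconnected, as described below.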


Types of Lookup Transformation

Connected Lookup
Receives input values directly from the pipeline.
The cache includes all lookup columns used in the mapping.
If there is no match for the lookup condition, it returns the default value for all output ports.
Passes multiple output values to another transformation.
Supports user-defined default values.

Unconnected Lookup
Receives input values from another transformation calling a :LKP expression.
The cache includes all lookup/output ports in the lookup condition.
We can use a static cache.
If there is no match for the lookup condition, it returns NULL.
Passes one output value to another transformation.
Does not support user-defined default values.


Lookup Transformation
Tasks of Lookup Transformations
Get a related value.
Perform a calculation.
Update slowly changing dimension tables.
We can configure the Lookup transformation as connected or unconnected, and
cached or uncached.
Lookup Components
We have to define the following components when we
configure a Lookup transformation in a mapping:
Lookup source
Ports
Properties
Condition
Metadata extensions

Lookup Transformation
Creating a Lookup Transformation
In the Mapping Designer, click Transformation > Create. Select the Lookup
transformation. Enter a name for the transformation and click OK. The naming
convention for Lookup transformations is LKP_TransformationName.
In the Select Lookup Table dialog box, we can choose the following options:
Choose an existing table or file definition.
Choose to import a definition from a relational table or file.
Skip to create a manual definition.
If we want to manually define the Lookup transformation, click the Skip button.
Define input ports for each lookup condition we want to define.

Lookup Transformation
For Lookup transformations that use a dynamic lookup
cache, associate an input port or sequence ID with each
lookup port.
On the Properties tab, set the properties for the lookup.
Click OK.
Configuring Unconnected Lookup Transformations
An unconnected Lookup transformation is separate from the pipeline in the mapping.
We write an expression using the :LKP reference qualifier to call the lookup within
another transformation.
Adding input ports.
Adding the lookup condition. For example:
ITEM_ID = IN_ITEM_ID
PRICE <= IN_PRICE
Designating a return value.
Calling the lookup through an expression:
:LKP.lookup_transformation_name(argument, argument, ...)
Double-click the Lookup transformation to open the Edit Transformations dialog box.

Lookup Transformation
Set the properties on the Ports tab and the Properties tab.

Properties Tab

Ports Tab
Lookup Transformation Tips
Add an index to the columns used in a lookup condition.
Place conditions with an equality operator (=) first.
Cache small lookup tables.
Join tables in the database when possible.
Use a persistent lookup cache for static lookups.
Call unconnected Lookup transformations with the :LKP reference qualifier.

Lookup Caches
The Integration Service builds a cache in memory when it processes the first row of data in
a cached Lookup transformation.
It allocates memory for the cache based on the amount we configure in the transformation
or session properties.
The Integration Service stores condition values in the index cache and output values in the
data cache.
The Integration Service queries the cache for each row that enters the transformation.
The Integration Service also creates cache files by default in the $PMCacheDir.
Types of lookup caches

Persistent cache
Recache from database
Static cache
Dynamic cache
Shared cache
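The build-on-first-row behavior described above can be sketched in Python (a conceptual model, not the actual caching implementation). The fetch callable stands in for the lookup query against the source.

```python
# Sketch of a static lookup cache: built when the first row arrives,
# then queried from memory for every subsequent row.
class StaticLookupCache:
    def __init__(self, fetch_all):
        self._fetch_all = fetch_all   # stands in for the lookup source query
        self._cache = None

    def lookup(self, key):
        if self._cache is None:            # build the cache on the first row
            self._cache = dict(self._fetch_all())
        return self._cache.get(key)        # condition values act as the index

cache = StaticLookupCache(lambda: [(1, "a"), (2, "b")])
print(cache.lookup(2))  # → b
```

The real Integration Service splits this into an index cache (condition values) and a data cache (output values), and spills to files under $PMCacheDir when memory is exceeded.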

Rank Transformation
The Rank transformation is an active transformation.
The Rank transformation allows us to select only the top
or bottom rank of data.
The Rank transformation differs from the transformation
functions MAX and MIN in that it selects a group of top
or bottom values, not just one value.
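A minimal Python sketch (not PowerCenter syntax) of top-N ranking per group, the behavior that distinguishes Rank from MAX. The data and the number of ranks are hypothetical.

```python
import heapq
from collections import defaultdict

# Sketch of a Rank transformation: top 2 values per group.
rows = [("A", 10), ("A", 30), ("A", 20), ("B", 5), ("B", 50)]
top_n = 2  # the Number of Ranks property

groups = defaultdict(list)
for grp, value in rows:
    groups[grp].append(value)   # Group By port defines the groups

ranked = {grp: heapq.nlargest(top_n, vals) for grp, vals in groups.items()}
print(ranked)  # → {'A': [30, 20], 'B': [50, 5]}
```

Selecting the bottom rank would use `heapq.nsmallest` instead, mirroring the top/bottom choice on the Properties tab.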
Creating Rank Transformation
In the Mapping Designer, click Transformation >
Create. Select the Rank transformation. Enter a
name for the Rank. The naming convention for
Rank transformations is
RNK_TransformationName.
Enter a description for the transformation. This
description appears in the Repository Manager.
Click Create, and then click Done.
The Designer creates the Rank transformation.
Link columns from an input transformation to the
Rank transformation.
Click the Ports tab, and then select the Rank (R)
option for the port used to measure ranks.
If we want to create groups for ranked rows,
select Group By for the port that defines the
group.

Port Tab

Rank Transformation
Click the Properties tab and select whether we
want the top or bottom rank
For the Number of Ranks option, enter the
number of rows we want to select for the rank.
Change the other Rank transformation
properties, if necessary.
Click OK.
Click Repository > Save.

Properties Tab


Sequence Generator Transformation

A Sequence Generator transformation is a passive transformation.


The Sequence Generator transformation generates numeric values.
We can use the Sequence Generator to create unique primary key values or cycle through a
sequential range of numbers.
The Sequence Generator transformation is a connected transformation.
The Integration Service generates a value each time a row enters a connected
transformation, even if that value is not used.
When NEXTVAL is connected to the input port of another transformation, the Integration
Service generates a sequence of numbers.
When CURRVAL is connected to the input port of another transformation, the Integration
Service generates the NEXTVAL value plus the Increment By value.
We can make a Sequence Generator reusable, and use it in multiple mappings.
We might reuse a Sequence Generator when we perform multiple loads to a single target.
For example, if we have a large input file that we separate into three sessions running in
parallel, we can use a Sequence Generator to generate primary key values.
If we use different Sequence Generators, the Integration Service might accidentally
generate duplicate key values.
Instead, we can use a reusable Sequence Generator for all three sessions to provide a
unique value for each target row.


Sequence Generator Transformation


Tasks with a Sequence Generator Transformation
Create keys
Replace missing values
Cycle through a sequential range of numbers
Creating a Sequence Generator Transformation
In the Mapping Designer, click Transformation > Create. Select
the Sequence Generator transformation. The naming
convention for Sequence Generator transformations is
SEQ_TransformationName.
Enter a name for the Sequence Generator, and click Create.
Click Done.
The Designer creates the Sequence Generator transformation.
Edit Transformation
Double-click the title bar of the transformation to open the Edit
Transformations dialog box.
Properties Tab
Select the Properties tab. Enter settings as necessary.
Click OK.
To generate new sequences during a session, connect the
NEXTVAL port to at least one transformation in the mapping.
Choose Repository-Save.

Sequence Generator Transformation


Sequence Generator Ports
The Sequence Generator provides two output ports:
NEXTVAL and CURRVAL.
Use the NEXTVAL port to generate a sequence of numbers by
connecting it to a transformation or target. We connect the
NEXTVAL port to a downstream transformation to generate the
sequence based on the Current Value and Increment By
properties.
Connect NEXTVAL to multiple transformations to generate
unique values for each row in each transformation.
For example, we might connect NEXTVAL to two target tables
in a mapping to generate unique primary key values.
NEXTVAL Connected to Two Target Tables in a Mapping
We configure the Sequence Generator transformation as
follows: Current Value = 1, Increment By = 1. When we run the
workflow, the Integration Service generates primary key values
for the T_ORDERS_PRIMARY and T_ORDERS_FOREIGN
target tables.

Sequence Generator Transformation

Sequence Generator and Expression Transformation
We configure the Sequence Generator transformation as
follows: Current Value = 1, Increment By = 1.
Output: the generated key values for the
T_ORDERS_PRIMARY and T_ORDERS_FOREIGN target
tables.

Sequence Generator Transformation


CURRVAL is the NEXTVAL value plus one or, more
generally, NEXTVAL plus the Increment By value.
We typically only connect the CURRVAL port when the
NEXTVAL port is already connected to a downstream
transformation.
When a row enters the transformation connected to the
CURRVAL port, the Integration Service passes the
last-created NEXTVAL value plus one.
Connecting CURRVAL and NEXTVAL Ports to a Target
We configure the Sequence Generator transformation
as follows: Current Value = 1, Increment By = 1.
When we run the workflow, the Integration Service
generates paired NEXTVAL and CURRVAL values.
If we connect the CURRVAL port without connecting
the NEXTVAL port, the Integration Service passes a
constant value for each row.
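The NEXTVAL/CURRVAL pairing can be sketched in Python (a conceptual model, not PowerCenter code), using the same Current Value = 1, Increment By = 1 configuration:

```python
# Sketch of the Sequence Generator NEXTVAL and CURRVAL ports.
class SequenceGenerator:
    def __init__(self, current_value=1, increment_by=1):
        self.value = current_value
        self.increment_by = increment_by

    def nextval(self):
        v = self.value
        self.value += self.increment_by   # advance for the next row
        return v

    def currval(self, nextval):
        # CURRVAL is NEXTVAL plus the Increment By value.
        return nextval + self.increment_by

seq = SequenceGenerator()
for _ in range(3):
    n = seq.nextval()
    print(n, seq.currval(n))
# → 1 2
#   2 3
#   3 4
```

With an Increment By of 1 this reproduces the "NEXTVAL plus one" behavior described above.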

Sequence Generator Transformation

Connecting Only the CURRVAL Port to a Target
For example, we configure the Sequence Generator
transformation as follows: Current Value = 1,
Increment By = 1.
When we run the workflow, the Integration Service
generates the following constant values for CURRVAL:
1, 1, 1, 1, 1


Source Qualifier Transformation


A Source Qualifier is an active transformation.
The Source Qualifier represents the rows that the Integration
Service reads when it runs a session.
When we add a relational or flat file source definition to a
mapping, the Designer automatically adds a Source Qualifier
transformation.
Task of Source Qualifier Transformation
We can use the Source Qualifier to perform the following
tasks.
Join data originating from the same source database.
Filter records when the Integration Service reads source data.
Specify an outer join rather than the default inner join.
Specify sorted ports.
Select only distinct values from the source.
Create a custom query to issue a special SELECT statement
for the Integration Service to read source data.
Default Query of Source Qualifier
For relational sources, the Integration Service generates a
query for each Source Qualifier when it runs a session.
The default query is a SELECT statement for each source
column used in the mapping.
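A minimal Python sketch of how that default query is composed (the Integration Service does this internally; this is only an illustration, and the table/column names are hypothetical):

```python
# Sketch: the default query is a SELECT of every mapped source column.
def default_query(table, mapped_columns):
    cols = ", ".join(f"{table}.{c}" for c in mapped_columns)
    return f"SELECT {cols} FROM {table}"

print(default_query("ORDERS", ["ORDER_ID", "ORDER_AMOUNT"]))
# → SELECT ORDERS.ORDER_ID, ORDERS.ORDER_AMOUNT FROM ORDERS
```

Columns not connected downstream of the Source Qualifier are omitted from the generated SELECT, which is why the query depends on the mapping, not just the source definition.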

Source Qualifier Transformation


To view the default query:
From the Properties tab, select SQL Query.
Click Generate SQL.
Click Cancel to exit.
Example of Source Qualifier Transformation
We might want to see all the orders for the month, including
order number, order amount, and customer name.
The ORDERS table includes the order number and
amount of each order, but not the customer name.
To include the customer name, we need to join the
ORDERS and CUSTOMERS tables.
Setting the properties to Source Qualifier
Double-click the title bar of the transformation to
open the Edit Transformations dialog box.
Select the Properties tab. Enter settings as
necessary.

Source Qualifier Transformation


SQL Query
We can override the default query in the Source Qualifier
transformation.
From the Properties tab, select SQL Query; the SQL
Editor displays. Click Generate SQL.
Joining Source Data
We can use one Source Qualifier transformation to join
data from multiple relational tables. These tables must
be accessible from the same instance or database
server.
Use the Joiner transformation for heterogeneous
sources and to join flat files.
Sorted Ports
In the Mapping Designer, open a Source Qualifier
transformation, and click the Properties tab.
Click in Number of Sorted Ports and enter the number
of ports we want to sort.
The Integration Service adds the configured number of
columns to an ORDER BY clause, starting from the top
of the Source Qualifier transformation.
The source database sort order must correspond to the
session sort order.

Stored procedure Transformation


A Stored Procedure transformation is a passive transformation.
A Stored Procedure transformation is an important tool for populating and maintaining
databases. Database administrators create stored procedures to automate tasks that are too
complicated for standard SQL statements.
Stored procedures run in either connected or unconnected mode. The mode we use
depends on what the stored procedure does and how we plan to use it in a session. we can
configure connected and unconnected Stored Procedure transformations in a mapping.
Connected: The flow of data through a mapping in connected mode also passes
through the Stored Procedure transformation. All data entering the transformation
through the input ports affects the stored procedure. We should use a connected
Stored Procedure transformation when we need data from an input port sent as an
input parameter to the stored procedure, or the results of a stored procedure sent as an
output parameter to another transformation.
Unconnected: The unconnected Stored Procedure transformation is not connected
directly to the flow of the mapping. It either runs before or after the session, or is called
by an expression in another transformation in the mapping.


Stored procedure Transformation


Creating a Stored Procedure Transformation
After we configure and test a stored procedure in the
database, we must create the Stored Procedure
transformation in the Mapping Designer

To import a stored procedure


In the Mapping Designer, click Transformation >Import Stored
Procedure.
Select the database that contains the stored procedure from
the list of ODBC sources. Enter the user name, owner name,
and password to connect to the database and click Connect
Select the procedure to import and click OK.
The Stored Procedure transformation appears in the mapping.
The Stored Procedure transformation name is the same as
the stored procedure we selected.
Open the transformation, and click the Properties tab
Select the database where the stored procedure exists from
the Connection Information row. If we changed the name of
the Stored Procedure transformation to something other than
the name of the stored procedure, enter the Stored Procedure
Name.
Click OK.
Click Repository > Save to save changes to the mapping.

Update Strategy
An Update Strategy is an active transformation.
When we design a data warehouse, we need to decide what type of information to store in
targets. As part of the target table design, we need to determine whether to maintain all the
historic data or just the most recent changes.
The model we choose determines how we handle changes to existing rows. In
PowerCenter, we set the update strategy at two different levels.
Within a session
Within a mapping

Setting the Update Strategy


We use the following steps to define an update strategy
To control how rows are flagged for insert, update, delete, or reject within a mapping, add
an Update Strategy transformation to the mapping. Update Strategy transformations are
essential if we want to flag rows destined for the same target for different database
operations, or if we want to reject rows.
Define how to flag rows when we configure a session. We can flag all rows for insert,
delete, or update, or we can select the data driven option, where the Integration Service
follows instructions coded into Update Strategy transformations within the session mapping.
Define insert, update, and delete options for each target when we configure a session. On a
target-by-target basis, we can allow or disallow inserts and deletes.
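The data-driven flagging described above can be sketched in Python (not PowerCenter expression syntax). The DD_* constants mirror the values PowerCenter uses for update strategy expressions; the row shape and key set are hypothetical.

```python
# Sketch of a data-driven update strategy expression, similar to
# IIF(ISNULL(CUSTOMER_ID), DD_REJECT, IIF(existing, DD_UPDATE, DD_INSERT)).
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(row, existing_keys):
    # Reject incomplete rows, update known keys, insert new ones.
    if row.get("CUSTOMER_ID") is None:
        return DD_REJECT
    return DD_UPDATE if row["CUSTOMER_ID"] in existing_keys else DD_INSERT

existing = {100, 101}
print(flag_row({"CUSTOMER_ID": 100}, existing))   # → 1 (DD_UPDATE)
print(flag_row({"CUSTOMER_ID": 200}, existing))   # → 0 (DD_INSERT)
print(flag_row({"CUSTOMER_ID": None}, existing))  # → 3 (DD_REJECT)
```

At run time the session's Treat Source Rows As property must be set to Data Driven for these per-row flags to take effect.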

Update Strategy
Creating an Update Strategy Transformation
In the Mapping Designer, click Transformation > Create. Select the
Update Strategy transformation. The naming convention for Update
Strategy transformations is UPD_TransformationName.
Enter a name for the Update Strategy transformation, and click
Create. Click Done.
The Designer creates the Update Strategy transformation.
Drag all ports from another transformation representing data we
want to pass through the Update Strategy transformation.
In the Update Strategy transformation, the Designer creates a copy
of each port we drag. The Designer also connects the new port to
the original port. Each port in the Update Strategy transformation is
an input/output port.
Normally, we would select all of the columns destined for a particular
target. After they pass through the Update Strategy transformation,
this information is flagged for update, insert, delete, or reject.
Double-click the title bar of the transformation to open the Edit
Transformations dialog box.
Click the Properties tab.
Click the button in the Update Strategy Expression field.
The Expression Editor appears.
Update Strategy
Enter an update strategy expression to flag
rows as inserts, deletes, updates, or rejects.
Validate the expression and click OK.
Click OK to save the changes.
Connect the ports in the Update Strategy
transformation to another transformation or a
target instance.
Click Repository > Save.
Setting the Update Strategy for a Session
When we configure a session, we have
several options for handling specific database
operations, including updates.
Specifying an Operation for All Rows
When we configure a session, we can select
a single database operation for all rows using
the Treat Source Rows As setting.
Configure the Treat Source Rows As session
property.
The Treat Source Rows As property displays the following options.
Insert
Delete
Update
Data Driven
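As a rough illustration of the Data Driven semantics, the following Python sketch flags each row the way an Update Strategy expression would. The column names are hypothetical; the DD_* constants use the same numeric values as in the transformation language.

```python
# PowerCenter row-flag constants (same numeric values as the DD_*
# constants in the transformation language)
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(row):
    """Mimics an Update Strategy expression such as
    IIF(ISNULL(id), DD_REJECT,
        IIF(status = 'inactive', DD_DELETE, DD_UPDATE)).
    The 'id' and 'status' columns are illustrative only."""
    if row.get("id") is None:
        return DD_REJECT          # rows without a key are rejected
    if row.get("status") == "inactive":
        return DD_DELETE          # inactive rows are flagged for delete
    return DD_UPDATE              # everything else is flagged for update

rows = [{"id": 1, "status": "active"},
        {"id": 2, "status": "inactive"},
        {"id": None, "status": "active"}]
flags = [flag_row(r) for r in rows]  # [DD_UPDATE, DD_DELETE, DD_REJECT]
```

With Treat Source Rows As set to Data Driven, the Integration Service reads these per-row flags instead of applying one operation to all rows.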
Update Strategy
Specifying Operations for Individual Target Tables
Once we determine how to treat all rows in the
session, we also need to set update strategy options
for individual targets. Define the update strategy
options in the Transformations view on Mapping tab
of the session properties.
We can set the following update strategy options for
Individual Target Tables.
Insert. Select this option to insert a row into a target
table.
Delete. Select this option to delete a row from the
target table.
Update. We have the following options in this
situation.
Update as Update. Update each row flagged
for update if it exists in the target table.
Update as Insert. Insert each row flagged for
update.
Update else Insert. Update the row if it exists.
Otherwise, insert it.
Truncate table. Select this option to truncate the
target table before loading data.
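The target-level update options can be sketched in Python as well. This is only an illustration of the Update else Insert semantics, with a dict standing in for the target table.

```python
def update_else_insert(target, key, row):
    """'Update else Insert': update the target row if the key exists,
    otherwise insert it. `target` is a dict keyed on the target
    table's primary key (an illustrative stand-in for a database)."""
    if row[key] in target:
        target[row[key]].update(row)   # behaves like Update as Update
    else:
        target[row[key]] = dict(row)   # key not found: fall back to insert

target = {10: {"id": 10, "name": "old"}}
update_else_insert(target, "id", {"id": 10, "name": "new"})    # updates row 10
update_else_insert(target, "id", {"id": 20, "name": "fresh"})  # inserts row 20
```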

Router Transformation
A Router transformation is an Active Transformation.
A Router transformation is similar to a Filter transformation because both transformations
allow us to use a condition to test data.
A Filter transformation tests data for one condition and drops the rows of data that do not
meet the condition. However, a Router transformation tests data for one or more conditions
and gives us the option to route rows of data that do not meet any of the conditions to a
default output group.
If we need to test the same input data based on multiple conditions, use a Router
transformation in a mapping instead of creating multiple Filter transformations to perform
the same task.
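The difference from a chain of Filter transformations can be sketched in Python: one pass over the input feeds every matching user-defined group, and unmatched rows fall into the default group. The group names and conditions here are hypothetical.

```python
def route(rows, groups):
    """One pass over the input, like a Router transformation:
    `groups` maps a user-defined group name to its filter condition.
    A row is passed to every group whose condition it meets; rows
    matching no condition go to the DEFAULT output group."""
    out = {name: [] for name in groups}
    out["DEFAULT"] = []
    for row in rows:
        matched = False
        for name, cond in groups.items():
            if cond(row):
                out[name].append(row)
                matched = True
        if not matched:
            out["DEFAULT"].append(row)
    return out

rows = [{"dept": 10}, {"dept": 20}, {"dept": 99}]
out = route(rows, {"DEPT_10": lambda r: r["dept"] == 10,
                   "DEPT_20": lambda r: r["dept"] == 20})
# the dept 99 row matches neither condition and lands in DEFAULT
```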
Creating a Router Transformation
In the Mapping Designer, click Transformation >
Create. Select the Router transformation. Enter a
name for the transformation and click OK.
The naming convention for Router transformations is
RTR_TransformationName.
Input Ports in the Router Transformation
Select and drag all the desired ports from a
transformation to add them to the Router
transformation.
Double-click the title bar of the Router
transformation to edit transformation properties.
Router Transformation
Setting the properties on the Ports tab and the Properties tab

Port Tab

Properties Tab

Group tab in Router Transformation


Click the Group Filter Condition
field to open the Expression Editor.
Enter a group filter condition.
Click Validate to check the syntax
of the conditions we entered.
Click OK.
Connect group output ports to
transformations or targets.
Click Repository > Save.

Router Transformation
A Router transformation has the following
types of groups.
Input
Output
There are two types of output groups.
User-defined groups
Default group

Router Transformation Components

Working with Ports


A Router transformation has input ports and
output ports.
Input ports reside in the input group, and
output ports reside in the output groups.
We can create input ports by copying
them from another transformation or by
manually creating them on the Ports tab.

Port tab in Router Transformation

Router Transformation
Connecting Router Transformations in a Mapping
When we connect transformations to a Router
transformation in a mapping, consider the following
rules.
We can connect one group to one transformation or
target.
Connect one port to multiple targets
We can connect one output port in a group to
multiple transformations or targets.
Connect multiple output ports to multiple targets
We can connect multiple output ports in one group
to multiple transformations or targets.

Reusable Transformation

Reusable transformation is a transformation that can be used in multiple mappings.


We can create most transformations as non-reusable or reusable, but we can create the
External Procedure transformation only as a reusable transformation.
When we add a reusable transformation to a mapping, we add an instance of the
transformation. The definition of the transformation still exists outside the mapping.
Methods To Create Reusable Transformation

Design it in the Transformation Developer


In the Transformation Developer, we can build new reusable transformations.
Promote a non-reusable transformation from the Mapping Designer
After we add a transformation to a mapping, we can promote it to the status of reusable
transformation. The transformation designed in the mapping then becomes an instance of a
reusable transformation maintained elsewhere in the repository.

Reusable Transformation
Creating a Reusable Transformation
In the Transformation Developer, click
Transformation > Create.
To promote an existing transformation to
reusable: in the Mapping Designer, double-click
the transformation, go to the Transformation
tab, and check Make Reusable.
Changes that can invalidate a mapping
When we delete a port or multiple ports in a
transformation.
When we change a port datatype, we may
make it impossible to map data from that
port to another port with an incompatible
datatype.
When we change a port name, expressions
that refer to the port are no longer valid.
When we enter an invalid expression in the
reusable transformation, mappings that use
the transformation are no longer valid. The
Integration Service cannot run sessions
based on invalid mappings.

Java Transformation
The Java transformation is an Active or Passive connected transformation that provides a
simple native programming interface to define transformation functionality with the Java
programming language.
We create Java transformations by writing Java code snippets that define the transformation
logic.
The PowerCenter Client uses the Java Development Kit (JDK) to compile the Java code
and generate byte code for the transformation. The Integration Service uses the Java
Runtime Environment (JRE) to execute the generated byte code at run time.
Steps To Define Java Transformation
Create the transformation in the Transformation Developer or Mapping Designer.
Configure input and output ports and groups for the transformation. Use port names as
variables in Java code snippets.
Configure the transformation properties.
Use the code entry tabs in the transformation to write and compile the Java code for the
transformation.
Locate and fix compilation errors in the Java code for the transformation.

Java Transformation
Enter the ports and use those ports as identifiers in
the Java code.
Go to the Java Code tab, enter the Java code, click
Compile, and check the result in the output window.
Create a session and workflow and run the session.
Functions
Some functions used in the Designer are:
AVG
Syntax: AVG( numeric_value [, filter_condition ] )
MAX
Syntax: MAX( value [, filter_condition ] )
MIN
Syntax: MIN( value [, filter_condition ] )
INSTR
Syntax: INSTR( string, search_value [, start [,
occurrence ] ] )
SUBSTR
Syntax: SUBSTR( string, start [, length ] )
IS_DATE
Syntax: IS_DATE( value )
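As a rough guide to the string functions above, the following Python sketch mimics the 1-based INSTR and SUBSTR semantics. It is simplified: negative start positions, which the transformation language also accepts for backward searches, are not handled here.

```python
def INSTR(string, search, start=1, occurrence=1):
    """Rough equivalent of the 1-based INSTR function: returns the
    position of the nth occurrence of `search` at or after `start`,
    or 0 if it is not found."""
    pos = start - 1                       # convert to 0-based index
    for _ in range(occurrence):
        pos = string.find(search, pos)
        if pos == -1:
            return 0                      # not found
        pos += 1                          # step past the match
    return pos                            # already 1-based after += 1

def SUBSTR(string, start, length=None):
    """Rough equivalent of the 1-based SUBSTR function."""
    i = start - 1
    return string[i:] if length is None else string[i:i + length]

INSTR("informatica", "a")        # first 'a' is at position 7
INSTR("informatica", "a", 1, 2)  # second 'a' is at position 11
SUBSTR("informatica", 3, 5)      # "forma"
```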
Working With Flat Files


To use flat files as sources, targets, and
lookups in a mapping, we must import or
create the definitions in the repository.
We can import or create flat file source
definitions in the Source Analyzer, create
flat file target definitions in the Target
Designer, and import flat file lookups or use
existing file definitions in a Lookup
transformation.
When we create a session with a file source,
we can specify a source file location different
from the location we used when we imported
the file source definition.
Importing a Flat File
Go to Sources > Import from File and select the file.
Click Delimited, then Next.
Name the columns and change the datatypes if
needed.
Click Finish.

Working With Flat Files


Editing a flat file definition
Table tab
Edit properties such as table name,
business name, and flat file properties.
Columns tab
Edit column information such as column
names, datatypes, precision, and formats.

Properties tab
We can edit the default numeric and
datetime format properties in the Source
Analyzer and the Target Designer.
Metadata Extensions tab
We can extend the metadata stored in the
repository by associating information with
repository objects, such as flat file
definitions.

User Defined Functions


We can create user-defined functions using
the PowerCenter transformation language.
Create user-defined functions to reuse
expression logic and build complex
expressions. User-defined functions are
available to other users in a repository.
Once we create user-defined functions, we
can manage them from the User-Defined
Function Browser dialog box. We can also
use them as functions in the Expression
Editor, where they display on the
User-Defined Functions tab.
We create a user-defined function in the
Transformation Developer. Configure the
following information when we create a
user-defined function.
Name
Type
Description
Arguments
Syntax

Steps to Create User-Defined Functions


In the Transformation Developer, click
Tools > User-Defined Functions.
Click New.
The Edit User-Defined Function dialog
box appears.
Enter a function name.
Select a function type.
If we create a public user-defined
function, we cannot change the function
to private when we edit it.

User Defined Functions


Optionally, enter a description of the user-defined function.
We can enter up to 2,000 characters.

Create arguments for the user-defined function.


When we create arguments, configure the argument name, data type, precision, and scale.
We can select transformation data types.
Click Launch Editor to create an expression that contains the arguments we defined.
Click OK.
The Designer assigns the data type of the data the expression returns. The data types have
the precision and scale of transformation data types.
Click OK.
The expression displays in the User-Defined Function Browser dialog box.
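Conceptually, a user-defined function packages an expression over named arguments so the logic can be reused across mappings. The Python sketch below mirrors that idea with a hypothetical CLEAN_NAME function built from stand-ins for built-in transformation language functions.

```python
# Stand-ins for the built-in LTRIM, RTRIM, and UPPER functions of the
# transformation language (simplified: the optional trim-set argument
# of LTRIM/RTRIM is not modeled).
def LTRIM(s): return s.lstrip()
def RTRIM(s): return s.rstrip()
def UPPER(s): return s.upper()

def CLEAN_NAME(name):
    """Hypothetical public user-defined function whose expression
    would be UPPER(LTRIM(RTRIM(name))): trim both ends, uppercase."""
    return UPPER(LTRIM(RTRIM(name)))

CLEAN_NAME("  smith ")  # "SMITH"
```

In the Designer, this kind of function would then appear on the User-Defined Functions tab of the Expression Editor and could be called from any expression.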

Mapplet Designer
A mapplet is a reusable object that we create in the Mapplet Designer. It contains a set of
transformations and we reuse that transformation logic in multiple mappings.
When we use a mapplet in a mapping, we use an instance of the mapplet. Like a reusable
transformation, any change made to the mapplet is inherited by all instances of the mapplet.
Usage of Mapplets
Include source definitions
Use multiple source definitions and source qualifiers to provide source data for a mapping.
Accept data from sources in a mapping
If we want the mapplet to receive data from the mapping, we use an Input transformation to
receive source data.
Include multiple transformations
A mapplet can contain as many transformations as we need.
Pass data to multiple transformations
We can create a mapplet to feed data to multiple transformations.
Contain unused ports
We do not have to connect all mapplet input and output ports in a mapping.
Mapplet Designer
Limitations of Mapplets
We cannot connect a single port in the Input transformation to multiple transformations in
the mapplet.

An input transformation must receive data from a single active source.


A mapplet must contain at least one Input transformation or source definition with at least
one port connected to a transformation in the mapplet; the same applies to the Output
transformation.
When a mapplet contains a source qualifier that has an override for the default SQL query,
we must connect all of the source qualifier output ports to the next transformation within the
mapplet.
We cannot include PowerMart 3.5-style LOOKUP functions in a mapplet.
We cannot include the following objects: Normalizer transformations, COBOL sources, XML
Source Qualifier transformations, XML sources and targets, pre- and post-session stored
procedures, and other mapplets.
Data Profiling
Data profiling is a technique used to analyze source data. PowerCenter Data Profiling can
help us evaluate source data and detect patterns and exceptions. We can profile source
data to suggest candidate keys, detect data patterns, and evaluate join criteria.
Use Data Profiling to analyze source data in the following situations.
During mapping development.
During production, to maintain data quality.
To profile source data, we create a data profile. We can create a data profile based on a
source or mapplet in the repository. Data profiles contain functions that perform calculations
on the source data.
The repository stores the data profile as an object. We can apply profile functions to a
column within a source, to a single source, or to multiple sources.
We can create the following types of data profiles.
Auto profile
Contains a predefined set of functions for profiling source data. Use an auto profile during
mapping development.
Custom profile
Use a custom profile during mapping development to validate documented business rules
about the source data. we can also use a custom profile to monitor data quality or validate
the results of BI reports.
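The kind of calculation a profile function performs can be sketched in Python. This hypothetical pass reports row, null, and distinct counts per column and flags candidate keys, roughly what an auto profile's column-level functions compute.

```python
def profile(rows, columns):
    """A minimal auto-profile-style pass: for each column, report the
    row count, null count, and distinct count. A column with no nulls
    whose distinct count equals the row count is a candidate key."""
    n = len(rows)
    report = {}
    for col in columns:
        values = [r.get(col) for r in rows]
        nulls = sum(v is None for v in values)
        distinct = len(set(v for v in values if v is not None))
        report[col] = {"rows": n, "nulls": nulls, "distinct": distinct,
                       "candidate_key": nulls == 0 and distinct == n}
    return report

rows = [{"id": 1, "dept": "A"},
        {"id": 2, "dept": "A"},
        {"id": 3, "dept": None}]
rep = profile(rows, ["id", "dept"])
# "id" is unique and non-null, so it is suggested as a candidate key;
# "dept" has one null and only one distinct value
```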

Data Profiling

Steps To Create Auto Profile


When we create an auto profile, we can profile
groups or columns in the source, or we can
profile the entire source.
To create an auto profile:
Select the source definition in the Source
Analyzer or the mapplet in the Mapplet
Designer that you want to profile.
Launch the Profile Wizard from the following
Designer tools.
Source Analyzer. Click Sources >
Profiling > Create Auto Profile.
Mapplet Designer. Click Mapplets >
Profiling > Create Auto Profile.
The Auto Profile Column Selection dialog box
opens if you set the default data profile options
to open it when you create an auto profile, or if
the source definition contains 25 or more
columns.
Optionally, click Description to add a description
for the data profile. Enter a description of up to
200 characters. Click OK.
Optionally, select the groups or columns in the
source that you want to profile.
By default, all columns or groups are selected.
Select Load Verbose Data if you want the
Integration Service to write verbose data to the
Data Profiling warehouse during the profile
session. By default, the Load Verbose Data
option is disabled.
Click Next.
Select additional functions to include in the auto
profile. We can also clear functions we do not
want to include.

Data Profiling
Optionally, click Save As Default to create new default functions based on the functions
selected here.
Optionally, click Profile Settings to enter settings for domain inference and structure inference
tuning.
Optionally, modify the default profile settings and click OK.
Click Configure Session to configure the session properties after you create the data profile.
Click Next if you selected Configure Session, or click Finish if you disabled Configure Session.
The Designer generates a data profile and profile mapping based on the profile functions.
Configure the Profile Run options and click Next.
Configure the Session Setup options.
Click Finish.

Data Profiling

Steps To Create Custom Profile
We can create a custom profile from the
following Designer tools.
Source Analyzer. Click Sources >
Profiling > Create Custom Profile.
Mapplet Designer. Click Mapplets >
Profiling > Create Custom Profile.
Profile Manager. Click Profile >
Create Custom.
To create a custom profile, complete
the following steps.
Enter a data profile name and
optionally add a description.
Add sources to the data profile.
Add, edit, or delete a profile
function and enable session
configuration.
Configure profile functions.
Configure the profile session
if we enable session
configuration.

Profile Manager
Profile Manager is a tool that helps to
manage data profiles. It is used to set default
data profile options, work with data profiles in
the repository, run profile sessions, view
profile results, and view sources and
mapplets with at least one profile defined for
them. When we launch the Profile Manager,
we can access profile information for the
open folders in the repository.
There are two views in the Profile Manager

Profile View: The Profile View tab


displays the data profiles in the open
folders in the repository.
Source View: The Source View tab
displays the source definitions in the
open folders in the repository for which
we have defined data profiles.

Profile View

Source View
Debugger Overview
We can debug a valid mapping to gain troubleshooting information about data and error
conditions.
The Debugger is used in the following situations.
Before we run a session

After we save a mapping, we can run some initial tests with a debug session
before we create and configure a session in the Workflow Manager.
After we run a session

If a session fails or if we receive unexpected results in the target, we can run the
Debugger against the session. we might also run the Debugger against a session if
we want to debug the mapping using the configured session properties.
Create breakpoints. Create breakpoints in a mapping where we want the Integration
Service to evaluate data and error conditions.
Configure the Debugger. Use the Debugger Wizard to configure the Debugger for the
mapping. Select the session type the Integration Service uses when it runs the Debugger.
Run the Debugger. Run the Debugger from within the Mapping Designer. When we run
the Debugger, the Designer connects to the Integration Service. The Integration Service
initializes the Debugger and runs the debugging session and workflow.
Monitor the Debugger. While we run the Debugger, we can monitor the target data,
transformation and mapplet output data, the debug log, and the session log.
Modify data and breakpoints. When the Debugger pauses, we can modify data and see
the effect on transformations, mapplets, and targets as the data moves through the
pipeline. we can also modify breakpoint information.
Debugger Overview
Create Breakpoints
Go to Mapping > Debugger > Edit Breakpoints.
Choose the instance name and the breakpoint
type.
Then click Add to add the breakpoints.
Give the condition for the data breakpoint
type.
Give the number of errors before we want to
stop.
Run The Debugger
Go to Mapping > Debugger > Start Debugger.
Click Next, then choose Create a Debug
Session, or otherwise choose an existing
session.
Click Next.

Debugger Overview
Choose the source and target connections
and click Next.
Click Next.

Debug Indicators


The End
