
Agent in ODI

1 - If I have an agent running as a "scheduler" agent, do I also have to set up a separate "listener" agent, or will the scheduler do both tasks?
2 - When setting up the ODIPARAMS.bat file for use in scheduling, you have to define a work repository. What happens if you have multiple work repositories (i.e. for dev, test and prod)? Or does it make no difference which work repository it is using (in which case, what is the point of specifying a work repository)?
1) A "scheduler" agent has all the functionality of a "listener" agent as well, so you don't need two.
2) Because you tie a scheduler agent to a specific repository, you should create separate scheduler agents for each of your different repositories. (Although not recommended, a scheduler agent can also act as a listener agent for tasks in any repository; the repository information is passed to the agent as part of the scenario invocation parameters.)
There are three ways an agent is used:
- a "listener" agent, which can "work for anybody": you pass it all parameters at invocation time;
- a "scheduler" agent, which takes its parameters from the odiparams file so it can connect to the repository and read its schedules;
- the "command line", where you invoke a scenario from the command line (startscen), in which case it also reads the odiparams file for the repository info (you don't want to have to pass all of that on the command line).
In the case where you want multiple scheduler agents, you set up an additional odiparams.bat/sh file for each, and a corresponding scheduleragent.bat/sh that calls the appropriate one.
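The per-repository setup above can be sketched as follows. This is only an illustration of the pattern, not ODI's actual scripts: the per-environment file names (odiparams_dev.sh etc.), the agent.sh launcher name and its -PORT parameter are assumptions made for the example.

```python
# Sketch: one odiparams file and one scheduler-agent launcher per work
# repository. A scheduler agent reads its repository connection from
# odiparams, so each environment gets its own launcher script.

# Hypothetical mapping from environment to its odiparams file.
ODIPARAMS = {
    "dev":  "odiparams_dev.sh",
    "test": "odiparams_test.sh",
    "prod": "odiparams_prod.sh",
}

def scheduler_agent_command(env: str, port: int) -> list[str]:
    """Build the command a per-environment scheduleragent wrapper would
    run: source the right odiparams, then start the agent (names are
    illustrative)."""
    params = ODIPARAMS[env]
    return ["sh", "-c", f". ./{params} && ./agent.sh -PORT={port}"]

print(scheduler_agent_command("dev", 20910))
```

A wrapper like this is why the odiparams file matters: the scheduler agent itself never receives the repository on its command line, it reads it from whichever odiparams file was sourced.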

Faster and simpler development and maintenance:


The declarative rules driven approach to data integration
greatly reduces the learning curve of the product and
increases developer productivity while facilitating ongoing
maintenance. This approach separates the definition of the
processes from their actual implementation, and separates
the declarative rules (the "what") from the data flows (the
"how").
Data quality firewall: Oracle Data Integrator ensures that
faulty data is automatically detected and recycled before
insertion in the target application. This is performed without
the need for programming, following the data integrity rules
and constraints defined both on the target application and in
Oracle Data Integrator.
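The detect-and-recycle pattern behind the "data quality firewall" can be sketched in a few lines of SQL. Faulty rows are diverted into an error table (ODI generates these as E$ tables via its check knowledge modules) before the clean rows reach the target. The sketch below uses Python's sqlite3 as a stand-in database; the table and column names are illustrative, not ODI-generated names.

```python
import sqlite3

# Rows violating the target's integrity rules (here, NOT NULL on name)
# are moved to an error table instead of being loaded into the target.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE I$_CUSTOMER (id INTEGER, name TEXT);       -- staging
    CREATE TABLE CUSTOMER    (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE E$_CUSTOMER (id INTEGER, name TEXT, err TEXT);
""")
con.executemany("INSERT INTO I$_CUSTOMER VALUES (?, ?)",
                [(1, "Ann"), (2, None), (3, "Cho")])

# Recycle faulty rows into the error table...
con.execute("""INSERT INTO E$_CUSTOMER
               SELECT id, name, 'NAME is null' FROM I$_CUSTOMER
               WHERE name IS NULL""")
# ...and load only the clean rows into the target, set-based.
con.execute("""INSERT INTO CUSTOMER
               SELECT id, name FROM I$_CUSTOMER WHERE name IS NOT NULL""")

print(con.execute("SELECT COUNT(*) FROM CUSTOMER").fetchone()[0])     # 2
print(con.execute("SELECT COUNT(*) FROM E$_CUSTOMER").fetchone()[0])  # 1
```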
Better execution performance: traditional data
integration software (ETL) is based on proprietary engines
that perform data transformations row by row, thus limiting
performance. By implementing an E-LT architecture, based
on your existing RDBMS engines and SQL, you are capable of
executing data transformations on the target server at a set-based level, giving you much higher performance.
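The set-based point can be made concrete with one statement. Instead of an external engine fetching and transforming rows one at a time, the whole transformation runs inside the target database as a single INSERT ... SELECT. Here sqlite3 stands in for the target RDBMS, and the table names and the 0.9 conversion factor are purely illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SRC_ORDERS (amount REAL)")
con.execute("CREATE TABLE TGT_ORDERS (amount_eur REAL)")
con.executemany("INSERT INTO SRC_ORDERS VALUES (?)",
                [(10.0,), (20.0,), (30.0,)])

# Row-by-row (ETL-engine style): fetch each row, transform it in the
# client, write it back -- one round-trip per row:
# for (amount,) in con.execute("SELECT amount FROM SRC_ORDERS"):
#     con.execute("INSERT INTO TGT_ORDERS VALUES (?)", (amount * 0.9,))

# Set-based (E-LT style): one SQL statement, executed by the target
# database itself, no rows travel through an external engine.
con.execute("INSERT INTO TGT_ORDERS SELECT amount * 0.9 FROM SRC_ORDERS")

print(con.execute("SELECT COUNT(*) FROM TGT_ORDERS").fetchone()[0])
```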
Simpler and more efficient architecture: the E-LT
architecture removes the need for an ETL Server sitting
between the sources and the target server. It utilizes the
source and target servers to perform complex
transformations, most of which happen in batch mode when
the server is not busy processing end-user queries.
Platform Independence: Oracle Data Integrator supports
all platforms, hardware and OSs with the same software.

Data Connectivity: Oracle Data Integrator supports all RDBMSs, including all leading Data Warehousing platforms such as Oracle, Exadata, Teradata, IBM DB2, Netezza and Sybase IQ, as well as numerous other technologies such as flat files, ERPs, LDAP and XML.
Cost-savings: the elimination of the ETL Server and ETL
engine reduces both the initial hardware and software
acquisition and maintenance costs. The reduced learning
curve and increased developer productivity significantly
reduce the overall labor costs of the project, as well as the
cost of ongoing enhancements.

Q) How many Work Repositories can be created in a project?

You can create multiple work repositories for a single master repository, so there is no specific limit. All work repositories use the information stored in the master repository (topology, agents etc.), while projects are stored in the work repository. As a result, you cannot access a project stored in work repository A from work repository B; you need to export/import the project into the other work repository.
Q) What is a remote file in ODI? And what is the purpose of the remote file?

A remote file is simply a file that resides on another system and is accessed remotely from your local system. So if ODI is installed on your local machine and you want to process those remote files, you have to use an agent running on the remote system.
I have 10 interfaces in a package. If I get an error at the 6th interface, how can I roll back all 5 previous interfaces?

I have 10 interfaces in a package. If I get an error at the 6th interface, how can I restart from the 6th interface?

For the 1st one: you have to perform the DML operations for all interfaces in one transaction and commit only at the end, that is, after the 10th interface.
For the 2nd one: you have to delete the "drop temp table" step from the LKM, CKM and IKM (it might be the last step of each knowledge module; just check it once on your side), so that whenever you restart the session all records will still be availableable in your temp tables. At the same time, you have to arrange the transaction for the INSERT ROWS step (i.e. I$ to Target Table) so that this transaction won't be re-executed.
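The first answer (one transaction, commit only after the 10th interface) can be sketched directly. Each "interface" below is simulated as a single INSERT, the 6th one fails, and the rollback undoes interfaces 1 through 5 as well. sqlite3 stands in for the target database; the table name is illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TARGET (step INTEGER)")

def run_interface(n: int) -> None:
    """Simulate one interface's DML; the 6th one raises an error."""
    if n == 6:
        raise RuntimeError("interface 6 failed")
    con.execute("INSERT INTO TARGET VALUES (?)", (n,))

try:
    for n in range(1, 11):
        run_interface(n)
    con.commit()        # commit only after the 10th interface
except RuntimeError:
    con.rollback()      # interfaces 1-5 are undone along with the failure

print(con.execute("SELECT COUNT(*) FROM TARGET").fetchone()[0])  # 0
```

Because nothing was committed before the failure, the target is left exactly as it was before the package started, which is the rollback behaviour the question asks for.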
