INFORMATICA INTERVIEW QUESTIONS
Tuesday, 22 December 2015
http://biinformaticainterviewquestions.blogspot.in/ Page 1 of 25
informatica interview questions and answers 16/05/2017, 10*04 AM
For example, each department has several employees. To find out how much salary we are paying per department, we first group the rows by department number and then apply the SUM() aggregate function.
Eg:-
Input:
ENO NAME SAL
100 RAVI 2000
100 RAVI 2000
101 VINAY 3000
Output:
ENO NAME SAL
100 RAVI 2000
101 VINAY 3000
a. Sum()
b. Max()
c. Min()
d. Avg()
e. First()
f. Last() etc
Eg:- SUM(IIF(CITY='HYDERABAD',SAL,0))
Eg:- MAX(SUM(SAL))
The above example returns the highest total salary paid by any single department. The inner function calculates how much salary we are paying per department.
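The nested-aggregate pattern MAX(SUM(SAL)) can be sketched in Python (an illustrative analogy only, not Informatica code; the function name and sample data are hypothetical): sum salaries per department first, then take the maximum of those totals.

```python
from collections import defaultdict

def max_dept_salary(rows):
    """rows: list of (dno, sal) tuples. Returns the highest total salary
    paid by any single department -- the MAX(SUM(SAL)) pattern."""
    totals = defaultdict(int)
    for dno, sal in rows:          # inner SUM(SAL), grouped by DNO
        totals[dno] += sal
    return max(totals.values())    # outer MAX over the per-department sums

rows = [(10, 2000), (10, 3000), (20, 4000)]
print(max_dept_salary(rows))  # dept 10 totals 5000, dept 20 totals 4000 -> 5000
```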
All of the above is demonstrated with examples in the video below:
https://www.youtube.com/watch?v=SS0-XdluF1Q
a. Sorted Input
b. Incremental Aggregation
The Sorted Input option is available under the Properties tab of the Aggregator transformation. When you select this option, you must send sorted data to the Aggregator: sort on the group-by ports before the data reaches it. For example, if DNO is the group-by port, sort on DNO.
When sorted data is sent to the Aggregator, it performs the aggregate calculations immediately; it does not wait until all rows have entered. Informatica cannot read all rows at once - it reads block by block. If you do not select Sorted Input, the Aggregator waits until all rows have entered before aggregating.
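The benefit of Sorted Input can be sketched as a streaming aggregation in Python (a simplified analogy; function and data names are hypothetical): because rows arrive ordered by the group-by port, each group's total can be emitted as soon as the key changes, without caching the whole input.

```python
def sorted_aggregate(rows):
    """rows: (dno, sal) pairs already sorted by dno.
    Yields (dno, total) as soon as each group ends -- no full-input cache."""
    current_key, total = None, 0
    for dno, sal in rows:
        if current_key is not None and dno != current_key:
            yield current_key, total        # group finished: emit immediately
            total = 0
        current_key = dno
        total += sal
    if current_key is not None:
        yield current_key, total            # flush the last group

print(list(sorted_aggregate([(10, 2000), (10, 3000), (20, 4000)])))
# [(10, 5000), (20, 4000)]
```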
Output: 10 5000
The next time, when you use incremental aggregation, it computes 5000 + 4000 instead of adding all the employee salaries again (2000 + 3000 + 4000).
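Incremental aggregation can be sketched like this (a simplified Python analogy of the cache behaviour; names are hypothetical): the previous run's totals act as the aggregate cache, and only the newly arrived rows are added to them.

```python
def incremental_aggregate(cache, new_rows):
    """cache: {dno: total} carried over from the previous run (the aggregate
    cache). new_rows: (dno, sal) pairs that arrived since that run."""
    for dno, sal in new_rows:
        cache[dno] = cache.get(dno, 0) + sal   # add only the delta, not all rows
    return cache

cache = {10: 5000}                             # previous run: 2000 + 3000
print(incremental_aggregate(cache, [(10, 4000)]))  # {10: 9000}, i.e. 5000 + 4000
```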
25. What happens when the cache files for incremental aggregation are deleted?
When the cache files are not available on the Informatica server, the session will not fail; it recreates the cache automatically. But the run takes more time than a normal run, because recreating the cache takes some time.
26. Can we select both sorted input and incremental aggregation at the same time?
No, we cannot. With Sorted Input, the calculations are performed in memory; with incremental aggregation, the calculations are performed on the existing cache values.
All cache files are stored by default under the server/infa_shared/cache directory. You can change this default by assigning some other directory. Cache files are deleted automatically after the session succeeds; no manual intervention is needed.
If the session fails, the cache files are not deleted and we must delete them manually: connect to the Informatica server, go to the cache directory, and apply the rm command in Unix/Linux.
By default you get the last record of each group from the Aggregator; to get the first record, use the FIRST() function on each port. To get both: Source, Source Qualifier, then two Aggregators - one Aggregator returns the last row by default and connects to one target instance, while the other selects FIRST() on each port and connects to the other target instance.
There are two types of transformations in the Informatica PowerCenter tool: active and passive.
Examples of active transformations are: Aggregator, Filter, Router, Rank, Sorter, Normalizer, Joiner, Update Strategy, Source Qualifier, etc.
We use the Expression transformation to perform calculations on each and every row, for example to calculate the tax and net salary for each and every employee.
For example, if you want to know who draws the highest salary in each department, we use the MAX() aggregate function.
See the link below for more information on this question (definition, tools, and the different types of jobs on ETL tools):
https://youtu.be/3-kZaJOtcfs
We can eliminate duplicate records using the Distinct option under the Properties tab of the Sorter transformation. For example, if you have 5 rows in the source and 2 rows are duplicates, then 4 rows come out as output.
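The effect of the Distinct option can be sketched in Python (an illustration with made-up sample data, not the tool's actual implementation): sort the rows so that duplicates become adjacent, then keep one copy of each.

```python
def distinct_rows(rows):
    """Mimics a Sorter with Distinct: sort on all ports, drop exact duplicates."""
    result = []
    for row in sorted(rows):
        if not result or row != result[-1]:   # duplicates are adjacent after sort
            result.append(row)
    return result

rows = [(100, 'RAVI', 2000), (100, 'RAVI', 2000), (101, 'VINAY', 3000),
        (102, 'KIRAN', 2500), (103, 'SITA', 4000)]
print(len(distinct_rows(rows)))   # 5 rows in, 2 are duplicates -> 4 rows out
```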
40. Can you sort data on more than one port in the Sorter transformation?
Yes, we can sort data on more than one port; the default sort precedence is top to bottom. You can sort one port ascending and another descending. You can also move the ports up and down if you expect a different sort order.
There are two ways to eliminate duplicate rows coming from the source. If your source is relational, you can use either the first or the second option below; performance-wise the first option is the best. If your source is files, you must go with the second option only.
42. What types of caches exist for the Sorter transformation?
In general, transformations that use caches internally have two of them: a data cache and an index cache. The Sorter transformation has only one cache, the data cache, which contains the output ports of the Sorter transformation.
There are four types of joins in the Joiner transformation. To use a Joiner transformation you need at least two sources; the sources may be tables, files, or a combination of both.
1. Normal Join
2. Master Outer Join
3. Detail Outer Join
4. Full Outer Join
We can join only two sources at a time: one is the master and the other is the detail. For performance, we take the source with fewer rows as the master and the one with many rows as the detail; even if you take them in the reverse direction, the output is the same. Internally, the cache is built for the master source.
Two Joiner transformations are required to join 3 tables, since you can join only two tables at a time. First connect two tables to a Joiner; then connect the third table and the first Joiner's output to a second Joiner. In general, (n-1) Joiner transformations are required to join n tables in the Informatica PowerCenter tool.
Normal Join: matching rows from both sources come as output. You need at least one common column to join the sources.
Master Outer Join: matching rows from both sources plus the non-matching rows from the detail source (normal join + non-matching detail rows).
Detail Outer Join: matching rows from both sources plus the non-matching rows from the master source (normal join + non-matching master rows).
Full Outer Join: matching rows from both sources plus the non-matching rows from both the master and detail sources (normal join + non-matching rows from both sides).
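The four join types can be sketched in Python (a minimal illustration of the semantics, assuming a master cache keyed on the join column; names and sample data are hypothetical):

```python
def joiner(master, detail, key, join_type="normal"):
    """master, detail: lists of dicts; key: join column name.
    Sketch of the four Joiner join types; a cache is built on the master."""
    cache = {}                                  # master cache, keyed by join column
    for m in master:
        cache.setdefault(m[key], []).append(m)
    out, matched_keys = [], set()
    for d in detail:
        if d[key] in cache:
            for m in cache[d[key]]:
                out.append({**m, **d})          # matching rows (all join types)
            matched_keys.add(d[key])
        elif join_type in ("master_outer", "full_outer"):
            out.append(d)                       # non-matching detail rows kept
    if join_type in ("detail_outer", "full_outer"):
        for k, ms in cache.items():
            if k not in matched_keys:
                out.extend(ms)                  # non-matching master rows kept
    return out

master = [{"dno": 10, "dname": "HR"}]
detail = [{"dno": 10, "ename": "RAVI"}, {"dno": 20, "ename": "VINAY"}]
print(len(joiner(master, detail, "dno", "normal")))        # 1: the matching row
print(len(joiner(master, detail, "dno", "master_outer")))  # 2: match + unmatched detail
```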
The index cache contains the join-condition columns; the join condition might be on a single column or on multiple columns. The data cache contains the output ports of the Joiner transformation.
51. Can you assign more than one port as the rank port in the Rank transformation?
No, we can assign only one port as the rank port. Double-click the Rank transformation and select the Ports tab; there you can select the rank port. If you try to select more than one port, only the last port you selected remains checked - the rest are unchecked.
52. What is the maximum number of ranks you can give in the Rank transformation?
If you try to give more than this maximum number, it throws an error.
Input/output ports
Input Ports
Variable Ports
Output Ports
55. Can you update target table data without using an Update Strategy transformation?
Yes, we can update target table data without an Update Strategy transformation, in two ways:
1. Use the Update Override option available on the target side; double-click the target instance.
2. At the session level, select the 'Update as Update' option and set 'Treat Source Rows As' to Update. For this, a key column must be defined in the target.
SCD means Slowly Changing Dimensions: whenever there is a change in the source, what do we do in our DWH tables? All dimension tables in data warehousing projects use some SCD type.
SCD-I: Maintains only current information. If a new record comes from the source we insert it; otherwise we update the existing record.
SCD-II: Maintains complete history. If a new record comes from the source, or a source row arrives with changes, we insert it as a new record; on the old record we only update the flag, version, or record_end_dt column.
SCD-III: Maintains only partial history (current and previous). We keep two columns for this, e.g. Curr_Sal and Previous_Sal. For a new employee we assign a value to the Curr_Sal column and leave Previous_Sal null.
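The SCD-II insert/expire logic described above can be sketched in Python (a minimal illustration assuming a 'current_flag' column for versioning; function and column names are hypothetical):

```python
def apply_scd2(dimension, incoming, key, tracked_cols):
    """dimension: list of dict rows carrying a 'current_flag'; incoming: source
    rows. SCD-II sketch: insert new/changed rows, expire the old version."""
    for src in incoming:
        current = next((r for r in dimension
                        if r[key] == src[key] and r["current_flag"] == "Y"), None)
        if current is None:
            dimension.append({**src, "current_flag": "Y"})   # brand-new key: insert
        elif any(current[c] != src[c] for c in tracked_cols):
            current["current_flag"] = "N"                    # expire the old version
            dimension.append({**src, "current_flag": "Y"})   # insert the new version
    return dimension

dim = [{"eno": 100, "sal": 2000, "current_flag": "Y"}]
apply_scd2(dim, [{"eno": 100, "sal": 2500}], "eno", ["sal"])
print(len(dim))   # 2 rows: the expired version and the new current version
```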
b. A connected lookup can return multiple ports; an unconnected lookup can return only one return port.
c. A connected lookup supports user-defined default values; an unconnected lookup does not support user-defined default values.
If you want to return multiple ports, choose a connected lookup. If you want to return only one port, you can choose either a connected or an unconnected lookup. If you want to call the same lookup multiple times with different inputs for a single row, choose an unconnected lookup.
Static Cache : The cache values won't change during session run
Dynamic Cache
Persistent Cache
A dynamic cache is a cache whose data you can modify while loading data into your target table. Generally we go for a dynamic cache to filter out all but one record when the source has duplicates and the target has key columns: to avoid a failure on the target side, the dynamic cache lets us load only one record into the target.
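That use of a dynamic cache can be sketched in Python (an analogy only; names and sample data are hypothetical): because the cache is updated during the load itself, repeated keys are recognised and filtered out.

```python
def load_with_dynamic_cache(source_rows, key):
    """Sketch of a dynamic lookup cache used to load only one row per key:
    the cache is updated as rows are processed, so repeats are filtered out."""
    cache, target = set(), []
    for row in source_rows:
        if row[key] not in cache:       # analogous to NewLookupRow = insert
            cache.add(row[key])         # cache modified during the load itself
            target.append(row)          # only the first occurrence reaches target
    return target

rows = [{"eno": 100, "sal": 2000}, {"eno": 100, "sal": 2000}, {"eno": 101, "sal": 3000}]
print(len(load_with_dynamic_cache(rows, "eno")))   # 2 rows loaded, duplicate dropped
```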
A persistent cache is a fixed cache: once you build it, you can use the same cache in multiple mappings. Generally we use a persistent cache when the same lookup data is used in multiple mappings; instead of building the same cache several times, we build it once and reuse it. We assign a name to the first lookup's cache and use that name for all the other lookup caches.
The NewLookupRow port comes into the picture when we use a dynamic cache. This port is added automatically when you select the dynamic cache option in the Lookup transformation, and it returns the following values.
The associated port also appears automatically when you select a dynamic cache in the Lookup transformation. It indicates which value should be used to update the data in the cache.
Using the Normalizer transformation, you can convert columns to rows. After you add the ports on the Normalizer tab, you need to select the Occurs value: to convert 4 columns into 4 rows, set Occurs to 4.
When you drag a COBOL source from the navigation bar into the Mapping Designer tool, a Normalizer comes in automatically as the source qualifier.
68. What is the difference between GC_ID and GK_ID in the Normalizer when you define a port with an Occurs value?
GC_ID and GK_ID appear automatically for a port when you select an Occurs value for it. GC_ID means Generated Column ID; it gives a repeating sequence of values across the rows based on the Occurs value. With Occurs = 3, it repeats the sequence 1,2,3, then 1,2,3 again, and so on.
GK_ID means Generated Key; it gives a continuous sequence 1,2,3,4,5,6, etc.
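The columns-to-rows behaviour, with the two generated IDs, can be sketched in Python (purely illustrative; column names are hypothetical):

```python
def normalize(rows, occurs_cols):
    """Sketch of the Normalizer: occurs_cols lists the repeating columns
    (Occurs = len(occurs_cols)). Emits one output row per repeating column,
    with GC_ID cycling 1..Occurs and GK_ID as a continuous sequence."""
    out, gk = [], 0
    for row in rows:
        for gc, col in enumerate(occurs_cols, start=1):
            gk += 1
            out.append({"value": row[col], "GC_ID": gc, "GK_ID": gk})
    return out

rows = [{"q1": 10, "q2": 20, "q3": 30}, {"q1": 40, "q2": 50, "q3": 60}]
result = normalize(rows, ["q1", "q2", "q3"])
print([r["GC_ID"] for r in result])  # [1, 2, 3, 1, 2, 3]  (repeats per Occurs)
print([r["GK_ID"] for r in result])  # [1, 2, 3, 4, 5, 6]  (continuous sequence)
```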
69. What is the priority in the Source Qualifier if we give both a filter condition (dno=10) and a SQL override (dno=20)?
If you double-click the Source Qualifier, you can see both properties, the filter condition and the SQL override. The SQL override has the highest priority, so the condition dno=20 is applied. If you do not provide a SQL override, the value from the filter condition is used.
70. Can we connect more than one source to a single source qualifier?
Yes. When you drag multiple sources in, each source gets its own Source Qualifier; manually delete all the Source Qualifiers except one, and then connect the ports of all the other sources to the remaining Source Qualifier.
71. What difference have you observed between a source and a source qualifier?
The source contains the source database's data types, whereas the Source Qualifier contains Informatica data types. Informatica cannot understand source data types, so it converts them automatically to Informatica data types: for example, when you drag in an Oracle source, NUMBER is converted to integer and VARCHAR to string.
73. What is the difference between top down and bottom up approach?
https://youtu.be/3-kZaJOtcfs
The target load plan decides, when a mapping contains multiple flows, which flow's target table should load first. Generally we use this when one flow's target becomes the source for another flow. This option is available under the Mappings menu.
We can create reusable transformations using the Transformation Developer tool. Once you create a reusable transformation, it appears in the navigation bar under the Transformations folder.
If there is a change in the logic, you need not touch every mapping that uses the reusable transformation; change it in only one place, in the Transformation Developer tool, and the change automatically reflects in all of the mappings.
a) Active Mapplets
b) Passive Mapplets
If a mapplet contains at least one active transformation, it is called an active mapplet. Every mapplet contains one Mapplet Input and one Mapplet Output transformation.
a) Take the source with fewer rows as the master and the one with many rows as the detail
83. When you select the sorted input option in the Joiner, which ports do you need to sort?
The join-condition columns in both sources need to be sorted. You can sort those ports using Sorter transformations before the Joiner, or in the Source Qualifier itself. The ports can be sorted in either ascending or descending order.
84. What happens if you select the sorted input option but don't sort the data?
If you select the Sorted Input option and do not sort the source data, the session will fail.
c) Persistent Cache
Export the code as XML from one repository and import that XML into the other repository, or create a deployment group in one repository and copy that deployment group to the other repository.
An invalid mapping is shown with a red ball symbol. A mapping must be in a valid state to execute. For example, if you use a Joiner transformation in a mapping and do not give any join condition, the mapping becomes invalid.
The mapping parameter values are stored in a parameter file: a text file that contains a parameter header followed by parameters and their values.
[GLOBAL]
[session_name]
[workflow name]
[folder_name.WF:workflow_name.ST:session_name] etc.
The session will not fail; it uses the default value. In the session log we can verify whether the parameter took a value or not.
In the PowerCenter Designer tool, on the menu bar select Mappings, then Parameters and Variables. You can declare multiple parameters within a mapping.
Parameter values cannot change during a session run; variable values can change during a session run.
SetVariable()
SetMaxVariable()
SetMinVariable() etc
In the Workflow Manager tool, open the Workflow Designer, right-click the session, and select the View Persistent Values option.
Yes, we can reset the value of a variable: in the same View Persistent Values dialog, you can see a Reset option.
Using a parameter file, we can override the current value of a variable. We need to give the parameter file path at the session level or the workflow level.
Tracing level represents the amount of information written to the session log. You can set it at the transformation level or the session level. At the transformation level, double-click the transformation, select the Properties tab, and choose the tracing level there. At the session level, double-click the session, select Config Object, and set the Override Tracing option.
Terse: the least information of all tracing levels; logs just status information and errors.
Normal: logs initialization information, status information, and rows skipped due to transformation errors.
Verbose Initialization: Normal tracing plus the names of the data and index files.
Verbose Data: all information - status, initialization, errors, and column-by-column, row-by-row values.
We use Verbose Data when we want to debug our code; it gives more information than any other tracing level.
This option is available at the session level under Config Object. Its default value is 1024 bytes. We need to increase it when the length of a record in the flat file exceeds 1024 characters: if a row in the flat file is 2000 characters long, set the Line Sequential Buffer Length to 2000.
A flat file is a notepad or text file that contains organised data. There are two types of flat files:
a) Fixed Width: every field is defined with a fixed length. If a value exceeds the field length, the excess characters spill into the next field.
b) Delimited: every field is separated by some symbol. In real time we most often see comma- and pipe-delimited flat files.
Informatica can handle flat files with multiple delimiters; we can specify a list of delimiters one after another. For example:
100|ravi,2000|New York
Here we need to specify two delimiters, the , and | symbols, to handle this data.
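Multi-delimiter parsing can be sketched in Python (a simple analogy using a regular expression character class; the function name is hypothetical):

```python
import re

def split_multi_delim(line, delimiters=",|"):
    """Sketch of multi-delimiter parsing: split the line on any one of the
    listed delimiter symbols."""
    pattern = "[" + re.escape(delimiters) + "]"   # character class of delimiters
    return re.split(pattern, line)

print(split_multi_delim("100|ravi,2000|New York"))
# ['100', 'ravi', '2000', 'New York']
```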
When you drag a flat-file source into a mapping, a Source Qualifier comes in automatically along with the source. On the Source Qualifier's Properties tab, select the "Currently Processed File Name" option; you can then connect this port to the target table.
Set the property Number of Initial Rows to Skip to 1. You can set this at the session level or in the Source Analyzer tool.
Set the property Number of Initial Rows to Skip to 3, in the same place.
To skip a footer, read the footer value into the first port and use a Filter transformation to drop it.
https://youtu.be/3-kZaJOtcfs
111. How do you load data from multiple flat files at the same time?
Use the file list (indirect) method. At the session level, set the Source File Type to Indirect and give as the source file name a file that contains the list of all the other files.
112. What are the prerequisites to use the file list (indirect) method?
a) All files should have the same number of columns
b) All files should have the same delimiter
c) All files may or may not have a header
d) All files should have the same data types for the columns
113. What is the most frequently used row delimiter in a flat file?
If the value of a field exceeds the field's length, the excess characters are read into the next field.
115. In how many ways can you get the source structure of a flat file?
If the file already exists, select the Import from File option under the Source Analyzer tool. If the file does not exist, use the Create option under the Source Analyzer.
116. In how many ways can you get the target structure of a flat file?
If the file already exists, select the Import from File option under the Target Designer tool. If the file does not exist, use the Create option under the Target Designer.
117. How can you eliminate duplicate records from a flat file?
Use a Sorter transformation and select the Distinct option under its Properties tab.
The default value of Stop On Errors is zero: any error is written to the session log by default and the session still succeeds.
Set Stop On Errors to 1. This property is available under the session's Config Object.
Set Stop On Errors to 5, in the same place.
Task Developer
Worklet Designer
Workflow Designer
A session created in the Task Developer tool is automatically reusable and appears in the navigation bar under the Sessions folder.
Any task created in the Task Developer is reusable and can be used in any number of workflows. A task created in the Workflow Designer is non-reusable, i.e. restricted to that workflow only.
Session
Email
Command
Event Wait
Event Raise
Timer
Decision
Assignment
Control
Generally the E-mail task is used at the end of a workflow to send the business a mail about the status of today's data loads.
The Command task is used to execute Unix commands or to call a shell script.
The Event Wait task is used to wait for a file, checking at regular intervals whether it exists at a particular path. Generally it is the first task in a workflow, used to check whether the source system is ready.
The Timer task is used to delay the process until a particular time, at which the timer task completes. For example, if you have two sessions and after the first succeeds you need to wait 1 minute before starting the second, you can use a 1-minute timer between them.
The Decision task is used to avoid multiple link conditions. For example, if you want to run another task only after 10 sessions succeed, you can use a single Decision task after the 10 sessions.
135. When a session fails, how can you fail the workflow as well?
Check the property 'Fail Parent If This Task Fails'.
Double-click the task and check the property 'Disable This Task'.
Yes, we can convert a non-reusable task to a reusable one: double-click the task and select the Reusable check box in the top corner.
Parallel: from the Start icon in the workflow, we connect multiple tasks directly; they all run at the same time.
Sequential: from the Start icon in the workflow, we connect one task after another; they run sequentially.
You can give the parameter file name and path at either the session level or the workflow level. Sometimes we pass the parameter file name directly with the pmcmd command.
The default commit interval is 10000: once 10000 rows have reached the target, a commit is applied. We have the flexibility to increase or decrease the commit interval value.
142. Does the session log get overwritten when you run the session a second time?
Yes, by default the session log is overwritten on every run, unless you set the property to save logs by number of runs or by timestamp.
144. What might be the reasons your session failed with a 'table or view does not exist' error?
145. What happens to the third session if the second session fails, in a workflow that contains 3 sessions?
By default the third session executes even if the second session fails, and the workflow shows as succeeded, unless you set the 'Fail Parent If This Task Fails' option.
146. How do you make one task in a workflow run only after another task succeeds?
Connect the tasks within the workflow using links. Double-click a link to open a dialog box where you can set a condition such as status = SUCCEEDED; with this condition on the link, the second task starts only if the first task succeeds.
A domain is a collection of nodes and services. You configure the domain details at the time of Informatica server installation.
Repository Service: responsible for retrieving data from and inserting data into the repository database.
Integration Service: responsible for running workflows; it talks to the Repository Service.
When you connect to the repository with a user ID and password, the Service Manager is responsible for authentication and authorization. The Service Manager is one of the components of the PowerCenter tool.
https://www.youtube.com/watch?v=A5_U3P7K2o0
We can see the current value of a Sequence Generator transformation without running the session. From the navigation bar, disconnect from the folder and connect again, open the mapping, double-click the Sequence Generator transformation, and on the Properties tab you can see the current value.
The Reset option is available under the Properties tab of the Sequence Generator transformation. When Reset is selected, the sequence starts from the same value every time the session runs. In real time we set this option when the target table is truncate-and-load.
154. What happens if you connect the same sequence generator to two targets?
When you connect the same Sequence Generator transformation to two different targets, each target gets different values. For example, if it assigns 1,2,3,4 to the first target, it assigns 5,6,7,8 to the second.
155. What happens if you connect only the CURRVAL port from the sequence generator?
If you connect only the CURRVAL port, it assigns the same CURRVAL to all rows. The CURRVAL port is created automatically with the Sequence Generator; you cannot delete it from the transformation.
https://www.youtube.com/watch?v=A5_U3P7K2o0
The Union transformation does not eliminate duplicates; it is equivalent to the UNION ALL operation in Oracle. To remove duplicate rows, place a Sorter transformation after the Union and enable the Distinct option under the Sorter's Properties tab.
You can connect any number of sources to a Union transformation. All sources must have the same number of columns with the same data types. If one source has fewer columns, create a dummy port and use it when connecting to the Union transformation.
The Target Pre SQL option is available at the session level: double-click the session, select the Mappings tab, select the target on the left, and the option appears on the right. Use it to execute a SQL statement before loading the target; in real time we use it to drop indexes before loading data into the target table.
The Target Post SQL option is found in the same place. Use it to execute a SQL statement after loading the target; in real time we use it to recreate indexes after loading data into the target table.
There is a predefined option at the session level to send a mail when the session fails. Double-click the session, select the Components tab, and set the 'On Failure E-mail' property; you can select either a reusable or a non-reusable e-mail task.
We can fail a session explicitly by calling the ABORT() function in the mapping, for example from an Expression transformation. To fail a session when a date is invalid, create an output port in the Expression and set its value like this:
IIF(IS_DATE(EXP_DATE,'YYYYMMDD')!=1,ABORT("Invalid Date"))
A shortcut is a repository object created from a shared folder. You can create shortcuts for sources, targets, and mappings. In real time we use shortcuts when the same sources, targets, or mappings are used by multiple departments or lines of business.
When you click Stop, the session immediately stops reading but continues processing and writing the already-read data to the target, waiting until the commit is applied. Abort also stops reading and continues processing and writing for the commit, but if the commit does not happen within 60 seconds it kills the writer thread immediately.
167. Can you name 4 output files that Informatica generates while a session runs?
Session Log
Workflow Log
Error Log
Bad File
Yes, we can indirectly return multiple ports from an unconnected lookup: concatenate the ports into a single value and select that port as the return port. Once we get this value back, we can break it into multiple ports using an Expression transformation.
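The concatenate-then-split workaround can be sketched in Python (an illustration only; the separator, function names, and cache data are hypothetical):

```python
def lookup_concat(cache, key, sep="~"):
    """Sketch of the workaround: the lookup's single return port carries
    several values concatenated with a separator."""
    name, city = cache[key]
    return name + sep + city           # one return port, two values inside

def split_ports(value, sep="~"):
    """Downstream Expression logic: break the value back into separate ports."""
    return value.split(sep)

cache = {100: ("RAVI", "HYDERABAD")}
print(split_ports(lookup_concat(cache, 100)))   # ['RAVI', 'HYDERABAD']
```

The separator must be a character that cannot occur in the data, otherwise the split produces the wrong number of ports.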
This option is available at the session level. Double-click the session, select the Mappings tab, select the target, and on the right you can see the 'Target Load Type' option. The default is Bulk; you can also set it to Normal. With Bulk load, target write performance is higher: it internally calls the bulk utility and bypasses the redo log file.
The disadvantage of bulk load is that you cannot recover the data, because it writes only to the data file, not to the redo log file; to recover data, the data must also be available in the redo log file.
171. Can you use bulk load if indexes exist on the target table?
Bulk load is not supported if indexes are defined on the target table. To use bulk load, drop the indexes before the load and recreate them after loading data into the target table.
A code page represents the character set; a character set comprises alphabets, digits, and special characters. There are different code pages: ASCII, EBCDIC, UTF, etc. Source and target data fall under particular code pages.
We specify the code page while creating relational connections. If you want to write data into the database beyond plain alphabets, digits, and special characters - non-English characters, for example - then you need to change the code page accordingly, and your target database must also support that code page.
A surrogate key is an artificial key that does not come from the source. We generate surrogate key values using a Sequence Generator transformation.
Pushdown optimization means pushing the business logic into the database instead of handling it at the Informatica server level. When we use this option, Informatica internally generates SQL for the transformations in the mapping and fires it on the database.
PMCMD is a command-line utility used to run workflows from the command line; we generally use it in Unix shell scripts. Different options are available with this command; you can also pass the parameter file in the command itself.
A worklet is a group of reusable tasks. You can link multiple tasks within a worklet, either in parallel or sequentially. The tasks might be Session, Email, Command, Decision, Timer, Control, etc.
You create worklets in the Worklet Designer tool, available in the PowerCenter Workflow Manager. When you create a worklet you see a Start icon, similar to the workflow icon.
Worklets are useful when you want to execute the same set of tasks in multiple workflows
to fulfill a particular business requirement. If in future you want to add one more task,
you need not touch all the workflows; add it in only one place, under the Worklet Designer,
and the change automatically reflects in all workflows. This saves a lot of build time.
Yes, you can include one worklet in another worklet. Including one worklet in another
is known as nesting worklets. In the Worklet Designer tool, select the insert option and
then select the worklet you want to include.
We cannot run a worklet without a workflow, even though a Start symbol exists for the
worklet as well. To execute a worklet, you must place it in a workflow, just like the
other tasks in the workflow.
Yes, we can run a workflow without a Session task. The workflow can contain any other
tasks such as Email, Command, Timer, Control or Event Wait.
Throughput represents the number of rows read from the source per second and the number
of rows written to the target per second. It shows the number of bytes along with the
number of rows.
Using the Workflow Monitor we can view the load statistics: how many rows were read from
the source, how many rows were written, and the failure or success statuses. We can also
see the history of previous runs.
Gantt Chart View: This view displays workflow run details in chronological (timeline) format.
Task View: This view displays workflow run details in report format.
187. What is the difference between applied rows and affected rows?
Applied rows on the target side is the number of rows that reached the target, whereas
affected rows is the number of rows actually inserted, updated or deleted in the
target.
There are different ways to create ports. One way is to double-click the transformation,
go to the Ports tab and select the add-port icon; another option is to drag a port
directly from another transformation.
Using this option you can propagate attribute names, data types and sizes from one
transformation through the entire flow. Generally we use this option during project
enhancements.
In the Workflow Manager tool, select Connections in the menu bar, choose the database
type, then give the connection name, user id and password. Generally we create relational
connections only for databases.
The default condition for the Filter transformation is TRUE. If we connect a filter and
do not give any condition, whatever rows enter the filter move on to the next
transformation or target.
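The default-condition behaviour can be sketched in Python; the rows and the `SAL > 2500` condition are invented for illustration:

```python
rows = [{"eno": 100, "sal": 2000}, {"eno": 101, "sal": 3000}]

# Default filter condition TRUE: every row passes through
passed_default = [r for r in rows if True]

# An explicit condition, e.g. SAL > 2500: only matching rows survive
passed_filtered = [r for r in rows if r["sal"] > 2500]

print(len(passed_default), len(passed_filtered))  # 2 1
```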
Eg:- REPLACECHR(0,'abcd','ac','f')
Output: fbfd (REPLACECHR treats 'a' and 'c' as individual characters and replaces each
one with the new character)
Eg:- REPLACESTR(0,'abcd','ac','fx')
Output: abcd (REPLACESTR looks for the whole substring 'ac', which never occurs in 'abcd',
so nothing is replaced)
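The character-wise versus substring-wise difference can be emulated in Python (a sketch of the behaviour, not the actual Informatica functions):

```python
def replacechr(s, old_chars, new_char):
    # Like REPLACECHR: each character in old_chars is replaced individually
    return "".join(new_char if c in old_chars else c for c in s)

def replacestr(s, old, new):
    # Like REPLACESTR: only the whole contiguous substring is replaced
    return s.replace(old, new)

print(replacechr("abcd", "ac", "f"))   # fbfd
print(replacestr("abcd", "ac", "fx"))  # abcd ('ac' never occurs as a substring)
```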
We can remove spaces at the beginning and end of a string using the LTRIM and RTRIM
functions. You can apply both functions at the same time on a string:
LTRIM(RTRIM(string)).
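A quick Python emulation of the nested call (a sketch; Python's `lstrip`/`rstrip` play the role of LTRIM/RTRIM here):

```python
def ltrim(s):
    # Like LTRIM: strip leading spaces
    return s.lstrip(" ")

def rtrim(s):
    # Like RTRIM: strip trailing spaces
    return s.rstrip(" ")

print(repr(ltrim(rtrim("  RAVI  "))))  # 'RAVI'
```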
We can check whether a date is valid using the predefined function IS_DATE(). This function
takes two arguments: the first argument is the date and the second argument is the format of
the date. If it is a valid date it returns 1, else it returns 0.
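IS_DATE can be emulated with Python's `strptime`; this is a sketch, and the format mapping (Informatica's 'MM/DD/YYYY' becoming strptime's `%m/%d/%Y`) is an assumption for the example:

```python
from datetime import datetime

def is_date(value, fmt="%m/%d/%Y"):
    # Return 1 if value parses with the given format, else 0, like IS_DATE()
    try:
        datetime.strptime(value, fmt)
        return 1
    except ValueError:
        return 0

print(is_date("05/16/2017"), is_date("31/31/2017"))  # 1 0
```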
SYSDATE gives the current time on the node each time it is evaluated; you can observe it
changing while loading huge volumes of data. SESSSTARTTIME is a constant value during the
entire session run; it contains the time at which the session started.
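The constant-versus-moving distinction can be sketched in Python (an analogy, not Informatica itself):

```python
import time
from datetime import datetime

# Like SESSSTARTTIME: captured once, stays constant for the whole run
sess_start_time = datetime.now()

# Like SYSDATE: re-evaluated for every row processed
time.sleep(0.01)
row1_sysdate = datetime.now()
time.sleep(0.01)
row2_sysdate = datetime.now()

print(row1_sysdate != row2_sysdate)     # SYSDATE keeps moving
print(sess_start_time <= row1_sysdate)  # the start time never changes
```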
We can remove the new line character from a character column using the REPLACESTR()
function. In real time we face this kind of issue when a column value contains a large
amount of text.
REPLACESTR(1,STRING_PORT,CHR(10),CHR(13),'')
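The expression above replaces both line feed (CHR(10)) and carriage return (CHR(13)) with an empty string; a Python sketch of the same cleanup:

```python
def remove_newlines(s):
    # Like REPLACESTR(1, port, CHR(10), CHR(13), ''): strip LF and CR
    return s.replace("\n", "").replace("\r", "")

print(remove_newlines("line1\r\nline2\n"))  # line1line2
```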
History Load: First we load the entire data, from the beginning of the business till date,
into the DWH.
Incremental Load or Delta Load: Incremental data includes the daily (or sometimes weekly)
data loaded after the history load.
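A common way to pick up only the delta is to filter on a stored watermark timestamp from the previous run; the column names and dates below are invented for illustration:

```python
from datetime import datetime

# Watermark from the previous successful run (normally persisted somewhere)
last_run = datetime(2017, 5, 15)

source_rows = [
    {"eno": 100, "updated_at": datetime(2017, 5, 10)},  # already in the DWH
    {"eno": 101, "updated_at": datetime(2017, 5, 16)},  # changed since last run
]

# Delta load: keep only rows changed after the watermark
delta = [r for r in source_rows if r["updated_at"] > last_run]
print([r["eno"] for r in delta])  # [101]
```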
A data map is a file layout. Generally we create data maps when we work with EBCDIC files,
using the PowerExchange tool. A data map contains a record and a table.
When you go for an Informatica interview, interviewers test your skills on Informatica,
Database (Oracle), UNIX and Data Warehousing. You can demand a good package only when you
perform extraordinarily in the interview. So the interview is the key to both the job and
the package.
To get the complete set of questions, you have to make a payment. For more details on the
questions and answers, contact me:
https://www.youtube.com/watch?v=noWePDfRa9c
If you are already trained on Informatica and looking for a real-time project assignment,
contact me.
Docs:
Trainer: Venkat
Mobile: 91-8008829289
91-9886895594
E-mail: informaticatrainer.expertise@gmail.com
https://informaticaonlinetraing.blogspot.com
https://www.youtube.com/watch?v=W6e9kUmiIF0