
1) veeru

IBM interview question


Suppose I have a flat file like this:
nelluri satya bsc

I want to write nelluri to one target, satya to another target, and bsc to a third
target.

I know we can achieve this using SUBSTR in an Expression and a Router with a length argument, but if the file contains multiple records, how would you approach this in another way? Jan 13
RamaSurReddy
Hi Veeru,
Write one stored procedure and call it as an unconnected Stored Procedure transformation inside your mapping (type Normal); call it three times, passing the string and the table name separately each time. Or use a SQL transformation.
Procedure logic: build the INSERT statement dynamically (table name as a parameter) and run it with EXECUTE IMMEDIATE.
Regards,
RSReddy
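For the multi-record case, the split can also be sketched outside Informatica in a single awk pass (file and target names below are made up for illustration):

```shell
# Hypothetical flat file: one record per line, three space-separated fields.
printf 'nelluri satya bsc\nkumar ravi mca\n' > source.txt

# One pass over the file; each field is redirected to its own target file,
# so any number of records is handled without per-record SUBSTR logic.
awk '{ print $1 > "target1.txt"; print $2 > "target2.txt"; print $3 > "target3.txt" }' source.txt
```

In a real workflow each targetN.txt would then feed its own session.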
********************************************************************************
2) veeru
very urgent
1. Suppose my source contains 100,000 records. I want to load records 1-10000 on day one, 10001-20000 on day two, and so on, up to 90001-100000 on day ten. How can we achieve this using Informatica? Let us take the normal DEPT table as an example.
2. How can we get a particular day's data from a source table without having any timestamp? Jan 13
RamaSurReddy
Hi Veeru,
1) Through Informatica alone it is not possible. Using a Unix script and the Informatica pmcmd command we can do it. High-level logic for the Unix script:
1) Get the number of records in the file, say n.
2) Divide that number by 10000, say s = n/10000.
3) start=1, end=10000
4) for i = 1 to s
{
sed -n "${start},${end}p" filename > newoutputfilename
todaydate = extract day from the date
start = i*10000 + 1
end = start + 9999
while todaydate = extract day from the date
{
sleep()
}
pmcmd startworkflow workflowname (Informatica workflow reading the sliced file directly)
}
2) Use TRUNC, for example: select count(*), trunc(insert_dt) from the table group by trunc(insert_dt)
Regards,
RSReddy
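The slicing step above can be sketched as a runnable fragment with a toy file and batch size (file names and the pmcmd call are placeholders):

```shell
# Toy source: 25 rows; in the question this would be 100,000 rows
# with batch=10000 and one slice loaded per day.
seq 1 25 > source.dat
batch=10

n=$(wc -l < source.dat)
start=1
i=1
while [ "$start" -le "$n" ]; do
  end=$((start + batch - 1))
  # sed -n extracts exactly one batch of lines into its own slice file
  sed -n "${start},${end}p" source.dat > "slice_${i}.dat"
  # pmcmd startworkflow <wf_name> would run here, once per day
  start=$((end + 1))
  i=$((i + 1))
done
```

With 25 rows and a batch of 10, this produces two full slices and a final slice of 5 rows.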
********************************************************************************
3) WIPRO interview questions
1. My source data is like this:
column1
a
a
b
b
b
In the target I want this:
column1 column2
a 1
a 2
b 1
b 2
b 3
2. We want to load the number of rows inserted and the number of rows updated into a table. How can we achieve this in Informatica?
RamaSurReddy
Hi All,
Use the Oracle analytic function ROW_NUMBER() OVER (PARTITION BY column1 ...) in the Source Qualifier query if your source is Oracle.
If it is a flat file, use a Sorter and Aggregator transformation combination.
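For the flat-file case, the per-group counter can also be sketched with awk (input assumed already sorted on column1):

```shell
# ++cnt[$1] keeps a separate running counter per key, which is the
# flat-file equivalent of ROW_NUMBER() OVER (PARTITION BY column1).
printf 'a\na\nb\nb\nb\n' | awk '{ print $1, ++cnt[$1] }'
```

This prints a 1, a 2, b 1, b 2, b 3, matching the expected target.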
********************************************************************************
5) ..::RAHUL::..
Field value insertion using a parameter file
I am facing a problem with parameterization. I have about 5-10 mappings, and in all of them the target tables have a common field PROID. I need to insert the value ID123 into PROID in all the tables using a parameter, that is, without hard-coding. Can anyone please tell me how this can be achieved with a single parameter file for all 10 mappings?
Thanks, Rahul 12/23/09
RamaSurReddy
Hi All,
Create one common parameter file for all your 10 mappings and use the same parameter in all of them: in each mapping, create one output port and assign the parameter value to that port.
Regards,
Suri
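One way to lay out such a shared parameter file (folder, workflow, and session names below are placeholders; the exact section headers depend on your own folder and workflow structure):

```
[MyFolder.WF:wf_load_1.ST:s_m_mapping_1]
$$PROID=ID123
[MyFolder.WF:wf_load_2.ST:s_m_mapping_2]
$$PROID=ID123
```

Each mapping then declares the mapping parameter $$PROID and assigns it to the output port feeding PROID.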
********************************************************************************
7) sandeep
Scenario to implement
I have source records coming in as a flat file:
EMPNO|PR|AMOUNT
123456|P|20
123456|R|6000
These need to be merged into a single row to match the target definition, as we report only one row per EMPNO. The target should produce one record per employee, as a flat file:
123456|P|20|R|6000
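No answer is recorded for this one; at the flat-file level the merge can be sketched with awk (input assumed grouped by EMPNO):

```shell
# For each EMPNO, append its PR|AMOUNT pairs onto one output row;
# a new EMPNO flushes the previous row.
printf '123456|P|20\n123456|R|6000\n' |
awk -F'|' '
$1 != prev { if (NR > 1) print row; row = $1; prev = $1 }
{ row = row "|" $2 "|" $3 }
END { if (NR > 0) print row }'
```

This prints 123456|P|20|R|6000, one row per employee.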
********************************************************************************
8)
mallika
scenario
The source table has 100 records. The first half (1 to 50) should go to target-1 and the remaining half (51 to 100) should go to target-2. How to achieve this? I tried it by hard-coding; how can I get the result without hard-coding?
RamaSurReddy
Hi All,
Here the source is an RDBMS, so follow the steps below (it also helps performance):
1) In the Source Qualifier: select rownum, <columns> from the table
2) Source post-SQL: select count(*)/2 into $$variable from the table ($$variable is an Informatica mapping variable)
3) Take a Router with one group plus the default group:
First group condition ---> rownum <= $$variable
Default group ---> everything else
4) Pass the groups to the individual targets.
Let me know if you need more info.
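For a flat-file source the same split is a two-liner; head and tail divide the file at the computed midpoint, with no hard-coded row count:

```shell
seq 1 100 > src.txt                             # toy source with 100 rows

half=$(( $(wc -l < src.txt) / 2 ))              # midpoint computed at run time
head -n "$half" src.txt          > target1.txt  # rows 1..50
tail -n +"$((half + 1))" src.txt > target2.txt  # rows 51..100
```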
********************************************************************************
8) Surya Keran
Need answer
Hi everyone,
I need help with the scenario below. Assume the mapping runs daily at 8 PM to capture the latest data from the sources and completes by 9 PM. After 9 PM, some changes take place in the source database. I used a mapping variable assigned to the current date, with a transformation condition of "greater than the current date", so the next session run captures data only from 12 AM of the next day. In this scenario, the data updated after 9 PM is missed. How can I get that missing data along with the latest data from the source?
Thanks in advance.
RamaSurReddy
Hi All,
For extracting data from a source system incrementally by date, we usually use two variables/parameters, say From_date and To_date. Use the logic below and you will never miss any data:
When the workflow starts: assign To_date the workflow start time (SESSSTARTTIME).
At the end of the workflow: swap the dates (From_date = To_date).
Example:
Run 1: From_date: 28-12-2009 08:08:12 PM
To_date: SESSSTARTTIME (29-12-2009 09:09:00 PM)
At the end of the run, swap the dates. Now:
From_date: 29-12-2009 09:09:00 PM
To_date: 29-12-2009 09:09:00 PM
Run 2: From_date: 29-12-2009 09:09:00 PM
To_date: SESSSTARTTIME (30-12-2009 09:09:00 PM)
At the end of the run, swap the dates. Now:
From_date: 30-12-2009 09:09:00 PM
To_date: 30-12-2009 09:09:00 PM
Let me know if you need more info.
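The window handoff can be sketched as a shell fragment (the extract query and parameter-file handling are placeholders; dates are fixed strings here so the swap is visible):

```shell
# State carried over from the previous run
FROM_DATE='2009-12-28 20:08:12'

# At workflow start: To_date takes the session start time
TO_DATE='2009-12-29 21:09:00'   # stand-in for SESSSTARTTIME

# The extract would use:
#   WHERE last_update > FROM_DATE AND last_update <= TO_DATE
# so rows changed after the previous cut-off are never skipped.

# End-of-run swap: the next run starts exactly where this one stopped
FROM_DATE="$TO_DATE"
echo "$FROM_DATE"
```

Because the windows share their boundary, nothing between two runs can fall through the gap.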
********************************************************************************
10) ashok~
Query
Hi,
I have an Oracle source and a flat file as a target. As soon as I update my target, I want to update my source (some field) for those records that were sent to the target. Please suggest how this can be done. Post-SQL? Or creating another flow? Hope my question is clear. 12/8/09
RamaSurReddy
Hi Ashok,
Here the source is Oracle and the target is a flat file. Post/Pre SQL is not possible when the target is a flat file, so you need to create a separate flow:
First flow ---> Source (Oracle) ---> Target (flat file)
Second flow ---> Source (flat file) --- SQ ---> Filter transformation (TRUNC(last_update_field) = TRUNC(SESSSTARTTIME)) ---> Update Strategy (DD_UPDATE) ---> Target (Oracle)
Enable the target load plan so the second flow runs after the first.
Regards,
Suri
********************************************************************************
10)
Divz.......
Please help me.
I am working in Informatica 8.6 with a flat file as source, and I need to load the data into a staging area first. In doing so I need to satisfy this requirement:
1) Check whether the input file has changed from the previous day; only load if it has changed.
The solution I got is that the file comparison needs to be done using a file-level checksum, not file size. Does anyone have any ideas?
RamaSurReddy
Hi All,
Create one Command task that calls a Unix shell script to do the validation and invoke the workflow. Logic for the Unix script:
1) Get the file's last modification date (e.g. with find -mtime or ls -l) and compare it with today's date.
1.1) If the date has changed, calculate the checksum; if the checksum matches the previous run's checksum, the content is unchanged, so exit from the script.
1.2) Else (the date differs and the checksum does not match, meaning the file really is new), invoke the workflow using the pmcmd command.
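The checksum comparison can be sketched like this (file names are made up; in practice the previous checksum would be persisted between runs):

```shell
# Yesterday's and today's files; identical content on purpose.
echo 'row1' > yesterday.dat
echo 'row1' > today.dat

old=$(md5sum yesterday.dat | cut -d' ' -f1)
new=$(md5sum today.dat     | cut -d' ' -f1)

if [ "$new" = "$old" ]; then
  echo unchanged            # skip the load
else
  echo changed              # pmcmd startworkflow <wf_name> here
fi
```

Comparing hashes rather than sizes catches files that changed content but kept the same length.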
********************************************************************************
11)
Prasant
need help
Hi,
I have two source files.
file 1
A B C
1 abc def
1 ijk lmn
1 123 opq

File 2
A E F
1 x y
1 a b
1 t p

The output should be:


A B C E F
1 abc def x y
1 ijk lmn a b
1 123 opq t p

without using a Joiner.

Please advise. 11/29/09 RamaSurReddy


Hi Prasanth,
Use two flows/target load plan and insert-else-update logic inside the mapping:
First flow (insert only): first source ---> target (insert the first file's 3 columns, leaving the remaining 2 columns null)
Second flow (update only): second source ---> same target (update the remaining null columns based on the key)
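At the flat-file level the same stitch needs no Joiner at all; paste glues the files row by row (rows assumed to be in matching order):

```shell
printf '1 abc def\n1 ijk lmn\n1 123 opq\n' > file1.txt
printf '1 x y\n1 a b\n1 t p\n'             > file2.txt

# Drop the duplicate key column A from file2, keeping only E and F...
cut -d' ' -f2-3 file2.txt > ef.txt
# ...then append those columns to each row of file1.
paste -d' ' file1.txt ef.txt
```

This prints the three merged A B C E F rows shown in the question.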
********************************************************************************
14)
RBS interview question
Hi,
My input is like this:
id version value
1 v1 x
1 v2 y
1 v3 z
2 x
I need output like this:
1 x y z
2 null null x

How can we achieve this in Informatica?


RamaSurReddy
Hi All,
To achieve the above requirement, do the following:
1) select * from the table order by the first column (in the Source Qualifier)
2) In an Expression transformation, create 3 output ports:
first output port: IIF(version = 'v1', value, NULL)
second output port: IIF(version = 'v2', value, NULL)
third output port: IIF(version = 'v3', value, NULL)
3) Now link the 3 output ports and the first column to the target (use insert-else-update logic based on the first column).
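The pivot itself can be sketched with awk (the second id's row is given an explicit v3 tag here, since the sample input leaves its version blank):

```shell
printf '1 v1 x\n1 v2 y\n1 v3 z\n2 v3 x\n' |
awk '
# g() returns the value stored for (id, version), or "null" if absent
function g(id, ver,   k) { k = id SUBSEP ver; return (k in v) ? v[k] : "null" }
{ v[$1 SUBSEP $2] = $3
  if (!($1 in seen)) { order[++n] = $1; seen[$1] = 1 } }
END { for (i = 1; i <= n; i++)
        print order[i], g(order[i], "v1"), g(order[i], "v2"), g(order[i], "v3") }'
```

This prints "1 x y z" and "2 null null x", matching the expected output.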
********************************************************************************
19)
satya
I have a source (for example, EMP) loaded into a target. Fine, but in the next session I want to load data from EMP10 (say), which has more records than EMP, without changing the mapping. How can we do that?
Answer: in the Mapping tab, source properties ---> source table name (if we give EMP10 there, it will work; I know that). But how can we declare it in a parameter file? I tried that too, but I am getting an error. Please correct me:
$table_name=emp20 -- in a parameter file (two.txt) in some location
I gave the parameter file name in the session properties as well, but the session failed. The error is "invalid character" (Oracle).

Can anybody help me, please?


S A T
Initially declare the mapping parameter; it should be prefixed with two dollar signs ($$), not one. And make sure your parameter file has valid content; I mean the structure should be perfect. One of the valid structures is:
[<folder_name>.WF:<wf_name>.ST:<session_name>]
$$table_name=<xyz>
Hope you'll get through this time.
RamaSurReddy
Hi All,
As per my understanding, the source and target table names here are dynamic Oracle tables. If they were flat files, a parameter file would help you, but here the source and target are relational. And there is one more catch: if a new column is added to the source in the future, your target will not reflect that column, and you would need to change your mapping. To resolve all these kinds of issues:
1) Create a dummy source and target in Informatica.
2) Create one procedure (use Oracle dynamic SQL and cursor concepts and build the logic according to your requirement).
********************************************************************************
13) Logic needed
Hi,
Can anyone suggest the logic to implement this? I need values to be rounded to a multiple of 25.
For example, 23 should be rounded to 25,
45 should be rounded to 50,
67 should be rounded to 75, and so on.
I am not sure of the maximum value in my source. Please help me with the logic without hard-coding the values.
Thanks
anu
RamaSuriReddy
Say n is your input:
a = n mod 25
if (a >= 13)
n = (n - a) + 25
else
n = n - a
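That mod-based rule as a runnable one-liner (remainders of 13 or more round up to the next multiple of 25, smaller ones round down):

```shell
printf '23\n45\n67\n30\n' |
awk '{ a = $1 % 25; r = (a >= 13) ? $1 - a + 25 : $1 - a; print r }'
```

This prints 25, 50, 75, 25; no value in the source needs to be hard-coded.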
********************************************************************************
15) Subhro Bikash
Help needed regarding the scheduler
I need to run a session continuously that gets triggered on an Event-Wait. But it stops after the first run, once it gets the trigger file. The trigger file gets deleted in the session's post-session command script. What I need is for the workflow to run continuously, and each time a new trigger file arrives the session should run again. Is this feasible through the Informatica scheduler?
Subhro Bikash
Thanks, all, for your inputs; I have been able to schedule the workflow. All I needed to do was use the session task after the Event-Wait and schedule the whole process to run continuously. So the process runs again after a successful run and waits for the trigger file. Previously I was using some session properties like "fail parent if task fails", which might have been the problem.
RamaSurReddy
Write one Command task and use pmcmd commands.
********************************************************************************
16) venu
How to achieve this through Informatica
Hi,
I have a simple mapping whose target is a CSV file. The data from the source file for one field is as follows:
001
002
003
If I load the above data into the target CSV, the first two prefixed zeros will be removed because of the default CSV file property. How can I overcome this through Informatica only? I feel we could overcome it with a simple Unix command (sed), but I want to do it through Informatica only.
Srinivas Kuncham
Hi Venu,
I don't think it is possible for a .csv file, but I can give a solution for .xls (Excel) files. Assume your field name is IN_CCODE. Then in an Expression transformation, create an output port (out_ccode, string 20) with the expression:
out_ccode = '=text(' || IN_CCODE || ',' || chr(34) || '000' || chr(34) || chr(41)
/* in Excel we have a function called TEXT(number, format) */
and connect it to the target field.
Note: the target field length should be more than 3 because you are passing the entire string. When this string reaches the Excel file, the function is evaluated and you will see the result as:
001
002
003
Let me know if I am wrong.
********************************************************************************
21)
How to change the source table name dynamically?
Use a mapping parameter.
********************************************************************************
22)
multiple parameter files in a session
Use a single parameter file to group parameter information for related sessions. When sessions are likely to use the same database connection or directory, you might want to include them in the same parameter file. When existing systems are upgraded, you can update information for all sessions by editing one parameter file.
Use pmcmd and multiple parameter files for sessions with regular cycles. When you change parameter values for a session in a cycle, reuse the same values on a regular basis. If you run a session against both the sales and marketing databases once a week, you might want to create separate parameter files for each regular session run. Then, instead of changing the parameter file in the session properties each time you run the session, use pmcmd to specify the parameter file to use when you start the session.
*********************************************************************
23) Hi everybody,
Can anyone tell me the solution for the question below? I need a solution in Informatica and Oracle.
NAME:
Apple
Apaa
mama
kaaa
aaaa

Assume the above is a table with a single column called NAME, and those are its records. The question: display how many times 'A' occurs in each record, in a separate column of a different table.
Ans:
Hi Surya,
select name, REGEXP_COUNT(name, 'a', 1, 'i') as "Count" from source_table
Fire the above query in your Source Qualifier transformation, then drag the two columns into your output table.
Mapping flow ---> Source (one column) ---> Source Qualifier ---> Target (2 columns: Name, Count)
REGEXP_COUNT is a function added in Oracle 11g. It gives the number of times a pattern occurs in a given string.
Example: input 'RAMASURIREDDY'; I want to find out how many times 'A' is repeated:
select REGEXP_COUNT('RAMASURIREDDY', 'A', 1, 'i') from dual -- returns 2
Here 'i' means ignore case.
Let me know if you need more info.
Regards,
RAMASURREDDY
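Outside the database, the same count can be sketched with grep: each case-insensitive match becomes one output line, which wc -l then counts (tr strips the padding some wc implementations add):

```shell
# count_a prints how many times 'a' (any case) occurs in its argument
count_a() { printf '%s' "$1" | grep -io 'a' | wc -l | tr -d ' '; }

count_a 'Apple'           # 1
count_a 'RAMASURIREDDY'   # 2
```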
********************************************************************************
25)
Hi all,
I have a query which I am illustrating through the example below:
SHOP_NAME, COUNTER_NO, SHOP_ADDRESS
SHOP_1, 1, ABC
SHOP_1, 2, ABC
SHOP_1, 3, ABC
SHOP_2, 1, DEF
SHOP_2, 2, DEF
SHOP_3, 1, GHI
I need COUNTER_NO in sequential order with respect to SHOP_NAME. Which transformation should I use, and how? I tried a Sequence Generator, but it gives a sequential number irrespective of any field (SHOP_NAME).
Ans:
Hi Kanhu,
Use the ROW_NUMBER() analytic function in Oracle, partitioned by SHOP_NAME. (Note the difference between RANK/DENSE_RANK, ROWNUM, and ROW_NUMBER().)
Regards,
RAMASUREDDY
********************************************************************************
26)
RAJESH
Need help to solve
In my source, the table data is like this, in two columns:
parent child
p1 p2
p2 p3
p3 p4
I need the result like this in the target table:
level1 level2 level3 level4
p1 p1 p1 p1
p1 p2 p2 p2
p1 p2 p3 p3
p1 p2 p3 p4

Please provide the logic to implement this in Informatica as soon as possible.


Thanks in advance!

ANS:
Hi Rajesh,
Interesting question...

If the data (the parent/child relationship) is dynamic, then through Informatica it is not possible to load the data into the target, because the target table's columns depend on the source data (the depth of the parent/child chain). Through a procedure we can achieve it.
Procedure logic (sketch):
1) Take the maximum level number from the source data; this is how many columns you want in the target:
select max(substr(child, 2)) into cnt from source;
2) Build the CREATE TABLE statement dynamically and run it with EXECUTE IMMEDIATE:
execute immediate 'drop table test'; -- drop the table if it already exists
dynamic_sql := 'create table test(';
for i in 1..cnt loop
dynamic_sql := dynamic_sql || 'level' || i || ' varchar2(10),';
end loop;
-- replace the trailing comma with ')', then: execute immediate dynamic_sql;
3) Insert the rows dynamically: in output row i, the value of column j is 'p' || least(i, j), so two nested loops over 1..cnt build each INSERT statement the same way; run each with execute immediate and commit.
Let me know if you need any other information.
Regards,
RAMASURREDDY
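The same flattening can be sketched at file level with awk, assuming a single linear parent-to-child chain; the value in row i, column j is simply the chain node at position min(i, j):

```shell
printf 'p1 p2\np2 p3\np3 p4\n' |
awk '
{ child[$1] = $2; ischild[$2] = 1; nodes[$1] = 1 }
END {
  # the root is the only parent that never appears as a child
  for (p in nodes) if (!(p in ischild)) root = p
  n = 0
  for (node = root; node != ""; node = child[node]) chain[++n] = node
  # row i, column j holds chain[min(i, j)]
  for (i = 1; i <= n; i++) {
    line = chain[1]
    for (j = 2; j <= n; j++) line = line " " chain[j <= i ? j : i]
    print line
  }
}'
```

This prints the four level rows from the question; the number of columns follows the chain depth automatically.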
********************************************************************************
25)
One-liner to drop consecutive duplicate fields from each line of a file:
$ awk '{ for (i = 1; i <= NF; i++) { if (prev != $i) printf("%s ", $i); prev = $i } printf "\n" }' file
********************************************************************************
scenario
id name
1 abc
2 def
3 ghi
1 abc
2 def
2 def
3 ghi
----------
Target
-----------
id name
1 abc
1 abc1
2 def
2 def1
2 def2
3 ghi
3 ghi1
How to achieve this logic?
Hi Charan,
select id, name || decode(row_number() over (partition by id order by id), 1, null,
row_number() over (partition by id order by id) - 1) from the table
Use the above query in the Source Qualifier and load the records into your target.
Regards,
RSReddy
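The decode/row_number trick maps onto a flat file as a per-id counter whose first occurrence gets no suffix (input assumed sorted by id):

```shell
# n counts occurrences of each id; occurrence 1 keeps the bare name,
# occurrence k (k > 1) gets the suffix k-1, matching the SQL above.
printf '1 abc\n1 abc\n2 def\n2 def\n2 def\n3 ghi\n3 ghi\n' |
awk '{ n = ++cnt[$1]; print $1, $2 (n == 1 ? "" : n - 1) }'
```

This prints abc, abc1, def, def1, def2, ghi, ghi1 against their ids.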
