The report would be fine if Impromptu chose the expected path, but the choice is not always right.
Eliminating looping joins prevents Impromptu from making the wrong choice. To eliminate a looping join, you can break any joins that are unnecessary. For example, if no report needs the join between table F and table D, breaking it leaves two simple paths:
A -> B -> C -> D
A -> E -> F
But if you need all the joins, use ALIAS tables to break the looping join. Add an alias table for table A and break the join between table A and table E:
A -> B -> C -> D
Alias A -> E -> F -> D
Both solutions could affect existing reports.
Title: Looped joins
Created: Nov 04, 1999
Applies To: Impromptu 2.0, 3.01, 3.03, 3.04, 3.5, 4.0, 5.0, 7.1
Problem Description
Under the Joins dialog on the Analyze tab, Impromptu states that a loop join is present. What does this mean and how can it be resolved?
Solution Description
A Loop Join occurs when there are multiple paths between database tables. An example of this is A joins to B
and B joins to C and C joins to A.
The proper definition of join strategies in an Impromptu catalog is crucial to the success of an ad-hoc reporting
environment. Impromptu shelters the user from having to know any of the technical information about the
database, including name, location, table and column names, and join strategies. The Impromptu
Administrator must be very thorough in their definition and testing of the join strategies. Impromptu provides
an ability to analyze the joins and determine any anomalies. The most common is the Loop Join.
The implications of the loop join are that there is no way to predetermine which of the various join paths will be
used by Impromptu when creating the SQL. SQL is dynamically generated for each report as it is created and
before it executes. For example, to create a report using columns from tables A and C, we could join from
A=>B=>C or directly from A=>C. In some cases, both of these joins would result in the same data being
retrieved. However, in other cases it may result in different data. Impromptu will always try to use the shortest
route in joining multiple tables. It will also try to use the tables that are already included in the query, rather
than including an additional table.
There is no hard and fast rule to resolving Loop Joins. There are four basic resolutions:
1. Break the join
2. Create alias tables with different join strategies
3. Use the join expression editor to specify the join
4. Modify SQL
Each of these resolutions is done for a different reason and may have some issues associated with it.
Determine the best resolution for your situation by analyzing the data with regards to the results required from
the join structure.
Example:
The join structure looks like this:
A=B
A=C
B=C
This is producing incorrect results. To resolve this issue, make table C an alias to remove the loop from the join structure; the data will then display correctly.
Correct Join Structure:
A=B
A = C alias
B=C
8. Using this query I am retrieving October data for all years from 01-10-2004 to 30-10-2007. I need to restrict this query to the current date and current year.
You have a function called 'extract' in Cognos.
Ex: extract(month, <the date field>). Used like this you get the month, so you can keep a filter to restrict the rows to October only; extract(year, ...) likewise restricts the year.
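A sketch of such a filter in plain SQL (the table orders and column order_date are hypothetical names; Cognos' expression editor offers the same extract function):
SELECT *
FROM orders
WHERE EXTRACT(MONTH FROM order_date) = 10
  AND EXTRACT(YEAR FROM order_date) = EXTRACT(YEAR FROM CURRENT_DATE);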
9.How to show the data reported horizontally:
(For example:)
employee   skill
1          a
1          b
1          c
2          d
2          e
2          f
Report result:
1   a  b  c
2   d  e  f
Assuming 3 records per grouped item:
1. Group on employee
2. Create a running count based on the skill field.
3. Create 3 calculated columns based on the count field.
Call them skill1, skill2, skill3:
if (count = 1) then (skill) else null
if (count = 2) then (skill) else null
if (count = 3) then (skill) else null
4. Create 3 more calculated columns using the maximum function. Call them maxskill1, maxskill2, maxskill3
maximum (skill1)
maximum (skill2)
maximum (skill3)
5. Group on employee, maxskill1, maxskill2, maxskill3.
6. Report employee, maxskill1, maxskill2, maxskill3. A SQL equivalent of this pivot is sketched below.
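For comparison, the same pivot can be sketched in standard SQL (hypothetical emp_skills table; ROW_NUMBER plays the role of the running count):
SELECT employee,
       MAX(CASE WHEN rn = 1 THEN skill END) AS skill1,
       MAX(CASE WHEN rn = 2 THEN skill END) AS skill2,
       MAX(CASE WHEN rn = 3 THEN skill END) AS skill3
FROM (SELECT employee, skill,
             ROW_NUMBER() OVER (PARTITION BY employee ORDER BY skill) AS rn
      FROM emp_skills) t
GROUP BY employee;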
10.How to pass multiple values from picklist prompt to sub report filter
The sub-report only includes the first value.
When the sub-report query runs, it checks for the first row in the Customer Name column and shows only
information for that customer. If you want a sub-report to show information for another row in the column,
place the main report in a form frame that shows only one row at a time. When you insert the sub-report into
the form frame as well, it changes as you click through the rows in the main report. For example, the main
and sub-report above are both in a form frame that shows only one row of the Customer Name column at a
time. Each time you scroll to another customer name, the sub-report shows only information for that customer.
11.How can I create a dynamic column name in Cognos
1. Create a calculated column which contains the information that the header is to contain, such as "Report for year 1999" (concatenated text plus date-to-string conversion and substring extraction).
2. Highlight the report, and then right-click.
3. Select Properties, and then click the Headers/Footers tab.
4. Clear the Column Title Header check box. This will remove the headers from your columns.
5. Reinsert the rest of the column headers; inserting text will work. For the dynamic column, from the Insert menu, click Data, select the calculated column you created, and insert it into the report.
Some tables have columns such as AIRPORT_NAME or CITY_NAME which are stated as the primary keys (according to the business users), but not only can these values change, indexing on a numerical value is probably better, so you could consider creating a surrogate key called, say, AIRPORT_ID. This would be internal to the system; as far as the client is concerned, you may display only the AIRPORT_NAME.
7.Importance of Surrogate Key in Data warehousing?
A surrogate key is a primary key for a dimension table. Its main importance is that it is independent of the underlying database, i.e. the surrogate key is not affected by the changes going on in the source database.
8.What is the flow of loading data into fact & dimensional tables?
Fact table - a table with a collection of foreign keys corresponding to the primary keys in the dimension tables; it consists of fields with numeric values.
Dimension table - a table with a unique primary key.
Load - data should first be loaded into the dimension tables. Based on the primary key values in the dimension tables, the data is then loaded into the fact table, as sketched below.
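A minimal Oracle-flavoured SQL sketch of that load order (the staging tables, dimension, fact, and sequence names are all hypothetical):
-- 1. Load the dimension first; the surrogate key is generated here
INSERT INTO dim_customer (customer_key, customer_id, customer_name)
SELECT customer_seq.NEXTVAL, customer_id, customer_name
FROM staging_customers;
-- 2. Then load the fact, resolving each business key to its surrogate key
INSERT INTO fact_sales (customer_key, sale_amount)
SELECT d.customer_key, s.sale_amount
FROM staging_sales s
JOIN dim_customer d ON d.customer_id = s.customer_id;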
9.What is a linked cube?
A cube can be stored on a single analysis server and then defined as a linked cube on other Analysis
servers. End users connected to any of these analysis servers can then access the cube. This arrangement
avoids the more costly alternative of storing and maintaining copies of a cube on multiple analysis servers.
Linked cubes can be connected using TCP/IP or HTTP. To end users, a linked cube looks like a regular cube.
10.What is meant by metadata in context of a Datawarehouse and how it is important?
Metadata is data about data. Examples of metadata include data element descriptions, data type descriptions, attribute/property descriptions, range/domain descriptions, and process/method descriptions. The repository environment encompasses all corporate metadata resources: database catalogs, data dictionaries, and navigation services. Metadata includes things like the name, length, valid values, and description of a data element. Metadata is stored in a data dictionary and repository. It insulates the data warehouse from changes in the schema of operational systems. Metadata synchronization is the process of consolidating, relating, and synchronizing data elements with the same or similar meaning from different systems; it joins these differing elements together in the data warehouse to allow for easier access.
11.Differentiate Primary Key and Partition Key?
A primary key is a combination of unique and not null; it can be a collection of key values, called a composite primary key. A partition key is just a part of the primary key. There are several methods of partitioning, such as Hash, DB2, and Random; when using hash partitioning we specify the partition key.
12.What are the possible data marts in Retail sales.?
Product information, sales information
13.What is degenerate dimension table?
In simple terms, a degenerate dimension is a column in the fact table that does not map to any dimension and is not a measure column either. For example, Invoice_no and Invoice_line_no in a fact table will be degenerate dimension columns, provided you don't have a dimension called Invoice.
14.What is the main difference between a schema in an RDBMS and schemas in a data warehouse?
RDBMS schema:
* Used for OLTP systems
* Traditional and old schema
* Normalized
* Difficult to understand and navigate
* Cannot easily solve extraction and complex problems
* Poorly modelled
DWH schema:
* Used for OLAP systems
* New generation schema
* De-normalized
* Easy to understand and navigate
In SCD Type 2, a new record with the new attributes is added to the dimension table. Historical fact table rows continue to reference the old dimension key with the old roll-up attribute; going forward, fact table rows will reference the new surrogate key with the new roll-up, thereby perfectly partitioning history.
In SCD Type 3, attributes are added to the dimension table to support two simultaneous roll-ups - perhaps the current product roll-up as well as the current version minus one, or the current version and the original.
23.What is a VLDB?
The perception of what constitutes a VLDB continues to grow. A one-terabyte database would normally be considered a VLDB.
24.What are non-additive facts?
Non-additive facts are facts that cannot be summed up over any of the dimensions present in the fact table. Examples: temperature, bill number, etc.
25.What are slowly changing dimensions?
If the data in the dimension table happens to change very rarely, it is called a slowly changing dimension.
Ex: changes to the name and address of a person, which happen rarely.
26.What does level of Granularity of a fact table signify
In simple terms, level of granularity defines the extent of detail. As an example, let us look at geographical
level of granularity. We may analyze data at the levels of COUNTRY, REGION, TERRITORY, CITY and
STREET. In this case, we say the highest level of granularity is STREET.
27.Which columns go to the fact table and which columns go to the dimension table?
The aggregation or calculated value columns go to the fact table, and the detailed information goes to the dimension table.
28.What is ODS?
ODS means Operational Data Store. It is used to store current data coming from transactional systems such as web applications, SAP, and MQ Series. Current data here means data covering a limited recent window; an ODS typically contains 30-90 days of data.
29.What is Normalization, First Normal Form, Second Normal Form, Third Normal Form?
Normalization: the process of decomposing tables to eliminate data redundancy is called normalization. An illustration follows the definitions.
1NF: the table should contain only scalar (atomic) values.
2NF: the table should be in 1NF + no partial functional dependencies.
3NF: the table should be in 2NF + no transitive dependencies.
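For illustration, a minimal SQL sketch (hypothetical tables) of removing a transitive dependency to reach 3NF:
-- Not in 3NF: customer_city depends on customer_id, not on the key order_id
CREATE TABLE orders_flat (
  order_id      INT PRIMARY KEY,
  customer_id   INT,
  customer_city VARCHAR(50)
);
-- In 3NF: the transitive dependency moves to its own table
CREATE TABLE customers (
  customer_id   INT PRIMARY KEY,
  customer_city VARCHAR(50)
);
CREATE TABLE orders (
  order_id    INT PRIMARY KEY,
  customer_id INT REFERENCES customers (customer_id)
);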
30.What is real time data-warehousing?
Real-time data warehousing is a combination of two things: 1) real-time activity and 2) data warehousing.
Real-time activity is activity that is happening right now. The activity could be anything such as the sale of
widgets. Once the activity is complete, there is data about it. Data warehousing captures business activity
data. Real-time data warehousing captures business activity data as it occurs. As soon as the business
activity is complete and there is data about it, the completed activity data flows into the data warehouse and
becomes available instantly. In other words, real-time data warehousing is a framework for deriving
information from data as the data becomes available.
31.What modeling tools are available in the market?
Modeling Tool      Vendor
Erwin              Computer Associates
ER/Studio          Embarcadero
Power Designer     Sybase
Oracle Designer    Oracle
32.What is a general purpose scheduling tool?
The general purpose of a scheduling tool is to run jobs such as cleansing and loading data at a specific given time.
33.What is a lookup table?
A reference table can otherwise be called a lookup table.
Informatica
1.What are Target Options on the Servers?
Target options for the File target type are FTP File, Loader, and MQ. There are no target options for the ERP target type. Target options for Relational targets are Insert, Update (as Update), Update (as Insert), Update (else Insert), Delete, and Truncate Table.
2.How do you identify existing rows of data in the target table using a lookup transformation?
You can identify existing rows of data using an unconnected Lookup transformation.
3.What is the Aggregator transformation?
The Aggregator transformation allows you to perform aggregate calculations, such as averages and sums.
The Aggregator transformation is unlike the Expression transformation, in that you can use the Aggregator
transformation to perform calculations on groups
4.What are various types of Aggregation?
Various types of aggregation are SUM, AVG, COUNT, MAX, MIN, FIRST, LAST, MEDIAN, PERCENTILE,
STDDEV, and VARIANCE.
5.What are 2 modes of data movement in Informatica Server?
The data movement mode depends on whether Informatica Server should process single byte or multi-byte
character data. This mode selection can affect the enforcement of code page relationships and code page
validation in the Informatica Client and Server.
a) Unicode - the server allows 2 bytes for each character and uses an additional byte for each non-ASCII character (such as Japanese characters)
b) ASCII - IS holds all data in a single byte
The IS data movement mode can be changed in the Informatica Server configuration parameters. This
comes into effect once you restart the Informatica Server.
6.What is Code Page Compatibility?
Compatibility between code pages is used for accurate data movement when the Informatica Sever runs in
the Unicode data movement mode. If the code pages are identical, then there will not be any data loss. One
code page can be a subset or superset of another. For accurate data movement, the target code page must
be a superset of the source code page.
Superset - a code page is a superset of another code page when it contains all the characters encoded in the other code page plus additional characters not contained in the other code page.
Subset - A code page is a subset of another code page when all characters in the code page are encoded in
the other code page.
7.What is a Code Page used for?
A code page is used to identify characters that might be in different languages. If you are importing Japanese data into a mapping, you must select the Japanese code page for the source data.
8.What is the Router transformation?
The Router transformation allows you to use a condition to test data. It is similar to the Filter transformation, but it allows the test to be done on one or more conditions. The Router transformation is used to load data into multiple targets depending on the test conditions.
9.What is Load Manager?
While running a workflow, the PowerCenter Server uses the Load Manager process and the Data Transformation Manager (DTM) process to run the workflow and carry out workflow tasks. When the PowerCenter Server runs a workflow, the Load Manager performs the following tasks:
1. Locks the workflow and reads workflow properties.
2. Reads the parameter file and expands workflow variables.
3. Creates the workflow log file.
4. Runs workflow tasks.
5. Distributes sessions to worker servers.
6. Starts the DTM to run sessions.
7. Runs sessions from master servers.
8. Sends post-session email if the DTM terminates abnormally.
When the PowerCenter Server runs a session, the DTM performs the following tasks:
1. Fetches session and mapping metadata from the repository.
2. Creates and expands session variables.
3. Creates the session log file.
4. Validates session code pages if data code page validation is enabled. Checks query
conversions if data code page validation is disabled.
5. Verifies connection object permissions.
6. Runs pre-session shell commands.
7. Runs pre-session stored procedures and SQL.
8. Creates and runs mapping, reader, writer, and transformation threads to extract,transform, and load data.
9. Runs post-session stored procedures and SQL.
10. Runs post-session shell commands.
11. Sends post-session email.
10.What is Data Transformation Manager?
After the load manager performs validations for the session, it creates the DTM process. The DTM process
is the second process associated with the session run. The primary purpose of the DTM process is to create
and manage threads that carry out the session tasks.
The DTM allocates process memory for the session and divides it into buffers; this is also known as buffer memory. It creates the main thread, which is called the master thread. The master thread creates and manages all other threads.
If we partition a session, the DTM creates a set of threads for each partition to allow concurrent processing. When the Informatica server writes messages to the session log, it includes the thread type and thread ID. The DTM creates the following types of threads:
Master thread - main thread of the DTM process; creates and manages all other threads.
Mapping thread - one thread per session; fetches session and mapping information.
Pre- and post-session threads - one thread each to perform pre- and post-session operations.
Reader thread - one thread for each partition for each source pipeline.
Writer thread - one thread for each partition, if a target exists in the source pipeline, to write to the target.
Transformation thread - one or more transformation threads for each partition.
11.What is Session and Batches?
Session - a session is a set of instructions that tells the Informatica Server how and when to move data from sources to targets. After creating the session, we can use either the Server Manager or the command line program pmcmd to start or stop the session.
Batches - a batch provides a way to group sessions for either serial or parallel execution by the Informatica Server. There are two types of batches:
Sequential - runs the sessions one after the other. Concurrent - runs the sessions at the same time.
12.What is a source qualifier?
When you add a relational or a flat file source definition to a mapping, you need to connect it to a Source
Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server reads when it
executes a session
13.Why do we use lookup transformations?
Lookup transformations can access data from relational tables that are not sources in the mapping. With a Lookup transformation, we can accomplish the following tasks:
Get a related value - get the Employee Name from the Employee table based on the Employee ID.
Perform a calculation.
Update slowly changing dimension tables - we can use an unconnected Lookup transformation to determine whether the records already exist in the target or not.
14.While importing the relational source definition from the database, what metadata of the source do you import?
Source name
Database location
Column names
Data types
Key constraints
15.How many ways can you update a relational source definition, and what are they?
Two ways
1. Edit the definition
2. Reimport the definition
16.Where should you place the flat file to import the flat file definition into the Designer?
There is no restriction on where to place the source file. From a performance point of view it is better to place the file in the server's local source folder (if you need the path, check the server properties available in the Workflow Manager). This doesn't mean we should not place it in any other folder, but if we place it in the server source folder, the source will be selected by default at session creation time.
17.To provide support for mainframe source data, which files are used as source definitions?
COBOL copybook files.
18.Which transformation do you need when using COBOL sources as source definitions?
The Normalizer transformation, which is used to normalize the data, since COBOL sources often consist of denormalized data.
19.How can you create or import a flat file definition into the Warehouse Designer?
You cannot create or import a flat file definition into the Warehouse Designer directly. Instead you must analyze the file in the Source Analyzer, then drag it into the Warehouse Designer. When you drag the flat file source definition into the Warehouse Designer workspace, the Warehouse Designer creates a relational target definition, not a file definition. If you want to load to a file, configure the session to write to a flat file. When the Informatica server runs the session, it creates and loads the flat file.
20.What is a mapplet?
A mapplet is a set of transformations that you build in the Mapplet Designer and can use in multiple mappings. For example, suppose we have several fact tables that require a series of dimension keys. We can create a mapplet containing a series of Lookup transformations to find each dimension key and use it in each fact table mapping, instead of creating the same lookup logic in each mapping. In short, a set of transformations whose logic can be reused.
21.What is a transformation?
A transformation is a repository object that passes data to the next stage (i.e. to the next transformation or target), with or without modifying the data.
22.What are the Designer tools for creating transformations?
Mapping designer
Transformation developer
Mapplet designer
23.What are the active and passive transformations?
Transformations can be active or passive. An active transformation can change the number of rows that
pass through it, such as a Filter transformation that removes rows that do not meet the filter condition.
A passive transformation does not change the number of rows that pass through it, such as an Expression
transformation that performs a calculation on data and passes all rows through the transformation.
24.What are connected and unconnected transformations?
An unconnected transformation is not connected to other transformations in the mapping. A connected transformation is connected to other transformations in the mapping.
25.How many ways can you create ports?
Two ways:
1. Drag the port from another transformation.
2. Click the Add button on the Ports tab.
35. In which conditions can we not use the Joiner transformation (limitations of the Joiner transformation)?
Both pipelines begin with the same original data source.
Both input pipelines originate from the same Source Qualifier transformation.
Both input pipelines originate from the same Normalizer transformation.
Both input pipelines originate from the same Joiner transformation.
Either input pipeline contains an Update Strategy transformation.
Either input pipeline contains a connected or unconnected Sequence Generator transformation.
36. What are the settings used to configure the Joiner transformation?
Master and detail source
Type of join
Condition of the join
37. What r the join types in joiner transformation?
Normal (Default)
Master outer
Detail outer
Full outer
38.What are the Joiner caches?
When a Joiner transformation occurs in a session, the Informatica Server reads all the records from the master source and builds index and data caches based on the master rows. After building the caches, the Joiner transformation reads records from the detail source and performs the joins.
39.What is the Lookup transformation?
Use a Lookup transformation in your mapping to look up data in a relational table, view, or synonym. The Informatica server queries the lookup table based on the lookup ports in the transformation. It compares the lookup transformation port values to the lookup table column values based on the lookup condition.
40. Why use the Lookup transformation?
To perform the following tasks:
Get a related value. For example, your source table includes the employee ID, but you want to include the employee name in your target table to make your summary data easier to read.
Perform a calculation. Many normalized tables include values used in a calculation, such as gross sales per invoice or sales tax, but not the calculated value (such as net sales).
Update slowly changing dimension tables. You can use a Lookup transformation to determine whether records already exist in the target.
41. What are the types of lookup?
Connected and unconnected.
42. Differences between connected and unconnected lookup?
Connected lookup:
- Receives input values directly from the pipeline.
- You can use a dynamic or static cache.
- The cache includes all lookup columns used in the mapping.
- Supports user-defined default values.
Unconnected lookup:
- Receives input values from the result of a :LKP expression in another transformation.
- You can use only a static cache.
- The cache includes all lookup/output ports in the lookup condition and the lookup/return port.
- Does not support user-defined default values.
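As an illustration, an unconnected lookup is called through an expression port; a minimal sketch, assuming a hypothetical Lookup transformation named lkp_EMP with EMP_ID in its condition and EMP_NAME as the return port:
-- output port in an Expression transformation:
:LKP.lkp_EMP(EMP_ID)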
43.What is meant by lookup caches?
The Informatica server builds a cache in memory when it processes the first row of data in a cached Lookup transformation. It allocates memory for the cache based on the amount you configure in the transformation or session properties. The Informatica server stores condition values in the index cache and output values in the data cache.
44. What are the types of lookup caches?
Persistent cache: you can save the lookup cache files and reuse them the next time the Informatica server processes a Lookup transformation configured to use the cache.
Recache from database: if the persistent cache is not synchronized with the lookup table, you can configure the Lookup transformation to rebuild the lookup cache.
Static cache: you can configure a static, or read-only, cache for any lookup table. By default the Informatica server creates a static cache. It caches the lookup table and lookup values in the cache for each row that comes into the transformation. When the lookup condition is true, the Informatica server does not update the cache while it processes the Lookup transformation.
Dynamic cache: if you want to cache the target table and insert new rows into the cache and the target, you can create a Lookup transformation that uses a dynamic cache. The Informatica server dynamically inserts data into the target table.
Shared cache: you can share the lookup cache between multiple transformations. You can share an unnamed cache between transformations in the same mapping.
45.Difference between static cache and dynamic cache
Static cache:
- You cannot insert or update the cache.
- The Informatica server returns a value from the lookup table or cache when the condition is true. When the condition is not true, it returns the default value for connected transformations and NULL for unconnected transformations.
Dynamic cache:
- You can insert rows into the cache as you pass them to the target.
- The Informatica server inserts rows into the cache when the condition is false. This indicates that the row is not in the cache or target table, and you can pass these rows to the target table.
46.Which transformation should we use to normalize COBOL and relational sources?
The Normalizer transformation. When you drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation automatically appears, creating input and output ports for every column in the source.
47.How does the Informatica server sort string values in the Rank transformation?
When the Informatica server runs in ASCII data movement mode, it sorts session data using a binary sort order. If you configure the session to use a binary sort order, the Informatica server calculates the binary value of each string and returns the specified number of rows with the highest binary values for the string.
48.What are the Rank caches?
During the session, the Informatica server compares an input row with the rows in the data cache. If the input row out-ranks a stored row, the Informatica server replaces the stored row with the input row. The Informatica server stores group information in an index cache and row data in a data cache.
49. What is the RANKINDEX in the Rank transformation?
The Designer automatically creates a RANKINDEX port for each Rank transformation. The Informatica Server uses the rank index port to store the ranking position for each record in a group. For example, if you create a Rank transformation that ranks the top 5 salespersons for each quarter, the rank index numbers the salespeople from 1 to 5.
50. What is the Router transformation?
A Router transformation is similar to a Filter transformation because both transformations allow you to use a
condition to test data. However, a Filter transformation tests data for one condition and drops the rows of
data that do not meet the condition. A Router transformation tests data for one or more conditions and gives
you the option to route rows of data that do not meet any of the conditions to a default output group.
If you need to test the same input data based on multiple conditions, use a Router Transformation in a
mapping instead of creating multiple Filter transformations to perform the same task.
78.How does the Informatica server increase session performance through partitioning the source?
For relational sources the Informatica server creates multiple connections, one for each partition of a single source, and extracts a separate range of data through each connection. The Informatica server reads multiple partitions of a single source concurrently. Similarly, for loading, the Informatica server creates multiple connections to the target and loads partitions of data concurrently.
For XML and file sources, the Informatica server reads multiple files concurrently. For loading the data, the Informatica server creates a separate file for each partition (of a source file). You can choose to merge the targets.
79.Why do you use repository connectivity?
Each time you edit or schedule a session, the Informatica server communicates directly with the repository to check whether or not the session and users are valid. All the metadata of sessions and mappings is stored in the repository.
80.What are the tasks that the Load Manager process performs?
Manages session and batch scheduling: when you start the Informatica server, the Load Manager launches and queries the repository for a list of sessions configured to run on the Informatica server. When you configure a session, the Load Manager maintains the list of sessions and session start times. When you start a session, the Load Manager fetches the session information from the repository to perform the validations and verifications prior to starting the DTM process.
Locking and reading the session: when the Informatica server starts a session, the Load Manager locks the session in the repository. Locking prevents you from starting the same session again and again.
Reading the parameter file: if the session uses a parameter file, the Load Manager reads the parameter file and verifies that the session-level parameters are declared in the file.
Verifies permissions and privileges: when the session starts, the Load Manager checks whether or not the user has the privileges to run the session.
Creating log files: the Load Manager creates a log file containing the status of the session.
81.What is the DTM process?
After the Load Manager performs validations for the session, it creates the DTM process. The DTM's purpose is to create and manage the threads that carry out the session tasks. It creates the master thread, and the master thread creates and manages all the other threads.
82.What are the different threads in the DTM process?
Master thread: creates and manages all other threads.
Mapping thread: one mapping thread is created for each session; it fetches session and mapping information.
Pre- and post-session threads: these are created to perform pre- and post-session operations.
Reader thread: one thread is created for each partition of a source; it reads data from the source.
Writer thread: created to load data to the target.
83.What are the data movement modes in Informatica?
The data movement mode determines how the Informatica server handles character data. You choose the data movement mode in the Informatica server configuration settings. Two data movement modes are available in Informatica:
ASCII mode
Unicode mode
84.What are the output files that the Informatica server creates during a session run?
Informatica server log: the Informatica server (on UNIX) creates a log for all status and error messages (default name: pm.server.log). It also creates an error log for error messages. These files are created in the Informatica home directory.
Session log file: the Informatica server creates a session log file for each session. It writes information about the session into the log file, such as the initialization process, the creation of SQL commands for reader and writer threads, errors encountered, and the load summary. The amount of detail in the session log file depends on the tracing level that you set.
Session detail file: this file contains load statistics for each target in the mapping. Session details include information such as the table name and the number of rows written or rejected. You can view this file by double-clicking on the session in the Monitor window.
Performance detail file: this file contains session performance details which help you determine where performance can be improved. To generate this file, select the performance detail option in the session property sheet.
Reject file: this file contains the rows of data that the writer does not write to targets.
Control file: the Informatica server creates a control file and a target file when you run a session that uses the external loader. The control file contains information about the target flat file, such as the data format and loading instructions for the external loader.
Post-session email: post-session email allows you to automatically communicate information about a session run to designated recipients. You can create two different messages: one if the session completed successfully, the other if the session fails.
Indicator file: if you use a flat file as a target, you can configure the Informatica server to create an indicator file. For each target row, the indicator file contains a number to indicate whether the row was marked for insert, update, delete, or reject.
Output file: if the session writes to a target file, the Informatica server creates the target file based on the file properties entered in the session property sheet.
Cache files: when the Informatica server creates a memory cache, it also creates cache files. The Informatica server creates index and data cache files for the following transformations:
Aggregator transformation
Joiner transformation
Rank transformation
Lookup transformation
85.In which circumstances does the Informatica server create reject files?
When it encounters DD_REJECT in an Update Strategy transformation, when a row violates a database constraint, or when a field in the row is truncated or overflows.
86.What is polling?
Polling displays the updated information about the session in the Monitor window. The Monitor window displays the status of each session when you poll the Informatica server.
87.Can you copy a session to a different folder or repository?
Yes. By using the Copy Session Wizard you can copy a session to a different folder or repository, but the target folder or repository should contain the mapping of that session. If the target folder or repository does not have the mapping of the session being copied, you have to copy that mapping first, before you copy the session.
88.What is a batch, and what are the types of batches?
A grouping of sessions is known as a batch. Batches are of two types:
Sequential: runs the sessions one after the other.
Concurrent: runs the sessions at the same time.
If you have sessions with source-target dependencies, you have to use a sequential batch to start the sessions one after another. If you have several independent sessions, you can use concurrent batches, which run all the sessions at the same time.
89.Can you copy batches?
No.
90.How many sessions can you create in a batch?
Any number of sessions.
91.When does the Informatica server mark a batch as failed?
If one of the sessions is configured to "run if previous completes" and that previous session fails.
Source definitions
Target definitions
Transformations
106.What is power center repository?
The Power Center repository allows you to share metadata across repositories to create a data mart
domain. In a data mart domain, you can create a single global repository to store metadata used across an
enterprise, and a number of local repositories to share the global metadata as needed.
107.What are the new features in Informatica 5.0?
You can debug your mapping in the Mapping Designer.
You can view the workspace over the entire screen.
The Designer displays a new icon for invalid mappings in the Navigator window.
You can use a dynamic lookup cache in a Lookup transformation.
You can create mapping parameters or mapping variables in a mapping or mapplet to make mappings more flexible.
You can export objects from the repository and import objects into the repository. When you export a repository object, the Designer or Server Manager creates an XML file to describe the repository metadata.
The Designer allows you to use the Router transformation to test data for multiple conditions. The Router transformation allows you to route groups of data to a transformation or target.
You can use XML data as a source or target.
Server enhancements:
You can use the command line program pmcmd to specify a parameter file to run sessions or batches. This allows you to change the values of session parameters, and mapping parameters and variables, at runtime.
If you run the Informatica Server on a symmetric multi-processing system, you can use multiple CPUs to process a session concurrently. You configure partitions in the session properties based on source qualifiers. The Informatica Server reads, transforms, and writes partitions of data in parallel for a single session. This is available for PowerCenter only.
The Informatica server creates two processes, the Load Manager process and the DTM process, to run the sessions.
Metadata Reporter: a web-based application which is used to run reports against repository metadata.
You can copy sessions across folders and repositories using the Copy Session Wizard in the Informatica Server Manager.
With new email variables, you can configure post-session email to include information such as the mapping used during the session.
108.What is incremental aggregation?
When using incremental aggregation, you apply captured changes in the source to aggregate calculations in
a session. If the source changes only incrementally and you can capture changes, you can configure the
session to process only those changes. This allows the Informatica Server to update your target
incrementally, rather than forcing it to process the entire source and recalculate the same calculations each
time you run the session.
109.What are the scheduling options to run a session?
You can schedule a session to run at a given time or interval, or you can manually run the session.
The different scheduling options are:
Run only on demand: the server runs the session only when the user starts the session explicitly.
Run once: the Informatica server runs the session only once at a specified date and time.
Run every: the Informatica server runs the session at regular intervals, as configured.
Customized repeat: the Informatica server runs the session at the dates and times specified in the Repeat dialog box.
If you want to:                                                       Use this mode:
Run a stored procedure before or after your session.                  Unconnected
Run a stored procedure once during your mapping, such as pre- or post-session.   Unconnected
Run a stored procedure every time a row passes through the Stored Procedure transformation.   Connected or Unconnected
Run a stored procedure based on data that passes through the mapping, such as when a specific port does not contain a null value.   Unconnected
Pass parameters to the stored procedure and receive a single output parameter.   Connected or Unconnected
Pass parameters to the stored procedure and receive multiple output parameters.   Connected or Unconnected
(Note: to get multiple output parameters from an unconnected Stored Procedure transformation, you must create variables for each output parameter. For details, see Calling a Stored Procedure From an Expression.)
Run nested stored procedures.                                         Unconnected
Call multiple times within a mapping.                                 Unconnected
141.In a filter expression we want to compare one date field with a DB2 system field CURRENT DATE. Our syntax: datefield = CURRENT DATE (we didn't define it by ports; it's a system field), but this is not valid (PMParser: Missing Operator).
The DB2 date format is "yyyymmdd", whereas SYSDATE in Oracle gives "dd-mm-yy", so conversion of the DB2 date format to the local database date format is compulsory; otherwise you will get that type of error.
142.Briefly explain the versioning concept in PowerCenter 7.1.
In PowerCenter 7.1, 9 tem servers are used (i.e., an addition in Lookup), but PowerCenter 6.x uses only 8 tem servers. 7.x also adds 5 transformations: 6.x has 17 transformations, while 7.x uses 22.
143.How do you join two tables without using the Joiner transformation?
It is possible to join two or more tables by using a Source Qualifier, provided the tables have a relationship. When you drag and drop the tables, you get a Source Qualifier for each table. Delete all the Source Qualifiers and add one common Source Qualifier for all of them. Right-click on the Source Qualifier and choose Edit; on the Properties tab you will find the SQL query, where you can write your own SQL, as sketched below.
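A sketch of such a SQL override, assuming hypothetical EMP and DEPT source tables sharing DEPT_ID:
SELECT e.EMP_ID, e.EMP_NAME, d.DEPT_NAME
FROM EMP e, DEPT d
WHERE e.DEPT_ID = d.DEPT_ID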
144.Can Informatica be used as a cleansing tool? If yes, give an example of transformations that can implement a data cleansing routine.
Yes, we can use Informatica for cleansing data. Sometimes we use dedicated stages for cleansing the data; it depends upon performance again, or else we can use an Expression transformation to cleanse data.
For example, if a field X has some values and others NULL, and it is assigned to a target field that is a NOT NULL column, inside an Expression we can assign a space or some constant value to avoid session failure. If the input data is in one format and the target is in another format, we can change the format in an Expression. We can also assign default values to the target to represent a complete set of data in the target.
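A minimal cleansing expression of that kind (hypothetical source port X feeding a NOT NULL target column):
IIF( ISNULL(X), ' ', X )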
145.How do you decide whether you need to do aggregations at database level or at Informatica level?
It depends upon the requirement. If you have a powerful database, you can create an aggregation table or view at the database level; otherwise it is better to use Informatica. Here is why you might still use Informatica: whatever the case, Informatica is a third-party tool, so it will take more time to process the aggregation compared to the database, but Informatica has an option called "incremental aggregation" which helps you update the current values with current values + new values, so there is no need to process the entire data set again and again. This holds as long as nobody deletes the cache files; if that happens, the total aggregation needs to be executed in Informatica again. Databases do not have an incremental aggregation facility.
146.How do we estimate the depth of the session scheduling queue? Where do we set the maximum number of concurrent sessions that Informatica can run at a given time?
147.How do we estimate the number of partitions that a mapping really requires? Is it dependent on the machine configuration?
It depends upon the Informatica version we are using. Informatica 6 supports only 32 partitions, whereas Informatica 7 supports 64 partitions.
148.Suppose a session is configured with a commit interval of 10,000 rows and the source has 50,000 rows. Explain the commit points for source-based commit and target-based commit. Assume appropriate values wherever required.
Source-based commit will commit the data into the target based on the commit interval; so, for every 10,000 rows it will commit into the target. Target-based commit will commit the data into the target based on the buffer size of the target, i.e. it commits the data into the target whenever the buffer fills. Let us assume that the buffer size is 6,000; then for every 6,000 rows it commits the data.
149.We are using an Update Strategy transformation in a mapping. How can we know whether the insert, update, reject, or delete option has been selected while the session runs in Informatica?
In the Designer, while creating the Update Strategy transformation, uncheck "Forward Rejected Rows". Any rejected rows will then automatically be written to the session log file.
Updates and inserts are known only by checking the target file or table.
Operation   Constant    Numeric value
Insert      DD_INSERT   0
Update      DD_UPDATE   1
Delete      DD_DELETE   2
Reject      DD_REJECT   3
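For instance, a typical Update Strategy expression (with a hypothetical port lkp_customer_key returned by a lookup on the target) flags unmatched rows for insert and matched rows for update:
IIF( ISNULL(lkp_customer_key), DD_INSERT, DD_UPDATE )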
150.What is the procedure to write a query to list the three highest salaries of employees?
SELECT sal
FROM (SELECT sal FROM my_table ORDER BY sal DESC)
WHERE ROWNUM < 4;
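If ties should be handled, a variant using the DENSE_RANK analytic function (same hypothetical my_table; requires Oracle 8i or later) returns the top three distinct salaries:
SELECT DISTINCT sal
FROM (SELECT sal, DENSE_RANK() OVER (ORDER BY sal DESC) AS rnk
      FROM my_table)
WHERE rnk <= 3;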
151.What is the limit to the number of sources and targets you can have in a mapping?
There is a formula:
no. of blocks = 0.9 * (DTM buffer size / block size) * no. of partitions
where no. of blocks must be at least (sources + targets) * 2.
Beyond that, the restriction is only on the database side: how many concurrent threads you are allowed to run on the database server.
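A worked example, assuming the historical defaults of a 12,000,000-byte DTM buffer and 64,000-byte buffer blocks, with one partition:
no. of blocks = 0.9 * (12,000,000 / 64,000) * 1 ≈ 168
so (sources + targets) * 2 must stay at or below about 168, i.e. roughly 84 sources and targets in the mapping.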
152.Which is better among connected lookup and unconnected lookup transformations in Informatica or any other ETL tool?
It is not an easy question to say which is better; it depends upon experience and upon the requirement.
Comparing the two: a connected lookup can return more values, while an unconnected lookup returns one value. A connected lookup is in the same pipeline as the source and accepts dynamic caching; an unconnected lookup doesn't have that facility, but in some special cases unconnected lookups are favorable, e.g. when the output of one lookup goes as the input of another lookup.
153.In dimensional modeling, is the fact table normalized or denormalized, in the case of a star schema and in the case of a snowflake schema?
In dimensional modeling - Star schema: a single fact table is surrounded by a group of dimension tables comprising de-normalized data. Snowflake schema: a single fact table is surrounded by a group of dimension tables comprising normalized data.
The star schema (sometimes referenced as a star join schema) is the simplest data warehouse schema, consisting of a single "fact table" with a compound primary key, with one segment for each "dimension" and with additional columns of additive, numeric facts. The star schema makes multi-dimensional database (MDDB) functionality possible using a traditional relational database. Because relational databases are the most common data management system in organizations today, implementing multi-dimensional views of data using a relational database is very appealing. Even if you are using a specific MDDB solution, its sources are likely relational databases. Another reason for using a star schema is its ease of understanding: fact tables in a star schema are mostly in third normal form (3NF), but dimension tables are in de-normalized second normal form (2NF). If you normalize the dimension tables, they look like snowflakes (see snowflake schema) and the same problems of relational databases arise - you need complex queries and business users cannot easily understand the meaning of the data. Although query performance may be improved by advanced DBMS technology and hardware, highly normalized tables make reporting difficult and applications complex.
The snowflake schema is a more complex data warehouse model than a star schema, and is a type of star schema. It is called a snowflake schema because the diagram of the schema resembles a snowflake. Snowflake schemas normalize dimensions to eliminate redundancy; that is, the dimension data is grouped into multiple tables instead of one large table. For example, a product dimension table in a star schema might be normalized into a products table, a product-category table, and a product-manufacturer table in a snowflake schema. While this saves space, it increases the number of dimension tables and requires more foreign key joins. The result is more complex queries and reduced query performance.
Star schema - de-normalized dimensions.
Snowflake schema - normalized dimensions.
154.What is the difference between the IIF and DECODE functions?
You can use nested IIF statements to test multiple conditions. The following example tests for various
conditions and returns 0 if sales is zero or negative:
IIF( SALES > 0, IIF( SALES < 50, SALARY1, IIF( SALES < 100, SALARY2, IIF( SALES < 200, SALARY3,
BONUS))), 0 )
You can use DECODE instead of IIF in many cases; DECODE may improve readability. The following shows how you can use DECODE instead of IIF (the search value TRUE makes DECODE evaluate each condition in turn):
DECODE( TRUE,
SALES > 0 and SALES < 50, SALARY1,
SALES > 49 AND SALES < 100, SALARY2,
SALES > 99 AND SALES < 200, SALARY3,
SALES > 199, BONUS)
155.What are variable ports, and what are two situations when they can be used?
We have mainly three kinds of ports: input, output, and variable. An input port means data is flowing into the transformation; an output port is used when data is mapped to the next transformation. A variable port is used to hold intermediate values, for example when a mathematical calculation must be reused across output ports, or to retain a value from the previous row for comparison.
156.How does the server recognise the source and target databases?
By using an ODBC connection if it is relational; if it is a flat file, an FTP connection. We can verify this from the connections in the session properties for both sources and targets.
157.How do you retrieve the records from a rejected file? Explain with syntax or an example.
There is a utility called "reject loader" with which we can find the rejected records, and we are able to refine and reload the rejected records.
158.How do you look up data on multiple tables?
Using a SQL override we can look up the data on multiple tables; see the Lookup transformation properties.
159.What is the procedure to load the fact table? Give details.
We use the two wizards, the Getting Started Wizard and the Slowly Changing Dimension Wizard, to load the fact and dimension tables. Using these two wizards we can create different types of mappings according to the business requirements and load into the star schema (fact and dimension tables).
160.What is the use of incremental aggregation? Explain briefly with an example.
It is a session option. When the Informatica server performs incremental aggregation, it passes new source data through the mapping and uses historical cache data to perform the new aggregation calculations incrementally. We use it for performance.
161.How do you delete duplicate rows in flat file sources? Is there any option in Informatica?
Use a Sorter transformation; it has a "Distinct" option, so make use of it.
162.How do you use mapping parameters, and what is their use?
Mapping parameters and variables make the use of mappings more flexible, avoid creating multiple mappings, and help in adding incremental data. Mapping parameters and variables have to be created in the Mapping Designer by choosing the menu option Mappings -> Parameters and Variables; enter the name for the variable or parameter (it has to be preceded by $$) and choose the type (parameter/variable) and data type. Once defined, the variable/parameter can be used in any expression, for example in the source filter properties tab of a Source Qualifier transformation: just enter the filter condition. Finally, create a parameter file to assign the value for the variable/parameter and configure the session properties; however, this final step is optional. If the parameter is not present in the file, the initial value assigned at the time of creating the variable is used. A parameter file sketch follows.
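A minimal parameter file sketch; the folder name, session name, and parameter names here are hypothetical placeholders:
[MyFolder.s_m_load_sales]
$$START_DATE=2004-10-01
$$REGION=EAST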
163.In the concept of mapping parameters and variables, the variable value is saved to the repository after the completion of the session, and the next time you run the session the server takes the saved variable value from the repository and starts assigning from the next value of the saved value. For example, I ran a session and at the end it stored a value of 50 to the repository. Next time when I run the session, it should start with a value of 70, not with the value of 51.
You can do one thing: after running the mapping, in the Workflow Manager (Start --> Session), right-click on the session and you will get a menu; there go to the persistent values, where you will find the last value stored in the repository for the mapping variable. Remove it and put in your desired one, then run the session; your task will be done.
164.Significance of Oracle 9i in Informatica when compared to Oracle 8 or 8i: I mean, how is Oracle 9i advantageous when compared to Oracle 8 or 8i when used with Informatica?
It's very easy: Oracle 8i did not allow user-defined data types, but 9i does; BLOB and CLOB are allowed only in 9i, not in 8i; moreover, list partitioning is available in 9i only.
165.Can we use an aggregator/active transformation after an Update Strategy transformation?
We can use one, but the update flag will not remain; we can use a passive transformation instead.
Use:
If the table you are trying to query is already analyzed, then Oracle will go with the CBO. If the table is not analyzed, Oracle follows the RBO.
The first time, if the table is not analyzed, Oracle will go with a full table scan.
186.What is a mystery dimension?
Using a mystery dimension you maintain the mystery data in your project.
187.What is the difference between Informatica 7.1 and Ab Initio?
In Ab Initio there is the concept of the Co>Operating System, which makes the processing run in a parallel fashion; this is not in Informatica.
188.Can I start and stop a single session in a concurrent batch?
Yes; just right-click on the particular session and go to the recovery option.
189.What is the difference between a cached lookup and an uncached lookup?
190. Can I run the mapping without starting the Informatica server?
The difference between a cached and an uncached lookup: when you configure the Lookup transformation as a cached lookup, it stores all the lookup table data in the cache when the first input record enters the Lookup transformation; with a cached lookup, the SELECT statement executes only once and the values of the input record are compared with the values in the cache. With an uncached lookup, the SELECT statement executes for each input record entering the Lookup transformation, and it has to connect to the database each time a new record enters.
191.What is the difference between stop and abort?
Stop: if the session you want to stop is part of a batch, you must stop the batch; if the batch is part of a nested batch, stop the outermost batch.
Abort: you can issue the abort command; it is similar to the stop command except that it has a 60-second timeout. If the server cannot finish processing and committing data within 60 seconds, it kills the DTM process.
192.Can we run a group of sessions without using the Workflow Manager?
It is possible to run the group of sessions using the pmcmd command, without using the Workflow Manager.
193.How do you perform a "loop scope / loop condition" in an Informatica program? Give a few examples.
194.If a session fails after loading 10,000 records into the target, how can you load the records from the 10,001st record when you run the session the next time in Informatica 6.1?
Using the performance recovery option.
195.I have a requirement wherein the column names in a table (Table A) should appear in the rows of a target table (Table B), i.e. converting columns to rows. Is it possible through Informatica? If so, how?
If the data in the tables is as follows:
Table A (key_1 char(3)), values:
1
2
3
Table B (bkey_a char(3), bcode char(1)), values:
1 T
1 A
1 G
2 A
2 T
2 L
3 A
and the output required is:
1, T, A
2, A, T, L
3, A
then the SQL query in the Source Qualifier should be:
select key_1,
max(decode( bcode, 'T', bcode, null )) t_code,
max(decode( bcode, 'A', bcode, null )) a_code,
max(decode( bcode, 'L', bcode, null )) l_code
from a, b
where a.key_1 = b.bkey_a
group by key_1
/
196.What is meant by complex mapping?
A complex mapping involves more logic and more business rules. An example from my own work: in my bank project I was involved in building a data warehouse. The bank has many customers, and after taking loans some of them relocate to another place; maintaining both the previous and current addresses was difficult, so I used SCD Type 2. This is a simple example of a complex mapping.
197.Explain the use of the Update Strategy transformation?
To maintain history data and to maintain the most recently changed data.
198.What are mapping parameters and variables, and in which situation can we use them?
Mapping parameters have a constant value throughout the session, whereas mapping variables' values change; the Informatica server saves the value in the repository and uses it the next time you run the session.
199.What is work let and what use of work let and in which situation we can use it?
Worklet is a set of tasks. If a certain set of task has to be reused in many workflows then we use worklets.
To execute a Worklet, it has to be placed inside a workflow. The use of worklet in a workflow is similar to the
use of mapplet in a mapping.
200.What is the difference between a dimension table and a fact table, and what are the different dimension and fact tables?
A fact table contains measurable data, fewer columns and many rows, and it contains a primary key.
Different types of fact tables: additive, non-additive, semi-additive.
A dimension table contains textual descriptions of data and many columns with fewer rows; it contains a primary key.
201.How do you configure mapping in informatica?
You should configure the mapping with the least number of transformations and expressions to do the most
amount of work possible. You should minimize the amount of data moved by deleting unnecessary links
between transformations.
For transformations that use data cache (such as Aggregator, Joiner, Rank, and Lookup transformations),
limit connected input/output or output ports. Limiting the number of connected input/output or output ports
reduces the amount of data the transformations store in the data cache.
You can also perform the following tasks to optimize the mapping:
Configure single-pass reading.
Optimize datatype conversions.
Eliminate transformation errors.
Optimize transformations.
Optimize expressions.
202.If a session uses the bulk loading option, can I perform recovery on that session?
No, because a bulk load does not write redo log entries (a normal load does), so there is nothing to recover
from; the trade-off is that bulk loading increases session performance.
203.What is lookup transformation and update strategy transformation and explain with an example?
A Lookup transformation is used to look up data in a relational table, view, synonym, or flat file. The
Informatica server queries the lookup source based on the lookup ports used in the transformation and
compares the lookup port values to the lookup source column values according to the lookup condition.
Using a lookup we can get a related value, perform a calculation, or update a slowly changing dimension.
Two types of lookups
Connected
Unconnected
An Update Strategy transformation is used to control how rows are flagged for insert, update, delete, or
reject. At the session level, row treatment can be defined as Insert, Delete, Update, or Data Driven.
For Update there are three session options:
Update as Update
Update as Insert
Update else Insert
204.What is the difference between Power Centre and Power Mart? What is the procedure for
creating Independent Data Marts from Informatica 7.1?
                        Power Center            Power Mart
No. of repositories     any number              any number
Applicability           high-end warehouses     low- and mid-range warehouses
Global repository       supported               not supported
Local repository        supported               supported
ERP support             available               not available
205.In the source, if we also have duplicate records and we have 2 targets, T1- for unique values and
T2- only for duplicate values. How do we pass the unique values to T1 and duplicate values to T2
from the source to these 2 different targets in a single mapping?
This is not the right approach, friends. There is a good practice for identifying duplicates. When you ask
someone how to identify a duplicate record in Informatica, they normally say "use an Aggregator
transformation". That only gives you a count; it does not really identify which record is the duplicate. If the
source is an RDBMS, you can simply write a query such as select ... from ... group by <key fields> having
count(*) > 1. But what if the source is a flat file? You can use an Aggregator to get the count, then filter to
make sure the rows reach the T1 and T2 targets appropriately; that would be the easiest way. Alternatively,
use a Sorter transformation: sort on the key fields by which you want to find duplicates, then use an
Expression transformation to compare each row with the previous one, as in the example below.
Example:
field1-->
field2-->
SORTER:
field1 --ascending/descending
field2 --ascending/descending
EXPRESSION (ports are evaluated top to bottom, and variable ports keep their
values from the previous row):
--> field1
--> field2
<--> v_dup_flag = IIF(field1 = v_field1_prev AND field2 = v_field2_prev, 1, 0)
<--> v_field1_prev = field1
<--> v_field2_prev = field2
<-- o_dup_flag = v_dup_flag
Route rows with o_dup_flag = 0 to T1 (first occurrence) and rows with
o_dup_flag = 1 to T2 (duplicates), e.g. with a Router transformation.
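For an RDBMS source, the same split can be sketched directly in SQL (src, key1, and key2 are placeholder names):
-- keys that occur exactly once: rows for T1
select key1, key2
from src
group by key1, key2
having count(*) = 1;
-- keys that occur more than once: rows for T2
select key1, key2
from src
group by key1, key2
having count(*) > 1;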
214.What is Transaction?
A transaction can be defined as a unit of DML work: an insertion, modification, or deletion of data performed
by users, analysts, or applications, ended by a commit or a rollback.
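As a minimal illustration (the account table and its columns are hypothetical), the two updates below form
one transaction: either both take effect at the commit, or a rollback undoes both:
update account set balance = balance - 100 where acct_id = 1;
update account set balance = balance + 100 where acct_id = 2;
commit;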
215.What are the various test procedures used to check whether the data is loaded in the backend, the
performance of the mapping, and the quality of the data loaded in INFORMATICA?
216.What are the common problems developers face during ETL development?
217.What happens if you try to create a shortcut to a non-shared folder?
It only creates a copy of the object.
218.In a joiner transformation, you should specify the source with fewer rows as the master source.
Why?
In a Joiner transformation, the Informatica server reads all the records from the master source and builds the
index and data caches based on the master table rows. After building the caches, the Joiner transformation
reads records from the detail source and performs the join. Choosing the source with fewer rows as the
master therefore keeps those caches smaller.
219.If you want to create indexes after the load process which transformation you choose?
a) Filter Transformation
b) Aggregator Transformation
c) Stored Procedure Transformation
d) Expression Transformation
Stored procedure transformation.
220.Where is the cache stored in informatica?
The cache is stored on the machine running the Informatica server, in the server's cache directory.
221.How to get two targets T1 containing distinct values and T2 containing duplicate values from
one source S1?
222.What will happen if you are using Update Strategy Transformation and your session is
configured for "insert"?
What are the types of External Loader available with Informatica?
If you have a rank index for the top 10 but pass only 5 records, what will the output of such a Rank
transformation be?
223.What real-time problems generally come up while building or running mappings or
transformations? Can anybody explain with an example?
224.Can batches be copied/stopped from server manager?
225.What is the Rank transformation? Where can we use it?
The Rank transformation is used to select the top or bottom ranked rows. For example, if we have a sales
table in which many employees sell the same product and we need to find the top 5 or 10 employees selling
the most products, we can use a Rank transformation.
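A SQL analogue of what the Rank transformation computes, with illustrative table and column names
(Oracle-style top-N):
select *
from (select emp_id, sum(quantity) total_qty
      from sales
      group by emp_id
      order by total_qty desc)
where rownum <= 5;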
226.What is the exact use of the 'Online' and 'Offline' server connect options while defining a
workflow in the Workflow Monitor? The system hangs with the 'Online' server connect option.
Informatica is installed on a personal laptop.
227.How can you delete duplicate rows without using a dynamic lookup? Are there any other ways of
deleting duplicate rows using a lookup?
Business Objects
BO Designer
1.What is Cardinality?
Expresses the minimum and the maximum number of instances of an entity B that can be associated with
an instance of an entity A. The minimum and the maximum number of instances can be equal to 0, 1, or N.
2.What is Cartesian product?
A situation in which a query includes two or more tables that are not linked by a join. If executed, this type of
query retrieves all possible combinations between each table and may lead to inaccurate results.
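For example (hypothetical tables), a query with no join condition returns every combination of rows, m x n in total:
-- no where clause linking c and o, so every combination is returned
select c.cust_name, o.order_id
from customer c, orders o;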
3.What is Class?
A class is a logical grouping of objects within a universe. In general, the name of a class reflects a business
concept that conveys the category or type of objects. For example, in a universe pertaining to human
resources, one class might be Employees. A class can be further divided into subclasses. In the human
resources universe, a subclass of the Employees class could be Personal Information. As designer, you are
free to define hierarchies of classes and subclasses in a model that best reflects the business concepts of
your organization.
4. What is Condition?
A component that controls the type and the amount of data returned by a specific object in a query. A
condition created in the Designer module is referred to as a predefined condition.
5.What is Connection?
A set of parameters that provides access to an RDBMS. These parameters include system information such as
the data account, user identification, and path to the database. Designer provides three types of
connections: secured, shared, and personal.
6.What is Context?
A method by which Designer can decide which path to choose when more than one path is possible from
one table to another in the universe.
7.What is Detail object?
An object qualified as a detail provides descriptive data about a dimension object. A detail object cannot be
used in drill down analysis.
8.What is Dimension object?
An object being tracked in multidimensional analysis; the subject of the analysis. Dimensions are organized
into hierarchies.
9.What is Document domain?
The area of the repository that stores documents, templates, scripts, and lists of values.
10.What is Drill?
There can be 3 types of drill analysis: drill down, drill up, and drill through. Within the same universe one
can drill up/down, e.g. Country-State-City, of course with facts that relate to the same grain. A drill through is
possible when we can link different data marts, e.g. Profitability as defined by details of Asset, Liability,
Income and Expense.
11.What is Equi-join?
A join based on the equality between the values in the column of one table and the values in the column of
another. Because the same column is present in both tables, the join synchronizes the two tables.
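A minimal example with illustrative tables, joining on the common column:
select e.emp_name, d.dept_name
from emp e, dept d
where e.dept_no = d.dept_no;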
12.What is Enterprise mode?
A work mode whereby a designer creates universes in an environment with a repository. The mode in which
a universe is saved determines whether other designers are able to access it. By default, a universe is
saved in the mode in which the designer is already working.
13.What is Hierarchy?
An ordered series of related dimensions used for multidimensional analysis.
14.What is Join?
A relational operation that causes two tables with a common column to be combined into a single table.
Designer supports equi-joins, theta joins, outer joins, and shortcut joins.
15.What is List of values?
A list of values contains the data values associated with an object. These data values can originate from a
corporate database, or a flat file such as a text file or Excel file. In Designer you create a list of values by
running a query from the Query Panel. You can then view, edit, purge, refresh and even export this file. A list
of values is stored as an .lov file in a subfolder of the UserDocs folder.
16.What is Loop?
A situation that occurs when more than one path exists from one table to another in the universe.
17.What is Measure object?
An object that is derived from an aggregate function. It conveys numeric information by which a dimension
object can be measured.
18.What is Object?
A component that maps to data or a derivation of data in the database. For the purposes of multidimensional
analysis, an object can be qualified as a dimension, detail, or measure. Objects are grouped into classes.
19.What is Offline mode?
The work mode in which the designer works with universes stored locally.
20.What is Online mode?
The work mode appropriate for a networked environment in which the general supervisor has set up a
repository.
21.What is Outer join?
A join that links two tables, one of which has rows that do not match those in the common column of the
other table.
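A sketch with illustrative tables: every customer is kept, even those with no matching row in orders:
select c.cust_name, o.order_id
from customer c left outer join orders o
on o.cust_id = c.cust_id;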
22.What is Personal connection?
A personal connection is used to access resources such as universes or documents. It can be used only by
the user who created it. Information about a personal connection is stored in both the PDAC.LSI and
PDAC.SSI files; its definition is static and cannot be modified.
23.What is Qualification?
A property of an object that determines how it can be used in multidimensional analysis. An object can be
qualified as one of three types: a dimension, detail or measure.
24.What is Query?
In Designer a query is a technique for creating or modifying a list of values associated with an object. From
the Query Panel, a designer builds a query from the classes, objects, and conditions of a universe. In the
BusinessObjects User module, a query is a type of data provider. An end user builds a query from a
universe, and then runs the query to generate a BusinessObjects report.
25.What is Quick Design?
A wizard in the Designer module that provides guided instructions for creating a basic universe. It lets a
designer name a universe, set up a connection to a database, select strategies, create classes and objects,
as well as generate joins with cardinalities.
26.What is Repository?
A centralized set of relational data structures stored in a database. It enables BusinessObjects users to
share resources in a controlled and secured environment. The repository is made up of three domains: the
security domain, the universe domain, and the document domain.
40.What is Designer?
Designer is a Business Objects IS module used by universe designers to create and maintain universes.
Universes are the semantic layer that isolates end users from the technical issues of the database structure.
Universe designers can distribute universes to end users by moving them as files through the file system, or
by exporting them to the repository.
41.How do you design a universe?
The design method consists of two major phases. During the first phase, you create the underlying database
structure of your universe. This structure includes the tables and columns of a database and the joins by
which they are linked. You may need to resolve loops which occur in the joins using aliases or contexts. You
can conclude this phase by testing the integrity of the overall structure. During the second phase, you can
proceed to enhance the components of your universe. You can also prepare certain objects for
multidimensional analysis. As with the first phase, you should test the integrity of your universe structure.
You may also wish to perform tests on the universes you create from the BusinessObjects User module.
Finally, you can distribute your universes to users by exporting them to the repository or via your file system.
For a universe based on a simple relational schema, Designer provides Quick Design, a wizard for creating
a basic yet complete universe. You can use the resulting universe immediately, or you can modify the
objects and create complex new ones. In this way, you can gradually refine the quality and structure of your
universe.
42.What are the precautionary measures you will take in the project?
43.What are drill up, drill down, drill by, and drill through?
Drill up: up one level
Drill down: down one level
Drill by: selection of a level in the hierarchy
Drill through: from one hierarchy to another hierarchy
44.Explain the SQL queries sent to the database from a data provider in BO.
BO automatically generates the SQL query when objects are selected in the Query Panel. When you run the
query, it is processed against the database based on your connectivity. For example, if you run the query
from a full client on the local machine (BO Reporter), it connects directly to the database through the
middleware:
Full Client <---> Database
If you run the query over the web, the browser connects to the web server, and the web server passes the
request on to the database:
WEBI <---> Web Server <---> Database
45.What are steps to be taken care to create a good Universe?
1) make the joins with optimization
2) avoid creating too many user objects in the universe
3) the number of classes should not exceed 60
4) use aggregate awareness on measure objects (see the sketch below)
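Aggregate awareness is defined per object with @Aggregate_Aware, listing the alternative SELECTs from
the most to the least aggregated table; a sketch with hypothetical table and column names:
@Aggregate_Aware(sum(agg_yearly_sales.revenue), sum(agg_monthly_sales.revenue), sum(order_lines.amount))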
46.What steps can be taken to improve report performance?
At the DESIGNER level:
1) eliminate unnecessary joins
2) push conditions down to the database level as much as possible
3) edit the SQL query in the Query Panel as required
At the REPORTER level:
1) eliminate filters as much as possible
2) try to reduce the number of user variables
47.How can we achieve a correlated sub-query in Designer? Can anyone help me in this regard?
Right-click on any object and go to its properties. Specify the query in the SELECT and put the sub-query in
the WHERE clause; note that to be truly correlated the inner query must reference the outer table, e.g.
select COLNAME from TABNAME1 t1 where COLNAME in (select colname2 from tab2 t2 where t2.key = t1.key).
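A fuller sketch of a correlated sub-query (the orders table and its columns are hypothetical); the inner query
is re-evaluated for each outer row, using that row's cust_id:
select o.order_id, o.cust_id, o.amount
from orders o
where o.amount > (select avg(i.amount)
                  from orders i
                  where i.cust_id = o.cust_id);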
p1 20/Mar/2000 500
Now I want a query like: product, sum for the actual year, sum for the business year, e.g.
p1 450 750
Here the actual year means 1 Jan 1999 to 31 Dec 1999, and the business year means 1 Apr 1999 to 31 Mar 2000.
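One way to sketch such a query (the sales table and column names are assumed):
select product,
       sum(case when sale_date between to_date('01-01-1999','DD-MM-YYYY')
                                   and to_date('31-12-1999','DD-MM-YYYY')
                then amount else 0 end) actual_year,
       sum(case when sale_date between to_date('01-04-1999','DD-MM-YYYY')
                                   and to_date('31-03-2000','DD-MM-YYYY')
                then amount else 0 end) business_year
from sales
group by product;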
58.What is slicing and dicing in business objects?
Slice & Dice is a facility in BO that enables us to change the positions of data in a report. Using the Slice
and Dice panel we can create cross tables and master/detail tables.
59.How to link two different universes ?
To link 2 universes we have 2 approaches:
1. Go through Edit --> Links
2. Go through the universe parameters, where there is a Links tab; using it we can link the 2
universes.
60.What are the functional and architectural differences between Business Objects and Web
Intelligence reports?
61.How do you implement the built-in strategy script in BO Designer?
62.What are the guidelines to build a universe with better performance? What are the
performance-tuning issues for universes?
Business Objects
1.What is a bo repository?
Generally, the repository is the metadata store.
A BO 5.0 repository creates/maintains 50 tables, distributed as follows:
25 tables for the security domain
24 tables for the universe domain
1 table for the document domain
2.Give notes on the functionality of cascading prompts and @script in Business Objects.
Syntax
@Prompt('message', ['type'], [lov], [MONO|MULTI], [FREE|CONSTRAINED])
where message is the text of a message within single quotes; type can be one of the following: 'A' for
alphanumeric, 'N' for number, or 'D' for date; lov can be either a list of values enclosed in braces (each value
within single quotes and separated by commas) or the name of a class and object separated by a backslash
and within single quotes. MONO means the prompt accepts only one value; MULTI means the prompt can
accept several values. FREE refers to free input, as opposed to CONSTRAINED, which means the end user
must choose a value suggested by the prompt.
Description
Used to create an interactive object. In the Query Panel, this type of object causes a message to appear that
prompts the end user to enter a specific value.
Note
The last four arguments are optional; however, if you omit an argument you must still enter the commas as
separators.
Example
In Where Clause:
City.city IN @Prompt ('Choose City', 'A', {'Chicago', 'Boston', 'New York'}, MULTI, FREE)
In the Query Panel, the object prompts the end user to choose a city.
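The lov argument can also name a class and object whose list of values feeds the prompt; a sketch
assuming a universe with a Store class containing a City object:
City.city IN @Prompt('Choose City', 'A', 'Store\City', MULTI, CONSTRAINED)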
3.When to use local filter and when to use global Filter?
A local filter applies only to the single report in which it is created, whereas a global filter applies to all the
reports that contain that column.
4.What are the user requirements in a universe?
Database connections, key columns, joins, and a check for loops; measures and metrics if you need them.
5. I have three predefined prompts. In a report it will come randomly. How they will come in a
specified format?
The prompts appear in alphabetical order. To make them appear in the order we require, prefix each prompt
with a number.
5.What's a universal join in BO?
The level of join between two universes with a matching column.
6.Can we apply Rank and Sort at a time on a single report?
No, we can't apply a rank and a sort at the same time on one object in a single report. If we try, BO asks
whether we want to overwrite the previous condition.
7.What is difference between custom hierarchy and report based hierarchy?
By default each class has one hierarchy, called the report (default) hierarchy. A custom hierarchy can be
created in Designer according to our requirements.
8.What is the multi value error ?Is there any types of Error in BO?
You get the multi-value error when you try to retrieve multiple values into a single cell.
Ex: when you insert a cell into a report and assign to it a column that has multiple values (a single cell
cannot show multiple values).
9.How many ways we test the universe & Report?
By running an integrity check we can test the universe; by copying the report query and running it in the
backend (Oracle, SQL Server, ...) we can test the data by comparing both results.