
SAP BI/BW INTERVIEW QUESTIONS AND ANSWERS

SAP BI/BW QUESTIONS ASKED IN FACE-TO-FACE AND TELEPHONIC INTERVIEWS


WIPRO
1) TELL ME ABOUT YOURSELF?
2) WHAT IS THE EXTENDED STAR SCHEMA?
3) WHAT OBJECTS OR FIELDS ARE STORED IN THE SID TABLE?
4) WHAT DO YOU CALL FIELDS IN FACT TABLES OTHER THAN KEY FIGURES IN SAP BI?
5) WHAT IS THE DIFFERENCE BETWEEN A TIME-DEPENDENT CHARACTERISTIC AND A TIME-INDEPENDENT CHARACTERISTIC?
6) WHAT ARE THE VARIABLE TYPES?
7) WHAT IS THE DIFFERENCE BETWEEN A FORMULA VARIABLE AND A CHARACTERISTIC VARIABLE?
8) WHAT ARE P, X AND Y TABLES?

CAPGEMINI
1) WHAT IS THE PROCESS KEY IN EXTRACTION?
2) DATA IS LOADED FROM ANOTHER ODS INTO AN INFOCUBE THROUGH A LOOKUP ROUTINE FOR THREE FIELDS. THE LOADING TIME INTO THE INFOCUBE IS HIGH, SO THIS IS A PERFORMANCE ISSUE. HOW DO YOU REDUCE THE LOADING TIME FOR THE LOOKUP ROUTINE (TRANSFER ROUTINE)?
3) HAVE YOU WORKED ON VARIABLES? GIVE ME A SCENARIO.
4) WHAT ARE TEXT AND CHARACTERISTIC VARIABLES?

BRISTLE CONE
1) WHAT IS AN ATTRIBUTE CHANGE RUN?
2) WHAT IS THE USE OF THE RSRT TCODE?
3) WHAT DO YOU DO IF A DELTA LOAD FAILS?

HP

1) IN REPORTING, WHAT IS THE DIFFERENCE BETWEEN THE KEY DATE AND THE DATE IN A TIME CHARACTERISTIC? WHEN DO YOU USE THE KEY DATE? WHAT IS THE PURPOSE OF THE KEY DATE?
2) WHAT ARE PERFORMANCE STATISTICS? HOW DO YOU FIND WHERE THE PERFORMANCE ISSUE IS? WHAT DO YOU DO TO IMPROVE AGGREGATION PERFORMANCE?
3) WHEN A SALES ORDER IS CANCELLED, HOW DO YOU UPDATE THIS IN BI?
4) WHAT DO YOU DO IN YOUR DAILY WORK? IN WHICH AREAS OF BI DO YOU HAVE GOOD EXPERIENCE? WHAT ARE THE MODULES IN YOUR PROJECT?

SAP BI/BW INTERVIEW QUESTIONS AND ANSWERS


Q 1. What are the key differences between an OLTP system and a Data Warehousing system?
Answer:
Level of detail: The OLTP layer stores data at a very high level of detail, whereas data in the Data Warehouse is compressed for high-performance access (aggregation).
History: Data in the OLTP area is archived, meaning it is stored with minimal history, while the Data Warehouse area requires comprehensive historical data.
Changeability: Frequent data changes are a feature of the operative area, while in the Data Warehouse the data is frozen after a certain point for analysis.
Integration: In contrast to the OLTP environment, requests for comprehensive, integrated information for analysis are very high.
Normalization: Due to the reduction in data redundancy, normalization is very high for operative use. Data staging and lower performance are the reasons why there is less normalization in the Data Warehouse.
Read access: An OLAP environment is optimized for read access. Operative applications (and users) also need to carry out additional functions regularly, including change, insert, and delete.
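The "level of detail" point can be illustrated with a toy aggregation: document-level line items (OLTP-style detail) are compressed into totals per customer and month (warehouse-style). This is a generic sketch; the record layout and field names are invented for illustration, not taken from SAP.

```python
from collections import defaultdict

# OLTP-style line items: one record per document line (invented sample data)
line_items = [
    {"customer": "C1", "month": "2024-01", "order": "4711", "amount": 100.0},
    {"customer": "C1", "month": "2024-01", "order": "4712", "amount": 50.0},
    {"customer": "C2", "month": "2024-01", "order": "4713", "amount": 75.0},
    {"customer": "C1", "month": "2024-02", "order": "4714", "amount": 20.0},
]

def aggregate(rows, keys, measure):
    """Compress detailed rows into totals per key combination, the way a
    warehouse aggregate drops document-level detail such as order numbers."""
    totals = defaultdict(float)
    for row in rows:
        totals[tuple(row[k] for k in keys)] += row[measure]
    return dict(totals)

summary = aggregate(line_items, ["customer", "month"], "amount")
# The order number is gone: four detailed rows become three aggregated ones.
print(summary)
```

Note how the aggregated result can no longer answer document-level questions, which is exactly the trade-off the answer above describes.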
Q 2. List the differences between BW 3.5 and BI 7.0 versions.
Answer: SAP BW 7.0 is called SAP BI and is one of the components of SAP NetWeaver. Some of the major differences are:
1. No Update rules or Transfer rules (not mandatory in the data flow).
2. Instead of update rules and transfer rules, a new concept called transformations was introduced.
3. A new DataStore type was introduced in addition to the Standard and Transactional types.
4. The ODS is renamed as DataStore to meet global data warehousing standards.
5. In InfoSets you can now include InfoCubes as well.
6. The Re-Modeling transaction helps you add new key figures and characteristics and handle historical data as well. This facility is available only for InfoCubes.
7. The BI Accelerator (for now only for InfoCubes) helps in reducing query run time by almost a factor of 10 to 100. The BI Accelerator is a separate appliance and costs extra.
8. Monitoring has been improved with a new portal-based cockpit.
9. Search functionality has improved: you can search for any object, unlike in 3.5.
10. Transformations replace the transfer and update rules.
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
12. XML data can be pushed into the BI system (into the PSA) without the Service API or delta queue. From BI, remote activation of DataSources is possible in SAP source systems.
13. There are functional changes to the Persistent Staging Area (PSA).
14. BI supports real-time data acquisition.
15. SAP BW is now formally known as BI (part of NetWeaver 2004s). It implements Enterprise Data Warehousing (EDW).
16. Loading through the PSA has become mandatory; you cannot skip it, and there is no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaces the transfer and update rules. In the transformation you can now write a start routine, an expert routine and an end routine during data load.
17. User management includes a new concept of analysis authorizations for more flexible BI end-user authorizations.
Q 3. What are the responsibilities and tasks of a BW Technical person versus a BW Functional person?
Answer: Although the definitions vary from case to case, in general the Functional consultant derives the functional specification from the business requirement document. This job is normally done by a business analyst or system analyst who has very good knowledge of the business. In some large organizations/projects there is a business analyst as well as a system analyst.
In the case of new business requirements or requests for new reports or queries, the requirements are discussed with the business by the business/system analyst. These refined requirements are then used to generate the functional specification document. In the case of BW this could be the logical design in data modeling.
After further review, this logical design is translated into a physical design. The physical design results in the definition of the required dimensions, key figures, master data, etc. Once this process is approved and signed off by the requesters (users), it is converted into practically usable tasks using the SAP BW software.
This is the Technical person's job. The whole process of creating InfoProviders, InfoObjects, InfoSources, source systems, etc. falls under the Technical domain.
Q 4. What do you understand by system landscape? What kinds of landscapes are possible with SAP NetWeaver?
Answer: A landscape is a logical grouping of systems. The grouping can be horizontal or vertical. Horizontal landscapes comprise two or more SAP systems (system IDs, SIDs) that support promote-to-production of software for a particular piece or set of functionality; for example, the development, quality assurance and production systems for the BI functionality form the BI landscape. Vertical landscapes comprise the systems in a particular area of the landscape; for example, all of the systems that run productive services form the production landscape.
SAP NETWEAVER BASED SYSTEM LANDSCAPE DEPLOYMENT OPTIONS
As with any software implementation (and SAP NetWeaver based systems are no different), the ideal software landscape comprises environments supporting three distinct needs, providing a solid promote-to-production change management and change control process for all configuration and development. These environments should provide:
An environment where customizing and development can be performed. This environment should be representative of the productive environment and contain all production customizing, developments and a sampling of production data. In addition, new project developments, customizing and data will exist in the system, and this environment is used for unit testing. It also serves as the initial environment for resolution of production issues and routine maintenance support.
An isolated and stable environment for testing the customizing, developments, and maintenance support changes. This environment is representative of the productive environment and contains all production customizing, developments and, in most cases, production-quality data. In addition, this environment also holds newly completed customizing/developments that are in the quality testing phase prior to productive release. The typical testing that occurs in this environment is regression and integration testing. No development tasks are performed in this environment, just quality assurance tasks. This environment may also be used for replicating and debugging productive issues.
An isolated and stable production environment. This environment is the system of record and only contains productive customizing and developments. No development tasks are performed in this environment, just productive tasks. This environment may additionally be used for debugging productive issues.
This promote-to-production scenario is recommended when implementing any system based on SAP NetWeaver. It is typically called a Three System Landscape, with one Production system (PRD), one Quality Assurance system (QAS), and one Development system (DEV).
Many customers supplement a three system landscape with a fourth environment: a standalone sandbox environment used for destructive testing, learning, and experimentation. The landscape is still called a three system landscape, as the sandbox is not part of the promote-to-production landscape.
A customer can choose to combine the above environments into a more minimal two system landscape. This landscape is not typical for SAP deployments, and the customer must manage the additional risk and challenge of separating and isolating the different environment activities from each other while maintaining a stable and productive environment.
Furthermore, customers can choose to extend the three system landscape to a four system landscape. This can be appropriate for customers who have extensive distributed and parallel development teams, or who need to separate the quality assurance processes into two distinct environments. These needs can result in the following additions to the three system landscape:
An additional development system: an additional consolidation system is inserted into the landscape to consolidate distributed developments and customizing.
An additional quality environment, with specific testing needs assigned to each of the quality environments (typically one system performs application and performance testing, and the second is used for integration, user acceptance and regression testing).
Development System (DEV)
All customizing and development work is performed in this system. All system maintenance, including break-fixes for productive processes, is also performed here. After all the changes have been unit tested, they can be transferred to the quality assurance system (QAS) for further system testing. The customizing, development and production break-fix changes are promoted to the QAS system using the change management system. This ensures consistency, management, tracking and audit capabilities, thus minimizing risk and human error by eliminating manual repetition of development and customizing work in each system.
Quality Assurance System (QAS)
After unit testing the customizing, development and break-fix changes in the development system (DEV), the changes are promoted to the quality assurance system (QAS). Here, the configuration, development or changes undergo further tests and checks to ensure that they do not adversely affect other modules. When the configuration, development or changes have been thoroughly tested in this system and signed off by the quality assurance team, they can be promoted to the production system (PRD).
Production System (PRD)
A company uses the production system (PRD) for its live, productive work. This system contains the company's live data, and the real business processes are executed here. The other systems in the landscape provide a safe approach to guaranteeing that only correct and tested (that is, not defective) new developments and/or customizing configurations get deployed into the productive system. Additionally, they ensure that changes to productive developments and configuration, by either project enhancements or maintenance, do not adversely affect the production environment when deployed. Therefore the quality of the DEV and QAS systems and the implemented change management processes directly impact the quality of the production system.
Q 5. We have frequent
load failures during extractions? How are you going to analyse them?
Answer: Loads can fail for various reasons; some are transient and others have to be addressed specifically. Some of the common failure reasons are:
- Data inconsistency in the source system. You can monitor these issues in transaction RSMO and in the PSA (failed records).
- Issues of work process scheduling and IDoc transfer from the source system to the target system. These loads can be re-initiated by cancelling the specific data load (usually by changing the request colour from yellow to red in RSMO) and restarting the extraction.
- Invalid characters in the source system.
- Deadlock in the system.
- A previous load failure, if the load is dependent on other loads.
- Erroneous records.
- Issues around RFC connections.
Q 6. Can you give some business scenarios wherein you have used the standard DataStore object?
Answer: The following example shows how standard DataStore objects are used to update order and delivery information, and to track the status of orders, meaning which orders are open, which are partially delivered, and so on.
There are three main steps to the entire data process:
1. Loading the data into the BI system and storing it in the PSA. The data requested by the BI system is stored initially in the PSA. A PSA is created for each DataSource and each source system. The PSA is the storage location for incoming data in the BI system; requested data is saved unchanged from the source system.
2. Processing and storing the data in DataStore objects. In the second step, the DataStore objects are used on two different levels.
a. On level one, the data from multiple source systems is stored in DataStore objects. Transformation rules permit you to store the consolidated and cleansed data in the technical format of the BI system. On level one, the data is stored on the document level (for example, orders and deliveries) and constitutes the consolidated database for further processing in the BI system. Data analysis is therefore not usually performed on the DataStore objects at this level.
b. On level two, transformation rules subsequently combine the data from several DataStore objects into a single DataStore object in accordance with business-related criteria. The data is very detailed; for example, information such as the delivery quantity, the delivery delay in days, and the order status is calculated and stored per order item. Level two is used specifically for operative analysis issues, for example, which orders are still open from the last week. Unlike multidimensional analysis, where very large quantities of data are selected, here data is displayed and analyzed selectively.
3. Storing data in the InfoCube. In the final step, the data is aggregated from the DataStore object on level two into an InfoCube. The InfoCube does not contain the order number, but saves the data, for example, on the levels of customer, product, and month. Multidimensional analysis is also performed on this data using a BEx query. You can still display the detailed document data from the DataStore object whenever you need to, using the report/report interface from a BEx query. This allows you to analyze the aggregated data from the InfoCube and to target the specific level of detail you want to access in the data.
Here are some more questions (without answers) on BW Modelling:
Q. Aggregates improve query performance. Give some examples where you felt the need to create aggregates.
Q. When might you consider putting two unrelated characteristics in a single dimension table?
Q. We have a dimension with potentially millions of records. We are planning to use a category dimension. Can you explain how this works?
Q. What step-by-step approach would you recommend while designing an InfoCube? What are the key factors that need to be kept in mind?
Q. What kinds of factors would you consider before deciding whether to use a BI Accelerator or a BI aggregate?
Q. What is table partitioning? How does it improve performance? What are the prerequisites?
Q. We have existing live cubes, but due to new requirements we need to make the following changes:
1. Adding a navigation attribute or a hierarchy
2. Adding a characteristic
3. Adding a key figure
4. Changing the dimension tables
What are the steps to be taken for each of these?
Q. What do you know about RDA (Real-Time Data Acquisition)? What are the methods of transferring data to the BI system with RDA?
Q. The concept of requests is important in BI. What purpose do request IDs serve? How is this concept relevant for compression?
Q. What are BI statistics? How do they help with query performance optimization?
Q. What are the two main transfer methods? What are their advantages? Which one is commonly used?
Q. What is an expert routine? Can you provide a business scenario where the expert routine may be used?

SAP BI / BW Interview Questions and Answers



Please check the following content on SAP BI / BW interview questions and answers. It will help you crack your interview. All the best.
1. What is data integrity?
Data integrity is about eliminating duplicate entries in the database; in other words, data integrity means no duplicate data.
2. What is the difference between SAP BW 3.0B and SAP BW 3.1C, 3.5?
The best answer here is Business Content. There is additional Business Content provided with BW 3.1C that wasn't found in BW 3.0B. SAP has a pretty decent reference library on its Web site that documents the additional objects found in 3.1C.
3. What is the difference between SAP BW 3.5 and 7.0?
SAP BW 7.0 is called SAP BI and is one of the components of SAP NetWeaver 2004s. There are many differences between them in areas like extraction, EDW, reporting, analysis and administration. For a detailed description, please refer to the documentation on help.sap.com.
1. No Update rules or Transfer rules (not mandatory in the data flow).
2. Instead of update rules and transfer rules, a new concept called transformations was introduced.
3. A new DataStore type was introduced in addition to the Standard and Transactional types.
4. The ODS is renamed as DataStore to meet global data warehousing standards, and there are many more changes in the functionality of the BEx Query Designer, WAD etc.
5. In InfoSets you can now include InfoCubes as well.
6. The Re-Modeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This facility is available only for InfoCubes.
7. The BI Accelerator (for now only for InfoCubes) helps in reducing query run time by almost a factor of 10 to 100. The BI Accelerator is a separate appliance and costs extra; vendors would be HP or IBM.
8. Monitoring has been improved with a new portal-based cockpit, which means you would need an EP (Enterprise Portal) person on your project for implementing the portal.
9. Search functionality has improved: you can search for any object, unlike in 3.5.
10. Transformations are in and routines are passé. Yes, you can always revert to the old transactions too.
4. What is an index?
Indices (indexes) are used to locate needed records in a database table quickly. BW uses two types of indices: B-tree indices for regular database tables, and bitmap indices for fact tables and aggregate tables.
5. What are KPIs (Key Performance Indicators)?
(1) Predefined calculations that render summarized and/or aggregated information, which is useful in making strategic decisions.
(2) Also known as performance measures or performance metrics. KPIs are put in place and made visible to an organization to indicate the level of progress and status of change efforts in the organization. KPIs are industry-recognized measurements on which to base critical business decisions.
In SAP BW, Business Content KPIs have been developed based upon input from customers, partners, and industry experts to ensure that they reflect best practices.
6. What is the use of a process chain?
Process chains are used to automate the data load process: not only the data loads themselves, but all related administrative tasks like index creation/deletion, cube compression etc. They provide highly controlled data loading.
7. Difference between a display attribute and a navigational attribute
The basic difference between the two is that navigational attributes can be used to drill down in a BEx report, whereas display attributes cannot. A navigational attribute functions more or less like a characteristic within a cube. To enable these features, the attribute needs to be made navigational in the cube, in addition to the setting on the master data InfoObject.
The only difference is that navigational attributes can be used for navigation in queries, like filtering, drill-down etc. You can also use hierarchies on navigational attributes, as is possible for characteristics.
An extra feature is the possibility to change your history (please look at the relevant time scenarios): if a navigational attribute changes for a characteristic, it is changed for all records in the past. A disadvantage is also a slowdown in performance.
8. If there is duplicate data in cubes, how would you fix it?
Delete the request ID, fix the data in the PSA or ODS, and reload again from the PSA/ODS.

9. What are the differences between an ODS and an InfoCube?
An ODS holds transactional-level data. It is just a flat table; it is not based on a multidimensional model. An ODS has three tables:
1. Active data table (a table containing the active data).
2. Change log table (contains the change history for delta updating from the ODS object into other data targets, such as ODS objects or InfoCubes).
3. Activation queue table (for saving ODS data records that are to be updated but have not yet been activated; the data is deleted after the records have been activated).
A cube, in contrast, holds aggregated data that is not as detailed as an ODS. A cube is based on a multidimensional model.
An ODS is a flat structure: just one table that contains all data. Most of the time you use an ODS for line-item data, and then aggregate this data into an InfoCube.
One major difference is the manner of data storage. In an ODS, data is stored in flat tables (by flat I mean ordinary transparent tables), whereas a cube is composed of multiple tables arranged in a star schema joined by SIDs. The purpose is to do multidimensional reporting.
In an ODS we can delete or overwrite the data load, but in a cube only additive updates are possible, with no overwrite.
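The overwrite-versus-additive distinction in the last point can be sketched in a few lines of Python. This is a toy model of the semantics, not SAP code; the record layout and field names are invented.

```python
def load_into_ods(active_table, records, key_fields):
    """ODS/DataStore semantics: a record with an existing key
    overwrites the stored record (last load wins)."""
    for rec in records:
        key = tuple(rec[k] for k in key_fields)
        active_table[key] = dict(rec)  # overwrite (or insert)
    return active_table

def load_into_cube(fact_table, records, key_fields, key_figure):
    """InfoCube semantics: key figures for the same characteristic
    combination are added, never overwritten."""
    for rec in records:
        key = tuple(rec[k] for k in key_fields)
        fact_table[key] = fact_table.get(key, 0) + rec[key_figure]
    return fact_table

# Two loads for the same order key:
delta = [{"order": "4711", "qty": 10}, {"order": "4711", "qty": 4}]

ods = load_into_ods({}, delta, ["order"])
cube = load_into_cube({}, delta, ["order"], "qty")

print(ods[("4711",)]["qty"])   # 4  -> last record wins (overwrite)
print(cube[("4711",)])         # 14 -> values are summed (additive)
```

This is why correcting a value in a cube requires deleting the request or posting a compensating record, while an ODS simply absorbs the corrected record.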
10. What is the use of the change log table?
The change log is used for delta updates to the target; it stores all changes per request and updates the target.
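One way to picture how a change log enables deltas to an additive target: when an active record is overwritten during activation, a before-image (the old key figures with reversed sign) and an after-image can be written to the change log, so pushing both to a cube yields the net change. The sketch below is a simplified illustration of that idea in Python, not SAP's actual implementation; all names are invented.

```python
def activate(active, new_record, key_fields, key_figures):
    """Activate one record into the active table and return the
    change-log images it produces: a before-image with reversed
    sign (if the key already existed), then the after-image."""
    key = tuple(new_record[k] for k in key_fields)
    images = []
    old = active.get(key)
    if old is not None:
        before = dict(old)
        for kf in key_figures:
            before[kf] = -before[kf]   # reversed sign cancels the old value
        images.append(before)
    images.append(dict(new_record))
    active[key] = dict(new_record)
    return images

active, log = {}, []
log += activate(active, {"order": "4711", "qty": 10}, ["order"], ["qty"])
log += activate(active, {"order": "4711", "qty": 7}, ["order"], ["qty"])

# Pushing the whole log additively into a cube gives the net value:
net = sum(img["qty"] for img in log)
print(net)  # 7  (10 - 10 + 7)
```

The additive target never needs to know which records were overwritten; summing the images reproduces the current state.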
11. What is the difference between an InfoSet and a MultiProvider?
a) The operation in a MultiProvider is a union, whereas in an InfoSet it is either an inner join or an outer join.
b) You can add InfoCubes, ODS objects and InfoObjects to a MultiProvider, whereas in an InfoSet you can only have ODS objects and InfoObjects.
c) An InfoSet is an InfoProvider that joins data from ODS objects and InfoObjects (with master data). The join may be an outer join or an inner join. A MultiProvider, in contrast, can be created on all types of InfoProviders: cubes, ODS objects and InfoObjects. These InfoProviders are connected to one another by a union operation.
d) A union operation is used to combine the data from these objects into a MultiProvider: the system constructs the union set of the data sets involved, so all values of these data sets are combined. As a comparison, InfoSets are created using joins, and these joins only combine values that appear in both tables. In contrast to a union, joins form the intersection of the tables.
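The union-versus-join contrast in point d) is the same distinction as in plain set and relational terms, illustrated below with two small data sets (the provider contents and key names are invented for illustration).

```python
cube_rows = {("C1", 100), ("C2", 200)}   # e.g. (customer, revenue) from a cube
ods_rows  = {("C2", 5), ("C3", 7)}       # e.g. (customer, open orders) from an ODS

# MultiProvider-style union: every customer from every provider survives.
union = {c for c, _ in cube_rows} | {c for c, _ in ods_rows}
print(sorted(union))  # ['C1', 'C2', 'C3']

# InfoSet-style inner join: only customers present on BOTH sides survive.
cube_by_key = {c: v for c, v in cube_rows}
ods_by_key  = {c: v for c, v in ods_rows}
joined = {c: (cube_by_key[c], ods_by_key[c])
          for c in cube_by_key.keys() & ods_by_key.keys()}
print(joined)  # {'C2': (200, 5)}
```

The union keeps C1 and C3 even though each appears in only one provider; the inner join drops them, which is exactly why an InfoSet can silently lose records that a MultiProvider would report.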

12. What is the transaction code for data archiving and what are its advantages?
SARA.
Advantages: it minimizes space and improves query and load performance.
13. What are the data loading tuning measures from R/3 to BW, and from flat files to BW?
- If you have enhanced an extractor, check your code in user exit RSAP0001 for expensive SQL statements and nested selects, and rectify them.
- Watch out for ABAP code in transfer and update rules; this might slow down performance.
- If you have several extraction jobs running concurrently, there probably are not enough system resources to dedicate to any single extraction job. Make sure to schedule these jobs judiciously.
- If you have multiple application servers, try to do load balancing by distributing the load among different servers.
- Build secondary indexes on the underlying tables of a DataSource to correspond to the fields in the selection criteria of the DataSource (indexes on source tables).
- Try to increase the number of parallel processes so that packages are extracted in parallel instead of sequentially (use the "PSA and Data Target in parallel" option in the InfoPackage).
- Buffer the SID number ranges if you load a lot of data at once.
- Load master data before loading transaction data.
- Use SAP-delivered extractors as much as possible.
- If your source is not an SAP system but a flat file, make sure that this file is housed on the application server and not on the client machine. Files stored in ASCII format are faster to load than those stored in CSV format.

14. Performance monitoring and analysis tools in BW
- System Trace: Transaction ST01 lets you do various levels of system trace, such as authorization checks, SQL traces, table/buffer traces etc. It is a general Basis tool but can be leveraged for BW.
- Workload Analysis: Use transaction ST03.
- Database Performance Analysis: Transaction ST04 gives you all that you need to know about what is happening at the database level.
- Performance Analysis: Transaction ST05 enables you to do performance traces in different areas, namely SQL trace, enqueue trace, RFC trace and buffer trace.
- BW Technical Content Analysis: SAP standard Business Content 0BWTCT, which needs to be activated. It contains several InfoCubes, ODS objects and MultiProviders, and provides a variety of performance-related information.
- BW Monitor: You can get to it independently of an InfoPackage by running transaction RSMO, or via an InfoPackage. An important feature of this tool is the ability to retrieve important IDoc information.
- ABAP Runtime Analysis Tool: Use transaction SE30 to do a runtime analysis of a transaction, program or function module. It is a very helpful tool if you know the program or routine that you suspect is causing a performance bottleneck.

15. Difference between transfer rules and update rules
a) Transfer rules: When we maintain the transfer structure and the communication structure, we use the transfer rules to determine how the transfer structure fields are assigned to the communication structure InfoObjects. We can arrange for a 1:1 assignment, and we can also fill InfoObjects using routines, formulas, or constants.
Update rules: Update rules specify how the data (key figures, time characteristics, characteristics) is updated into data targets from the communication structure of an InfoSource. You are therefore connecting an InfoSource with a data target.
b) Transfer rules are linked to an InfoSource; update rules are linked to an InfoProvider (InfoCube, ODS).
i. Transfer rules are source system dependent, whereas update rules are data target dependent.
ii. The number of transfer rules equals the number of source systems for a data target.
iii. Transfer rules are mainly for data cleansing and data formatting, whereas in the update rules you write the business rules for your data target.
iv. Currency translations are possible in update rules.
c) Using transfer rules you can assign DataSource fields to the corresponding InfoObjects of the InfoSource. Transfer rules give you the possibility to cleanse data before it is loaded into BW. Update rules describe how the data is updated into the InfoProvider from the communication structure of an InfoSource. If you have several InfoCubes or ODS objects connected to one InfoSource, you can, for example, adjust the data for each of them using update rules.
Only in update rules:
a. You can use return tables in update rules, which split an incoming data package record into multiple records. This is not possible in transfer rules.
b. Currency conversion is not possible in transfer rules.
c. If you have a key figure that is calculated from base key figures, you would do the calculation only in the update rules.
16. What is OSS?
OSS is the Online Service System run by SAP to support its customers. You can access it by entering transaction OSS1, or by visiting service.sap.com and providing your user name and password.
17. How do you transport a BW object?
Follow these steps:
i. RSA1 > Transport Connection.
ii. In the right window all objects are categorized by type.
iii. Select the object you want to transport.
iv. Expand that object, double-click on Select Objects, and from the list of objects select yours.
v. Continue.
vi. Go through the selection and select all the objects you want to transport.
vii. Click the Transport Objects icon (truck symbol).
viii. This creates a request; note down the request number.
ix. Go to the Transport Organizer (transaction SE01).
x. In the display tab, enter the request and display it.
xi. Check whether your transport request contains the required objects; if not, edit it, and if yes, release the request.
That's it; your coordinator/Basis person will move this request to Quality or Production.
18. How do you unlock objects in the Transport Organizer?
To unlock a transport, go to SE03 > Request/Task > Unlock Objects. Enter your request, select unlock and execute. This will unlock the request.
19. What is an InfoPackage group?
An InfoPackage group is a collection of InfoPackages.
20. Differences between InfoPackage groups and process chains
i. InfoPackage groups are used to group only InfoPackages, whereas process chains are used to automate all processes.
ii. InfoPackage groups: used to group all relevant InfoPackages (automation of a group of InfoPackages, only for data loads). It is possible to sequence the loads in order.
Process chains: used to automate all processes, including data loads and all administrative tasks like index creation/deletion, cube compression etc., with highly controlled data loading.
iii. InfoPackage groups and event chains are older methods of scheduling/automation. Process chains are newer and provide more capabilities: we can use ABAP programs and a lot of additional features like ODS activation and sending emails to users based on the success or failure of data loads.
21. What are the critical issues you faced and how did you solve it?
Find your own answer based on your experience.
22. What is Conversion Routine?
a) Conversion Routines are used to convert data types from internal format to external/display format
or vice versa.
b) These are function modules.
c) There are many function modules, they will be of type
CONVERSION_EXIT_XXXX_INPUT, CONVERSION_EXIT_XXXX_OUTPUT.
example:
CONVERSION_EXIT_ALPHA_INPUT
CONVERSION_EXIT_ALPHA_OUTPUT
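For the ALPHA routine named above, the input conversion pads purely numeric values with leading zeros to the field length, and the output conversion strips them again. A rough Python equivalent of that behaviour is shown below; the 10-character length is just an example, and this sketch ignores edge cases the real ABAP function modules handle.

```python
def conversion_exit_alpha_input(value: str, length: int = 10) -> str:
    """Internal format: purely numeric values are right-aligned and
    padded with leading zeros; other values are left as-is."""
    value = value.strip()
    if value.isdigit():
        return value.rjust(length, "0")
    return value

def conversion_exit_alpha_output(value: str) -> str:
    """Display format: strip the leading zeros again."""
    stripped = value.lstrip("0")
    return stripped if stripped else "0"

print(conversion_exit_alpha_input("4711"))         # '0000004711'
print(conversion_exit_alpha_output("0000004711"))  # '4711'
print(conversion_exit_alpha_input("ABC"))          # 'ABC' (not numeric)
```

This round trip is why a master data key typed as "4711" matches the stored value "0000004711" in BW tables.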
23. Difference between a start routine and a conversion routine
In a start routine you can modify data packages during data loading. A conversion routine usually refers to routines bound to InfoObjects (or data elements) for conversion between internal and display format.
24. What is the use of setup tables in LO extraction?
Setup tables store your historical data before it is updated to the target system. Once you have filled the setup tables with data, you need not go to the application tables again and again, which in turn improves system performance.
25. The R/3 to ODS delta update is good, but the ODS to cube delta is broken. How do you fix it?
i. Check the monitor (RSMO) for the error explanation; based on the explanation, we can find the reason.
ii. Check the timings of the delta loads from R/3 to ODS to cube, in case they conflict after the ODS load.
iii. Check the mapping of the transfer/update rules.
iv. A failure in the RFC connection.
v. BW is not set as a source system.
vi. A dump (for a lot of reasons: full tablespace, timeout, SQL errors), or an IDoc not received correctly.
vii. There is a failed load before the last one, and so on.

26. What is a short dump and how do you rectify it?
A short dump indicates that an ABAP runtime error has occurred, and the error messages are written to the R/3 database tables. You can view short dumps through transaction ST22.
You get short dumps because of runtime errors; a short dump can, for example, be due to the termination of a background job, which can have many causes.
You can check short dumps in transaction ST22. You can give the job's technical name and your user ID; it will show the status of jobs in the system, and here you can even analyze the short dump. You can use ST22 in both R/3 and BW.
Alternatively, to call the analysis method, choose Tools > ABAP Workbench > Test > Dump Analysis from the SAP Easy Access menu. In the initial screen, you must specify whether you want to view today's dumps or the dumps from yesterday. If these selection criteria are too imprecise, you can enter more specific criteria: to do this, choose Goto > Select Short Dump. You can display a list of all ABAP dumps by choosing Edit > Display List; you can then display and analyze a selected dump by choosing Short Dump > Dump Analysis.
