
In BI 7.0, from the PSA to a DSO the delta or full update is done on the basis of requests, i.e. the data mart status of the DataSource. Here you do not need to set the Repair Full indicator, and there is no option to do so. If you want to achieve this functionality, first do a full load from the PSA to the DSO, then set the processing mode of the DTP to "No Data Transfer; Delta Status in Source: Fetched" and execute it once more. It acts as an "Init Without Data Transfer". After that you can start doing the delta loads as usual.
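To make the request bookkeeping concrete, here is a minimal Python sketch (not SAP code; all class and method names are invented for illustration). A delta DTP remembers which PSA requests it has already fetched, and the "No Data Transfer" mode simply marks every existing request as fetched without moving any data, so only requests that arrive later flow with the next delta run.

```python
class PSA:
    """Staging area holding load requests, each with its own records."""
    def __init__(self):
        self.requests = {}          # request_id -> list of records
        self._next_id = 1

    def add_request(self, records):
        rid = self._next_id
        self._next_id += 1
        self.requests[rid] = records
        return rid


class DTP:
    """Delta-capable transfer: remembers which PSA requests it already fetched."""
    def __init__(self, psa, target):
        self.psa = psa
        self.target = target        # plain list standing in for the DSO
        self.fetched = set()        # request ids already transferred / marked

    def execute_delta(self):
        new = [rid for rid in self.psa.requests if rid not in self.fetched]
        for rid in new:
            self.target.extend(self.psa.requests[rid])
            self.fetched.add(rid)
        return new                  # request ids picked up by this run

    def mark_no_data_transfer(self):
        # Analogous to "No Data Transfer; Delta Status in Source: Fetched":
        # existing requests count as fetched, but nothing is moved.
        self.fetched.update(self.psa.requests)


psa, dso = PSA(), []
psa.add_request([{"doc": 1}])       # loaded earlier via a full load
dtp = DTP(psa, dso)
dtp.mark_no_data_transfer()         # "init without data transfer"
psa.add_request([{"doc": 2}])       # new request arrives later
print(dtp.execute_delta())          # [2] -> only the new request is transferred
print(dso)                          # [{'doc': 2}]
```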

If you want to follow the 3.5 procedure, you can still do that: in the InfoSource tab, select your InfoArea, right-click it and create an InfoSource 3.x. Then right-click your Cube or DSO --> Additional Functions --> Update Rules, and follow the same procedure you used in 3.5.

And for data loading in BI 7.0 from a flat file:

First create a Cube or DSO with the same structure that you have in the flat file and activate it. Then go to the DataSource tab and create a DataSource; here you need to select the type of data, for example Transaction Data. Mention your flat-file name and file type in the Extraction tab, enter your InfoObject names in the Fields tab, load the preview data and activate the DataSource. Now select your DataSource, create an InfoPackage and schedule it; your data will be loaded up to the PSA level. Then go to the InfoProvider area, select your cube, right-click it, create the transformation and activate it. Finally create a DTP, activate it and execute it.
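As an illustration of the flow just described (flat file -> DataSource -> InfoPackage/PSA -> transformation -> DTP -> data target), here is a small Python sketch. It is not SAP code; the field and InfoObject names are invented, and the in-memory lists merely stand in for the PSA and the DSO.

```python
import csv
import io

# Stand-in for the flat file; in practice this would be a real CSV on the server.
FLAT_FILE = io.StringIO("MATERIAL,PLANT,QUANTITY\nM-01,1000,5\nM-02,1000,3\n")

# "InfoPackage": load the file as-is into the PSA (no transformation yet).
psa = list(csv.DictReader(FLAT_FILE))

# "Transformation": map the file fields to InfoObject-like names, convert types.
def transform(row):
    return {"0MATERIAL": row["MATERIAL"],
            "0PLANT": row["PLANT"],
            "QUANTITY": int(row["QUANTITY"])}

# "DTP": move the PSA records through the transformation into the data target.
dso = [transform(r) for r in psa]
print(dso)
```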
To summarize the 7.0 data flow step by step:

1) Create the DataSource. Here you can set/check the source system fields.
2) Create a transformation for that DataSource (there are no more update rules / transfer rules).
2.1) While creating the transformation for the DataSource, it will ask you for the data target name, so just assign where you want to update your data: DataSource -> Transformation -> (DTP) -> Data Target.

Now, if you want to load data into the data target from the source system DataSource:

1) Create an InfoPackage for that DataSource. If you are creating an InfoPackage for a new DataSource, it will only allow you to update up to the PSA; all other options appear disabled.
2) Now create a DTP (Data Transfer Process) for that DataSource.
3) Now schedule the InfoPackage; once the data is loaded to the PSA, you can execute your DTP, which will load the data to the data target.

The Data Transfer Process (DTP) is now used to load data using the data flow created by the transformation. Here's how the DTP data load works:
1) Load the InfoPackage.
2) Data gets loaded into the PSA (hence why "PSA only" is selected).
3) The DTP gets executed.
4) Data gets loaded from the PSA into the data target once the DTP has executed.
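The order matters: the InfoPackage only fills the PSA, and the DTP is what moves the data onward. The hypothetical Python sketch below (invented function names, plain lists standing in for the PSA and the data target) shows that running the DTP before the InfoPackage simply finds nothing to transfer.

```python
def load_infopackage(source_rows, psa):
    """Stage 1: the InfoPackage brings source-system data into the PSA only."""
    psa.extend(source_rows)

def execute_dtp(psa, target, already_loaded=0):
    """Stage 2: the DTP moves PSA records onward to the data target."""
    target.extend(psa[already_loaded:])
    return len(psa)                            # high-water mark for the next run

psa, target = [], []
mark = execute_dtp(psa, target)                # DTP before the InfoPackage: nothing moves
load_infopackage([{"doc": 10}], psa)           # InfoPackage fills the PSA
mark = execute_dtp(psa, target, mark)          # now the DTP loads PSA -> data target
print(target)                                  # [{'doc': 10}]
```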

A full load reads from the active table; a delta load reads from the change log table.
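Here is a rough Python sketch of that difference, using invented structures rather than the real DSO tables: the full load reads the current state from the "active" part, while the delta load reads only the change-log entries written since the last extraction (the before/after-image record modes shown are only illustrative).

```python
dso = {
    "active":    {"4711": {"status": "C", "amount": 120}},   # current state per key
    "changelog": [                                            # before/after images
        {"key": "4711", "recordmode": "X", "amount": 100},    # before image
        {"key": "4711", "recordmode": "",  "amount": 120},    # after image
    ],
}

def full_load(dso):
    # full update: snapshot of whatever is currently active
    return list(dso["active"].values())

def delta_load(dso, last_pointer):
    # delta update: only change-log records written after the last extraction
    new_records = dso["changelog"][last_pointer:]
    return new_records, len(dso["changelog"])

print(full_load(dso))
delta_records, pointer = delta_load(dso, last_pointer=0)
print(delta_records)
```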

What you need to do is create a delta package with the two cubes as data targets. Insert the delta load in the process chain instead of the full load. When the chain runs for the first time, it will see that there is no delta init and will automatically do a delta init; the next time it will do a delta load.

You need to delete the already existing full loads afterwards to avoid duplicate data.

You can, of course, do the same manually.
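A small sketch, in plain Python with invented names, of the behaviour described above: the first run of the delta package finds no initialization and therefore loads everything (acting as the init), and every later run transfers only what arrived since the previous run, to both targets.

```python
class DeltaPackage:
    def __init__(self, source):
        self.source = source      # list standing in for the source requests
        self.pointer = None       # None = delta not yet initialized

    def run(self, targets):
        if self.pointer is None:                 # first run: automatic delta init
            batch = list(self.source)
        else:                                    # later runs: only new records
            batch = self.source[self.pointer:]
        self.pointer = len(self.source)
        for t in targets:
            t.extend(batch)
        return batch

source = [{"doc": 1}]
cube_a, cube_b = [], []
pkg = DeltaPackage(source)
pkg.run([cube_a, cube_b])         # first run acts as the init: everything is loaded
source.append({"doc": 2})
pkg.run([cube_a, cube_b])         # second run: only the new document
print(cube_a)                     # [{'doc': 1}, {'doc': 2}]
```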

Creating Process Chains for DSO


SAPTechnical.com, by Sandhya Punuru, Perot Systems

A process chain is a sequence of processes that are scheduled to wait in the background for an event. Some of these processes trigger a separate event that can, in turn, start other processes. It looks similar to a flow chart. You define the list of InfoPackages / DTPs that is needed to load the data, say a delta or full load. Then you schedule the process chain as hourly, daily, monthly, etc., depending on the requirement.

Use: If you want to automatically load and schedule a hierarchy, you can include this as an application process in the procedure for a process chain.

Note: Before you define process chains, you need to have the objects ready, i.e. the DSO, cubes, etc.
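Conceptually, a process chain is event-driven: each process waits for an event and raises a follow-up event when it finishes, which triggers its successors. The sketch below models that idea in plain Python; it is not the RSPC implementation, and the event and process names are made up to mirror the DSO load used in this example.

```python
import collections

def run_chain(processes, start_event="START"):
    """processes: list of (waits_for_event, process_name, raises_event) tuples."""
    waiting = collections.defaultdict(list)
    for wait_for, name, raises in processes:
        waiting[wait_for].append((name, raises))

    queue = [start_event]                      # the start process raises the first event
    while queue:
        event = queue.pop(0)
        for name, raises in waiting.pop(event, []):
            print(f"{event!r} received -> running {name}")
            if raises:
                queue.append(raises)           # successor processes wait on this event

run_chain([
    ("START",      "Execute InfoPackage (load to PSA)", "PSA_LOADED"),
    ("PSA_LOADED", "Execute DTP (PSA -> DSO)",          "DSO_LOADED"),
    ("DSO_LOADED", "Activate DSO data",                 None),
])
```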

Steps to Create Process Chains

Step 1: Go to transaction RSPC. Click on Create. Give the process chain a name and a description.

Step 2: Give the start process a name and description and click on Enter.

Note: Each process chain must start with a Start Process.

Step 3: The next screen shows the scheduling options. There are two options:

Direct Scheduling
Start Using Meta Chain or API

In my example I have chosen Direct Scheduling, as I am processing only one chain, i.e. the one for the DSO. Click on Change Selections.

In the screenshot below you can give the scheduling details, i.e. Immediate, Date/Time, etc., and then click on Save.

The screenshot below indicates that we have started the process.

Step 4:
Click on the Process Types icon as shown in the figure below. You will get a list of options.
In my example I am scheduling a load for a DSO. To process it, we need the InfoPackage and the DTPs for the corresponding DSO.
Open the tree Load Process and Post Processing; we need to drag and drop Execute InfoPackage.

Step 5: Once you drag and drop Execute InfoPackage, we get the popup below. We need to key in the InfoPackage name. To do this, press F4, choose your InfoPackage and press Enter.

Step 6: Once you drag & drop the InfoPackage, the corresponding DTPs and the corresponding Active Data table are automatically called.

Step 7: Save + Activate and execute the process chain.

Step 8: Once the process is completed, we can see the whole chain turn green, which indicates that the process completed successfully.
Note: In case of errors in any step, the particular step will be displayed in red.

Step 9: We can see whether the data was successfully updated by right-clicking on the DataStore data.

Step 10: Selecting Administer Data Target will lead you to the InfoProvider administration.

Step 11: Click on the Active Data tab to see the data that was successfully uploaded.

Note:
Similarly, the process can be done for cubes, master data and transactional data.
When you create process chains, by default they are stored under Not Assigned. If you want to create your own folders:

a) Click on the Display Components icon on your toolbar => choose F4 => Create

b) Give an appropriate name for the folder => Enter

c) Save + Activate

Analysis Process Designer (APD)

You can use the Analysis Process Designer (APD) to work with query result sets. In this scenario, APD serves as an extractor that loads data in the background to a DataStore object (DSO) and then loads master data, so that the query result data is available for analysis. The data is gathered from the query, stored in a direct-update DSO, and then passed to a master data InfoObject. You can use this technique to store snapshots of query result sets or to store the values of the query results.
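To illustrate the idea (not the APD tool itself), here is a hypothetical Python sketch: a query is run in the background, each result row is stamped with a snapshot date, and the rows are appended to a structure standing in for the direct-update DSO, so that later runs accumulate a history of snapshots.

```python
import datetime

def run_query():
    # stand-in for the BW query that APD would execute in the background
    return [{"customer": "C-100", "score": 0.87},
            {"customer": "C-200", "score": 0.42}]

def snapshot_to_dso(rows, dso):
    # stamp each result row so every run keeps its own snapshot
    stamp = datetime.date.today().isoformat()
    for row in rows:
        dso.append({**row, "snapshot_date": stamp})

direct_update_dso = []                 # stands in for the direct-update DSO
snapshot_to_dso(run_query(), direct_update_dso)
print(direct_update_dso)
```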

Queries often contain measures that you need to extract for statistical analysis, trend analysis, or use in other queries for further analysis. If the data is available only in queries, there is no standard DataSource that allows data to be loaded directly from a query into a DataStore object (DSO) or InfoCube. The Analysis Process Designer (APD) allows queries to run in the background; the resulting query data is extracted and stored for use in other queries.

This often-overlooked technique allows you to use query result data as a DataSource. You can use it to store trend analyses or to extract complex result sets from a query that would be very difficult or impossible to reproduce with transformation logic. It also gives the query designer the flexibility to create the various metrics needed for extraction.
