
Filters

Use

A filter is an object that describes a multidimensional segment of data from a data set. Filters are
used in reporting, analysis and planning, for example, to restrict data to a certain business area,
certain product groups or certain time periods. You segment data in this way so that users or user
groups only have access to the data that is relevant to them or so that only certain data areas are
available within an application scenario.

Within BI Integrated Planning, filters determine the selection of data upon which a planning
function is executed. A planning sequence comprises a set of planning functions. A filter is
assigned to each of these functions.

You want to revaluate the transaction data in your InfoProvider by 10%. However, you only want
to perform the revaluation for certain customer groups. To do this, you create a filter that
contains the customer groups for which you want to revaluate data.

Filters can be reused in planning functions and in queries.
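
The revaluation example above can be pictured with a short Python sketch. The record layout, characteristic names and values below are invented for illustration and are not SAP objects: the filter selects the customer groups, and the planning function is executed only on the selected records.

```python
# Hypothetical illustration of the revaluation example above: a filter that
# restricts a planning function (revaluation by 10%) to selected customer groups.

records = [
    {"customer_group": "CG01", "product": "P100", "amount": 1000.0},
    {"customer_group": "CG02", "product": "P200", "amount": 2000.0},
    {"customer_group": "CG03", "product": "P300", "amount": 1500.0},
]

# Filter: characteristic restriction on "customer_group" (single values)
filter_selection = {"customer_group": {"CG01", "CG03"}}

def in_filter(record, selection):
    """A record is selected if every restricted characteristic matches."""
    return all(record[char] in values for char, values in selection.items())

# The planning function "revaluate by 10%" runs only on the filtered data.
for rec in records:
    if in_filter(rec, filter_selection):
        rec["amount"] = round(rec["amount"] * 1.10, 2)

print(records)   # CG01 -> 1100.0, CG03 -> 1650.0, CG02 unchanged
```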

Integration

You can create multiple filters for an InfoProvider. You do this using the Planning Modeler or
Planning Wizard or the Query Designer. In the Planning Modeler or Planning Wizard, you can
only define filters on aggregation levels.

For more information about filters in the query, see the documentation on the Query Designer
under Filter.

Prerequisites

To create a filter and use it in BI Integrated Planning, you need an aggregation level. For more
information, see Aggregation Levels.

Features

You choose the characteristics that you want to restrict from the characteristics of an aggregation
level and add them to the filter.

A filter has the following components:


Filter Components

Characteristic restrictions: In the restriction dialog, you further restrict the characteristic
using single values, value ranges, hierarchy nodes and variables. These characteristic
restrictions determine the selection of data for a filter.

Default values: Default values are only relevant in queries. They can be defined in the same
way as characteristic restrictions. They define the initial filter status of the query upon
execution.

To specify selections of data that are time dependent, for example if you want to determine a
time-dependent hierarchy for time-dependent hierarchy node selections, you specify a Filter Key
Date.

You use the delivered variable 0PLANDATA with characteristic 0CALDAY to synchronize key
dates in queries, filters, characteristic relationships, data slices and planning functions. In this
way you ensure that the same key date is used in these objects.

The function of a filter depends on whether you are using it in a planning function or in a query.

Filters in Planning Functions

In planning functions, the characteristic restrictions of a filter describe the data on which the
planning function is executed.

Selections in the default values are not consulted when the planning function is executed.

You use a key date for the filter to determine time-dependent selections.

Filters in Queries

The values defined in the characteristic restrictions restrict the data that is available for further
filtering at runtime of a query. You cannot apply a filter to a characteristic value that is not
included in this value set.

The default values determine the initial filter status of the query.

The settings Changeable upon Execution and Only Single Value generally refer to the use of
filters with a query.

Changeable upon Execution determines whether you can change the values selected in the
characteristic restrictions when you execute the query. This setting is a prerequisite for the
definition of default values for a characteristic.
If you select the Changeable upon Execution option, you can use the Only Single Value option to
specify that you want to use a single value only for filtering the query.
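
How these settings interact can be sketched as follows; the class and field names are illustrative assumptions, not an SAP API. Default values and Only Single Value presuppose Changeable upon Execution, and any value used for filtering at runtime must lie within the characteristic restrictions.

```python
from dataclasses import dataclass, field

@dataclass
class CharacteristicFilter:
    """Illustrative model of one characteristic in a filter (not an SAP API)."""
    restrictions: set                        # characteristic restrictions (allowed values)
    changeable_upon_execution: bool = False
    only_single_value: bool = False
    default_values: set = field(default_factory=set)

    def validate(self):
        # Default values and Only Single Value presuppose Changeable upon Execution.
        if (self.default_values or self.only_single_value) and not self.changeable_upon_execution:
            raise ValueError("Default values / Only Single Value require 'Changeable upon Execution'")
        # Default values must lie within the characteristic restrictions.
        if not self.default_values <= self.restrictions:
            raise ValueError("Default values must be contained in the characteristic restrictions")

    def runtime_filter(self, values: set) -> set:
        # At query runtime you can only filter on values inside the restrictions.
        if not values <= self.restrictions:
            raise ValueError("Runtime filter value lies outside the characteristic restrictions")
        if self.only_single_value and len(values) > 1:
            raise ValueError("Only a single value is allowed for this characteristic")
        return values

# Example: customer group restricted to CG01-CG03, initial filter on CG01.
cust = CharacteristicFilter(
    restrictions={"CG01", "CG02", "CG03"},
    changeable_upon_execution=True,
    only_single_value=True,
    default_values={"CG01"},
)
cust.validate()
print(cust.runtime_filter({"CG02"}))   # allowed
# cust.runtime_filter({"CG99"})        # would raise: outside the restrictions
```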

For more information about filters in queries, see the documentation on the Query Designer
under Filter.

Activities

You are in the Planning Modeler on the Filter tab page. In the Filter Selection screen area, you
can create, copy, delete, change, check, save and activate filters.

Creating Filters

...

1. To create a filter, choose Create.

2. In the Create Filter screen area, enter a technical name and a description for the filter that you
want to create.

3. In the Aggregation Level Selection screen area, choose the required aggregation level. If you do
not enter a search term and choose Start, the system shows all the aggregation levels available in
your system.

Choose Transfer. In the lower screen area of the Planning Modeler, the system displays the
Filter and Settings tab pages.

4. On the Filter tab page, choose the characteristics that you want to restrict.

You can adapt the display of the characteristics according to your requirements (display with
key, text, key/text or text/key).

Add the characteristics that you want to restrict to the list. You can add individual characteristics
to the list or all characteristics in the aggregation level (choose Add or Add All).

5. Select the characteristic that you want to restrict and choose the symbol for input help in the
column after Characteristic Restrictions. The dialog box for determining characteristic
restriction appears.

You can choose single values, value ranges and hierarchy nodes or variables as required. You
can also transfer values from the history or from favorites.
You can choose one of the following views to select single values, value ranges and hierarchy
nodes or variables:

● All Values to display all characteristic values

● Search to search for a specific characteristic value or hierarchy node

● Value Range to define value ranges (such as intervals)

● Variables to select or create a variable

● All Nodes to display and select hierarchy nodes

6. In the list of values, select one or more values, value ranges or hierarchy nodes, choose
Insert, and save the relevant selection by choosing OK. The system transfers the relevant settings
to the list of restricted characteristics.

7. You can make further restrictions by choosing Show Enhanced Settings:

○ Changeable upon Execution (determines whether the characteristic restrictions can be
changed at execution)

If you select the Changeable upon Execution option, you can make further settings:

○ Default Value: Choose the symbol in the column after Default Value. The dialog box for
determining the default value appears. Proceed as when restricting the characteristic values.

8. On the Settings tab page, you set the key date.

9. To save the definition of the filter, choose Save.

10. To check that the filter definition is consistent, choose Check.

Even when the check for a filter is not successful, you can save the filter in the Planning Modeler
or Planning Wizard (as in the Query Designer). This allows you to save filters that use
characteristic values that are not yet available and to create these values in the system later. The
system checks the consistency of the filter when it is used, before it is executed.

• Filter: you can provide selections and, based on these selections, load only the relevant data
to the data target.

In the case of a semantic group: if an error occurs during loading, then, based on the objects
selected in the semantic group, the complete sequence of records with the same key is pushed to
the error stack together with the erroneous records. This way the records do not lose their
sequence when you correct the load in the error stack and load the data to the data target.


The Semantic Groups setting specifies how you want to build the data packages that are read from
the source (DataSource or InfoProvider). To do this, define key fields. Data records that have the
same key are combined in a single data package.

This setting is only relevant for DataStore objects with data fields that are overwritten. This
setting also defines the key fields for the error stack. By defining the key for the error stack, you
ensure that the data can be updated in the target in the correct order once the incorrect data
records have been corrected.

Filters: Using filters, you can run multiple data transfer processes with disjunctive selection
conditions to efficiently transfer small sets of data from a source into one or more targets,
instead of transferring large volumes of data. The filter thus restricts the amount of data to be
copied and works like the selections in the InfoPackage. You can specify single values, multiple
selections, intervals, selections based on variables, or routines.
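
As a rough illustration (the field names and the three selections below are invented, not DTP configuration), disjunctive filters split one source into non-overlapping slices so that each data transfer process moves only a small set of data:

```python
# Illustrative only: three "DTPs" with disjunctive (non-overlapping) filters
# on company code, each transferring a small slice of the source.

source = [
    {"comp_code": "1000", "amount": 10},
    {"comp_code": "2000", "amount": 20},
    {"comp_code": "3000", "amount": 30},
]

dtp_filters = {
    "DTP_EUROPE":  lambda r: r["comp_code"] == "1000",
    "DTP_AMERICA": lambda r: r["comp_code"] == "2000",
    "DTP_ASIA":    lambda r: r["comp_code"] == "3000",
}

for dtp_name, selection in dtp_filters.items():
    package = [r for r in source if selection(r)]   # filter works like InfoPackage selections
    print(dtp_name, package)                        # each DTP loads only its own slice
```

Because the selections are disjoint, no record is transferred by more than one DTP.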

3.
Semantic groups are used for error handling when loading into a DSO.

For example:

When you run the DTP, suppose the DSO already contains the value 700 for a key, and the delta
contains two records for that key: first 800, which has an error, and then 900. The record 800
does not go to the DSO; it goes to the error stack. The further record 900 goes to the DSO and
overwrites 700. After you correct the error in the error stack and run the error DTP, 800 is
loaded and overwrites 900. Finally we have 800 instead of 900.

So, to avoid this, we use the semantic group option.

If any error occurs, the system also takes the further records that have the same key field
combination to the error stack. After you correct the first record, all records go to the DSO in
the correct order.
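
The effect described above can be simulated with a short sketch (purely illustrative Python, not DTP code, with hypothetical key and value fields): without a semantic group, only the faulty record goes to the error stack and a later record with the same key overtakes it; with a semantic group, all subsequent records with the same key are held back as well.

```python
def load(records, dso, use_semantic_group):
    """Simulate a DTP load into a DSO with overwrite and an error stack."""
    error_stack, blocked_keys = [], set()
    for rec in records:
        key = rec["key"]
        if rec.get("error") or (use_semantic_group and key in blocked_keys):
            error_stack.append(rec)
            blocked_keys.add(key)      # with semantic groups, hold back this key
        else:
            dso[key] = rec["value"]    # overwrite
    return error_stack

delta = [{"key": "K1", "value": 800, "error": True},   # erroneous record
         {"key": "K1", "value": 900}]                  # later record, same key

# Without semantic group: 900 overwrites 700, then the corrected 800 overwrites 900.
dso = {"K1": 700}
stack = load(delta, dso, use_semantic_group=False)
for rec in stack:                                      # error DTP after correction
    dso[rec["key"]] = rec["value"]
print(dso)                                             # {'K1': 800}  -> wrong final value

# With semantic group: both records wait in the error stack and load in order.
dso = {"K1": 700}
stack = load(delta, dso, use_semantic_group=True)
for rec in stack:
    dso[rec["key"]] = rec["value"]
print(dso)                                             # {'K1': 900}  -> correct final value
```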

4………………………

Filter:

The Filter restricts the amount of data to be copied and works like the selections in the
InfoPackage. You can specify single values, multiple selections, intervals, selections based
on variables, or routines. Choose Change Selection to change the list of InfoObjects that can
be selected.
This means that data transfer processes (DTPs) with disjunctive selection conditions can
efficiently transfer small sets of data from a source into one or more targets, instead of
transferring large volumes of data.

Semantic Group:

Use Semantic Groups to specify how you want to build the data packages that are read from the
source (DataSource or InfoProvider). To do this, define key fields. Data records that have
the same key are combined in a single data package.
5.

Transitive Attributes as Navigation Attributes


Use

If a characteristic was included in an InfoCube as a navigation attribute, it can be used for
navigating in queries. This characteristic can itself have further navigation attributes, called
transitive attributes. These attributes are not automatically available for navigation in the
query. As described in this procedure, they must be switched on.

An InfoCube contains InfoObject 0COSTCENTER (cost center). This InfoObject has navigation
attribute 0COMP_CODE (company code). This characteristic in turn has navigation attribute
0COMPANY (company for the company code). In this case, 0COMPANY is a transitive attribute
that you can switch on as a navigation attribute.
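
The following sketch (plain Python dictionaries standing in for the master data tables, with invented keys) shows why 0COMPANY is transitive: it can only be reached through a two-step lookup, which is why it is not automatically available for navigation.

```python
# Master data tables as plain dictionaries (illustrative stand-ins).
# 0COSTCENTER has navigation attribute 0COMP_CODE;
# 0COMP_CODE in turn has navigation attribute 0COMPANY.
costcenter_md = {"CC100": {"0COMP_CODE": "1000"}}
comp_code_md  = {"1000":  {"0COMPANY": "C_GLOBAL"}}

def transitive_company(costcenter):
    """0COMPANY is only reachable through two lookups -> a transitive attribute."""
    comp_code = costcenter_md[costcenter]["0COMP_CODE"]   # first hop: cost center -> company code
    return comp_code_md[comp_code]["0COMPANY"]            # second hop: company code -> company

print(transitive_company("CC100"))   # C_GLOBAL
```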
Procedure

In the following procedure, we assume a simple scenario with InfoCube IC containing
characteristic A, with navigation attribute B and transitive navigation attribute T2, which does
not exist in InfoCube IC as a characteristic. You want to display navigation attribute T2 in the
query.

1. Creating Characteristics

Create a new characteristic dA (denormalized A) which has the transitive attributes requested in
the query as navigation attributes (for example T2) and which has the same technical settings for
the key field as characteristic A.

After creating and saving characteristic dA, go to transaction SE16, select the entry for this
characteristic from table RSDCHA (CHANM = <characteristic name> and OBJVERS = 'M') and
set field CHANAV to 2 and field CHASEL to 4. This renders characteristic dA invisible in
queries. This is not technically necessary, but improves readability in the query definition since
the characteristic does not appear here.

Start transaction RSD1 (InfoObject maintenance) again and activate the characteristic.

2. Including Characteristics in the InfoCube

Include characteristic dA in InfoCube IC. Switch on its navigation attribute T2. The transitive
navigation attributes T2 are now available in the query.

3. Modifying Transformation Rules

Now modify the transformation rules for InfoCube IC so that the newly included characteristic
dA is calculated in exactly the same way as the existing characteristic A. The values of A and dA
in the InfoCube must be identical.
4. Creating InfoSources

Create a new InfoSource. Assign the DataSource of characteristic A to the InfoSource.

5. Loading Data

Technical explanation of the load process:

The DataSource of characteristic A must fill the master data table of characteristic A as well
as that of characteristic dA. In this example the DataSource delivers key field A and attribute B.
A and B must be updated to the master data table of characteristic A.

A is also updated to the master data table of dA (namely in field dA) and B is only used to
determine transitive attribute T2, which is read from the updated master data table of
characteristic B and written to the master data table of characteristic dA.

Since the values of attribute T2 are copied to the master data table of characteristic dA, this
results in the following dependency, which must be taken into consideration during modeling:

If a record of characteristic A changes, it is transferred from the source system when it is
uploaded into the BI system. If a record of characteristic B changes, it is likewise transferred
from the source system when it is uploaded into the BI system. However, since attribute T2 of
characteristic B is read and copied when characteristic A is uploaded, a data record of
characteristic A might not be transferred to the BI system during a delta upload of characteristic
A because it has not changed, even though the transitively dependent attribute T2 has changed for
this record. In that case the attribute would not be updated for dA.

The structure of a scenario for loading data depends on whether or not the extractor of
DataSource A is delta enabled.
Loading process:

1. Scenario for non-delta-enabled extractor

If the extractor for DataSource A is not delta enabled, the data is updated to the two different
InfoProviders (master data table of characteristics A and dA) using an InfoSource and two
different transformation rules.

2. Scenario for delta-enabled extractor

If the extractor is delta-enabled, a DataStore object is used from which you can always execute a
full update into the master data table of characteristic dA. With this solution the data is also
updated to two different InfoProviders (the master data table of characteristic A and a new
DataStore object that has the same structure as characteristic A) in a delta update, using a new
InfoSource and two different transformation rules. Transformation rules from the DataStore object
are also used to write the master data table of characteristic dA with a full update.
For both solutions, the transformation rules into the InfoProvider master data table of
characteristic dA must cause attribute T2 to be read. For complicated scenarios in which you
read across several levels, function modules that perform this read are used.

It is better for the coding for reading the transitive attributes (in the transformation rules) if you
include the attributes to be read in the InfoSource right from the beginning. This means that you
only have transformation rules that perform one-to-one mapping. The additional attributes that
are included in the InfoSource are not filled in the transfer rules. They are only computed in the
transformation rules in a start routine, which must be created. The advantage of this is that the
coding for reading the attributes (which can be quite complex) is stored in one place in the
transformation rules.
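
A minimal sketch of such a start routine, written in Python rather than ABAP and with invented field names, reads the master data of B once and fills the additional attribute T2 in one place, so that the remaining transformation rules stay one-to-one:

```python
# Illustrative start-routine logic (Python instead of ABAP): before the
# one-to-one transformation rules run, fill attribute T2 for every record
# of characteristic A by reading the master data of characteristic B.

master_data_b = {"B1": {"T2": "T2_VALUE_X"},   # master data table of B (already loaded)
                 "B2": {"T2": "T2_VALUE_Y"}}

source_package = [{"A": "A1", "B": "B1", "T2": None},
                  {"A": "A2", "B": "B2", "T2": None}]

def start_routine(package):
    """Enrich the data package: look up T2 from B's master data in one place."""
    for rec in package:
        rec["T2"] = master_data_b.get(rec["B"], {}).get("T2")
    return package

# After the start routine, plain one-to-one mapping writes A, B and T2
# into the master data table of characteristic dA.
print(start_routine(source_package))
```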

In both cases the order at load time must be adhered to and must be implemented either
organizationally or using a process chain. It is essential that the master data to be read (in our
case the master data of characteristic B) already exists in the master data tables in the system
when the data providing the DataSource of characteristic A is loaded.
Changes to the master data of characteristic B only become visible in A / dA with the next load
into A / dA.

GENERIC DELTA -------- UPPER LIMIT & LOWER LIMIT

………………………………………………………………………………………………………

Delta Load Management Framework Overview


CAF and SAP BW integration supports delta loading for DataSources created by entity and
application service extractor methods. When working with applications with large data
volumes, it is logical to prevent long loading times and unnecessary locks on the database
by only loading new or modified data records into SAP BW.

Features
Generic delta management works as follows:

1. A data request is combined with particular selection criteria in an InfoPackage and is to be
extracted in delta mode.

2. The request is sent to the source system and then received by the SAPI (service application
programming interface) request broker.

3. The generic delta management is initiated before the data request is transferred to the extractor
corresponding to the DataSource. It enhances the selection criteria of the request in accordance
with the update mode of the request. If the delta-relevant field is a timestamp, the system adds a
time interval to the selection criteria: the lower limit is taken from the last extraction, and the
upper limit is taken from the current date and time of the application server (SY-DATE, SY-TIME)
minus a safety margin. A sketch of this interval computation follows the list below.

4. The enhanced request is transferred to the extractor. The update mode is ‘translated’ by the
generic delta management into selection criteria. For this reason, the update mode is first set to
full.
5. At the end of the extraction, the system informs generic delta management that the pointer can
now be set to the upper limit of the previously returned interval.
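
The enhancement of the selection criteria for a timestamp delta (step 3 above) can be sketched as follows; the function and variable names are illustrative assumptions, since the real logic runs in the SAPI on the source system:

```python
from datetime import datetime, timedelta

def delta_selection_interval(last_pointer: datetime, now: datetime,
                             safety_margin: timedelta = timedelta(minutes=5)):
    """Build the timestamp interval that is added to the delta request.

    Lower limit: the pointer saved at the end of the previous extraction.
    Upper limit: the current time of the application server minus a safety margin.
    """
    return last_pointer, now - safety_margin

last_pointer = datetime(2024, 1, 15, 12, 0, 0)    # set at the end of the last delta (step 5)
now = datetime(2024, 1, 15, 12, 30, 0)
lower, upper = delta_selection_interval(last_pointer, now)
print(lower.time(), "->", upper.time())           # 12:00:00 -> 12:25:00
# After the extraction, the pointer is moved to the upper limit (12:25:00).
```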

You can find a description of this transfer process in the figure below.

Structure
Delta Fields

The delta-relevant field of the extract structure meets one of the following criteria:

● Field type is timestamp.

○ New records that are to be loaded into BW using a delta upload each have a higher entry in
this field than records that have already been loaded.

○ The same criteria apply to new records as in the case of a timestamp field.

● Field type is not timestamp. This case is only supported for SAP Content
DataSources. At the start of delta extraction, the maximum value to be read must be
returned using a DataSource-specific exit.
You can use special fields to achieve more reliable delta loading from different source
systems. They are integrated into the delta management framework. They are:

● Safety interval upper limit

● Safety interval lower limit

Safety Interval Upper Limit

The upper limit for safety interval contains the difference between the current highest value
at the time of the delta or initial delta extraction and the data that has actually been read. If
this value is initial, records that are created during extraction cannot be
extracted.

A timestamp is used to determine the delta value. The timestamp that was read last stands at
12:00:00. The next data extraction begins at 12:30:00. The selection interval is therefore
12:00:00 to 12:30:00. At the end of the extraction, the pointer is set to
12:30:00.

Meanwhile, a transaction record is created at 12:25 but not saved until 12:35. As a result, it is
not contained in the extracted data, and because its timestamp lies before the new pointer, the
record is also not included in the subsequent extraction.

To avoid this discrepancy, the safety margin between read and transferred data must always be
longer than the maximum time the creation of a record for this DataSource can take (for
timestamp deltas), or a sufficiently large interval (for deltas using a serial number).

Safety Interval Lower Limit

The lower limit for the safety interval contains the value that needs to be subtracted from the
highest value of the previous extraction to obtain the lowest value of the following extraction.

A timestamp is used to determine the delta. The master data is extracted. Only images taken after
the extraction are transferred and overwrite the status in BW. Therefore, with such data, a record
can be extracted more than once into BW without too much difficulty.

Taking this into account, the current timestamp can always be used as the upper limit in an
extraction and the lower limit of the subsequent extraction does not immediately follow on from
the upper limit of the previous one. Instead, it takes a value corresponding to this upper limit
minus a safety margin.

This safety interval needs to be sufficiently large so that all values that already contain a
timestamp at the time of the last extraction, but which have not yet been read (see type 1), are
now contained in the extraction. This implies that some records are transferred twice. However,
for the reasons outlined previously, this is irrelevant.

You should not fill the safety interval fields with an additive delta update, as duplicate
records will invariably lead to incorrect data.
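
Both safety intervals can be pictured with a small sketch (illustrative Python with hypothetical margins): the upper safety interval keeps the pointer below records that are still being saved, and the lower safety interval re-reads a slice of the previous extraction, which is harmless for overwriting data.

```python
from datetime import datetime, timedelta

def selection_interval(previous_upper_limit: datetime, now: datetime,
                       upper_safety: timedelta = timedelta(minutes=15),
                       lower_safety: timedelta = timedelta(minutes=5)):
    """Widen the delta interval by the two safety intervals.

    Upper limit: current time minus the upper safety interval, so the pointer is
    never moved past records whose creation and saving are still in progress.
    Lower limit: previous upper limit minus the lower safety interval, so records
    near the boundary are read again (harmless when the data overwrites in BW).
    """
    return previous_upper_limit - lower_safety, now - upper_safety

# Delta extraction at 12:30; the previous delta ended with an upper limit of 12:00.
lower, upper = selection_interval(datetime(2024, 1, 15, 12, 0),
                                  datetime(2024, 1, 15, 12, 30))
print(lower.time(), "->", upper.time())   # 11:55:00 -> 12:15:00

# A record created at 12:25 but only saved at 12:35 carries the timestamp 12:25,
# which lies above the new pointer (12:15), so the next delta still picks it up.
# Records between 11:55 and 12:00 are extracted twice, which is acceptable for
# non-additive (overwriting) deltas but not for additive ones.
```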

It is not necessary to set safety intervals for DataSources used in CAF and SAP BW integration.
