
My Sketchpad

Essbase ASO Performance Tuning and Overview

William Lesmana, February 22, 2014
ASO cubes have certain restrictions, but they are the best choice if you have huge
dimensions. An ASO cube aggregates very fast, primarily due to the fundamental
architectural differences between ASO and BSO. At a high level, these are the ASO
properties one should know:
1. ASO cannot be used to load data at non-level-0 members; ASO accepts data
only at the lowest level.
2. ASO does not support calculations in stored members of non-Accounts
dimensions.
3. Each non-Accounts dimension in an ASO cube can be one of 3 hierarchy types:
Stored, Dynamic, or Multiple Hierarchies.
4. A Stored hierarchy is like a normal hierarchy in BSO, with the major
difference that the same member cannot be shared more than once within the
hierarchy. Also, non-level-0 members cannot be shared within a stored
hierarchy, and a stored hierarchy does not support calculations on its members.
5. A Dynamic hierarchy on a non-Accounts dimension has all the properties of a
dimension in a BSO cube, with the major difference that upper-level member
values are computed dynamically (during data retrieval). Calculated members
are supported in this hierarchy.
6. A Multiple Hierarchies dimension can contain both stored and dynamic
hierarchies, but it must have at least one stored hierarchy.
7. ASO data loads are typically more flexible than BSO data loads. ASO supports
the concept of load buffers, which can perform addition, subtraction, etc. of data
coming in from multiple data sources in memory.
8. There is no need to identify sparse and dense dimensions.
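The load-buffer flow described in point 7 can be sketched in MaxL. The application/database name ASOSamp.Sample and the file names here are placeholders, not from the original post:

```
/* initialize a load buffer that combines duplicate cells by addition */
alter database ASOSamp.Sample initialize load_buffer with buffer_id 1
  resource_usage 0.5 property aggregate_sum;

/* load several source files into the same in-memory buffer */
import database ASOSamp.Sample data
  from data_file 'file1.txt' to load_buffer with buffer_id 1
  on error abort;
import database ASOSamp.Sample data
  from data_file 'file2.txt' to load_buffer with buffer_id 1
  on error abort;

/* commit the combined contents of the buffer to the cube */
import database ASOSamp.Sample data
  from load_buffer with buffer_id 1 on error abort;
```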
Outline Fragmentation
ASO has a quirk behaviour when it comes to outline maintenance: if you
amend/delete the outline members periodically, the outline keeps growing bigger and
BIGGER.! This will impact the retrieval, and subsequent maintenance jobs
performance. To rectify this, in my implementations I usually add the following
scheduled steps in the monthly outline maintenance job using a temp application:
Note:
Real app: This is the live production Essbase ASO cube
Temp app: Temporary cube that is used for maintenance process
Empty outline: can be saved in any server directory, this contains empty dimensions
of Real app with minimum required members. The idea is to re-build the app
everytime from scratch to prevent the outline fragmentation.
Steps:
1. Copy an empty outline of the Real app to the Temp app
2. Do your dimension build process in the Temp app
3. Switch the Real app and Temp app by using the essmsh rename command
4. Now your Temp app has become the Real app, with its new dimension members
and without fragmentation!

The steps above also help system availability by minimizing the downtime
required to update the application.
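A minimal MaxL sketch of the swap, assuming hypothetical application names RealApp and TempApp:

```
/* after the dimension build in TempApp has finished,
   swap the two applications by renaming */
alter application RealApp rename to RealApp_old;
alter application TempApp rename to RealApp;

/* optionally discard the fragmented old copy */
drop application RealApp_old cascade;
```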
Compression
A compression dimension is not mandatory, but it helps performance. The compression
dimension is a dynamic dimension, and it should form the column headers in a data
load file. Ideal compression is achieved when the leaf-level member count is evenly
divisible by 16.
Accounts
Accounts is a dynamic dimension that allows non-additive unary operators
(minus signs in a structure still make a hierarchy dynamic). The only reason to make a
dimension the Accounts dimension in ASO is time balancing. Expense flags are
accomplished through UDAs and member formulas.
Time
Time can be a good candidate for the compression dimension. It should be stored; use
multiple hierarchies if formulas are necessary. Prior to 9.3.1, to-date calculations are
best performed in the Time dimension. In 9.3.1 and later, use a View dimension with
to-date members like MTD, QTD, and YTD.
Metadata vs. data
Don't evaluate data when a metadata check will suffice. For
instance, IIF(Is([Scenario].CurrentMember, [Actual]), …) is faster than
IIF([Scenario].CurrentMember = [Actual], …) because the latter IIF actually compares values.
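As a sketch, with illustrative measure names:

```
/* metadata check: tests member identity only */
IIF(Is([Scenario].CurrentMember, [Actual]), [Sales], 0)

/* data check: forces the engine to compare values, which is slower */
IIF([Scenario].CurrentMember = [Actual], [Sales], 0)
```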
MDX Optimization
Don't use the MDX Round() function: rounding is a formatting concern for the
reporting tool, not the database. Remove CurrentMember where possible, because that's
what's already being calculated. Use LastPeriods() instead of Lag() when working with a
range of Time periods. Don't use a function where a direct reference will do
(call out specific members instead of functions, for instance). Only
perform a calculation when data to support the math exists [i.e., start off with CASE
WHEN NOT IsEmpty()]. In 11.1.1, there is a new NONEMPTYMEMBER directive to
calculate only when data exists.
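Both patterns can be sketched in a hypothetical member formula; the Units and Price members are illustrative:

```
/* guard the math with an emptiness check */
CASE WHEN NOT IsEmpty([Units]) THEN
    [Units] * [Price]
END

/* 11.1.1+: the NONEMPTYMEMBER directive at the top of a formula
   tells Essbase to evaluate it only where Units has data */
NONEMPTYMEMBER [Units]
[Units] * [Price]
```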
Aggregation View
You can turn Query Hints (Level Usage for Aggregation) on for specific members
(on the member information tab in the member properties in EAS). You can also specify
a specific level intersection to materialize via EAS. Both types of Query Hints can
only be set through EAS. Reportedly, only 1,024 level intersections can be
materialized, a limit I've never seen anyone come close to.
Slices
Slices are the primary feature enabling Excel Lock & Send and trickle-feed
functionality. They create subcubes alongside the primary slice of the database;
dynamic aggregations are performed across the necessary slices to provide query results.


Data load
Data should be loaded as additive values (instead of replace, even on an empty
database). Multiple buffers can be used to load the database in parallel; this requires
simultaneous MaxL processes. Ignore zeros and missing values whenever possible
(a buffer setting that has been available since 9.0).
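A sketch of a parallel load with two buffers and the ignore-missing/ignore-zero buffer properties; the app, database, and file names are placeholders, and the load_buffer_array commit assumes a reasonably recent Essbase release:

```
/* MaxL session 1: buffer that drops #Missing and zero cells */
alter database ASOSamp.Sample initialize load_buffer with buffer_id 1
  property ignore_missing_values, ignore_zero_values;
import database ASOSamp.Sample data
  from data_file 'east.txt' to load_buffer with buffer_id 1
  on error abort;

/* MaxL session 2, running concurrently in a second process */
alter database ASOSamp.Sample initialize load_buffer with buffer_id 2
  property ignore_missing_values, ignore_zero_values;
import database ASOSamp.Sample data
  from data_file 'west.txt' to load_buffer with buffer_id 2
  on error abort;

/* when both sessions finish, commit the buffers in one operation */
import database ASOSamp.Sample data
  from load_buffer_array with buffer_id (1, 2) on error abort;
```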
Cache
Tune your cache buffer settings in the application for optimal retrieval
performance.
Good luck!
This entry was posted in Work Stuffs and tagged Oracle, Hyperion, Essbase,
ASO, Performance, Outline, Fragmentation. Bookmark the permalink.


Hyperion Essbase calculation time performance improvement

sri b asked May 11, 2009 | Replies (5)

I have around 3 GB of data. I was loading the level-0 data and the current month's data.
Data loading was taking around 1 hour, but the calculation took 3 days. I was using these settings:
Dimensions: 8 (dense: 2, sparse: 6)
Data file cache: 100 MB
Data cache: 10 MB
Index cache: 100 MB
Using this script for the calculation:
SET AGGMISSG ON;
SET CALCPARALLEL 4;
SET UPDATECALC OFF;
SET MSG SUMMARY;
SET MSG DETAIL;
CALC ALL;
I also tried this script:
SET AGGMISSG ON;
SET CALCPARALLEL 4;
SET UPDATECALC OFF;
SET MSG SUMMARY;
SET MSG DETAIL;
AGG (s1, s2, s3, s4); /* the sparse dimensions */
As I was using dynamic calc on all upper-level members of the dense dimensions, I was
calculating only the sparse dimensions. I don't have any formulas on the sparse
dimensions, hence the AGG command. Let me know how I can improve the calculation time.
5 Replies

serqet replied May 11, 2009

I suggest:
1) Comment out SET MSG DETAIL; it adds overhead by writing to the app log file for every
block that is calculated.
2) Change SET CALCPARALLEL to 3 (how many processors do you have?).
3) Make the calculation utilize the cache; try using SET CACHE ALL;
3a) What are your cache size settings in the essbase.cfg file? Check them.
4) Note from the Tech Reference: when a dimension contains fewer than six consolidation
levels, AGG is typically faster than CALC; conversely, CALC is usually faster on dimensions
with six or more levels. How many levels do you have in your sparse dimensions?
5) What are your other two sparse dims?
6) As for your day-long data load for only 3 GB of data: how is your data file ordered?
It should be d1, d2, s1, s2, s3, s4, s5, s6 for maximum performance.

serqet replied May 11, 2009

Apologies about item 6. I mis-read the post re: data load time.

mugundh72 replied May 11, 2009

1. First of all, set up your essbase.cfg file with the following settings:
CALCCACHE TRUE|FALSE
CALCCACHEHIGH | CALCCACHEDEFAULT | CALCCACHELOW
Intelligent Calculation: UPDATECALC TRUE|FALSE
Also set AGENTDELAY and NETDELAY.
2. CALC ALL calculates and aggregates the entire database based on the database
outline. Instead of using CALC ALL, try calculating with
CALC DIM (dim1, dim2);
or
AGG (dim3, dim4);
Use CALC DIM to calculate the dense dimensions and AGG for the sparse dimensions.
3. Make sure your outline follows the hourglass model.
Let me know how it goes.
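The settings in point 1 could look like this in essbase.cfg; the values below are illustrative examples only, not recommendations from the reply:

```
; calculator cache (sizes in bytes)
CALCCACHE TRUE
CALCCACHEHIGH 200000000
CALCCACHEDEFAULT 10000000
CALCCACHELOW 5000000

; intelligent calculation on/off
UPDATECALC FALSE

; agent/network tuning
AGENTDELAY 50
NETDELAY 500
```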

idiot4dogs replied May 12, 2009

You may want to tune your cache settings.
The index cache should be set equal to the combined size of all essn.ind files, if possible;
if you can't do that, set it as high as possible. Do not set the index cache higher than the
combined size of all essn.ind files, or you will just consume more memory than needed,
which can slow down processes.
The data file cache only pertains to direct I/O.
The data cache should be set to 0.125 times the combined size of all essn.pag files.
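A worked example of the sizing rule (the file sizes and app name are hypothetical): if the essn.pag files total 3 GB (3072 MB), the data cache comes out to 0.125 × 3072 MB = 384 MB, which can be applied in MaxL:

```
/* set caches for a hypothetical Sample.Basic database */
alter database Sample.Basic set data_cache_size 384MB;
alter database Sample.Basic set index_cache_size 100MB;
```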


FredThePianist replied May 13, 2009

Hi Harsha,
You could perform the calculation in two different ways.
First, do an export/import of level-0 data; your index and page files will be reduced
before your aggregation calc script runs.
Second, if you need to keep the upper-level data for previous months, I suggest you FIX
on the current month and clear the upper-level data for that month only. This way, you
avoid wasting time in your aggregation calc scripts.
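The second approach could be sketched as a calc script; the month member is a placeholder, and the sparse dimension names are taken from the question:

```
/* clear only the current month's upper-level data,
   then re-aggregate the sparse dimensions for that month */
FIX ("Jun")
    CLEARBLOCK UPPER;
    AGG (s1, s2, s3, s4);
ENDFIX
```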
