1. I want to aggregate data in the source file; how can this be done using rule files?
2. The @PRIOR, @ATTRIBUTEVAL and @XREF functions
3. Difference between ESSCMD and MaxL
4. What is the ideal block density?
5. Alias tables
6. Any idea about the .sec file?
7. If the .sec file is corrupt, how do you restore it?
8. Where is the information for substitution variables stored?
9. alter system resync (what will it do?)
10. Can we use a calc script for two databases in one application?
11. Where are the structures of business rules and calc scripts stored?
12. If you click Retrieve in the Excel add-in, what is the mechanism it uses internally to retrieve data?
The main Essbase optimization checklist (for block storage cubes)
Block size
A large block size means bigger chunks of data are pulled into memory with
each read, but it may also mean more of the data you need is already in memory,
provided your operations are done mainly in-block. Generally I prefer smaller
block sizes, but there is no specific guide. The Essbase Admin Guide says blocks
should be between 1 and 100 KB in size, but nowadays, with more memory on
servers, this can be larger. My experience is to keep blocks below 50 KB but not
less than 1-2 KB, though this all depends on the actual data density in the cube.
Do not be afraid to experiment with dense and sparse settings to reach the
optimal block size. I have done numerous cubes with just one dense dimension
(typically a large account dimension), and cubes where neither the account nor
the time dimension is dense. You will know you have a good block size by looking
at the next point, block density.
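When experimenting with dense and sparse settings, it helps to compute the resulting block size up front rather than loading data first. A stored cell in a block storage cube is an 8-byte double, so block size is 8 bytes times the product of the stored members of each dense dimension. A minimal sketch; the member counts below are illustrative, not from a real outline:

```python
# Estimate an Essbase block storage block size from the dense dimensions.
# Block size (bytes) = 8 * product of stored members of each dense dimension.

def block_size_bytes(dense_stored_members):
    """dense_stored_members: stored-member counts, one per dense dimension."""
    size = 8  # each stored cell is an 8-byte double
    for count in dense_stored_members:
        size *= count
    return size

# Hypothetical outline: Accounts dense with 200 stored members,
# Time dense with 12 stored members.
size = block_size_bytes([200, 12])
print(size)         # 19200 bytes
print(size / 1024)  # 18.75 KB -- inside the preferred sub-50 KB range
```

If the result falls outside the 1-50 KB range you are aiming for, move a dimension between dense and sparse and recompute before rebuilding the cube.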
Block density
This gives an indication of the average percentage of each block that contains
data. Data is generally sparse, so a value over 1% is actually quite good. If
your block density is over 5%, your dense/sparse settings are generally spot-on.
Check this whenever you change dense and sparse settings, in conjunction with
block size, to see if your settings are optimal. A large block with high density
is OK, but a large block with very low density (< 1%) is not.
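The density figure is simply the populated share of a block's cells, averaged across blocks, which makes it easy to sanity-check against the 1% and 5% thresholds above. A small sketch with illustrative numbers:

```python
# Block density = percentage of cells in a block that hold data.
def block_density(nonmissing_cells, cells_per_block):
    return 100.0 * nonmissing_cells / cells_per_block

# Hypothetical block of 2400 cells averaging 72 populated cells.
density = block_density(72, 2400)
print(density)              # 3.0
print(density > 1.0)        # True -> acceptable for typically sparse data
print(density > 5.0)        # False -> dense/sparse split could still improve
```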
Cache settings
Never, ever leave a cube with the default cache settings. Clients often
complain about Essbase performance, and sure enough, when I look at the cache
settings, they are still the defaults. This is never enough (except for a very
basic cube). The rule of thumb here is to see if you can fit the entire index
file into the index cache, and to make the data cache three times the index
cache, or at least some significant size. Also check your cube statistics for
the hit ratio on the index and data caches; this indicates what percentage of
the time the data being searched for is found in memory. For the index cache
this should be as close to 1 as possible; for the data cache, as high as possible.
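The two rules of thumb above (hold the whole index file in the index cache, and size the data cache at roughly three times that) can be turned into a quick starting-point calculation. A sketch under those assumptions; the 40 MB index file size is illustrative:

```python
# Starting-point cache sizes from the rules of thumb:
#   index cache >= size of the .ind index file
#   data cache  ~  3x the index cache
def suggested_caches(index_file_kb):
    index_cache_kb = index_file_kb      # fit the entire index in memory
    data_cache_kb = 3 * index_cache_kb  # rule of thumb: 3x the index cache
    return index_cache_kb, data_cache_kb

# Hypothetical cube with a 40 MB index file.
idx_kb, data_kb = suggested_caches(40 * 1024)
print(idx_kb, data_kb)  # 40960 122880
```

Treat these as a floor, not a target: after setting them, watch the index and data cache hit ratios in the cube statistics and increase the caches further if the ratios stay low.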
These are just the initial, general optimization points, which can yield huge
performance improvements without too much effort. In a future post I will look
at more advanced optimization techniques, but these points should handle about
70% of your optimization issues.