
Demantra Performance Solutions - Real World Problems and Solutions [ID 1081962.1]

Modified 17-NOV-2010 Type WHITE PAPER Status PUBLISHED

In this Document
Abstract
Document History
Demantra Performance Solutions - Real World Problems and Solutions
Summary

Applies to:

Oracle Demantra Demand Management - Version: 7.0.2 to 7.3.0 - Release: 7 to 7.3.0


Information in this document applies to any platform.

Abstract

Purpose
-------
To guide the participant through real-world performance solutions, maintenance, and setup.
The problems below are actual Demantra performance issues as reported from the field and
end-user customer communities. At the end of this document you will find two helpful guides
for reporting your performance issues to Oracle Support.

- Performance Problem? Save Time, Gather the Required Data Before Contacting Oracle Support
- Steps to take before logging a performance SR

Also, please see the Demantra Performance Best Practices Guide, Doc ID 1081936.1.

Document History

Author:
Create Date 07-Apr-2010
Update Date 07-Apr-2010
Expire Date 07-Apr-2013 (ignore after this date)

Demantra Performance Solutions - Real World Problems and Solutions

==============================================================================================================================
Performance Solutions
==============================================================================================================================

===============================================================
The source RDBMS server
===============================================================

Source Database Objects to Maintain
====================================
Determine the analyze status of the source objects. This is not an exhaustive list. You may add to this SQL based on need.

select table_name, last_analyzed
  from dba_tables
 where table_name in ('OE_ORDER_LINES_ALL', 'OE_ORDER_HEADERS_ALL',
                      'MTL_SYSTEM_ITEMS_B', 'HZ_CUST_SITE_USES_ALL',
                      'HZ_CUST_ACCOUNTS');
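If LAST_ANALYZED is stale or the sample size was small, statistics can be refreshed before
collections run. A minimal sketch, assuming the ONT schema owns the order tables (owner and
sampling choices are assumptions; adjust per your instance):

BEGIN
  -- Refresh optimizer statistics on one heavily collected source table.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'ONT',                        -- assumed schema owner
    tabname          => 'OE_ORDER_LINES_ALL',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle pick the sample
    cascade          => TRUE);                        -- include the indexes
END;
/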

Adjustments at the Source Instance
==================================
We are attempting to collect booked / shipped items from the source but noticed that the
collection process is performing full table scans. The reason for the full table scan on the
huge OE_ORDER_LINES_ALL table is that we are trying to collect all the order history within
the given date range, but there is no seeded index defined on the date columns in the order
lines table.

Solution at the Source
----------------------
The history data is brought in today for three different types of dates:

1. Booked Date (booked_date column in OE_ORDER_HEADERS_ALL)


2. Requested Date (request_date column in OE_ORDER_LINES_ALL)
3. Shipped Date (actual_shipment_date column in OE_ORDER_LINES_ALL)

The shipment and booking history program tries to get all data from the OE_ORDER_LINES_ALL
table for the given date ranges. Since there are currently no indexes on the date columns,
it has no option but to do a FULL TABLE SCAN.

Indexes can be created at the source instance, to be used specifically for the Demantra data
collection process. The index(es) required will depend upon the collection parameters to the
Shipment and Booking History concurrent program. Which streams do you plan to collect into Demantra?


Let us say for example, you want to collect -


1. Booking History - Booked Items - Requested Date
2. Shipment History - Shipped Items - Requested Date

Then in this case, only one index (on request_date column) needs to be created.

However, if you want to collect -


1. Booking History - Booked Items - Requested Date
2. Shipment History - Shipped Items - Shipped Date

Then in this case, two indexes, one each for request_date and actual_shipment_date columns need to be created.

Depending upon the streams that you intend to collect into Demantra you will have to define either 1 or 2
indexes on OE_ORDER_LINES_ALL.

The custom index(es) can be made passive, meaning they are needed only while running the collection
program. When not needed, these indexes can be disabled.

Note: The decision on the maintenance of the custom index is the responsibility of the customer.

It is up to the customer whether the index is present all the time or only during collection.
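As an illustration, a hedged sketch of such a passive index (the index name is an
assumption; OE_ORDER_LINES_ALL is normally owned by ONT):

CREATE INDEX xxdem_oe_lines_req_date
  ON ont.oe_order_lines_all (request_date);

-- Disable it outside the collection window (DML then skips index maintenance;
-- this relies on skip_unusable_indexes = TRUE, the default on 10g and later)...
ALTER INDEX xxdem_oe_lines_req_date UNUSABLE;

-- ...and rebuild it just before the next collection run.
ALTER INDEX xxdem_oe_lines_req_date REBUILD;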

===============================================================
The Demantra RDBMS server
===============================================================

Statistics / Chained Rows Quick Check
=====================================
This SQL will reveal whether key tables have up-to-date statistics and/or chained rows.

- select * from user_tables;

As a rule the number in the Chain_Cnt column should not be higher than 5% of the number found in the
Num_Rows column.

Note: The objects we will concentrate on are the Sales_Data and Mdp_Matrix tables, although others are certainly important.

If it is higher than 5%, this indicates fragmentation of the table and requires a rebuild of those objects. We are also going to look at the Last_Analyzed column
as well as the Sample_Size column (the latter as a percentage of the number in the Num_Rows column).

Note that a Chain_Cnt of 0 for a large table like Sales_Data either represents a very efficiently laid-out
tablespace or, more likely, that the sample size used to compute statistics was not large enough to discover the presence of chained rows.
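A hedged example of the quick check described above, restricted to the two key tables:

select table_name, num_rows, chain_cnt, last_analyzed, sample_size,
       round(chain_cnt / nullif(num_rows, 0) * 100, 2) as chain_pct
  from user_tables
 where table_name in ('SALES_DATA', 'MDP_MATRIX');

Rows where chain_pct exceeds 5 are candidates for a rebuild, per the rule above.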

Redo Log Management
===================
You can greatly improve DB performance for both the engine and Collaborator by reducing redo logging when creating temporary tables. These are tables
that do not require recovery after a DB server crash.

A simple, but not 100% effective solution in Oracle is to add the keyword NOLOGGING when creating the table:

create table T nologging as select * from T2;

A more effective solution is to use a separate tablespace for temporary tables, and to create the tablespace
without logging enabled.
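A minimal sketch of that approach (tablespace name, datafile path, and size are assumptions):

CREATE TABLESPACE ts_dem_nolog
  DATAFILE '/u01/oradata/demantra/ts_dem_nolog01.dbf' SIZE 2G
  NOLOGGING;

-- Temporary working tables created in this tablespace default to NOLOGGING.
CREATE TABLE t NOLOGGING TABLESPACE ts_dem_nolog
  AS SELECT * FROM t2;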

I/O Stats Generated by AWR
==========================
Additional review of the AWR report uncovered a serious I/O issue on the customer's database:

Tablespace Av Rd(ms)
------------------- -----------
TS_MDP_MATRIX 75213.66
TS_SALES_DATA 211230.12
TS_PROMOTION_DATA 204440.64
TEMP 10.49
TS_PROMOTION_DATA_X 209627.96
TS_SALES_DATA_X 181928.17
SYSTEM 178143.95
UNDOTBS1 158710.24
TS_DP 204306.89
SYSAUX 131322.26
TS_MDP_MATRIX_X 93844.24
TS_DP_X 139546.01
TS_MANUALS_X 162708.28
TS_MANUALS 262186.11
TS_SALES_DATA_ENGINE 0
TS_SALES_DATA_ENGINE_X 50
TS_SIM 30
TS_SIM_X 0
USERS 50


The I/O stats have to be investigated from a hardware perspective. The normal range
should be 5-10 ms, nowhere near 100,000 ms.

To put this into perspective: if the average read time from TS_MDP_MATRIX is 75213.66 ms,
every read takes about 75 seconds (75213.66 / 1000), that is, 1 minute and 15 seconds!

We used the same SQL from above (Cartesian product) on a local schema and it
returned in 200 ms; PROMOTION_DATA has 25 million rows and MDP_MATRIX 1.2 million.
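If AWR is not at hand, a hedged approximation of the same per-tablespace read latencies can be
pulled from the dynamic performance views (cumulative since instance startup; READTIM is in
centiseconds, hence the multiplication by 10 to get milliseconds):

select ts.name as tablespace_name,
       sum(fs.phyrds) as physical_reads,
       round(sum(fs.readtim) * 10 / nullif(sum(fs.phyrds), 0), 2) as avg_rd_ms
  from v$filestat fs, v$datafile df, v$tablespace ts
 where fs.file# = df.file#
   and df.ts#   = ts.ts#
 group by ts.name
 order by avg_rd_ms desc;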

Reordering Columns in Oracle for Performance, Rebuilding Tables by Primary Key
------------------------------------------------------------------------------
The problem is that the data in a critical table (MDP_MATRIX, SALES_DATA, PROMOTION_DATA) is not physically stored in the database in Primary Key order.
DEV is investigating building a tool for this issue.

Database I/O: To reduce system I/O you may need to rebuild the tables by
primary key. To determine whether a rebuild is needed, run the SQL below.

This SQL reports the out-of-sequence ratio of two big tables of the system that
are heavily used by the worksheets. These queries take time to run, so run them
in a test environment first and gauge the timing before executing in Production.

- Please see Reordering Columns in Oracle For Demantra Performance Improvement, Document 1085012.1.

*Note: If it takes hours then we need to run this in Production during off hours.

You can determine just how "out-of-order" a table is by using the SQL further below.

If you decide to reorder / rebuild the table, the steps look like this (a hedged SQL sketch follows the list):
1. Create a new table structure exactly like the old table.
2. INSERT INTO new_table SELECT * FROM old_table ORDER BY primary_key_fields;
3. Create new indexes on the new table.
4. Rename the old table and its indexes to keep them as a backup
5. Rename the new table and indexes to use the official names.
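A hedged SQL sketch of those steps for SALES_DATA (the PK column list shown is an assumption;
verify it, and recreate grants, triggers, and constraints before swapping names):

-- 1. Empty copy of the structure.
CREATE TABLE sales_data_new AS
  SELECT * FROM sales_data WHERE 1 = 0;

-- 2. Reload in primary key order (direct-path insert).
INSERT /*+ APPEND */ INTO sales_data_new
  SELECT * FROM sales_data
   ORDER BY item_id, location_id, sales_date;  -- assumed PK column order
COMMIT;

-- 3. Recreate the indexes on sales_data_new here.

-- 4./5. Keep the old table as a backup and swap the names.
ALTER TABLE sales_data RENAME TO sales_data_old;
ALTER TABLE sales_data_new RENAME TO sales_data;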

You can determine how much you will save by executing the following:

-- Replace
-- <TABLE> with the table name in question. This should be submitted once for each heavily used table.
-- <KEY COLUMNS> with a list of the primary key column names, in the order that they appear in the PK.

SELECT (ROUND(((SELECT COUNT(*) AS CNT
                  FROM (SELECT <KEY COLUMNS>
                              ,RELATIVE_FNO
                              ,BLOCK_NUMBER
                              ,ROW_NUMBER
                              ,DATA_ROW
                              ,(LAG(DATA_ROW) OVER (PARTITION BY RELATIVE_FNO, BLOCK_NUMBER ORDER BY ROW_NUMBER)) AS PREV_DATA_ROW
                          FROM (SELECT <KEY COLUMNS>
                                      ,RELATIVE_FNO
                                      ,BLOCK_NUMBER
                                      ,ROW_NUMBER
                                      ,(DENSE_RANK() OVER (PARTITION BY RELATIVE_FNO, BLOCK_NUMBER ORDER BY <KEY COLUMNS>)) AS DATA_ROW
                                  FROM (SELECT <KEY COLUMNS>
                                              ,DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) AS RELATIVE_FNO
                                              ,DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS BLOCK_NUMBER
                                              ,DBMS_ROWID.ROWID_ROW_NUMBER(ROWID) AS ROW_NUMBER
                                          FROM <TABLE>
                                       ) C
                               ) B
                       ) A
                 WHERE DATA_ROW != PREV_DATA_ROW
                   AND DATA_ROW != PREV_DATA_ROW + 1)
               /
               (SELECT COUNT(*) FROM <TABLE>)), 3) * 100) AS "Out Of Order Ratio %"
  FROM DUAL;
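For example, for SALES_DATA the <KEY COLUMNS> placeholder would typically expand to the item,
location, and date key columns (item_id, location_id, sales_date in a standard schema; verify
against the actual primary key definition before running).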

Note: Complete instructions are contained in the note Reordering Table Columns for Maximum
Performance, created for the 4-Nov-2009 webcast. This is found at the Demantra forum.

Using Materialized Views
------------------------
The Demantra Export Integration Interfaces have the option of being created as a regular View
or a Materialized View.


- The Materialized View might not work depending on the specific series expressions that you use.

- You need to choose between the regular View and the Materialized View based on your usage patterns.

- If you do many exports with little data change then the Materialized Views may be faster.

We are working on guidelines and procedures. For now, it is more important to determine whether
your implementation is a candidate. When we release greater detail, the notice will be posted in
the Demantra Forum:

http://myforums.oracle.com/jive3/forum.jspa?forumID=1414

===============================================================
The Demantra Parameter Settings
===============================================================

Parameters That Impact Worksheet Performance
============================================
At one customer site we made several parameter changes that resulted in a positive performance impact:

1. threadpool.query_run.per_user=16 (seems optimal in some environments and yields the fastest
response time for paged worksheets)

2. worksheet.full.load=1 (used when using cross tab worksheets)

3. client.worksheet.calcSummaryExpressions=0

4. Removed the custom time levels, then retest.

The settings of threadpool.query_run.per_user and worksheet.full.load depend on the hardware
in use and the expected usage of the system. Let us say that you have 8 CPUs in your DB server.
That means you probably want at most 8 to 16 DB sessions running at the same time, so you can
set threadpool.query_run.size=16.

If you typically have 4 users running worksheets at the exact same time (not staring at the
screen, but actually RUNNING the worksheets), then you should set threadpool.query_run.per_user=4
(which is 16/4).

*Note that 4 simultaneous users running worksheets is probably equal to 30 or 40 users logged in.
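Putting the arithmetic above together, an illustrative (not generally recommended)
AppServer.properties fragment for an 8-CPU DB server with roughly 4 truly concurrent
worksheet users:

# Total DB sessions the worksheet thread pool may open (about 2x CPUs here).
threadpool.query_run.size=16
# Threads per user: pool size divided by expected concurrent runners (16/4).
threadpool.query_run.per_user=4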

UseParallelExportHint
=====================
I cannot make the new functionality UseParallelExportHint work correctly when testing Demantra processes.

- To obtain this feature, apply patch 6379666 in a TEST instance.
- The version_details_history should show a record for 7.1.1 7110006.

The customer ran the integration interface but it created the view without the hints. Is there
a profile that needs to be set to add the hint?

Steps to configure the new parameter:

1. Open the Business Modeler application.
2. Select from the menu: Parameters -> System Parameters.
3. In the second tab, 'System', change the parameter 'UseParallelExportHint' from Value=0 to Value=1.
4. Restart the application server, then run the integration again.

Customer confirmed that it now works after enabling the parameter 'UseParallelExportHint'.
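You can also verify the setting directly in the database; the parameter lives in the same
SYS_PARAMS table used later in this note:

select pname, pval from sys_params where pname = 'UseParallelExportHint';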

WORKSHEET PERFORMANCE and MaxAvailableFilterMembers
===================================================
Two approaches to worksheet filter quantity control.

1) Use 'Open With' instead of enabling extra filters. This method can make sense, and it also
eliminates clicks on the user side.

2) MaxAvailableFilterMembers specifies the maximum number of members that can be retrieved in the
worksheet filter screen. The maximum number of selected filter members is configured by
adjusting the value of MaxSqlInExpressionTokens in the AppServer.properties file. The web server must be restarted after modifying AppServer.properties.

There have been a number of requests to obsolete this parameter and make the selection of
MaxAvailableFilterMembers automatic.

- Some customers do not want a restriction on the number of filter members.
- Setting MaxAvailableFilterMembers too high affects performance.
- Setting it too low limits the number of filter members.

Prior to 7.2, MaxAvailableFilterMembers could not be set higher than 1000. 7.2 and above provide
the ability to set higher limits using the following procedure:

To implement the solution, please execute the following steps:

1. select pname, pval from sys_params where pname = 'MaxAvailableFilterMembers';


2. Based on the number of members available, in this case 3000, update the parameter accordingly:

UPDATE SYS_PARAMS
SET PVAL = '3000'
WHERE PNAME = 'MaxAvailableFilterMembers';
COMMIT;

3. Bounce the apps server.

===============================================================
The Demantra Worksheet
===============================================================

Crosstab Performance Recommendations
====================================
The worksheet is defined with more than just levels; there is no simple answer here.

- The crosstab includes series (number of columns), date aggregation and range (rows per combination)
and aggregation levels.

- The total amount of memory a worksheet consumes (the number of cells in it) comes from all
three factors, and more. You will need to balance them so the crosstab does not have too many cells.

- So the answer is complex. In most cases I would suggest having as many levels as you can in the page items and not more than 2-4 levels in the crosstab.

- One common mistake is having levels in the crosstab that are mainly descriptive. In this case a more appropriate approach is to create these as level
attributes and present them as series, thus saving the need to process levels in the crosstab.

In summary, the number of members seems to have little or no effect on loading times. Applying loads of filters to decrease the number of members will still lead to
load times of several minutes, or perhaps an out-of-memory error.

* The number of rows in combination with order/amount of levels in the crosstab does have a very large
impact on performance.

Steps to reproduce
------------------
Open a member which has more than about 14 rows in a certain worksheet and you will receive an OutOfMemoryError. Alternatively, if you have increased the memory
amount in the JRE parameters, you might not receive the OutOfMemoryError, but loading will still take a very long time.

Very poor worksheet performance is also connected to worksheet design. In this case there were 10 levels in the crosstab. Excel-like designs can also suffer from
performance problems; see more below.

Server Expressions Tips
=======================
The majority of the performance issues are caused by inefficient definitions of several server expressions.
These include:

1. Series which call the GET_MAX_DATE application function, which executes selects on the SYS_PARAMS table for each expression on each row being
aggregated. At one customer site, replacing it with a constant value reduced run time by 25%.

2. GL series with EXTRA_FROM and EXTRA_WHERE clauses that include the ITEMS/LOCATION tables. These should reference the MDP_MATRIX table instead, which is already
included in the select, to avoid adding extra tables. At one customer site, replacing them with MDP_MATRIX reduced run time by 8%.

3. Server expression complexity can be too high, including repetition of the same columns using the NVL function several times. Verify your server expression(s).

Customer Case: VERY POOR CROSSTAB PERFORMANCE
==============================================
We have a worksheet with 330 combinations, and we are receiving sporadic out of memory
errors: java.lang.OutOfMemoryError: Java heap space.

When we open a member which has more than about 14 rows in a certain worksheet, we receive
an OutOfMemoryError. Alternatively, if we increase the memory amount in the JRE parameters,
we might not get the OutOfMemoryError, but loading still takes a very long time.

Changing the following in AppServer.properties:

- client.worksheet.calcSummaryExpressions=1 to client.worksheet.calcSummaryExpressions=0

brings 90-150 second load times down to 12-13 seconds.

This means the client takes a relatively long time to render a 3-second database query that
returns very few rows. Plus we are still receiving many out of memory errors, which leave the
user hanging.

Is this the best performance that can be achieved in 7.2?

Answer
------
In this case, poor worksheet performance is mostly caused by your worksheet design.

You have 10 levels in the crosstab. We had a few sessions with the implementation team.
The customer uses Excel sheets to support the current process and wanted to use Demantra as
Excel, mimicking the Excel sheets' behavior in Demantra. The problem was that they had 10 or
more levels and wanted to put them all in the crosstab.


We explained the differences between Demantra and Excel. We explained the memory limitations and
the alternatives:

- Use the page area rather than having all levels in the crosstab.
- Show different parts of the data embedded in the worksheet rather than in the main worksheet.
- Present some of the level information as a level attribute shown as a series, not as a level.
- Use 'Open With' and do not load the full data set all the time.

BLE PERFORMANCE ISSUE, Client Expression INSERTING RECORDS INTO SALES_DATA
==========================================================================
Working with Demantra development, we found that an adjustment to the client expression solved this performance issue.

- We have changed the client expressions in the BLE to not update the series unless the final value
of the series is greater than zero.

- This prevents the generation of empty (zero) rows during BLE execution.

Performance Issue Caused by Excessive Combinations
==================================================

SYMPTOM
-------
- When we attempt to open the worksheet with a relatively small volume of data, the performance is bad.

- The worksheet continues to display the message 'Loading' but never comes up.

- We also enabled the Java console and re-tested to determine if there were any error messages.

- After re-testing, the following error message was displayed, then everything froze.

Exception in thread "Get_Meta_Data_Thread" java.lang.OutOfMemoryError: Java heap space
    at sun.nio.cs.StreamEncoder.write(Unknown Source)
    at java.io.OutputStreamWriter.write(Unknown Source)
    at java.io.Writer.write(Unknown Source)
    at org.apache.log4j.helpers.QuietWriter.write(QuietWriter.java:39)
    at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:292)
    at org.apache.log4j.WriterAppender.append(WriterAppender.java:150)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:221)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:57)
    at org.apache.log4j.Category.callAppenders(Category.java:187)
    at org.apache.log4j.Category.forcedLog(Category.java:372)
    at org.apache.log4j.Category.error(Category.java:286)
    at com.demantra.partner.client.model.query.QueryModel$GetMetaDataThread.run(QueryModel.java:927)

Exception in thread "Image Animator 3" java.lang.OutOfMemoryError: Java heap space
    at sun.awt.image.GifImageDecoder.readImage(Unknown Source)
    at sun.awt.image.GifImageDecoder.produceImage(Unknown Source)
    at sun.awt.image.InputStreamImageSource.doFetch(Unknown Source)
    at sun.awt.image.ImageFetcher.fetchloop(Unknown Source)
    at sun.awt.image.ImageFetcher.run(Unknown Source)

Exception in thread "Keep_Session_Alive_Thread" java.lang.OutOfMemoryError: Java heap space
    at sun.net.www.http.ChunkedInputStream.<init>(Unknown Source)

SOLUTION
--------
From the Java client log:

Demand Planner-Dependent Demand by Area loaded, contains 66605 combinations.

66,000 combinations is a HUGE amount of data. You will need to redesign this worksheet so that it pulls
less data. Using filters is a good place to start.

Rolling Update Performance Impact
=================================

Question
--------
Is there any way to speed up the rolling update? As we have increased the volume of data, the runtime of the rolling update has increased and is now one of the main
bottlenecks.

Answer
------
We acknowledge that the rolling update is a bottleneck and does not scale well. We are currently rewriting the entire rolling update code to make it
faster and more scalable. The new procedure will start to roll out to the different versions in the near future.

Be Aware of your Workstation Memory Capabilities
================================================

Problem
-------
We tried to open a worksheet and it caused an OutOfMemory error. It has 71,000
combinations. Other worksheets have 19,000 combinations, which is still too many,
but those worked without error.

I know that we need to reconfigure this worksheet in order to lower the amount of
data being retrieved but is there a method to increase the available memory?

Solution
--------
Increase the available memory via the Java control panel. Before setting the -Xmx parameter
in the Java control panel, ensure that all of the applications running Java on that machine,
and of course the browser, are closed. Make the change for ALL known Java environments.

Test Case
---------
I had two desktop workstations, one with 1 GB of memory and the other with 3 GB. I was experiencing
a hang in Internet Explorer. There were no error messages, but I suspected a memory issue.

On the 1 GB desktop, after updating the JRE with '-Xmx512m', Internet Explorer still hung, or I received
'Java Runtime Environment cannot be loaded.' At least now I had evidence of a Java issue.

On the 3 GB desktop, set to '-Xmx512m', it works fine.

Conclusion
-------------
Since I succeeded in configuring the memory parameter on one machine, it is clear that this is not
an application issue. Also, multiple Java installations on the client machine force you to manage
them separately to avoid collisions. This can be managed using the library_path.

Tracking Java Plugin Memory Consumption
=======================================
There are two values we are usually interested in: Total memory and Used memory

Total Memory
------------
- Is the total memory allocated to the JVM by the OS for application objects.
- This is what you see in the Windows Task Manager under the Memory tab.
- This can grow and shrink as the application runs, and is controlled by the JVM.
- Actually more memory is used by the garbage collector, but this is usually not important for us.

Used Memory
-----------
- Is a subset of the Total Memory that is actually used for live objects that were not collected
by the garbage collector.

There are also two major JVM parameters that can be used to control the Total memory:

-Xmx : Max memory that the JVM will ask for from the OS. (For example: -Xmx256M)
-Xms : The starting Total memory when the application starts up.

If you want to know which parameters are currently configured you can press 's' in the Java console, and look up these parameters.

- It is important to understand that the JVM sometimes reaches the Max memory although this could have been avoided if garbage collection had run.
- So the Total Memory value matters to us mainly in the sense that when it is reached and the application needs more memory, an OutOfMemoryError occurs.
- This terminates the application.
- If you see the Total Memory growing, it does not mean you have a leak; it might be that garbage collection has not been called in a while.

In order to know the actual memory usage at runtime (a basic tool for finding memory leaks), use the Java console commands:

'm' - Prints Total memory and Free memory. Total - Free = Used.
'g' - Asks the JVM to garbage collect (equivalent to a System.gc() call). This is only a suggestion to the JVM, but it is usually honored. It also prints the same data as
'm' after the collection completes.

- So in order to check the current memory of the client application, press 'g' three times; this will invoke a full
garbage collection.

- Then calculate Total Memory - Free Memory = Used Memory. The actual memory consumption
percentage is thus: (Used Memory / Max Memory) * 100.
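For example (illustrative numbers): if after the collections the console reports Total memory =
256 MB and Free memory = 96 MB, then Used Memory = 256 - 96 = 160 MB; with -Xmx512m, consumption
is (160 / 512) * 100, or roughly 31%.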

Note: Sometimes the Plug-in Console Window is not accessible from the System Tray. In such a case it is possible to view the plug-in log file directly. Go to the
$USER_HOME/Application Data/Sun/Java/Deployment/log directory and open the pluginXXX_XX.trace file that matches your Java version.

Improve Audit Trail Performance by ADDING Indexes (7.3.0 Only)
==============================================================
It has been reported from the field that the performance of the audit trail functionality can be greatly improved by adding the following indexes.

*Note: Please attempt in TEST first.

CREATE INDEX AUDIT_DATA_IDX4 ON AUDIT_DATA
  (AUDIT_ID);

CREATE INDEX AUDIT_DATA_1_IDX ON AUDIT_DATA
  (TABLE_ID, KEY_FIELD_ID1, AUDIT_ID);

CREATE INDEX AUDIT_DATA_2_IDX ON AUDIT_DATA
  (TABLE_ID, KEY_FIELD_ID1, KEY_FIELD_ID2, AUDIT_ID);

CREATE INDEX AUDIT_DATA_3_IDX ON AUDIT_DATA
  (TABLE_ID, KEY_FIELD_ID1, KEY_FIELD_ID2, KEY_FIELD_ID3, AUDIT_ID);

CREATE INDEX AUDIT_VALUES_1_IDX ON AUDIT_VALUES
  (AUDIT_ID, SALES_DATE, TO_SALES_DATE);

CREATE INDEX AUDIT_FILTERS_IDX ON AUDIT_FILTERS
  (AUDIT_ID);

Level Management / Worksheet Speed
==================================

Symptoms
--------
Displaying levels in the page item section of the worksheet. I am testing the query redesign where I
put the sub-category in the page area.

- The first sub-category came back in 1 minute (this was encouraging).
- The second sub-category took 7 minutes to come back after I clicked; it now takes 8 minutes to see 2 sub-categories where it previously took 14 to see all 7.
- The third one came back in 2 minutes. At this point I would expect clicking between the options to
be quick, but there is always a delay.
- The 4th option came back in 5 minutes.
- The 5th option took 5 minutes.

Possible Options
----------------
- Caching the worksheet.
The customer does not want to consider this option since they want to view the data in real time.

- Tuning of AppServer parameters, the database, Disk I/O, database objects

Change the following values in the AppServer.properties file:

- query_run.per_user = 12
- query_run.size = 60

The following database parameters were set:


- Cursor_sharing = Force
- Optimizer_index_cost_adj= 30
- optimizer_index_caching = 80
- db_file_multiblock_read_count = 16
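A hedged sketch of applying those settings (spfile assumed; the optimizer_index_* changes affect
every statement on the instance, so test first):

ALTER SYSTEM SET cursor_sharing = FORCE SCOPE=BOTH;
ALTER SYSTEM SET optimizer_index_cost_adj = 30 SCOPE=BOTH;
ALTER SYSTEM SET optimizer_index_caching = 80 SCOPE=BOTH;
ALTER SYSTEM SET db_file_multiblock_read_count = 16 SCOPE=BOTH;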

* The db_block_size parameter recommendation: increase from 8 KB to 16 KB or even 32 KB,
for example from 8192 to 16384. You will need to rebuild the database.

Hardware
- Increase the number of disks in the RAID array. For example, if your current configuration has only
3 stripes (4-1), add 3 more disks making it 6 stripes (7-1). This should greatly improve I/O performance.

- If you are at 32 bits, switch to 64 bits and double the buffer pool memory allocation.

==========================
Integration Multithreading
==========================
There will be additional information regarding this feature available soon. The way the worksheets make use of parallelism is by running multiple threads in the Java
application server, not in the database. You can configure this in AppServer.properties or, in 7.3.0, in APP_PARAMS.

For example, if the following are set:

threadpool.query_run.size=40
threadpool.query_run.per_user=4

- A single user running a worksheet query will have 4 parallel threads accessing the database at the same time. All users together are limited to 40 threads.

Parameter Setting
-----------------
- There are no valid general recommendations available. Each implementation is different.
- The settings for the parameters depend on the number of concurrent users, the number of concurrent batch jobs and their nature, the database hardware configuration,
and more.
- Each setting listed below has a description and configuration rules that need to be addressed per implementation.

# Maximum size of the Query Run Thread pool. If this value is missing or is negative,
# the query run execution mechanism will not use threads.
threadpool.query_run.size=40

# The number of threads that a single user can use.
# If the query run thread pool size is negative, this value is meaningless.
threadpool.query_run.per_user=4

# Number of parallel manual update processes the app server handles at a time.
threadpool.update.data.manual.size=8

# Number of parallel manual update tables the app server handles per process.
threadpool.update.table.manual.size=2

# Number of parallel manual update combinations the app server handles per table.
threadpool.update.comb.manual.size=2

# Number of parallel manual update records the app server handles per combination.
threadpool.update.record.manual.size=2

# Number of parallel batch (Integration/BLE) update processes the app server handles at a time.
threadpool.update.data.batch.size=2

# Number of parallel batch (Integration/BLE) update tables the app server handles per process.
threadpool.update.table.batch.size=2

# Number of parallel batch (Integration/BLE) update combinations the app server handles per table.
threadpool.update.comb.batch.size=2

# Number of parallel batch (Integration/BLE) update records the app server handles per combination.
threadpool.update.record.batch.size=2

# Support for the parallel integration procedure.
# Max number of parallel update threads.
# Default threads = 5 (number of DB server CPUs + 1)
MaxUpdateThreads=5

Worksheet Layout and Performance
================================
Worksheet performance is very much connected to the layout that the user chooses for
the worksheet. In this case they were able to improve performance by reducing the amount of
data displayed. By making the "Product Category" level a dropdown they prevented the
system from loading all of the product categories. The user picks the product category of
interest, and data is loaded for only that category.

Demantra worksheets contain aggregations of data across different dimensions. Careful worksheet
design is needed to ensure that loading/running a worksheet does not access millions of data rows.

===============================================================
The Desktop Workstation / Client
===============================================================

JRE parameters
- add details

Java Specific Logging
---------------------
This should be used when investigating issues on the client. If you are logging an SR with
a client issue, please supply the Java console output.

To better understand client issues we ask that you turn on logging:

1) Edit ClientLogConf.lcf from web_app_root/portal on the app server.

2) Logout from the client and close the browser.

3) Restart the browser and Login again.

4) Then re-test and send us the Java console output (from the client).

More Java Logging
-----------------
1) Open the file ClientLogConf.lcf (under the 'portal' folder).

2) Set the following categories to 'DEBUG' mode:

log4j.category.tunnel.general=DEBUG
log4j.category.dpweb.connection=DEBUG

3) Add the following categories to this file, also set to 'DEBUG' mode:

log4j.category.org.apache.commons.httpclient.HttpMethodBase=DEBUG
log4j.category.org.apache.commons.httpclient=DEBUG
log4j.category.httpclient.wire.header=DEBUG

4) After you experience the disconnecting issue, please examine the client Java console + JavaConsole.log files.

===============================================================
General
===============================================================

Hardware Management - Performance Problem
=========================================

The hardware configuration is as follows:

Demantra Database

- IBM 3650 2-way 3.0 GHz dual-core server
- 8 GB RAM
- (2) 146 GB 10k drives
- (4) 73 GB 15k drives
- RAID controller

Demantra App / Analytical Engines

- IBM 3850 M2 4-way 2.4 GHz dual-core server
- 48 GB RAM
- (4) 146 GB 10k drives
- RAID controller

I reviewed the options outlined in performance white paper Trouble Shooting Demantra Worksheet Performance
<<470852.1>>, available on My Oracle Support. My performance problem still exists.

Performance Discovery Details
-----------------------------
- 99.4% of the time went to "db file sequential read" (reading through indexes).

- The disk I/O is also slow. We are used to seeing 3-5 ms wait times, but we are
experiencing 16-19 ms on the TS_SALES_DATA tablespace.

- 90% of the physical disk reads are on the SALES_DATA table.

- We could ease the I/O bottleneck by dedicating more memory to the buffer pool, but that will be hard to do in 32 bits. Currently the buffer pool is 1432 MB. By
doubling it, we may achieve a 40% I/O improvement.

- Data fragmentation: Rebuilding sales_data and mdp_matrix improved the SQL run time. Please confirm.

SOLUTION:
1. Reorganize SALES_DATA

2. Change the block size to 16k or even 32k (currently it is still 8k).

3. Switch to 64 bits and double the buffer pool memory allocation.

4. Increase the number of disks in the RAID-5 array. Your current configuration only has 3 stripes (4-1).
Adding 3 more disks will make it 6 stripes (7-1) and should greatly improve I/O performance.

This resolved the performance problem.

Middle Tier
===========
We recommend at least 512 MB for the application server Java.

For Tomcat server please make sure to add in system environment variables this parameter:
- Name: JAVA_OPTS
- Value: -Xmx512m

Performance Problem? Save Time, Gather the Required Data Before Contacting Oracle Support
==========================================================================================
In addition to the data dump file, we ask you to collect performance statistics using AWR reports.
This should cover two processes (worksheet first run, worksheet rerun).

Please perform the following steps:


1. Start AWR process.
2. Run the specific worksheet.
3. Save this AWR report.
4. Start new AWR process.
5. Rerun the worksheet.
6. Save this second AWR report.
7. Provide us these two AWR reports.

- In order to get an accurate picture we'd like a short time period. This can be done in Enterprise Manager by clicking:

Server > AWR Baselines > Create > Single > Baseline Name: Demantra Time Range

Specify Start and Stop times in the future, say 10 minutes apart. Then run the operation in question
during the time range specified.
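If you prefer SQL*Plus to Enterprise Manager, a hedged equivalent is to bracket each worksheet
run with manual AWR snapshots and report between them:

-- Take a snapshot immediately before the worksheet run...
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- ...run the worksheet, then take a second snapshot.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- Generate the report between the two snapshot IDs:
@?/rdbms/admin/awrrpt.sql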

Steps to take before logging a performance SR
---------------------------------------------
1) Provide answers from the Demantra Performance Questionnaire, Demantra - Performance Questionnaire for 4-Nov-2009 Webcast located at the Demantra forum.

2) Review note <<738503.1>> How to set the Java Plugin Heap Size in response to an OutofMemory error and in general for best performance

3) Review note <<863025.1>> How to Analyze Demantra Forecast Engine Performance (Processing Time) Issues / Demantra Engine is Slow

4) Review note <<867238.1>> How to run a Query Servlet Report to help diagnose Demantra Worksheet Performance bottlenecks

5) Please supply the record counts of the main tables on the instance, using the queries below:
select count(*) from t_ep_item;
select count(*) from t_ep_ebs_cpn_code;
select count(*) from mdp_matrix;
select count(*) from sales_data;
select count(*) from t_ep_site;
select count(*) from t_ep_organization;

6) A vital part of performance diagnostics is the environment and parameter settings. At the database server:
- Provide up-to-date init parameters
- OS version
- Is it a dedicated machine or is it running additional applications?
- List the processes running at the time of this performance issue.

Appserver machine detailed information:


- OS version
- Is it a dedicated machine or is it running additional applications?

FOR IMPORT-SPECIFIC PERFORMANCE ISSUES, PROVIDE THE ADDITIONAL ITEMS BELOW:

7) Collaborator.log

8) Integration.log

9) A copy of the data that was used for the loading, taken prior to loading.

10) A copy of the _ERR table taken once the import is done to reflect any
records that had errors and did not go through the import.

11) A record of the start time and end time of the import (these should be server-side times
so that we can match them to the log timing).

** It is also very important to make sure that no other medium/heavy process is executed on
the database or the application server during the import, so that we are truly measuring the
import runtime unimpacted by unrelated processes.

Performance Notes for your Reference:
=====================================
<<863025.1>> How to Analyze Demantra Forecast Engine Performance (Processing Time) Issues / Demantra Engine is Slow
<<860576.1>> Rebuild_Schema procedure - One of the Keys to Improved Demantra performance
<<470852.1>> Trouble Shooting Demantra Worksheet Performance
<<738503.1>> How to set the Java Plugin Heap Size in response to an OutofMemory error and in general for best performance
<<867238.1>> How to run a Query Servlet Report to help diagnose Demantra Worksheet Performance bottlenecks

Summary

In summary, pointed questions and comments are welcome.

Related

Products

More Applications > Value Chain Planning > Oracle Demantra > Oracle Demantra Demand Management

Keywords

TABLESPACE; PERFORMANCE PROBLEMS; PERFORMANCE; DB_BLOCK_SIZE; ORACLE DEMANTRA; DEMANTRA; PERFORMANCE STATISTICS;
MEMORY USAGE
Errors

JAVA HEAP SPACE
