Hyperion

Interactive Reporting
11.1.1 Capacity Planning Report
for Windows

An Oracle White Paper
May 2009





Hyperion

Interactive Reporting 11.1.1 Capacity Planning Report for Windows


Executive Overview.................................................................................................................... 3
Introduction................................................................................................................................. 3
Scope............................................................................................................................................. 4
Test Configuration...................................................................................................................... 4
Hardware Setup....................................................................................................................... 4
Hyperion Architecture Layout Details............................................................................ 5
Tuning ...................................................................................................................................... 5
Test Scenario........................................................................................................................... 6
Results........................................................................................................................................... 8
Response Time vs. Load........................................................................................................ 8
Resource vs. Load................................................................................................................. 11
CPU.................................................................................................................................... 11
Memory.............................................................................................................................. 14
Network............................................................................................................................. 16
Disk.................................................................................................................................... 17
Implications for Capacity Planning ........................................................................................ 17
Capacity Planning Tables..................................................................................................... 17
Using the Interactive and Production Reporting Capacity Planning Tables........... 19
Adjusting for Different Test Scenarios......................................................................... 19
When to Cluster/Replicate Services .................................................................................. 20
Tuning Interactive Reporting Systems................................................................................... 20
Heap Sizes.............................................................................................................................. 20
Workspace Web Heap Settings...................................................................................... 21
Workspace Service Heap Settings.................................................................................. 21
Data Access Service Parameters......................................................................................... 21
BIService Parameters ........................................................................................................... 21
Workspace Service Parameters ........................................................................................... 22
Application Server Threads................................................................................................. 22
Resolving Resource Issues ....................................................................................................... 22
Processing Issues .................................................................................................................. 22
Memory Issues ...................................................................................................................... 23
Disk Access Issues................................................................................................................ 24
Conclusion.................................................................................................................................. 24
Appendix A Test Scenario Details ...................................................................................... 25
Contents of Workspace Repository................................................................................... 25
Query Activity ....................................................................................................................... 27
Profile of Simulated Users................................................................................................... 27
Activity Measurements......................................................................................................... 28
Mixed Workload Test ...................................................................................................... 28
Hyperion

Interactive Reporting 11.1.1 Capacity Planning Report for Windows Page 2



EXECUTIVE OVERVIEW
This document summarizes performance testing for Hyperion Interactive Reporting, a component of Oracle Enterprise Performance Management (EPM) System. These tests used EPM version 11.1.1 running on Microsoft Windows Server 2003 on HP ProLiant BladeSystem servers provided by HP. The tests demonstrated the software's ability to support more than 1,000 users on a basic configuration.
The results presented here represent one of several sources of information that are typically used to plan
hardware capacity for a Hyperion Interactive Reporting implementation. This white paper is intended for
preliminary capacity planning use only and is not to be interpreted as a benchmark or general statement of the
performance and scalability characteristics of Hyperion Interactive Reporting software.
INTRODUCTION
Performance testing and capacity planning for enterprise applications such as Oracle's Hyperion Interactive Reporting is a complex task. To demonstrate the ability of Hyperion Interactive Reporting to support large user populations while delivering quick response times on Windows servers, Oracle partnered with HP to conduct large-scale performance tests. The results of these tests demonstrate the scalability of Hyperion Interactive Reporting on Windows servers and provide data that can be used for preliminary sizing estimates.
Hyperion Interactive Reporting connects business users to data and gives them a complete set of tools to support
business decisions including ad hoc client/server querying, reporting, and analysis in one application. Hyperion
Interactive Reporting provides the following capabilities:
o Data extraction and analysis
o Reporting and distribution
o Platform development
Hyperion Interactive Reporting is an all-in-one query, data analysis, and reporting tool. The interface is highly
intuitive and provides an easy-to-navigate environment for data exploration and decision-making. With a
consistent design paradigm for query, pivot, charting, and reporting, users at any level move fluidly through
cascading dashboards to find answers quickly. Trends and anomalies are automatically highlighted, and robust
formatting tools enable users to easily build free-form, presentation-quality reports for broad-scale publishing
across their organizations.
SCOPE
This paper focuses on the performance testing of Hyperion Interactive Reporting only, and tuning of database and
Web servers is out of its scope. This document covers Hyperion Interactive Reporting installations on Windows
Server 2003 machines only.
TEST CONFIGURATION
Hardware Setup
HP provided HP ProLiant BladeSystem servers and HP Enterprise Virtual Array (EVA) storage at one of its
testing labs. The configuration used for Hyperion Interactive Reporting comprised six servers and two storage
arrays, as described below.

Hostname    Model                Processors                        Memory  Function
HPBLADE4    HP ProLiant BL460c   2 proc, quad-core Xeon 2.66 GHz   16 GB   Apache Server
HPBLADE1    HP ProLiant BL460c   2 proc, quad-core Xeon 3.166 GHz  32 GB   Web Tier
HPBLADE5    HP ProLiant BL460c   2 proc, quad-core Xeon 2.66 GHz   24 GB   Service Tier
HPBLADE11   HP ProLiant BL460c   2 proc, quad-core Xeon 2.66 GHz   16 GB   Shared Services, SunONE LDAP
HPBLADE12   HP ProLiant BL460c   2 proc, quad-core Xeon 2.66 GHz   24 GB   Essbase
HPBLADE13   HP ProLiant BL460c   2 proc, quad-core Xeon 3.166 GHz  32 GB   Oracle
HP SAN_1    EVA4100 (28 disks, 136 GB each, RAID01)
HP SAN_2    HP StorageWorks SB40c (six disks, RAID01)

Hyperion Architecture Layout Details
Apache was installed on one server and configured to load-balance among five instances of the Workspace servlets
on another server. The Workspace Service and five instances of the Hyperion Interactive Reporting services were
configured on a third server. A fourth server hosted the Hyperion Shared Services, Open LDAP, and the SunONE
Directory Server. The Oracle database ran on its own server, as did Essbase. The Oracle and Essbase servers ran
as 64-bit applications; all other processes ran as 32-bit applications. An HP EVA4100 (28 disks, 136 GB each,
RAID01) was used for the SAN in the services tier. An HP StorageWorks SB40c (six disks, RAID01) was used as
SAN storage for Oracle database storage.
Tuning
This section provides information for tuning the Hyperion Interactive Reporting system for best performance with
the test scenario used here.
Apache
o MaxKeepAliveRequests 0
o <IfModule mpm_winnt.c>
ThreadsPerChild 2048
ListenBacklog 2000
</IfModule>
o LogLevel crit

WebLogic/ Workspace Servlets
o Java heap settings: -Xms1200M -Xmx1200M
o Root log level: ERROR

Workspace Services
o Usage Tracking, Harvester services: HOLD
o Number of connections for Services: 3,000
o Java heap settings: -Xms1200M -Xmx1200M
o Root log level: ERROR

SunONE LDAP
o Database cache size: 100 MB
o Entry caches: 100 MB

Oracle RDBMS

Parameter Name Location Value
shared_pool_size spfile/init.ora 300000000
db_cache_size spfile/init.ora 700000000
large_pool_size spfile/init.ora 8388608
open_cursors spfile/init.ora 1000
processes spfile/init.ora 1000
sessions spfile/init.ora 1000
sort_area_size spfile/init.ora 1000000
sort_area_retained_size spfile/init.ora 65536
timed_statistics spfile/init.ora FALSE
db_file_multiblock_read_count spfile/init.ora 4
optimizer_index_cost_adj spfile/init.ora 20
optimizer_index_caching spfile/init.ora 20
Test Scenario
Client loads for Hyperion Interactive Reporting testing were simulated using LoadRunner 8, an HP load-testing
tool. LoadRunner enables users to record browser actions in a file that can then be edited to add think time,
transaction definitions, and parameters for substitution with random values. LoadRunner records not only the
HTTP requests made by the client, but also the data sent to the server. The ability to record network traffic is a
requirement for testing Hyperion Interactive Reporting. Using LoadRunner, Oracle developed a test scenario that
included a list of host computers for clients, the names of the test scripts, and the number of simulated users
running each script. The test scripts were then run repeatedly for each simulated user until the defined test
schedule terminated them.
Three test scripts were created with LoadRunner: one simulating Foundation users who access nonquery content,
another for Hyperion Interactive Reporting Web Client users, and the third for Hyperion Interactive Reporting
thin-client users. Simulated user actions included opening documents and processing queries in Hyperion
Interactive Reporting documents (BQY files). Random think times were included between actions in all scripts.
Three types of think-time intervals were used in the tests. Short think times, 2–15 seconds, were used everywhere
except for actions with BQY files. When working with BQY files, medium think times, 15–30 seconds, were used
between changing sections, and long think times, 120–180 seconds, were used after processing queries. The longer
think times simulated users taking time to view query results and examine charts and reports. File names
were replaced with random variables that selected from a list of file names representing copies of the same file.
This was done to allow random file selection and guarantee the validity of subsequent actions with that file, no
matter which file was randomly chosen. User names used to log into the EPM System were chosen randomly
from 10,000 total registered users, ensuring that many different users logged in during the course of testing.
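As a conceptual sketch only, the parameterization described above can be pictured in Python. This is a stand-in for LoadRunner's parameter substitution, not its actual syntax; the user-name and file-name patterns here are invented for illustration.

```python
import random

# Illustrative stand-in for LoadRunner parameter substitution: each script
# iteration draws a login from the 10,000 registered users and one of the
# identical BQY file copies (names below are assumptions, not the real ones).
USERS = [f"user{n:05d}" for n in range(1, 10_001)]
BQY_COPIES = [f"SampleReport_{n:02d}.bqy" for n in range(1, 21)]

def iteration_params(rng: random.Random):
    """Pick the (login, document) pair for one simulated-user iteration."""
    return rng.choice(USERS), rng.choice(BQY_COPIES)
```

Drawing both values fresh on every iteration is what spreads the load across many distinct logins and guarantees each chosen file supports the scripted actions.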
Additional details of the test scenario appear in Appendix A.

RESULTS
Response Time vs. Load
Response times for logging in to the EPM System and browsing repository folders were quite fast throughout the
test, showing virtually no degradation with increased load, because a large portion of this work was performed on
the Web tier, which was not as heavily used as the services tier.


Web Client response times also were fast throughout the test, because most of the Web Client processing occurs
on the clients and not on the servers.

In contrast, thin-client response times increased with load because most of this work is done on the services tier.
Because thin-client applications place their load on the servers rather than on the clients, the services tier
became quite busy as load increased. The following charts show the effects on thin-client response times for
various transactions.

The calculations performed in the creation of Report and Chart sections are particularly intensive. Response times
for charts are directly proportional to the amount of data included in the charts. Those with more data, such as
Chart2, can be affected significantly as load increases.
Resource vs. Load
CPU
By far, the busiest machine was the services server, which hosted the Workspace Service, the BIService, and the
Data Access Service. That server was 80% busy in the later stages of the test. The other servers were less than
20% busy through most of the test, as illustrated in the chart below.

On the Web tier, Apache and Workspace both peaked at about 150% utilization, or 1.5 CPU cores. The
Workspace CPU time was evenly split among five instances, each running in its own JVM.


Looking closely at the services host, we see that the majority of the CPU time was used by the BIService. There
were five instances of the BIService, and the total CPU time shown here was split equally among them. The
Workspace Service was the second largest consumer of CPU time, followed by the Data Access Service (DAS).
The DAS is responsible for sending SQL queries to the data source when users process queries in their
documents. As a result, the CPU time required for the DAS is related to the percentage of users who process
queries. In our case, 10% of the user population processed queries; the remaining 90% only viewed the data saved
with the report. For implementations expecting a larger percentage of users processing queries, the DAS CPU
requirements will be proportionately larger.


Memory
Memory use on the Web tier was relatively flat for the duration of the test. Workspace memory use was driven by
the Java heap settings, which were set to 1200 MB for the minimum and maximum.


On the services tier, memory for the Workspace Agent and the DAS services was stable. Memory usage for the
BIService, which handles most of the processing for thin-client requests, increased linearly with user load. The
chart below shows the total memory use for all instances of each process. This test included five BIService
processes and five DAS processes.



Network
The Workspace and Services servers were configured with two network cards each, with the first card handling the
smaller incoming requests and the second card handling the larger response data. The Apache server was
configured with only one card; as a result, it was the busiest network card in the system. The Oracle and Essbase
network cards were not busy, as the following chart illustrates.



Disk
Examining disk queues is the best way to measure disk use. Aside from a few short spikes, disk queues were almost
nonexistent, indicating that disk access was not a bottleneck.


IMPLICATIONS FOR CAPACITY PLANNING
The configuration handled more than 1,400 users without fatal errors. Depending on the volumes of data used in
reports, the response times for this number of users may not be acceptable. This system handled 600 users with
virtually no increase in response time when compared to single-user performance. Ignoring the Chart2
transactions, response times remained shorter than 20 seconds through approximately 1,000 users, suggesting that
this system could be acceptable for environments with occasional peak use of up to 1,000 users. Because delays
were almost entirely with thin-client transactions, moving more users to the Web Client would further increase the
capacity of these servers.
Capacity Planning Tables
The Windows capacity planning tables presented here were created to provide initial sizing estimates. They are
based on the resource use presented in the previous section, for the test described in Appendix A. These tables
provide estimates for individual services and not for server hosts. After the services distribution across hosts is
determined, the capacity requirements can be summed for each host.
The following must be considered when using these tables:
o The specific use of a Hyperion Interactive Reporting system has a significant effect on the capacity of
the system. Appendix A: Test Scenario Details describes the testing scenarios and documents used to
produce the capacity estimates presented here. That scenario should be used as a reference to compare
against the anticipated Hyperion Interactive Reporting usage in other implementations so that
appropriate adjustments can be made.
o Memory requirements are highly dependent on the amount of data contained in reports. Reports larger
than those described in Appendix A will require more memory than the capacity planning tables
recommend.
o The data provided in the tables below are based on average use for each load level. Peak periods of use
may require additional resources.
o Except as noted in the tables, CPU estimates are based on quad-core Intel Xeon processors operating
at a 2.66 GHz clock speed. If older processors with slower speeds and/or single cores are used,
additional CPUs may be needed.
o Each instance of a process has some initial memory footprint to load the software into memory.
Therefore, the total memory for multiple instances may be higher than for a single instance servicing the
same load.
Table 1: Processor Capacity Planning Table*

                     1–100 Users  100–500 Users  500–750 Users  750–1000 Users  1000–1500 Users
                     CPU Cores    CPU Cores      CPU Cores      CPU Cores       CPU Cores
Workspace Service    0.1–0.3      0.2–0.9        0.7–1.4        1.2–1.8         1.5–2.5
BIService            0.3–0.7      0.5–2.3        1.8–3.5        3.0–5.0         4.0–7.0
DAS                  0.1–0.2      0.2–0.4        0.3–0.6        0.5–0.8         0.7–1.0
Web Server (Apache)  0.1–0.2      0.1–0.7        0.5–1.0        0.8–1.4         1.0–2.0
Workspace Web        0.1–0.2**    0.1–0.7**      0.5–1.0**      0.8–1.4**       1.0–2.0**
Shared Services      <0.1         <0.1           <0.1           <0.1            <0.1
LDAP Server          <0.1         <0.1           <0.1           <0.1            <0.1

Table 2: Memory Capacity Planning Table*

                     1–100 Users  100–500 Users  500–750 Users  750–1000 Users  1000–1500 Users
                     Memory (MB)  Memory (MB)    Memory (MB)    Memory (MB)     Memory (MB)
Workspace Service    300–500      400–800        700–1000       900–1200        1000–1500
BIService            300–700      500–2200       1900–3500      3000–5000       4500–7500
DAS                  85–100       95–130         110–170        130–200         150–250
Web Server (Apache)  50–70        50–90          80–100         100–120         120–150
Workspace Web        300–700      500–1500       1000–2000      2000–3000       3000–5000
Shared Services      <350         <350           <350           <350            <350
LDAP Server          80–100       80–100         80–100         80–100          100–120

Notes: * Based on the test scenario described in Appendix A: Test Scenario Details.
** CPU estimates for Workspace Web are fractions of a 3.166 GHz Intel Xeon processor core. All other CPU
estimates are fractions of a 2.66 GHz Intel Xeon processor core.

Using the Interactive and Production Reporting Capacity Planning Tables
Based on the tables above, to support 500 active users, the configuration should include approximately eight CPU
cores as follows:
o 0.9 for the Workspace Service with GSM
o 2.3 for the BIService
o 0.4 for the DAS
o 0.7 for the Workspace Web application
o 0.7 for the Apache Web server
o 0.1 for the Shared Services
o 0.1 for the LDAP Server
The total is 5.2 cores. To be conservative, to allow for periods of higher peak use, and to account for
cores/processors usually being installed in pairs, the number should be rounded up to eight cores. The
recommended starting distribution of the services would be across two machines (Web server and servlets on a
two-core machine; Workspace Service, BIService, DAS, Shared Services, and LDAP Server on a six-core machine).
If there is no requirement for a firewall between the Web and services tiers, one eight-core server also would
work well. A general, conservative guideline is that 10% of an organization's user population would be active
on the system at any time (10% of the total users eligible to log on would be logged in). Therefore, the 500
active users here on these eight cores represent a total user population of 5,000 users. If usage increases,
the BIService and DAS can be moved to an additional machine.
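The arithmetic above can be sketched in a few lines. The per-service figures are the Table 1 upper estimates for the 100–500 user column; the final rounding from 5.2 to eight cores is the paper's judgment call for peak headroom, so the function below only encodes the "install cores in pairs" part.

```python
import math

# Per-service core estimates for ~500 active users, taken from Table 1
# (100-500 user column). Values are fractions of 2.66 GHz Xeon cores
# (3.166 GHz for Workspace Web).
CORE_ESTIMATES = {
    "Workspace Service": 0.9,
    "BIService": 2.3,
    "DAS": 0.4,
    "Workspace Web": 0.7,
    "Apache Web server": 0.7,
    "Shared Services": 0.1,
    "LDAP Server": 0.1,
}

def raw_total(estimates):
    """Sum the per-service estimates (5.2 cores for this scenario)."""
    return sum(estimates.values())

def rounded_cores(total):
    """Round up to an even core count, since cores ship in pairs.
    The paper adds further peak headroom (5.2 -> 8) beyond this."""
    cores = math.ceil(total)
    return cores if cores % 2 == 0 else cores + 1
```

Summing first and rounding once avoids the overestimate that comes from rounding each service up individually.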
Adjusting for Different Test Scenarios
The performance achieved in any system will be highly dependent on the specific usage scenario. The percentage
of users processing queries vs. running reports, the size of the result sets, and the complexity of the reports
significantly affect resource usage and response times. Perhaps the most crucial factor is the number of users
running each type of client: the thin client or Web Client. The nature of thin-client applications places much more
load on the servers than does a fat client. In the test reported here, 50% used the thin-client interface. For systems
with a different number of thin client users, the resource usage will change proportionately, primarily on the
services host and specifically for the BIService. Likewise, the Data Access Service resource usage will change in
proportion to the number of users who process queries through both the thin client and Web Client. The
following example illustrates how to adjust resource requirements for different scenarios.
Consider the case of Interactive Reporting Web Client users with no thin-client users. Say that 50% of Web
Client users process queries instead of 10%. The changes to the scenario would be:
1. Web Client use increases from 40% to 90% (10% Foundation users remain).
2. Thin-client use drops from 50% to 0%.
3. Query processing increases from 10% to 50%.
To accommodate each change, these adjustments must be made to the estimates:
1. Do not increase resources for additional Intelligence Web Client users. Moving users from the thin client
to the Web Client transfers work to client machines, away from the servers.
2. Decrease BIService requirements to 0, because there will be no thin-client users.
3. Increase DAS resources by a factor of five.

Using the example of 500 active users from the previous section, the CPU requirements would now be:
o 0.9 for the Workspace Service with GSM
o 0.0 for the BIService
o 2.0 for the DAS
o 0.7 for the Workspace Web application
o 0.7 for the Apache Web server
o 0.1 for the Shared Services
o 0.1 for the LDAP Server
This adds up to 4.5 cores. Rounding up for higher peak periods and installing cores in pairs would increase
this number to six cores. In this new case, the DAS becomes the busiest service instead of the BIService. The
recommended distribution of these six cores is a similar two-box configuration, as in the previous example
(Apache and Workspace Web on a two-core machine, and Workspace Service, BIService, DAS, and Shared Services
on a four-core machine). Again, using the general guideline that 10% of an organization's user population
would be active on the system at any time, this hardware could support a user population of up to 5,000 users
running this scenario.
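The adjustment logic just illustrated can be sketched as follows. This is an illustration, not product guidance: the factors come directly from the scenario changes above (no thin clients, five times as many query processors).

```python
# Baseline per-service core estimates for 500 active users
# (Table 1, 100-500 user column).
BASELINE = {
    "Workspace Service": 0.9,
    "BIService": 2.3,
    "DAS": 0.4,
    "Workspace Web": 0.7,
    "Apache Web server": 0.7,
    "Shared Services": 0.1,
    "LDAP Server": 0.1,
}

def adjust(baseline, bi_factor, das_factor):
    """Scale BIService and DAS estimates for a different usage mix.
    All-Web-Client scenario: bi_factor=0 (no thin clients) and
    das_factor=5 (query processing rises from 10% to 50%)."""
    adjusted = dict(baseline)
    adjusted["BIService"] *= bi_factor
    adjusted["DAS"] *= das_factor
    return adjusted

adjusted = adjust(BASELINE, bi_factor=0.0, das_factor=5.0)
total = sum(adjusted.values())  # 4.5 cores, matching the worked example
```

The same pattern extends to other mixes: scale BIService with the thin-client share and DAS with the query-processing share, leaving the other services at their baseline estimates.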
These two scenarios illustrate the need to adequately estimate the anticipated usage of a system before planning
capacity. The same total number of users can require different, and sometimes drastically different, resources
under different usage scenarios.
When to Cluster/Replicate Services
The primary factors driving replication of services are memory usage (if the service needs more memory than a 32-
bit process can access), and CPU (if not enough processors are on one server). If the Workspace Web application,
BIService, or DAS needs more than 2 GB of virtual memory, and physical memory and CPU cycles are available,
those components may be replicated on a single host. If the host does not have sufficient physical resources, they
can be replicated on additional servers. Portions of the Workspace Service also may be replicated on additional
servers if more resources are required. See installation and administration documentation for additional
information on replicating services.
The number of CPUs and the amount of memory required vary not only with user load, but also with the amount of
data accessed; for thin-client users, this includes the number of calculations performed in each BQY file and
the complexity of charts and pivot tables. Therefore, it is impossible to offer a guideline for the number of
users that can be supported per instance of the services. To determine optimal clustering and replication for an
implementation, testing must be performed with the actual reports and user scenarios expected.
TUNING HYPERION INTERACTIVE REPORTING SYSTEMS
Heap Sizes
Adjusting the Java heap sizes for the Workspace Web servlets and Workspace Service can improve performance
and increase the load that these applications can handle. java.lang.OutOfMemory errors in the logs indicate that
insufficient heap has been allocated.
Because the overall size is effectively limited to 2 GB for 32-bit processes, the recommended maximum heap
should not exceed 1.3 GB, to allow enough space for the JVM itself and for threads to allocate extra space for their
stacks.

If a message indicating that there is insufficient memory to create a new thread accompanies an out-of-memory
error, the heap is too large: not enough process memory is left for nonheap allocations, such as those needed
to create threads. When thread creation fails, the maximum heap size must be reduced.
Workspace Web Heap Settings
The Workspace servlets maintain session information for each user. Consequently, they perform best when
provided with plenty of Java heap. Maximum heap sizes of 1.0 GB may be necessary for several hundred
concurrent active users. Change the heap size for the servlets by adding Java arguments to the application server. If
thousands of concurrent users are required, and a 32-bit application server is selected, more server clones should
be created to provide more total memory for the Workspace servlets. The application server vendor also may
suggest tuning other parameters, such as those associated with garbage collection, for better performance.
The Workspace Web application heap should be adjusted within the application server console or in the
application server startup file. Refer to the application server documentation to determine the correct adjustment
of these settings.
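As an illustration only (the variable name and file location depend on the application server, so verify both against its documentation), heap arguments for the Workspace servlets might be supplied through a startup script along these lines:

```shell
# Hypothetical startup-script fragment: give the servlet JVM a fixed 1 GB
# heap (minimum = maximum avoids heap-resize pauses). On 32-bit JVMs, stay
# under roughly 1.3 GB to leave room for the JVM itself and thread stacks.
JAVA_OPTS="$JAVA_OPTS -Xms1024m -Xmx1024m"
export JAVA_OPTS
```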
Workspace Service Heap Settings
The EPM System Workspace Service probably also will need a heap size adjustment. The maximum heap needed
depends on the peak number of users logged into the system and the number of objects published to the EPM
repository. A good starting value is 768 MB for up to approximately 600 active users. A larger maximum heap may
be needed for larger loads. The minimum heap size also should be increased. Ideally, to avoid performance
penalties for increasing the heap, make the minimum heap size equal the maximum heap size.
The Workspace Agent heap size can be adjusted by changing the -Xms and -Xmx parameters for the Workspace
Service in the Configuration Management Console. The -Xms setting specifies the minimum heap size; the -Xmx
setting specifies the maximum.
Data Access Service Parameters
Changing the value of the Relational Partial Result Cell Count parameter can improve BQY query performance.
This parameter, set in the Data Access Service parameters in the Configuration Management Console, specifies
how many bytes of data the DAS should retrieve from the database in one fetch. A value of 500,000 works well
for queries returning 200,000 to 20,000,000 bytes of data. Similarly, changing the MDD Partial Result Cell Count
parameter can improve performance for multidimensional queries. Increasing the value for large queries can
result in fewer data fetches and less overhead, but it increases memory use by the DAS.
BIService Parameters
If BIService memory use is high, adjusting the value of the Document Unload Timeout parameter, found in the
Intelligence Service properties in the CMC, may help. This parameter specifies how long an inactive thin-client
document remains in the cache before it is removed. The value is in seconds; the default is 900 seconds
(15 minutes). Decreasing this value removes documents from the cache sooner, thereby freeing memory sooner. It
can also mean that documents must be retrieved from the repository more often, so carefully consider this
tradeoff before choosing a value for each installation.

If keyboard navigation is not required, BIService memory usage can be reduced by adding
DISABLE_KEYBOARD_NAV_IN_NON_508_MODE=true
to the Dynamic Service Properties of the Intelligence Service through the Configuration Management Console.
Workspace Service Parameters
The Workspace Service is a set of service agents that run as a single process (the Workspace Agent process). The
maximum number of connections to each service agent is 100 by default. Under load, this value may not be
sufficient. In general, use these guidelines to set the proper number of connections:
Service Broker and Repository Manager:
number of active users
+ number of jobs that can run at any given time
+ 50
Job Service:
number of concurrent jobs * 1.1
Event Service:
100 or the number of jobs that can run at any given time, whichever is greater.
The number of connections for each service agent can be changed using the Service Configurator tool.
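The guidelines above can be expressed as a short Python sketch (the function names are mine; the formulas are taken directly from the guidelines):

```python
def broker_and_repo_connections(active_users: int, max_concurrent_jobs: int) -> int:
    # Service Broker / Repository Manager: users + jobs + 50 headroom
    return active_users + max_concurrent_jobs + 50

def job_service_connections(max_concurrent_jobs: int) -> int:
    # Job Service: concurrent jobs * 1.1, rounded up (integer-safe ceiling)
    return -(-max_concurrent_jobs * 11 // 10)

def event_service_connections(max_concurrent_jobs: int) -> int:
    # Event Service: 100 or the job count, whichever is greater
    return max(100, max_concurrent_jobs)

# Example sizing for 400 active users and up to 20 concurrent jobs:
print(broker_and_repo_connections(400, 20))  # 470
print(job_service_connections(20))           # 22
print(event_service_connections(20))         # 100
```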
The Workspace Service uses a connection pool for communications with the repository database. In the testing
described here, the default pool size of 50 concurrent connections was used. The value may need to be adjusted
for different EPM System implementations. To change the value to 100, for example, add
-Dmax_db_pool_size=100 to the list of JAVA_OPTS for the Workspace Service Properties in the Configuration
Management Console (CMC). Note that increasing this value increases the amount of memory that the Workspace
Service uses, regardless of whether the connections in the pool are used. Therefore, keep this value as low as
possible. An EPM system with a single Workspace Service using a max_db_pool_size of 50 supported more than
1,400 active users without running out of database connections.
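For completeness, the flag from the paragraph above can be assembled programmatically; this is a sketch (the helper name is hypothetical, and the property name is the one given in the text):

```python
def db_pool_flag(pool_size: int) -> str:
    """Build the JAVA_OPTS entry that sets the Workspace repository
    connection pool size (property name taken from the text above)."""
    if pool_size < 1:
        raise ValueError("pool size must be positive")
    return f"-Dmax_db_pool_size={pool_size}"

print(db_pool_flag(100))  # -Dmax_db_pool_size=100
```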
Application Server Threads
For a mixed workload such as that described in Appendix A of this document, the number of application server
threads required will be low, roughly 20% of the number of active users. However, if many users are running
queries that take more than a few seconds to complete, the number of threads must be increased, because the
application server thread issuing a query remains in use until all of the results of the query are returned. Manual
tuning is unnecessary with WebLogic 9 as the application server, because it adjusts the number of threads
automatically based on current load; other application servers may need manual tuning.
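A rough thread-count estimate for manually tuned application servers can be sketched in Python (the 20% baseline is the heuristic from the paragraph above; the `long_query_fraction` knob is my own addition to model threads held open by long-running queries):

```python
import math

def app_server_threads(active_users: int, long_query_fraction: float = 0.0) -> int:
    """Rough sizing: ~20% of active users for a mixed workload, plus one
    thread held for each user currently running a long query
    (long_query_fraction is an illustrative assumption, not from the paper)."""
    base = math.ceil(0.20 * active_users)
    held = math.ceil(long_query_fraction * active_users)
    return base + held

print(app_server_threads(1000))        # 200 threads for a mixed workload
print(app_server_threads(1000, 0.10))  # 300 when 10% run long queries
```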
RESOLVING RESOURCE ISSUES
Processing Issues
Possible ways to resolve processing issues with the Hyperion EPM System:
Reduce the number of concurrent BQY and/or SQR jobs so that fewer jobs run at one time.
Replicate and distribute the different components of the architecture.
Increase the number and/or speed of processors on your machines.
If SQR job response times are slow at peak times because of a CPU bottleneck, try setting the maximum
number of concurrent jobs to a smaller number. To limit the number of concurrent jobs to five, for example, add the
string -Djob_limit=5 to the JAVA_OPTS argument for the Workspace Service in the Configuration
Management Console. This allows more resources for each job and results in fewer CPU context switches. Some jobs
may be queued waiting to run, but the time saved with fewer context switches should more than make up for the
queue time. Tune the maximum-jobs parameter through trial-and-error testing in each
environment. If necessary, Job Services may be installed on additional machines to increase the capacity for
concurrent jobs.
The same concept of limiting concurrent jobs applies to BQY jobs. By default, each IRJob service runs, at most,
five concurrent jobs. To limit the number of concurrent jobs to three, for example, set that value for the
Maximum Concurrent Job Requests parameter for the Hyperion Interactive Reporting Job Service in the
Configuration Management Console.
If the EPM services run on the same machine as the database, install them on different machines so that the EPM
services do not compete with the database for CPU cycles. If the response times are still slow, and the EPM
services are running on the same machine as the Web server and servlets, move the Web and application servers to
their own machine.
To improve thin-client response times when a CPU bottleneck exists, consider replicating the Intelligence Server
(BIService) on another machine. If many queries are processed, also replicating the DAS may help.
To improve Intelligence Web Client response times, replicating the DAS service should help when many queries
run concurrently. An additional set of servlets, with users load-balanced by the Web server, also may help. It is
important to first identify whether the servlets or services are causing the bottleneck of the system so that the
proper component can be replicated.
Memory Issues
If the EPM System is memory-bound, OS statistics will show memory use for EPM processes approaching the
2 GB per-process limit of a 32-bit OS. If the machine lacks sufficient physical memory, there will be a consistently
high paging rate accompanied by a small amount of free memory. If this occurs, consider adding more RAM if
possible, and be sure there is sufficient page file space.
It is possible that the server has enough memory, but specific processes may not be allocated as much as they
need. For example, Java processes are limited by the maximum heap, and 32-bit processes, compiled without flags
to use extended memory, can use only 2 GB of memory. To check the amounts of memory used by the Common
Services and Workspace servlets, examine memory usage information provided in the services and servlets logs.
Most application servers provide administration consoles that also may be used to monitor servlets and Web
application memory usage. The BIService also writes its memory usage information into its own log file.
Periodically checking these files can help diagnose possible memory problems.
If memory causes a bottleneck for the BIService process, consider replicating that service. If the machine has
plenty of memory available, the BIService can be replicated on the same machine, or on another machine.
Likewise, if Workspace memory is an issue, multiple Workspace deployments can be used, on either one machine
or multiple machines.
Disk Access Issues
If the Hyperion EPM System is experiencing disk access issues, first consider using RAID 0 (pure striping) or
RAID 0+1 striping. RAID 0 should provide the best performance, but it does not provide data redundancy. For both
performance and data redundancy, Oracle recommends RAID 0+1 (or RAID 10, mirrored striping). If RAID
0+1 is too expensive to implement (because of the number of required disks), another solution that provides data
redundancy is RAID 5 (striping with parity). If your enterprise chooses RAID 5, select a
hardware RAID 5 solution (with buffering), because it should perform much better than a software RAID 5
solution. Be sure to stripe data across at least four disks. Other, newer disk technologies may also work well.
Consult your system administrator for the best disk solution for your environment.
In addition, consider these recommendations for reducing disk bottlenecks:
Isolate the EPM System Servers, Web server, and the databases from one another, or at least have
them access disks on different controllers.
Compress BQY files so that less data is transferred to and from disks.
Avoid excessive writing to log files, which can quickly create a disk bottleneck.
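The capacity side of the RAID cost tradeoff can be checked with a quick Python sketch (illustrative only; it treats RAID 0+1 and RAID 10 as equivalent for capacity purposes, as the text does, and ignores hot spares):

```python
def usable_fraction(raid_level: str, disks: int) -> float:
    """Fraction of raw disk capacity available for data under the
    RAID levels discussed above."""
    if raid_level == "raid0":               # pure striping, no redundancy
        return 1.0
    if raid_level in ("raid01", "raid10"):  # mirrored striping
        return 0.5
    if raid_level == "raid5":               # striping with parity
        if disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disks - 1) / disks
    raise ValueError(f"unknown RAID level: {raid_level}")

# With the four-disk minimum suggested above, RAID 5 keeps 75% of raw
# capacity for data, versus 50% for mirrored striping.
print(usable_fraction("raid5", 4))   # 0.75
print(usable_fraction("raid10", 4))  # 0.5
```

This is why RAID 5 is the cheaper redundancy option per usable gigabyte, even though hardware RAID 5 is needed to make its write performance acceptable.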
CONCLUSION
The data presented here show the scalability and performance of Hyperion Interactive Reporting software. Using a
scenario of ramping up user load at regular intervals, the configuration provided by HP was able to support more
than 1,400 users, a mix of Web Client and thin-client users. Web Client response times were very good throughout
the test. Thin-client response times were acceptable through approximately 1,000 users. The most heavily used
server was that used for the services. The other servers were minimally loaded throughout the test.
Note that this configuration could support more users if the thin-client users converted to the Web Client. Doing
so transfers much of the processing activity from the servers to the individual client machines, enabling the servers
to handle more of the work that can be performed only on the server side. For a similar test scenario with only
Web Client users, the configuration used likely would support more than 2,000 active users, depending on report
content and the amount of query processing. Assuming that 10% of a user population is actively using the system
at a given time, this configuration could support a total user population of 20,000 or more.
And, finally, because many factors affecting capacity and performance are dependent on the actual usage of a
specific environment, the only way to adequately predict capacity is to perform load tests on that environment with
the documents and use cases that actual users will follow.


APPENDIX A: TEST SCENARIO DETAILS
Contents of the Workspace Repository
The Intelligence Reporting system was populated with more than 12,000 nonquery documents and 300 BQY files,
each with both Hyperion Interactive Reporting Web Client and thin-client versions. The BQY files were
compressed and included one query, one result, one pivot, three charts, one report, and two dashboard sections.
The results section included five computed columns.
Figure 1 shows a segment of the folder tree structure. Each SSubCat folder contained the same three BQY
documents. All of these documents used the same OCE file published in the root folder.
Figure 1
Documents in SSubCat0 Folder

A sample BQY document is shown in Figure 2. The only difference between the files was the number of rows in
the results set: 1,000, 10,000, or 100,000 rows. Each row contained approximately 200
bytes of data plus five computed fields.

Figure 2
Sample BQY Document



Query Activity
The query used for tests:
Select
  CUST_NUM,
  NAME,
  ADDR1,
  ADDR2,
  CITY,
  CUSTOMERS_TABLE.STATE,
  ZIP,
  PHONE,
  TOT,
  SALES,
  PROFITS,
  REGIONS.STATE,
  STATEABBR,
  REGION,
  TIMEZONE
From CUSTOMERS_TABLE, REGIONS
Where CUSTOMERS_TABLE.STATE = REGIONS.STATEABBR

Here CUSTOMERS_TABLE is a variable for one of these tables:
CUSTOMERS001K with 1,000 rows of data
CUSTOMERS010K with 10,000 rows of data
CUSTOMERS100K with 100,000 rows of data

Profile of Simulated Users
The test results are heavily dependent on the profile of user activity. For data consistency and testing purposes, a
set of user profiles was implemented during the course of testing. A brief description:
10% of active users executed the Foundation tests.
50% of active users opened Hyperion BQY documents through the thin client.
40% of active users opened Hyperion BQY documents through the Intelligence Web Client.
10% of users who opened a BQY document through the thin client processed a query within the
document.
10% of users who opened a BQY document through the Web Client processed a query within the
document.
100% of users who opened a thin client document opened each section (such as Report, Chart, and Pivot)
of the document.
25% of thin-client users did not properly close BQY documents after opening them.
10% of Hyperion Interactive Reporting users (both thin client and Web Client) published and deleted
BQY documents.
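The three-way user mix above could be generated in a load tool along these lines; this Python sketch is hypothetical (the names are mine), with only the 10/50/40 proportions taken from the profile:

```python
import random

def assign_profiles(n_users: int, seed: int = 0) -> list:
    """Assign each simulated user one of the three scripts in the
    10% Foundation / 50% thin-client / 40% Web Client proportions
    stated in the profile above."""
    rng = random.Random(seed)  # fixed seed for reproducible runs
    profiles = ["foundation", "thin_client", "web_client"]
    weights = [0.10, 0.50, 0.40]
    return rng.choices(profiles, weights=weights, k=n_users)

users = assign_profiles(1400)
print(users.count("thin_client"))  # roughly 700 of 1,400 users
```

The per-action percentages (for example, the 10% of users who process a query) could be drawn the same way inside each script iteration.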
Activity Measurements
Mixed Workload Test
The following tables describe the detailed actions in each of the three LoadRunner scripts.
Foundation Script Scenario
Transaction Timer | Comment
1000 Home | Open login page
1010 Login | Submit login information
1020 Explorer | Open Explorer module
1040 0010 Category | Go to Folder with 10 items
1040 0100 Category | Go to Folder with 100 items
1040 0500 Category | Go to Folder with 500 items
1040 1000 Category | Go to Folder with 1000 items
1050 TheRealWorld | Go to TheRealWorld folder
1060 Category | Go to Category0
1070 Subcat | Go to SubCatN, N in 0..9
1080 View 100K Html | Open 100 KB nonquery file
1080 View 1B Html | Open 1 B nonquery file
1090 SSubcat | Go to SSubCatN, N in 0..9
1100 SSSubcat | Go to SSSubCatN, N in 0..4
1110 Sqr Jobs Folder | Go to SQR_Jobs Folder
1120 Sqr Subfolder | Go to SQRN, N in 0..9
1130 Publish 100K Step1 (*) | Click on Publish file
1140 Publish 100K Step2 (*) | Publish (upload) file
1150 Publish 100K Step3 (*) | Submit publishing
1160 Subscribe 100K Step1 (*) | Subscribe to file that was just published
1170 Subscribe 100K Step2 (*) | Submit subscribing
1175 Personalpage (*) | Go to Personal Page
1176 View 100K Favorite (*) | Open published and subscribed file
1180 Delete 100K (*) | Delete file that was just published
1190 Import Sqr Step1 (*) | Click on Publish Job
1200 Import Sqr Step2_(unknown) (*) | Specify file to be published
1210 Import Sqr Step3_(unknown) (*) | Specify Advanced Options, Access Control
1220 Import Sqr Step4_(unknown) (*) | Specify connectivity and required files
1220 Import Sqr Step4 1_(unknown) (*) | Specify connectivity and required files
1240 Import Sqr Step5_(unknown) (*) | Define output types
1250 Import Sqr Step6_(unknown) (*) | Click on Finish
1270 Del Sqr Step1_(unknown) (*) | Delete published SQR
1290 Search Step1 | Click on Search
1300 Search Step2 W1 | Submit search, single keyword
1300 Search Step2 W1 And W2 | Submit AND search
1300 Search Step2 W1 Or W2 | Submit OR search
1300 Search Step2 W2 | Submit search, single keyword
1305 Closeall | Close all documents
1310 Logout | Click on Exit
(*) 10% of all Foundation users execute these actions.
(unknown) represents a randomly selected SQR file with the 10,000- or 100,000-row query.
Hyperion Interactive Reporting Thin-Client Script Scenario
Transaction Timer | Comment
2000_Home | Open login page
2010_Login | Submit login information
2020_Explore | Open Explore module
2040_TheRealWorld | Go to TheRealWorld
2050_Category | Go to Category0
2060_SubCat | Go to SubCatN, N in 0..9
2070_SSubCat | Go to SSubCatN, N in 0..9
2080_OpenBQY_(unknown) | Open randomly selected BQY file
2090_QueryBQY_(unknown) | Open Query section
2100_Results_BQY_(unknown) | Open Results section
2110_Pivot_BQY_(unknown) | Open Pivot section
2120_Chart_BQY_(unknown) | Open Chart section
2130_Chart2_BQY_(unknown) | Open Chart2 section
2140_Charts3_BQY_(unknown) | Open Chart3 section
2150_Report_BQY_(unknown) | Open Report section
2160_EIS_BQY_(unknown) | Open EIS section
2170_Process_BQY_(unknown) (*) | Process BQY
2180_Close_BQY_(unknown) | Close BQY
2200_BQY_Jobs_Folder | Open BQY jobs folder
2210_BQY_Subfolder | Open BQY jobs subfolder
2220_PublishBQY_Step1 (*) | Click on Publish file
2230_PublishBQY_Step2(upload)_(unknown) (*) | Specify file to be published
2240_PublishBQY_Step3(next)_(unknown) (*) | Specify connectivity
2250_PublishBQY_Step4(finish)_(unknown) (*) | Click on Finish
2260_PublishBQY_Step5(refresh)_(unknown) (*) | Refresh content
2270_Delete_BQY (*) | Delete just published BQY file
2270_Logout | Logout
(*) 10% of all thin-client users execute this action.
(unknown) represents a randomly selected BQY file with the 1,000-, 10,000-, or 100,000-row query.
Hyperion Interactive Reporting Web Client Script Scenario
Transaction Timer | Comment
3000_Home | Open login page
3010_Login | Submit login information
3020_Explore | Open Explore module
3040_TheRealWorld | Go to TheRealWorld
3050_Category | Go to Category0
3060_SubCat | Go to SubCatN, N in 0..9
3070_SSubCat | Go to SSubCatN, N in 0..9
3080_OpenIC_BQY_(unknown) | Open randomly selected BQY in Plugin
3090_ProcessIC_BQY_(unknown) (*) | Process opened BQY
3100_BQY_Jobs_Folder | Open BQY jobs folder
3110_BQY_Subfolder | Open BQY jobs subfolder
3120_PublishBQY_Step1 (*) | Click on Publish file
3130_PublishBQY_Step2(upload)_(unknown) (*) | Specify file to be published
3140_PublishBQY_Step3(next)_(unknown) (*) | Specify connectivity
3150_PublishBQY_Step4(finish)_(unknown) (*) | Click on Finish
3160_PublishBQY_Step5(refresh)_(unknown) (*) | Refresh content
3170_Delete_BQY (*) | Delete just published BQY file
3110_Exit | Logout
(*) 10% of all Intelligence Client users executed these actions.
(unknown) represents a randomly selected BQY file with the 1,000-, 10,000-, or 100,000-row query.
Hyperion Interactive Reporting 11.1.1 Capacity Planning Report for Windows
May 2009
Author: Katherine Hagedon
Contributing Author: Nikolai Potapov

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright 2009, Oracle. All rights reserved.
This document is provided for information purposes only and the
contents hereof are subject to change without notice.
This document is not warranted to be error-free, nor subject to any
other warranties or conditions, whether expressed orally or implied
in law, including implied warranties and conditions of merchantability
or fitness for a particular purpose. We specifically disclaim any
liability with respect to this document and no contractual obligations
are formed either directly or indirectly by this document. This document
may not be reproduced or transmitted in any form or by any means,
electronic or mechanical, for any purpose, without our prior written permission.
Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of Oracle
Corporation and/or its affiliates. Other names may be trademarks
of their respective owners.
