
White paper

IBM WebSphere Application Server Standard and Advanced Editions: A Methodology for Production Performance Tuning

By Gennaro (Jerry) Cuomo - IBM WebSphere

Document Version 1.0


09/13/2000

Copyright IBM Corp. 2000. All rights reserved. Page - 1

Contents

Intended audience
Acknowledgments
Related Documents and Tools
Overview
Adjusting WebSphere System Queues
    WebSphere Queuing Network
    Closed Queues vs. Open Queues
    Queue Settings in WebSphere Application Server
    Determining Settings
        Upstream queuing
        Drawing a Throughput Curve
        Queue Adjustments
        Queue Adjustments for Accessing Patterns
    Queuing an Enterprise JavaBean component
        Call-by-Value vs Call-by-Reference
    Queuing and Database access
        Prepared Statement Cache
    Queuing and Clustering
Tuning Java Memory
    The Garbage Collection Bottleneck
    The Garbage Collection Gauge
    Detecting Over Utilization of Objects
    Detecting memory leaks
    Java Heap Parameters
Relaxing Auto Reloads
    Servlet Reload Interval
    JSP Reload Interval
    Web Server Configuration Reload Interval
Summary
Appendix A: SEStats.java
Appendix B: GCStats.java
Trademarks

Intended audience
This paper is intended for IT specialists and application administrators who set up production e-business solutions with IBM WebSphere Application Server Advanced or Standard Editions. This paper assumes that the reader is familiar with the WebSphere product, as well as the basic concepts of Web serving and Java technology.

Acknowledgments
The methodology described in this paper is the product of work that evolved from many members of the extended WebSphere Application Server performance team, including Michael Fraenkel, Carmine Greco, Chet Murthy, Ruth Willenborg, Stan Cox, Carolyn Norton, Chris Forte, Ron Bostick, Scott Snyder, Norman Creech, Charley Bradley, Cindy Tipper, Ken Ueno, Tom Alcott, Hong Hua and Bob Dimpsey.

Related Documents and Tools


The following documents provide additional information about WebSphere Application Server performance and tuning:

IBM WebSphere Application Server Standard and Advanced Editions, Version 3.0 Performance Report
http://www.ibm.com/software/webservers/appserv/whitepapers.html

WebSphere V3 Performance Tuning Guide (SG24-5657-00)
http://www.redbooks.ibm.com/abstracts/sg245657.html

WebSphere 3.5 Resource Analyzer
http://www.ibm.com/software/webservers/appserv/download_ra.html

WebSphere Application Server Development Best Practices for Performance and Scalability
http://www.ibm.com/software/webservers/appserv/ws_bestpractices.pdf

Overview
This report provides a methodology for tuning and configuring WebSphere Application Server Standard and Advanced Editions for production environments. In particular, three categories of performance tuning are presented, in order of decreasing complexity:

Adjusting WebSphere System Queues
Tuning Java Memory
Relaxing Auto Reloading of Resources

This paper does not discuss the art of application tuning, an integral part of the overall performance tuning process. It is assumed that a reasonable amount of application tuning precedes the methodology outlined in this paper. This paper focuses on enterprise Web serving scenarios: Web browser clients driving a series of Web servers, connected to a servlet engine, which manipulate enterprise data. The concepts and methodologies in this paper apply equally to WebSphere Application Server Versions 3.0 through 3.5, running on Windows NT, IBM AIX, Sun Solaris or HP.


Adjusting WebSphere System Queues


WebSphere Application Server has a series of interrelated components that must be harmoniously tuned to support the custom needs of your end-to-end e-business application. These adjustments will help your system achieve maximum throughput, while maintaining overall system stability.

WebSphere Queuing Network


WebSphere Application Server establishes a queuing network, which is a network of interconnected queues that represent the various components of the application serving platform. These queues include the network, Web server, servlet engine, Enterprise JavaBean (EJB) component container, data source and possibly a connection manager to a custom backend system. Each of these WebSphere resources represents a queue of requests waiting to use that resource.
[Figure: The WebSphere queuing network -- requests from clients flow through the network, Web server, servlet engine, EJB container and data source queues to the database (DB).]

The WebSphere queues are load-dependent resources -- the average service time of a request depends on the number of concurrent clients.

Closed Queues vs. Open Queues


Most of the queues comprising the WebSphere queuing network are closed queues. A closed queue places a limit on the maximum number of requests active in the queue. (Conversely, an open queue places no such restriction on the maximum number of active requests.) A closed queue allows system resources to be tightly managed. For example, the WebSphere servlet engine's Max Connections setting controls the size of the servlet engine queue. If the average servlet running in a servlet engine creates 10 megabytes of objects during each request, then setting Max Connections to 100 would limit the memory consumed by the servlet engine to approximately 1 gigabyte. Hence, closed queues typically allow system administrators to manage their applications more effectively and robustly.

In a closed queue, a request can be in one of two states: active or waiting. In the active state, a request is doing work or is waiting for a response from a downstream queue. For example, an active request in the Web server is either doing work (such as retrieving static HTML) or waiting for a request to complete in the servlet engine. In the waiting state, the request is waiting to become active, and will remain waiting until one of the active requests leaves the queue.

All Web servers supported by WebSphere software are closed queues. The WebSphere servlet engine and data source are also closed queues, in that they allow you to specify the maximum concurrency at the resource. The EJB container inherits its queue behavior from its built-in Java technology object request broker (ORB). Hence, the EJB container, like the Java ORB, is an open queue. Given this fact, it is important for the application calling enterprise beans to place limits on the number of concurrent callers into the EJB container. (Because the Servlet Redirector function, typically used as a means of separating the Web server and servlet engine onto different machines, is an EJB client application, it is an open queue.) If enterprise beans are being called by servlets, the servlet engine will limit the total number of concurrent requests into an EJB container, because the servlet engine has a limit itself. This is only true, however, if you are calling enterprise beans from the servlet thread of execution. There is nothing to stop you from creating your own threads and bombarding the EJB container with requests. This is one of the reasons why it is not a good idea for servlets to create their own work threads.
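If application code does create its own threads, it must impose its own bound on concurrent EJB calls. A minimal sketch of one way to do this, using a counting semaphore (a hypothetical helper class, not part of WebSphere):

```java
import java.util.concurrent.Semaphore;

// Hypothetical helper: bounds the number of concurrent calls that
// application-created threads may issue into an open queue such as
// the EJB container.
public class BoundedEjbCaller {
    private final Semaphore permits;

    public BoundedEjbCaller(int maxConcurrentCalls) {
        this.permits = new Semaphore(maxConcurrentCalls);
    }

    // Runs the given EJB invocation, waiting if the limit is reached.
    // This re-creates closed-queue behavior in front of an open queue.
    public void invoke(Runnable ejbCall) {
        permits.acquireUninterruptibly(); // wait for a free slot
        try {
            ejbCall.run();                // the remote EJB method call
        } finally {
            permits.release();            // free the slot for the next caller
        }
    }

    public int availablePermits() {
        return permits.availablePermits();
    }
}
```

Each thread wraps its remote call in invoke(), so no more than maxConcurrentCalls requests can be in flight to the container at once.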

Queue Settings in WebSphere Application Server


The following table outlines the various WebSphere queue settings:

Queue Settings in WebSphere Application Server Standard and Advanced Editions

Web server
    Server: IBM HTTP Server
    Where is the setting found? conf/httpd.conf
    Setting name: MaxClients (UNIX) or ThreadsPerChild (NT); default 75
    Comments: A number of related parameters, such as KeepAlive, can also be found in the httpd.conf file.

Servlet engine
    Server: WebSphere Application Server
    Where is the setting found? WebSphere Administrative Console
    Setting name: Max Connections
    Comments: Located on the Advanced tab of the ServletEngine property sheet.

EJB container
    Server: WebSphere Application Server
    Where is the setting found? WebSphere Administrative Console
    Setting name: Thread Pool Size
    Comments: Located on the Advanced tab of the Application Server property sheet.

Data source
    Server: WebSphere Application Server - DB Connection Mgr.
    Where is the setting found? WebSphere Administrative Console
    Setting name: Minimum / Maximum connection pool size
    Comments: Located on the Advanced tab of the DataSource property sheet.

Data source - prepared statement cache
    Server: WebSphere Application Server - Prepared Statement Cache
    Where is the setting found? WebSphere Administrative Console
    Setting name / Comments: See the Prepared Statement Cache section below.

Determining Settings
The following section outlines a methodology for configuring the WebSphere queues. You can always change the dynamics of your system, and therefore your tuning parameters, by moving resources around (such as moving your database server onto another machine) or providing more powerful resources (such as a faster set of CPUs with more memory). Thus, tuning should be done using a reasonable replica of your production environment.

Upstream queuing

The first rule of WebSphere tuning is to minimize the number of requests in WebSphere queues. In general, it is better for requests to wait in the network (in front of the Web server) than it is for them to wait in the WebSphere Application Server. This configuration results in only allowing requests into the WebSphere queuing network that are ready to be processed. (Later, we will discuss how to prevent bottlenecks that might occur from this configuration by using WebSphere clustering.) To effectively configure WebSphere in this fashion, the queues furthest upstream (closest to the client) should be slightly larger. Queues further downstream should be progressively smaller. The following figure illustrates a sample configuration that leads to client requests being queued upstream. Arriving requests are queued in the network as the number of concurrent clients increases beyond 75 concurrent users.

[Figure: Upstream queuing -- 200 client requests arrive; 125 wait in the network in front of the Web server (N = 75), 25 wait in the Web server in front of the servlet engine (N = 50), 25 wait in the servlet engine in front of the data source (N = 25), and 25 requests reach the database (DB).]

In the example, the queues in this WebSphere queuing network are progressively smaller as work flows downstream. The example shows 200 clients arriving at the Web server. Because the Web server is set to handle 75 concurrent clients, 125 requests remain queued in the network. As the 75 requests pass from the Web server to the servlet engine, 25 remain queued in the Web server and the remaining 50 are handled by the servlet engine. This process progresses through the data source, until finally 25 users arrive at the final destination, the database server. No component in this system has to wait for work to arrive, because at each point upstream there is some work waiting to enter that component. The bulk of the requests wait outside of WebSphere software, in the network. This adds stability to the WebSphere Application Server, because no single component is overloaded. Waiting users can also be routed to other servers in a WebSphere cluster using routing software such as the IBM Network Dispatcher.
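The arithmetic of the example can be sketched as follows. This is purely illustrative; the capacities are the queue settings from the figure, and the helper class is not part of WebSphere:

```java
// Illustrative only: given per-tier queue capacities (in downstream
// order), compute how many of the arriving requests are left waiting
// in front of each tier; the remainder flows downstream.
public class UpstreamQueuing {
    public static int[] waitingPerTier(int arriving, int[] capacities) {
        int[] waiting = new int[capacities.length];
        int flow = arriving;
        for (int i = 0; i < capacities.length; i++) {
            waiting[i] = Math.max(0, flow - capacities[i]); // queued before this tier
            flow = Math.min(flow, capacities[i]);           // admitted downstream
        }
        return waiting;
    }
}
```

For 200 arriving clients and capacities {75, 50, 25}, this yields 125 requests waiting in the network, 25 in the Web server, and 25 in the servlet engine, with 25 reaching the database, matching the figure.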

Drawing a Throughput Curve


Using a test case that represents the full spirit of your production application (for example, exercising meaningful code paths) or using your production application itself, run a set of experiments to determine when your system capabilities are maximized (the saturation point). Conduct these tests after most of the bottlenecks have been removed from your application. The typical goal of these tests is to drive your CPUs to near 100 percent utilization. Start your initial baseline experiment with large queues. This will allow maximum concurrency through your system. For example, you might start your first experiment with a queue size of 100 at each of the servers in the queuing network: Web server, servlet engine and data source. Now, begin a series of experiments to plot a throughput curve, increasing the concurrent user load after each experiment. For example, perform experiments with 1 user, 2 users, 5, 10, 25, 50, 100, 150 and 200 users. After each run, record the throughput (requests per second) and response times (seconds per request).

[Figure: Typical throughput curve -- throughput (requests/sec) versus concurrent users, showing the light load zone (Section A), the saturation point, the heavy load zone (Section B) and the buckle zone (Section C).]

The curve resulting from your baseline experiments should resemble the typical throughput curve shown above. The throughput of WebSphere servers is a function of the number of concurrent requests present in the total system. Section A, the light load zone, shows that as the number of concurrent user requests increases, the throughput increases almost linearly with the number of requests. This reflects the fact that, at light loads, concurrent requests face very little congestion within the WebSphere system queues. After some point, congestion starts to build up and throughput increases at a much lower rate until it hits a saturation point that represents the maximum throughput value, as determined by some bottleneck in the WebSphere system. The best type of bottleneck is when the CPUs of the WebSphere Application Server become saturated, because a CPU bottleneck is easily remedied by adding additional or more powerful CPUs.

Section B in the figure above is the heavy load zone. As you increase the concurrent client load in this zone, throughput remains relatively constant. However, response time increases proportionally to the user load; that is, if you double the user load in the heavy load zone, the response time doubles. At some point, represented by Section C (the buckle zone), one of the system components becomes exhausted and throughput starts to degrade. For example, the system might enter the buckle zone when the network connections at the Web server exhaust the limits of the network adapter, or when you exceed the operating system limits for file handles.

If you reached the saturation point by driving the system CPUs close to 100 percent, you are ready to move on to the next step. If your CPUs were not driven to 100 percent, there is likely a bottleneck that is being aggravated by your application. For example, your application might be creating Java objects excessively, causing garbage collection bottlenecks (as discussed further in the Tuning Java Memory section). There are two ways to deal with application bottlenecks. The best way is to remove them: use a Java technology-based application profiler to examine your overall object utilization. Profilers such as JProbe, available from the KL Group, or Jinsight, available from the IBM alphaWorks Web site (http://www.alphaworks.ibm.com/), can be used with WebSphere software to help turn the lights on within your application. Cloning is another way to deal with application bottlenecks, as discussed in a later section.
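One way to locate the saturation point from the recorded (users, throughput) pairs is to scan for the load at which throughput stops growing meaningfully. The sketch below is a hypothetical heuristic, not part of the methodology; the 5 percent cutoff is an arbitrary illustrative choice:

```java
// Illustrative heuristic: scan recorded throughput measurements, taken
// in increasing load order, and report the user load after which
// throughput stops growing by more than 5 percent per step.
public class SaturationFinder {
    public static int saturationUsers(int[] users, double[] throughput) {
        for (int i = 1; i < users.length; i++) {
            // under 5% gain over the previous run: treat as saturated
            if (throughput[i] < throughput[i - 1] * 1.05) {
                return users[i - 1];
            }
        }
        return users[users.length - 1]; // never flattened in the measured range
    }
}
```

In practice you would still confirm the result visually against the plotted curve, since a single noisy measurement can fool a fixed threshold.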

Queue Adjustments
The number of concurrent users at the saturation point represents the maximum concurrency of your application. It also defines the boundary between the light and heavy load zones. Select a concurrent user value in the light load zone that has a desirable response time and throughput combination. For example, if your application saturated WebSphere at 50 users, you might find that 48 users gave the best throughput and response time combination. We will call this value the Max Application Concurrency value. Max Application Concurrency becomes the basis for adjusting your WebSphere system queues. Remember, we want most users to wait in the network, so queue sizes should decrease as you move downstream. For example, given a Max Application Concurrency value of 48, you might start with your system queues at the following values: Web server 75, servlet engine 50, data source 45. Perform a set of additional experiments, adjusting these values slightly higher and lower, to find the best settings. Appendix A provides the source listing of a Java utility servlet, SEStats.java, that reports the number of concurrent users in the servlet engine. SEStats can be run either after or during a performance experiment. In WebSphere Application Server V3.5, the Resource Analyzer can also be used to determine Max Application Concurrency.
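The appendix code is not reproduced in this section, but the core idea behind such a concurrency report can be sketched in a few lines. The class below is a hypothetical stand-in, not the actual SEStats.java listing:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a concurrency counter such as a servlet might
// update at the start and end of each service() call, recording both
// the current and the peak number of concurrent requests.
public class ConcurrencyStats {
    private final AtomicInteger active = new AtomicInteger(0);
    private final AtomicInteger peak = new AtomicInteger(0);

    // Call on entry to service(): bump the active count, track the peak.
    public void enter() {
        int now = active.incrementAndGet();
        peak.accumulateAndGet(now, Math::max);
    }

    // Call on exit from service().
    public void exit() {
        active.decrementAndGet();
    }

    public int getActive() { return active.get(); }
    public int getPeak()   { return peak.get(); }
}
```

The peak value observed under load is effectively the measured concurrency at the servlet engine, which you can compare against the Max Connections setting you chose.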

Queue Adjustments for Accessing Patterns


In many cases, only a fraction of the requests passing through one queue will enter the next queue downstream. For example, in a site with many static pages, many requests will be turned around at the Web server and will never pass to the servlet engine. The Web server queue can then be significantly larger than the servlet engine queue; this is why, in the previous section, we set the Web server queue to 75 rather than to something closer to the Max Application Concurrency. Similar adjustments need to be made when different components have vastly different execution times. Consider an application that spends 90 percent of its time in a complex servlet and only 10 percent making a short Java Database Connectivity (JDBC) query. On average, only 10 percent of the servlets will be using database connections at any given time, so the database connection queue can be significantly smaller than the servlet engine queue. Conversely, if much of a servlet's execution time is spent making a complex query to a database, consider increasing the queue values at both the servlet engine and the data source. As always, monitor the CPU and memory utilization of both the WebSphere Application Server and database servers to ensure that you are not saturating CPU or memory.

Queuing an Enterprise JavaBean component


Method invocations to Enterprise JavaBean components are queued only if the client making the method call is remote -- that is, if the EJB client is running in a separate Java virtual machine (another address space) from the enterprise bean. On the other hand, if the EJB client (either a servlet or another enterprise bean) is installed in the same JVM, the EJB method runs on the same thread of execution as the EJB client and there is no queuing. Remote enterprise beans communicate using the Remote Method Invocation over Internet Inter-ORB Protocol (RMI/IIOP). Method invocations initiated over RMI/IIOP are processed by a server-side ORB, and the EJB container's thread pool acts as a queue for incoming requests. However, if a remote method request is issued and there are no available threads left in the thread pool, a new thread is created; after the method request completes, the thread is destroyed. Hence, when the ORB is used to process remote method requests, the EJB container is an open queue, because its use of threads is unbounded. The following illustration depicts the two queuing options for EJB components.

[Figure: EJB queuing -- (I) when the EJB client (a servlet) runs in the same WebSphere Application Server as the EJB container, requests are queued in the servlet engine threads; (II) when the EJB client runs in a remote WebSphere Application Server, requests are queued in the ORB thread pool.]

When configuring the thread pool, it is important to understand the calling patterns of the EJB client (see the section Queue Adjustments for Accessing Patterns above). For example, if a servlet is making a small number of calls to remote enterprise beans and each method call is relatively quick, you should consider setting the number of threads in the ORB thread pool to a value smaller than the Servlet Engine's Max Concurrency.

[Figure: Two servlet-to-EJB calling patterns over the servlet service() execution timeline -- short-lived EJB calls, where each remote call is brief and interleaves with others, and longer-lived EJB calls, where remote calls span most of the servlet's execution.]

The degree to which you should increase the ORB thread pool value is a function of the number of simultaneous servlets (that is, clients) calling enterprise beans and the duration of each method call. If the method calls are longer, consider making the ORB thread pool size equal to the Servlet Engine's Max Concurrency size, because there will be little interleaving of remote method calls. The figure above illustrates two servlet-to-EJB component calling patterns that might occur in a WebSphere Application Server. The first pattern shows the servlet making a few short-lived (quick) calls. In this model there will be interleaving of requests to the ORB, and servlets can potentially reuse the same ORB thread. In this case, the ORB thread pool can be small, perhaps even one half of the Max Concurrency setting of the Servlet Engine. In the second pattern, the longer-lived EJB calls hold a connection to the remote ORB longer and therefore tie up threads on the remote ORB. Hence, you want to configure more of a one-to-one relationship between the Servlet Engine and remote ORB thread pools.

Call-by-Value vs Call-by-Reference
The EJB 1.0 specification states that method calls are to be call-by-value: in the process of making a remote method call, a copy of each parameter is made before the call is issued. As one might imagine, this copying can be expensive. WebSphere makes it possible to specify call-by-reference instead, which passes the original object reference without making a copy of the object. In the case where your EJB client (for example, a servlet) and EJB server are installed in the same WebSphere Application Server instance, specifying call-by-reference can improve performance by up to 50 percent. Note that call-by-reference only helps performance when non-primitive object types are being passed as parameters; ints, floats and other primitives are always copied regardless of the call model. Also, call-by-reference can be dangerous and can lead to unexpected results if not handled properly, namely when an object reference is modified by the remote method, resulting in unexpected side effects. To set call-by-reference in WebSphere Application Server Advanced Edition V3.0.2 and V3.5, add the following lines to the Java command line arguments of your application server (in the WebSphere Admin Console):

-Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
-Dcom.ibm.CORBA.iiop.noLocalCopies=true
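The side-effect risk can be illustrated in plain Java. The Account class below is hypothetical, and the copy constructor stands in for the serialization copy an ORB makes under call-by-value semantics:

```java
// Hypothetical illustration of the call-by-value vs. call-by-reference
// side effect described above. The copy constructor simulates the copy
// an ORB would make under call-by-value semantics.
public class CallSemantics {
    static class Account {
        int balance;
        Account(int balance) { this.balance = balance; }
        Account(Account other) { this.balance = other.balance; } // "serialized" copy
    }

    // Stands in for a remote EJB method that modifies its parameter.
    static void applyFee(Account a) {
        a.balance -= 5;
    }

    public static void main(String[] args) {
        Account original = new Account(100);

        // Call-by-value: the method sees a copy; the caller's object is untouched.
        applyFee(new Account(original));
        System.out.println(original.balance); // still 100

        // Call-by-reference: the method mutates the caller's object directly.
        applyFee(original);
        System.out.println(original.balance); // now 95
    }
}
```

Code that silently relied on the copy (expecting the caller's object to stay unchanged) breaks when call-by-reference is enabled, which is why this optimization must be applied with care.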

Queuing and Database access


The data sources for WebSphere Application Server provide optimized access to databases. In order to efficiently configure queue sizes (such as maximum connection pool size) it is important to understand how best to utilize WebSphere connection management. The following section explains how prepared statements can improve performance of a data source queue.

Prepared Statement Cache


The WebSphere data source optimizes the processing of prepared statements. A prepared statement is a precompiled SQL statement that is stored in a prepared statement object. This object can then be used to efficiently execute the given SQL statement multiple times. The WebSphere data source manages a pool of database connections, as well as an associated cache of prepared statement objects. Prepared statements are cached separately for each connection that executes them. The following figure illustrates the relationship between the SQL statements used, the prepared statement cache, a data source and a database. The application uses five SQL statements (two selects, one delete, one insert and one update).

Note: Prepared statements are optimized for handling parametric SQL statements that benefit from precompilation. If the JDBC driver specified in the data source supports precompilation, the creation of a prepared statement sends the statement to the database for precompilation. Some drivers do not support precompilation; in this case, the statement may not be sent to the database until the prepared statement is executed.


[Figure: A data source with a connection pool (MAX = 3) and a prepared statement cache (MAX = 10), in front of a database.]

SQL statements used by the application:
    1. SELECT B from Table where Uid = ?
    2. SELECT A from Table where Uid = ?
    3. DELETE from Table where Uid = ? and B = ?
    4. INSERT into Table (A,B) values (?,?)
    5. UPDATE Table set A = ? where Uid = ?

Prepared statement cache contents (cache slot: connection, statement):
    1: connection 1, statement 3        6: connection 2, statement 2
    2: connection 1, statement 2        7: connection 3, statement 4
    3: connection 1, statement 1        8: connection 3, statement 2
    4: connection 2, statement 3        9: connection 3, statement 1
    5: connection 2, statement 1       10: connection 3, statement 5

The data source is configured with a maximum of three concurrent connections to the database. The connections have already been created and many SQL statements have been executed. The prepared statement cache has been configured to hold 10 statements. Three prepared statements are cached for each of Connections 1 and 2; Connection 3 has four statements cached. Because statements are compiled into prepared statements as they are used, the prepared statement cache reflects the database usage patterns of the application. The prepared statement cache implements a FIFO queue. A prepared statement object, representing a given SQL statement, can appear multiple times in the prepared statement cache; in particular, it can appear once for every connection in the connection pool. For example, Statements 1 and 2 appear three times -- once for each connection. Statement 3 does not appear for Connection 3, and Statements 4 and 5 appear only for Connection 3. Hence, it might take a little longer to execute Statements 4 and 5 if they occur on Connections 1 and 2, because of the need to recompile them for those connections. A better alternative for this example would be a prepared statement cache of size 15 (five prepared statements for each of the three connections).

Rule of thumb: size of prepared statement cache = number of SQL statements * size of connection pool
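The rule of thumb follows directly from the FIFO eviction: if the cache holds fewer slots than statements times connections, some (connection, statement) pair must eventually be evicted and re-prepared. The toy model below illustrates this behavior; it is a sketch, not WebSphere's implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of a bounded FIFO prepared statement cache, keyed by
// (connection, SQL text). Illustrative only -- not WebSphere's code.
public class FifoStatementCache {
    private final Map<String, String> cache;

    public FifoStatementCache(final int maxEntries) {
        // accessOrder=false gives insertion-order (FIFO) eviction.
        this.cache = new LinkedHashMap<String, String>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxEntries;
            }
        };
    }

    // Returns true on a cache hit; false means the statement had to be
    // (re)prepared for this connection.
    public boolean execute(int connection, String sql) {
        String key = connection + ":" + sql;
        boolean hit = cache.containsKey(key);
        if (!hit) {
            cache.put(key, key); // "prepare" and cache the statement
        }
        return hit;
    }
}
```

With a cache smaller than statements times connections, a statement that was already prepared can be pushed out by traffic on other connections and must be prepared again, which is exactly the recompilation cost the rule of thumb avoids.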

In WebSphere Application Server V3.0.2, to set the cache size for an application server, add the following statement to its command line arguments:

-Dcom.ibm.ejs.dbm.PrepStmtCacheSize=15

In WebSphere Application Server V3.5, extended configuration can be provided on a data source by specifying a file, datasources.xml, in the /websphere/appserver/properties directory. The following example sets the prepared statement cache size, as well as an Oracle-specific setting that increases the number of rows fetched while retrieving result sets (from the default of 10):

<data-sources>
  <data-source name="trade">
    <attribute name="statementCacheSize" value="500" />
    <attribute name="defaultRowPrefetch" value="25" />
  </data-source>
</data-sources>

Queuing and Clustering



The application server cloning capabilities of WebSphere Application Server can be a valuable asset in configuring highly scalable production environments. This is especially true when your application is experiencing bottlenecks that prevent full CPU utilization of SMP servers. When adjusting the WebSphere system queues in clustered configurations, remember that when you add a server to a cluster, the server downstream may get twice the load. Consider the following example.
[Figure: Clustering and queuing -- clients flow through the network to a Web server, which feeds two servlet engine clones; each clone has a data source connected to the database (DB).]

Two servlet engine clones sit between a Web server and a data source. We can assume that the Web server, servlet engines and data source (but not the database) are all running on a single SMP server. Given these constraints, the following queue considerations apply:

- Web server queue settings can be doubled, to ensure that ample work is distributed to each servlet engine.
- Servlet engine queues might have to be reduced, to keep from saturating a system resource such as the CPU or some other resource that the servlets are utilizing.
- The data source at each servlet engine might have to be reduced, to keep from saturating a system resource such as the CPU or a remote resource such as the database server.
- Java heap parameters might have to be reduced for each instance of the application server. For the versions of the JVM shipped with WebSphere Application Server V3.02 and V3.5, it is crucial that the heaps of all JVMs remain in physical memory. Therefore, if a cluster of four JVMs is running on a system, enough physical memory must be available for all four heaps. (The next section discusses the tuning of Java heap settings.)


Tuning Java Memory


The following section focuses on tuning Java memory management. Enterprise applications written in the Java programming language often involve complex object relationships and utilize large numbers of objects. Although Java technology automatically manages memory associated with an object's life cycle, it is important to understand the object usage patterns of your application. In particular, you must ensure that:
1. Your application is not over utilizing objects
2. Your application is not leaking objects (i.e., memory)
3. Your Java heap parameters are set to handle your object utilization
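As a concrete, hypothetical instance of the first concern (over utilization of objects), consider string building. Repeated String concatenation in a loop creates a discarded intermediate object on every pass, while a StringBuffer reuses one growing buffer; both produce the same result, but with very different garbage collection load:

```java
public class ObjectChurn {

    // Allocates a new String (plus a temporary buffer) on every pass;
    // each intermediate result immediately becomes garbage.
    static String wasteful(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s = s + "x";
        return s;
    }

    // Reuses one buffer; far fewer objects for the collector to sweep.
    static String frugal(int n) {
        StringBuffer sb = new StringBuffer(n);
        for (int i = 0; i < n; i++) sb.append('x');
        return sb.toString();
    }

    public static void main(String[] args) {
        // Identical output, very different allocation behavior.
        System.out.println(wasteful(5).equals(frugal(5)));
    }
}
```

A Java profiler makes this kind of churn visible as a high allocation rate for short-lived objects.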

Before we go into these topics, we will first look at how to visualize the impacts of garbage collection and how to use garbage collection as a way to gauge the health of your WebSphere application.

The Garbage Collection Bottleneck


Examining Java garbage collection (GC) can give you insight into how your application is utilizing memory. First, it's important to mention that garbage collection is one of the strengths of Java technology. By taking the burden of memory management away from the application writer, Java applications tend to be much more robust than applications written in non-garbage-collected languages. This robustness applies as long as your application is not abusing objects. It is normal for garbage collection to consume anywhere from 5 percent to 20 percent of the total execution time of a well-behaved WebSphere application. If not kept in check, however, GC can be your application's biggest bottleneck, especially when running on SMP server machines. The problem with GC is simple: during garbage collection, all application work stops. This is because modern JVMs support a single-threaded garbage collector. During GC, not only are freed objects collected, but memory is also compacted to avoid fragmentation. It is this compacting that forces Java technology to stop all other activity in the JVM. The following illustration shows how GC can impact performance on a two-way SMP computer.

[Figure: Time spent in Garbage Collection -- CPU % plotted over time on a two-way SMP. Processor #1 (thick line) and Processor #2 (thin line) both run near 100 percent; during each garbage collection, Processor #2 dips toward 0 percent.]

Here we plot CPU utilization over time. Processor #1 is represented by the thick line and Processor #2 by the thin line. Garbage collection always runs on Processor #1. The graph starts with both processors working efficiently at nearly 100 percent utilization. At some point, GC begins taking place on Processor #1. Because all

work is suspended (except the work associated with GC), Processor #2 drops to almost 0 percent utilization. After GC completes, both processors resume work. JVM technology is evolving rapidly. In particular, the IBM family of JVMs continues to improve features like garbage collection, threading and the just-in-time compiler. For example, in a typical WebSphere Application Server workload with servlets, JSP components and data access, moving from IBM Developer Kit, Java Technology Edition, V1.1.6 to V1.1.8 improved GC performance by a factor of two. JVM 1.2.2, which is supported in WebSphere V3.5, continues to improve GC performance. IBM's Developer Kit 1.3 addresses the single-threaded GC issue by adding multithreaded GC support. WebSphere Application Server will support 1.3 in a future release.

The Garbage Collection Gauge


Use garbage collection to gauge your applications health. By monitoring garbage collection during the execution of a fixed workload, you can gain insight into whether your application is over utilizing objects. GC can even be used to detect the presence of memory leaks. GCStats is a utility program that tabulates statistics using the output of the -verbosegc flag of the JVM. A listing of the program is provided in Appendix A. The following is a sample output from GCStats:
> java GCStats stderr.txt 68238

-------------------------------------------------
- GC Statistics for file - stderr.txt
-------------------------------------------------
-* Totals -
-  265      Total number of GCs
-  7722 ms. Total time in GCs
-  12062662 Total objects collected during GCs
-  4219647 Kbytes. Total memory collected during GCs
-* Averages -
-  29 ms.   Average time per GC. (stddev=4 ms.)
-  45519    Average objects collected per GC. (stddev=37 objects)
-  15923 Kbytes. Average memory collected per GC. (stddev=10 Kbytes)
-  97 %.    Free memory after each GC. (stddev=0%)
-  8% of total time (68238ms.) spent in GC.
___________________________ Sun Mar 12 12:13:22 EST 2000

Using GCStats is simple. First, enable verbose garbage collection messages by setting the -verbosegc flag on the Java command line of the WebSphere Application Server. For test environments, the minimum and maximum heap sizes should be the same. Pick an initial heap parameter of 128M or greater assuming you have ample system memory. (We will go into heap settings later.) The following screen shot illustrates how to set these parameters:


[Screenshot: WebSphere Admin Console -- Java command line setting with -verbosegc and the heap size parameters specified for the application server.]

After the WebSphere Application Server is started, detailed information on garbage collection will be logged to the standard error file. It is important to clear the standard error file before each run so that statistics are limited to a single experiment. The test case run during these experiments must be a representative, repetitive workload so that we can measure how much memory is allocated in a steady-state cycle of work. (These are the same requirements as for the test used to draw the throughput curve in Step 1.) It is also important that you can determine how much time your fixed workload requires: the tool driving the workload against your test application must be able to track the total time spent. This number is passed to GCStats on the command line. For example, if your fixed workload (such as 1000 HTTP page requests) takes 68238 ms to execute, and verbosegc output is logged to stderr.txt in the current directory, then use the following command line: >java GCStats stderr.txt 68238. To ensure meaningful statistics, run your fixed workload long enough for your application to reach a steady state. This typically means running for at least several minutes.

Detecting Over Utilization of Objects


GCStats can provide clues as to whether your application is over utilizing objects. The first statistic to look at is the total time spent in GC (the last statistic presented in the output). As a rule of thumb, this number should not be much larger than 15 percent. The next statistics to examine are: average time per GC, average memory collected per GC, and average objects collected per GC. (Average objects is not available in JDK 1.2.2.) Looking at these statistics gives an idea of the amount of work occurring during a single GC. If the test numbers lead you to believe that over utilization of objects is causing a GC bottleneck, there are three possible actions. The most cost-effective remedy is to optimize your application by implementing object caches and pools; use a Java profiler to determine which objects in your application should be targeted. If for some reason you can't optimize your application, a brute-force solution is a combination of server cloning and additional memory and processors (in an SMP computer). The additional memory allows each clone to maintain a reasonable heap size, and the additional processors allow the clones to distribute the workload among multiple processors. Statistically speaking, when one application server clone starts a GC, it is likely that the others will not be in GC, but doing application work. A third possibility is to move to WebSphere Application Server V3.5, which is based on JDK 1.2.2. The improved GC technology in this JVM will likely reduce GC times dramatically.
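A minimal object pool along the lines suggested above might look like the following sketch. The class name and the choice of StringBuffer as the pooled type are illustrative assumptions; a real pool would hold whatever expensive-to-create objects your profiler identifies:

```java
import java.util.Stack;

// Minimal, illustrative object pool: callers borrow an instance
// rather than constructing a new one, reducing allocation and
// garbage collection pressure under heavy load.
public class SimplePool {

    private final Stack pool = new Stack();

    // Hand out a pooled StringBuffer, or create one if the pool is empty.
    public synchronized StringBuffer borrow() {
        if (pool.isEmpty()) return new StringBuffer();
        StringBuffer sb = (StringBuffer) pool.pop();
        sb.setLength(0); // reset state before reuse
        return sb;
    }

    // Return an instance to the pool for later reuse.
    public synchronized void release(StringBuffer sb) {
        pool.push(sb);
    }

    public static void main(String[] args) {
        SimplePool pool = new SimplePool();
        StringBuffer sb = pool.borrow();
        sb.append("request work");
        pool.release(sb);
        // The same instance comes back out, with its state cleared.
        System.out.println(pool.borrow() == sb);
    }
}
```

Note that the pool methods are synchronized because servlet engines dispatch requests on multiple threads.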


Detecting memory leaks


Memory leaks in Java technology are a dangerous contributor to GC bottlenecks. They are worse than memory over utilization, because a memory leak will ultimately lead to system instability. Over time, garbage collection will occur more and more frequently until finally your heap is exhausted and Java fails with a fatal Out of Memory exception. Memory leaks occur when an unneeded object has references that are never deleted. This most commonly occurs in collection classes, such as Hashtable, because the table itself will always have a reference to the object, even after all real references have been deleted. GCStats can provide insight into whether your application is leaking memory. There are several statistics to monitor for memory leaks. However, for best results, you will have to repeat experiments with increasing workload durations (such as 1000, 2000, and 4000 page requests). Clear the standard error file, and run GCStats after each experiment. You are likely to have a memory leak if, after each experiment:
- The percentage of free memory after GC decreases measurably
- The standard deviations (stddev) increase measurably

Memory leaks must be fixed. The best way to fix a memory leak is to use a Java technology profiler that allows you to count the number of object instances. Object counts that exhibit unbounded growth over time indicate a memory leak.
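The Hashtable pattern described above can be reproduced in a few lines. This is a deliberately broken illustration (the class and method names are invented for the example, not taken from the appendices):

```java
import java.util.Hashtable;

// Deliberate leak: entries are registered but never removed, so the
// Hashtable keeps every buffer reachable and the garbage collector
// can never reclaim them -- instance counts grow without bound.
public class LeakyRegistry {

    private static final Hashtable cache = new Hashtable();

    public static void handleRequest(int requestId) {
        byte[] workBuffer = new byte[1024];
        cache.put(new Integer(requestId), workBuffer);
        // BUG: no matching cache.remove(...) when the request completes,
        // so every buffer allocated here lives for the life of the JVM.
    }

    public static int leakedEntries() {
        return cache.size();
    }
}
```

In a profiler, this shows up exactly as described: the count of byte[] instances grows linearly with the number of requests served.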

Java Heap Parameters


The Java heap parameters can influence the behavior of garbage collection. There are two Java heap parameters on the Java command line: -ms (starting heap size) and -mx (maximum heap size). Increasing these creates more space for objects to be created. Because this space takes longer for your application to fill, your application runs longer before a GC occurs. However, a larger heap also takes longer to sweep for freed objects and to compact, so garbage collection also takes longer. For performance analysis, set -ms and -mx to equal values (i.e., -ms256m -mx256m). This assures that the optimal amount of heap memory is available to your application and that Java technology is not overworked attempting to grow the heap. In production, this need not be the case. On newer JVMs (e.g., 1.2, 1.3), it is advisable to allow the JVM to adapt the heap to the workload: at a high level, the JVM contains a mechanism that tries to adapt the heap to your working set size. Hence, with WebSphere Application Server V3.5 and beyond, you might consider setting your -ms value to half the value of -mx (i.e., -ms128m -mx256m).


Varying Java Heap Settings


[Figure: three CPU % vs. time profiles (Processor #1 and Processor #2), each annotated with time spent in garbage collection, for heap settings -ms256M/-mx256M, -ms128M/-mx128M, and -ms64M/-mx64M.]

The above illustration represents three CPU profiles, each running a fixed workload with varying Java heap settings. The center profile has -ms128M and -mx128M; we see four GCs, and the total time in GC is about 15 percent of the total run. When we double the heap parameters to 256M, as in the top profile, the length of the working time between GCs increases. There are only three GCs, but the length of each GC also increases. In the third graph, the heap size was reduced to 64M, which exhibits the opposite effect: with a smaller heap, both the time between GCs and the time for each GC are shorter. Note that for all three configurations, the total time in garbage collection is approximately 15 percent. This illustrates an important concept about the Java heap and its relationship to object utilization: garbage collection is a fact of life in Java technology. You can pay now by setting your heap parameters to smaller values, or pay later by setting your heap to larger values, but garbage collection is never free. Use GCStats to search for optimal heap settings. Run a series of test experiments varying the Java heap settings; for example, run experiments with 128M, 192M, 256M, and 320M. During each experiment, monitor total memory usage. If you expand your heap too aggressively, you might start to see paging occur. (Use vmstat or the Windows NT performance monitor to check for paging.) If you see paging, either reduce the size of your heap or add more memory to your system. After each experiment, run GCStats, passing it the total time of the last run. When all runs are done, compare the following statistics from GCStats:
- Total number of GCs
- Average time per GC
- Percentage of total time spent in GC

You will likely see behavior similar to the graph above. However, if your application is not over utilizing objects and it has no memory leaks, it will hit a state of steady memory utilization, in which garbage collection will occur less frequently and for short durations.


Relaxing Auto Reloads


The first rule of WebSphere Application Server tuning is to turn off, or greatly relax, all dynamic reload parameters. When your application's resources (such as servlets and enterprise beans) are fully deployed, it is not necessary to reload these resources as aggressively as you would during development. For a production system, it is common to reload resources only a few times a day. In WebSphere Application Server V3.x, there are three types of auto reloading: servlets, JSP components, and Web server configuration.

Servlet Reload Interval


Each Web application has the ability to dynamically reload servlets. This is useful when upgrading to a new version of a given servlet. To relax these values, go to the Web Application panel of the Administrative Console. Either set the Reload Interval to a very large value or set the Auto Reload field to False.

[Screenshot: WebSphere Admin Console, Web Application panel -- set Reload Interval (sec) to a very large value or set Auto Reload to False; remember to scroll down to find these fields.]

JSP Reload Interval


In WebSphere Application Server V3.02 and above, the JSP processing engine is implemented as a servlet. The JSP 1.0 servlet is invoked to handle requests destined for all JSP files matching the assigned URL filter, such as /myApp/*.jsp. Each time a JSP file is requested, the corresponding disk file is checked to see if a newer version is available. This can be an expensive operation. Change the rechecking interval by adding an init parameter, minTimeBetweenStat, to the JSP servlet. The default value is 1000 (i.e., 1000 ms).

[Screenshot: WebSphere Admin Console -- adding the init parameter minTimeBetweenStat with a value such as 100000 to the JSP servlet, alongside the Reload Interval (sec) setting.]


Web Server Configuration Reload Interval


WebSphere Application Server administration tracks a variety of configuration information about WebSphere resources. Some of this information needs to be understood by the Web server as well, such as uniform resource identifiers (URIs) pointing to WebSphere resources. This configuration data is pushed to the Web server via the WebSphere plug-in. This allows new servlet definitions to be added without having to restart any of the WebSphere servers. Unfortunately, the dynamic regeneration of this configuration information is an expensive operation. A setting in websphere_root/properties/bootstrap.properties defines the interval between these updates.

[Configuration: in bootstrap.properties, the ose.refresh.interval setting -- shown here with a value of 20000.]


Summary
There are four categories of activities related to tuning the WebSphere Application Server Standard and Advanced Editions. The first, which is not covered in detail in this paper, is application tuning. Tuning your application for performance requires special tools, such as Java technology profilers, to visualize the memory and CPU utilization of your application. This paper focuses on a methodology for tuning WebSphere system parameters: relaxing auto reloading of WebSphere resources, tuning the WebSphere system queues, and tuning Java memory. A synopsis of the methodology is outlined below.

Step 1. Relax Auto Re-loading

1. Relax or disable auto reloading of the servlet engine, JSP files, and Web server configuration.

Step 2. WebSphere System Queues

1. Identify the WebSphere system queues used in your production environment: network, Web server, servlet engine, EJB server, data source.
2. Identify a representative and reproducible workload (test application). This will likely require the use of a stress test tool such as LoadRunner by Mercury or WebStone.
3. Draw the throughput curve. Use a fixed number of requests to your test application, varying the client load. After each experiment, note the throughput, response time, system memory, and CPU utilization.
4. Find the saturation point and adjust queues. Adjust queue settings using the Max. Application Concurrency value as a reference point. Queue upstream, preferably in the network. Readjust queues for access patterns and after adding additional servers in a cluster. Use SEStats.java (or Resource Analyzer) to understand concurrency in the servlet engine.

Step 3. Tuning Java Memory

1. Understand how your application is using memory. Using the test application from Step 2, enable -verbosegc and use GCStats.java. Repeat experiments with a fixed workload while varying the Java heap parameters. Run GCStats.java against the output in stderr.log to determine the percentage of time spent in GC.
2. Understand if your application is leaking memory. Repeat experiments with increasing workloads and fixed heap parameters. Run GCStats.java after each experiment, taking note of the percentage of free memory after GC.
3. Adjust heap parameters. Using results from the utilization and leak experiments, determine your best heap settings.

Using the above methodology, your WebSphere system should run optimally from both a performance and stability perspective.


Appendix A - SEStats.java
/**
 * SEStats
 *
 * A servlet program that keeps some simple statistics for a web application
 * running in a servlet engine. The servlet initializes itself as a listener
 * of servlet invocation events to keep track of the number of requests being
 * concurrently serviced and the time it takes each request to finish.
 *
 * The servlet reports statistics on an overall basis, i.e., since the servlet
 * was initialized, and on an interval basis. To configure the interval length,
 * set the "intervalLength" initial servlet parameter to the number of requests
 * you want in each interval's statistics. The default intervalLength setting
 * is 100 requests.
 *
 * @version %I%, %G%
 * @author Carmine F. Greco
 *
 * 3/17/2000 Initial coding
 */
import com.ibm.websphere.servlet.event.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.io.PrintWriter;
import java.util.*;

public class SEStats extends HttpServlet implements ServletInvocationListener {

    static Integer lock = new Integer(0);
    static int activeThreads;
    static int aggregateCount;
    static int aggregateComplete;
    static int intervalMax;
    static int overallMax;
    static long totalServiceTime;
    static long lastIntervalServiceTime;
    static int intervalLength = 100;
    static Vector intervalMaxs;
    static Vector intervalTimes;
    static Hashtable urlCount;

    public void init(ServletConfig config) {
        // Initialize all servlet counters and variables.
        activeThreads = 0;
        aggregateCount = 0;
        aggregateComplete = 0;
        intervalMax = 0;
        overallMax = 0;
        totalServiceTime = 0L;
        lastIntervalServiceTime = 0L;
        intervalMaxs = new Vector();
        intervalTimes = new Vector();
        urlCount = new Hashtable();

        // Check for initial parameters
        String tmp;
        if ((tmp = config.getInitParameter("intervalLength")) != null) {
            intervalLength = Integer.valueOf(tmp).intValue();
        }

        // Register as a listener for this web application's servlet invocation events
        ServletContextEventSource sces = (ServletContextEventSource)
            config.getServletContext().getAttribute(ServletContextEventSource.ATTRIBUTE_NAME);
        sces.addServletInvocationListener(this);
    }

    public void doGet(HttpServletRequest req, HttpServletResponse res) {
        try {
            PrintWriter out = res.getWriter();
            out.println("<HTML><HEAD><TITLE>Servlet Engine Statistics</TITLE></HEAD>");
            out.println("<BODY>");
            out.println("<h1>Servlet Engine Statistics</h1>");
            out.println("<h2>Overall Statistics</h2>");
            out.println("<table border>");
            out.println("<TR><TD>Total service requests:</TD><TD>" + aggregateCount + "</TD></TR>");
            out.println("<TR><TD>Overall maximum concurrent thread count:</TD><TD>" + overallMax + "</TD></TR>");
            out.println("<TR><TD>Total service time (ms):</TD><TD>" + totalServiceTime + "</TD></TR>");
            out.println("</table>");
            out.println("<h2>Interval Statistics</h2>");
            out.println("Interval length: " + intervalLength);
            out.println("<table border>");
            out.println("<TR><TH>Interval</TH><TH>Interval Maximum concurrent thread count</TH><TH>Interval Time</TH></TR>");
            for (int i = 0; i < intervalMaxs.size(); i++) {
                if (((Integer) intervalMaxs.elementAt(i)).intValue() == overallMax) {
                    out.println("<b><TR><TD>" + i + "</TD><TD>" + intervalMaxs.elementAt(i) +
                                "</TD><TD>" + intervalTimes.elementAt(i) + "</TD></TR></b>");
                } else {
                    out.println("<TR><TD>" + i + "</TD><TD>" + intervalMaxs.elementAt(i) +
                                "</TD><TD>" + intervalTimes.elementAt(i) + "</TD></TR>");
                }
            }
            out.println("</table>");
            out.println("</BODY>");
            out.println("</HTML>");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void destroy() {
        // Clean up the listener registration
        ServletContextEventSource sces = (ServletContextEventSource)
            getServletConfig().getServletContext().getAttribute(ServletContextEventSource.ATTRIBUTE_NAME);
        sces.removeServletInvocationListener(this);
    }

    /*
     * ServletInvocationListener
     */
    public void onServletStartService(ServletInvocationEvent event) {
        synchronized (lock) {
            activeThreads++;
            aggregateCount++;
            // Keep track of the interval maximum
            if (activeThreads > intervalMax) {
                intervalMax = activeThreads;
            }
            // Keep track of the overall maximum
            if (intervalMax > overallMax) {
                overallMax = intervalMax;
            }
            if (aggregateCount % intervalLength == 0) {
                // Record and reset interval stats
                intervalMaxs.addElement(new Integer(intervalMax));
                intervalMax = 0;
            }
        }
    }

    public void onServletFinishService(ServletInvocationEvent event) {
        synchronized (lock) {
            aggregateComplete++;
            activeThreads--;
            // Add response time to the running total
            totalServiceTime += event.getResponseTime();
            if (aggregateComplete % intervalLength == 0) {
                // Record total interval response time
                intervalTimes.addElement(new Long(totalServiceTime - lastIntervalServiceTime));
                lastIntervalServiceTime = totalServiceTime;
            }
        }
    }
}

Appendix B - GCStats.java
// GCStats.java
// This utility tabulates data generated from a verbose garbage collection trace.
// To run this utility type:
//    java GCStats inputfile [total_time]
//
// Gennaro (Jerry) Cuomo - IBM Corp. 03/2000
// Carmine F. Greco 3/17/00 - JDK 1.2.2 compatibility
//
import java.io.*;
import java.util.*;

public class GCStats {

    static int  total_time = -1;                     // total time of run in ms
    static long total_gctime = 0, total_gctime1 = 0; // total time spent in GCs
    static long total_bytes = 0,  total_bytes1 = 0;  // total bytes collected
    static long total_free = 0,   total_free1 = 0;   // total % free after GCs
    static int  total_gc = 0;                        // total number of GCs

    static boolean verbose = false; // debug trace on/off

    public static void parseLine(String line) {
        // Parsing a string that looks like this...
        // <GC(31): freed 16407744 bytes in 107 ms, 97% free (16417112/16777208)>
        if (isGCStatsLine(line)) { // First test if line starts with "<GC..."
            if (verbose) System.out.println("GOT a GC - " + line);
            long temp = numberBefore(line, " bytes") / 1024; // get total memory collected
            total_bytes += temp;
            total_bytes1 += (temp * temp);
            temp = numberBefore(line, " ms");  // get time in GC
            total_gctime += temp;
            total_gctime1 += (temp * temp);
            temp = numberBefore(line, "% free"); // get % free
            total_free += temp;
            total_free1 += (temp * temp);
            if (temp != 0) {
                total_gc++; // total number of GCs
            }
        }
    }

    public static int numberBefore(String line, String s) {
        int ret = 0;
        int idx = line.indexOf(s);
        int idx1 = idx - 1;
        if (idx > 0) {
            // The string was found; now walk backwards until we find the blank
            while (idx1 != 0 && line.charAt(idx1) != ' ') idx1--;
            if (idx1 > 0) {
                String temp = line.substring(idx1 + 1, idx);
                if (temp != null) {
                    ret = Integer.parseInt(temp); // convert from string to number
                }
            } else {
                if (verbose) System.out.println("ERROR: numberBefore() - Parse error looking for " + s);
            }
        }
        return ret;
    }

    public static boolean isGCStatsLine(String line) {
        return ((line.indexOf("<GC") > -1) &&
                (line.indexOf(" freed") > 0) &&
                (line.indexOf(" bytes") > 0));
    }

    public static void main(String args[]) {
        String filename = null;
        BufferedReader foS = null;
        boolean keepgoing = true;

        if (args.length == 0) {
            System.out.println("GCStats - ");
            System.out.println(" - ");
            System.out.println(" - Syntax: GCStats filename [run_duration(ms)]");
            System.out.println(" -   filename = file containing -verbosegc data");
            System.out.println(" -   run_duration(ms) = duration of fixed work run in which GCs took place");
            return;
        }
        if (args.length > 0) { filename = args[0]; }
        if (args.length > 1) { total_time = Integer.parseInt(args[1]); }
        if (verbose) System.out.println("Filename=" + filename);

        try {
            foS = new BufferedReader(new FileReader(filename));
        } catch (Throwable e) {
            System.out.println("Error opening file=" + filename);
            return;
        }
        while (keepgoing) {
            String nextLine;
            try {
                nextLine = foS.readLine();
            } catch (Throwable e) {
                System.out.println("Cannot read file=" + filename);
                return;
            }
            if (nextLine != null) {
                parseLine(nextLine);
            } else {
                keepgoing = false;
            }
        }
        try {
            foS.close();
        } catch (Throwable e) {
            System.out.println("Cannot close file=" + filename);
            return;
        }

        System.out.println("-------------------------------------------------");
        System.out.println("- GC Statistics for file - " + filename);
        System.out.println("-------------------------------------------------");
        System.out.println("-**** Totals ***");
        System.out.println("- " + total_gc + " Total number of GCs");
        System.out.println("- " + total_gctime + " ms. Total time in GCs");
        System.out.println("- " + total_bytes + " Kbytes. Total memory collected during GCs");
        System.out.println("- ");
        System.out.println("-**** Averages ***");
        double mean = total_gctime / total_gc,
               stddev = Math.sqrt((total_gctime1 - 2 * mean * total_gctime + total_gc * mean * mean) / total_gc);
        int imean = new Double(mean).intValue(),
            istddev = new Double(stddev).intValue();
        System.out.println("- " + imean + " ms. Average time per GC. (stddev=" + istddev + " ms.)");
        mean = total_bytes / total_gc;
        stddev = Math.sqrt((total_bytes1 - 2 * mean * total_bytes + total_gc * mean * mean) / total_gc);
        imean = new Double(mean).intValue();
        istddev = new Double(stddev).intValue();
        System.out.println("- " + imean + " Kbytes. Average memory collected per GC. (stddev=" + istddev + " Kbytes)");
        mean = total_free / total_gc;
        stddev = Math.sqrt((total_free1 - 2 * mean * total_free + total_gc * mean * mean) / total_gc);
        imean = new Double(mean).intValue();
        istddev = new Double(stddev).intValue();
        System.out.println("- " + imean + "%. Free memory after each GC. (stddev=" + istddev + "%)");
        if (total_time > 0 && total_gctime > 0) {
            System.out.println("- " + ((total_gctime * 1.0) / (total_time * 1.0)) * 100.0 +
                               "% of total time (" + total_time + "ms.) spent in GC.");
        }
        System.out.println("___________________________ " + new Date());
        System.out.println("");
    }
}


Trademarks

The following terms are trademarks of International Business Machines Corporation in the United States, other countries, or both: AIX, IBM, WebSphere.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product, and service names may be trademarks or service marks of others.

