
JBoss Application Server Tuning Guide

Tuning: to adjust for maximum usability or performance. JBoss AS 4.0.4 / Tomcat 5.5.17

Mark Newton and Mark Cheung. Published October 2005.

RED HAT

APPLICATION STACK

Introduction
The JBoss Application Server exists to provide services to the applications that run on it. By providing common services such as security, transaction management, messaging, and database connectivity, an application server allows a developer to concentrate on developing business logic unique to each application. The second section of this guide covers the concepts of tuning, including the aims of tuning, the problems typically encountered, and some of the techniques used to overcome them.

The ability of the server to host a wide range of applications makes it a very useful piece of software. Thanks to the modular design of JBoss, you can add or remove services as required, which further increases its usability. The third section of this guide explains how this is done and shows how to create a custom configuration.

Whichever configuration you choose, it's important to consider the performance of the server in a production environment. The ability to quickly and reliably process growing numbers of concurrent requests from users or other systems is critical to the success of a business. That's why it's critical to ensure that the hardware and software in your system are performing to the best of their abilities. It's unlikely that the out-of-the-box settings of the JBoss Application Server will provide the best application performance. JBoss is configured for developers by default to help speed up application development. Configuration changes are nearly always required to gain performance, because each application has its own unique requirements. Further, running multiple applications on the same server will inevitably use more resources, so applications must be configured correctly to ensure efficient usage. The fourth section of this guide provides step-by-step procedures to configure JBoss for maximum performance.

The fifth section of this guide covers tuning of the Java Virtual Machine (JVM). Specifically, it examines resizing the heap and adjusting the garbage-collection algorithms to maximize application throughput. Faster JVM choices, as well as the benefits of 64-bit versus 32-bit processing, are also discussed.


Tuning Concepts
Before you begin tuning, you should understand that although performance is important, you should not sacrifice correctness or stability to achieve it. The aim of tuning is to make an application perform in the most efficient manner, which typically means understanding where the bottlenecks are located. The often-quoted "don't optimize too early" maxim is normally true: the design should avoid obvious potential problems, but only performance testing and careful measurement will show where the true bottlenecks lie.

Tuning Aims
Guaranteed response time
J2EE application servers aren't designed as real-time systems, and neither are most operating systems, so guaranteed small response times are not possible.

Reducing response time


One option for tuning a system is to try to reduce the average response time for requests. This usually involves avoiding intensive processing in interactive requests.

Increasing throughput
Another option is to process as many transactions as possible within a given period. This involves using resources in the most efficient manner without worrying too much about long response times. However, care must be taken that long-running processes do not hold locks; this can reduce concurrency and decrease throughput.

Bottlenecks
CPU
A computer has only so much CPU power. Once you reach 100% CPU utilization, no more processing can be performed. A saturated CPU can lead to longer response times and a decrease in throughput. In these situations, adding more CPUs can alleviate these problems and increase performance.

Memory
Physical memory is also a limited resource. To counter this limitation, modern operating systems use a technique called virtual memory paging. This saves unused regions of data, called pages, from memory to disk when memory becomes full so that the memory can be reused by another application. The pages of data are loaded back into memory when they are next needed by the original application. However, excessive paging can negatively affect performance. For example, collecting garbage from paged memory is many times slower than collecting garbage from data held in physical memory, because the disk must be accessed repeatedly. Using more memory to avoid these kinds of problems is often a good strategy, because memory is a relatively cheap resource. The amount you can add depends on the address space of the machine; for example, 32-bit machines can only address a maximum of 4GB. The scalability of the garbage collector may also be a factor, especially for systems with a great deal of memory (see Section 5, The JVM, for more information about garbage-collector options).

Threads
Thread management is critically important for an application server because threads allow multiple requests to be processed concurrently. The operating system scheduler is responsible for this concurrency, as it allocates processing time to each thread in turn using a scheduling routine. Some modern operating systems have very efficient scheduling routines, but they may still perform slowly when there are too many threads. Additionally, each thread has a stack that holds parameters, local variables, and return values for any methods that it calls. These stacks consume memory and, together with the overhead from the scheduler, put a limit on the total number of threads that can be created before performance starts to suffer. Adding more CPUs can help, because the threads can be shared between them and run in parallel to increase performance.
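When stack memory itself becomes the limiting factor, the per-thread stack size can often be reduced with the JVM's -Xss option. The snippet below is a minimal sketch only: the 256k value is an illustrative assumption, bin/run.conf is assumed to be where your installation sets JAVA_OPTS, and stacks that are too small cause StackOverflowError, so any change must be verified under load.

    # Sketch only: reduce the per-thread stack size (value is an assumption; verify with load tests)
    # Appended to JAVA_OPTS in bin/run.conf (or exported before starting run.sh)
    JAVA_OPTS="$JAVA_OPTS -Xss256k"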


Communication/serialization
Communication between machines on a network always involves network latency, which affects response times. Network latency becomes more significant when there are many small requests rather than a few big requests, because more network calls are made for a given amount of data. Other latencies occur when the communication mechanism performs buffering: a request or response may be complete inside an operating system buffer, but its delivery is delayed until the buffer is filled. Serialization is the conversion of Java objects into a byte format and back again (communication is done in bytes, whether it is a raw byte stream or some other format such as XML). Serialization consumes significant CPU power, and serialization times often far outweigh network latencies.

Locking
In any concurrent system, access to shared resources must be controlled. This is usually achieved with locks, which lead to threads waiting for a resource to become available while another thread uses it. Thus, a shared resource can become a global point of contention, which can essentially turn a multithreaded application server into a single-threaded machine.

JBoss Configuration
A JBoss configuration is the set of available J2EE-compliant services. The following sections describe the three JBoss configurations available out of the box, along with instructions for creating custom configurations.

JBoss Out-of-the-Box Configurations


When you first download and unpack the JBoss 4.0.4 distribution, you are given three server configurations to choose from: default, minimal, and all. Each configuration contains a different set of services, which allows JBoss to run as anything from a bare-bones server with no J2EE capabilities all the way up to a fully loaded J2EE server with clustering and additional features such as aspect-oriented programming support.

A note about J2EE compliance: most importantly, JBoss 4.0.4 is certified as J2EE 1.4-compliant, meaning that it has been successfully tested against Sun Microsystems' Technology Compatibility Kit (TCK). (The TCK is run against a subset of the all configuration because it requires a custom configuration for the JMS queues, database connections, etc.) Because only binaries can be tested against the TCK, builds from source code cannot be described as a compatible implementation of the J2EE specification. Thus, writing or deploying applications to builds based on source code can lead to a lack of portability. If you want to guarantee portability, consider deploying production applications using the pre-built and pre-certified binaries of the JBoss 4.0.4 Application Server.

JBoss Custom Configurations


The JBoss Application Server isn't limited to the three server configurations supplied out of the box. In fact, you're encouraged to create custom configurations that contain only the services you require for your applications. This has the following benefits (a minimal example of creating and starting a custom configuration is sketched after this list):
• Removing redundant services reduces the complexity of the system and makes it easier to understand.
• Removing redundant services with open ports increases the security of your system, as it decreases the number of entry points into the application server.
• The memory footprint and the total number of threads in the server decrease as fewer services are started, which helps increase performance.
• The start-up time of the server is reduced if fewer services are started.
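This guide does not prescribe an exact procedure at this point, but a common approach, shown here as an assumption-based sketch, is to copy an existing configuration directory, strip the unneeded services from its deploy and conf directories, and start the server with the -c flag. The configuration name myconfig below is purely illustrative.

    # Sketch only: create a custom configuration by copying the default one
    cd jboss-4.0.4.GA/server
    cp -r default myconfig
    # ...remove unneeded service descriptors from myconfig/deploy and myconfig/conf...

    # Start JBoss using the custom configuration
    cd ../bin
    ./run.sh -c myconfig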


Tuning JBoss
The following sections examine the various steps required to tune JBoss 4.0.4 with embedded Tomcat 5.5.17. The out-of-the-box default server configuration is used as the starting point for each step. Consequently, any paths given are relative to the jboss-4.0.4.GA/server/default directory.

Web Containers (Tomcat)


The JBoss web container, Tomcat, can be found in the deploy/jbossweb-tomcat55.sar directory. Tuning of Tomcat focuses on three areas: connectors, JSPs, and access logging. Tomcat supports three connectors: HTTP 1.1, Apache (AJP 1.3), and SSL/TLS. By default, the HTTP and AJP connectors are enabled in JBoss, while the SSL/TLS connector is not. Each of these connectors can be disabled by commenting out or removing the relevant element in the deploy/jbossweb-tomcat55.sar/server.xml file.
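The JSP area mentioned above is not detailed in this section, but one commonly applied production change is to disable JSP development mode so that Tomcat stops checking every JSP for modification on each request. The fragment below is a sketch only: it assumes the JspServlet is declared in deploy/jbossweb-tomcat55.sar/conf/web.xml, and you should verify the exact element against your own installation before editing it.

    <!-- Sketch: JspServlet with development mode disabled for production
         (verify against your deploy/jbossweb-tomcat55.sar/conf/web.xml;
          surrounding elements omitted) -->
    <servlet>
        <servlet-name>jsp</servlet-name>
        <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
        <init-param>
            <param-name>development</param-name>
            <param-value>false</param-value>
        </init-param>
    </servlet>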

Connectors
Enabling/disabling the HTTP and Apache Tomcat connectors
The JBoss default settings enable both direct connections to Tomcat via HTTP and indirect connections via Apache/mod_jk. If both connection styles are used, you can leave the default settings in place. However, if only one connection style is used, you can remove the other style and thereby reduce the footprint and start-up time of JBoss.

If your users connect directly to Tomcat via HTTP and do not pass through Apache/mod_jk:
• Open deploy/jbossweb-tomcat55.sar/server.xml with a text editor.
• Remove/comment the following XML fragment:

    <!-- A AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" address="${jboss.bind.address}"
        emptySessionPath="true" enableLookups="false" redirectPort="8443"
        protocol="AJP/1.3"/>

See the following section for specific instructions on tuning the Tomcat HTTP connector.

If your users always pass through Apache/mod_jk and do not connect directly to Tomcat via HTTP:
• Open deploy/jbossweb-tomcat55.sar/server.xml in a text editor.
• Remove/comment the following XML fragment:

    <!-- A HTTP/1.1 Connector on port 8080 -->
    <Connector port="8080" address="${jboss.bind.address}"
        maxThreads="250" strategy="ms" maxHttpHeaderSize="8192"
        emptySessionPath="true" enableLookups="false" redirectPort="8443"
        acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true"/>

A complete list of the available attributes for this connector, together with their meanings, can be found at: http://jakarta.apache.org/tomcat/tomcat-5.5-doc/config/http.html
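As an illustration of the kind of adjustment this section describes, the fragment below shows the default HTTP connector with its thread pool and accept queue enlarged for a heavier concurrent load. The values 400 and 200 are assumptions for demonstration only; appropriate values depend on your hardware and must be confirmed by load testing.

    <!-- Sketch: HTTP connector with illustrative (assumed) capacity settings -->
    <Connector port="8080" address="${jboss.bind.address}"
        maxThreads="400" minSpareThreads="25" maxSpareThreads="75"
        acceptCount="200" maxHttpHeaderSize="8192"
        emptySessionPath="true" enableLookups="false" redirectPort="8443"
        connectionTimeout="20000" disableUploadTimeout="true"/>

Raising maxThreads only helps if the CPUs and the downstream resources (database connections, EJB pools) can keep up, so increase it in step with those settings.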


Tuning the EJB Container


The container-configurations element in the conf/standardjboss.xml file declares the standard container configurations for EJBs. These standard configurations will be used automatically unless a custom configuration is defined. The application assembler can define custom configurations in the jboss.xml file by creating new configurations or extending the standard configurations. Of course, the application assembler can change the standard configurations directly in the conf/standardjboss.xml file; if you do that, the change will apply to all EJBs. For this reason, we strongly recommend that you leave the standard configurations alone and use the jboss.xml file to define custom per-EJB configurations.
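A minimal sketch of what such a per-EJB override can look like is shown below. The configuration name, the pool size, and the bean name are all illustrative assumptions; the exact container options you override should be taken from your conf/standardjboss.xml.

    <!-- Sketch: META-INF/jboss.xml extending a standard container configuration
         (names and values below are assumptions for illustration) -->
    <jboss>
        <container-configurations>
            <container-configuration extends="Standard Stateless SessionBean">
                <container-name>My Tuned Stateless SessionBean</container-name>
                <container-pool-conf>
                    <MaximumSize>200</MaximumSize>
                </container-pool-conf>
            </container-configuration>
        </container-configurations>
        <enterprise-beans>
            <session>
                <ejb-name>OrderProcessorBean</ejb-name>
                <configuration-name>My Tuned Stateless SessionBean</configuration-name>
            </session>
        </enterprise-beans>
    </jboss>

Because the extends attribute inherits everything from the standard configuration, only the settings you actually want to change need to be listed, and the change applies only to the beans that reference the new configuration name.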

Tuning the Java Virtual Machine (JVM)


The Java Virtual Machine (JVM) provides a runtime execution environment for Java bytecode and acts as a layer of abstraction over the operating system. Thanks to the JVM, Java code can run unaltered on many different types of computing platforms. Sun Microsystems recognized that it would not be economical or realistic to produce a JVM for every conceivable type of operating system, so the company produced the Java Virtual Machine specification. This allows anyone to create a JVM implementation and brand it as compliant with the specification, provided it passes a series of tests available from Sun. A compliant JVM guarantees that Java code will run as expected in a consistent manner, fulfilling the Java promise of "write once, run anywhere." Sun Microsystems has created its own JVM, called HotSpot, which runs on Solaris, Linux, and Windows. Likewise, IBM has created a JVM (the AIX JVM) for its proprietary operating system AIX, and Apple has created its own JVM for Mac OS X. BEA, which does not have its own operating system, has also created a JVM called JRockit for Windows, Linux, and, most recently, Solaris to add value to its Java business. There are many others available, including hardware JVMs, but the vast majority of systems in production use the handful mentioned here.

The JVM
The JVM provides a runtime execution environment for Java bytecode and acts as a layer of abstraction over the operating system. JVM performance, specifically in the area of garbage collection, can have a significant impact on overall system and application performance. However, JVM tuning is a very complex topic and requires a fairly deep understanding of JVM mechanisms and operations. Given the effort needed to tune a JVM, the first order of business is to determine whether JVM performance is a problem. The only way to determine this is by testing your application with realistic loads. Under these conditions, you'll be able to observe the performance characteristics of the garbage collectors and determine whether JVM changes are required. The following section describes how to determine the characteristics of garbage collection in your system/application.


Determining Whether JVM Tuning Is Necessary


We strongly recommend running a load test to verify the duration and frequency of garbage collections. In general, if the load test indicates that an application spends more than 20% of its time doing garbage collections, that clearly indicates a problem. If an application spends less than 5% of its time doing garbage collections, that clearly indicates that tuning the JVM will not improve performance. If the application spends between 5% and 20% of its time performing garbage collections, then tuning the JVM may improve performance.

To run the load test:
• Set the JAVA_OPTS command-line options to report the time and frequency of garbage collections (for example, -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:loggc.out). These options are described in Table 1, and one way to set them is sketched after the table. Be sure to remove them in the production system.

-verbose:gc
    Turns on the logging of GC information.

-Xloggc:<filename>
    Specifies the name of a log file where the verbose GC information is written (instead of standard output).

-XX:+PrintGCTimeStamps
    Prints the times at which GCs happen, relative to the start of the application.

-XX:+PrintGCDetails
    Gives detailed information about GCs, such as the size of the young and old generations before and after each GC, the size of the total heap, and the time taken by collections in the young and old generations.

Table 1: Garbage collection statistics options
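One way to apply these options when starting JBoss, sketched here under the assumption that you start the server with bin/run.sh on a Unix-like system, is to append them to JAVA_OPTS (for example in bin/run.conf) before invoking the run script:

    # Sketch only: enable GC logging for a load test (remove these options in production)
    JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:loggc.out"
    export JAVA_OPTS
    ./run.sh -c default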


• Run the load test to determine the frequency and duration of collections. Evaluate the frequency of garbage collections and the time spent on each one. A good rule of thumb is that minor garbage collections should occur approximately every 30 seconds, with full (major) collections occurring much less frequently. View and analyze the loggc file to make sure that the heap settles down (rather than grows) after a full collection. You can also use the PrintGCStats tool¹ to analyze the total percentage of time the application spends on garbage collections.
• Adjust the heap size parameters (-Xms, -Xmx, etc.), run the load test again, evaluate the time spent in major and minor collections, and repeat until you find the settings that provide the minimum average overhead from garbage collections (one possible set of values is sketched after this list).
• If major garbage collections still do not fit within the maximum allowed pause time, consider other garbage-collection algorithms. Normally, the default algorithms work well; however, if you decide to consider other algorithms, the parallel copying collector (-XX:+UseParNewGC) for the new generation and the concurrent collector (-XX:+UseConcMarkSweepGC) for the old generation may result in performance gains. See the vendor's JVM documentation for additional options.

Note: there are other JVM performance-measurement techniques in addition to those presented here, including visualgc (available from Sun Microsystems) and JConsole. Further, JVM 1.5 provides automatic tuning features; if you use it, you may not be able to improve on its settings. However, the procedure above is known to work on JVM versions between 1.2 and 1.5 inclusive, and JBoss Consulting uses this process to determine whether there is a problem because it's known to be accurate.
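As a concrete illustration of the heap and collector adjustments described above, the line below shows one possible JAVA_OPTS setting. The sizes and collector choices are assumptions for demonstration only and must be validated by your own load tests; the concurrent collector in particular is mainly appropriate when pause times, rather than raw throughput, are the primary concern.

    # Sketch only: illustrative heap sizing and alternative collectors (all values are assumptions)
    JAVA_OPTS="$JAVA_OPTS -Xms1024m -Xmx1024m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC"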

If you need more details to back up any of these points, please visit www.redhat.com for whitepapers, positioning information, and other resources, or contact your Red Hat representative for one-on-one answers to your questions.
¹ Freely available from Sun Microsystems at http://java.sun.com/developer/technicalArticles/programming/turbo/.

Confidential and proprietary to Red Hat, Inc. Copyright 2006 Red Hat, Inc. All rights reserved. Red Hat, Red Hat Linux, and the Red Hat Shadowman logo are registered trademarks of Red Hat, Inc. in the US and other countries. Linux is a registered trademark of Linus Torvalds.
