D61523GC20
Edition 2.0
May 2011
D72553
Copyright © 2011, Oracle and/or its affiliates. All rights reserved.
Trademark Notice
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names
may be trademarks of their respective owners.
Contents
1 Course Overview
Course Objectives 1-2
Target Audience 1-3
Introductions 1-4
Course Schedule 1-5
Course Appendix 1-7
Course Practices 1-8
Classroom Guidelines 1-9
For More Information 1-10
Related Training 1-11
Oracle by Example (OBE) 1-12
2 WLST Monitoring
Objectives 2-2
WLS Domains: Review 2-3
Java Management Extension (JMX): Review 2-4
WLS MBean Hierarchies 2-5
WLS MBean Reference Documentation 2-6
Console Monitoring: Review 2-8
WebLogic Scripting Tool (WLST): Review 2-9
WLST MBean Syntax: Review 2-10
Domain Runtime 2-11
Basic Jython Syntax: Review 2-12
Basic WLST Commands 2-13
Variable Declaration 2-14
Password Management 2-15
Error Handling 2-16
File I/O 2-17
Standard Jython Libraries 2-18
WLST Example: Monitor a JMS Server 2-19
Quiz 2-20
Summary 2-23
Practice 2-1 Connecting to the Classroom Grid 2-24
Practice 2-2 Developing a Custom Monitoring Script 2-25
3 Guardian
Objectives 3-2
Guardian Capabilities 3-3
Using Guardian 3-4
Guardian Architecture 3-5
Agent Installation 3-6
Collected Data 3-7
Client Installation 3-8
Guardian User Interface 3-9
Activating a Domain 3-10
Creating a Domain Inventory 3-11
Signatures and Bundles 3-12
Updating the Signature Repository 3-13
Signature Annotations 3-14
Evaluating a Domain 3-15
Evaluation Summary 3-16
Generating a Support Request 3-17
Command-Line Interface 3-18
Quiz 3-19
Summary 3-22
Practice 3-1 Using Guardian to Evaluate a Domain 3-23
WLDF WLST Examples 4-20
Section Summary 4-22
Road Map 4-23
Harvester Architecture 4-24
Metric Collector Definitions 4-25
Configuring a Metric Collector 4-26
Watches and Notifications 4-28
Configuring a Watch 4-29
Watch Alarms 4-31
Configuring a JMS Notification 4-32
Configuring an Email Notification 4-33
Harvester WLST: Example 4-34
Watch WLST: Example 4-35
WLDF Sample Framework 4-36
Section Summary 4-37
Practice 4-1 Harvesting Diagnostic Metrics 4-38
Road Map 4-39
New Monitoring Dashboard 4-40
Viewing the Dashboard 4-41
Monitoring Dashboard Interface 4-42
Views 4-43
Built-In Views 4-44
Creating a Custom View 4-45
Metric Browser 4-46
Anatomy of a Chart 4-47
Chart and Graph Properties 4-48
Chart Styles 4-49
Current and Historical Data 4-50
Section Summary 4-51
Practice 4-2 Monitoring Diagnostic Metrics 4-52
Road Map 4-53
Subsystem Debugging 4-54
Console Debug Scopes 4-55
Debug Scopes: Examples 4-56
Debug Logging 4-57
WLST Debugging: Examples 4-58
Section Summary 4-59
Quiz 4-60
Summary 4-64
5 Diagnostic Instrumentation
Objectives 5-2
Road Map 5-3
Instrumentation Scenarios 5-4
Instrumentation Architecture 5-5
Monitor Actions 5-6
Application-Scoped Modules 5-8
WLS Monitor Library 5-9
Deployment Plan Review 5-11
WLDF and Deployment Plans 5-12
WLDF Deployment Plan: Example 5-13
WLDF Hot Swap 5-14
Configuring a System-Scoped Monitor 5-15
Configuring an Application-Scoped Monitor 5-17
Aspect-Oriented Programming (AOP) Concepts 5-18
Custom Monitors 5-19
Instrumentation WLST: Example 5-20
Instrumentation and Request Performance 5-21
Section Summary 5-22
Practice 5-1 Configuring and Monitoring Diagnostic Events 5-23
Road Map 5-24
Request Context ID 5-25
Viewing Context IDs 5-26
Request Dying 5-27
Available Dyes 5-28
Configuring a Dye Injection Monitor 5-29
Event Filtering 5-30
Configuring Dye Masks 5-31
Event Throttling 5-32
Configuring Throttle Properties 5-33
Section Summary 5-34
Quiz 5-35
Summary 5-38
Practice 5-2 Tracing a Client Request 5-39
6 JVM Diagnostics
Objectives 6-2
Road Map 6-3
Basic Java Concepts 6-4
Java Virtual Machine (JVM): Review 6-5
Oracle JVM Support 6-6
JVM Recommendations 6-7
JVM Memory 6-8
Garbage Collection 6-9
Sun HotSpot Garbage Collection 6-10
Garbage Collection (GC) Types 6-11
Setting WLS JVM Arguments 6-12
Basic Sun JVM Arguments 6-13
JRockit Garbage Collection 6-14
Basic JRockit JVM Arguments 6-15
Out of Memory 6-16
Out-of-Memory Response 6-17
Memory Leak 6-18
JVM Crash 6-19
JVM Error Log 6-20
Section Summary 6-21
Road Map 6-22
JVM Tool Varieties 6-23
Java Stack Trace 6-24
Java Thread Dump: Overview 6-25
Thread Dump Signal 6-26
JVM Crash Actions 6-27
Verbose GC 6-28
Sun JVM Profiler Agent 6-29
Sun JVM Diagnostic Tools: Overview 6-30
Sun Diagnostic Tools: Examples 6-31
JVisualVM 6-33
Using JVisualVM 6-34
Section Summary 6-36
Practice 6-1 Troubleshooting a Running JVM 6-37
Road Map 6-38
Console JVM Monitoring 6-39
JVM WLST: Example 6-40
WLS Low Memory Detection 6-41
Configuring Low Memory Detection 6-42
Section Summary 6-43
Road Map 6-44
JRockit Diagnostic Tools: Overview 6-45
JRockit Diagnostic Tools: Examples 6-46
Management Communication 6-47
JRockit Mission Control (JRMC) 6-48
JRockit Discovery Protocol (JDP) 6-49
JVM Browser 6-50
Management Console: Features 6-51
Management Console: General > Overview 6-52
Management Console: Runtime > Threads 6-53
Management Console: MBeans > Triggers 6-54
JRockit Flight Recorder (JFR) 6-55
Integration of JRockit Flight Recorder and WLDF 6-56
Starting the Flight Recorder from JRMC 6-57
Flight Recorder Output 6-58
General > Overview 6-59
Memory: Object Statistics 6-60
Code > Overview 6-61
Memory Leak Detector (Memleak): Features 6-62
Memleak: Trend Tab 6-63
Memleak: Type Graph 6-64
Section Summary 6-65
Quiz 6-66
Summary 6-70
Practice 6-2 Troubleshooting Applications on JRockit 6-71
Too Many Open Files Errors 7-22
Quiz 7-23
Summary 7-25
Practice 7-1 Investigating Classpath Problems 7-26
8 Troubleshooting Servers
Objectives 8-2
Road Map 8-3
WLS Message Catalog: Review 8-4
Server Startup Errors 8-5
Boot Identity Errors 8-6
WLS Native Libraries 8-7
Setting the Native Library Path 8-8
Causes of Unresponsive Servers 8-9
WLS Threading Architecture 8-10
Execute Thread State 8-11
Work Managers 8-12
Work Manager Architecture 8-13
Creating a Work Manager 8-14
Creating and Using a Request Class 8-15
Assigning Work Managers to Applications 8-16
Monitoring a Server Thread Pool 8-17
Monitoring Individual Server Threads 8-18
Server Monitoring: WLST Examples 8-19
Server WLDF Image Contents 8-20
Java Deadlock Concepts 8-21
Thread Analysis 8-22
Lock Chains 8-23
Stuck Thread Detection 8-24
Overload Protection 8-25
Configuring Overload Protection 8-26
Section Summary 8-27
Practice 8-1 Investigating Server Problems 8-28
Road Map 8-29
WLS Deployment: Review 8-30
Deployment Errors 8-32
Application Staging 8-33
Deployment Memory Errors 8-34
Shared Library: Review 8-35
Library Errors 8-36
Deployment Debug Flags 8-37
Application Error Handling 8-38
Application Monitoring: Review 8-39
Application Monitoring: WLST Examples 8-40
Section Summary 8-41
Quiz 8-42
Summary 8-45
9 Troubleshooting JDBC
Objectives 9-2
JDBC: Review 9-3
Data Sources: Review 9-4
JDBC Management: WLST Examples 9-5
JDBC Runtime Attributes 9-6
JDBC Monitoring: WLST Examples 9-7
JDBC WLDF Image Contents 9-8
JDBC WLDF Monitor: Review 9-9
Data Source Diagnostic Profiling 9-10
Configuring Diagnostic Profiling 9-11
JDBC Debug Flags 9-12
Other JDBC Debugging Tools 9-13
Common Configuration Errors 9-14
Configuration Error Examples 9-15
Insufficient Connection Errors 9-16
Connection Leaks 9-17
Database Cursor Considerations 9-18
Common Connection Errors 9-19
Statement Timeout 9-20
Data Sources and Database Availability 9-21
Retry Frequency and Login Timeout 9-22
Connection Testing: Review 9-23
Testing Trusted Connections 9-24
Firewall Considerations 9-25
Multi Data Source: Overview 9-26
Multi Data Source: Architecture 9-27
Java Persistence API (JPA): Overview 9-28
JPA Configuration: Overview 9-29
Troubleshooting JPA: Overview 9-30
Quiz 9-31
Summary 9-34
Practice 9-1 Investigating JDBC Problems 9-35
10 Troubleshooting JMS
Objectives 10-2
JMS: Review 10-3
WebLogic JMS Configuration: Review 10-5
JMS Transactions: Review 10-7
JMS Management: Overview 10-8
Console JMS Management 10-9
JMS Management: WLST Examples 10-10
JMS Runtime MBean Hierarchy 10-11
JMS Monitoring: WLST Examples 10-12
JMS Diagnostic Image Contents 10-13
JMS Message Logging 10-14
Configuring JMS Logging 10-15
JMS Debug Flags 10-16
Message Type Considerations 10-17
Common Configuration Errors 10-18
JMS Client Libraries 10-20
Out-of-Memory Errors and Quotas 10-21
Configuring a JMS Server Quota 10-22
Creating a Destination Quota 10-23
Message Paging 10-24
Too Many Pending Messages 10-25
Quota Blocking Policies 10-26
Thresholds and Flow Control 10-27
Configuring Thresholds 10-28
Tuning Flow Control 10-29
Lost Messages 10-30
Time to Live (TTL) 10-31
Expiration Policies 10-32
Delivery Mode 10-33
Message Redelivery 10-34
Time to Deliver (TTD) 10-35
Durable Subscriber Review 10-36
Monitoring and Managing Subscriptions 10-37
Duplicate Messages 10-38
Poison Messages 10-39
Consumer Acknowledgement Modes 10-40
Messages Out of Sequence 10-41
Unit of Order (UOO): Overview 10-42
Unit of Work (UOW): Overview 10-43
Message-Driven Beans (MDBs): Review 10-44
MDB Capabilities 10-45
MDB Runtime Attributes 10-46
MDB Diagnostics and Debugging 10-47
Quiz 10-48
Summary 10-51
Practice 10-1 Investigating JMS Problems 10-52
11 Troubleshooting Security
Objectives 11-2
Road Map 11-3
Secure Sockets Layer (SSL): Review 11-4
SSL Communication: Review 11-5
WebLogic SSL Scenarios 11-6
Proxy Server SSL Scenarios 11-8
Keystore: Review 11-9
Trust Keystores 11-10
Keytool: Review 11-11
WebLogic SSL Support 11-13
SSL Configuration: Review 11-14
Restarting SSL 11-16
SSL Debug Flags 11-17
SSL Handshake Trace 11-18
Other SSL Traces 11-19
Invalid Format or Cipher Errors 11-20
Certificate Validation Errors 11-21
Host Name Verification Errors 11-22
Certificate Chains 11-23
WLS Chain Validation Utility 11-24
Missing Constraint or Policy Errors 11-25
Section Summary 11-26
Practice 11-1 Investigating SSL Problems 11-27
Road Map 11-28
Security Realm: Review 11-29
Security Provider Stores 11-30
Some Security Providers 11-31
Embedded LDAP: Review 11-32
Embedded LDAP Backups 11-33
Embedded LDAP Synchronization Issues 11-34
Viewing Embedded LDAP Contents 11-35
LDAP Concepts 11-36
LDAP Structure 11-37
LDAP Search Operations 11-38
Resetting Admin Password in Embedded LDAP 11-39
Database Store Cache Synchronization Issues 11-40
Auditing Provider 11-41
Security Audit Events 11-42
Configuring the Auditing Provider 11-43
Realm Debug Flags 11-44
Typical Authentication Trace 11-45
Typical Role Mapping Trace 11-47
Typical Authorization Trace 11-48
LDAP Trace Log 11-49
Authentication Provider Control Flags 11-50
External LDAP Authentication Providers 11-52
LDAP Provider Configuration: Overview 11-53
Common LDAP Issues 11-56
Section Summary 11-57
Quiz 11-58
Summary 11-62
Practice 11-2 Investigating Security Realm Problems 11-63
Quiz 12-24
Summary 12-26
Practice 12-1 Investigating Node Manager Problems 12-27
13 Troubleshooting Clusters
Objectives 13-2
Road Map 13-3
Cluster Review 13-4
Proxy Plug-in Review 13-5
Obtaining and Using Plug-Ins 13-6
Oracle HTTP Server (OHS) Review 13-7
Oracle Process Manager and Notification Server (OPMN) Review 13-9
OPMNCTL Examples 13-10
OHS Logs 13-11
Plug-in Configuration Review 13-12
Basic Plug-in Parameters 13-13
Proxy Connection Architecture 13-14
Dynamic Server List 13-16
Connection Parameters 13-17
Common Connectivity Issues 13-18
Proxy SSL Issues 13-19
Proxy Debug Page 13-20
Proxy Debug Log 13-22
Typical Proxy Trace 13-23
Section Summary 13-24
Practice 13-1 Investigating Proxy Problems 13-25
Road Map 13-26
Cluster Communication Review 13-27
Unicast Architecture 13-28
Session Management Review 13-29
Session Persistence Review 13-30
In-Memory Replication Review 13-31
Cluster Monitoring WLST Examples 13-32
Session Monitoring WLST Examples 13-33
Session Monitoring Attribute 13-34
Session Instrumentation 13-35
Cluster Debug Flags 13-36
Typical Cluster Heartbeat Trace 13-37
Typical Replication Trace: Primary 13-38
Typical Replication Trace: Secondary 13-39
Common Replication Issues 13-40
HttpSession API Overview 13-41
Serialization Overview 13-42
Serialization Debug Messages 13-43
Section Summary 13-44
Quiz 13-45
Lesson Summary 13-48
Practice 13-2 Investigating Cluster Replication Problems 13-49
A WebLogic SNMP
Simple Network Management Protocol (SNMP) A-2
SNMP Architecture A-3
Object Identifier (OID) A-4
Management Information Base (MIB) A-5
WLS MIB and OIDs A-6
Common SNMP Message Types A-7
WLS SNMP Architecture A-8
Creating an SNMP Agent A-10
Configuring an SNMP Agent A-11
SNMP Channels A-12
WLS SNMP Notifications A-13
Creating Trap Monitors A-14
Creating Trap Destinations A-15
SNMP Security A-16
Configuring Agent Security A-17
Configuring SNMP V3 Credentials A-18
Configuring Trap Destination Security A-19
WLS SNMP Utility A-20
Course Overview
If you are concerned about whether your experience fulfills the course prerequisites, ask the
instructor.
Introduce yourself.
Tell us about:
• Your company and role
• Your experience with WebLogic Server
• Any previous Oracle product experience
Day 1, AM: WLST Monitoring; Guardian
Day 1, PM: Diagnostic Framework Essentials; Diagnostic Instrumentation
Day 2, AM: Diagnostic Instrumentation (continued); JVM Diagnostics
Day 2, PM: JVM Diagnostics (continued); Troubleshooting Java Applications; Troubleshooting Servers
The class schedule might vary according to the pace of the class. The instructor will provide
updates.
Day 3, AM: Troubleshooting JDBC
Day 3, PM: Troubleshooting JMS; Troubleshooting Security
Day 4, AM: Troubleshooting Security (continued); Troubleshooting Node Manager
Day 4, PM: Troubleshooting Clusters
Appendix A is intended as additional reference material and will be presented only if time
permits.
We hope that these guidelines help the class proceed smoothly and enable you to get the
maximum benefit from the course.
Topic Website
Education and Training http://www.oracle.com/education
Product Documentation http://www.oracle.com/technetwork/indexes/documentation
Product Downloads http://www.oracle.com/technetwork/indexes/downloads
Product Articles http://www.oracle.com/technetwork/articles
Product Support http://www.oracle.com/support/
Product Forums http://forums.oracle.com
Product Tutorials http://www.oracle.com/technetwork/tutorials
Sample Code http://samplecode.oracle.com
After you complete the course, Oracle provides a variety of channels through which developers
and administrators can access additional information.
Course Title
Oracle WebLogic Server: Advanced Administration
Oracle WebLogic Server: Monitor and Tune Performance
The Oracle by Example (OBE) series provides hands-on, step-by-step instructions on how to
implement various technology solutions to business problems. OBE solutions are built for
practical real-world situations, allowing you to gain valuable hands-on experience as well as
to use the presented solutions as the foundation for production implementation, dramatically
reducing time to deployment.
[Figure: Product installations hosting the admin server and managed servers of domains A and B; each server exposes MBeans.]
A managed bean (MBean) is a Java object that provides a Java Management Extensions
(JMX) interface. JMX is the Java standard for monitoring and managing resources on a
network. Like SNMP and other management standards, JMX is a public specification, and
many vendors of commonly used monitoring products support it. WebLogic Server provides a
set of MBeans that you can use to configure, monitor, and manage WebLogic Server
resources through JMX.
Each server in the domain has its own copy of the domain’s configuration documents (which
consist of a config.xml file and subsidiary files). During a server’s startup cycle, it contacts
the Administration Server to update its configuration files with any changes that occurred
while it was shut down. Then it instantiates configuration MBeans to represent the data in the
configuration documents.
All WebLogic Server MBeans can be organized into one of several general types. Runtime
MBeans contain information about the runtime state of a server and its resources. They
generally contain only data about the current state of a server or resource, and they do not
persist this data. When you shut down a server instance, all runtime statistics and metrics
from the runtime MBeans are destroyed. Configuration MBeans contain information about the
configuration of servers and resources. They represent the information that is stored in the
domain’s XML configuration documents.
All edits to MBean attributes occur within the context of an edit session, and only one edit
session can be active at a time within each WebLogic Server domain. Changing an MBean
attribute or creating a new MBean updates the in-memory hierarchy of pending configuration
MBeans. If you end your edit session before saving these changes, the unsaved changes will
be discarded. When you activate your changes, WebLogic Server copies the saved, pending
configuration files to all servers in the domain. Each server evaluates the changes and
indicates whether it can consume them. If it can, it updates its active configuration files and in-
memory hierarchy of configuration MBeans.
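The change process described above maps onto a handful of WLST commands. The following is a sketch only: it assumes an existing connection to the Administration Server, and the server name and attribute are illustrative.

```python
edit()                       # navigate to the edit MBean hierarchy
startEdit()                  # obtain the domain's exclusive edit lock
cd('/Servers/server1')
cmo.setListenPort(7011)      # change an attribute on a pending configuration MBean
save()                       # save the pending changes
activate(block='true')       # distribute them to the servers in the domain
```

If you decide not to keep the changes, cancelEdit() releases the edit lock and discards any unsaved edits.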
MBeans, like any Java objects, consist of attributes, which can be simple types such as
Strings or doubles, or they can be other MBeans. In other words, MBeans can contain other
MBeans and define a hierarchy of management objects. MBeans provide standard methods
for managing these relationships, which are typically of the form createXYZ, destroyXYZ, or
lookupXYZ, where XYZ is the type of child MBean. For example, the DomainMBean includes
an operation named lookupServer(String name), which takes as a parameter the name
that was used to create a server instance.
An MBean interface can also include other arbitrary operations that JMX clients can
invoke remotely. For example, JMS destinations can be administratively paused, resumed,
purged, and so on. Note that each attribute also has an implicit operation named get<name>,
where <name> is the attribute name. For configuration MBeans that are not read-only, there
may also be a matching set<name> operation.
Whenever a resource, service, or application object can be monitored, a Monitoring tab is
available in the console for that object. Clicking it shows you the available monitoring
information for the selected object. Moreover, when the monitoring page shows information in
tabular format, you may change the way that the information is displayed. To do this, click
“Customize this table” and choose which columns to display and on what columns to sort the
table.
One way to determine which MBean types and attributes correspond to the displayed
columns is by using the context-sensitive Help link found at the top of the console. For each
column, the Help page provides a description and the corresponding MBean information.
You cannot monitor the activity of one domain through another domain. For example, you
cannot open the Administration Console for domain Y and try to monitor servers within
domain Z.
jvm = getMBean('/JVMRuntime/server1')
print 'Free Heap: ' , jvm.getHeapFreePercent()
wm = getMBean('/WorkManagerRuntimes/weblogic.kernel.Default')
print 'Pending: ' , wm.getPendingRequests()
[Figure: JMS runtime MBean hierarchy. A JMSRuntime MBean (Name: ServerC.jms) has a JMSServers attribute that lists JMSServerRuntime MBeans (Name: HRJMS), and each JMSServerRuntime has a JMSDestinations attribute that lists JMSDestinationRuntime MBeans (Name: BenefitsJMSModule!SyncBenefitsQueue).]

getMBean('/JMSRuntime/ServerC.jms/JMSServers/HRJMS/Destinations/BenefitsJMSModule!SyncBenefitsQueue')
In the WLST file system, MBean hierarchies correspond to drives; MBean types and
instances are directories; MBean attributes and operations are files. In WLST, you traverse
the hierarchical structure of MBeans by using commands such as cd(), ls(), and pwd() in
a way similar to how you would navigate a file system in a UNIX or Windows command shell.
For example, to navigate back to a parent configuration or runtime bean, enter cd('..'). To
get back to the root bean after navigating to a bean that is deep in the hierarchy, enter
cd('/'). After navigating to an MBean instance, you interact with the MBean using WLST
commands.
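For example, a typical navigation session might look like the following (a sketch; the server name is illustrative):

```python
serverRuntime()            # switch to this server's runtime MBean hierarchy (a "drive")
ls()                       # list child MBean types and attributes, like a directory listing
cd('JVMRuntime/server1')   # descend to a child MBean instance
pwd()                      # display the current location in the hierarchy
cd('/')                    # return to the root bean
```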
[Figure: A WLST client connecting separately to the runtime MBean hierarchy of each server, contrasted with a single WLST connection to the Administration Server's domain runtime hierarchy.]
Server runtime MBeans expose monitoring, runtime control, and the current configuration
settings (read-only) of a specific WebLogic Server instance.
Some runtime MBeans are only available on the Administration Server, such as several types
of security MBeans. Access these “domain-wide” MBeans by using the domain runtime
hierarchy. This MBean server also acts as a single point of access for MBeans that reside on
other managed servers within the same domain.
If your client monitors runtime MBeans for multiple servers, or if your client runs in a separate
JVM, Oracle recommends that you connect to the domain runtime MBean hierarchy on the
Administration Server instead of connecting separately to the runtime MBean hierarchy on
each server instance in the domain. Your script then only needs to construct a single URL for
connecting to the domain runtime hierarchy on the Administration Server, instead of working
with multiple address/port combinations.
Using the domain runtime hierarchy, you can also route all administrative traffic in a domain
through the Administration Server’s secured administration port. Additionally, you could use a
firewall to prevent direct connections to managed server administration ports from outside
some network boundary.
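As a sketch of this recommendation (the host, port, and credentials are placeholders):

```python
connect('user', 'password', 't3://adminhost:7001')  # one URL: the Administration Server
domainRuntime()                                     # switch to the domain runtime hierarchy
# domainRuntimeService is a WLST built-in variable; getServerRuntimes()
# returns the runtime MBeans of every running server in the domain.
for s in domainRuntimeService.getServerRuntimes():
    print s.getName() + ': ' + s.getState()
```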
list = ['ab','cd','ef']              # list variable
if len(list) >= 3:                   # conditional expression
    for x in list:                   # for loop
        print 'length: ', len(x)     # print a number
    print 'done'
else:
    print 'list too small'
Jython programs start with the first line of code that is not commented, not indented, and is
not a function definition (def keyword). Leading white space (spaces and tabs) at the
beginning of a logical line is used to compute the indentation level of the line, which in turn is
used to determine the grouping of statements. First, tabs are replaced (from left to right) by
one to eight spaces so that the total number of characters up to and including the
replacement is a multiple of eight (this is intended to be the same rule as used by UNIX). The
total number of spaces preceding the first non-blank character then determines the line’s
indentation. Indentation cannot be split over multiple physical lines by using backslashes; the
white space up to the first backslash determines the indentation.
Jython is an interpreted language. As a result, the developer is not required to declare
variable types at design time; the Jython interpreter assigns types at run time. However,
each type has an application programming interface (API), and if your code uses the API of
one type but the interpreter has assigned a different type to the object at run time, the
interpreter throws an exception at run time. Ensure that you use the correct API for the
type that you expect an object to have at run time.
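To make this concrete, here is a small, self-contained sketch (plain Python/Jython, independent of WLST) showing the interpreter assigning types at run time and throwing an exception when the wrong API is used:

```python
x = 'hello'          # the interpreter assigns the type (a string) at run time
print(x.upper())     # the string API is available here

x = 42               # the same name is now bound to an integer
try:
    x.upper()        # the integer type has no upper() method
except AttributeError:
    print('wrong API for the type assigned at run time')
```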
# Variable definitions
url = 'localhost:7020'
username = 'user'
password = 'password'
dsName = 'MyDS'
targetName = 'MyServer'
Parameterize your WLST scripts: define and initialize variables in a preamble at the top of
the script. This practice promotes reuse and can aid in troubleshooting.
Alternatively, you can define variables in a separate text file and import them as variables by
using the loadProperties WLST command, as in the following example:
loadProperties('c:/temp/myLoad.properties')
connect(url=myurl)
The storeUserConfig() command creates a user configuration file and an associated key
file for the identity of the user currently connected to WebLogic Server. The user configuration
file contains an encrypted username and password. The key file contains a secret key that is
used to encrypt and decrypt the username and password. Only the key file that originally
encrypted the username and password can be used to decrypt the values. If you lose the key
file, you must create a new user configuration and key file pair. If you do not specify file
names or locations, the command stores the files in your home directory as determined by
your JVM. The location of the home directory depends on the SDK and type of operating
system on which WLST is running. The default file names are <username>-
WebLogicConfig.properties and <username>-WebLogicKey.properties.
If you do not specify credentials as part of the connect() command, and a default user
configuration file and key file exist in your home directory, then the command uses the
credentials found in those files. This option is recommended if you use WLST in script mode
because it prevents you from storing unencrypted user credentials in your scripts.
Alternatively, you can provide the specific locations and names of these files by using the
userConfigFile and userKeyFile command arguments.
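Putting the two commands together (a sketch; the URL and credentials are placeholders):

```python
# One-time setup: store encrypted credentials for the current user.
connect('user', 'password', 't3://adminhost:7001')
storeUserConfig()    # writes <username>-WebLogicConfig.properties and
                     # <username>-WebLogicKey.properties to your home directory
disconnect()

# Later, in script mode: no clear-text credentials required.
connect(url='t3://adminhost:7001')
```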
ds = getMBean('/JDBCSystemResources/' + dsName)
if ds != None:
...
The first example shows the use of a try/except block as part of creating a new server
resource, such as a JDBC data source. The try/except block is used while attempting to
navigate to a specific configuration MBean with the cd() command. If this code block
succeeds without an exception, the MBean already exists and the script is terminated. If an
exception is thrown (the expected case), the rest of the script is executed to create the new
resource. The Python “pass” statement is a required placeholder, although it performs no
actual work.
The second example shows how similar functionality can be accomplished using the
getMBean() command. Unlike the cd() command, getMBean() does not throw an
exception if the given MBean is not found.
Using print statements to inform the user of script progress and successful completion is
also good practice.
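For comparison, the first pattern (a try/except block around the cd() command) can be sketched as follows; the dsName variable is assumed to hold the resource name:

```python
try:
    cd('/JDBCSystemResources/' + dsName)
    print 'Data source already exists. Exiting.'
    exit()
except:
    pass    # expected case: the MBean was not found
# ...continue here and create the new resource...
```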
outfile = open('script.log','w')
...
outfile.write('Server State: ' + server.getState() + '\n')
outfile.close()
infile = open('data.in','r')
line = 'temp'
while len(line) != 0:
    line = infile.readline()
    ...
Jython includes a built-in file type and a built-in function open() for creating variables of this
type. In addition to supplying the file name and path, you can optionally indicate a mode ('r',
'w', 'a', for reading, writing and appending, respectively). In append mode, all write operations
are appended to the end of any existing file contents. The mode flag can also include a 't' or
'b' suffix to indicate text or binary mode. In text mode, newline characters are automatically
converted to the current platform's line terminator, and vice versa.
The read() command reads until the end of the file or up to a specified number of bytes, and
returns a string. If a size is not indicated, the whole file is read. This command returns an
empty string if the end of the file has been reached. The readline() command is similar,
but also stops reading if a line terminator is found. Finally, the readlines() command
returns a list of strings.
The write() command writes the supplied string to the file, and the writelines()
command writes a list of strings to the file.
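A minimal, runnable sketch that combines these calls (plain Python/Jython; the file name is illustrative):

```python
# Write a few lines, then read them back one line at a time.
outfile = open('demo.log', 'w')
outfile.writelines(['first line\n', 'second line\n'])
outfile.write('third line\n')
outfile.close()

infile = open('demo.log', 'r')
line = infile.readline()
while len(line) != 0:      # readline() returns '' at the end of the file
    print(line.strip())
    line = infile.readline()
infile.close()
```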
import sys
import os

# Environment variables
if os.environ.has_key('APP_HOME'):
    # Create or rename files
    os.rename(filename, filename + '.bak')
    ...
The sys module includes some general purpose global variables. The sys.argv variable is
a list that contains the command-line arguments passed to the Jython interpreter. The
expression sys.argv[0] evaluates to the script name.
The os module provides a portable way of using operating system–dependent functionality.
For example, the os.environ variable is a map object that provides access to system
environment variables. This module also has listdir(), mkdir(), rename(), and
rmdir() functions. If you simply want to read or write a file, use the built-in open() function.
If you want to manipulate system paths, see the os.path module instead.
Jython includes several modules for working with times and dates. The time module, for
example, includes a sleep() function.
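A short, runnable illustration of these modules (the environment-variable name is only an example):

```python
import os
import sys
import time

print(sys.argv[0])              # the script name
if 'HOME' in os.environ:        # os.environ behaves like a map of environment variables
    print('HOME is set')

start = time.time()
time.sleep(0.1)                 # pause for a tenth of a second
elapsed = time.time() - start
print('slept for about %.1f seconds' % elapsed)
```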
import time

server = 'serverA'
jmsserver = 'DefaultJMSServer'
while True:
    connect('user','password','myhost:7001')
    serverRuntime()
    # ...read and print JMS runtime MBean attribute values here...
    disconnect()
    time.sleep(60)
The example in the slide connects to a server and prints the values of various JMS runtime
MBean attributes every 60 seconds. The loop is written so that it runs indefinitely, until the
process is terminated. In this example, the WLST script connects to and disconnects from the
server each time data is collected. Connection creation is a relatively expensive operation, so
this approach can have an impact on the script’s or server’s responsiveness. Alternatively,
create an initial connection, reuse it, and automatically re-create it if the connection ever fails.
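The reuse-and-recover approach can be sketched generically (plain Python; make_connection and poll are hypothetical stand-ins for WLST's connect() and MBean reads):

```python
import time

def monitor(make_connection, poll, cycles, delay=0):
    """Reuse one connection; re-create it only when a poll fails."""
    conn = make_connection()
    results = []
    for _ in range(cycles):
        try:
            results.append(poll(conn))
        except Exception:
            conn = make_connection()       # the connection failed: rebuild it
            results.append(poll(conn))     # and retry the read once
        if delay:
            time.sleep(delay)
    return results
```

With this structure the script pays the connection cost once, instead of on every sampling cycle.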
Answer: a, c, d, e
Answer: c
Answer: d
Oracle Guardian:
• Is like a virus scanner for WebLogic Server
• Checks domains for common configuration and runtime
problems and then recommends solutions
• Includes graphical and command-line interfaces
• May also scan other Oracle products running on WLS
• Is free to those who have an Oracle support account
• Can also help automate support-case creation
Oracle Guardian is a diagnostic tool for identifying potential problems in your environment
before they occur, and then providing specific instructions for resolving them. Using Guardian
is like having the entire Oracle Customer Support team scrutinize your domain and
immediately present its findings and recommendations to you at your convenience.
Each Guardian installation maintains a registry of active domains. A domain is considered
active when it is capable of being evaluated. You can activate and deactivate domains at will,
and select which to evaluate at any given time. You can also organize the domains in
Guardian into domain groups for easier management.
When you conduct an evaluation that detects a problem pattern or “signature,” you can create
a service request directly from a selected signature in an evaluation summary. Guardian
creates and saves the service request data as a service request archive for later submission
to Oracle Support.
[Figure: Guardian architecture. The Guardian client (GUI or script), with its local signature repository, communicates over HTTP(S) with Guardian agents deployed on the admin server and managed servers of each domain. The agents gather data via JMX, and the client can submit service requests to Oracle Support.]
The Guardian agent is a lightweight Web application that gathers the data used for
evaluations. It collects data from the server via JMX and from the JVM, and it inspects
application resources such as deployment descriptors.
Both graphical and command-line Guardian clients are available. The first time you launch the
Guardian interface, it generates a workspace folder. Guardian stores all settings,
preferences, domain inventories, and domain evaluations in this workspace. To perform an
evaluation, the Guardian client communicates with the agents deployed on your servers.
The Guardian repository contains the locally persisted store of problem signatures available
for evaluation. When you download signatures from the Guardian update site, they arrive in a
Java Archive (JAR) file. The JAR file is stored in the repository/archives directory of your
Guardian installation directory.
To safeguard your domains, Guardian requires valid login credentials for all communication
between Guardian and your domains. Whenever you conduct an evaluation or
activate a domain, Guardian prompts you for the username and password of an Administrator
or Monitor account on the target domain.
SSL encryption is available for all communication with Oracle over the Internet and for all
communication with Guardian agents in your target domain(s).
To quickly deploy the Guardian agent Web application bundled with your WebLogic Server
installation:
1. Launch the console and select the domain name in the Domain Structure panel.
2. Select the Enable Oracle Guardian Agent check box and click Save.
3. Restart all servers in your domain.
The Guardian agent included with WLS can be found at
<WL_HOME>/server/lib/bea-guardian-agent.war. If a new version of the agent becomes
available, you can simply replace this file and perform an application update, or delete the
application deployment and reinstall the new file. Do not change the default name of the
Guardian agent when deploying it manually.
1. Install Guardian:
a. Launch the Guardian installer (guardian_installer.exe on Windows or the .bin
equivalent on UNIX).
b. Select the location in which to install Guardian. Accept the default, or click Choose
to open a file browser from which you can select a location. If you already have
Guardian installed, be sure to install it to a different location from that of the
previous Guardian installation.
c. Review your installation details and click Install to proceed.
d. Click Done to dismiss the wizard.
2. Launch Guardian and select a Workspace location. The workspace is the directory in
which all of your Guardian data is stored, including user preferences, domain
inventories, and domain evaluation summaries. To prevent loss of work when Guardian
is updated or uninstalled, select a workspace location outside of the Guardian
installation directory.
If you select the “Use this as the default workspace and do not ask again” check box,
Guardian will use the current location from then on when starting. To resume opening the
Select Workspace dialog box at startup, select the “Prompt for workspace on startup” check
box in the “Startup and Shutdown” section of the Guardian Preferences menu.
The main Guardian toolbar is located below the main menu bar and contains action buttons
for the most common Guardian tasks. You can place your cursor over a button to get a brief
description of its functionality. The Open button opens the currently selected entity, such as a
domain inventory, evaluation summary, or signature, in one of the tabs. These resources are
shown in separate tabbed document panels on the right side of the interface (by default). To
print the contents of the currently active document panel, use the Print button.
The navigation pane in Guardian resides on the left side of the interface and contains several
tabs to access various explorer panels. The Domain Explorer view enables you to browse,
manage, and monitor the domains you have registered with Guardian. The Signature Explorer
view allows you to browse the problem signatures in Guardian’s repository and view their
details.
Explorer and document panels contain title bar buttons and can be individually maximized,
minimized, or closed as needed. You can also drag and drop panels to reorder them. These
types of customizations are automatically saved in your workspace for subsequent Guardian
sessions.
This lesson provides only a high-level overview of the tool’s capabilities. For detailed
instructions on using the client interface, refer to Guardian’s integrated Help system
(available from the main menu).
Activating a domain:
• Registers the domain in your workspace
• Generates an initial domain inventory using the agent
1. Either click the Activate button in the main toolbar or select File > New > Domain from
the main menu.
2. On the General tab, enter the following:
- Protocol: Select “http://” or “https://.” Oracle recommends using the SSL
encryption (“https://”) option for communication between Guardian clients and
agents. Guardian utilizes open source, 128-bit encryption.
- Host Name: This is the listen address for the domain’s Administration Server.
- Port Number: This is the listen port for the domain’s Administration Server.
- Username/Password: A domain account that has administrator or monitor
privileges
- Remember Username/Password: Store the supplied credentials securely in the
Guardian workspace to avoid entering them for every subsequent task.
A domain inventory:
• Is a snapshot of a domain’s high-level configuration
(servers, applications, data sources, and so on)
• Is automatically generated prior to each evaluation
• Can also be created manually for use in a support case
After activating a domain, a node is added to the Domain Explorer view, in the Target
Domains folder. An initial domain inventory entry is also added to the domain node’s
Inventory History folder. Double-click an inventory to view its details in the document pane.
To explicitly create a new inventory file for a domain:
1. Either click the Inventory button in the main toolbar or select File > New > Inventory from
the main menu.
2. Select one of the available activated domains. If prompted, also specify the
Username/Password to access this domain.
To compare two domain inventories:
1. Within the Domain Explorer, expand the Inventory History folder for a domain.
2. Select two inventory files using the Ctrl or Shift keys.
3. Right-click the selected items and select Compare. A document panel then highlights
the differences between the two files. Various box and connector shapes are used to
indicate content that has been added, removed, or updated. Right-click within this panel
to see additional display options.
A Guardian signature:
• Identifies a pattern in domain evaluation data that indicates
a potential problem
• Is created by Oracle Support
• Has a severity level (Info, Warning, Critical, and so on)
A signature bundle:
• Is a collection of related signatures with similar
characteristics
• Can be evaluated together as a unit
Oracle Support has identified patterns in user domains that can cause problems. These
patterns are described in XML documents called signatures. Signatures form a primary
component of Oracle Guardian because they contain the distilled knowledge of Oracle
Support for both detecting potential problems and resolving them.
Signatures describe potential problems based on information about your WebLogic Servers
and the environment in which they are deployed, including JVMs, operating systems, and
databases. Signatures contain executable logic that can identify specific versions of these
products as well as their configuration settings. Signatures also contain a remedy
recommendation and a severity level: Critical, Warning, or Info. These severities approximate
the level of attention you should give the signature when it is detected.
A signature bundle is a group of signatures that are evaluated together against one or more
specified domains. Signatures are grouped into bundles based on their characteristics. For
example, the Security Advisories bundle contains signatures that detect potential security
problems for which Oracle has issued security advisories.
The Signature Explorer is a view that allows you to browse and interact with the available
signatures. If you double-click a signature in the Signature Explorer, a Signature Details editor
opens in the document pane.
1. Click the Update button on the main toolbar or select Help > Software Updates >
Guardian Updates from the main menu.
2. Enter your Oracle Support username and password. (Selecting the Remember
Username/Password check box is optional.)
3. On the Search Results page, select the features to install using the supplied check
boxes. Click More Info to see a more detailed description of each update.
4. After the updates are installed, you are prompted to restart Guardian. When Guardian
restarts, all the new signatures and application features you downloaded will be
available.
Annotations enable you to tag a signature with one or more persistent attributes. Use the
Annotations Wizard to create and manage custom annotations. The most common uses of
annotations are (a) to indicate that a signature should be ignored or skipped during an
evaluation and (b) to record custom notes and comments about a signature for later
reference.
1. Navigate to a signature list in either the Signature Explorer, Bundle Explorer, or an
Evaluation Summary. Right-click a signature and select Annotations > Manage
Annotations. Then, in the annotations table, click Add to create a new annotation for this
signature. Similarly, use the Edit and Delete buttons to update or remove them.
2. Enter the following values as needed:
- Type: Select “ignore” to skip this signature while evaluating the specified target
domains. Select “flag” to simply enter some comments. In either case, this
signature’s icon will now be decorated with symbols to indicate that it has
annotations.
- Name: Give the annotation some arbitrary name.
- Comment: Record custom notes about this signature.
- Domains: Specify which target domains this signature should be evaluated on.
- Evaluations: Indicate whether this annotation applies to a specific evaluation or any
evaluation.
To detect which signatures apply to your domain, you conduct an evaluation. Guardian
collects data about your domain environment and identifies which signatures apply to the
domain.
Note that the evaluation of some signatures can fail or cause the evaluation to hang. To
prevent this, use the Preferences page to set the Enable Safe Evaluation option.
1. Click the Evaluate button on the main toolbar. Alternatively, from the Domain Explorer,
right-click an available domain and select Evaluate.
2. Select one or more domains from the displayed table. In the Bundle column for the
indicated domain, select the signature bundle to evaluate. To test all problem
signatures, choose the All Signatures option. As always, enter the credentials to use to
access the domain. If you selected multiple domains, you must supply credentials for
each of them.
3. (Optional) Use the Create Shortcut check box to add a new shortcut to the Shortcut
Explorer. This is useful if you plan to perform this same evaluation routinely.
[Screenshot: Evaluation Summary. Select a detected signature to view its details, along with the suggested remedy and links to related documentation.]
When the evaluation is complete, the results are displayed in an Evaluation Summary file in
the document pane. The Evaluation Summary lists all the detected signatures, along with the
severity level, description, and recommended remedy for each.
For a given bundle of signatures, some signatures may not be used in the evaluation because
they target different products or versions than the domain is configured for. The ones that do
target what the domain has are counted as Targeted Signatures. The Targeted Signatures
are divided into Detected Signatures, which are actually found on the domain, and
Undetected Signatures, which are not found on the domain.
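This bookkeeping can be sketched in a few lines of plain Python (an illustrative model only; the field names and data shapes are not Guardian’s actual format):

```python
# Illustrative model of evaluation summary counts: every signature
# that targets the domain's product version is "targeted"; targeted
# signatures are then split into detected and undetected.
def summarize(signatures, domain_version):
    targeted = [s for s in signatures if domain_version in s["targets"]]
    detected = [s for s in targeted if s["detected"]]
    return {
        "targeted": len(targeted),
        "detected": len(detected),
        "undetected": len(targeted) - len(detected),
    }

bundle = [
    {"id": "SIG-1", "targets": ["10.3"], "detected": True},
    {"id": "SIG-2", "targets": ["10.3"], "detected": False},
    {"id": "SIG-3", "targets": ["9.2"], "detected": False},  # not targeted
]
counts = summarize(bundle, "10.3")
```

Note that the Targeted Signatures count is always the sum of the Detected and Undetected counts.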
Optionally, you can right-click one of the detected signatures in the summary and select
Filters. Filters enable you to hide or show certain signatures according to their characteristics.
On the bottom of the summary page, the following tabs provide different representations of
the summary data:
• Overview: The default view (as described above)
• Source: Provides the actual XML source from the summary document. It includes the
raw data for each detected and undetected signature.
• Report: Provides all the information contained in the Overview tab but in a printer-
friendly format
A support service request is a record that is created when you submit technical questions or
issues to Oracle Support. Customers with a support contract can open a service request over
the phone or online. Suppose that despite Guardian’s assistance you are unable to resolve a
given problem signature. Before creating a new service request, Guardian can create and
save the initial service request template as a file for later submission to Oracle Support.
These service request archive files include all the information from the signature along with
the corresponding evaluation summary and domain inventory files. This enables an Oracle
support engineer to quickly begin working on your service request upon receipt of the archive.
You can also add custom attachments and notes before creating the support request file.
1. Open an evaluation summary and select a signature. Then, within the Remedy section,
click the Get More Help link. If desired, enter any Additional Service Request Notes you
want the support engineer to see.
2. Select one of the servers in your domain. By default, the selected server’s log file is
included in the support request archive. Alternatively, you can disable the inclusion of
the server log and/or domain configuration files by using the supplied check boxes.
3. Enter the destination on the local file system to which Guardian will persist the service
request archive file.
Activate a domain:
guardianHeadless.sh -gactivateDomain
-t http://199.177.1.1:7001 -u myuser -p mypassword
-data /home/oracle/guardianWorkspace
The Guardian command-line interface provides a set of commands that can be issued directly
from the operating system shell, and therefore can also be scheduled to run automatically at
specific times. There is a Guardian command for each of the most common tasks you can
perform using the Guardian graphical interface. For a list of available commands, use the
-ghelp argument. To evaluate a domain, you must specify the bundle’s internal ID instead of
its display name. For example, the default bundle is 0 while the All Signatures bundle is 8.
This tool can either run a single command or a series of them placed in a separate script file
(-gscript command). Also note that on Windows all command output is directed to a file
named headless_output.txt, while on Linux it is simply written to the standard output
stream.
All commands require access to your Guardian workspace directory. If a workspace is not
explicitly specified, the default workspace is used. In addition to the commands shown in the
slide (-g<commandName>), the tool also supports the following:
• createShortcut: Create a shortcut to evaluate the specified domain and bundle.
• evaluateShortcut: Run the specified shortcut.
• listShortcuts: List the IDs of all available shortcuts in the workspace.
• inventoryDomain: Generate an inventory for an activated domain.
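Because these commands run from the operating system shell, they can be scheduled with standard tools such as cron. The entry below is only a sketch: the installation path, workspace path, credentials, and the exact evaluation command and flags are assumptions and should be verified against the output of -ghelp.

```
# Hypothetical crontab entry: evaluate the All Signatures bundle (ID 8)
# against a domain every night at 2:00 AM
0 2 * * * /opt/guardian/guardianHeadless.sh -gevaluateDomain -t http://adminhost:7001 -u myuser -p mypassword -b 8 -data /home/oracle/guardianWorkspace >> /tmp/guardian_eval.log 2>&1
```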
Answer: b, c, d
Answer: d
Answer: a, d, e
• WLDF Administration
– Architecture
– Logging Review
– Diagnostic Images
– Diagnostic Archives
– Diagnostic Modules
• Metric Collectors
• Monitoring Dashboard
• WLS Debugging
WLDF consists of a number of components that work together to collect, archive, and access
diagnostic information about a WebLogic Server instance and the applications it hosts.
[Figure: WLDF architecture. MBeans register with the Harvester via metric collectors, and code instrumentation creates event monitors. Collected metrics and logs flow to the data archive and event archive, and feed watches and notifications. Image capture packages server state into a single file.]
Data creators generate diagnostic data that is consumed by the logger and the Harvester.
Those components coordinate with the archive to persist the data, and they coordinate with
the watch and notification subsystem to provide automated monitoring. The data accessor
interacts with the logger and the Harvester to expose current diagnostic data and with the
archive to present historical data. MBeans make themselves known as data providers by
registering with the Harvester. Collected data is then exposed to both the watch and
notification system for automated monitoring and to the archive for persistence.
The instrumentation system creates monitors and inserts them at well-defined points in the
flow of code execution within the JVM. These monitors trigger events and publish data directly
to the archive. They can also take advantage of watches and notifications.
Diagnostic image capture support gathers the most common sources of the key server state
used in diagnosing problems. It packages that state into a single artifact, which can be made
available to support technicians.
The past state is often critical in diagnosing faults in a system. This requires that the state be
captured and archived for future access, creating a historical archive. In WLDF, the archive
meets this need with several persistence components. Both events and harvested metrics can
be persisted and made available for historical review.
• Each server has its own log file into which subsystems
record messages.
• Applications can write to the host server’s log as well.
• Servers also have access logs to record all HTTP
requests.
• Certain important messages are also broadcast to the
domain log hosted on the admin server.
Each WebLogic Server instance writes all messages from its subsystems and applications to
a server log file that is located on the local host computer. By default, the server log file is
located in the logs directory below the server instance root directory (for example,
DOMAIN_NAME/servers/SERVER_NAME/logs/SERVER_NAME.log). The messages
include information about the time and date of the event as well as the ID of the user who
initiated the event.
Application developers who want to use the WebLogic Server message catalogs and logging
services as a way for their applications to produce log messages must know the Java APIs.
In addition to writing messages to the server log file, each server instance forwards a subset
of its messages to a domain-wide log file. By default, servers forward only messages of
severity level NOTICE or higher. The domain log file provides a central location from which to
view the overall status of the domain. The domain log resides in the Administration Server
logs directory. The default name and location for the domain log file is
DOMAIN_NAME/servers/ADMIN_SERVER_NAME/logs/DOMAIN_NAME.log.
Some subsystems maintain additional log files to provide an audit of the subsystem’s
interactions under normal operating conditions. The HTTP subsystem keeps a log of all HTTP
transactions in a text file. The default location and rotation policy for HTTP access logs is the
same as the server log.
(By default, only messages of severity NOTICE or higher are sent to standard output.)
The severity attribute of a WebLogic Server log message indicates the potential impact of the
event or condition that the message reports. TRACE and DEBUG messages are the lowest
severity while CRITICAL, ALERT, and EMERGENCY are the highest. Under normal
circumstances, WebLogic Server subsystems generate many messages of lower severity and
fewer messages of higher severity. In addition to writing messages to a log file, each server
instance prints a subset of its messages to standard out. Usually, standard out is the shell
(command prompt) in which you are running the server instance.
• Logger Severity Properties: Configure which log message severities are generated by
named WLS subsystems or Java packages
• Log File: Severity Level: The minimum severity of log messages going to the server
log file. By default all messages go to the log file
• Standard Out: Severity Level: The minimum severity of log messages going to the
standard out
• Domain Log: Severity Level: The minimum severity of log messages going to the
domain log from this server’s log broadcaster
• Minimum Severity to Log: The minimum severity of log messages going to all log
destinations
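The interaction of these four settings can be modeled in a few lines of Python (a simplified sketch, not a WLS API; the default thresholds shown are assumptions for illustration):

```python
# WLS log severities, lowest to highest.
SEVERITIES = ["TRACE", "DEBUG", "INFO", "NOTICE", "WARNING",
              "ERROR", "CRITICAL", "ALERT", "EMERGENCY"]

def destinations(msg_severity, minimum="INFO", log_file="TRACE",
                 stdout="NOTICE", domain_log="NOTICE"):
    """Return which destinations receive a message, given the
    per-destination severity thresholds described above."""
    rank = SEVERITIES.index
    # Minimum Severity to Log gates all destinations first.
    if rank(msg_severity) < rank(minimum):
        return []
    dests = []
    if rank(msg_severity) >= rank(log_file):
        dests.append("server log")
    if rank(msg_severity) >= rank(stdout):
        dests.append("standard out")
    if rank(msg_severity) >= rank(domain_log):
        dests.append("domain log")
    return dests
```

For example, with these defaults a NOTICE message reaches all three destinations, while a DEBUG message is filtered out entirely by the minimum severity gate.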
You can use WebLogic logging services to keep a record of which user invokes specific
application components, to report error conditions, or to help debug your application before
releasing it to a production environment. Your application can also use them to communicate
its status and respond to specific events. A major advantage of integrating your application
logging with the WebLogic logging framework is ease of management.
Many implementation options are available. First, WLS provides APIs and tools to build
custom message catalogs that support multiple languages. These catalogs are then
referenced by your application. WLS also supports alternative APIs to simply let applications
programmatically add message text to the server log, without the use of catalogs. Finally,
WLS can also integrate with other popular open-source logging frameworks including Apache
Commons and Apache Log4J.
These third-party frameworks are not distributed with WLS and must be downloaded and
added to the server classpath manually. This is most commonly done using the domain’s
/lib directory. Additional steps are required to complete the integration. For example, to
support Log4J, you must edit the server’s Logging Implementation attribute, as shown above.
To integrate Commons, you must set various Java system properties when starting the
server.
The Server Logging Bridge provides a lightweight mechanism for applications that currently
use Java Logging or Log4J Logging to have their log messages redirected to WebLogic
logging services. Applications can use the Server Logging Bridge with their existing
configuration; no code changes or programmatic use of the WebLogic Logging APIs is
required. To use the Server Logging Bridge, you only need to create a logging properties file.
Java Logging
WebLogic Server exposes the Server Logging Bridge as the handler object
weblogic.logging.ServerLoggingHandler. When the handler receives an application
log message in the form of a java.util.logging.LogRecord object, the handler
redirects the message to the WebLogic logging service destinations, such as stdout, server
log, domain log, and so on.
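For example, a minimal logging properties file for the Java Logging case might look like the following sketch (the handler class comes from the text above; the root level is an assumption). The file is then passed to the JVM via the standard -Djava.util.logging.config.file system property:

```
# Redirect java.util.logging messages to WebLogic logging services
handlers = weblogic.logging.ServerLoggingHandler
.level = INFO
```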
Log4J Logging
WebLogic Server exposes the Server Logging Bridge as the appender object
weblogic.logging.log4j.ServerLoggingAppender. When the appender receives an
application log message in the form of an org.apache.log4j.spi.LoggingEvent object,
the appender redirects the message to the WebLogic logging service destinations, such as
stdout, server log, domain log, and so on.
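The Log4J case can be configured with a properties fragment like this sketch (the appender class comes from the text above; the appender name and level are illustrative assumptions):

```
# Route Log4J messages to WebLogic logging services
log4j.rootLogger=INFO, wlsBridge
log4j.appender.wlsBridge=weblogic.logging.log4j.ServerLoggingAppender
```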
[Figure: A diagnostic module contains Instrumentation, Harvester, and Watches components.]
WLDF provides features for generating, gathering, analyzing, and persisting diagnostic data
from WebLogic Server instances and from applications deployed to them. For server-scoped
diagnostics, some WLDF features are configured as part of the configuration for a server in a
domain. Other features are configured as system resource descriptors that can be targeted to
servers (or clusters). For application-scoped diagnostics, diagnostic features are configured
as resource descriptors for the application.
The Harvester, watch, and instrumentation features are configured, packaged, and targeted
as part of a diagnostic module, similar to a JMS module resource. Only instrumentation
monitors can be defined in application-scoped modules, which are placed in the application’s
weblogic-diagnostics.xml file.
You create a diagnostic system module through the admin console or WLST. It is created as
a WLDFResourceBean, and the configuration is persisted in a resource descriptor file called
<module>.xml, where <module> is the name of the diagnostic module. By default, the file is
created in the domain’s config/diagnostics directory.
You use the diagnostic image capture component of WLDF to create a diagnostic snapshot,
or dump, of a server’s internal runtime state at the time of the capture. This information helps
support personnel analyze the cause of a server failure. You can capture an image manually
using the console or WebLogic Scripting Tool (WLST), or you can generate one automatically
as part of a watch notification.
Because the diagnostic image capture is meant primarily as a post-failure analysis tool, there
is little control over what information is captured. It includes the server’s configuration, log
cache, JVM state, work manager state, JNDI tree, and most recent harvested data. The
image capture subsystem combines the data files produced by the different server
subsystems into a single zip file.
getAvailableCapturedImages()
Returns, as an array of strings, a list of the previously captured diagnostic images that are
stored in the image destination directory configured on the server. The default directory is
SERVER\logs\diagnostic_images. This command is useful for identifying a diagnostic
image capture that you want to download, or for identifying a diagnostic image capture from
which you want to download a specific entry.
saveDiagnosticImageCaptureFile(imageName)
Downloads the specified diagnostic image capture from the server to which WLST is currently
connected
saveDiagnosticImageCaptureEntryFile(imageName,imageEntryName)
Downloads a specific entry from the diagnostic image capture that is located on the server to
which WLST is currently connected
Example
images=getAvailableCapturedImages()
saveDiagnosticImageCaptureFile(images[0])
[Figure: Multiple diagnostic modules, each with its own Harvester and Instrumentation components, feed the shared harvested data archive.]
The Archive component of WLDF captures and persists all data events, log records, and
metrics collected by WLDF from server instances and applications running on them. You can
access archived diagnostic data in online mode (that is, on a running server). You can also
access archived data in off-line mode by using WLST. You configure the diagnostic archive
on a per-server basis.
For a file-based store, WLDF creates a file to contain the archived information. The only
configuration option for a WLDF file-based archive is the directory where the file will be
created and maintained. The default directory is
<domain>/servers/<server>/data/store/diagnostics. When you save to a file-
based store, WLDF uses the WebLogic Server persistent store subsystem.
To use a JDBC store, the appropriate tables must exist in a database, and JDBC must be
configured to connect to that database. The wls_events table stores data generated from
WLDF Instrumentation events. The wls_hvst table stores data generated from the WLDF
Harvester component. Refer to the documentation for the required schema.
Log files are archived as human-readable files. Events and harvested data are archived in
binary format, in a WebLogic persistent store or in a database.
1. In the left pane, expand Diagnostics and select Archives.
2. Click the name of the server for which you want to configure diagnostic archive settings.
3. Select one of the following archive types from the Type list:
- Select File Store to persist data to the file system. If you choose this option, enter
the directory in the Directory field.
- Select JDBC to persist data to a database. If you choose this option, select an
existing JDBC data source from the Data Source list.
WLDF includes a configuration-based data retirement feature for periodically deleting old
diagnostic data from the archives. You can configure size-based data retirement at the server
level and age-based retirement at the individual archive level. Size-based data retirement can
be used only for file-based stores. These options are ignored for database-based stores.
To configure size-based policies, select a diagnostic archive for a specific server. In the
Preferred Store Size field, enter a maximum data file size, in megabytes. When this size is
exceeded, enough of the oldest records in the store are removed to reduce the size of the
store below the maximum. In the Store Size Check Period field, enter the interval, in minutes,
between the times when the store will be checked to see if it has exceeded the preferred store
size.
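The size-based policy amounts to a periodic check-and-trim pass over the store, sketched here in plain Python (an illustrative model only; the real WLDF store is a binary persistent store, not a Python list):

```python
def trim_store(records, preferred_size):
    """Drop the oldest records until the store no longer exceeds
    the preferred size. Records are (timestamp, size) pairs,
    sorted oldest first."""
    kept = list(records)
    total = sum(size for _, size in kept)
    while kept and total > preferred_size:
        _, size = kept.pop(0)  # remove the oldest record
        total -= size
    return kept

# A 1200-unit store with an 800-unit preferred size loses its
# oldest record on the next check.
records = [(1, 400), (2, 300), (3, 300), (4, 200)]
kept = trim_store(records, 800)
```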
To configure age-based policies, select a diagnostic archive for a specific server and locate
the Data Retirement Policies table. Then click New and enter the following criteria:
• Age: Retirement age for records in hours. Older records will be eligible for deletion.
• Time: The hour of day at which the data retirement task will first run during the day
• Period: The period in hours at which the data retirement task will be periodically
performed for the archive during the day after it is first executed
To use a JDBC store, the appropriate tables must exist in a database, and JDBC must be
configured to connect to that database. If they do not already exist, you must create the
database tables used by WLDF to store data in a JDBC-based store. Two tables are required.
The wls_events table stores data generated from WLDF Instrumentation events, while the
wls_hvst table stores data generated from the WLDF Harvester component.
The SQL data definition language (DDL) used to create tables may differ for different
databases, depending on the SQL variation supported by the database. A sample DDL
implementation for Pointbase is provided in the online documentation:
http://download.oracle.com/docs/cd/E12839_01/web.1111/e13714/config_diag_archives.htm#i1067779
Consult the database documentation or your database administrator when creating these
tables for your database.
WebLogic Server’s Data Accessor subsystem retrieves diagnostic information from WLDF
components. Captured information is segregated into logical data stores that are separated by
the types of diagnostic data. For example, server logs, HTTP logs, and harvested metrics are
captured in separate data stores. WLDF maintains diagnostic data on a per-server basis.
Data stores can be modeled as tabular data. Each record in the table represents one item,
and the columns describe characteristics of the item. Different data stores may have different
columns. However, most data stores have some of the same columns, such as the time when
the data was collected.
Because WLDF archives are modeled the same as log files, you can browse and search their
contents in the console using the same process you use for log files:
1. In the left pane of the console, expand Diagnostics and select Log Files.
2. In the Log Files table, select the option button next to the name of the WLDF archive
that you want to view, and click View.
3. By default, the subsequent table displays the most recent contents of the archive and
some basic columns, but this can be customized as needed. Select the option button
next to the archive record that you want to view, and click View.
To configure and use the Instrumentation, Harvester, and Watch and Notification components
at the server level, you must first create a system resource called a diagnostic system
module. System modules are globally available for targeting to servers and clusters
configured in a domain. However, at most one diagnostic system module can be targeted to
any given server or cluster.
1. In the left pane, expand Diagnostics and select Diagnostic Modules.
2. Click New. Enter a name for the module and, optionally, enter a description. Then click
OK.
3. Use the various Configuration tabs to add diagnostic components to this module.
4. To target the module to a server or cluster, click the Targets tab.
serverRuntime()
wldfCapture = getMBean('WLDFRuntime/WLDFRuntime/WLDFImageRuntime/Image')
wldfCapture.captureImage('logs/diagnostic_images',30)
cd('/')
module = cmo.createWLDFSystemResource('JMSDebugModule')
module.addTarget(getMBean('/Servers/serverA'))
Additional WLDF WLST examples can be found in the WLDF guide in the product
documentation.
Refer to the following MBeans:
• WLDFRuntimeMBean
• WLDFImageRuntimeMBean (a component of WLDFRuntimeMBean)
• WLDFServerDiagnosticMBean
• WLDFDataRetirementMBean (a component of WLDFServerDiagnosticMBean)
• WLDFDataRetirementByAgeMBean (a component of
WLDFServerDiagnosticMBean)
• WLDFResourceBean
archives = getMBean('Servers/serverA/ServerDiagnosticConfig/serverA')
archives.setStoreSizeCheckPeriod(30)
archives.setDataRetirementEnabled(true)
from java.lang import System
endTime = System.currentTimeMillis()
startTime = endTime - 3600 * 1000
exportDiagnosticDataFromServer(
logicalName='HarvestedDataArchive',
exportFileName='wls_debug.xml',
beginTimestamp=startTime, endTimestamp=endTime,
query="ATTRNAME = 'MessageHighCount'")
• WLDF Administration
• Metric Collectors
– Harvester Architecture
– Watches
– Alarms
– Notifications
• Monitoring Dashboard
• WLS Debugging
[Figure: Within a diagnostic module, metric collectors direct the Harvester to gather MBean metrics, which flow to the data archive and feed watches and notifications.]
The Harvester component of WLDF gathers metrics from attributes on qualified MBeans that
are instantiated in a running server. The Harvester can collect metrics from WebLogic Server
MBeans and from custom MBeans. To be harvestable, an MBean must be registered in the
local WebLogic Server runtime MBean server. The Harvester is configured and metrics are
collected in the scope of a diagnostic module targeted to one or more server instances.
Harvesting metrics is the process of gathering data that is useful for monitoring the system
state and performance. The Harvester gathers values from selected MBean attributes at a
specified sampling rate. Therefore, you can track potentially fluctuating values over time.
The Watch and Notification component of WLDF provides the means for monitoring server
and application states and then sending notifications based on criteria set in the watches.
Watches and notifications are configured as part of a diagnostic module targeted to one or
more server instances in a domain.
Metrics are exposed to WLDF as attributes on qualified MBeans. For custom MBeans, the
MBean must be currently registered with the JMX server.
You can configure the Harvester to harvest data from named MBean types, instances, and
attributes. If only a type is specified, data is collected from all attributes in all instances of the
specified type. If only a type and attributes are specified, data is collected from all instances of
the specified type. MBean type declarations must specify the full Java package name if no
wildcards are used (for example,
weblogic.management.runtime.ServerRuntimeMBean).
The sample period specifies the time between each cycle. For example, if the Harvester
begins execution at time T, and the sample period is I, the next harvest cycle begins at T + I.
If a cycle takes A seconds to complete and if A exceeds I, then the next cycle begins at T + A.
If this occurs, the Harvester tries to start the next cycle sooner to ensure that the average
interval is I.
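The timing rule above can be sketched in plain Python (an illustrative model only; the Harvester is not configured this way, and the "catch up to keep the average interval at I" behavior is omitted):

```python
def next_cycle_start(t, period, duration):
    # Start of the next harvest cycle, given the current cycle's start time t,
    # the configured sample period, and the time the cycle actually took.
    # If the cycle finishes within the period, the next cycle begins at
    # t + period; if it overruns, the next cycle begins as soon as the
    # current one ends, at t + duration.
    return t + max(period, duration)

# Cycle completes within the period: next cycle begins at T + I.
print(next_cycle_start(t=0, period=300, duration=20))   # 300
# Cycle overruns the period: next cycle begins at T + A.
print(next_cycle_start(t=0, period=300, duration=450))  # 450
```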
WLDF allows for the use of wildcards (*) in type names, instance names, and attribute
specifications. WLDF also supports nested attributes using a dot delimiter, as well as complex
attributes such as arrays and maps. WLDF watch expressions also support similar
capabilities.
Metrics are configured and collected in the scope of a diagnostic system module targeted to
one or more server instances. Therefore, to collect metrics, you must first create a diagnostic
system module.
1. Click the name of the module for which you want to configure metric collection. Then
click Configuration > Collected Metrics.
2. To enable (or disable) all metric collection for this module, select (or deselect) the
Enabled check box. To set the period between samples, enter the period (in
milliseconds) in the Sampling Period field. To define a new collected metric, click New.
From the MBean Server Location drop-down list, select either DomainRuntime (admin
server only) or ServerRuntime. Then click Next.
3. Select an MBean that you want to monitor from the MBean Type list. Then click Next
again.
4. In the Collected Attributes section, select one or more attributes from the Available list
and move them to the Chosen list (default is all attributes). Alternatively, WLDF supports
wildcard expressions in attribute specifications using the Attribute Expressions field.
Click Next.
5. In the Collected Instances section, select one or more instances from the Available list
and move them to the Chosen list (default is all instances). Once again, you can
alternatively enter an Instance Expression that includes wildcards. Click Finish.
6. Click Save.
• A WLDF watch:
– Inspects data generated from metric collectors, events
generated from monitors, or server log files
– Compares data to one or more conditions or “rules”
– Triggers one or more notifications
• Available notification types include:
– JMS
– Email
– SNMP trap
– Diagnostic image capture
A watch identifies a situation that you want to trap for monitoring or diagnostic purposes. You
can configure watches to analyze log records, data events, and harvested metrics. A watch is
specified as a watch rule, which includes a rule expression, an alarm setting, and one or more
notification handlers. A notification is an action that is taken when a watch rule expression
evaluates to true. You must associate a watch with a notification for a useful diagnostic
activity to occur (for example, to notify an administrator about specified states or activities in a
running server).
Log and instrumentation watches are triggered in real time, whereas Harvester watches are
triggered only after the current harvest cycle completes.
Watches and notifications are configured separately from each other. A notification can be
associated with multiple watches, and a watch can be associated with multiple notifications.
This provides the flexibility to recombine and reuse watches and notifications, according to
current needs. Each watch and notification can be individually enabled and disabled as well.
A complete watch and notification configuration includes settings for one or more watches,
one or more notifications, and any underlying configurations required for the notification media
(for example, the SNMP configuration required for an SNMP-based notification).
A collected metric watch can monitor any runtime MBean in the local runtime MBean server.
Log watches monitor the occurrence of specific messages and/or strings in the server log.
Watches contain a severity value that is passed through to the recipients of notifications.
Instrumentation watches are triggered when a posted event matches their criteria.
The example in the slide depicts a collected metric watch:
1. Click the name of the module for which you want to create a watch. Then click
Configuration > Watches and Notifications.
2. In the Watches section, click New.
3. Enter a name for the watch in the Watch Name field. Then select a Watch Type. To
enable or disable the watch, select or deselect the Enable Watch check box. Then click
Next.
4. Click the Add Expressions button to construct one or more watch expressions.
Expressions can be entered manually using the WLDF expression language or
constructed graphically. To group two or more expressions, select the check boxes
adjacent to the expressions that you want to group and click Combine. To reorder
expressions, select the check boxes adjacent to the expressions you want to move and
click Move Up or Move Down.
5. For each watch expression, select either DomainRuntime or ServerRuntime from the
MBean Server Location drop-down list. Then click Next. Select an instance from the
Instance drop-down list, and click Next again.
6. Select an attribute from the Message Attribute list and an operator from the Operator list,
and enter a value with which to compare the attribute using the Value field. The value
must be an appropriate value for the attribute chosen above. Click Next.
7. Select and move one or more existing Notifications from the Available list to the Chosen
list. Then click Finish.
Each watch definition can be individually enabled and disabled using the console or WLST.
When disabled, the watch does not trigger and corresponding notifications do not fire. If
watches and notifications are disabled at the module level, all individual watches are
effectively disabled (the value of this flag on a specific watch is ignored).
Watches can be specified to trigger repeatedly, or to trigger once, when a condition is met.
For watches that trigger repeatedly, you can optionally define a minimum time between
occurrences. The default option, “Don’t use an alarm,” causes the watch to trigger whenever
possible. The “Use an automatic reset alarm” option also causes the watch to trigger
whenever possible, except that subsequent occurrences cannot occur any sooner than the
specified time interval. Finally, the “Use a manual reset alarm” option causes the watch to fire
a single time, but after it fires, you must manually reset it to fire again.
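The three alarm options can be modeled in a few lines of plain Python (an illustrative sketch of the behavior described above, not WLDF code):

```python
class WatchAlarm:
    def __init__(self, alarm_type, reset_period=0):
        # alarm_type: 'none'      - trigger whenever possible
        #             'automatic' - trigger, then wait reset_period before
        #                           triggering again
        #             'manual'    - trigger once, then stay quiet until an
        #                           administrator calls reset()
        self.alarm_type = alarm_type
        self.reset_period = reset_period
        self.last_fired = None
        self.armed = True

    def try_fire(self, now):
        # Returns True if the watch is allowed to trigger at time 'now'.
        if self.alarm_type == 'manual' and not self.armed:
            return False
        if (self.alarm_type == 'automatic'
                and self.last_fired is not None
                and now - self.last_fired < self.reset_period):
            return False
        self.last_fired = now
        if self.alarm_type == 'manual':
            self.armed = False
        return True

    def reset(self):
        # An administrator manually re-arms a manual-reset alarm.
        self.armed = True
```

For example, an automatic-reset alarm with a 60-second reset period fires at time 0, refuses to fire at time 30, and fires again at time 61.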
1. Click the name of the module for which you want to create a notification. Then click
Configuration > Watches and Notifications. Scroll down and click the Notifications tab,
and then click New.
2. Select the JMS Message type and click Next.
3. Enter an arbitrary Notification Name and select whether this notification should be
enabled or not. Then click Next.
4. Enter the JMS Destination JNDI Name and Connection Factory JNDI Name that will
allow the diagnostics framework to publish a local JMS message. The published
message will be formatted as a JMS MapMessage.
Simple Mail Transfer Protocol (SMTP) notifications are used to send email messages in
response to the triggering of an associated watch. Before configuring an SMTP notification,
first create an SMTP Mail Session resource in your domain and target it to the same server(s)
that this diagnostic module is targeted to. Mail sessions define how WebLogic connects to an
enterprise mail server.
To create the notification:
1. Click the name of the module for which you want to create a notification. Then click
Configuration > “Watches and Notifications.” Scroll down and click the Notifications tab,
and then click New. Select the SMTP (E-Mail) type and click Next.
2. Enter an arbitrary Notification Name and select whether this notification should be
enabled or not. Then click Next.
3. Select your previously configured mail session and enter one or more email recipients.
Messages sent by the diagnostics framework will contain a default subject and body, but
you can override this behavior and enter custom text.
harvester = getMBean('/WLDFSystemResources/JMSDebugModule/WLDFResource/JMSDebugModule/Harvester/JMSDebugModule')
harvester.setSamplePeriod(300000)
harvester.setEnabled(true)
harvestType = harvester.createHarvestedType('weblogic.management.runtime.JMSServerRuntimeMBean')
harvestType.setEnabled(true)
harvestType.setHarvestedAttributes(['MessagesHighCount'])
Additional WLDF WLST examples can be found in the WLDF guide in the product
documentation.
Refer to the following MBeans:
• WLDFResourceBean
• WLDFHarvesterBean (a subclass of WLDFResourceBean)
• WLDFHarvestedTypeBean (a component of WLDFHarvesterBean)
notify = getMBean('/WLDFSystemResources/JMSDebugModule/WLDFResource/JMSDebugModule/WatchNotification/JMSDebugModule/JMSNotifications/ITMgmtQueue')
watches = getMBean('/WLDFSystemResources/JMSDebugModule/WLDFResource/JMSDebugModule/WatchNotification/JMSDebugModule')
watches.setEnabled(true)
watch = watches.createWatch('JMSWatch')
watch.setRuleType('Harvester')
watch.setEnabled(true)
watch.setRuleExpression('${ServerRuntime//[weblogic.management.runtime.JMSServerRuntimeMBean]//MessagesHighCount} > 1000')
watch.setNotifications(array([notify],
  weblogic.diagnostics.descriptor.WLDFNotificationBean))
Additional WLDF WLST examples can be found in the WLDF guide in the product
documentation.
Refer to the following MBeans:
• WLDFResourceBean
• WLDFWatchNotificationBean (a subclass of WLDFResourceBean)
• WLDFWatchBean (a component of WLDFWatchNotificationBean)
• WLDFNotificationBean (abstract; subclasses exist for each notification type)
Some users find it difficult to determine what WLS metrics to collect, especially users who are
new to JMX and/or WLDF. WLS includes some WLST scripts and utilities that provide a
starting point for creating common WLDF configurations, referred to as profiles. Each profile is
represented by a Jython class that encapsulates the MBean Harvester and watch
configurations for a specific WLS subsystem. These class definitions can then be modified or
extended to suit your individual needs. In addition, you can use this framework of utility
classes on its own, outside the context of profile configuration, when working with WLDF
from the WLST command line.
WLDFResource.py provides a core set of utility classes in Jython for manipulating instances
of the WLDFSystemResourceMBean and its constituent bean objects. This file also provides
various helper functions that are used by other classes and scripts in this framework.
WLDFProfiles.py contains the set of Jython classes that represent the predefined profiles,
as well as a utility class, WLDFProfileManager, for enabling or disabling groups of these
profiles. For convenience, various “enable” scripts are included for each profile, such as
enableEJBProfile.py and enableJTAProfile.py. Each also has a corresponding
“disable” script, which will remove the configured MBean instances associated with the profile.
The scripts accept command-line arguments in the form of name/value pairs to override
common settings and parameters, such as the server URL, username, password, WLDF
module name, and Harvester period.
• WLDF Administration
• Metric Collectors
• Monitoring Dashboard
– New Interface to Metrics Graphing
– Views
– Charts and Graphs
• WLS Debugging
The Monitoring Dashboard provides views and tools for graphically presenting diagnostic
data. The underlying functionality for generating, retrieving, and persisting diagnostic data is
provided by the WebLogic Diagnostics Framework (WLDF).
You can launch the Monitoring Dashboard from the WLS admin console, or you can run it
independently. To launch it from the admin console, go to the Home page and under “Charts
and Graphs” click the Monitoring Dashboard link. The dashboard opens in its own window (or
tab). If you are not logged in to the admin console when you launch the dashboard, you are
prompted for an admin-level username and password. To access the Monitoring Dashboard
directly, use the URL http://<host>:<port>/console/dashboard (for example,
http://localhost:7020/console/dashboard).
The diagnostic data displayed by the Monitoring Dashboard consists of runtime MBean
attributes. These values are referred to in the Monitoring Dashboard as metrics. The
dashboard obtains metrics from two sources:
• Directly from active runtime MBean instances. These metrics are referred to as polled
metrics.
• From archive data that has been collected by the Harvester. These metrics are
referred to as collected metrics.
The Monitoring Dashboard requires the Java plug-in version 1.5 (J2SE Runtime Environment
5.0) or later. If the required Java plug-in is not already installed in your Web browser, you may
be prompted to initiate a download from the Oracle Java web site when you access the
dashboard.
Follow the instructions on the screen. The exact installation procedure varies depending on
the type of browser and platform. For example, for Firefox on Linux, you typically are required
to copy or create a symbolic link to the file libjavaplugin_oji.so into your
<browser_install>/plugins directory.
The Monitoring Dashboard has two main panels: the explorer panel and the view display
panel.
The explorer panel has two tabs:
• View List: A list of existing built-in and custom views. It also contains controls for
creating, copying, renaming, and deleting views.
• Metric Browser: A way of navigating to and selecting the specific MBean instance
attributes whose metric values you want to display in a chart in a view
Views:
• Are a way to organize your charts and graphs
• Typically display metrics that are related in some way
• Are individually started and stopped
• Continue to collect data even when not being displayed
The View List tab lists views. A view is a collection of one or more charts that display captured
monitoring and diagnostic data. You can access, create, and update views from the View List
tab. When you click the name of a view on the View List tab, that view is displayed in the View
Display on the right.
The dashboard uses icons to indicate the status of a view. A gray icon indicates that the view
is inactive and data polling is not occurring for the charts in that view. A color icon indicates
that the view is active and data polling is occurring for all charts in that view (this is true
whether or not the view is currently displayed in the View Display).
To start the data collection for a view, click the view name in the list and click the green Start
button above the tabs. To stop data collection, click the red-and-white rectangular Stop
button. To stop all active views, click the red octagonal Stop All button.
The built-in views are a set of predefined views of available runtime metrics for all running
WebLogic Server instances in the domain. These views surface some of the more critical
runtime WebLogic Server performance metrics and serve as examples of the dashboard’s
chart and graph capabilities.
You cannot modify a built-in view, but you can copy it. This copy is now one of your custom
views. As a custom view, the copy can be modified, renamed, saved, and later deleted.
Custom views are available only to the user who created them and only within the current
domain.
A custom view is any view created by a user. Custom views are available only to the user who
created them. You can access a custom view again when needed.
To create a new custom view, click the View List tab. Then click the New View button. A new
view appears in the list named New View. Replace the default name with something
meaningful. Also, a new empty view appears in the View Display area.
To add charts to the custom view, use the drop-down menu above the View Display area and
click New Chart.
To add graphs to a chart, first click the Metric Browser tab. Select a server in the Servers
drop-down list and click Go. Then select an MBean type and an MBean instance. In the
Metrics list for that instance, drag an MBean attribute to a chart. A view may have as many
charts as you like and a chart may graph as many metrics as you like. Also, if a metric is
dragged to a view that contains no charts, the dashboard automatically creates a new chart to
contain the graph.
When the metrics are in place, click the green Start button to start collecting data.
To delete a custom view, select the name of the view and click the Delete (red “X”) button.
The Metric Browser tab displays metrics available for a single server at a time. The Server
drop-down list displays all the servers in the domain. Select a server and click the Go button.
The metrics for the selected server are displayed. If the selected server is not running (is
currently shut down), a message indicates that the dashboard cannot connect to the server.
You can track a metric based on an attribute of any registered MBean simply by dragging the
attribute to a chart on the displayed view.
As a convenience, to have only metrics that have been collected by the Harvester displayed,
select the Collected Metrics Only check box.
To determine whether a metric was collected by the harvester, select the metric. A pop-up
note provides information about the metric, including whether or not it is a collected metric.
To see metrics for all runtime MBean types regardless of whether instances of them are
active, select Include All Types.
A chart contains one or more graphs that show data points over a specified time span. A chart
also includes a legend that lists the data sources for each graph along with their associated
icons and colors.
When working with a view, you can do the following:
• Add charts to views.
• Add graphs to charts.
• Pan and zoom.
• Edit labels and legends by using the Edit Tool.
• Start and stop data collection for charts in a view.
When you create a chart, it is created with a default type and default properties. To change
the chart type, use the chart menu and select Chart Type. The types available are Bar Chart,
Line Plot, Scatter Plot, Vertical Linear Gauge, Horizontal Linear Gauge, and Radial Gauge.
To change the properties for a chart, use the chart menu and select Properties. You can
change the Chart Title, Y-axis Units, Color, Background Color, and Highlight Color. You can
choose Set Y-axis Range Automatically, or you can choose Y-axis Max, Y-axis Min,
Threshold Max, and Threshold Min. You can also change the Time Range.
Each graph in the chart (metric) also has properties. Use the drop-down menu next to the
metric in the legend and select Properties. Here you can change the Name of the metric as
well as choose its Marker Symbol and Color.
The Monitoring Dashboard displays two kinds of diagnostic metrics: real-time data directly
from active runtime MBeans (called polled metrics), and historical data collected by a
previously configured WLDF Harvester (called collected metrics).
Note that with polled metrics, if polling has been taking place long enough for old data to be
purged, a view will not contain all data from the time polling started.
If a Harvester was configured to harvest data for a particular metric, that historical data is
available and can be displayed. Data that matches a selected metric and within a view’s time
range will be displayed by that view.
• WLDF Administration
• Metric Collectors
• Monitoring Dashboard
• WLS Debugging
– Debug Scopes
– Debug Logging
This debugging method is dynamic and can be used to enable debugging while the server is
running. Alternatively, many debug flags can also be set as command-line arguments when
starting a server.
Examples
• -Dweblogic.debug.DebugJDBCSQL=true
• -Dweblogic.debug.DebugJMSBackEnd=true
• -Dweblogic.debug.DebugSAFSendingAgent=true
• -Dweblogic.debug.DebugJDBCJTA=true
Which debug messages actually reach each destination is controlled by the server's
Severity Level logging configuration attributes.
Under normal circumstances, WebLogic Server subsystems generate many messages of
lower severity and fewer messages of higher severity. In addition to writing messages to a log
file, each server instance prints a subset of its messages to standard out. Usually, standard
out is the shell (command prompt) in which you are running the server instance.
• Log File: Severity Level: The minimum severity of log messages going to the server
log file. By default, all messages go to the log file.
• Standard Out: Severity Level: The minimum severity of log messages going to
standard out.
• Memory Buffer: Severity Level: The minimum severity of log messages that WebLogic
accepts from subsystems and applications and then keeps in memory and eventually
distributes to destinations such as standard output or the log file
debug = getMBean('/Servers/serverA/ServerDebug/serverA')
debug.setDebugJDBCInternal(true)
debug.setDebugJMSBackEnd(true)
debug.setDebugSSL(true)
scope = getMBean('/Servers/serverA/ServerDebug/serverA/DebugScopes/weblogic.jdbc')
scope.setEnabled(true)
The MBean used in the slide, ServerDebugMBean, is not currently documented in the main
MBean Reference guide online, but it is documented in the MBean API Reference (Javadoc).
Search for the type weblogic.management.configuration.ServerDebugMBean. You
can also inspect this or any other MBean’s available attributes by using WLST in online mode.
For example, browse to the above location and execute the ls command.
Unlike in the admin console, the MBean used to maintain debug attributes does not organize
them using scopes. Debug scopes are simply a convenience of the console user interface.
Answer: d
Answer: b
Answer: b, c, d
Answer: a
• Monitors
– Architecture
– Actions
– Application-Scoped Modules
– Hot Swap
– Pointcuts
• Request Tracking
A diagnostic monitor is a dynamically manageable unit of diagnostic code that is inserted into
server or application code at specific locations. Monitors allow you to better troubleshoot
issues that occur at the specific points in the server or application's flow of execution. They
can also be used to simply help identify the part of an application that is causing some
behavior. This approach can be especially useful to administrators who may not have direct
access to an application’s source code or documentation, or at least not have it readily
available. The concept of instrumentation assumes that the underlying application code
remains unmodified.
Diagnostic monitors:
• Trigger actions at specific code locations
• Record the data from actions as events in the archive
[Diagram: within a diagnostic module, monitors woven into server or application code execute actions, which record events in the instrumentation archive; watches evaluate those events and trigger notifications]
A diagnostic monitor is a dynamically manageable unit of diagnostic code that is inserted into
server or application code at specific locations. WLDF provides a library of predefined
diagnostic monitors and actions. You can also create application-scoped custom monitors,
where you control the locations where diagnostic code is inserted in the application.
Monitors are applied and removed from server and application Java code dynamically without
modifying the code itself. The WLDF instrumentation code is inserted or “woven” into server
and application code at precise locations. A joinpoint is a specific location in a class, for
example the entry and/or exit point of a method or a call site within a method. A pointcut is an
expression that specifies a set of joinpoints, for example all methods related to scheduling,
starting, and executing work items. For a monitor to perform any useful diagnostic work, you
must configure at least one action for the monitor. Only certain actions are available to certain
monitor types.
You use instrumentation watches to monitor the events from the WLDF instrumentation
component, similar to monitoring the MBean data collected from the Harvester component.
Watches of this type are triggered as a result of the event being posted, or when the event’s
data meets some conditions. Recall that watches respond with notifications, such as capturing
a server diagnostic image.
Diagnostic actions perform some type of data collection intended to help gain insight into the
server or application. Each diagnostic action can only be used with the monitor types with
which they are compatible. In addition to the data described above, all actions also capture
general statistics such as the current time, transaction ID, and user ID, if applicable.
When attached to “before” monitors, the Display Arguments action captures input
arguments to the joinpoint (for example, method arguments). When attached to “after”
monitors, it captures the return value from the joinpoint.
When executed, the Trace Elapsed Time action captures the timestamps before and after the
execution of an associated joinpoint. It then computes the elapsed time by computing the
difference. It generates an instrumentation event which is dispatched to the events archive.
The elapsed time is stored as event payload.
Trace Memory Allocation is very similar to Trace Elapsed Time, except that it traces the
memory allocated within a method call rather than the time taken to run the method.
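As a rough illustration, the essence of a before/after action such as Trace Elapsed Time can be sketched in plain Python (this models the idea only; WLDF weaves equivalent logic into Java bytecode rather than wrapping calls like this):

```python
import time

def trace_elapsed(fn, *args, **kwargs):
    # Capture timestamps before and after the joinpoint (here, a plain
    # function call), compute the difference, and package the elapsed time
    # as an event payload, as the Trace Elapsed Time action does.
    before = time.time()
    result = fn(*args, **kwargs)
    elapsed = time.time() - before
    event = {'action': 'TraceElapsedTimeAction', 'elapsed_seconds': elapsed}
    # In WLDF, the event would be dispatched to the events archive; here we
    # simply return it alongside the joinpoint's result.
    return result, event

result, event = trace_elapsed(sum, [1, 2, 3])
```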
You can provide instrumentation services at the system level (servers and clusters) and at the
application level. Many concepts, services, configuration options, and implementation
features are the same for both. However, there are differences, including the types of
monitors that are available.
Server-scoped instrumentation for a server or cluster is configured and deployed as part of a
diagnostic module, an XML configuration file located in the
<DOMAIN_NAME>/config/diagnostics directory, and linked from config.xml. Only
one WLDF system resource (and hence one system-level diagnostics descriptor file) can be
active at a time for a server (or cluster). Server-scoped instrumentation can be enabled,
disabled, and reconfigured without restarting the server.
Application-scoped instrumentation is also configured and deployed as a diagnostics module,
but as an XML configuration file named weblogic-diagnostics.xml, which is packaged
with the deployed application in the META-INF directory. As with all deployment descriptors,
simply redeploy the application for your latest instrumentation settings to take effect. For
instrumentation to be available for an application, instrumentation must be enabled on the
server to which the application is deployed.
Subsystem  Available Code Points         System Module?  Application Module?
JDBC       Before/After Connection       X               X
           Reserve/Release Connection    X               X
           Before/After Commit           X               X
           Before/After Rollback         X               X
           Before/After SQL Statement    X               X
JMS        Before/After Message Send     X
           Before/After Message Receive  X
JNDI       Before/After Lookup           X
JTA        Before/After Start            X
           Before/After Commit           X
           Before/After Rollback         X
Subsystem    Available Joinpoints                   System Module?  Application Module?
Servlet/JSP  Before/After Execution                                 X
             Before/After Session Access                            X
             Before/After Tag Execution                             X
EJB          Before/After All Entity Methods                        X
             Before/After Entity Business Methods                   X
             Before/After All Session Methods                       X
             Before/After Session Business Methods                  X
             Before/After MDB Message Receive                       X
             Before/After MDB Created                               X
             Before/After MDB Removed                               X
EJBs define one or more custom “business” methods that clients can execute, but all EJBs
also include a set of standard “semantic” methods such as ejbActivate(), ejbLoad(),
and ejbRemove(). WLDF monitors are available to troubleshoot only EJB business
methods, only EJB semantic methods, or both types of methods.
Deployment plans:
• Add or override elements in deployment descriptors
• Are associated with an application during deployment
• Can be created with the help of the console or command-
line tools
[Diagram: the same application archive, MyEJBApp (ejb-jar.xml), deployed with different plan.xml files: one maps it to MSQLDataSource when deployed to TestDomain, the other to ORADataSource when deployed to ProdDomain]
<variable>
  <name>EJBMonitor-Action</name>
  <value>TraceAction</value>
</variable>
...
This variable sets the action element for the monitor with the given name.
<module-descriptor external="false">
  <root-element>wldf-resource</root-element>
  <uri>META-INF/weblogic-diagnostics.xml</uri>
  <variable-assignment>
    <name>EJBMonitor-Action</name>
    <xpath>/wldf-resource/instrumentation/wldf-instrumentation-monitor/[name="EJBMonitor"]/action</xpath>
  </variable-assignment>
  ...
</module-descriptor>
Hot swap:
• Automatically detects and applies changes to deployment
plans
• Does not require that you redeploy the application to add
or remove monitors
• Is enabled using a JVM argument
startWebLogic.sh:
...
JAVA_OPTIONS="${JAVA_OPTIONS}
-javaagent:${WL_HOME}/server/lib/diagnostics-agent.jar"
...
If you enable a feature called “hot swap” before deploying your application with a deployment
plan, you can dynamically update all instrumentation settings without redeploying the
application. If you do not enable hot swap, or if you do not use a deployment plan, changes to
some instrumentation settings require redeployment. If hot swap is not enabled, you can
“remove” a monitor, but that just disables it. The instrumentation code is still woven into the
application code. You cannot re-enable it through a modified plan without also redeploying the
application.
It is recommended that you create an empty descriptor. That provides full flexibility for
dynamically modifying the configuration. It is possible to create monitors in the original
descriptor file and then use a deployment plan to override the settings. You will, however, be
unable to completely remove monitors without redeploying. If you add monitors using a
deployment plan to an empty descriptor, all such monitors can be removed.
1. Click the name of the module to which you want to add diagnostic monitors. Then select
Configuration > Instrumentation.
2. Click the Add/Remove button.
3. To enable (or disable) the monitor, select (or deselect) Enabled. Then locate the
Diagnostic Monitors section and select one or more monitors from the Available list.
Click the right arrow button to move the monitors to the Chosen list. The name of the
monitor indicates where in the flow of execution the monitoring takes place.
4. If you are editing a delegating monitor and you want to add Actions, click the desired
actions in the Available list and move them to the Chosen list.
5. Click Finish.
Create or edit the deployment plan.
1. Click the name of an existing application. Then select Configuration > Instrumentation.
2. Click the Add Monitor From Library button.
3. Locate the Diagnostic Monitors section and select one or more monitors from the
Available list. Click the right arrow button to move the monitors to the Chosen list. The
name of the monitor indicates where in the flow of execution the monitoring will take
place. Click OK.
4. Choose a Path for the deployment plan and click OK.
5. Click the name of a monitor to edit its settings.
6. To enable or disable the monitor, select or deselect Enabled.
7. Under Actions, click the desired actions in the Available list and move them to the
Chosen list. Click Save.
Pointcut                               Description
* com.mycompany.MyClass doIt(...)      A specific method in a specific class
* *.MyClass do*(...)                   All methods whose names begin with “do,”
                                       in any class named “MyClass”
Traditionally in object-oriented programming, if you wanted to implement a service that cuts
across your business logic modules, you would write a separate module, such as logging, and
then provide access to that module through an abstract interface. Your business logic
modules can do their logging by using the logging services, and you can change how you do
logging by changing the logging module without touching the business logic. The only
potential problem with this is that you have to embed calls to the logging module within the
business logic. You cannot flexibly or easily change where and when you want the logging to
occur. Aspect-oriented programming builds on top of object-oriented programming. You
define the points at which you want something to happen, and then you define what you want
to happen (for example, logging). You are essentially defining rules in which to weave new
code into your program. Common AOP examples include diagnostics, security, transactions,
resource pooling, and persistence.
A joinpoint is an identifiable point in the execution of a program. Currently, WLDF only
supports joinpoints that describe Java methods or constructors. WLDF supports “call” and
“execution” joinpoints, whose differences are very subtle. At a “call” joinpoint, an action will
occur when a method is invoked but before its logic is actually executed. An “execution”
joinpoint occurs around (before and after) a method’s execution. A pointcut is an expression
that specifies a set of joinpoints by matching certain characteristics. Pointcuts can also use
logical operators such as AND, OR, and NOT.
call(...) or execution(...)
A custom monitor is available only for application-scoped instrumentation and does not have
a predefined pointcut or location. A diagnostic location is the position relative to a joinpoint
where the diagnostic activity will take place. Diagnostic locations are before, after, and
around. You assign a name to a custom monitor, define the pointcut and the diagnostics
location the monitor will use, and then assign actions from the set of predefined diagnostic
actions.
To add a custom monitor:
1. Edit an application and click the Configuration > Instrumentation tab.
2. Click the Add Custom Monitor button.
When configuring a custom pointcut in WLS, the following syntax rules apply:
• Indicate either a “call” or “execution” pointcut.
• Specify a type (class or interface) and method name. Wildcards (*) can be used in class
types and method names. Use a “+” prefix to indicate all subclasses, subinterfaces, or
concrete classes implementing the specified class or interface.
• (Optional) Indicate method arguments and return type.
• Pointcut expressions can be combined with AND, OR, and NOT Boolean operators to
build complex pointcut expression trees.
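As an illustration of these rules, a hypothetical pointcut (all package, class, and method names here are invented for the example) that fires on the execution of any public setter method in any class implementing com.mycompany.Persistent, or on any call into a class named MyTxManager, could be written as:

```
execution(public * +com.mycompany.Persistent set*(...)) OR
call(* com.mycompany.MyTxManager *(...))
```

The “+” prefix matches all implementing or extending types, and the two expressions are joined with the OR Boolean operator as described above.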
from jarray import array
from java.lang import String

# Modifying a WLDF system resource requires an edit session.
edit()
startEdit()
instr = getMBean('/WLDFSystemResources/MyModule' +
    '/WLDFResource/MyModule/Instrumentation/MyModule')
instr.setEnabled(true)
monitor = instr.createWLDFInstrumentationMonitor(
    'JDBC_Before_Connection_Internal')
monitor.setEnabled(true)
monitor.setActions(array(['StackDumpAction'], String))
save()
activate()
Additional WLDF WLST examples can be found in the WLDF guide in the product
documentation.
Refer to the following MBeans:
• WLDFResourceBean
• WLDFInstrumentationBean (a subclass of WLDFResourceBean)
• WLDFInstrumentationMonitorBean (a component of
WLDFInstrumentationBean)
• Monitors
• Request Tracking
– Context ID
– Request Dyes
– Event Filtering
– Event Throttling
[Figure: a request assigned context ID 1111 flows from a Web component through EJBs on
two servers; a separate request carries ID 1112.]
The WLDF Instrumentation component provides a way to uniquely identify requests (such as
HTTP or RMI requests) and track them as they flow through the system. The diagnostic
context consists of two pieces: a unique context ID and a 64-bit dye vector that represents the
characteristics of the request. The context ID associated with a given request is recorded in
the event archive and can be used to associate other server log messages with the same
request.
The diagnostic context for a request is created and initialized when the request enters the
system (for example, when a client makes an HTTP request). The diagnostic context remains
attached to the request, even as the request crosses thread boundaries and Java Virtual
Machine (JVM) boundaries. The diagnostic context lasts for the duration of the request. Every
diagnostic context is identified by a Context ID that is unique in the domain.
Because the Context ID travels with the request, it is possible to determine the events and log
entries associated with a given request as it flows through the system.
All server log and WLDF archive entries include the current
context ID (if available).
By default, the Context ID column for server log files is not shown in the console. When it is
added, you can quickly find all of the log and WLDF messages associated with a single
request as it passes through different applications and WLS subsystems.
You can also filter the displayed log messages by Context ID, as in the following example:
CONTEXTID LIKE '%abcd%'
[Figure: the Dye Injector on the server assigns dye vectors to incoming requests with context
IDs 1111, 1112, and 1113.]
When any request enters the system, WLDF creates and instantiates a diagnostic context for
the request. The context includes a unique context ID and a dye vector. The Dye Injection
monitor, if enabled at the server level within a WLDF diagnostic module, examines the
request to see if any of the configured dye values in the dye vector match attributes of the
request. For example, it checks to see if the request originated from the user associated with
USER1 or USER2, and it checks to see if the request came from the IP address associated
with ADDR1 or ADDR2.
For each dye value that matches a request attribute, the Dye Injection monitor sets the
associated dye bits within the diagnostic context. For example, if the Dye Injection monitor is
configured with USER1=weblogic, USER2=admin@avitek.com, ADDR1=127.0.0.1,
ADDR2=127.0.0.2, and the request originated from user weblogic at IP address
127.0.0.2, it sets the USER1 and ADDR2 dye bits within the dye vector.
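The matching performed by the Dye Injection monitor can be sketched as follows. This is an illustrative model only, not WLDF source code: the dye vector is treated as a 64-bit value in which each configured dye owns one bit, and the bit positions and condition functions here are arbitrary choices for the example.

```python
# Hypothetical bit positions for four of the dyes (illustration only).
DYES = {"USER1": 1 << 0, "USER2": 1 << 1, "ADDR1": 1 << 2, "ADDR2": 1 << 3}

def inject_dyes(request, config):
    """Set the dye bits whose configured condition matches the request."""
    vector = 0
    for dye, condition in config.items():
        if condition(request):
            vector |= DYES[dye]
    return vector

# Conditions corresponding to the configuration described in the text.
config = {
    "USER1": lambda r: r["user"] == "weblogic",
    "USER2": lambda r: r["user"] == "admin@avitek.com",
    "ADDR1": lambda r: r["addr"] == "127.0.0.1",
    "ADDR2": lambda r: r["addr"] == "127.0.0.2",
}

# A request from user weblogic at 127.0.0.2 gets USER1 and ADDR2 set.
request = {"user": "weblogic", "addr": "127.0.0.2"}
vector = inject_dyes(request, config)
```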
Dye Type    # Available   Condition Based On
ADDR        4             The incoming client IP address
USER        4             The user ID associated with this request
PROTOCOL    7             The protocol of the incoming request (HTTP, RMI, SSL, T3, and so on)
COOKIE      4             The value of a browser cookie named weblogic.diagnostics.dye
DYE         8             A value set programmatically using the WLDF APIs
Use the ADDR1, ADDR2, ADDR3, and ADDR4 dyes to specify the IP addresses of clients that
originate requests. These dye flags are set in the diagnostic context for a request if the
request originated from an IP address specified by the respective property. These dyes
cannot be used to specify DNS names.
Use the USER1, USER2, USER3, and USER4 dyes to specify the user names of clients that
originate requests. These dye flags are set in the diagnostic context for a request if the
request was originated by a user specified by the respective property.
COOKIE1, COOKIE2, COOKIE3, and COOKIE4 are set in the diagnostic context for an HTTP
or HTTPS request, if the request contains the cookie named weblogic.diagnostics.dye
and its value is equal to the value of the respective property.
PROTOCOL_HTTP is set in the diagnostic context of a request if the request uses HTTP or
HTTPS protocol. PROTOCOL_SSL is set in the diagnostic context of a request if it uses the
Secure Sockets Layer (SSL) protocol. Similar flags are available named PROTOCOL_IIOP,
PROTOCOL_JRMP, PROTOCOL_T3, PROTOCOL_RMI, and PROTOCOL_SOAP.
DYE_0 to DYE_7 are available for use only by application developers.
Add the monitor.
To initialize the dye vector for requests coming into the system, you must:
1. Create and enable a diagnostic module for the server (or servers) that you want to
monitor.
2. Enable instrumentation for the diagnostic module.
3. Configure and enable the Dye Injection monitor for the module. Only one Dye Injection
monitor can be used with a diagnostic module at any one time.
4. For Properties, enter the list of conditions for each dye you would like to use, such as
ADDR1, ADDR2, USER1, USER2, COOKIE1, COOKIE2, and so on.
[Figure: the Dye Injector dyes incoming requests 1111–1113; Monitor A, configured with dye
mask D2, passes only the matching request on to the event archive.]
One of the main reasons for using diagnostic context is to filter requests. Although you may
want a diagnostic action to trigger when something happens within the server or the
application, you may not want it to happen every time for every request. Dye filtering is one of
the ways you can restrict how many actions are triggered. Dye filtering is done using dye
masks. A dye mask is a is a selection of dyes from the Dye Injection monitor whose
conditions must evaluate to “true.”
Edit any monitor. Enable filtering. Which dyes must be set?
After editing an existing server-scoped or application-scoped monitor, update the Dye Mask
section. Move any of the available dyes from the Available list to the Chosen list. Also be sure
to select the EnableDyeFiltering check box. Then click Save. Remember that for application-
scoped instrumentation, these changes are saved in the application’s deployment plan.
The flags that are enabled in the diagnostic monitor must exactly match the bits set in the
request’s dye vector for an action to be triggered and an event to be written to the event
archive. For example, if the diagnostic monitor has both the USER1 and ADDR1 flags enabled,
and only the USER1 flag is set in the request’s dye vector, no action is triggered and no event
is generated.
When configuring a diagnostic monitor, do not enable multiple flags of the same type. For
example, do not enable both the USER1 and USER2 flags, because the dye vector for a given
request will never have both the USER1 and USER2 flags set.
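The matching rule described above amounts to a simple bitwise check: the monitor fires only when every dye enabled in its mask is also set in the request's dye vector. The sketch below is a simplified model, not the WLDF implementation, and the bit positions are arbitrary.

```python
# Hypothetical bit positions (illustration only).
USER1 = 1 << 0
ADDR1 = 1 << 1
ADDR2 = 1 << 2

def monitor_fires(dye_vector, dye_mask):
    """Return True when all mask bits are present in the dye vector."""
    return (dye_vector & dye_mask) == dye_mask

# Monitor requires USER1 AND ADDR1; a request dyed only with USER1
# does not trigger it, as in the example in the text.
mask = USER1 | ADDR1
fires = monitor_fires(USER1, mask)
```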
[Figure: of incoming requests 1112–1114, only a sampled subset is dyed with THROTTLE.]
Throttling is used to control the number of requests that are processed by the monitors in a
diagnostic module. Throttling is configured using the THROTTLE dye, which is defined in the
Dye Injection monitor. The USERn and ADDRn dyes allow inspection of requests from specific
users or IP addresses. However, they do not provide a means to look at arbitrary user
transactions. The THROTTLE dye provides that functionality by allowing sampling of requests.
If dye filtering for a monitor is enabled and that monitor has a dye mask, filtering is performed
based on the dye mask. That mask may include the THROTTLE dye, but it does not have to. If
THROTTLE is included in a dye mask, then THROTTLE must also be included in the request’s
dye vector for the request to be passed to the monitor. However, if THROTTLE is not included
in the dye mask, all qualifying requests are passed to the monitor, whether their dye vectors
include THROTTLE or not.
On the other hand, if dye filtering is not being used, you can still use the Dye Injection monitor
to configure throttling of event data. The throttling feature is not dependent on the use of filters
in your monitors.
THROTTLE_INTERVAL sets an interval (in milliseconds) after which a new incoming request
is dyed with the THROTTLE dye. If the THROTTLE_INTERVAL is greater than 0, the Dye
Injection monitor sets the THROTTLE dye flag in the dye vector of an incoming request if the
last request dyed with THROTTLE arrived at least THROTTLE_INTERVAL before the new
request. For example, if THROTTLE_INTERVAL=3000, the Dye Injection monitor waits at
least 3000 milliseconds before it dyes an incoming request with THROTTLE.
THROTTLE_RATE sets the rate (in terms of the number of incoming requests) by which new
incoming requests are dyed with the THROTTLE dye. If THROTTLE_RATE is greater than 0,
the Dye Injection monitor sets the THROTTLE dye flag in the dye vector of an incoming
request when the number of requests since the last request dyed with THROTTLE equals
THROTTLE_RATE. For example, if THROTTLE_RATE=6, every sixth request is dyed with
THROTTLE.
You can use THROTTLE_INTERVAL and THROTTLE_RATE together. If either condition is
satisfied, the request is dyed with the THROTTLE dye.
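The interval and rate rules above can be sketched as a small model. This is a simplified illustration of the documented behavior, not WLDF source code; the class and method names are invented for the example.

```python
class Throttle:
    """Model of THROTTLE dyeing: dye when either condition is satisfied."""

    def __init__(self, interval_ms=0, rate=0):
        self.interval_ms = interval_ms    # THROTTLE_INTERVAL
        self.rate = rate                  # THROTTLE_RATE
        self.last_dye_time = None
        self.since_last_dye = 0

    def should_dye(self, now_ms):
        dye = False
        # Interval rule: at least interval_ms since the last dyed request.
        if self.interval_ms > 0:
            if (self.last_dye_time is None or
                    now_ms - self.last_dye_time >= self.interval_ms):
                dye = True
        # Rate rule: every rate-th request is dyed.
        if self.rate > 0:
            self.since_last_dye += 1
            if self.since_last_dye >= self.rate:
                dye = True
        if dye:
            self.last_dye_time = now_ms
            self.since_last_dye = 0
        return dye
```

With THROTTLE_RATE=6, every sixth call to should_dye reports a dyed request, matching the example in the text.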
Answer: a
Answer: b
Answer: d
• JVM Concepts
– JVM Support
– Heap
– Garbage Collection
– Basic Settings
– Memory Leak
– Crash Files
• Basic JVM Tools
• WLS JVM Tools
• JRockit Mission Control
com.mycompany.commerce.InventoryManager
An object is a software bundle of related state and behavior. Software objects are often used
to model business entities and processes. Software objects are conceptually similar to real-
world objects—they consist of state and related behavior. An object stores its state in fields
(variables in some programming languages) and exposes its behavior through methods
(functions in some programming languages). Methods operate on an object’s internal state
and serve as the primary mechanism for object-to-object communication.
A class is a blueprint or prototype from which objects are created. An object is an instance of
a specific class. An interface is a contract between a class and the outside world. When a
class implements an interface, it promises to provide the behavior published by that interface.
The concepts of classes and interfaces are often generalized using the term “type.”
A package is a namespace for organizing classes and interfaces in a logical manner.
Because software written in the Java programming language (particularly Java EE systems)
can be composed of thousands of individual classes, it makes sense to keep things organized
by placing related classes and interfaces into packages. Conceptually you can think of
packages as being similar to different folders on your computer. The Java platform provides
an enormous class library (a set of packages) suitable for use in your own applications, which
represent common, general-purpose programming requirements. For example, a
java.io.File object allows you to easily read, create, update, and delete folders and files
on the file system.
[Figure: HelloWorld.java is compiled into HelloWorld.class.]
Java programs are compiled into a form called Java bytecodes. The JVM executes Java
bytecodes, so Java bytecodes can be thought of as the machine language of the JVM. The
Java compiler reads Java language source (.java) files, translates the source into Java
bytecodes, and places the bytecodes into class (.class) files. The compiler generates one
class file per class in the source.
To the JVM, a stream of bytecodes is a sequence of instructions. Each instruction consists of
a one-byte opcode and zero or more operands. The opcode tells the JVM what action to take.
If the JVM requires more information to perform the action than just the opcode, the required
information immediately follows the opcode as operands.
The graphic in the slide shows how source code is compiled into class files that can be used
by JVMs running on multiple operating systems.
Note: The following information is subject to change. Always refer to the latest support
documentation.
If an Oracle product has been certified against and is supported on a version of Red Hat
Enterprise Linux (RHEL), it is automatically certified and supported on the corresponding
version of Oracle Enterprise Linux (OEL) (for example, RHEL4 > OEL4, RHEL5 > OEL5).
If a product is supported and certified on OEL or RHEL, it is also certified and supported in the
virtualized installation of the same version of OEL or RHEL running on Oracle VM (for
example, OEL4 > OEL4 on Oracle VM, OEL5 > OEL5 on Oracle VM, RHEL4 > RHEL4 on
Oracle VM, RHEL 5 > RHEL5 on Oracle VM). Oracle recommends using the latest update
levels and OVM versions available.
Every Oracle product that is certified on Windows is also certified and supported when
running on Windows in a virtualized environment with Oracle VM.
The Java Virtual Machine (JVM) is a virtual “execution engine” instance that executes the
bytecodes in compiled Java class files on a microprocessor. To the JVM, a stream of
bytecodes is a sequence of instructions. Each instruction consists of a one-byte opcode and
zero or more operands. The opcode tells the JVM what action to take. If the JVM requires
more information to perform the action than just the opcode, the required information
immediately follows the opcode as operands.
Tuning the JVM to achieve optimal application performance is one of the most critical aspects
of WebLogic Server performance. A poorly tuned JVM can result in slow transactions, long
latencies, system freezes, and even system crashes. Ideally, tuning should occur as part of
the system startup by employing various combinations of the startup options.
Heap
Native memory is the memory that the JVM uses for its own internal operations. The amount
of native memory used by the JVM depends on the amount of code generated, the number of
threads created, the memory used during garbage collection to track Java object information,
and the temporary space used during code generation and optimization. Third-party native
modules can also consume native memory; for example, native JDBC drivers allocate native
memory.
Java heap is the memory used by the JVM to allocate Java objects. The maximum size of
Java heap can be specified using arguments on the Java command line. If the maximum
heap size is not specified, the limit is decided by the JVM considering factors such as the
amount of physical memory in the machine and the amount of free memory available at that
moment. You should specify a value for the maximum Java heap value. The Java heap
contains objects used by the Java programs, including both live and dead objects, as well as
free memory that has been allocated but not used.
The Java heap is where the objects of a Java program live. It is a repository for live objects,
dead objects, and free memory. When an object can no longer be reached from any pointer in
the running program, it is considered “garbage” and ready for collection. The Java language
does not allow you to free allocated memory directly. Instead, the runtime environment keeps
track of the references to each object on the heap and automatically frees the memory
occupied by objects that are no longer referenced by a process called garbage collection.
The JVM heap size determines how often (and for how long) the VM collects garbage. An
acceptable rate for garbage collection is application-specific and should be adjusted after
analyzing the actual time and frequency of garbage collections. If you set a large heap size,
full garbage collection is slower but occurs less frequently. If you set your heap size in
accordance with your memory needs, full garbage collection is faster but occurs more
frequently.
In addition to freeing unreferenced objects, a garbage collector may also combat heap
fragmentation. Heap fragmentation occurs through the course of normal program execution.
A second advantage of garbage collection is that it helps ensure program integrity. Garbage
collection is an important part of Java’s security strategy. Java programmers are unable to
accidentally (or purposely) crash the JVM by incorrectly freeing memory.
Memory in the Java HotSpot virtual machine is organized into three generations: a young
generation, an old generation, and a permanent generation. Most objects are initially
allocated in the young generation. The old generation contains objects that have survived
some number of young generation collections, as well as some large objects that may be
allocated directly in the old generation. The permanent generation holds objects that the JVM
finds convenient to have the garbage collector manage, such as objects describing classes
and methods, as well as the classes and methods themselves.
The young generation consists of an area called Eden, plus two smaller survivor spaces. Most
objects are initially allocated in Eden. (As mentioned, a few large objects may be allocated
directly in the old generation.) The survivor spaces hold objects that have survived at least
one young generation collection and have thus been given additional chances to die before
being considered “old enough” to be promoted to the old generation. At any given time, one of
the survivor spaces holds such objects, while the other is empty and remains unused until the
next collection.
When the young generation fills up, a young generation collection (sometimes referred to as a
minor collection) of only that generation is performed. When the old or permanent generation
fills up, what is known as a full collection (sometimes referred to as a major collection) is
typically done.
There are a number of garbage collection techniques (for example, reference counts and
mark and sweep). Different versions of Java use different algorithms, but recent versions use
generational garbage collection, which often proves to be quite efficient. Garbage collection is
an important part of Java’s security strategy. Java programmers are unable to accidentally (or
purposely) crash the JVM by incorrectly freeing memory.
When a heap generation becomes full, garbage is collected by running a partial collection,
where all objects that have lived long enough in the younger generations are promoted
(moved) to an older generation, thus freeing up the younger one for more object allocation.
When the old space becomes full, a full collection is required.
The reasoning behind generations is that most objects are temporary and short lived. A partial
collection is designed to be swift at finding newly allocated objects that are still alive and
moving them away. Typically, a partial collection frees a given amount of memory much faster
than a full collection. When monitoring the heap consumption of a JVM, it is often easy to
identify the points at which different collection algorithms are used, based on the amount of
time to perform the collection and the amount of memory freed.
export JAVA_VENDOR="Oracle"
export USER_MEM_ARGS="-Xms512m -Xmx1g"
./startWebLogic.sh
When you create a domain, if you choose to customize the configuration, the Configuration
Wizard presents a list of SDKs that WebLogic Server installed. From this list, you choose the
JVM that you want to run your domain, and the wizard configures the Oracle start scripts
based on your choice.
After you create a domain, if you want to use a different JVM, you can modify the scripts and
change either the JAVA_HOME or JAVA_VENDOR environment variables. For JAVA_HOME,
specify an absolute pathname to the top directory of the SDK that you want to use. For
JAVA_VENDOR, specify the vendor of the SDK. Valid values depend on the platform on which
you are running:
• “Oracle” indicates that you are using the JRockit SDK. It is valid only on platforms that
support JRockit.
• “Sun” indicates that you are using the Sun SDK.
• “HP” and “IBM” indicate that you are using SDKs that Hewlett Packard or IBM have
provided. These values are valid only on platforms that support HP or IBM SDKs.
With JRockit, the heap may be divided into two generations called the nursery (or young
space) and the old space. The nursery is a part of the heap reserved for allocation of new
objects. When the nursery becomes full, garbage is collected by running a special young
collection, where all objects that have lived long enough in the nursery are promoted (moved)
to the old space, thus freeing up the nursery for more object allocation. When the old space
becomes full, garbage is collected there (a process called an old collection). The reasoning
behind a nursery is that most objects are temporary and short lived. A young collection is
designed to be swift at finding newly allocated objects that are still alive and moving them
away from the nursery.
During object allocation, JRockit JVM distinguishes between small and large objects. The limit
for when an object is considered large depends on the JVM version, the heap size, the
garbage collection strategy and the platform used, but is usually somewhere between 2 KB
and 128 KB. Small objects are allocated in thread local areas (TLAs). The thread local areas
are free chunks reserved from the heap and given to a Java thread for exclusive use. The
thread can then allocate objects in its TLA without synchronizing with other threads. When the
TLA becomes full, the thread simply requests a new TLA. The TLAs are reserved from the
nursery if a nursery exists; otherwise, they are reserved anywhere in the heap.
The -Xms option sets the initial and minimum Java heap size. Combine -Xms with a memory
value and add a unit. If you do not add a unit, you will get the exact value you state; for
example, 64 will be interpreted as 64 bytes, not 64 megabytes or 64 kilobytes.
The -Xmx option sets the maximum Java heap size. Depending upon the kind of operating
system that you are running, the maximum value that you can set for the Java heap can vary.
Note, however, that this option does not limit the total amount of memory that the JVM can
use.
The -Xns option sets the nursery size. JRockit JVM uses a nursery when the generational
garbage collection model is used. You can also set a static nursery size when running a
dynamic garbage collector (-XgcPrio).
The -XgcPrio option sets a dynamic garbage collection mode. This garbage collector
combines all types of garbage collection heuristics and optimizes the performance
accordingly. When running this garbage collector, you need to determine only whether your
application responds best to optimal memory throughput during collection or minimized pause
times. The dynamic garbage collector will then adapt its choice of collector type, in run time,
to what best suits your application.
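Tying these arguments together, a hypothetical JRockit start command might look like the following. The heap sizes, nursery size, and application jar are illustrative values only, not recommendations:

```
java -Xms512m -Xmx1g -Xns128m -XgcPrio:throughput -jar myapp.jar
```

Here -XgcPrio:throughput asks the dynamic collector to optimize for memory throughput; -XgcPrio:pausetime instead optimizes for short garbage collection pauses.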
Advanced -XX options are subject to change without notice and are not recommended unless
you have a thorough understanding of the JVM.
An out-of-memory condition may occur in the Java heap, when the JVM does not have
enough heap space to allocate new Java objects. A similar error can occur for native memory,
when the JVM itself or any additional native shared libraries cannot allocate space for objects
or internal operations.
Native memory errors can also occur due to applications that utilize the Java Native Interface
(JNI). JNI allows you to write native methods to handle situations when an application cannot
be written entirely in the Java programming language, such as when the standard Java class
library does not support the required platform-specific features. Many of the standard Java
library classes depend on JNI to provide functionality to the developer and the user, including
file I/O and sound capabilities. Before resorting to using JNI, developers should make sure
that the functionality is not already provided in the standard libraries. The JNI framework does
not provide any automatic garbage collection for non-JVM memory resources allocated by
code executing on the native side. Consequently, native-side code (such as C, C++, or
assembly language) must assume the responsibility for explicitly releasing any such memory
resources that it itself acquires.
An out-of-memory condition can occur when there is free memory available in the heap but it
is too fragmented and not contiguously located to store the object being allocated or moved
(as part of a garbage collection cycle).
• Memory leaks:
– Are a common cause of out-of-memory errors
– Are less common in Java than in traditional languages
– Occur when an object is no longer needed but is not made
eligible for garbage collection
• Typical scenarios include:
– Excessive caching
– Excessive use of the HTTP session
– Failing to close DB results when finished
– Problems with dynamic class loading
At first glance, it may appear that memory leaks are not possible with Java, because the JVM
automatically destroys any objects that the application is no longer referencing. However, a
poorly designed or implemented application can still accidentally maintain a reference to an
object even after the object is no longer needed.
In Java EE applications, a common memory leak culprit is the HTTP session, which is a
specific form of caching. Recall that sessions are collections of objects that the application
wants to maintain for the lifetime of a user conversation, such as navigating a Web site. In this
case, objects are initially identified by the application, but are subsequently referenced and
maintained by the server. By default, WebLogic Server will maintain sessions for 1 hour of
inactivity before making them eligible for garbage collection. For servers hosting hundreds
and thousands of concurrent users, large sessions can quickly cause the JVM to run out of
memory. Poorly built applications will cache large amounts of data in HTTP sessions simply
for convenience or for a perceived performance benefit.
WebLogic uses classloaders to dynamically modify the classpath for individual applications at
run time. A typical example is the (re)deployment of an application. This class loading
behavior can also be customized by developers and administrators. If done improperly, this
customization can create subtle memory leaks.
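The caching scenario described above can be sketched language-neutrally: an unbounded cache keeps a reference to every entry forever, so nothing ever becomes eligible for collection, while a bounded cache evicts old entries. The sketch below uses Python for brevity; the class names are invented for the example and are not a WebLogic API.

```python
from collections import OrderedDict

class UnboundedCache:
    """The leak pattern: entries are added but never released."""
    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = value   # reference retained forever

class LRUCache:
    """A bounded alternative: the oldest entry is evicted at capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the oldest entry

# Simulate 1,000 user sessions being cached by each strategy.
leaky, bounded = UnboundedCache(), LRUCache(capacity=100)
for i in range(1000):
    leaky.put(i, "session-%d" % i)
    bounded.put(i, "session-%d" % i)
```

The unbounded cache ends up holding all 1,000 entries, which is exactly how excessive session or application caching exhausts the heap over time.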
• Rarely, the JVM itself can fail and cause the server to
crash.
• Depending on the JVM and type of failure, a log and/or a
core file may be produced.
• A core file:
– Is a binary snapshot of the process memory at the time of
failure
– Is triggered by failing native code (JVM, native WLS libraries,
native DB drivers, and so on)
– Can be viewed using OS-specific tools
– Can be disabled or truncated based on your OS settings
On rare occasions you may encounter a JVM crash, which means that the JVM itself has
encountered a problem from which it has not managed to recover gracefully. You can identify
and troubleshoot a JVM crash by the diagnostic files that are generated by the JVM and/or
OS. A snapshot is created that captures the state of the JVM process at the time of the error.
These diagnostic files may include a core file, which typically uses the extension .core on
UNIX and .mdmp on Windows. This binary file contains information about the entire JVM
process and needs to be opened using debugging tools included with your OS. The size of
the binary crash file is usually quite large, so you need to make sure there is enough disk
space for the file to be completely written to disk. Alternatively, you can configure the JVM
and/or OS to limit the file size. For example, on UNIX, use the ulimit command or the
limits.conf file. JVMs also include options to change the file name or location of
generated core files.
The gdb debugging tool, popular on Linux, can extract useful information from core files. The
Dr. Watson tool on Windows provides similar capabilities.
The JVM error log is a text file that is like an executive summary of the full memory image and
the environment in which the JVM was run at the time of the crash. This file is produced by
the JVM itself when it crashes and is useful for classifying crashes. It can also sometimes be
used to identify problems that have already been fixed. While this file by itself rarely reveals
enough information to actually find the cause of the problem, it can be useful when creating a
support case.
The exact contents of the log file will vary by JVM, but they generally include similar types of
information. On the Sun JVM, the log file is named hs_err_pid<pid>.log, where <pid> is
the process ID of the process. JRockit refers to this error log as a “dump” file with the name
jrockit.<pid>.dump.
The log shows the actual error message that was raised at the time of the crash.
Check your operating system vendor’s documentation to find out more about the error. It
also describes how many CPUs have been used and how much memory has been consumed
by the JVM. In addition, the log includes the state of any native threads being utilized by the
JVM.
The log includes a summary of the JVM’s host environment, including the OS version and
hardware specifications. Be sure that your JVM is running on a supported configuration.
• JVM Concepts
• Basic JVM Tools
– Stack Trace
– Thread Dump
– Verbose GC
– Sun Profiler Agent
– Sun Diagnostic Tools
– JVisualVM
• WLS JVM Tools
• JRockit Mission Control
A stack trace:
• Describes a sequence of method invocations that have led
up to a specific point in a program
• Helps depict the various layers of your application
• Includes source line numbers, unless explicitly removed
during compilation
WLS code:
at weblogic.jdbc.jta.DataSource.connect(DataSource.java:383)
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager.connectInternal(JDBCStoreManager.java:773)
at kodo.jdbc.kernel.KodoJDBCStoreManager.connectInternal(JDBCStoreManager.java:29)
at kodo.jdbc.sql.KodoSelectImpl.execute(KodoSelectImpl.java:28)
at com.mycompany.ejb.Customer.findByStatus(Customer.java:21)
If an exception or error is not explicitly handled by WLS or an application, the server will log
the error message along with a stack trace. For a typical Java EE application running on
WLS, a full stack trace for a single client request could comprise 100 or more method
invocations. Developers can also programmatically print the current stack trace if desired. The
text <init> is used within a stack trace entry to indicate a constructor method of a class.
Stack traces may include calls to native functions, such as within the JVM or other native
libraries. These native calls will be indicated as such in the stack trace and will not include
detailed line numbers.
For each method call within a stack trace, you will see the containing class and package
name. Even if you are not intimately familiar with WLS or your application’s implementation
details, these package names can still help you identify which general layers of the system
are involved. For example, generally speaking, packages starting with weblogic are part of
WebLogic’s codebase, while those starting with org tend to be third-party frameworks or
libraries. In the example in the slide, we can see that this application uses two such libraries:
OpenJPA and Kodo (both of which are bundled with WLS).
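As an illustration of reading a trace by package, the following hypothetical Python sketch extracts the package of each frame so you can see at a glance which layers (WebLogic, third-party, application) are involved. It assumes the standard "at package.Class.method(File:line)" frame format:

```python
import re

def frame_packages(trace):
    """Extract the package prefix of each 'at pkg.Class.method(File:line)' frame."""
    frames = re.findall(r'at\s+([\w.$]+)\(', trace)
    layers = []
    for frame in frames:
        # Drop the trailing method and class names, keeping only the package.
        parts = frame.split('.')
        layers.append('.'.join(parts[:-2]) if len(parts) > 2 else frame)
    return layers

trace = """
at weblogic.jdbc.jta.DataSource.connect(DataSource.java:383)
at kodo.jdbc.sql.KodoSelectImpl.execute(KodoSelectImpl.java:28)
at com.mycompany.ejb.Customer.findByStatus(Customer.java:21)
"""
print(frame_packages(trace))
# -> ['weblogic.jdbc.jta', 'kodo.jdbc.sql', 'com.mycompany.ejb']
```

Grouping frames this way quickly shows whether a request failed in server code, a bundled framework, or your own classes.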
A thread dump is a snapshot of all active threads in a Java Virtual Machine in a readable
textual format. A thread dump provides information about what each of the threads is doing at
a particular moment in time.
A set of thread dumps can help analyze the change, or lack of change, in each thread’s state
from one thread dump to another. To detect deadlocks, get thread dumps several times (three
or more) on each server, spaced somewhere between 5 and 30 seconds apart.
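As a sketch of this comparison technique, the following hypothetical Python code extracts thread states from a series of dumps and reports threads that remain BLOCKED in every dump. It assumes the HotSpot dump format, where the thread name appears in quotation marks and is followed by a java.lang.Thread.State: line; exact layouts vary by JVM:

```python
import re

def thread_states(dump):
    """Map thread name -> state from a HotSpot-style thread dump."""
    states = {}
    name = None
    for line in dump.splitlines():
        m = re.match(r'"(.+)"', line.strip())
        if m:
            name = m.group(1)
        elif name and 'java.lang.Thread.State:' in line:
            # Keep only the state keyword, e.g. BLOCKED, WAITING, RUNNABLE.
            states[name] = line.split(':', 1)[1].strip().split()[0]
            name = None
    return states

def stuck_threads(dumps, state='BLOCKED'):
    """Threads reported in the given state in every dump of the series."""
    sets = [{n for n, s in thread_states(d).items() if s == state} for d in dumps]
    return set.intersection(*sets)
```

A thread that stays BLOCKED across all dumps taken seconds apart is a strong candidate for deadlock or lock contention analysis.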
Some threads in WebLogic are not used to process client requests but instead perform
internal operations. For example, socket reader (or muxer) threads simply poll sockets for
incoming requests and place them on a queue to be processed by other threads. The number
of these socket reader threads that are created by WebLogic can vary based on the host OS
and the type of native performance pack installed.
There are several open-source graphical tools that can help you better visualize and analyze
the data contained in a thread dump. Some may support only a specific JVM, but others
support multiple JVMs. Examples include Samurai and the TDA project. JVMs themselves
typically provide similar thread dump analysis tools as well.
kill -3 <WLSProcessID>
Generally speaking, thread dumps are written to the standard output stream, but there are
some exceptions. For example, you can view the state of all threads by using the WLS
Administration Console. Select a server and click the Monitoring > Threads tab. The Current
Request column indicates what the thread is currently working on. To instead trigger a thread
dump (on standard output) by using the console, click the Monitoring > Performance tab and
then click Garbage Collect.
The WLST threadDump() command displays a thread dump for the given server and, by
default, also writes this data to a file. By default, the file name is
Thread_Dump_<serverName>. If you are connected to an Administration Server, you can
display a thread dump for the Administration Server as well as for any managed server that is
running in the domain. If you are connected to a managed server, you can only display a
thread dump for that managed server.
Note that the JVM option -Xrs (or -Xnohup) can disable thread dumps on a server. This
option helps to prevent possible interference when the JVM is running as a background
process or service and receives OS signals such as CTRL_LOGOFF_EVENT or SIGHUP. If not
set, upon receiving such events, the VM tries to initiate a shutdown, but this shutdown will fail
because the operating system will not actually terminate the process.
The -verbose and -Xverbose JRockit JVM command-line options output diagnostic
information about the system. The output is by default printed to the standard output for error
messages (stderr) but you can redirect it to a file by using the -XverboseLog command-
line option. The information displayed depends on the parameter specified with the option; for
example, specifying the parameter cpuinfo displays information about your CPU and
indicates whether or not the JVM can determine if hyper-threading is enabled. Available
debug parameters include cpuinfo, gc, gcpause, gcreport, class, load, and codegen.
The Sun HotSpot JVM supports similar command-line options. The -verbose option
supports the parameters gc, class, and jni. Additional command-line options related to
memory management include -XX:+PrintGC, -XX:+PrintGCDetails, and
-XX:+PrintGCTimeStamps.
A typical garbage collection debug message includes the memory used prior to the GC, the
new amount of memory used after the GC, the current heap size, and the elapsed time. The
output format can vary, so refer to your JVM documentation. If you encounter out-of-memory
errors even when verbose GC messages indicate that there is free Java heap available, it is
likely due to memory fragmentation.
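For example, a minimal parser for a typical HotSpot-style GC line can extract these fields for trending; this is a hypothetical sketch, and the exact log format varies by JVM and by the verbose options enabled:

```python
import re

# Matches lines such as: [GC 64781K->54503K(129024K), 0.0087580 secs]
GC_LINE = re.compile(r'\[(?:Full )?GC (\d+)K->(\d+)K\((\d+)K\), ([\d.]+) secs\]')

def parse_gc(line):
    """Return (used_before_kb, used_after_kb, heap_kb, pause_secs), or None."""
    m = GC_LINE.search(line)
    if not m:
        return None
    before, after, heap, secs = m.groups()
    return int(before), int(after), int(heap), float(secs)
```

Plotting used-after values over time from such a parser is a simple way to spot a steadily growing heap (a possible leak) or excessive pause times.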
Tagtraum GCViewer is an open-source tool that can help you visualize the data produced by the
verbose GC options. It supports both the Sun and JRockit JVMs.
HPROF is actually a JVM native agent library that is dynamically loaded through a command-
line option at JVM startup, and becomes part of the JVM process. By supplying HPROF
options at startup, users can request various types of heap and/or CPU profiling features from
HPROF. The data generated can be in textual or binary format, and can be used to track
down and isolate performance problems involving memory usage and inefficient code. The
following profile types are supported:
• -agentlib:hprof=heap=[all|dump|sites]
• -agentlib:hprof=cpu=[samples|times|old]
Additional options are available to control the output format (ASCII or binary), sampling
period, and stack trace depth. For example, to sample CPU information every 20 ms, with a
stack depth of 3, use:
-agentlib:hprof=cpu=samples,interval=20,depth=3
By default, heap profiling information (sites and dump) is written out to java.hprof.txt.
The output in most cases contains IDs for stack traces, threads, and objects. Each type of ID
typically starts with a different number than the other IDs. For example, traces might start with
300000.
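The text-format CPU SAMPLES section of an HPROF report is easy to post-process. The following hypothetical sketch, assuming the "rank self accum count trace method" row layout, extracts the hottest methods:

```python
def top_samples(report, n=3):
    """Return (method, self_pct, trace_id) rows from an HPROF CPU SAMPLES section."""
    rows = []
    in_section = False
    for line in report.splitlines():
        if line.startswith('CPU SAMPLES BEGIN'):
            in_section = True
        elif line.startswith('CPU SAMPLES END'):
            break
        elif in_section:
            parts = line.split()
            # Data rows have 6 fields and start with a numeric rank;
            # this skips the header row.
            if len(parts) == 6 and parts[0].isdigit():
                rows.append((parts[5], float(parts[1].rstrip('%')), int(parts[4])))
    return rows[:n]
```

The trace IDs returned here can then be looked up in the TRACE entries of the same report to see the full call stack behind each hot method.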
Tool Description
You can use these tools bundled with the Sun JDK to monitor JVM performance statistics and
to troubleshoot problems. According to the Oracle Sun web site, the tools described in this
section are unsupported and experimental, and should be used with that in mind. Therefore,
although unlikely, they may not be available in future JDK versions.
JVisualVM is a graphical tool that provides detailed information about one or more local or
remote JVMs. It includes memory and CPU profiling, heap dump analysis, memory leak
detection, access to MBeans, and garbage collection analysis. JConsole is a similar tool but
does not include memory and CPU profiling features.
The jps tool lists the instrumented HotSpot Java Virtual Machines (JVMs) on the target
system. The tool is limited to reporting information on JVMs for which it has the access
permissions. If jps is run without specifying a hostid, it will look for instrumented JVMs on
the local host. If started with a hostid, it will look for JVMs on the indicated host, using the
specified protocol and port. A jstatd process is assumed to be running on the target host.
The jps command will report the local VM identifier, or vmid, for each instrumented JVM
found on the target system. The vmid is typically, but not necessarily, the operating system’s
process identifier for the JVM process. With no options, jps will list each Java application’s
vmid followed by the short form of the application’s class name or JAR file name. The short
form of the class name or JAR file name omits the class’s package information or the JAR
file’s path information.
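As a small, hypothetical sketch of post-processing this output, the following maps each vmid to its short class or JAR name (jps prints one "vmid name" pair per line):

```python
def parse_jps(output):
    """Map vmid -> short main class or JAR name from `jps` output lines."""
    vms = {}
    for line in output.strip().splitlines():
        fields = line.split(None, 1)
        # A line may contain only the vmid if no name is available.
        vms[fields[0]] = fields[1] if len(fields) > 1 else ''
    return vms
```

A script like this can feed the vmids into other tools (jmap, jstat, jstack) without requiring you to look up process IDs by hand.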
<xyz> denotes
internal JVM types.
> jmap -histo 13101 | more
For most Sun diagnostic tools, you must specify the ID of the JVM to communicate with. In
most instances, this is the host process ID.
The jmap tool supports the following data options:
• -histo[:live]: Prints a histogram of the heap. For each Java class, the number of
objects, the memory size in bytes, and the fully qualified class names are printed. VM
internal class names are printed with the * prefix. If the live option is specified, only live
objects are counted.
• -heap: Prints a heap summary, GC algorithm used, heap configuration, and heap
usage for each generation
• -dump:[live,]format=b,file=<filename> : Dumps the Java heap in hprof
binary format to the file name. The live option is optional. If specified, only the live
objects in the heap are dumped. To browse the heap dump, you can use jhat.
• -finalizerinfo: Prints information on objects awaiting finalization
• -permstat: Prints class loader statistics for permanent generation of the Java heap.
For each class loader, its name, status, address, parent class loader, and the number
and size of classes it has loaded are printed. In addition, the number and size of
interned strings are printed.
When running the jstat tool you must specify a data option. For example, -gc prints all
statistics for the entire garbage-collected heap. The -gcold and -gcnew options print
statistics for only the old or new heap generations, respectively. The -gcutil option prints
an abbreviated set of heap statistics.
The jstat tool supports arguments to control the data collection interval, the columns that
are displayed, and the interval at which column headers are displayed. For GC statistics, the
columns include:
• S0C Current survivor space 0 capacity (KB)
• S1C Current survivor space 1 capacity (KB)
• S0U Survivor space 0 utilization (KB)
• S1U Survivor space 1 utilization (KB)
• EC Current Eden space capacity (KB)
• EU Eden space utilization (KB)
• OC Current old space capacity (KB)
• PC Current permanent space capacity (KB)
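These columns can be combined to compute utilization percentages per heap space. The following hypothetical sketch pairs each utilization column with its capacity column (it also assumes an OU column, which jstat -gc prints for old-space utilization):

```python
def heap_utilization(header, row):
    """Percent used per jstat -gc column pair, e.g. EU/EC and S0U/S0C."""
    cols = dict(zip(header.split(), (float(v) for v in row.split())))
    pairs = [('S0U', 'S0C'), ('S1U', 'S1C'), ('EU', 'EC'), ('OU', 'OC')]
    # Skip any space whose capacity is zero or missing to avoid division errors.
    return {used: round(100.0 * cols[used] / cols[cap], 1)
            for used, cap in pairs if cols.get(cap)}
```

Running this against successive jstat samples shows how quickly Eden fills between young collections and whether the old generation is trending upward.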
JVisualVM can:
• Monitor and profile local JVMs
• Monitor JVMs on remote hosts that are also running the
jstatd process
JVisualVM is a tool that provides a visual interface for viewing detailed information about Java
applications while they are running on a JVM, and for troubleshooting and profiling these
applications. JVisualVM combines all of the functionality available in the JDK’s command-line
diagnostic tools, such as jstat, jinfo, jmap, and jstack.
JVisualVM is able to access and display data about applications running on remote hosts.
After connecting to a remote host, JVisualVM can display general data about the application’s
runtime environment and can monitor memory heap and thread activity. JVisualVM can
retrieve monitoring information on remote applications, but it cannot profile remote
applications.
The jstatd tool is an RMI server application that monitors the creation and termination of
instrumented HotSpot Java virtual machines (JVMs) and provides an interface to allow remote
monitoring tools to attach to JVMs running on the local host. The jstatd server requires the
presence of an RMI registry on the local host. The jstatd server will attempt to attach to the
RMI registry on the default port, or on the port indicated by the -p port option. If an RMI
registry is not found, one will be created within the jstatd application bound to the port
indicated by the -p port option or to the default RMI registry port if -p port is omitted. The
jstatd server can monitor only those JVMs for which it has the appropriate native access
permissions. Therefore, the jstatd process must be running with the same user credentials
as the target JVMs.
When you start JVisualVM, the Applications panel is visible at the left side of the main
window. The Applications panel uses a tree structure to enable you to quickly view the
applications that are running on local and connected remote JVM software instances. It also
displays core dumps and application snapshots. For most nodes in the Applications panel,
you can view additional information and perform actions by right-clicking a node and selecting
an item from the popup menu.
The Local node displays the names and process IDs (PIDs) of Java applications that are
running on the same system as JVisualVM. When you launch JVisualVM, it automatically
displays the currently running Java applications, including JVisualVM itself. When a new local
Java application is launched, a node for that application appears under the Local node. The
application node disappears when the application terminates. If you use JVisualVM to take
thread dumps, heap dumps, or profiling snapshots of an application, they are displayed as
subnodes below the node for that application. You can right-click a dump or snapshot
subnode to save it to your local system. You can also collect and archive all the information
collected about an application and save it as an application snapshot.
The VM Coredumps node lists open core dump files. A core dump is a binary file that contains
information about the runtime status of the machine at the time the core dump was taken. You
can then extract an overview of the system properties, a heap dump, and a thread dump.
[Figure: JVisualVM tabs for monitoring threads and profiling memory usage.]
JVisualVM displays real-time, high-level data on thread activity on the Threads tab for a
specific JVM instance. By default, the Threads tab displays a timeline of current thread
activity. You can also click a thread in the timeline to view details about that thread on the
Details tab. Use the buttons in the Timeline toolbar to zoom in and out on the current view and
to switch to the “Scale to Fit” mode. The drop-down list enables you to select which threads
are displayed. You can choose to view all threads, live threads, or finished threads.
The Profiler tab enables you to start and stop the profiling session of a local JVM. When
profiling results are available, they are automatically displayed on the Profiler tab. You can
use the toolbar to refresh the profiling results, invoke garbage collection, and save the
profiling data. Choose CPU Profiling to profile the performance of the application. Choose
Memory Profiling to analyze the memory usage of the JVM. The results display the objects
allocated by the JVM along with the classes that allocated those objects. The filter box below
the profiling results enables you to filter the displayed results according to the name of the
method. To filter the results, enter a term in the method name filter box, select a filtering
method to use, and press Return. You can see and select previous filter terms by clicking the
arrow to the right of the method name filter box.
• JVM Concepts
• Basic JVM Tools
• WLS JVM Tools
– Console JVM Monitoring
– Low Memory Detection
• JRockit Mission Control
Force a GC.
domainRuntime()
serverJVM = getMBean('/ServerRuntimes/serverA/JVMRuntime/serverA')
You can automatically log low-memory conditions observed by the server. WebLogic Server
detects low memory by sampling the available free memory a specified number of times
during a time interval. At the end of each interval, an average of the free memory is recorded
and compared to the average obtained at the next interval. If the average drops by a user-
configured amount after any sample interval, the server logs a low-memory warning message
in the log file and sets the server health state to Warning.
1. Select a server in your domain.
2. Confirm that the Configuration tab is selected.
3. Click the Tuning tab.
• Low Memory GC Threshold: The threshold level (as a percentage) that this server
uses for logging low-memory conditions and changing the server health state to
Warning. For example, if you specify 5, the server logs a low-memory warning in the log
file and changes the server health state to Warning after the average free memory
reaches 5% of the initial free memory measured at the server’s boot time.
• Low Memory Granularity Level: The granularity level (as a percentage) that this server
uses for logging low-memory conditions and changing the server health state to
Warning. For example, if you specify 5 and the average free memory drops by 5% or
more over two measured intervals, the server logs a low-memory warning in the log file
and changes the server health state to Warning.
• Low Memory Time Interval: The amount of time (in seconds) that defines the interval
over which this server determines average free-memory values. By default, the server
obtains an average free-memory value every 3600 seconds. This interval is not used if
the JRockit VM is used, because the memory samples are collected immediately after a
VM-scheduled garbage collection. Taking memory samples after garbage collection
gives a more accurate average value of the free memory.
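The averaging-and-comparison behavior described above can be sketched as follows. This is an illustrative model of the documented logic, not WebLogic's actual implementation; it takes one average free-memory value per interval and flags both kinds of warning:

```python
def low_memory_warnings(interval_averages, granularity_pct=5, threshold_pct=5):
    """Flag intervals where average free memory drops by >= granularity_pct
    from the previous interval, or falls to <= threshold_pct of the initial
    value measured at boot time."""
    warnings = []
    initial = interval_averages[0]
    for i in range(1, len(interval_averages)):
        prev, cur = interval_averages[i - 1], interval_averages[i]
        if prev and 100.0 * (prev - cur) / prev >= granularity_pct:
            warnings.append((i, 'granularity'))
        if 100.0 * cur / initial <= threshold_pct:
            warnings.append((i, 'threshold'))
    return warnings
```

In the real server, either condition results in a low-memory message in the log and the server health state changing to Warning.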
• JVM Concepts
• Basic JVM Tools
• WLS JVM Tools
• JRockit Mission Control
– JRockit Diagnostic Tools
– Management Console
– Runtime Analyzer
– Memory Leak Detector
Tool Description
jrmc (JRockit Mission Control): An all-inclusive graphical interface for monitoring and profiling CPU, JVM memory, threads, and MBeans
jrcmd: Identifies JRockit processes and sends commands to them, such as print_threads, heap_diagnostics, or print_class_summary
The jrcmd utility is a command-line tool included with the JRockit JDK that you can use to send
diagnostic commands to one or more running JVM processes. It discovers which JRockit JVM
processes are running on the machine. The syntax is as follows:
jrcmd <pid> [<command> [<arguments>]]
<pid> is the process ID of a JRockit JVM. If set to 0, commands are sent to all JRockit JVM
processes.
[<command> [<arguments>]] is any diagnostic command and its associated arguments.
If instead you want to use these same commands in the context of the JVM thread dump
signal (kill -3 on UNIX, for example), you can edit the ctrlhandler.act file. These
commands include:
• version, which prints the JRockit JVM version
• command_line, which prints the command line that is used to start the JRockit JVM
• print_threads, which prints a normal thread dump
• print_class_summary, which prints all loaded classes
• print_properties, which prints all Java system properties
• heap_diagnostics, which prints memory layout and object allocation statistics
ctrlhandler.act
set_filename filename=/tmp/jvm_output.txt
print_threads
heap_diagnostics
Use diagnostic commands to communicate with a running Oracle JRockit JVM process.
These commands tell the JRockit JVM to, for example, print a heap report or a garbage
collection activity report, or to turn on or off a specific verbose module. You can enable or
disable any diagnostic command by using the -Djrockit.ctrlbreak.enable<name>=
<true|false> system property, where <name> is the name of the diagnostic command.
In addition to the jrcmd tool, you can run diagnostic commands by pressing Ctrl + Break on
Windows or by issuing kill -3 on UNIX. When you issue this signal, the JRockit JVM will search for a file
named ctrlhandler.act in your current working directory. If it does not find the file there, it
will look in the directory containing the JVM. If it does not find this file there either, it will revert
to displaying the normal thread dump. If it finds the file, it will read the file searching for
command entries, each of which invokes the corresponding diagnostic command. You can
disable this functionality by setting this JVM option:
-Djrockit.dontusectrlbreakfile=true
The set_filename command sets the file that all commands following this command will
use for printing. You can have several set_filename commands in a single file. It takes two
arguments: the file name and an optional flag to specify if you want to append to the file or
overwrite it. The defaults are stderr for the file name, and to overwrite the file.
The -Xmanagement option starts the JRockit JVM concurrently with the management server
and allows you to either enable and configure or explicitly disable security features for this
connection, such as SSL encryption and authentication.
The following example overrides the default port using 1234, and disables SSL:
-Xmanagement:port=1234,ssl=false,authenticate=false
Due to the security risks and the mission-critical nature of most JRockit JVM deployments, the
new default behavior of the JRockit JVM requires that you either disable security explicitly or
configure and enable security. If you do not take these steps, the management server will not
open a port for remote access and may cause the JVM startup to halt with an error message
concerning the security configuration.
You can also dynamically enable or disable the management port by using the jrcmd
command-line tool or the Ctrl + Break handler. In either case, use the
start_management_server and kill_management_server commands.
• JRMC:
– Is a graphical monitoring and diagnostic client bundled with
JRockit
– Supports real-time monitoring, historical analysis, and
customized alerts
– Is designed to incur minimal overhead on server JVMs when
compared to similar tools
– Runs on JRockit
– Is also available as a plug-in to Eclipse
• Check Oracle.com for the most recent release.
• As a best practice, run JRMC on a separate machine (not
on the machine running the server JVM).
The Oracle JRockit Mission Control client tools suite includes tools to monitor, manage,
profile, and eliminate memory leaks in your Java application without introducing the
performance overhead normally associated with these types of tools.
To view real-time behavior of your application and of Oracle JRockit JVM, you can connect to
an instance of the JRockit JVM and view real-time information through the JRockit
Management Console. Typical data that you can view includes thread usage, CPU usage,
and memory usage. All graphs are configurable, and you can both add your own attributes
and redefine their respective labels. In the Management Console, you can also create rules
that trigger on certain events. For example, an email is sent if CPU usage reaches 90%.
[Figure: JVM discovery. (1) The JDP server announces the JVM to the JRMC client; (2) the client connects to the JVM's management agent.]
You can use the -Xmanagement option to make the JRockit JVM use the JRockit Discovery
Protocol (JDP) to announce its presence to management clients such as JRockit Mission
Control.
Some additional Java system properties (-D<property>) are also available to tune this
discovery behavior:
• jrockit.managementserver.discovery.period: The time to wait between
multicasting the presence in ms (default is 5000)
• jrockit.managementserver.discovery.ttl: The number of router hops the
packets being multicasted should survive (default is 1)
• jrockit.managementserver.discovery.address: The multicast group/address
to use (default is 232.192.1.212)
• jrockit.managementserver.discovery.targetport: The target port to
broadcast (default is 7091)
[Figure: The JVM Browser showing discovered remote JVMs and local JVMs (including this process), each with its process ID.]
When you launch Oracle JRockit Mission Control, the JVM Browser is automatically opened.
The JVM Browser is a configurable tree view that you can use to manage your Oracle JRockit
JVM connections. It will automatically discover locally running JRockit JVMs as well as
JRockit JVMs on the network that have been started using JDP. Other plug-ins (for example,
the JRA tool) use the JVM Browser to access the connection(s) on which to operate.
Although JRMC should discover your JVMs automatically, you can also explicitly create a
connection to a JVM. Right-click within the JVM Browser under the Connectors folder and
select New Connection. Enter a host, port, and connection name.
To start other JRMC tools such as the Management Console, select a running JVM and either
use the toolbar buttons at the top of the view or right-click the JVM and select an application.
To update user preferences for the JVM Browser or other plug-ins, select Window >
Preferences from the main menu. In this case, you select JRockit Mission Control > Browser
Preferences.
JRockit instrumentation is accessible through JMX managed bean (MBean) interfaces, which
are registered in the JVM’s management server and displayed using diagnostic tools such as
the Management Console in JRockit Mission Control. The extra cost of running the
Management Console against a running JVM is small and can almost be disregarded. This
provides for low-cost monitoring and profiling of your application. Each tab in the
Management Console focuses on a specific set of JVM MBeans, although the MBean
Browser tab lets you browse and monitor the raw data for all available MBeans.
The Runtime > Memory tab allows you to monitor how efficiently your application is using the
memory that is available to it. This tab focuses on heap usage, memory usage, and garbage
collection schemes. The information provided on this tab can greatly assist you in determining
whether you have configured the JRockit JVM to provide optimal application performance.
The Runtime > System tab provides such information as the average processor load over
time and as a percentage of the overall CPU load. It also lists all system properties loaded
with the JVM.
The Advanced > Exception Count tab supports a type of profiling that counts the number of
exceptions of a certain type, providing information that is helpful when you are troubleshooting
your Java application.
[Figure: The Overview tab, showing memory and CPU gauges plus CPU usage and memory usage over time.]
The Overview tab found under the General menu option allows you to monitor processor
behavior and memory statistics during system run time. The Overview tab is helpful in
analyzing a system’s general health because it can reveal behavior that might indicate
bottlenecks or other sources of poor system performance.
Graphs show the performance of one or more MBean attributes over time. You can add or
remove attributes to any graph in the Management Console. The values are plotted on the y
axis (vertical axis) and are usually specified in percentages or raw numbers. The time element
is plotted on the x axis (horizontal axis). Time is displayed in increments of seconds, minutes,
or hours. Each graph has a legend that uses a color patch to identify the attribute being
plotted.
View a thread’s stack trace.
The Threads tab found under the Runtime menu option allows you to monitor thread activity
for a running application. This tab contains both a graph that plots thread usage by an
application over time and a sortable list of all live threads used by the application. It also
displays thread stack traces.
Some Management Console tabs display data by using tables. You can add and remove
elements in a Management Console table, and you can also resize the widths of the elements
and the way they are sorted when viewing.
[Figure: Creating a trigger: select a JVM or WLS MBean, set conditions, select an action, and view triggered alert actions.]
The Triggers tab found under the MBeans menu option lists all trigger rules that you have
created for a JVM and allows you to activate or deactivate any trigger using the supplied
check boxes. These rules, when violated, launch a user-defined notification that advises
users of the violating condition. Create new triggers by using the Add button. Similarly, you
can Edit or Delete existing trigger definitions.
When adding a new trigger, you must specify the MBean attribute on which it should be
based. Browse the available MBeans and select an attribute. Then set the following:
• Max Trigger Value: The maximum value at which point the rule should be triggered
• Sustained: The duration, in seconds, that the triggering condition must be met for the
action to trigger
• Limit Period: The minimum time, in seconds, between consecutive triggers
• Trigger When: Whether you want the action to be triggered when the attribute reaches
the set value and/or when it drops below the set value
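The Sustained and Limit Period settings can be modeled as follows. This hypothetical sketch shows how the two settings interact when a rule is evaluated against timed samples of an MBean attribute; it is an illustration of the semantics, not JRMC's implementation:

```python
def evaluate_trigger(samples, max_value, sustained=0, limit_period=0):
    """Fire when the value exceeds max_value continuously for `sustained`
    seconds, at most once per `limit_period` seconds.
    samples: list of (time_secs, value) pairs in chronological order."""
    fired = []
    breach_start = None   # when the current continuous breach began
    last_fired = None     # when the trigger last fired
    for t, v in samples:
        if v > max_value:
            if breach_start is None:
                breach_start = t
            if (t - breach_start >= sustained and
                    (last_fired is None or t - last_fired >= limit_period)):
                fired.append(t)
                last_fired = t
        else:
            breach_start = None  # breach ended; reset the sustained clock
    return fired
```

With sustained=2 and limit_period=10, a brief one-second spike never fires, and repeated breaches within ten seconds of a firing are suppressed.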
The Application Alert action type simply makes the notification available in the JRMC tool
itself. The Console Output type prints the notification to standard output. The Send E-mail
action sends an email to the specified email server and address. The Thread Stack Dump
action captures a JVM stack dump and delivers it as a JRMC alert or writes it to a log file.
The JRockit Flight Recorder (JFR) is a performance monitoring and profiling tool that makes
diagnostic information available, even following catastrophic failure, such as a system failure.
The Flight Recorder is a rotating buffer of diagnostic and profiling data that is available on-
demand. Like its aviation namesake, the Flight Recorder provides an account of events
leading up to a system (or application) failure. Whenever a Java application running on Oracle
JRockit is started with the Flight Recorder enabled, data is constantly collected in the Flight
Recorder or its buffers. In the event of a system failure, the content of this recording can be
transferred to a file and used to analyze what went wrong.
Data displayed by the Flight Recorder GUI is collected by the Flight Recorder Run Time
component, which is part of the Oracle JRockit JVM. Flight recordings are displayed in the
Flight Recorder GUI, which is part of the JRockit Mission Control Client.
To automatically create a flight recording on a failure (unhandled exception), use the following
option when you start the JRockit JVM:
-XX:FlightRecordingDumpOnUnhandledException
The volume of Flight Recorder data that is captured can be configured from the WebLogic
Server Administration Console, which allows you to specify the following settings:
• Off: No data is captured in the Flight Recorder diagnostic image. This is currently the
default.
• Low: Basic information is captured when messages with the “emergency,” “alert,” or
“critical” levels are recorded.
• Medium: Additional information is captured when messages with the “error” level and
above are recorded.
• High: In-depth information is captured when messages with the “error” level and above
are recorded.
There are standard templates that come with the Flight Recorder:
• Default: This template includes most available information. It omits some low-level JVM
data, some costly memory system data, and exception stack traces.
• Real Time: This template incurs the least possible overhead and therefore collects
limited data. It includes the most important garbage collection and memory system
information; however, some tabs do not contain any information.
• Full: This template records every JVM-level event and thus provides the most data.
However, it also incurs the most overhead. Note that other events use default
settings from the producer.
To view the flight recording, select File > Open File and then
browse to the recording file.
• General: General information, such as CPU and heap usage
• Memory: Memory information (such as memory consumption over time and garbage
collection statistics), along with memory allocation and object statistics
• Code: Code information, such as “hot” packages and methods, exceptions by class, and
code-generation statistics
• CPU/Threads: CPU and thread information, such as CPU usage and thread count, lock
contention and lock profiling, and hot threads and thread latency
• Events: Event information, such as event producers and types, events by thread, event
logging and graphing, event stack traces, and event histograms
[Figure: the Code tab highlights the Java packages requiring the most CPU time.]
From the Trend tab, you start the analysis of your applications. The object types with the
highest growth in bytes/sec are marked red (darkest) in the Trend Analysis table and listed at
the top of the table. For each update, the list can change and the type that was the highest
can move down the list. In addition, the table displays:
• The percentage of the Java heap occupied by the type of object
• The number of instances of the type that currently exist
• The size (in MB or KB) that the type consumes
The trend analysis should start automatically. If not, click the Start button above and to the
right of the table to start the trend analysis. Click the Pause button to freeze the updating of
trend data so that you can start to analyze the application. If you want to view more data from
the same analysis run, click the Play button again and the Memory Leak Detector resumes
displaying samples. If you instead use the Stop button and then later Start again, the current
trend data is reset.
When you find a suspected memory leak, you investigate the suspected leak further on the
Type Graph or Type Tree tab. Before anything is displayed on the tab, you need to select a
type from the Trend tab, right-click it, and then select the “Add to Type Graph” option.
The Type Graph tab offers a view of the relationships between all the types pointing to the
type that you are investigating. For each type you also see a number, which is the number of
instances that point to that type. Once again, darker colors refer to types that appear to have
a high growth rate in terms of their memory consumption. Several buttons are available on the
toolbar to help you navigate the diagram: Zoom in, Zoom out, Reset zoom level, and so on.
Right-click a type and select List All Instances to display them in the Instances window. To
further investigate a specific instance of the selected type, right-click an instance in the
Instances view and then select Inspect Instance. The instance is displayed in the Inspector
view. When inspecting an instance, you will see all fields (variables) that the object contains.
This information will help you pinpoint where the leaking object is located in your application.
To add a specific instance to the Instance Graph or Instance Tree views, right-click an
instance and select “Add to Instance Graph.”
Answer: c
Answer: a
Name three tools included with either the JRockit SDK or Sun
JDK.
a. jstat
b. jrcmd
c. jopt
d. joracle
e. jstack
Answer: a, b, e
Answer: c
When an error occurs within a method, the method creates an object and hands it off to the
runtime system. The object, called an exception object, contains information about the error,
including its type and the state of the program when the error occurred. Creating an exception
object and handing it to the runtime system is called throwing an exception.
The runtime system searches the call stack for a method that contains a block of code that can
handle the exception. This block of code is called an exception handler. The search begins with
the method in which the error occurred and proceeds through the call stack in the reverse order
in which the methods were called. The exception handler chosen is said to “catch the
exception.”
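The throw/catch mechanics described above can be sketched as follows (the class and method names are illustrative, not taken from WLS):

```java
public class CatchDemo {
    // level2 creates an exception object and "throws" it to the runtime.
    static void level2() {
        throw new IllegalStateException("resource unavailable");
    }

    // level1 has no handler, so the exception propagates up the call stack.
    static void level1() {
        level2();
    }

    // run() contains the handler; the runtime's search up the stack ends here.
    static String run() {
        try {
            level1();
        } catch (IllegalStateException e) {  // this handler "catches the exception"
            return "caught: " + e.getMessage();
        }
        return "no exception";
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Because the runtime searches the stack in reverse call order, the handler in run() is found even though the error originated two frames deeper.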
The first kind of exception is the checked exception. These are exceptional conditions that a
well-written application should anticipate and recover from (for example, a File Not Found
exception). The second kind of exception is the error. These are exceptional conditions that are
external to the application and that the application usually cannot anticipate or recover from.
For example, suppose that an application successfully opens a file for input but is unable to
read the file because of a hardware or system malfunction. An application might choose to
catch this exception, in order to notify the user of the problem, but it also might make sense for
the program to print a stack trace and exit.
An application often responds to an exception by throwing another exception. In effect, the first
exception causes the second exception. It can be very helpful to know when one exception
causes another. Chained exceptions in Java help the programmer do this.
When the JVM prints an exception that contains one or more nested exceptions, the stack
traces for each exception are printed to facilitate easier tracing and debugging. The full depth of
the exception chain is included. The exact exception-handling text that a class generates is not
standardized, but most WLS subsystems and third-party frameworks will include phrases such
as “nested exception is” or “caused by” to indicate an exception chain.
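A minimal sketch of exception chaining (the class names are hypothetical; the wrapped IOException stands in for any low-level failure):

```java
public class ChainDemo {
    // Wrap a low-level failure in a higher-level exception, preserving
    // the original as the cause ("nested exception" / "Caused by:").
    static void readConfig() {
        try {
            throw new java.io.IOException("disk read failed");
        } catch (java.io.IOException ioe) {
            throw new RuntimeException("could not load configuration", ioe);
        }
    }

    public static void main(String[] args) {
        try {
            readConfig();
        } catch (RuntimeException e) {
            // printStackTrace() prints both traces; the nested one is
            // prefixed with "Caused by:", as described above.
            e.printStackTrace();
            System.out.println("cause: " + e.getCause().getMessage());
        }
    }
}
```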
java.lang.NoClassDefFoundError:
org/apache/activemq/ActiveMQConnectionFactory
java.lang.ClassNotFoundException:
oracle.jdbc.driver.OracleDriver
NoClassDefFoundError is thrown if the Java Virtual Machine or a class loader instance tries
to load in the definition of a class (as part of a normal method call or as part of creating a new
instance using the new expression) and no definition of the class can be found. The class
definition existed when the currently executing class was compiled, but the definition can no
longer be found during program execution. This error is a type of linkage error, meaning that
although a class has some dependency on another class, the latter class has incompatibly
changed after the compilation of the former class.
ClassNotFoundException is thrown when an application tries to explicitly load in a class by
name but no definition for the class with the specified name could be found. Typical examples
include:
• forName method in class Class
• findSystemClass method in class ClassLoader
• loadClass method in class ClassLoader
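The forName case can be demonstrated with a short sketch (the driver class name is taken from the example above and is assumed to be absent from the classpath):

```java
public class CnfeDemo {
    // Attempt to load a class by name at run time.
    static String tryLoad(String className) {
        try {
            Class<?> c = Class.forName(className);
            return "loaded " + c.getName();
        } catch (ClassNotFoundException e) {
            return "not found: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryLoad("java.lang.String"));                // present in every JVM
        System.out.println(tryLoad("oracle.jdbc.driver.OracleDriver")); // likely missing
    }
}
```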
java.lang.ClassCastException: org.dom4j.DocumentFactory
at org.dom4j.DocumentFactory.getInstance ...
Every expression written in the Java programming language has a type that can be deduced
from the structure of the expression and the types of the literals, variables, and methods
mentioned in the expression. It is possible, however, to write an expression in a context where
the type of the expression is not appropriate. In some cases, for simple object types, the
language performs an implicit conversion from the type of the expression to a type acceptable
for its surrounding context. However, in most cases casting is required to direct the Java
compiler to treat a variable of a given type as another. It can be done with both primitive types
as well as user-defined types.
The Java language specification defines programming scenarios in which casting is
permissible. For example, an object can be cast “upward” to a more generic type, such as
casting from a Manager type to an Employee type. Conversely, an object can also be cast
“downward” to a more specific subclass. Upward casts are validated at compile time, but
downward casts are verified at run time. If an incompatibility is detected during this runtime
validation, the JVM raises a ClassCastException. It indicates that the code has attempted
to cast an object to a subclass of which it is not an instance.
Java does provide an instanceof operator to allow programmers to check at run time
whether an object belongs to a specific class, or implements a specific interface. However, it is
generally considered a poor practice to use this operator in lieu of well-designed, well-tested
code.
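These rules can be illustrated with the Manager/Employee types mentioned above (a minimal sketch, not WLS code):

```java
public class CastDemo {
    static class Employee { }
    static class Manager extends Employee { }

    static String describe(Employee e) {
        // Guard the downcast with instanceof to avoid a ClassCastException.
        if (e instanceof Manager) {
            Manager m = (Manager) e;   // safe downcast, verified at run time
            return "manager";
        }
        return "employee";
    }

    public static void main(String[] args) {
        Employee plain = new Employee();
        Employee boss  = new Manager();   // implicit upcast, checked at compile time
        System.out.println(describe(boss));
        System.out.println(describe(plain));
        try {
            Manager m = (Manager) plain;  // compiles, but fails at run time
        } catch (ClassCastException cce) {
            System.out.println("ClassCastException: not a Manager");
        }
    }
}
```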
CLASSPATH=${WL_HOME}/server/lib/weblogic_sp.jar:${WL_HOME}/ser
ver/lib/weblogic.jar:...
...
java ... weblogic.Server
In Java, the classpath is the list of directories that the virtual machine uses to search for
dependent classes in a program. You can set this classpath using an environment variable or
as a command-line parameter when you execute the virtual machine.
After installation, WebLogic Server’s classpath is already set, but you may choose to modify it
for a number of reasons, such as adding a patch to WebLogic Server, updating the version of
the database driver you are using, or adding support for Log4J logging.
The shell environment in which you run a server determines which character you use to
separate elements in the classpath. On Windows, you typically use a semicolon (;). In a BASH
shell, you typically use a colon (:).
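When building classpath strings programmatically, Java's File.pathSeparator selects the correct separator for the current platform (a minimal sketch; the JAR names are illustrative):

```java
import java.io.File;

public class PathSepDemo {
    // Join classpath entries portably: File.pathSeparator is ";" on
    // Windows and ":" on UNIX-style shells such as bash.
    static String buildClasspath(String... entries) {
        return String.join(File.pathSeparator, entries);
    }

    public static void main(String[] args) {
        System.out.println("separator: " + File.pathSeparator);
        System.out.println(buildClasspath("lib/weblogic.jar", "lib/log4j.jar"));
    }
}
```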
[Figure: Script relationships within a product installation. Each domain's startWebLogic
script calls that domain's setDomainEnv script (domain-specific classpath additions), which
in turn calls the shared commEnv script (standard WLS classpath for all domains). commEnv
uses setPatchEnv (patches to add to the classpath for all domains). setWLSEnv also builds
on commEnv and is used by command-line tools.]
To start a server within a domain, you can use the generated startWebLogic script or develop
your own custom scripts. The default startWebLogic script executes your domain’s
setDomainEnv script. This script in turn calls a script named commEnv, which is included with
your product installation. The commEnv script uses the setPatchEnv script, which is
responsible for initializing variables that point to your currently installed patches. Another script
that makes use of commEnv is setWLSEnv, which is not directly used to start servers. Instead,
it provides a convenient way of initializing your environment to support WebLogic developer
and administrator tools, including Ant and WLST.
Some patches directly replace existing resources in your product installation and therefore
become effective automatically as soon as they are applied. They are not enabled through a
reference in a server start script, such as a classpath. Examples of patches that typically
contain replacement artifacts include resources for Web server plug-ins, native socket
multiplexers, and native dynamically linked libraries.
Other patches include Java class or library files that are loaded by server start scripts. These
patch files are written to a special location within your Oracle root installation folder,
<MIDDLEWARE_HOME>. These folders typically start with the name patch_.
Recall that the convenience script startManagedWebLogic simply calls the standard
startWebLogic script and initializes some basic WebLogic system properties
(-D<name>=<value>), specifically the name of the managed server and the location of the
Administration Server.
The jinfo command-line utility gets configuration information from a running Java process or
crash dump and prints the system properties or the command-line flags that were used to start
the virtual machine. If you start the target Java VM with the -classpath and
-Xbootclasspath arguments, the output from jinfo provides the settings for
java.class.path and sun.boot.class.path. This information might be needed when
investigating class loader issues.
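The same properties can also be read from inside a running JVM, which is a quick sanity check when jinfo is not available (a minimal sketch; sun.boot.class.path is reported only by older JVMs such as JRockit):

```java
public class ClasspathInfo {
    public static void main(String[] args) {
        // java.class.path always reflects the effective -classpath setting.
        System.out.println("java.class.path = "
                + System.getProperty("java.class.path"));
        // sun.boot.class.path exists only on older JVMs; expect null on
        // Java 9 and later.
        System.out.println("sun.boot.class.path = "
                + System.getProperty("sun.boot.class.path"));
    }
}
```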
[Figure: How patch JARs reach the classpath. commEnv adds weblogic.jar and xmlx.jar,
whose MANIFEST entries pull additional JARs onto the classpath. setPatchEnv adds
weblogic_patch.jar, whose MANIFEST lists the individual patch JARs, such as patchABC.jar
and patchDEF.jar.]
The commEnv script explicitly adds the basic WLS libraries to the classpath, including
weblogic.jar. Some of these JAR files include manifest files that indirectly append
additional paths and JAR files to the system classpath.
The commEnv script calls the setPatchEnv script, which includes default definitions for the
environment variables that specify the locations of patch JAR files. By default, these patch
variables are in effect for every WebLogic Server instance that is started by using commEnv.
The setPatchEnv script variable PATCH_CLASSPATH defines a single JAR file named
weblogic_patch.jar. This file contains no Java classes but includes a manifest file. The
manifest file lists all of the specific patch JAR files that are to be dynamically included in the
server classpath. As new patches are downloaded and installed, this manifest file is updated.
With this approach, scripts such as commEnv, setDomainEnv, and startWebLogic need not
be modified.
Alternatively, you can customize a domain’s setDomainEnv script to contain its own definition
for the PATCH_CLASSPATH variable. In this scenario, the definition of PATCH_CLASSPATH in
the commEnv script is overridden for those server instances. It is important that, if you add a
patch path variable definition to a start script, you place the definition before the statement that
invokes another start script.
WebLogic Server includes a lib subdirectory, located in the domain directory, that you can
use to add one or more JAR files to the WebLogic Server system classpath when servers start
up. The lib subdirectory is intended for JAR files that change infrequently and are required by
all or most applications deployed in the server, or by WebLogic Server itself. For example, you
might use the lib directory to store third-party utility classes that are required by all
deployments in a domain. You can also use it to apply patches to WebLogic Server.
The lib directory is not recommended as a general-purpose method for sharing JARs
between one or two applications deployed in a domain, or for sharing JARs that need to be
updated periodically. If you update a JAR in the lib directory, you must reboot all servers in
the domain for applications to realize the change.
[Figure: A class loader hierarchy. Class Loader A loads lib1.jar and lib2.jar; its children,
Class Loader B and Class Loader C, each load app1.jar.]
Class loaders are a fundamental component of the Java platform. A class loader is the part of
the Java virtual machine (JVM) that loads classes into memory; it is responsible for finding
and loading class files at run time.
WebLogic Server allows you to deploy newer versions of application modules such as EJBs
while the server is running. This process is known as hot-deploy or hot-redeploy and is closely
related to class loading. Java class loaders do not have any standard mechanism to undeploy
or unload a set of classes, nor can they load new versions of classes. To make updates to
classes in a running virtual machine, the class loader that loaded the changed classes must be
replaced with a new class loader. When a class loader is replaced, all classes that were loaded
from that class loader (or any class loaders that are offspring of that class loader) must be
reloaded. Any instances of these classes must be re-instantiated.
In WebLogic Server, each application has a hierarchy of class loaders that are offspring of the
system class loader. These hierarchies allow applications or parts of applications to be
individually reloaded without affecting the rest of the system.
The class loader is responsible for locating libraries, reading their contents, and loading the
classes contained within the libraries. This loading is typically done “on demand.” That is, it
does not occur until the class is actually referenced by the code.
A thread of execution has a “current” or “local” class loader. When a thread needs a class, it
asks its current class loader to produce or find it. The class loader does the following:
1. It checks its cache to see whether it has already loaded the class. If so, the class is
returned.
2. If the class is not cached, it asks its parent class loader whether it has (or can find) the
class.
3. If the parent does not return the class, the class loader checks its own search path. If the
class still cannot be found, it throws a ClassNotFoundException.
This same process repeats up the chain of class loaders. So, if no class loader has the
class in question already cached, the bootstrap (top of the hierarchy) class loader gets the first
chance to search its path for the class and actually load it.
This delegation model is followed to prevent multiple copies of the same class from being
loaded. Multiple copies of the same class can lead to a ClassCastException.
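The hierarchy can be observed directly from Java code (a minimal sketch; the exact output varies by JVM version):

```java
public class LoaderDemo {
    public static void main(String[] args) {
        // Walk the parent chain for this application's class loader.
        ClassLoader cl = LoaderDemo.class.getClassLoader();
        while (cl != null) {
            System.out.println(cl);
            cl = cl.getParent();   // null represents the bootstrap loader
        }
        // Core classes come from the bootstrap loader, reported as null.
        System.out.println("String loader: " + String.class.getClassLoader());
    }
}
```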
[Figure: Delegation within a WebLogic application. Each application has its own class loader,
and a single class loader serves all EJB modules in an EAR. In the example, AccessRight.class
is found in a parent loader's cache rather than being loaded again by Class Loader C.]
The bootstrap class loader is the root of the Java class loader hierarchy. The Java virtual
machine (JVM) creates the bootstrap class loader, which loads the Java development kit (JDK)
internal classes and java.* packages included in the JVM (for example, the bootstrap class
loader loads java.lang.String). The extensions class loader (not depicted) is a child of the
bootstrap class loader. The extensions class loader loads any JAR files placed in the
extensions directory of the JDK. This is a convenient means of extending the JDK without
adding entries to the classpath.
The system class loader extends the JDK bootstrap and extensions class loaders. The system
class loader loads the classes from the classpath of the JVM. Application-specific class
loaders, such as those created by WebLogic Server, are children of the system class loader.
WLS class loading is centered on the concept of an application. An application is normally
packaged in an enterprise archive (EAR) file containing application classes. Everything within
an EAR file is considered part of the same application. WLS automatically creates a hierarchy
of class loaders when an application is deployed. The root class loader in this hierarchy loads
any EJB JAR files in the application. A child class loader is created for each Web application
WAR file. Because it is common for Web applications to call EJBs, the WLS application class
loader architecture allows Web components to see the EJB interfaces in their parent class
loader. This architecture also allows Web applications to be redeployed without redeploying the
EJB tier.
[Figure: WAR and EJB JAR directory structures.]
The root of a WAR hierarchy defines the document root of your Web application.
All files under this root directory can be served to the client over HTTP, except for files under
WEB-INF. All files under WEB-INF are private and are not served to a client, including XML
deployment descriptors. Individual packages and class files are placed within
WEB-INF/classes, while those that have been archived into JAR files are placed under
WEB-INF/lib.
EJB applications and modules have a very simple structure. Place packages and class files
within the root of the archive, as with any standard JAR file.
WebLogic Server provides a location within an EAR file where you can store shared utility
classes. Place utility JAR files in the APP-INF/lib directory and individual classes in the
APP-INF/classes directory. (Do not place JAR files in the /classes directory or classes in
the /lib directory.) These classes are loaded into the root class loader for the application.
This feature eliminates the need to place utility classes in the system classpath or place
classes in an EJB JAR file. Alternatively, utilize the Class-Path entry within a JAR file’s
manifest. Remember that the location specified in the manifest is relative to the path of the
containing JAR file.
...
<container-descriptor>
...
<prefer-web-inf-classes>true</prefer-web-inf-classes>
</container-descriptor>
...
<prefer-application-packages>
<package-name>org.apache.log4j.*</package-name>
<package-name>antlr.*</package-name>
</prefer-application-packages>
In WebLogic Server, any JAR file present in the system classpath is loaded by the WebLogic
Server system class loader. All applications running within a server instance are loaded in
application class loaders that are children of the system class loader. In this implementation of
the system class loader, applications cannot use different versions of third-party JARs that are
already present in the system class loader. Every child class loader asks the parent (the
system class loader) for a particular class and cannot load classes that are seen by the parent.
The WLS Filtering Class Loader provides a mechanism for you to explicitly specify that certain
packages should always be loaded from the application, rather than being loaded by the
system class loader. Although this feature lets you bundle and use third-party JARs in your
application, it is not recommended that you filter out standard classes in the javax.*
packages or weblogic.* packages. To configure the Filtering Class Loader, add a
<prefer-application-packages> descriptor element to the enterprise application’s
weblogic-application.xml file. This element specifies the list of packages to be loaded
from the application.
The same deployment descriptor also supports a <classloader-structure> element,
which allows you to define your own custom hierarchy of class loaders for each of the modules
within the enterprise application. This feature has many consequences and limitations, and is
therefore recommended only for advanced WebLogic users.
A stand-alone client is a client that has a JVM runtime environment independent of WebLogic
Server. Stand-alone clients that access WebLogic Server applications range from simple
command-line utilities that use standard I/O to highly interactive GUI applications built using the
Java Swing/AWT classes.
WLS includes both thin and full client JAR files. Although the WebLogic full client requires the
largest JAR file among the various clients, it supports the most features (clustering, JMS, store
and forward, and so on) and is the best overall performer.
When you run the appc compiler, a JAR file with the classes required to access the EJB is
generated. Make the client JAR available to the remote client’s classpath.
Although infrequent, when you generate classes with the appc compiler, you may encounter a
generated class name collision which could result in a ClassCastException and other
undesirable behavior. This is because the names of the generated classes are based on three
keys: the bean class name, the bean class package, and the configured name for the bean.
This problem occurs when you use an EAR file that contains multiple JAR files, at least two of
which contain an EJB with the same bean class and package, and both of those EJBs are
configured with the same name in their respective JAR files. If you experience this problem,
change the name of one of the beans to make it unique.
java.lang.NullPointerException
at org.apache.struts.tiles.definition.
ComponentDefinitionsFactoryWrapper.getDefinition ...
java.lang.StackOverflowError
at weblogic.servlet.internal.RequestDispatcherImpl.forward
at org.apache.solr.servlet.SolrDispatchFilter.doFilter ...
commEnv script excerpt:
...
ulimit -n 8192
...
Edit OS configuration files or WLS scripts to adjust the file descriptor limit.
Answer: d
Answer: b, d, e
• Server Diagnostics
– Startup Errors
– Native Libraries
– Thread States
– Work Managers
– Deadlocks
– Overload Protection
• Application Diagnostics
For a detailed description of the log messages in the Oracle WebLogic Server message
catalogs, see “Oracle WebLogic Server Message Catalog” in the online documentation. There
is a link to it on the main Oracle WebLogic Server Documentation Library web page. The
index of messages describes all the messages generated by the Oracle WebLogic Server
subsystems and provides a detailed description of the error, a possible cause, and a
recommended action to avoid or fix the error.
If you use the Configuration Wizard to create a domain in development mode, the
Configuration Wizard creates an encrypted boot identity file in the security directory of the
Administration Server’s root directory. In production mode, you must create this file manually.
Start the Administration Server at least once and provide the user credentials on the
command line. During the Administration Server’s initial startup process, it generates security
files that must be in place before a server can use a boot identity file. If you save the file as
boot.properties and locate it in the security directory of the server’s root directory, the
server automatically uses this file during subsequent startup cycles. The first time you use this
file to start a server, the server reads the file and then overwrites it with an encrypted version
of the username and password.
If you want to specify a different file (or if you do not want to store boot identity files in a
server’s security directory), you can include the following argument in the server’s startup
command: -Dweblogic.system.BootIdentityFile=<filename>.
If a managed server uses the same root directory as the Administration Server, it can use the
same boot properties file as the Administration Server, or it can use its own dedicated file. If
you use a Node Manager to start a managed server, you do not need to create a server boot
identity file. Instead, you need to specify the boot credentials in the Node Manager’s
configuration.
By default, WebLogic utilizes native threads to read incoming requests from sockets, if the
appropriate native libraries can be located during server startup. These special threads are
also known as muxers. Most platforms provide some mechanism to poll a socket
for data. For example, UNIX systems use the poll system call and the Windows architecture
uses completion ports. Native muxers provide superior scalability because they implement a
nonblocking thread model, unlike standard Java threads. When a native muxer is used, the
server creates a fixed number of threads dedicated to reading incoming requests.
When nonnative I/O is used, I/O is performed in a blocking manner. Therefore, if the number
of socket reader threads is less than the number of active sockets, performance may degrade
significantly. It is recommended that you change your domain settings to improve the
performance. For example, make sure the native libraries for your target environment are
available to the server. Similarly, it is also possible to explicitly disable native I/O by using
server command-line parameters or server settings in the configuration repository
(Configuration > Tuning tab).
Refer to the Product Certification section of the online documentation to determine whether a
WebLogic performance pack is available and supported for your target platform, architecture,
and JVM.
Script excerpt for Linux:

...
if [ ${arch} = "ia64" ]; then
  LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${WL_HOME}/server/native/linux/ia64 ...
else
  LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${WL_HOME}/server/native/linux/i686 ...
A server can hang for a variety of reasons. Generally, servers hang ultimately because of a
lack of some resource, which in turn prevents the server from servicing requests. For
example, due to the request volume or a problem like a deadlock, there may be no more
execute threads available to do any more work. All available resources are busy processing
previous requests.
When the server appears to hang, first try pinging the server. If the server can respond to the
ping, it may be that the application is hanging and not the server itself. Then make sure that
the server is not simply performing too much garbage collection. For example, restart the
server with -verbosegc turned on, and redirect stdout and stderr to one file.
Furthermore, if garbage collection is taking too long (more than 10 seconds), the server may
miss heartbeat messages that servers use to keep each other up to date on the topology of
the cluster.
WebLogic Server uses a single dynamic thread pool, in which all types of work are executed.
WebLogic Server prioritizes work based on rules that you define, and runtime metrics,
including the actual time it takes to execute a request and the rate at which requests are
entering and leaving the pool.
The common thread pool changes its size automatically to maximize throughput. The
incoming request queue monitors throughput over time and, based on history, determines
whether to adjust the thread count. For example, if historical throughput statistics indicate that
a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if
statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the
thread count. This strategy makes it easier for administrators to allocate processing resources
and manage performance.
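The throughput-based adjustment can be approximated by a simple hill-climbing sketch. This illustrates the idea only and is not WebLogic's actual algorithm; the 400-thread ceiling matches the default maximum described below.

```java
public class SelfTuningSketch {
    private int threadCount = 8;        // assumed starting pool size
    private double lastThroughput = 0.0;
    private int direction = +1;         // currently growing or shrinking the pool

    // Called periodically with the observed requests/sec since the last sample.
    int adjust(double throughput) {
        if (throughput < lastThroughput) {
            direction = -direction;     // the last change hurt throughput; reverse it
        }
        // Step the pool size in the current direction, within [1, 400].
        threadCount = Math.max(1, Math.min(400, threadCount + direction));
        lastThroughput = throughput;
        return threadCount;
    }
}
```

Each sample either continues in the direction that improved throughput or reverses course, which is the essence of history-based self-tuning.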
The default work manager’s implicit maximum thread pool size is 400, although, due to the
context-switching overhead associated with threads, it is highly unlikely that the thread pool
will ever approach this default constraint.
When a performance pack is enabled, WebLogic also uses a static pool of native socket
reader or muxer threads. These threads are responsible for receiving new requests from the
system networking layer and for handling the socket life cycle. Under healthy server
conditions, there should always be three socket reader threads.
WebLogic's self-tuning thread pool strategy also avoids the effort and complexity involved in
configuring, monitoring, and tuning custom execute queues.
Idle threads do not include standby threads or stuck threads. This state indicates that a thread
is ready to pick up new work when it arrives.
Hogging threads appear to be processing a request for an unusual amount of time, based on
historical trends. The state of these threads may eventually be changed from active to stuck,
based on your current overload protection settings.
Threads that are not needed to handle the present work load are designated as standby and
added to the standby pool. These threads are activated when more threads are needed.
WebLogic Server enables you to configure how your application prioritizes the execution of its
work. Based on rules that you define and by monitoring actual runtime performance,
WebLogic Server can optimize the performance of your application and maintain service-level
agreements (SLAs). You tune the thread utilization of a server instance by defining rules and
constraints for your application in a work manager and applying it either globally to the
WebLogic Server domain or to a specific application component. Each distinct SLA
requirement needs a unique work manager.
To handle thread management and perform self-tuning, WebLogic Server implements a
default work manager. This work manager is used by an application when no other work
managers are specified in the application’s deployment descriptors. In many situations, the
default work manager may be sufficient for most application requirements. WebLogic Server’s
thread-handling algorithms assign to each application its own fair share by default.
Applications are given equal priority for threads and are prevented from monopolizing them.
You can override the behavior of the default work manager by creating and configuring a
global work manager called “default.” This allows you to control the default thread-handling
behavior of WebLogic Server.
[Figure: A work manager is composed of a request class and one or more constraints (Constraint A, Constraint B) and is applied to applications.]
A request class expresses a scheduling guideline that WebLogic Server uses to allocate
threads to requests. Request classes help ensure that high-priority work is scheduled before
less important work, even if the high-priority work is submitted after the lower-priority work.
WebLogic Server takes into account how long it takes for requests to each module to
complete.
There are multiple types of request classes, each of which expresses a scheduling guideline
in different terms. A work manager may specify only one request class. A fair share request
class specifies the average thread-use time required to process requests. For example,
suppose that WebLogic Server is running two modules. The work manager for Module1
specifies a fair share of “80” and the work manager for Module2 specifies a fair share of “20.”
During a period of sufficient demand, with a steady stream of requests for each module so
that the number of requests exceeds the number of threads, WebLogic Server allocates 80%
and 20% of the thread-usage time, respectively, to Module1 and Module2.
A constraint defines minimum and maximum numbers of threads allocated to execute
requests and the total number of requests that can be queued or executing before WebLogic
Server begins rejecting requests.
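In an enterprise application, these settings can be declared for application-scoped work managers in META-INF/weblogic-application.xml. The following is a minimal sketch; the work manager name and all numeric values are illustrative, not recommendations:

```xml
<!-- weblogic-application.xml (fragment): an application-scoped work
     manager with a fair share request class and thread constraints.
     The name "PayrollWM" and all counts are illustrative. -->
<work-manager>
  <name>PayrollWM</name>
  <fair-share-request-class>
    <name>PayrollFairShare</name>
    <fair-share>80</fair-share>
  </fair-share-request-class>
  <min-threads-constraint>
    <name>PayrollMinThreads</name>
    <count>2</count>
  </min-threads-constraint>
  <max-threads-constraint>
    <name>PayrollMaxThreads</name>
    <count>20</count>
  </max-threads-constraint>
  <capacity>
    <name>PayrollCapacity</name>
    <count>100</count>
  </capacity>
</work-manager>
```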
In response to stuck threads, you can define an error-handling policy that shuts down the work
manager, moves the application into admin mode, or marks the entire server instance as
failed.
Using the Administration Console, you can create global work managers that are used to
prioritize thread execution.
To create a global work manager:
1. In the left pane of the Console, expand Environment and select Work Managers. Click
New.
2. Select Work Manager, and click Next.
3. Enter a Name for the new work manager, and click Next.
4. In the Available Targets list, select server instances or clusters on which you will deploy
applications that reference the work manager. Then click Finish.
After you have created a global work manager, you typically create at least one request class
or constraint and assign it to the work manager. Each work manager can contain only one
request class, but you can share request classes among multiple work managers.
To create a global request class:
1. In the left pane of the Console, expand Environment and select Work Managers. Click
New and then select the type of global request class that you want to create. Click Next.
2. For a fair share request class, enter the numeric weight in the Fair Share field. For a
response time request class, enter a time (in milliseconds) in the Response Time Goal
field. When finished, click Next.
3. In the Available Targets list, select server instances or clusters on which you will deploy
applications that reference this request class. Then click Finish.
4. Edit your existing work manager. Select your new request class by using the Request
Class field, and then click Save.
In your deployment descriptors, you reference one of the work managers, request classes, or
constraints by its name.
An enterprise application (EAR) cannot be directly associated with a work manager, although
it can define its own application-scoped work managers. Instead, individual modules within
the enterprise application can reference global or application-scoped work managers.
For web or web-service applications, use the wl-dispatch-policy element to assign the
web application to a configured work manager by identifying the work manager name. This
web application–level parameter can be overridden at the individual servlet or JSP level by
using the per-servlet-dispatch-policy element.
For EJB applications, use the dispatch-policy element to assign individual EJB
components to specific work managers. If no dispatch-policy is specified, or if the
specified dispatch-policy refers to a nonexistent work manager, the server’s default work
manager is used instead.
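As a sketch, the descriptor entries look like the following (the work manager name “PayrollWM” and the EJB name are illustrative):

```xml
<!-- weblogic.xml (fragment): assign the entire web application to a
     work manager; "PayrollWM" is an illustrative name -->
<wl-dispatch-policy>PayrollWM</wl-dispatch-policy>

<!-- weblogic-ejb-jar.xml (fragment): assign an individual EJB to the
     same work manager -->
<weblogic-enterprise-bean>
  <ejb-name>PayrollManager</ejb-name>
  <dispatch-policy>PayrollWM</dispatch-policy>
</weblogic-enterprise-bean>
```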
Select a server and click Monitoring > Threads. The first table provides general information
about the status of the thread pool. The second table provides information about individual
threads. The available columns in the first table include:
• Execute Thread Total Count: The total number of threads in the pool
• Execute Thread Idle Count: The number of idle threads in the pool. This count does
not include standby threads and stuck threads. The count indicates threads that are
ready to pick up new work when it arrives.
• Pending User Request Count: The number of pending user requests in the priority
queue. Although the priority queue contains requests from both internal subsystems and
users, this metric counts only the user requests.
• Hogging Thread Count: The number of threads that are currently being hogged by
requests. These threads will either be declared as stuck after the configured timeout or
will return to the pool before that. The self-tuning mechanism will backfill the pool if
necessary.
• Throughput: The mean number of requests completed per second
To display the current Java stack trace for active threads, click the Dump Thread Stacks
button.
The second table on the server’s Monitoring > Threads tab provides the status and statistics
for individual threads, including:
• Total Requests: The number of requests that have been processed by the thread
• Current Request: A string representation of the request this thread is currently
processing
• Transaction: The XA transaction that the execute thread is currently working on behalf
of
• User: The name associated with this thread
• Idle: Returns the value “true” if the execute thread has no work assigned to it
• Stuck: Returns “true” if the execute thread is being hogged by a request for much more
than the normal execution time as observed by the scheduler automatically. If this
thread is still busy after the stuck thread max time, it is declared as stuck.
# Switch to the server's runtime MBean tree
serverRuntime()
# Self-tuning thread pool runtime MBean
pool = getMBean('/ThreadPoolRuntime/ThreadPoolRuntime')
print 'Total Count: ', pool.getExecuteThreadTotalCount()
print 'Idle Count: ', pool.getExecuteThreadIdleCount()
# Switch to the server's runtime MBean tree
serverRuntime()
# Runtime MBean for the default work manager
wm = getMBean('/WorkManagerRuntimes/weblogic.kernel.Default')
print 'Default WM Pending: ', wm.getPendingRequests()
print 'Default WM Stuck Count: ', wm.getStuckThreadCount()
# Runtime MBean for a custom work manager named HighPriorityWM
wm = getMBean('/WorkManagerRuntimes/HighPriorityWM')
print 'Custom WM Pending: ', wm.getPendingRequests()
print 'Custom WM Stuck Count: ', wm.getStuckThreadCount()
A diagnostic image is a heavyweight artifact meant to serve as a server-level state dump for
the purpose of diagnosing significant failures. It enables you to capture a significant amount of
important data in a structured format and then to provide that data to support personnel for
analysis. Because it is an artifact intended primarily for internal consumption, the image
contents are not documented in detail and are subject to change.
[Figure: Two deadlocked threads: each thread holds one lock (Lock A or Lock B) and is waiting on the lock held by the other.]
An inadvertent deadlock in the application code can cause a server to hang. For example,
consider a situation in which thread1 is waiting for resource1 and is holding a lock on
resource2, while thread2 needs resource2 and is holding the lock on resource1. Neither
thread can progress. No new work can be introduced because the allocated number of
threads quickly becomes exhausted.
Fundamentally, this problem happens because the design and implementation of the
application have introduced the possibility of deadlocks. These types of problems may only
show up under heavy load. Therefore, these applications often pass through QA testing and
become problems in production.
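The classic remedy is to impose a single global ordering on lock acquisition. The following standalone Python sketch (a hypothetical illustration, not WebLogic code) shows the pattern: because both threads take the locks in the same order, the circular wait described above cannot form.

```python
import threading

# Deadlock-prone pattern: one thread takes A then B while another takes
# B then A. The fix sketched here: every thread acquires the locks in
# the same global order (lock_a before lock_b).
lock_a = threading.Lock()
lock_b = threading.Lock()
completed = []

def worker(name):
    with lock_a:          # always take lock_a first...
        with lock_b:      # ...then lock_b, in every thread
            completed.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ('t1', 't2')]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(completed))  # both threads finish: ['t1', 't2']
```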
A special deadlock case can occur in distributed systems. Consider the following: An
application running within a WebLogic instance invokes a service on another remote
WebLogic instance. The remote service then makes a call back to the first server. This sets
up the opportunity for a deadlock on the first server (especially under heavy load). The first
server has an execution thread that is tied up waiting for an inbound response. This inbound
response will require a thread from the same execute pool as the thread that is waiting to
receive the response. If the first server is faster than the remote server, eventually all the
threads in the execute pool will be exhausted by the server making outbound requests, with
fewer threads available for processing inbound responses.
Waiting for a lock owned by another thread:
[ACTIVE] ExecuteThread: '3' ...
waiting for lock ExpenseItem@564290 BLOCKED
com.mycompany.PayrollService.update ...
...
Investigate the thread dump to gain an understanding of what the threads were doing when
the server appeared to hang. If no consistent pattern emerges other than that all the threads
are busy, it is likely that your server does not have enough threads to perform the required
work. If there are not 400 threads (the default maximum size of the self-tuning pool) in use,
check whether the application is using a work
manager with a constraint that is artificially limiting the number of threads available to it.
Applications can sometimes appear to hang, or at least run very slowly, because there is
excessive use of Java object synchronization. The synchronization may be in place for
legitimate reasons but can happen for other less appropriate reasons. An application thread
may incorrectly be sharing a resource (for example, a JDBC connection) with another thread
due to incorrect programming. This often results in issues when the two threads both try to
enter synchronized code segments.
On occasion, developers may use synchronization to work around a multithreading issue.
Ultimately, they would like to solve the underlying issue, but using synchronization may
enable the application to pass testing and even reach production. Unfortunately, this kind of
workaround often results in a slow-running application when placed under production-level
loads.
...
Blocked lock chains
===================
Chain 4:
"ExecuteThread: '1' ... waiting for ExpenseItem@564290 held by:
"ExecuteThread: '2' ... in chain 3
...
Several threads can be tied up in lock chains. Threads A and B form a lock chain if thread A
holds a lock that thread B is trying to take. The JRockit JVM analyzes its threads and
calculates the lock chains. There are three possible types of lock chains: open, deadlocked,
and blocked.
Open lock chains represent a straight dependency—thread A is waiting for B, which is waiting
for C, and so on. If you have long open lock chains, your application may have poor
performance, but the stability of the system should not be affected. In these cases, you may want
to reconsider how locks are used for synchronization purposes in your application.
A deadlocked (or circular) lock chain consists of a chain of threads in which the first thread in
the chain is waiting for the last thread in the chain. In the simplest case, thread A is waiting for
thread B, while thread B is waiting for thread A. Note that a deadlocked chain has no head. In
thread dumps, the JRockit JVM selects an arbitrary thread to display as the first thread in the
chain. Deadlocks can never be resolved, and the application is stuck waiting indefinitely.
A blocked lock chain is made up of a lock chain whose head thread is also part of another
lock chain, which can be either open or deadlocked. For example, if thread A is waiting for
thread B, thread B is waiting for thread A, and thread C is waiting for thread A, then thread A
and B form a deadlocked lock chain, while thread C and thread A form a blocked lock chain.
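The three chain types can be illustrated with a small wait-for-graph sketch in Python (a hypothetical illustration; this is not how the JRockit JVM actually computes its chains):

```python
# A hypothetical sketch of classifying a lock chain:
# waits[t] names the thread that t is currently waiting for.
def classify(waits, start):
    seen, t = [], start
    while t in waits and t not in seen:
        seen.append(t)
        t = waits[t]
    if t in seen:
        # The walk revisited a thread, so the chain ends in a cycle.
        return 'deadlocked' if t == start else 'blocked'
    return 'open'  # the chain ends at a thread that is not waiting

waits = {'A': 'B', 'B': 'A', 'C': 'A'}
print(classify(waits, 'A'))                  # deadlocked: A and B wait on each other
print(classify(waits, 'C'))                  # blocked: C's chain ends in the A-B deadlock
print(classify({'A': 'B', 'B': 'C'}, 'A'))   # open: a straight dependency
```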
WebLogic Server diagnoses a thread as stuck if it is continually working (not idle) for a set
period of time. You can tune a server’s thread detection behavior by changing the length of
time before a thread is diagnosed as stuck, and by changing the frequency with which the
server checks for stuck threads.
If all application threads (or a configured percentage of them) are stuck, a server instance
marks itself Failed and, if configured to do so, exits. You can configure Node Manager or a
third-party high-availability solution to restart the server instance for automatic failure
recovery.
The WorkManagerShutdownTriggerMBean configures the conditions under which an
associated work manager is automatically shut down. The trigger specifies the number of
threads that need to be stuck for a certain amount of time. A shutdown work manager refuses
new work but attempts to complete pending work. There is currently no interface in the
Administration Console for configuring this MBean type for global work managers. For
application-scoped work managers, use the <work-manager-shutdown-trigger>
element, which is a child element of <work-manager>.
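For example, the following fragment shuts down an application-scoped work manager after five threads have been stuck for 600 seconds (the work manager name and both values are illustrative):

```xml
<!-- weblogic-application.xml (fragment) -->
<work-manager>
  <name>PayrollWM</name>
  <work-manager-shutdown-trigger>
    <max-stuck-thread-time>600</max-stuck-thread-time>
    <stuck-thread-count>5</stuck-thread-count>
  </work-manager-shutdown-trigger>
</work-manager>
```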
WebLogic Server has features for detecting, avoiding, and recovering from overload
conditions. WebLogic Server’s overload protection features help prevent the negative
consequences—degraded application performance and stability—that can result from
continuing to accept requests when the system capacity is reached.
You can define a maximum size of the execute queue used to accept requests. Beyond this
value, the server refuses all web application requests and lower-priority EJB requests. If the
overload condition persists, higher-priority requests will start getting rejected as
well, with the exception of JMS and transaction-related requests, for which overload
management is provided by the JMS and the transaction manager.
Administration network channels are an exception, however. Administration channels allow
server access only to administrative users. To ensure that overload conditions do not prevent
administrator access to the system, the limit you set on the execute queue length does not
affect administration channel requests.
A work manager can also specify the maximum requests of a particular request class that can
be queued. When both global and work manager maximum values are set, the limit that is
reached first is honored.
[Figure: Console screenshot of a thread diagnosed as stuck after 10 minutes.]
• Server Diagnostics
• Application Diagnostics
– Deployment Errors
– Shared Library Errors
– Error Pages
– Application Monitoring
While deploying an application, make sure that the servers have read and write permissions
for the application archive or directory. Similarly, make sure that the credentials used to
execute deployment tasks meet the criteria of the global Deployer role. Otherwise, you can
expect errors similar to:
“Access not allowed for subject: principals=[myuser, Deployers], on
ResourceType: Cluster Action: execute, Target: addDeployment”
If your application includes an improperly configured deployment descriptor, this could lead
to deployment exceptions similar to:
“weblogic.management.DeploymentException: Error while loading
descriptors: Error parsing file “META-INF/application.xml”
If one or more target servers are not in a state that is compatible with accepting new
deployment tasks (Failed or Shutting Down states, for example), the task will fail. This is
particularly important when deploying to a cluster.
In stage mode, the Administration Server copies the deployment files from their original
location on the Administration Server machine to the staging directories of each target server.
For example, if you deploy a Java EE application to three servers in a cluster by using stage
mode, the Administration Server copies the deployment files to directories on each of the
three server machines. Each server then deploys the Java EE application by using its local
copy of the archive files. When copying files to the staging directory, the Administration Server
creates a subdirectory with the same name as the deployment name.
Stage mode ensures that each server has a local copy of the deployment files on hand, even
if a network outage makes the Administration Server unreachable. However, if you are
deploying very large applications to multiple servers or to a cluster, the time required to copy
files to target servers can be considerable. Consider “nostage” mode to avoid the overhead of
copying large files to multiple servers.
The Administration Console uses stage mode as the default mode when deploying to more
than one WebLogic Server instance. weblogic.Deployer uses the target server’s staging
mode as the default, and managed servers use stage mode by default.
If you encounter an out-of-memory exception when you deploy or undeploy an application, try
increasing the permanent generation size in the JVM heap. For example, on Sun use the
command-line argument -XX:MaxPermSize=<size>. This issue tends to be less common
on the JRockit JVM.
[Figure: At deployment, the files of the shared libraries WebLibraryA and WebLibraryB are merged with those of MyWebApp. The deployed application contains the combined contents (page1.jsp, page2.jsp, page3.jsp, lib1.jar); a file in MyWebApp overrides a file of the same name in a library, and the application's web.xml is merged with the web.xml file in each library.]
At the web application level, a shared library is a WAR file that can include servlets, JSPs,
and tag libraries. Shared libraries can be included in an application by reference, and multiple
applications can reference a single shared library.
You can deploy as many shared libraries to Oracle WebLogic Server as you require. In turn,
libraries can reference other libraries, and so on. Because the shared library code and your
own application code are assembled at run time, rules must exist to resolve potential conflicts.
The following are the rules:
• Any file that is located in your application takes precedence over a file that is in a shared
library.
• Conflicts arising between referenced libraries are resolved based on the order in which
the libraries are specified in the META-INF/weblogic-application.xml file (for
enterprise applications) or the WEB-INF/weblogic.xml file (for web applications).
Oracle WebLogic Server supports versioning of the shared Java EE libraries, so that the
referencing applications can specify a required minimum version of the library to use or an
exact, required version. The specification version identifies the version number of the
specification (for example, the Java EE specification version) to which a shared Java EE
library or optional package conforms. The implementation version identifies the version
number of the actual code.
The exception lists the required library name and version:
weblogic.management.DeploymentException:
Error while processing library references. Unresolved
application library references, defined in
weblogic-application.xml: [Extension-Name: hrcommon,
Specification-Version: 3.2, exact-match: true] ...
As a best practice, you should always include version information (an implementation version,
or both an implementation and specification version) when creating shared Java EE libraries.
Creating and updating version information as you develop shared components allows you to
deploy multiple versions of those components simultaneously for testing. For example, a
production application may require a specific version of a library, because only that library has
been fully approved for production use. An internal application may be configured to always
use a minimum version of the same library. Applications that require no specific version can
be configured to use the latest version of the library. The name and version information for a
shared Java EE library are specified in the META-INF/MANIFEST.MF file, as in the following
example:
Extension-Name: myExtension
Specification-Version: 2.0
Implementation-Version: 9.0.0
After you deploy one or more shared libraries, you can then deploy applications and modules
that reference these libraries. Successfully deploying a referencing application requires that
two conditions are met:
• All referenced libraries are registered on the application’s target servers.
• Registered libraries meet the version requirements of the referencing application.
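The version-matching rule can be sketched as follows (a hypothetical illustration; `satisfies` is not a WebLogic API): an exact-match reference requires the registered version to be identical, while a plain reference treats the requested version as a minimum.

```python
# Hypothetical sketch of shared-library version matching. Versions are
# compared component by component, as tuples of integers.
def satisfies(required, registered, exact_match=False):
    req = tuple(int(p) for p in required.split('.'))
    reg = tuple(int(p) for p in registered.split('.'))
    return reg == req if exact_match else reg >= req

print(satisfies('3.2', '3.2', exact_match=True))  # True
print(satisfies('3.2', '3.3', exact_match=True))  # False: exact match required
print(satisfies('2.0', '9.0'))                    # True: 9.0 meets the 2.0 minimum
```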
<error-page>
<exception-type>java.io.IOException</exception-type>
<location>/pages/misc/ioerror.jsp</location>
</error-page>
<error-page>
<error-code>500</error-code>
<location>/pages/misc/generalerror.htm</location>
</error-page>
You can configure WebLogic Server to respond with your own custom web pages or other
HTTP resources when particular HTTP errors or Java exceptions occur, instead of
responding with the standard WebLogic Server error response pages.
You define custom error pages in the <error-page> element of the Java EE standard web
application deployment descriptor, web.xml. (The web.xml file is located in the WEB-INF
directory of your web application.)
Monitor a web
application.
In the left pane of the Administration Console, click Deployments. In the right pane, click the
application that you want to monitor, and then click the Monitoring tab. The available second
level tabs will vary depending on the type of the application. For example, a web application
or module has tabs named Web Application, Servlets, Sessions, and Workload. The available
columns in these tabs include:
• Servlets: The number of Java servlets that are deployed within this application,
including the internal WebLogic servlets. If required, the Servlets tab displays statistics
on a per-servlet basis.
• Sessions: A count of the current number of open sessions in this module
• Sessions High: The highest number of active sessions ever managed by this
application. The count starts at zero each time the application is activated.
• Total Sessions: A count of the total number of sessions opened since the application
was deployed
# Switch to the server's runtime MBean tree
serverRuntime()
# Component runtime MBean for the "payroll" web module of HRApp
web = getMBean('/ApplicationRuntimes/HRApp/ComponentRuntimes/' +
               'ServerC_/payroll')
print 'Sessions: ', web.getOpenSessionsCurrentCount()
# Switch to the server's runtime MBean tree
serverRuntime()
# Pool runtime MBean for the PayrollManager EJB in payroll.jar
ejb = getMBean('/ApplicationRuntimes/HRApp/ComponentRuntimes/' +
               'payroll.jar/EJBRuntimes/PayrollManager/PoolRuntime/' +
               'PayrollManager')
print 'Pool Size: ', ejb.getPooledBeansCurrentCount()
print 'In Use: ', ejb.getBeansInUseCount()
print 'Waiting: ', ejb.getWaiterCurrentCount()
Answer: a
Answer: b, d
Answer: b