
HP-UX 11i v3 Delta Support

March 2007

Student guide: 3 of 3

Use of this material to deliver training without prior written permission from HP is prohibited.
© Copyright 2007 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP
products and services are set forth in the express warranty statements accompanying such products
and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.
This is an HP copyrighted work that may not be reproduced without the written permission of HP.
You may not use these materials to deliver training to any person outside of your organization
without the written permission of HP.
Intel and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in
the United States and other countries.
Printed in the USA.
HP Restricted — Contact HP Education for customer training materials.
Contents
Volume 1
1 Course Introduction
2 Tour HP-UX 11i v3 (11.31)
Supported Systems 9
Highlights of HP-UX 11i v3 19
3 System Installation and Configuration
Installation Considerations 3
Installation Process 44
Update Process 53
Troubleshooting Installation & Update 68
Post Installation Tasks 82
Software Deployment 92
Boot and Update 105
Kernel Configuration 112
Peripheral Device Configuration 132

Volume 2
4 Flexible Capacity
Performance and Scalability 3
Montecito Processor 8
Dynamic Resources 49
Operating System Enhancements 127

Volume 3
5 Secured Availability
Montecito Processor 3
Recovery 20
Networking 105
Security 153
Serviceguard 238
6 Simplified Management
System Management 3
Event Manager 65
Event Monitoring Service 81
HP-UX 11i v3 Delta Secured Availability

Secured Availability
Section 5

© 2007 Hewlett-Packard Development Company, L.P.


The information contained herein is subject to change without notice

This section covers features that relate to system uptime and keeping system services available,
including networking, security, and recovery. We refer to these as the secured availability
features of HP-UX 11i v3 (11.31).

March 2007 Availability-1



Section Objectives
Upon completion of this section, you will be able to:
• Describe features that ensure system and services availability
– Networking
– Security
• Attain the module objectives of these Secured Availability
topics
– Montecito Processor
– Recovery
– Networking
– Security
– ServiceGuard

2 March 2007

Section Objectives

Upon completion of this section, you will be able to describe features that ensure system and
services availability. This section comprises modules covering the following topics:

• Montecito Processor availability features
• System Recovery tools and features
• Networking enhancements
• Security features, tools, and enhancements
• ServiceGuard enhancements

The section includes labs related to the modules.

March 2007 Availability-2



Montecito Processor Module
• Montecito Power Management Features
• Montecito Automatic Processor Recovery

3 March 2007

March 2007 Availability-3



Montecito Processor
Power Management
Features

4 March 2007

March 2007 Availability-4



Montecito’s ACPI Processor States


High
Processor Utilization Processor
Power States P0 Performance
(C-states) States
System (P-states)
Power States C0 P1
(S-states) PMI,
PAL_HALT INIT,
Reset,
S0 _LIGHT
Etc.
Power-
PAL_HALT
P2
Off,
Standy,
etc. Power-0n
or Wake
C1 Low
C3 Utilization
Event P3
S5
S1
S1
CoreFreq(Pi) > CoreFreq(Pi+1)
PowerDiss(Pi) > PowerDiss(Pi+1)

5 March 2007

This slide illustrates the System Power States (S-states), Processor Power States (C-states), and
Processor Performance States (P-states) on Montecito. The system firmware and OS have the
information necessary for implementation, and they can decide when and whether to support
these power states based on product requirements.

P-states are distinct, settable power/frequency levels that allow performance to be balanced
against power consumption. This is also known as Demand Based Switching.

C-states are distinct, settable sleep modes that park a processor at lower power consumption
when it is not in use.

These transitional relationships are defined in section 8 of the ACPI Specification version 3.0.
Dependency domain description for multi-core/thread is new in ACPI 3.0. So is a “T-state”,
which Montecito is not using.

The processor states have the following meanings when the system is in the S0 working state.
Power state C0 is the active power state wherein the processor executes instructions and has
performance states P0, P1, … Pn. Power states C1 through Cn are processor sleeping states
wherein the processor does not execute instructions.

March 2007 Availability-5



Montecito Processor Power States (C-states)

Allow for a state choice of normal power and low power for idle
Benefits:
• Power savings of up to 50W over regular idle
• Final stage of kernel idle can go into PAL_HALT_LIGHT to save power
• A processor removed via iCAP can go into PAL_HALT_LIGHT to save power
• PAL_HALT_LIGHT at final idle signals I-VM to yield from idle guest
Implementation:
• Kernel changed to issue PAL_HALT_LIGHT call in final stage of idle
• System firmware changed to issue PAL_HALT_LIGHT call when iCAP returns processor
• IVM host must intercept guest PAL_HALT_LIGHT
HP-UX 11i v3
• Issues PAL_HALT_LIGHT in final stage of idle

[Slide figure: C-state diagram showing C0 (active), C1 (entered via PAL_HALT_LIGHT), and C3
(entered via PAL_HALT); PMI, INIT, reset, and similar events return the processor to C0.]

6 March 2007

The Montecito processor power states allow for a choice between normal power and low power
for an idle processor. The benefits of processor power states include a power savings of up to
50W over regular idle. The final stage of kernel idle can go into PAL_HALT_LIGHT to save
power. If a processor is removed via iCAP, it can also go into PAL_HALT_LIGHT to save power.
Additionally, PAL_HALT_LIGHT at final idle signals the Integrity-VM host to yield from an idle guest.

To use processor power states, the kernel was changed to issue a PAL_HALT_LIGHT call in the
final stage of idle. The system firmware was changed to issue a PAL_HALT_LIGHT call when
iCAP returns a processor. The Integrity-VM host must intercept guest PAL_HALT_LIGHT.

On HP-UX 11i v3, the kernel issues a PAL_HALT_LIGHT in final stage of idle.
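As a conceptual sketch (not HP-UX kernel source), the final-stage-of-idle behavior described above can be modeled as a simple decision: keep running work in C0, poll briefly in early idle, and issue PAL_HALT_LIGHT only in the final stage of idle. The function name and spin threshold here are invented for illustration.

```python
# Toy model of the idle-loop decision: enter the low-power C1 state via
# PAL_HALT_LIGHT only in the final stage of idle. Illustrative only.

def idle_loop_step(run_queue_empty, idle_spins, spin_threshold=100):
    """Return the action the idle thread takes on one pass."""
    if not run_queue_empty:
        return "RUN_NEXT_THREAD"        # C0: keep executing instructions
    if idle_spins < spin_threshold:
        return "SPIN"                   # early idle: stay in C0, poll cheaply
    return "PAL_HALT_LIGHT"             # final stage of idle: enter C1

print(idle_loop_step(False, 0))    # busy CPU stays in C0
print(idle_loop_step(True, 10))    # early idle keeps polling
print(idle_loop_step(True, 500))   # final idle parks the CPU in C1
```

The same decision point is where an Integrity-VM guest's PAL_HALT_LIGHT would be intercepted by the host so the idle guest yields the physical CPU.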

March 2007 Availability-6



C-state Related PAL procedures


Architected PAL Procedures
• PAL_HALT_INFO
– power_buffer
– Does not provide dependency domain info
• PAL_HALT_LIGHT
• PAL_HALT
– halt_state
Itanium-2 Specific PAL
• PAL_HALT_LIGHT_SPECIAL
– Only used by SAL or PMI code for certain chipsets

7 March 2007

C-state related PAL firmware procedures include PAL_HALT_INFO, PAL_HALT_LIGHT, and
PAL_HALT. (The P-state procedures PAL_PSTATE_INFO, PAL_SET_PSTATE, and PAL_GET_PSTATE
are covered later in this module.)

PAL_HALT_INFO returns the power buffer. The power buffer contains the following information.
Index 0 is for the LIGHT_HALT state. Each entry carries the im and co flags (implemented and
coherency), the power consumption (mW), and the entry and exit latencies. However, it does not
provide dependency domain information. PAL_HALT is the true halted state.
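The shape of that power buffer can be sketched as follows. This is a simplified stand-in for illustration, not the exact Itanium register layout.

```python
# Illustrative decoding of a PAL_HALT_INFO power_buffer (index 0 is the
# LIGHT_HALT entry). Field layout is a simplified assumption.

from dataclasses import dataclass

@dataclass
class HaltStateInfo:
    implemented: bool      # "im" flag
    coherent: bool         # "co" flag
    power_mw: int          # power consumption in that state (mW)
    entry_latency: int     # cycles to enter the state
    exit_latency: int      # cycles to resume from the state

def describe(buffer):
    # Note: PAL_HALT_INFO carries no dependency-domain information,
    # so nothing here models shared C-state domains.
    return {i: f"{s.power_mw} mW" for i, s in enumerate(buffer) if s.implemented}

halt_info = [HaltStateInfo(True, True, 40000, 5, 5),     # index 0: LIGHT_HALT
             HaltStateInfo(True, False, 2000, 500, 500)]  # deeper halt state
print(describe(halt_info))
```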

There is an Itanium-2 specific PAL procedure, PAL_HALT_LIGHT_SPECIAL. It is only used by SAL
or PMI code for certain chipsets. The kernel is not supposed to use this call.

The ACPI C-state object is _CST. The system firmware builds _CST from information obtained
from the PAL C-state procedures. The OS decides when to send the processor to the various
C-states.

March 2007 Availability-7



Demand Based Switching

Variable performance mechanism
• Each P-state has a power ceiling
• Voltage and execution frequency modulated to keep below power ceiling at each P-state
• Under OS control
Benefits:
• Power consumption can be matched to required execution performance
HP-UX releases
• HP-UX 11i v3 has P-state infrastructure in place
• Post-11.31 will have DBS enabled concurrent with processor feature availability

[Slide figure: P-state ladder from P0 (high utilization) through P1, …, PN (low utilization), with
CoreFreq(Pi) > CoreFreq(Pi+1) and PowerDiss(Pi) > PowerDiss(Pi+1).]
8 March 2007

Demand based switching is the variable performance mechanism that uses the P-states. Each
P-state has a power ceiling. The voltage and execution frequency are modulated to stay below
the power ceiling at each P-state. This is under the kernel’s control.

The major benefit of using demand based switching is that power consumption can be matched
to the currently required execution performance.

At the initial release of HP-UX 11i v3, the P-state infrastructure is in place. In a post-11.31
release, the demand based switching (DBS) will be enabled concurrent with processor feature
availability.
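A minimal governor sketch shows the idea: pick the lowest-power P-state whose relative performance still covers recent utilization. The perf_index values (0 to 100 relative to P0) mirror what PAL_PSTATE_INFO reports; the headroom factor and policy are invented for illustration and are not HP-UX's actual DBS algorithm.

```python
# Toy demand-based-switching governor: choose the deepest (lowest-power)
# P-state whose performance index still meets demand with some headroom.

def choose_pstate(utilization, perf_index):
    """perf_index[i] is the relative performance of P-state i (P0 = 100)."""
    # Walk from the deepest P-state back toward P0 and take the first
    # state that still covers utilization with 20% headroom (assumed).
    for i in range(len(perf_index) - 1, -1, -1):
        if perf_index[i] >= utilization * 1.2:
            return i
    return 0  # demand too high for any reduced state: run at P0

perf = [100, 80, 60, 40]          # P0..P3
print(choose_pstate(95, perf))    # heavy load runs at P0
print(choose_pstate(30, perf))    # light load drops to P3
```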

March 2007 Availability-8



P-state Related Architected PAL Procedures


PAL_PSTATE_INFO
• pstate_buffer
– perf_index (0…100 relative to P0)
– typical_power_dissipation (mW)
– transition_latency_1 and transition_latency_2
• pstate_num (number of P-states supported)
• dd_info (dependency domain info)
– ddt (dependency domain type)
– ddid (dependency domain ID)
PAL_SET_PSTATE
• p_state (P-state requested)
PAL_GET_PSTATE
• Returns “current” and “average”
• “current” is used to match one of the states provided by the
PAL_PSTATE_INFO

9 March 2007

The system firmware builds _PSS and _PSD from information obtained from the PAL P-state
procedures. The OS decides when to send the processor to the various P-states.

The P-state related architected PAL procedures are PAL_PSTATE_INFO, PAL_SET_PSTATE, and
PAL_GET_PSTATE. PAL_PSTATE_INFO returns a P-state buffer that contains a performance index
of 0 to 100 relative to P0. It has a typical power dissipation value (mW) and two transition
latencies. The PAL_PSTATE_INFO also returns the number of P-states supported and the
dependency domain information. The dependency domain information consists of the
dependency domain ID and the dependency domain type. The types are HW independent, HW
coordinated, and SW coordinated. The PAL_SET_PSTATE sets the P-state to the requested P-state.
The PAL_GET_PSTATE returns the “current” and “average” states. The “current” is used to match
one of the states provided by the PAL_PSTATE_INFO.
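The "current" matching can be sketched as below. The buffer fields are a simplified stand-in for the real pstate_buffer, and the nearest-match rule is an assumption for illustration.

```python
# Sketch of matching PAL_GET_PSTATE's "current" value back to one of the
# states advertised by PAL_PSTATE_INFO, as the notes describe.

pstate_info = [
    {"state": 0, "perf_index": 100, "typical_power_mw": 100000},
    {"state": 1, "perf_index": 75,  "typical_power_mw": 70000},
    {"state": 2, "perf_index": 50,  "typical_power_mw": 45000},
]

def match_current(current_perf_index):
    # Pick the advertised P-state whose perf_index is closest to "current".
    return min(pstate_info,
               key=lambda s: abs(s["perf_index"] - current_perf_index))["state"]

print(match_current(74))   # closest to P1's perf_index of 75
```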

ACPI P-state objects include _PCT for performance control, _PSS for performance supported
states, and _PPC for performance present capabilities.

March 2007 Availability-9



P and C-state Changes when Thread is Disabled


What happens if multi-threading is disabled?
• Second thread is not operational at all
• PAL takes this into account
– Only need to make PAL P-state calls on the first (and only active)
thread to affect the power/performance

10 March 2007

What happens to the P-states and C-states on Montecito if threading is disabled? If
multithreading is disabled, the second thread is not operational at all. The PAL firmware takes
this into account. Therefore, you only need to make PAL P-state calls on the first thread, which is
the only active thread, to affect the power/performance.

March 2007 Availability-10



Montecito specific Power-related PAL Procedures


PAL_POWER_INFO
• Min and Max power ceiling
PAL_SET_MAX_POWER
• Sets the processor package power ceiling
• Can only be called once after reset
• When SAL calls PAL_SET_MAX_POWER, it sets the P0 state power
• If SAL never makes this call, the processor package would default to the
maximum power ceiling that is returned in the PAL_POWER_INFO call
These are not architected PAL procedures
• System firmware makes these calls during boot and initialization
– OS is not supposed to make these calls

11 March 2007

There are a couple of Montecito specific power-related PAL procedures. These are not
architected PAL procedures. The kernel is not supposed to make these calls. It is anticipated that
the system firmware makes these calls during boot and initialization.

The PAL_POWER_INFO procedure returns the minimum and maximum power ceiling.

The PAL_SET_MAX_POWER call sets the processor package power ceiling. It can only be called
once after reset. When SAL calls PAL_SET_MAX_POWER, it is setting the P0 state power. If SAL
never makes this call, the processor package would default to the maximum power ceiling that is
returned in the PAL_POWER_INFO call.
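The call-once semantics described above can be modeled as follows. This is pure illustration of the contract (SAL sets the P0 power ceiling at most once after reset, otherwise the PAL_POWER_INFO maximum applies); the class and field names are invented.

```python
# Models PAL_SET_MAX_POWER's call-once-after-reset behavior. Illustrative.

class ProcessorPackage:
    def __init__(self, min_power_mw, max_power_mw):
        self.min_power_mw = min_power_mw          # from PAL_POWER_INFO
        self.max_power_mw = max_power_mw          # from PAL_POWER_INFO
        self.ceiling_mw = max_power_mw            # default if SAL never calls
        self._ceiling_set = False

    def set_max_power(self, mw):
        if self._ceiling_set:
            raise RuntimeError("PAL_SET_MAX_POWER: already called since reset")
        if not (self.min_power_mw <= mw <= self.max_power_mw):
            raise ValueError("ceiling outside PAL_POWER_INFO range")
        self.ceiling_mw = mw                      # this becomes the P0 power
        self._ceiling_set = True

pkg = ProcessorPackage(40000, 104000)
pkg.set_max_power(90000)                          # SAL sets the ceiling once
print(pkg.ceiling_mw)
```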

March 2007 Availability-11



Foxton Technology
Variable frequency for power / temperature control
• Results in clock speed variations of at most +10% - likely to be less
– Practically always only upside
• Variations based on power at very short intervals
• Temperature events at longer intervals
• Variations may be no worse than those caused by cache misses/stalls
• Provides maximum performance under all power/temp situations
– Especially good for data centers
• Provides consistent power/temp profiles to prevent “spiky” or catastrophic
situations
Frequency scaling / Thermal threshold states
• Steps down execution frequency when CPU starts getting warm
• Between Foxton Technology and thermal alerts
• Over-temp situations can be mitigated with less drastic measures
HP-UX 11i v3
• Analyze any variability problems with feature and performance tests

12 March 2007

Montecito implements the Foxton Technology. Foxton is used to realize the P-states. The
operating system sets the processors to a P-state. Then, the Foxton hardware kicks in to support
the P-state.

On Montecito, there is variable frequency for power and temperature control. This results in
clock speed variations of at most +10%, and usually less, so in practice the feature is almost
always an upside. The variations are based on power at very short intervals; temperature events
occur at longer intervals. The variations may be no worse than those caused by cache misses or
stalls. The variable frequency provides maximum performance under all power and temperature
situations. This is especially good for data centers because it provides consistent power and
temperature profiles that prevent “spiky” or catastrophic situations.

Both the frequency scaling and the thermal threshold states step down execution frequency when
the CPU starts getting warm. This is at a level between Foxton Technology and thermal alerts.
This allows over-temperature situations to be mitigated with less drastic measures.

The HP-UX 11i v3 pre-enablement will allow analysis of any variability problems with features
and performance tests.

March 2007 Availability-12



ACPI P-states and Thermal Zone


Thermal management is defined in ACPI 3.0
• Thermal influencing information
• Passive Cooling uses P-states
P-states do not require the support for Thermal Zone
• OS determines when to send processors to various P-states by
monitoring application loads and processor idle time
Integrity systems have not implemented Thermal Zone

13 March 2007

Thermal management is defined in section 11 of the ACPI 3.0 specification. Thermal influencing
information is new in ACPI 3.0. Passive Cooling uses P-states.

However, P-states do not require the support for Thermal Zone. The operating system determines
when to send processors to various P-states by monitoring application loads and processor idle
time. An example is Intel Demand-based Switching. For further information, refer to
http://www.intel.com/update/contents/sv09031.htm.

Note that Integrity systems have not implemented Thermal Zone.

March 2007 Availability-13



ACPI P-states and Power Zone


Intel proposed Power Budgeting for Automatic Control of
Power Consumption (ACPC) to ACPI 3.0
• Power budgeting uses P-states
• Targeting Blades data-center environment
P-states do not require the support for Power Zone
• OS determines when to send processors to various P-states by
monitoring application loads and processor idle time

14 March 2007

In the ACPI 3.0 specification, Intel proposed Power Budgeting for Automatic Control of Power
Consumption, or ACPC. Power budgeting uses P-states. It is targeting the Blades data-center
environment.

P-states do not require the support for Power Zone. The operating system determines when to
send processors to various P-states by monitoring application loads and processor idle time, like
the demand-based switching. For further information, including more on ACPC, refer to
http://www.intel.com/update/contents/sv09031.htm.

March 2007 Availability-14



Current State
No support for the ACPI Thermal Management
• Thermal Zone and Passive Cooling
PAL C-state and P-state Procedures provided by Intel
If a system supports processor P-states and C-states
• System firmware must supply _CST, _PSS, _PCT, _PPC and _PSD objects
for the P-state and C-state
• Interfaces defined in ACPI
OS should use the ACPI objects and should provide handler to
support ACPI-to-PAL mapping
• OS decides when to send a processor to the various power states
• Optionally an OS can use the architected PAL procedures directly to
manage these power states
– If the OS is not interested in sharing code with other architectures
– If the OS is not concerned about ACPI compliance

15 March 2007

Currently, there is no support for the ACPI Thermal Management: Thermal Zone and Passive
Cooling. The PAL C-state and P-state procedures are provided by Intel.

If a system supports processor P-states and C-states, the system firmware supplies the _CST,
_PSS, _PCT, _PPC and _PSD objects for the P-state and C-state. These interfaces are defined in
ACPI. (Note that _PSD is new in ACPI 3.0.) The ACPI/PAL mapping is described in the Intel
AppNote documentation.

The OS should use the ACPI objects and should provide the handler to support the ACPI-to-PAL
mapping. Then, the OS can decide when to send a processor to the various power states.
Optionally, an OS can use the architected PAL procedures directly to manage these power
states. This can be useful if the OS is not interested in sharing code with other architectures, or if
the OS is not concerned about ACPI compliance.

March 2007 Availability-15



Montecito
Automatic Processor Recovery
and Other Montecito Availability
Features

16 March 2007

March 2007 Availability-16



Automated Processor Recovery (APR)


Recovery for some transient CPU hardware error events
• Cosmic ray/alpha particle events can cause transient errors
• Examples include TLB corruption and register file corruption
• HW (with some FW help) guarantees CPU recovery with no data corruption
• Current computation invalidated
Benefits of APR
• Selective treatment of application versus OS failure
– Application cessation or Integrity-VM guest MCA
• Not necessarily OS panic
– New application error recovery options
• Kill and/or signal a process
• Accurate, precise, and reliable error information
– Potential HW/FW aid in testing and debugging methods
HP-UX releases
• 11.31 enables new MCA architecture to implement more selective recovery
• Post-11.31 activates use of new MCA recovery

17 March 2007

Automated Processor Recovery, APR, is the detection and recovery of some classes of transient
CPU failures or hardware error events, such as those that can be caused by cosmic ray and
alpha particle events. Example error conditions from such events include TLB corruption and
register file corruption. The CPU will detect errors in L1 tags, Floating Point Register and General
Register parity, and TLB parity. When the errors are detected they are repaired and an MCA is
signaled. HP's APR technology is the mechanism by which we keep some recoverable MCA
events from being fatal MCA events. With APR, the hardware, with some help from the
firmware, guarantees CPU recovery with no data corruption. Only the current computation is
invalidated.

(Recall that an MCA, or Machine Check Abort, is raised by the Itanium processor when an error
occurs within the hardware. For example: a multi-bit error in a page of memory. Firmware, PAL
and SAL, initially handle the error, providing containment, information gathering, and some level
of recovery. If the firmware does not completely recover from the error, it calls OS_MCA to
allow the OS to attempt recovery. If the firmware does completely recover from the MCA, it will
report the MCA to the OS at a later time as a Corrected Machine Check (CMC).)
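The MCA handling flow just described condenses to a small decision tree: firmware tries containment and recovery first; full firmware recovery is later reported as a CMC; otherwise OS_MCA lets the OS attempt recovery. The sketch below is purely illustrative control flow, not HP-UX's handler.

```python
# Condensed model of the MCA handling flow described in the notes.

def handle_mca(firmware_recovers, os_recovers):
    if firmware_recovers:
        # PAL/SAL fully recovered: report later as a Corrected Machine Check.
        return "report CMC to OS later"
    if os_recovers:
        # APR path: contain the error to a process or an Integrity-VM guest.
        return "OS_MCA: kill/signal process or MCA the VM guest"
    return "fatal MCA: system panic"

print(handle_mca(True, False))
print(handle_mca(False, True))
print(handle_mca(False, False))
```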

There are two main benefits of APR. The primary advantage is that it allows HP-UX to selectively
treat application versus OS failure. In other words, instead of having a total system failure, the
impact of the error can be limited to either the application or to the Integrity-VM guest. APR
provides new application error recovery options. The OS can kill and/or signal a process, or it
can send an MCA to an Integrity-VM guest, instead of crashing entirely. Additionally, more
accurate, precise, and reliable error information is available that can be used to aid hardware
and/or firmware testing and debugging.

The HP-UX 11i v3 (11.31) initial release enables the new MCA architecture to implement more
selective recovery. A post-11.31 release will activate use of the new MCA recovery.

March 2007 Availability-17



Pellston – Cache Safe Technology and DPR


Cache Level Protection
• L1 cache is parity protected by PAL firmware
• L2 and L3 caches are ECC protected
Pellston Cache Safe Technology
• Tests and corrects errors on L3 cache line
• Disables defective cache line
– Fetch data from memory if it would have gone to disabled cache line
• No performance penalty
– Fault only affects one cache line
Dynamic Processor Resiliency (DPR)
• De-allocate CPU exhibiting too many errors
– Configurable threshold
• Increased resilience to CPU cache errors
• Can use iCAP to have spare CPUs
18 March 2007

Pellston Technology is another Montecito feature that provides OS transparent cache error
correction. Visibility to the OS happens as a logging event only after a failure threshold has
been reached.

On Montecito, the Level 1 cache is parity protected. If a parity error occurs in the Level 1
cache, the processor generates a local MCA that transfers the control to the PAL firmware, and
PAL corrects the error. After PAL corrects the error, a corrected CMC event is generated for the
OS. The Level 2 and Level 3 caches are ECC protected. If a single bit error occurs in the Level 2
or Level 3 cache, the processor generates the CMC and the processor hardware corrects the
error. After the processor corrects the error, a CMC event is generated to the OS. Additionally,
the General Registers, Floating Point Registers, and the TLB are parity protected.

The Intel Cache Safe Technology is code-named Pellston. As cache sizes become larger, the
likelihood of cache failures increases. Itanium processors, starting with the dual-core Intel Itanium
2 Montecito processor, offset this increased likelihood of failures using the Pellston technology.
When there is a correctable error in the L3 cache, the Cache Safe Technology tests the cache
line and corrects the error. If the cache-line is found to be defective, it is disabled. Whenever
something is fetched that would be mapped to a disabled cache line, the processor has to fetch
the data from memory. No performance penalty is incurred since the fault affects only one cache
line. The Cache Safe Technology will also keep the processor from crashing if the cache line
ever fails.

Dynamic Processor Resiliency, or DPR, is the system’s ability to de-allocate (online) those CPUs
that are exhibiting an unacceptable number of correctable errors. CPUs are de-configured if the
number of corrected cache errors reaches a specific configurable threshold. DPR is currently
available on HP-UX. DPR, along with Cache Safe technology, makes Montecito based systems
fully resilient to CPU cache errors, a major contributor to system downtime. One side effect of
DPR is that the system does lose processing power if a CPU is de-configured. This issue can be
avoided by configuring the system with extra processing power, by including either active or
passive CPUs into the system, for example by using iCAP. As CPU technology advances,
Dynamic Processor Resiliency will continue to be enhanced to deal with new recoverable error
sources. This will further differentiate Itanium 2 CPUs from lower-end CPUs that do not provide
such protection, and will eliminate about 70% of the remaining CPU error sources.
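The DPR policy above (de-configure a CPU once its corrected-error count crosses a configurable threshold) can be sketched as a simple scan. The threshold value and data shape are illustrative assumptions, not HP-UX's actual monitor.

```python
# Toy Dynamic Processor Resiliency monitor: flag CPUs whose corrected
# cache error count has reached the configurable threshold.

def dpr_scan(error_counts, threshold=50):
    """Return the list of CPU ids that should be de-configured."""
    return [cpu for cpu, errors in error_counts.items() if errors >= threshold]

counts = {0: 3, 1: 72, 2: 0, 3: 55}
print(dpr_scan(counts))        # CPUs 1 and 3 exceed the threshold
```

In practice the lost capacity would be backfilled from spare (e.g. iCAP) processors, as the notes describe.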

March 2007 Availability-18



Lockstep and Logical Processor Migration


Lockstep
• Socket level voting
– Enables two CPUs to work together on a computation
• Might become more important in the future if smaller
processes cause chip failures with silent data corruption
– But alternative SDC fixes may be available
Logical Processor Migration (LPM)
• FW call to move HW thread state/context to another HW
thread
• Used by Lock-step voting

19 March 2007

Lockstep is a facility that enables two CPUs to work in tandem on a computation, with the proviso
that any disagreement between the CPUs on any computation will signal an event. Although not
presently used, this feature could allow the implementation of some HA strategies.

Logical Processor Migration (LPM) is the firmware-enabled migration of the state of one CPU (at
the hardware thread level if Hyper-Threading is enabled) to another CPU (or hardware thread).

March 2007 Availability-19



Recovery Module
• Single System High Availability
• PCI Error Recovery
• Next Generation Fault Management
• Diagnostics
• System Dump Technology

20 March 2007

Single System High Availability

PCI Error Recovery
PCI Error Recovery provides the ability to detect, isolate, and automatically recover from a PCI
error, avoiding a system crash. The 10 Gigabit Ethernet driver, ixgbe, supports PCI Error
Handling and Recovery.

Next Generation Fault Management
• SysFaultMgmt
• WBEM providers
• EVweb
• Error Management Technology (EMT): an online searchable repository for accessing HP-UX
error messages

Diagnostics
Support Tools Manager, EMS Hardware Monitors, and other tools.

Improved system dump technology
Livedump (crash dump on a live system) and concurrent dump (multithreaded, concurrent I/O
crash dump).

March 2007 Availability-20



Recovery Module
Single System High Availability
(SSHA)

21 March 2007

March 2007 Availability-21



Recovery Module
Single System High Availability
Architecture

22 March 2007

March 2007 Availability-22



I/O SSHA Architecture


[Slide figure: block diagram of the I/O SSHA architecture. User level: ServiceGuard, WLM, PRM,
SFM, kernel configuration (KC), parmgr, pdweb, the olrad CLI, hotplugd, libolrad, libio, the CIM
infrastructure with a Diag / I/O tree provider, DLKM, and the CRA infrastructure with Net,
RDMA, MS, and IPoIB CRA modules. At the user/kernel boundary: EVM. Kernel level: cell OL*,
DLKM infrastructure, mass storage ULMs (LVM, VxVM), the Device Driver Environment (WSIO
core, DDE services, disk and tape class drivers, GIO), OL* and error recovery infrastructure,
EH/PSM, SCSI protocol services, SAN services, SAS, pSCSI, and FC transport drivers, ioconfigd,
DLPI/VLAN, APA/LM, the IPoIB driver, native and IHV drivers, HDM, RIF, and the
machine-dependent layer. Below that: firmware (PDC, SFW) and hardware (Ethernet,
iWARP/TOE/iSCSI, InfiniBand, pSCSI, and FC hardware).]

23 March 2007

This slide shows a block diagram of the many components of the system at the user, kernel, and
hardware levels that play important roles in providing Single System High Availability, or SSHA.
It also shows their relationships to one another.

The light green boxes are components that the user can work with using various commands and
tool packages. These include, at the user level, ServiceGuard, WLM/PRM, CIM, Kernel
Configuration (e.g. kcweb), Peripheral Device Manager (pdweb), Partition Manager (parmgr),
and System Fault Management (SFM). The EVM package works at the user/kernel intersection.
And, cell OL* is at the kernel level. These user level tools are supported by the Critical Resource
Analysis (CRA) infrastructure, the olrad command, DLKM, and several libraries.

The bulk of the kernel level support for SSHA resides in the Device Driver Environment. Herein is
the core set of device drivers and driver interface infrastructure. Other key elements in the set of
kernel components responsible for supporting SSHA include the OL* support, the ioconfig
daemon, the machine dependent interface to the firmware, the DLKM kernel level infrastructure,
and additional infrastructure support for services such as IHV and DLPI/VLAN.

March 2007 Availability-23



Recovery Module
Single System High Availability
Critical Resource Analysis
(CRA)

24 March 2007

March 2007 Availability-24



Critical Resource Analysis (CRA)


IPoIB is a separate CRA module
• Avoids complexity for RDMA CRA
X.25/HF CRA continues to be part of olrad
• Tier2 technologies may be obsoleted in the next release
NetCRA uses SG commands to determine SG usage of link
• No ServiceGuard CRA
• SG customers use SG commands for delete operations
– Post 11.31
Dump is treated as Data Critical by MS CRA
CRA infrastructure is a pure pass-through
• Commands define op-codes and parameters
– These are handled by the subsystem CRA modules
• CRA infrastructure calls all subsystem CRA modules sequentially
• CRA results are logged to a file
– File is maintained by the commands
– CRA infrastructure is used by nwmgr, scsimgr, saconfig, sasmgr
64-bit CRA is delivered on Integrity-based systems

25 March 2007

IPoIB is a separate CRA module. This avoids complexity for RDMA CRA.

X.25/HF CRA continues to be part of olrad. These Tier2 technologies will probably be obsoleted
in the next HP-UX release.

NetCRA uses ServiceGuard commands to determine ServiceGuard’s usage of the link. There is
not an SG CRA. ServiceGuard customers are expected to use SG commands for delete
operations post HP-UX 11i v3.

Dump is treated as Data Critical by MS CRA.

The CRA infrastructure is a pure pass-through implementation. Commands define op-codes and
parameters that are handled by the subsystem CRA modules. The CRA infrastructure calls all
subsystem CRA modules sequentially, and the CRA results are logged to a file that is maintained
by the commands. The CRA infrastructure is also used by nwmgr, scsimgr, saconfig, and sasmgr.
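The pass-through behavior can be sketched as follows. Module names, op-codes, the early-exit rule, and the log format are illustrative assumptions, not the actual HP-UX interfaces.

```python
# Sketch of the pass-through CRA infrastructure: the command supplies an
# op-code and parameters, the infrastructure calls each registered
# subsystem CRA module in sequence, and results are logged.

def cra_run(opcode, params, modules, log_lines):
    """Call each subsystem CRA module in order; stop early on failure."""
    results = []
    for name, handler in modules:
        status = handler(opcode, params)           # pure pass-through
        log_lines.append(f"{name}: {opcode} -> {status}")
        results.append(status)
        if status != "CRA SUCCESS":
            break
    return results

modules = [("MassStorageCRA", lambda op, p: "CRA SUCCESS"),
           ("NetCRA",         lambda op, p: "CRA SUCCESS")]
log_lines = []
results = cra_run("ANALYZE", {"interface": "lan1"}, modules, log_lines)
print(results)
print(log_lines)
```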

Finally, 64-bit CRA is delivered on Integrity-based systems.

March 2007 Availability-25



CRA Synchronizations and Limitations


Net CRA and Mass Storage CRA do not do locked CRA
IB CRA is the only CRA module that does locked CRA
• Device state is changed to "transient" suspended state
• Blocks new IA handles from getting allocated on affected
HCA
CRA infrastructure uses semaphore to synchronize with
other CRA instances
Only one CRA analysis at any time

26 March 2007

Net CRA and Mass Storage CRA do not do locked CRA; IB CRA is the only CRA module that
does locked CRA. During the locked CRA, the device state is changed to a "transient"
suspended state. This blocks new IA handles from getting allocated on the affected HCA. The
CRA infrastructure uses a semaphore to synchronize with other CRA instances. Only one CRA
analysis may run at any time.
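The serialization described above can be illustrated with a semaphore: a second analysis blocks until the first completes. The structure is an illustration of the synchronization idea, not HP-UX's implementation.

```python
# Models "only one CRA analysis at any time": a semaphore serializes
# concurrent CRA instances.

import threading
import time

cra_semaphore = threading.Semaphore(1)
trace = []

def run_cra_analysis(instance_id):
    with cra_semaphore:                    # only one holder at a time
        trace.append(("start", instance_id))
        time.sleep(0.01)                   # simulate analysis work
        trace.append(("end", instance_id))

threads = [threading.Thread(target=run_cra_analysis, args=(i,)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each analysis runs to completion before the next one starts.
print(trace)
```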

March 2007 Availability-26


HP-UX 11i v3 Delta Secured Availability

CRA flow during PCI Card Delete


[Sequence diagram: olrad command → CRA infrastructure interface (cra_main()) → Mass Storage CRA → Networking CRA]
1. olrad issues ANALYZE_LOCK to the CRA infrastructure
2. The infrastructure issues ANALYZE_LOCK to the Mass Storage CRA, which returns CRA SUCCESS
3. The infrastructure issues ANALYZE_LOCK to the Networking CRA, which returns CRA SUCCESS
4. CRA SUCCESS is returned to olrad, which performs DELETE CARD
5. olrad issues RELEASE; the infrastructure issues RELEASE to each CRA module, each returning CRA SUCCESS


When the olrad command, or pdweb, is used to perform a PCI OLD, the first operation that must
happen successfully is the CRA. The olrad command uses the CRA infrastructure interface to first
perform a mass storage CRA, then, if that is successful, it performs a networking CRA. If both of these succeed, olrad signals that the card can be removed and performs the card deletion. The resources are then released at both the mass storage and networking levels.

This slide shows the flow during a successful CRA.


LAN CRA

[Diagram: LAN CRA flow – CRA → NetCRA → DLPI → driver instance, with ServiceGuard usage checked separately]
• NetCRA invokes the cmviewcl command to check ServiceGuard usage of LAN links.


This slide shows how LAN CRA works.

First, the user initiates the Check OLD sequence. The CRA identifies all of the interfaces in question and calls NetCRA. NetCRA identifies each interface PPA and issues usage information primitives to DLPI. DLPI then issues the request to the driver instance if the driver has registered this capability. A TOE driver registers its capability to participate in CRA analysis, so DLPI also calls TOE during CRA.

The driver returns the usage to DLPI, which collectively returns the usage from DLPI and the driver
to NetCRA. Now, NetCRA invokes cmviewcl command to check SG usage of LAN links.

NetCRA issues the requests for each one of the interfaces for which CRA has initiated Check OLD. The same sequence of steps is executed between NetCRA, DLPI, and the driver instance.

NetCRA reports the usages and calls back CRA, which returns to the olrad command.
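NetCRA's ServiceGuard check can also be performed by hand. The cmviewcl command it invokes shows which LAN links the cluster is using (a sketch; assumes a configured ServiceGuard cluster on the host):

```shell
# Show detailed cluster status, including the LAN interfaces that
# ServiceGuard is using for heartbeat and data networks; NetCRA runs
# this same check internally during CRA
cmviewcl -v
```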


Mass Storage CRA


[Diagram: CRA Infrastructure → MS CRA Subsystem → IOI, Asterope PM, FS, LVM/VxVM]

SSHA operations:
• Find all MS cards affected – io_query(class)
• Find all affected LUNs under the MS cards
• pstat_getproc_lite, pstat_getfiledetails for each PID – to find out all the devices used by the process. Any device found impacted is a data critical resource.
• pstat_swap, pstat_crashdev – any boot/swap device found impacted is a system critical resource; a dump device found impacted is a data critical resource.
• getmntent() – check if the FS resides on affected LUNs. If so, we have a system or data critical situation depending upon the FS: /, /var, /etc, /stand are system critical.
• Query volume managers to get the list of volumes – check if we have impacted volumes.
• Query ServiceGuard – check if the cluster lock disk is impacted; this is a system critical resource.


This slide illustrates how the Mass Storage CRA works.

The MS CRA subsystem finds all MS cards affected, using io_query(). Then it finds all of the affected LUNs under the MS cards. Next, the pstat interfaces identify all the devices used by each process; any device found impacted is listed as a data critical resource.

Similarly, any boot or swap device found impacted by the pstat interfaces is identified as a system critical resource, while any impacted dump device is a data critical resource.

Next, the MS CRA subsystem checks if the file system resides on affected LUNs. If so, there is a
system or data critical situation. Note that /, /var, /etc, and /stand are system critical FSs.

Next, MS CRA queries the volume managers to obtain the list of volumes to check if there are
any impacted volumes.

Finally, MS CRA queries ServiceGuard to check if a cluster lock disk is impacted because this is
a system critical resource.
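The critical-resource checks above can be approximated from the command line when assessing a slot manually before an OLD operation. A sketch, assuming an HP-UX 11i v3 host:

```shell
# Swap devices: an impacted boot or swap device is system critical
swapinfo -tm

# Dump devices: an impacted dump device is data critical
crashconf -v

# Mounted file systems: /, /var, /etc, and /stand are system critical
mount -v
```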


Recovery Module
Single System High Availability
PCI Error Recovery



PCI Error Handling


Without PCI Error Handling
• PCI slots are in hardfail mode
• PCI error, e.g. parity error, causes system crash
With PCI Error Handling
• Manually recover from PCI error
• Avoid a system crash caused by an MCA or HPMC
• PCI slots are in softfail mode
In softfail mode, when error occurs
• Slot is isolated
• Driver reports error and suspends
Manual recovery
• Use olrad command or attention button
– Restore slot, card, and driver to usable state


PCI Error Recovery is new for HP-UX 11i v3. It was never released on HP-UX 11i v1 or HP-UX
11i v2. However, a similar feature known as PCI Error Handling was released as a Software
Pack on HP-UX 11i v2.

The PCI Error Handling feature allows an HP-UX system to avoid a Machine Check Abort (MCA)
or a High Priority Machine Check (HPMC), if a PCI error occurs, for example, a parity error.
Without the PCI Error Handling feature installed, PCI slots are set in hardfail mode. If a PCI error
occurs when a slot is in hardfail mode, an MCA or HPMC will occur, then the system will crash.

When the PCI Error Handling feature is installed, the PCI slots containing I/O cards that support
PCI Error Handling will be set in softfail mode. If a PCI error occurs when a slot is in softfail
mode, the slot will be isolated from further I/O, the corresponding device driver will report the
error, and the driver will be suspended. The olrad command and the Attention Button can be used to perform an online recovery, restoring the slot, card, and driver to a usable state.

PCI Error Handling is very similar to PCI Error Recovery. The main difference is that PCI Error Recovery automatically attempts to recover from PCI errors, whereas PCI Error Handling requires user intervention to attempt recovery.
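Manual recovery with olrad might look like the following sketch. The slot ID is hypothetical; real IDs depend on the platform and can be listed with olrad -q:

```shell
# Query the status of all OLRAD-capable PCI slots
olrad -q

# Delete the suspended card in the affected slot ("0-0-1-0" is a
# hypothetical slot ID), replace the card, then add it back online
olrad -d 0-0-1-0
olrad -a 0-0-1-0
```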


PCI Error Recovery – New on HP-UX 11i v3


Detect, isolate, and automatically recover from PCI error
• Enabled by default on HP-UX 11i v3
• Avoids system crash
When error occurs on PCI bus with I/O card supporting PCI Error
Recovery, automatic recovery steps are taken
• Isolate PCI bus from the rest of the system
• Attempt recovery from error
– Keep bus and I/O card quiesced if a nested error occurs
• Re-initialize the bus
How to determine if PCI Error Recovery is supported
• Use PCI ER support matrix on http://docs.hp.com
• Use ioscan –P error_recovery – available in future fusion release
If the card supports OL*, manual recovery is still possible by
replacing the card
• Use olrad command or attention button


HP-UX 11i v3 supports PCI Error Handling and Recovery to ensure that PCI errors caused by bad
cards do not crash a hard partition on high end systems. This functionality helps in recovering
the most common parity errors. The PCI Error Handling feature was available on HP-UX 11i v2.
It allowed a bad PCI OLAR capable card to be hot-replaced using either the olrad command,
pdweb, or the attention button. The PCI Error Recovery feature is new on HP-UX 11i v3.

The PCI Error Recovery feature provides the ability to detect, isolate, and automatically recover
from a PCI error. The PCI Error Recovery feature is included with the HP-UX 11i v3 operating
system and it is enabled by default. The PCI Error Recovery feature lets you avoid a system crash
when a PCI error occurs.

With the PCI Error Recovery feature enabled, if an error occurs on a PCI bus containing an I/O
card that supports PCI Error Recovery, then the following operations are performed.

The PCI bus is quarantined to isolate the system from further I/O and prevent the error from
damaging the system. The PCI Error Recovery feature will attempt to recover from the error and
reinitialize the bus so that I/O can resume. If an error occurs during the automated error
recovery process, the bus and I/O card will remain quiesced.

The ioscan -P error_recovery command can be used to determine whether an I/O card or driver supports PCI Error Recovery. This command option is not available at the initial release of HP-UX 11i v3; however, it is expected to be available in a later fusion release. Until then, visit http://docs.hp.com to review the PCI ER support matrix to determine whether a system and a card/driver support ER.
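Once the option becomes available, the per-device check might look like the sketch below. The error_recovery property name is taken from the text above; the health property follows the same 11i v3 ioscan property syntax:

```shell
# Expected in a later fusion release: report PCI ER support per device
ioscan -P error_recovery

# Report the health state of each device using the same property syntax
ioscan -P health
```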

If the bus contains a card that supports online addition, replacement, or deletion (OL*) and the
card is in a hot-pluggable slot, the olrad command (or the attention button) can be used to
manually recover from the error by replacing the card.


PCI Error Recovery Details


Error reporting done during boot also
• To update node states
• Cards identified as “bad” during early boot are marked as UNUSABLE
Firmware exports bits to indicate Error Handling/Error Recovery support
• On PA – per platform bits
• On Integrity – per LBA bits
driver_init() must report ERROR if ERROR is detected in init phase
Drivers can prevent automatic recovery
• IB is the main user
No Error Recovery on shared slots
Buggy error notifications from drivers are ignored
Bus Reset is done at the end of sync phase
• Bus is taken out of reset in the release phase
Health state of ERROR’ed interface nodes is set to UNUSABLE
Use of tunable to prevent continuous error recovery
Infrastructure notifies System Fault Management (SFM) through EVM
• Successful Error Recovery or not


Here are some more details about PCI Error Recovery.

PCI ER is usually thought of as recovering from PCI errors that occur while the system is up and running. However, PCI ER also occurs during system boot to update node states. Cards identified as “bad” during early boot are marked as UNUSABLE. The system firmware exports bits to indicate Error Handling/Error Recovery support: on PA systems there are per-platform bits, while on Integrity-based systems there are per-LBA bits. The driver initialization routines must report an ERROR if one is detected in the initialization phase.

Drivers can prevent automatic recovery through a suspend callback. IB is the main user of this
feature.

Note that there is no Error Recovery on shared slots, and buggy error notifications from drivers are ignored.

A bus reset is done at the end of the sync phase, and the bus is taken out of reset in the release phase.

The health state of ERROR’ed interface nodes is set to UNUSABLE.

There is a tunable to prevent continuous error recovery.

Finally, the infrastructure notifies System Fault Management (SFM) through EVM on whether the
Error Recovery was successful or not.


PCI Error Recovery and ServiceGuard


Enable PCI Error Recovery with ServiceGuard only when
• Storage devices are configured with multiple paths
• HP-UX native multipathing is enabled
ServiceGuard may not detect connectivity loss if
• Single path to storage devices
• PCI Error Recovery enabled
Disabling PCI Error Recovery
• pci_eh_enable tunable
– Default value of 1 means PCI Error Recovery is enabled
– kctune pci_eh_enable=0 disables PCI Error Recovery
– Reboot required to take effect
• Subsequent error on PCI bus results in system crash
– MCA on Integrity systems or HPMC on PA systems

It is very important to note that PCI Error Recovery is enabled by default. If you use ServiceGuard, HP recommends enabling the PCI Error Recovery feature only if your storage devices are configured with multiple paths and you have not disabled HP-UX native multipathing. If PCI Error Recovery is enabled but your storage devices are configured with only a single path, ServiceGuard may not detect the loss of connectivity and therefore may not trigger a failover.
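The number of paths to each LUN can be verified before relying on PCI Error Recovery in a cluster, using the 11i v3 agile view:

```shell
# Map each LUN to all of its lunpaths; a LUN showing only one lunpath
# is exposed to the single-path scenario described above
ioscan -m lun
```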

The kernel tunable controlling the enablement of PCI Error Recovery is pci_eh_enable. Its default
value is one, which means that PCI Error Recovery is enabled. To disable PCI Error Recovery, for
example, if you have ServiceGuard and only a single path to your storage devices, use kctune
pci_eh_enable=0. This is a static tunable, so a reboot is required to turn off PCI Error Recovery
after changing the tunable’s value. This tunable is made static because it is very hard to modify
hardware capability at run time.

If the PCI Error Recovery feature is disabled and an error occurs on a PCI bus, a Machine Check
Abort (MCA) or a High Priority Machine Check (HPMC) will occur, then the system will crash.
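The tunable can be inspected and changed with kctune, as sketched below; because it is static, the change takes effect only after the reboot:

```shell
# Display the current value (1 = PCI Error Recovery enabled)
kctune pci_eh_enable

# Disable PCI Error Recovery, then reboot for it to take effect
kctune pci_eh_enable=0
shutdown -r -y 0
```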


PCI Error Recovery Tunables and Limitations


Tunables
• Tunable pci_error_tolerance_time
– Time interval between two occurrences of PCI Errors at a slot
• If an error happens within this tolerance time, card is left suspended
No automatic recovery is attempted
Manual recovery can be used to recover such a slot
– Defined as a dynamic tunable
• Can be changed using kctune command without reboot.
– Set to 1440 (24 * 60 minutes) by default
• Static machdep tunable to turn ER on or off
Limitation
• Only one PCI ER operation at any time in the system


There is a dynamic system tunable called pci_error_tolerance_time. It specifies the time interval in minutes between two occurrences of PCI errors at a slot. If an error happens within this tolerance time, the card will be left suspended and no automatic recovery is attempted. However, manual recovery can be used to recover such a slot. Because this is a dynamic tunable, it can be changed using the kctune command without a reboot. It is set to 1440 (24 * 60 minutes) by default. There is also a static machdep tunable that can be used to turn Error Recovery off and on.

Due to the internal kernel synchronization methodology used, only one PCI ER operation is
allowed at any time in the system.
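Because pci_error_tolerance_time is dynamic, it can be adjusted on the fly, for example:

```shell
# Display the current tolerance window (default 1440 minutes = 24 hours)
kctune pci_error_tolerance_time

# Shorten the window to 12 hours; no reboot is required
kctune pci_error_tolerance_time=720
```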


Recovery Module
Single System High Availability
Dynamically Loadable
Kernel Modules (DLKM)



New DLKMs on HP-UX 11i v3


HP CIFS Client kernel component is implemented as a dynamically
linked kernel module
• Supports both static binding and dynamic loading
• Installation, removal, and update of the HP CIFS Client do not require a
system re-boot
HP CIFS Server A.02.03 has a new DLKM
• CFSM, or CIFS File System Module
• Improves file locking interoperability between CIFS clients and NFS clients
– Enforces the file locks held by CIFS clients
• Feature is off by default and can be enabled on a per file system basis.
Enabling this functionality
• The kernel modules are part of an HP-UX 11i v3 only product that is part of the CIFS Server bundle
VxFS Filesystem is a DLKM
New HP Data Link Provider Interface has DLKM HA Capability
• Offers the customer the ability to dynamically load or unload LAN drivers
without a system reboot


There are several new Dynamically Loadable Kernel Modules (DLKMs) on HP-UX 11i v3.

The kernel component of the HP CIFS Client is implemented as a dynamically linked kernel
module to support both static binding and dynamic loading. With DLKM support, installation,
removal, and update of the HP CIFS Client do not require a system re-boot.

HP CIFS Server A.02.03 for HP-UX 11i v3 includes new functionality to improve file locking
interoperability between CIFS clients and NFS clients. A new DLKM known as CFSM (CIFS File
System Module) can be used to enforce the file locks held by CIFS clients. This functionality is off
by default and can be enabled on a per file system basis. Enabling this functionality prevents the
possibility of file corruption due to concurrent file accesses from both CIFS and NFS, and allows
for performance enhancing opportunistic locks to be safely used. The kernel modules are part of
an HP-UX 11i v3 only product, which is part of the CIFS Server bundle.

The VxFS file system is a DLKM.

The new HP Data Link Provider Interface, DLPI, offers DLKM HA capability, allowing the customer to dynamically load or unload LAN drivers without a system reboot.


Support for DLKM drivers at HP-UX 11i v3


Drivers and olrad
• Driver’s pre-unload script calls olrad to soft-delete all of its instances without Power Off
– Drivers do not have to track all of their instances
• olrad suspend-delete one instance at a time
– Cleaner rollback on failures
• New olrad option to add/enable attach routine
• olrad changes status of the driver attach routine before it starts the driver instance deletion
– Prevents ioscan from re-claiming instances unclaimed as part of unload
– On failure, the attach routine state is restored to normal
Unload removes driver instance numbers
• Load might not get back the previous instance numbers
Dump driver is bundled with the interface driver as one module
• Avoids dependencies between modules
Miscellaneous notes
• No class driver DLKM
• RDMA Infrastructure not DLKM’able
• No force-unload
• No automatic re-configuration on failure rollback

38 March 2007

A driver’s pre-unload script calls olrad to soft-delete all of its instances without Power Off, so drivers do not have to track all of their instances. olrad can suspend-delete one instance at a time, providing a cleaner rollback facility on failures. There is a new olrad option to add/enable the attach routine. olrad changes the status of the driver attach routine before it starts the driver instance deletion; this prevents any ioscans from re-claiming instances which have already been unclaimed as part of the unload. On failure, the attach routine state is restored to normal.

Note that an unload removes driver instance numbers, and a subsequent load might not get back the previous instance numbers.

On HP-UX 11i v3, the dump driver is bundled along with the interface driver as one module.
This avoids dependencies between modules.

Additionally, there is no class driver DLKM. The RDMA infrastructure is not DLKM’able. There is no force-unload. And, there is no automatic re-configuration on failure rollback.


DLKM Load Sequence


[Diagram: kcmodule ↔ driver module – Pre-load script (Success/Abort), _load entry point (Status), Post-load script (Status)]

kcmodule invokes driver’s preload script


If the driver script returns success
• _load entry point of driver is called
• If driver _load entry point succeeds
– driver’s post load script is called


This slide illustrates the DLKM load sequence. The kcmodule command invokes a driver’s preload script. If the script returns success, the _load entry point of the driver is called. If the driver’s _load entry point succeeds, then the driver’s post-load script is called.
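The same sequence is driven from the command line with kcmodule; for example, for the VxFS DLKM mentioned earlier (a sketch; states and module names assume an HP-UX 11i v3 host):

```shell
# Show the current state of the vxfs module
kcmodule -v vxfs

# Load the module dynamically; kcmodule runs the pre-load script,
# the module's _load entry point, and then the post-load script
kcmodule vxfs=loaded

# Mark it unused again (only when no file system depends on it)
kcmodule vxfs=unused
```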


DLKM Unload Sequence

[Sequence diagram: modprep script → olrad CLI → CRA framework → subsystem CRA → OL* infrastructure → driver]
1. The driver's pre-unload script runs olrad -d for the driver
2. olrad calls CRA(Unload, drv_name); the framework calls CRA(Unload, drv_name, component ID) on the subsystem CRA modules, which return Success
3. olrad invokes the pre-delete script for the first instance – Success
4. libolar sends the suspend/delete event to the driver – Success
5. olrad invokes the post-delete scripts for the first instance


This slide illustrates the DLKM unload sequence. The olrad command executes the Critical
Resource Analysis. Upon success, olrad invokes the pre-delete script for the first instance. The
OL* infrastructure sends the suspend or delete event to the driver for execution. Upon success,
olrad invokes the post-delete scripts for the first instance.


Recovery Module
Single System High Availability
PCI OLD and Cell OL*



PCI OL* and Cell OL*


Key components of SSHA
Covered in Flexible Capacity Section of this training
• See OL* and On Demand Solutions
Note on OL* on HP-UX 11i v3
• OL* infrastructure uses a global lock within the kernel
– Only one SSHA operation and ioscan can run at any time
• Next Release of HP-UX
– Locking expected to evolve to finer granularity


PCI OL* and Cell OL* are obviously key to the HP-UX 11i v3 Single System High Availability
solution.

For details of these components, please refer to the Flexible Capacity Section of this Delta
Training. They are covered in the OL* and On Demand Solutions portion.

On a final note about OL* on HP-UX 11i v3, the OL* infrastructure uses a global lock within the kernel. This means that only one SSHA operation or ioscan can run at any time. The next major release of HP-UX is expected to evolve the locking to finer granularity, thereby removing this limitation.


Recovery Module
Next Generation
Fault Management



Fault Management Goals Overview


Increase system availability
• Move from reactive fault detection, diagnosis and repair to proactive fault
detection, diagnosis and repair
Detect problems automatically as close as possible to when they
actually occur
Diagnose problems automatically at the time of detection
Automatically report in understandable form
• Description of the problem
• Likely cause(s) of the problem
• Recommended action(s) to resolve the problem
• Detailed information about the problem
Tools available to repair or recover from the fault


The main goal of fault management is to increase system availability by moving from reactive
fault detection, diagnosis and repair to proactive fault detection, diagnosis and repair.

This is accomplished by detecting problems automatically as close as possible to when they actually occur and diagnosing problems automatically at the time of detection. The problems need to be automatically reported in a clear and understandable form. The information must include a description of the problem, likely cause(s) of the problem, recommended action(s) to resolve the problem, and detailed information about the problem. Then, tools must be available to repair or recover from the fault.


Recovery Module
Next Generation
Fault Management
System Fault Management
(SysFaultMgmt)



System Fault Management on HP-UX 11i v3


• Provides strong fault management capabilities for Enterprise servers
– Assists in meeting stringent and time critical SLAs
• Assists HP’s field engineers and service personnel in accurately isolating faults to the lowest granularity of FRU (Field Replaceable Unit)
• Based on the WBEM (Web Based Enterprise Management) standard for
wide range of diagnostic and system fault management capabilities
– Hardware inventory information
– Event management
– Health status for computer system and individual devices
• Provides WBEM instrumentation to allow customers to manage
heterogeneous systems in the data center via a single manageability
application
• Offers browsing an online dictionary of error metadata
– Includes probable cause and recommended actions for resolving
system faults


Enterprise servers have a growing demand for strong fault management capabilities to meet the business need for stringent and time critical SLAs (Service Level Agreements). System Fault Management (SFM) offers tremendous benefit to HP’s field engineers and service personnel by assisting them in accurately isolating faults to the lowest granularity of FRU (Field Replaceable Unit), then analyzing and resolving them. Further, the current enterprise server market is moving towards consolidation, which puts emphasis on solutions based on open standards. SFM is a new solution based on the WBEM (Web Based Enterprise Management) standard for a wide range of diagnostic and system fault management capabilities such as hardware inventory information, event management, and health status for the computer system and individual devices. It also provides WBEM instrumentation to allow customers to manage heterogeneous systems in the data center via a single manageability application. This solution also offers the capability to browse and customize an online dictionary of error metadata, including probable cause and recommended actions for resolving system faults.


System Fault Management (SFM)


Integrates with HP’s multi- and single-system manageability solutions
• HP SIM is multi-system manageability solution
• HP SMH is single system manageability solution
Provides common UI across all HP supported platforms and OSs
Allows users to view the inventory and health status of hardware subsystems
Conforms to industry standards
Designed to provide advanced capabilities
• Predict a hardware fault
• Provide mechanism to isolate the faulty component
• Provide the system the ability to automatically recover from the fault
Facilitates improved health status reporting for the entire computer system
• Drill down to detect the specific faulty component using Follow the Red.
Key Components of SFM
• User Interface (UI)
• WBEM Instrumentation and Providers
• OS, Platform and Firmware Abstraction


SFM integrates, and also facilitates integration, with HP SIM (multi-system manageability
solution) and HP SMH (single system manageability solution), thereby providing a common user
interface across all HP supported platforms and operating systems. Event and error management
capabilities of System Management Homepage allow users to view the inventory and health
status of hardware subsystems such as processors and memory.

SFM is the next generation hardware management solution that conforms to industry standards.
The architecture of SFM is designed to provide advanced capabilities that can not only predict a
hardware fault, but also can provide a mechanism to isolate the faulty component and provide
the system the ability to automatically recover from the fault. It facilitates improved health status
reporting for the entire computer system, from which a user can drill down to detect the specific
faulty component using Follow the Red.

Key Components of SFM


SFM can be broadly classified as comprising three layers: the User Interface (UI), the WBEM Instrumentation and Providers, and the OS, Platform, and Firmware Abstraction.


System Fault Manager (SFM)


Supported on all systems that support HP-UX 11i v3
Collection of tools used to monitor the health of HP servers
• Memory, CPU, power supplies, and cooling devices
Operates in the WBEM environment
• WBEM indications can be logged in syslog
Features include
• Event Manager
– Common Information Model Provider (EVM-CIM)
• Error Management Technology (EMT)
Features not available on HP-UX 11i v3
• SFM Indication Provider
– Use EVWeb Event Viewer to view equivalent indications
• EVWeb Log Viewer
HP threshold indications equivalent to indications generated by High
Availability Monitors are now supported
• View HP threshold indications using the EVWEB Event Viewer


The System Fault Manager, or SFM, is a collection of tools that is used to monitor the health of
HP servers and receive information about hardware such as memory, CPU, power supplies, and
cooling devices. It operates in the WBEM environment.

System Fault Management features include Event Manager-Common Information Model Provider
and Error Management Technology. However, the SFM Indication Provider and Log Viewer are
not yet available.

SFM is supported on all systems that support HP-UX 11i v3.


System Fault Management Architecture


HP SIM is the multi-system manageability portal which an administrator can use to manage the
entire data center. HP SMH is the single system manageability portal that an administrator can
use to perform more specific management tasks for a given managed system.

SIM, running on a central management system, collaborates seamlessly with SMH, running on
each managed system, through single sign-on. Both SIM and SMH offer browser based GUI
(Graphical user interface) and CLI (Command Line Interface). Various components of SFM, such
as EVWEB, EMT, and hardware device inventory, are integrated with SIM and SMH. An
administrator can perform fault management and diagnostic operations along with various other
system management tasks.

SFM offers the following User Interface (UI) components:


EVWEB, as a part of SMH, provides a UI for event viewing and subscription administration for a
managed system. EVWEB also has a command-line interface in addition to the GUI integrated into SMH.

Inventory property pages in SIM and SMH facilitate a view of the system hardware information.

EMT, as a part of SMH, allows viewing error metadata of various hardware, firmware, and software errors that can occur on a system running HP-UX.

The filter metadata controller enables an administrator to create persistent event subscription filters.

The sfmconfig utility provides a run-time notification mechanism to trigger various memory
resident SFM components to reload any change in configuration files.


System Fault Manager – HP-UX 11i v3


SFM never shipped on HP-UX 11i v1
May also be new to many HP-UX 11i v2 customers
New SFM features since HP-UX 11i v2 June 2006
• Event Manager-Common Information Model (EVM-CIM) Provider
• Error Management Technology (EMT)
• SFMIndicationProvider is unavailable
– View equivalent indications by using the EVWEB Event Viewer
• Log Viewer is not available
• HP threshold indications equivalent to indications generated by High
Availability Monitors are now supported
• View HP threshold indications using the EVWEB Event Viewer
• WBEM indications can be logged in syslog


SFM is new for customers migrating from HP-UX 11i v1. It was not previously delivered on the
HP-UX 11i v1 September 2005 OE media. It may also be new to many HP-UX 11i v2 customers.

Several features are new on SFM since the HP-UX 11i v2 June 2006 release. The Event
Manager-Common Information Model (EVM-CIM) Provider is introduced. And, the Error
Management Technology (EMT) is introduced.

SFMIndicationProvider is not available. However, you can continue to view indications equivalent to those generated by the SFMIndicationProvider, using the EVWEB Event Viewer. The Log Viewer is also unavailable.

HP threshold indications equivalent to indications generated by High Availability Monitors are now supported. View HP threshold indications using the EVWEB Event Viewer.

WBEM indications can be logged in syslog also.


SysFaultMgmt Documentation
• Administrator’s Guide
• Frequently asked questions (FAQs)
• Release notes
• Provider data sheets
• Online help – man pages for CLIs
• Available at the diagnostics documentation Web page
– http://docs.hp.com/en/diag.html
• EVWEB documentation
– Integrated online help – man pages


For further information, see the following documents, available at http://docs.hp.com/en/diag.html:


• System Fault Management Administrator’s and User's Guide
• SFM Release Notes
• Frequently Asked Questions (FAQs)
• SFM Provider Data Sheets
• SFM Tables of Versions


Recovery Module
Next Generation
Fault Management
EVWeb



EVWeb
• Manage event subscription
• View events happening on the system
• EVWeb GUI requires installed SysMgmtWeb bundle
– Provided by HP SMH
• CLI can be accessed if the HP-UX base operating environment
is installed


With EVWeb, a user can manage event subscriptions and view events occurring on the system,
using the subscription administration and event viewer tools respectively. The EVWEB GUI is
integrated with HP SMH: the event viewer is accessed via the Logs tab and subscription
administration via the Tools tab. The GUI requires the SysMgmtWeb bundle, provided by HP
SMH; the CLI is available whenever the HP-UX base operating environment is installed.
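Since the GUI depends on the SysMgmtWeb bundle, swlist can be used to confirm that the bundle is present before troubleshooting GUI access (bundle name as given above):

```
# Confirm the SysMgmtWeb bundle (HP SMH) is installed,
# which the EVWeb GUI requires
swlist -l bundle SysMgmtWeb
```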


EVWeb GUI is integrated with HP SMH


Access event viewer via Logs tab
• Allows user to view, search and delete events from local event archive
• Wide range of intuitive and user friendly search criteria is provided
– Event age, date-time, hardware device, event ID, text
– A combination of one or more such criteria can be specified at a time
to narrow down the search results
• /opt/sfm/bin/evweb eventviewer -L -e eq:4 -a 10:dd -s desc:eventid
Access subscription administration via Tools tab
• Allows users to view, create, delete and modify new and already existing
subscriptions on the local system
– Use either GUI or command line
• Subscription is characterized by criteria and destination
– Default subscriptions that are readily available are flagged as HP Advised
subscriptions
– Destination for such subscriptions cannot be changed
• A subscription can be modified to have more destinations, like syslog and an email address
• The syslog destination is newly allowed in HP-UX 11i v3


The EVWEB GUI is integrated with HP SMH, where event viewer and subscription administration
can be accessed via Logs and Tools tab respectively.

The EVWeb event viewer allows users to view, search, and delete events from the local event archive.
A wide range of intuitive and user-friendly search criteria is provided, including event age,
date-time, hardware device, event ID, and text. A combination of one or more such criteria can
be specified at a time to narrow down the search results, as shown by the command line
example below: /opt/sfm/bin/evweb eventviewer -L -e eq:4 -a 10:dd -s desc:eventid.

The EVWeb subscription administrator allows users to view, create, delete, and modify new and
already existing subscriptions on the local system. All these operations can be performed both
by GUI and command line. A subscription is characterized by criteria and destination. The
default subscriptions that are readily available are flagged as HP Advised subscriptions. Though
the destination for such subscriptions cannot be changed, a subscription can be modified to have
more destinations, like syslog and an email address. The syslog destination is newly allowed in
HP-UX 11i v3. An administrator can create a subscription that applies to a single device, or
define the subscription criteria so that it applies to all devices. Similarly, if the matching event
should reach more than one destination, all such destinations can be defined in one subscription
creation request. To learn more about how to create or modify subscriptions, refer to the section
on Feature Highlights of SFM.
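The command line equivalents of these operations use the evweb subcommands. The first command below repeats the event viewer example from the text; the second lists existing subscriptions (the -L option to evweb subscribe is an assumption here; verify the exact flags against the evweb man page on your system):

```
# List severity-4 events no older than 10 days, sorted by
# event ID in descending order (example from the text)
/opt/sfm/bin/evweb eventviewer -L -e eq:4 -a 10:dd -s desc:eventid

# List the subscriptions defined on the local system
# (flag assumed; see the evweb man page)
/opt/sfm/bin/evweb subscribe -L
```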


EVWeb on HP-UX 11i v3


EVWeb event viewer may be used to view
• WBEM indications sent by Event Monitoring Service (EMS)
• Indications equivalent to the SFMIndicationProvider
• HP threshold indications generated by HA Monitors
Syslog destination for subscriptions is newly allowed


Several related tools have been enhanced to send WBEM indications that may be viewed from
the EVWeb tool.

SFM converts EMS events into WBEM indications, which can be viewed from the EVWeb tool.

As of HP-UX 11i v3, the System Fault Management SFMIndicationProvider is not available.
However, a user can continue to view indications equivalent to those generated by the
SFMIndicationProvider, using the EVWEB Event Viewer.

HP threshold indications equivalent to indications generated by High Availability Monitors are
now supported. A user can view HP threshold indications using the EVWEB Event Viewer.


Recovery Module
Next Generation
Fault Management
Error Management
Technology (EMT)


Hardware and software error troubleshooting


What is required to troubleshoot errors?
• Easy access to detailed error information
– Problem cause
– Action to resolve
Conveniently searchable for desired error information
Desired error information locally available
Ability to update error information when it changes
HP-UX answer
• Error Management Technology (EMT)
• Objective
– Provide Customers and Support Engineers with an online searchable
repository of problem/cause/action information for all system errors
• Part of HP’s overall manageability strategy
• Expected result
– Improved TCE (Total Customer Experience)

Error Management Technology is a new feature of HP-UX 11i v3 that is intended to make
hardware and software error troubleshooting easier.

What is the first item that is required to perform error troubleshooting? The answer is easy access
to detailed error information. Ideally, it will include a problem cause and an action to resolve the
error.

Known errors and resolutions are kept in knowledge bases which should be conveniently
searchable for the desired error information. It is also best that this desired error information is
available locally, especially if the network or Internet access is down! Also, there should be the
ability to update error information when it changes.

The objective of EMT is to provide customers and support engineers with an online searchable
repository of problem/cause/action information for all system errors. It is an important part of
HP’s overall manageability strategy. The result is expected to be improved TCE (Total Customer
Experience).


Accessing HP-UX Error Messages


HP-UX 11i v2: accessing HP-UX error message information
• Go to HP ITRC website and search on error message
• Examine each document returned
• Search may return: HPUXERR01
– This is the outdated HP-UX Error Message Catalog Manual that does not
contain information on HP-UX Releases after January 1991
– Document is limited in scope

HP-UX 11i v3: accessing HP-UX error message information
• Online HP-UX error utility on each Customer system
• Consolidated error information, HP-UX 11i v3 release and onwards
• Cause and action text provided for errors


On HP-UX 11i v2, a user would typically access HP-UX error message information by going to
the HP ITRC website and performing a search on the error message. Then, he or she would have
to examine each document returned by the search.

Disappointingly, the search may return: HPUXERR01. This is the outdated HP-UX Error Message
Catalog Manual that does not contain information on HP-UX Releases after January 1991. The
document is limited in scope and is consistently in the list of top hit documents.

With EMT, on HP-UX 11i v3, a user can access HP-UX error message information online locally
using the EMT utility. It provides consolidated error information on HP-UX 11i v3 release and
onwards. It also provides the cause and action for known errors.


EMT Overview
• User friendly interface for keyword based search on error
messages
– Browser-based Graphical user interface
– Command line interface
• Local database with error information installed on customer
systems
• Supports various types of events
– NETTL, EMS, WBEM, etc.
• Delivered as a part of SysFaultMgmt bundle
• Security Model to avoid malicious updates


EMT provides a user friendly interface for keyword based search on error messages. It has a
browser-based Graphical user interface as well as a command line interface.

EMT uses a local database with error information installed on customer systems. EMT supports
various types of events, such as NETTL, EMS, and WBEM.

EMT is delivered as a part of System Fault Management (SysFaultMgmt) bundle.

There is a security model in place to avoid malicious updates.


EMT Features
• Integrated with hardware monitoring infrastructure of SFM
• Part of overall unified manageability
• Customizable corrective actions for errors
• Tools for third party vendors (IHV’s & ISV’s) to integrate error
metadata
• Upgradeable through media and web release
– IHV, ISV, and customer data unaffected
• Localizable error metadata
• C++ API to query error metadata


There are several salient features of EMT. It is integrated with the hardware monitoring
infrastructure of System Fault Management (SFM). It is part of the overall HP-UX unified
manageability solution.

EMT has customizable corrective actions for errors. It provides tools for third party vendors, the
independent hardware and software vendors (IHVs and ISVs), to integrate error metadata.

EMT is upgradeable through media and web release. Doing so does not affect IHV, ISV, and
customer data. The error metadata is localizable. Finally, there is a C++ API to query error
metadata.


Integration with System Fault Mgmt


[Architecture diagram: EMS clients, the Event Viewer, and the EMT UI run under the System
Management Homepage and communicate through HP WBEM Services and its CIM repository
with System Fault Management. Within the SFMProviderModule, the EMS Wrapper Provider, the
Error Metadata Provider, and other providers serve the requests; the Error Metadata Provider
reads the CER (Common Error Repository). EMS monitors generate events from server resources.]


This slide illustrates the architecture and integration of EMT with SFM.

Note that CER is an acronym for Common Error Repository, and CIM stands for Common
Information Model. CIM is a part of WBEM Services for HP-UX, the HP implementation of the
DMTF standard.


Positioning within Unified Infrastructure Manageability


This slide shows a screenshot of the System Management Homepage. There is an area to access
Error Management Technology.


Customizable Error Resolution


By default, HP recommends steps to resolve the error
• An administrator can customize resolution text by adding
more site / customer specific details
• Retained only within customer environment
Value
• Ability to share troubleshooting knowledge across the system
administration team
• Improves troubleshooting efficiency by means of leveraging
the context specific knowledge


EMT provides for customizable error resolution. By default, HP recommends steps to resolve the
error. However, an administrator can customize the resolution text by adding more site and
customer specific details. This custom information is only retained within the customer
environment.

This provides the ability to share troubleshooting knowledge across the system administration
team. It also improves troubleshooting efficiency by means of leveraging the context specific
knowledge.


EMT GUI integrated with SMH

Simple Search

Advanced
Search

Custom solution
administration

Now, we are in the EMT area of SMH. The upper left circle is highlighting where the user can
go to perform a simple search. The user can look for a specific error number or error text and
also specify the type of match, e.g. exact phrase.

To the right of this is a button to click on to perform a more advanced search.

At the bottom is where there is the detailed error information. The administrator can add, delete,
and modify the solution to customize it for this system.


Error Metadata WBEM Provider


WBEM CIM Data Model
• WBEM interface to get detailed error
information about WBEM events
• Fetches data from CER and provides
an instance of the
HP_ErrorMetadataForAlertIndication
class


There is a WBEM interface to obtain detailed error information about WBEM events. EMT fetches
data from the CER and provides an instance of the HP_ErrorMetadataForAlertIndication class.

Examples of these are shown on the right.
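Because HP WBEM Services is based on OpenPegasus, instances of this class can also be enumerated from the command line with the Pegasus cimcli client. The namespace below is an assumption for this sketch; adjust it for your installation:

```
# Enumerate instances of the error metadata class via WBEM
# (namespace root/cimv2 is an assumption)
cimcli ei HP_ErrorMetadataForAlertIndication -n root/cimv2
```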


Add Custom Solution

Command Preview

This screen shot is of the area where an administrator can add a custom solution. It also has a
command preview pane. This is similar to kcweb and pdweb, with which the student may be
familiar.


Integration with other vendors’ error information


• Other vendors can register with HP
• DDK contains toolkit for preparing error metadata according
to the specified XML schema
• CER Update Tool can update other vendor’s error information
in CER at customer’s system
• Other vendor data is not affected by updates from HP
• Secure algorithm (transfer of data in encrypted format)
• Value
– Single stop solution for help on troubleshooting software and
hardware errors from all vendors on customer system


EMT integrates with, and provides tools for integration with, other vendors’ error information.
Other vendors can register with HP. The DDK contains a toolkit for preparing error metadata
according to the specified XML schema. The CER Update Tool can update other vendors’ error
information in the CER on the customer’s system. Other vendor data is not affected by updates from
HP. Updates are performed using a secure algorithm that transfers data in encrypted format. This
results in a single-stop solution for help on troubleshooting software and hardware errors from all
vendors on the customer’s system.


Other Vendor registration website


This is a screen shot of the other vendor registration website.


Data upgrades from HP


• Error metadata in CER will be enhanced over upcoming
releases
– Through media release and web release
• Update does not affect existing data
– Custom solution or other vendor data
• Fault management solution (SFM) automatically takes the
latest installed error information


Error metadata in the CER will be enhanced over upcoming releases. The updates are
delivered through media release and web release. An update does not affect existing data,
whether custom solutions or other vendor data. The fault management solution (SFM) automatically
uses the latest installed error information.
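Since EMT ships in the SysFaultMgmt bundle, a media or web update of the error metadata would typically be applied with the standard SD-UX tools; the depot path below is only a placeholder:

```
# Check the installed SysFaultMgmt bundle revision
swlist -l bundle SysFaultMgmt

# Apply an updated bundle from a downloaded depot (placeholder path)
swinstall -s /var/tmp/sfm_depot SysFaultMgmt
```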


Localizable Error Metadata


• Error metadata can be localized in the locales supported on
HP-UX
– HP-UX 11i v3 contains data in English only
• Event monitoring framework can send an event in desired
locale based on monitoring request
• Event viewer (evweb tool) can allow to view events in
different locales
• Value
– Better supportability


EMT supports localizable error metadata. Error metadata can be localized in the locales
supported on HP-UX; however, HP-UX 11i v3 contains data in English only.

The event monitoring framework can send an event in the desired locale based on the monitoring
request. Then the event viewer, the EVWeb tool, allows users to view events in different
locales.

This provides better supportability.


C++ API
• Programmatic access to CER database
• Query CER and get the detailed error information
• EMT UI uses this API internally


There is a C++ API that gives programmatic access to the CER database. Programs can be
written to query the CER and get the detailed error information. The EMT UI uses this API internally.


Customer Benefits of EMT


More efficient troubleshooting
• No need to log on to ITRC
• HP-UX error message information resides locally on
customer’s systems
• Up-to-date error messages
• Cause and action information available for error messages
Improved Customer self-sufficiency
• Reduces need to call HP Support for information on error
messages
Context sensitive knowledge sharing
• Leverage across support staff
• Reduces time to resolve error
Error messages localized to customer’s language

There are many customer and HP support benefits to the EMT solution. The troubleshooting
process is more efficient, in part because there is no need to log on to the ITRC: the HP-UX
error message information resides locally on the customer's systems. Up-to-date error messages
are readily available, along with cause and action information for each error message.

This leads to improved customer self-sufficiency, which reduces the need to call HP Support for
information on error messages.

EMT allows for context sensitive knowledge sharing, which is leveraged across support staff and
reduces the time to resolve errors.

Error messages can be localized to the customer’s language.


Error Management Technology Summary


Is new on HP-UX 11i v3
Provides a quick, easy method of accessing error/cause/action information on all system
errors
Has GUI integrated with SMH and CLI interfaces
• The Tools -> Error Management Technology -> Query or Customize Error Data leads to the
EMT UI
Allows users to search and view metadata for all possible errors that can occur on an HP-UX
system
Allows administrators to customize actions associated with events
• SFM installation provides HP recommended actions for the errors
• Administrator can add more custom actions that are persistent across system reboot
Maintains all error metadata in a database called Common Error Repository
• CER can be updated without impacting the custom actions
Allows Independent Hardware and Software Vendors to integrate error metadata of their
products within CER
• Allows users to view and customize metadata for products from various vendors using the EMT
UI
• HP provides DDK (Device Development Kit) to IHVs to enable them to integrate with EMT on
customer’s system
LVM also supports EMT


Error Management Technology (EMT) is new on HP-UX 11i v3. Error Management Technology
provides a quick, easy method of accessing error/cause/action information on all system errors.
It is an online searchable repository for accessing HP-UX error messages.

This unique feature allows a user to search, view, and customize metadata for all possible errors
that can occur on an HP-UX system. It offers a GUI integrated with SMH as well as CLI interfaces.
The Tools -> Error Management Technology -> Query or Customize Error Data path leads to the
EMT UI. While all users can search and view error metadata, only administrators are allowed to
customize actions associated with events. The SFM installation provides HP recommended actions
for the errors. However, an administrator can add more custom actions, which are persistent
across system reboots. All error metadata is maintained in a database called the CER (Common
Error Repository). The CER can be updated without impacting the custom actions.

EMT also allows Independent Hardware and Software Vendors (IHVs and ISVs) to integrate error
metadata of their products (software applications, device drivers, etc.) within the CER. This
innovative feature allows users to view and customize metadata for products from various
vendors using the EMT UI. HP provides a DDK (Device Development Kit) to IHVs to enable them
to integrate with EMT on the customer's system.

LVM also supports Error Management Technology (EMT).


For Additional Information on EMT


• HP System Management Homepage (SMH)
– http://docs.hp.com/en/netsys.html#System%20Administration
• SysFaultMgmt (System Fault Management)
– www.hp.com/go/sysfaultmgmt
• WBEM (Web Based Enterprise Management)
– http://www.dmtf.org/


For additional information on Error Management Technology, visit the following web pages:

• HP System Management Homepage (SMH) at
http://docs.hp.com/en/netsys.html#System%20Administration
• SysFaultMgmt (System Fault Management) at http://www.hp.com/go/sysfaultmgmt
• WBEM (Web Based Enterprise Management) at http://www.dmtf.org/


Recovery Module
Diagnostics


Diagnostics include the Support Tools Manager and the EMS Hardware Monitors. This is a short
summary of significantly changed tools in the Online Diagnostics product. You can find
information on these new tools, as well as the old tools, at
http://wojo.rose.hp.com/hpux/diag/index.html and ERS documents at
http://callahan.rose.hp.com/FMT/Products/repository/index.html


Online Diagnostics Changes at HP-UX 11i v3 (1 of 2)


CPE Monitor
• Supports sx2000 and zx2000 platforms
• Supports PCI Error Recovery
Memory Monitor
• Memory OLAD causes memlog and error history information
to be discarded
Memory Expert Tool on PA systems
• Test for write/read/compare only on page ranges in vPAR
The diagmond daemon
• Automatically rebuilds hardware inventory upon OLAD of
cell, CPU, or memory


This is a short summary of significantly changed tools in the Online Diagnostics product. There
are no new tools.

The CPE monitor processes CPEs on uni-cellular as well as cellular systems. Recently, the monitor
has been enhanced to support sx2000 and zx2000 platforms. Traditionally, the monitor had
support for PCI-Error handling. In HP-UX 11i v3, the monitor is enhanced to support PCI-Auto
Error recovery feature as well.

If there is a change in the memory configuration due to Online Addition/Deletion of memory,
then the memory_ia64 monitor discards memlog and error history information on Itanium-based
systems. Similarly, on PA systems, the memory error logging daemon, memlogd, also discards
memlog and error history information whenever there is a change in the memory configuration
due to Online Addition/Deletion of memory.

The memory expert tool on PA systems has a feature called “Memory Test”. This feature enables
the user to perform a write/read/compare operation on a page. Now, with HP-UX 11i v3, the
user can perform write/read/compare tests only on page ranges belonging to that user's own vPar.

The diagmond daemon has been enhanced in HP-UX 11i v3 to rebuild the hardware inventory
automatically whenever there is a dynamic change in the system hardware as a result of online
addition/deletion of cell, CPU or memory.


Online Diagnostics Changes at HP-UX 11i v3 (2 of 2)


Supports Interface Expansion Program
• Large username, groupname, PIDs, and nproc
Supports additional features of HP-UX Virtual Partitions (vPars)
• Notification of events due to dynamic CPU migration
Supports the following new features
• Reporting extended hardware path of devices
• Reporting recovered Machine Check Aborts (MCA)
Supports more systems
• rx7640, rx8640, SD16B, SD32B, SD64B, rx3600, and rx6600 based
machines
Supports Automatic Error Recovery feature
Supports PCI Online Addition and Deletion operations
Supports new APOLLO host bridge adapter for legacy PCI
functionality

Online Diagnostics has been enhanced to include the Interface Expansion Program (IEP) for
large username, groupname, PIDs, and nproc. It is also now supporting additional features of
HP-UX Virtual Partitions (vPars), such as support for notification of events due to dynamic CPU
migration.

Online Diagnostics understands and reports extended hardware path of devices and recovered
Machine Check Aborts (MCA). Additionally, it supports the Automatic Error Recovery, e.g. PCI-
Error Recovery, feature.

Online diagnostics has support for rx7640, rx8640, SD16B, SD32B, SD64B, rx3600, and
rx6600 based machines. It supports the PCI Online Addition and Deletion (PCI OL*) operations
and the new APOLLO host bridge adapter for legacy PCI functionality.

Finally, the kernel modules do not include compiler warnings.


EMS Hardware Monitors


EMS Hardware Monitors are part of Online Diagnostics
• Monitor wide variety of hardware products
• Alert user of failure or any unusual activities
EMS Hardware Monitors’ tasks
• Decode low-level info and translate to user friendly text
• Perform trend analysis
• Generate events for real problems
• Perform corrective actions
Documentation
• http://www.docs.hp.com/en/diag.html
• EMS Hardware Monitor specific release notes
– http://www.docs.hp.com/en/diag/ems/ems_rel.htm


The EMS Hardware Monitors are part of the Online Diagnostics product. EMS Hardware
Monitors enable the system administrator to monitor the operation of a wide variety of hardware
products and be alerted of failure or any unusual activities.

Tasks performed by Hardware Monitors include decoding low-level information and translating it
to user-friendly problem/cause/action text. Hardware Monitors perform trend analysis where
possible. They only generate events for real problems, for example, for a high rate of
correctable errors rather than for each one. Hardware Monitors can perform corrective actions
such as de-allocating memory pages, de-allocating processors, and shutting down the system
when a power failure causes a switch to UPS. Finally, they can provide flood control by
providing configurable reminder intervals and suppressing event storms.

For further information and EMS documentation, see the following documents, available at
http://www.docs.hp.com/en/diag.html:
• Data Sheets
• EMS Hardware Monitors Quick Reference Guide
• EMS Hardware Monitors User’s Guide
• EMS HW Monitors for Hitachi Systems Running HP-UX
• Event Descriptions
• Frequently Asked Questions (FAQs)
• Multiple-View (Predictive-Enabled) Monitors
• Overview
• Quick Start: Anatomy of a Monitor (Controlling and Learning About Monitors)
• Requirements and Supported Products
• Release Notes at http://www.docs.hp.com/en/diag/ems/ems_rel.htm


EMS Hardware Monitors, OLAD, & diagmond


Cell OLAD in progress
• Map or amap displays “Cell Online Addition/Deletion in
progress” message
Memory OLAD in progress
• Map or amap displays “Memory Online Addition/Deletion in
progress” message
CPU OLAD in progress
• Map or amap displays “CPU Online Addition/Deletion in
progress” message
Messages displayed in cstm, mstm, and xstm
Enabling/disabling legacy mass storage stack causes a
remap

In HP-UX 11i v3, new messages are logged in the System Activity Log.

Upon Online Addition/Deletion of a Cell, the following message will be logged in System
Activity Log, “Online addition/deletion of a Cell has been initiated.” As this will result in
dynamic change in the hardware, diagmond will rebuild the device map. diagmond will kill all
its daemons, active tools and bring down the EMS monitors. It will respawn the daemons and
the EMS monitors after the completion of online addition/deletion of Cell.

Upon Online Addition/Deletion of a CPU, the following message will be logged in System
Activity Log, “Online addition/deletion of CPU has been initiated.” As this will result in dynamic
change in the hardware, diagmond will rebuild the device map. diagmond will bring down the
CPU monitor and kill all the active tools. It will respawn CPU monitor after the completion of
online addition/deletion of CPU.

Upon Online Addition/Deletion of memory, the following message will be logged in System
Activity Log, “Online addition/deletion of Memory has been initiated.” As this will result in
dynamic change in the hardware, diagmond will rebuild the device map. diagmond will bring
down the memory monitor, the memlogd daemon and kill all the active tools. It will respawn
memory monitor after the completion of online addition/deletion of memory.


Support Tools Manager (STM) Overview


Provides online support tools
• Verify system information
• Troubleshoot system hardware
• Examine system logs
• Update firmware
Three User Interfaces
• X-11 based graphics tool
– xstm
• Menu-based interface
– mstm
• Command line interface
– cstm

The Support Tools Manager, or STM, is also a part of the Online Diagnostics product. STM
provides a set of online support tools, which enable a system administrator to verify and
troubleshoot system hardware, and to examine system logs.

STM provides three interfaces that allow a user access to an underlying toolset, consisting of
information modules, firmware update tools, verifiers, diagnostics, exercisers, expert tools, and
utilities. The three interfaces are an X-11 based graphics interface, a menu based interface, and
a command line interface (CSTM).
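A minimal session with the command line interface looks like the following (cstm is normally found in /usr/sbin; commands are entered at the cstm> prompt):

```
# Start the STM command line interface
/usr/sbin/cstm

# At the prompt, display the system map, then exit
cstm> map
cstm> exit
```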


New in STM on HP-UX 11i v3


New “page” option paginates map output
• cstm> map page
Can display agile view of devices
• CSTM: New “amap” command
– “page” option works on amap
• XSTM/MSTM: "View Agile Map" option in the "System" menu
• Hardware paths up to 556 characters long
– Default display length is 20 characters
– Hardware paths are now truncated
• Hardware path 0/4/1/0.2.19.255.14.1.0 appears as 0/4/1/***.255.14.1.0
• Use “Current Device Status” command to view complete path
LAN Exerciser/Verifiers support AutoPort Aggregation
• APA provides High Availability of the Network Interconnects
– Multiple Network Interfaces are configured as an aggregate
• IP address is no longer associated with a network interface
• IP address is associated with the aggregate only
• LAN exerciser and verifier can now be run over this aggregate.


A new option called page is introduced for the map command. When the user types map at the
command prompt, the system map output is dumped on the screen. The system map output may
run into pages and may not be user friendly. The page option displays a paginated output of
the system map. Type the following command at the cstm prompt to get a paginated output of
the system map: cstm> map page. The functionality of the map command itself is not altered;
when you type map at the cstm prompt, you will still see the system map output, which is not
paginated.

STM also supports the new agile view of devices. Within the command line tool, cstm, a new
command called “amap” has been introduced. The amap command displays the devices mapped
by diagmond in the agile view. The existing “map” command displays the devices mapped by
diagmond in the legacy view. Note that the page option works for amap just as it does for map.
With the XSTM and MSTM interfaces, select the "View Agile Map" option in the "System" menu.

The hardware paths in HP-UX 11i v3 can be as long as 556 characters. The space allotted for
displaying the hardware path by default is 20 characters, so hardware paths are now truncated.
For example, consider a disk which has the hardware path 0/4/1/0.2.19.255.14.1.0. It will
now appear as 0/4/1/***.255.14.1.0. To view the complete path, the “Current Device Status”
command needs to be executed.

LAN Exercisers/Verifiers support AutoPort Aggregation. Auto Port Aggregation (APA) is a
technique for providing High Availability of the network interconnects. In APA, multiple network
interfaces are configured as an aggregate. The IP address is no longer associated with a
network interface but is associated with the aggregate only. The LAN exerciser and verifier can
now be run over this aggregate.
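Combining the new options described above, an HP-UX 11i v3 cstm session might use:

```
# Paginated system map in the legacy view
cstm> map page

# Paginated map in the new agile view
cstm> amap page
```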


STM’s New Messages


Cell OLAD in progress
• Map or amap displays “Cell Online Addition/Deletion in
progress” message
Memory OLAD in progress
• Map or amap displays “Memory Online Addition/Deletion in
progress” message
CPU OLAD in progress
• Map or amap displays “CPU Online Addition/Deletion in
progress” message
Messages displayed in cstm, mstm, and xstm
Enabling/disabling legacy mass storage stack causes a
remap

The user interface notifies the user whenever Cell Online Addition/Deletion takes place. Whenever an Online Addition/Deletion of a cell is initiated and the user enters the map or amap command in cstm, the “* * * Cell Online Addition/Deletion in progress :” message is displayed. The same message is automatically displayed instead of a map in both mstm and xstm, too.

Similarly, whenever Online Addition/Deletion of memory occurs, the “* * * Memory Online Addition/Deletion in progress :” message is displayed when the user enters the map or amap command in cstm. The same message is automatically displayed instead of a map in both mstm and xstm, too.

Likewise, whenever Online Addition/Deletion of CPU occurs, the “* * * CPU Online Addition/Deletion in progress :” message is displayed if the user enters the map or amap command in cstm. It is also automatically displayed instead of a map in both mstm and xstm.

Whenever the legacy mass storage stack has been enabled/disabled a remap will occur.


STM Documentation and Tape Obsolescence


Documentation
• Man page
• STM Online Help
• STM Overview
• STM Tutorial
• Frequently Asked Questions
• Quick Reference
• Release Notes
• http://www.docs.hp.com/en/diag.html
Online Diagnostics Tape Support Obsolete
• Although some STM tools may function with tape drivers, they are NOT
supported
Diagnostic tools that do support tapes
• HP StorageWorks Library and Tape Tools (L and TT)
– Visit http://www.hp.com/support/tapetools


There is a large amount of STM documentation available. Of course, there is the stm(1M) man
page and the Online help facility that is available while you are using STM. Additionally, there
is an STM Overview, Tutorial, Quick Reference, and FAQs. Updates to STM are always noted in
the HP-UX Release Notes, as well. Visit http://www.docs.hp.com/en/diag.html.

Obsolescence
As of May 2005 (HP-UX 11i v2) and September 2005 (HP-UX 11i v1), tape drives are no longer supported by Online Diagnostics, and on HP-UX 11i v3 no tape drives are supported by Online Diagnostics. Although some of the Support Tools Manager (STM) tools may function with tape drives, they are not supported. The diagnostic tools and utilities that do support these devices are the HP StorageWorks Library and Tape Tools (L and TT), available at
http://www.hp.com/support/tapetools.


Diagnostics and Pellston Technology in Montecito


Allows automatic disabling and replacement of L3 cache
lines having hard errors
• OS is unaware
• Diagnostics can query status and will be notified of errors
beyond a safe threshold
Benefits
• Easy, fast recovery of cache errors
• No OS involvement
• Applications unaffected
• Diagnostics
– Responds to thresholds and queries
• Handles exceeded threshold notification


In Montecito systems, the Pellston technology allows automatic disabling and replacement of L3
cache lines having hard errors. The operating system can be unaware of this. However,
diagnostics can query status and will be notified of errors that are beyond a safe threshold.

There are many benefits to Pellston. It provides easy, fast recovery from cache errors with no OS involvement, and applications are unaffected. Diagnostics responds to thresholds and queries, and handles notification when the safe threshold is exceeded.


Recovery Module
System Dump Technology
Livedump
Concurrent dump


The Crash Dump facility on HP-UX has been enhanced in HP-UX 11i v3 to provide greater
performance and scalability. There are two new crash dump features: Livedump and Concurrent
dump.


HP-UX 11i v3 Crash Dump Enhancements


Overview
What is a crash dump?
• Copy of system memory written onto disk in the event of a catastrophic
system failure
• Critical for system problem analysis and resolution
Dump performance is important
• Dumping is a part of system downtime
– Larger memory takes longer to save to disk
• Affects overall single-system availability.
HP-UX 11i v3 crash dump facility enhanced
• Livedump and Concurrent dump
• Greater performance, scalability, and availability
Dump format is unchanged
• Debuggers and utilities do not have any associated changes
• Backward compatibility is maintained

There are two major crash dump enhancements to the system crash dump facilities in HP-UX 11i
v3 for Itanium-based systems.

First, let’s review what a crash dump is. A crash dump is a copy of system memory written, or “dumped”, to disk in the event of a catastrophic system failure. This picture of memory is critical for system problem analysis and resolution.

Dump performance is important, particularly on large memory systems, because dump is a part
of system downtime and thus affects overall single-system availability.

The Crash Dump facility on HP-UX has been enhanced in HP-UX 11i v3 to provide greater
performance and scalability. There are two new crash dump features: Livedump and Concurrent
dump.

The dump format is unchanged in 11i v3. Hence debuggers and utilities do not require any associated changes, and backward compatibility is maintained.


Recovery Module
System Dump Technology
Livedump


Livedump provides the ability to take a crashdump on a live system without a forced shutdown or
panic of that system. It is implemented for Itanium-based platforms only.


Availability
Livedump on HP-UX 11i v3
Performs a crashdump on a live system without a forced
shutdown or panic of that system
• Itanium-based platforms only
Use livedump to obtain a memory dump of the system
• System stays up and running, remaining stable
• Allows for subsequent offline analysis of system
Performance impact
• Saves the memory onto a file system
– Causes extra system load during this save
Documentation
• livedump(1M) man page


Livedump is a brand new feature in HP-UX 11i v3 that provides the ability to take a crashdump on a live system without a forced shutdown or panic of that system. Livedump is implemented on Itanium-based platforms only.

Livedump is a feature for collecting a dump of a live running system. A system administrator can
use this feature to provide a dump of the system for offline analysis of the system kernel state. It
can be used in a stable system and doesn't affect the stability of that system.

Livedump saves the memory onto the file system in a format understood by kernel debuggers.
Hence, the system will experience extra load during the live dump time.

For further information, see the livedump(1M) manpage.


Livedump
Is saved in file system and uses VFS layer functions for I/O
Uses a new command line user utility for configuring and managing
live dump
• Uses pseudo driver infrastructure instead of system call for interaction with
user space utility
Uses existing page classification mechanism
• Has its own page class selection independent of crash dump
Saves the dump in CRASHDIR format
Shares the default target directory with savecrash target directory
• /var/adm/crash
Supports maximum image size option similar to savecrash
Runs in the context of a kernel daemon thread


The live dump is saved in a file system and uses the VFS layer functions for I/O. It uses a pseudo driver infrastructure instead of a system call for interaction with the user space utility, and a new command line utility for configuring and managing live dump.

Livedump uses the existing page classification mechanism. It has its own page class selection
independent of crash dump.

Live dump saves the dump in CRASHDIR format. It shares the default target directory with
savecrash target directory. This directory is /var/adm/crash. Livedump supports the maximum
image size option similar to savecrash.

Finally, live dump operations are executed in the context of a kernel daemon thread.


Livedump Architecture Diagram


The diagram illustrates livedump flow during normal system operation. Live dump is activated from the init script. Once activated, it can be invoked from user space or kernel space. A user space utility manages the live dump infrastructure; it is used for starting and stopping a live dump, activating it from the init scripts, and setting or querying configuration. A pseudo driver interface is present for interaction with the user space utility. The in-kernel interface is used to initiate a live dump from within the kernel by various kernel subsystems. The live dump work is actually done inside a kernel daemon thread; the in-kernel interface and the pseudo driver interface interact with the kernel daemon infrastructure.


Livedump Details (1 of 2)
Pseudo character device driver
• Interaction mechanism between the user utility and live dump kernel
functionality
Boot time initialization
• Create a kernel daemon thread and allocate memory for an I/O buffer
– Accomplished by issuing an ioctl from the user space utility
Kernel daemon infrastructure module
• Used for creating, managing and terminating a kernel daemon thread
– Actual live dump work is performed inside this thread
User entry point and kernel entry point
• User entry point is the execution path chosen during a user invocation
– Handles supplying user specified configuration and specifying
invocation as “user requested” dump to the core algorithm
• Kernel entry point is the execution path chosen during a kernel invocation
of live dump
– Supplies default configuration to live dump core algorithm and
specifies invocation as “kernel requested” to the core algorithm


Live dump uses a pseudo character device driver as an interaction mechanism between the user
utility and live dump kernel functionality.

Live dump boot time initialization involves creating a kernel daemon thread and allocating memory for an I/O buffer. The boot time initialization is done by issuing an ioctl from the user space utility.

The kernel daemon infrastructure module handles creating, managing, and terminating a kernel daemon thread. The actual live dump work is performed inside this thread.

The live dump user entry point is the execution path chosen during a user invocation. This entry
point handles supplying user specified configuration and specifying the invocation as a “user
requested” dump to the core algorithm. The latter information is required because the core
algorithm module handles certain tasks like status and/or error reporting, blocking behavior, et
cetera, differently for user and kernel invocations.

The kernel entry point is the execution path chosen during a kernel invocation of live dump. It is similar to the user entry point, but it supplies the default configuration to the live dump core algorithm and specifies the invocation as “kernel requested” to the core algorithm.


Livedump Details (2 of 2)
Core algorithm
• Contains core functionality to read memory pages and write them to the
file system in CRASHDIR format
• Synchronizes with savecrash for sharing the same target directory
Savecrash design change
• In order to share the same target directory for saving normal crash dumps
as well as live dump created dumps, the savecrash and the livedump
functionality need to synchronize their access to the “bounds” file
User Command
• Query current configuration, change configuration, start/shutdown live
dump functionality in the kernel
• Initiate a live dump from user space
– User interacts with the kernel using the pseudo driver interface.


The core algorithm handles CRASHDIR formatting and bounds file access. It contains the core
functionality to read memory pages and write them to the file system in CRASHDIR format. The
synchronization with savecrash for sharing the same target directory is also handled by this
module.

In order to share the same target directory for saving normal crash dumps as well as live dump
created dumps, the savecrash and the livedump functionality need to synchronize their access to
the “bounds” file. This module handles the synchronization that savecrash needs.
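The shared “bounds” file holds the index used to name the next crash.N directory under /var/adm/crash. As a minimal sketch of that coordination (the single-integer file format and flock-style locking shown here are assumptions for illustration, not the actual savecrash/livedump implementation):

```python
import fcntl
import os

def next_crash_dir(crashdir="/var/adm/crash"):
    """Allocate the next crash.<N> dump directory by atomically
    reading and incrementing the shared 'bounds' file, the point of
    coordination between savecrash and livedump when they share one
    target directory.  (File format and locking are assumptions.)"""
    bounds = os.path.join(crashdir, "bounds")
    with open(bounds, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # serialize concurrent writers
        f.seek(0)
        text = f.read().strip()
        n = int(text) if text else 0
        f.seek(0)
        f.truncate()
        f.write("%d\n" % (n + 1))       # the next caller gets n + 1
        fcntl.flock(f, fcntl.LOCK_UN)
    return os.path.join(crashdir, "crash.%d" % n)
```

With this scheme, two writers can never claim the same crash.N directory, which is exactly the property the shared target directory requires.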

The user command module is used for querying current configuration, changing configuration,
and starting and shutting down the live dump functionality in the kernel. It is also the mechanism
to initiate a live dump from user space. The user interacts with the kernel using the pseudo driver
interface.


Recovery Module
System Dump Technology
Concurrent dump


On HP-UX 11i v3, you can configure your machine to perform a distributed parallel dump,
thereby improving the dump throughput and reducing dump time. I/O parallelism is used to
achieve concurrent dump technology.


Dump time reduction on HP-UX 11i Releases


In HP-UX 11i v1 and HP-UX 11i v2
• Two features to reduce system dump times
– Selection
• Reduces the size of the memory to be dumped
– Compression
• Reduces the size of the data that needs to be written to disk
In HP-UX 11iv3
• New third mechanism added
– I/O parallelism
• Increases the rate at which the data can be written to disk


In the HP-UX 11i v1 and HP-UX 11i v2 releases, two methods are available to the user to reduce system dump times: selection and compression. Selection allows the user to choose which classes of system memory are dumped, reducing the amount of memory that needs to be written to disk and thereby improving performance. Compression reduces the size of the data to be written to disk: multiple CPUs (for example, four) read memory and compress the data, then send it to a fifth CPU, which writes the data to disk. Both of these were covered in detail in the HP-UX 11i v2 and HP-UX 11.23PI Delta Training courses.

In HP-UX 11i v3 a third mechanism, I/O parallelism, has been added. I/O parallelism increases the rate at which the data can be written to disk. This is called concurrent dump.


I/O Parallelism on HP-UX 11i v3


I/O parallelism
• Increases the rate at which the data can be written to disk
• Also referred to as “concurrency mode”
Characteristics are dependent on
• Type of dump driver
• Number of CPUs
• Configured dump devices
Dump time for a memory size of 1 TB is reduced
• As few as 3 minutes with suitable I/O configuration
Concurrency mode support
• Supported on Integrity systems at initial release of HP-UX 11i v3
• Support for PA systems planned for future releases of HP-UX 11i v3


The I/O parallelism feature is called “concurrency mode” in the 11i v3 dump infrastructure. Its characteristics depend on the type of dump driver, the number of CPUs, and the configured dump devices.
The dump time for a memory size of 1 TB can be reduced to as little as three minutes with a suitable I/O configuration.
In the initial release of HP-UX 11i v3, concurrency mode is supported only on Integrity systems. Support for PA systems is planned for future releases of HP-UX 11i v3.


Concurrent Dump
Distributed parallel dump
• Improved dump throughput
• Reduced dump time
Add dump devices following recommended configurations
• Improvements depend on dump device configuration
Itanium-based systems only
Performance improvement example
• 64-CPU, 1 TB memory Integrity “Orca”
– Dumped in 3 minutes with only 4 dump units
See crashconf(1M) man page


On HP-UX 11i v3, you can configure your machine to perform a distributed parallel dump,
thereby improving the dump throughput and reducing dump time.

Concurrent Dump is a new feature of the HP-UX 11i v3 crash dump subsystem. It enhances the performance scalability of the crash dump subsystem with machine configuration, provided the customer follows the recommended guidelines for dump device configuration. This solution is implemented for Itanium-based platforms only.

HP-UX 11i v3 crash dumps can therefore complete faster than on previous HP-UX releases running on the same machine. If you enable this feature and add dump devices according to the recommended configurations, dump speed improves; the degree of improvement depends upon the dump device configuration.

The dump time for a memory size of 1TB, on a 64-CPU ia64 Orca server, was reduced to as little as three minutes with only four dump units.

Documentation
For further information, see the crashconf(1M) manpage.


Dump Units
Independent sequential unit of execution within the dump
process
• Assigned a subset of the overall system resources needed to
perform the dump
– CPUs
– Portion of the physical memory to be dumped
– Set of configured dump devices
HP-UX 11i v3 dump infrastructure automatically partitions
system resources at dump time into Dump Unit(s)
• Each dump unit operates sequentially
• Parallelism is achieved by having multiple dump units
– They execute in parallel with each other


A dump unit is an independent sequential unit of execution within the dump process. Each dump
unit is assigned a subset of the overall system resources needed to perform the dump. These
include CPUs, a portion of the physical memory to be dumped, and a set of configured dump
devices.

The dump infrastructure in HP-UX 11i v3 automatically partitions the system resources at dump
time into one or more dump units.

Each dump unit operates sequentially. Parallelism is achieved by having multiple dump units,
each of which executes in parallel with the others.
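The resource arithmetic behind dump-unit formation can be sketched as follows. This is an illustration only, not the actual dump infrastructure: it models just the CPU and device limits (a compressed dump needs five CPUs per unit, four compressors feeding one writer; a sequential dump needs one), while the real partitioning also weighs driver capabilities, HBA ports, and device sizes:

```python
def plan_dump_units(ncpus, ndevices, mem_gb, compressed=False):
    """Rough sketch of dump-unit formation: at most one unit per
    usable dump device, and enough CPUs per unit for the chosen
    dump mode.  Returns the unit count and each unit's share of
    the memory to be dumped."""
    cpus_per_unit = 5 if compressed else 1   # 4 compressors + 1 writer
    units = max(1, min(ndevices, ncpus // cpus_per_unit))
    return {"units": units, "mem_per_unit_gb": mem_gb / units}

# 16 CPUs limit a compressed dump to three 5-CPU units; 6 CPUs and
# 8 devices give a sequential dump six units, each dumping 80 GB
# of a 480 GB selection.
print(plan_dump_units(16, 4, 480, compressed=True))
print(plan_dump_units(6, 8, 480))
```

These two cases correspond to the compressed and sequential examples discussed with the dump-unit slides in this module.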


Basic Dump Unit Example

Single Dump Unit Examples: Sequential Dump, Compressed Dump

Multiple Dump Unit Examples: Sequential Dump (5 Dump Units), Compressed Dump (3 Dump Units)

Legend:
= available CPUs
= CPUs used during dump
= configured dump devices

This slide illustrates the basic relationships between CPU resources, dump devices, and dump
units.

In a compressed dump, each dump unit comprises five CPUs (four doing compression and feeding one CPU that writes to the devices) and one or more devices. In a non-compressed dump, each dump unit comprises one CPU and one or more devices.

The top two examples show single dump unit scenarios, one for a sequential (non-compressed) dump and one for a compressed dump. If only one dump device is configured, the single dump unit scenarios shown above will occur; this is equivalent to what occurs in HP-UX 11i v2. A single dump unit will also occur, per the requirements mentioned above, if multiple dump devices are configured that are either all controlled by legacy dump drivers, or are all visible through only one HBA port where the corresponding dump driver is reentrant.

The bottom two examples show multiple dump unit scenarios for the compressed and non-
compressed case. In the non-compressed example, five dump devices are configured, each of
which are in a separate dump unit. Similarly, the compressed case shows three dump devices,
each in a separate dump unit. In a few more slides, we will see detailed examples of multiple
dump units, showing the relationships to driver capabilities, HBAs, and dump devices.


Full Dump Time for 1TB Integrity Server: v2 vs. v3

11iv3 Dump Time: 1TB Full compressed with concurrency on: <1Hr (51min)

The 1TB crash dump performance tests were done on a 64-way Integrity Orca system with 1TB of RAM. The system had 2 HP A6826-60001 2Gb Dual Port PCI/PCI-X Fibre Channel Adapters, and HP HSV 210 disk controllers were used. The dump data was the full 1TB.

On HP-UX 11i v3, using four dump units, a full 1TB sequential dump time was reduced to 98
minutes from a time of 250 minutes on HP-UX 11i v2. A full 1TB compressed Dump time was
reduced to 51 minutes compared to the legacy HP-UX 11i v2 time of 198 minutes.

You may further reduce dump time by configuring more dump units.


Selective Dump Time for 1TB Integrity Server – HP-UX 11i v2 vs. v3

Default Option in 11iv3

11iv3 Dump Time: selective compressed with concurrency on: 3min



The 1TB crash dump performance tests were done on a 64-way Integrity “Orca” machine with 1TB of RAM. The system had 2 HP A6826-60001 2Gb Dual Port PCI/PCI-X Fibre Channel Adapters, and HP HSV 210 disk controllers were used. In this particular test system, approximately 67.5 GB was selected out of the total 1TB of memory.

On HP-UX 11i v3, using four dump units, a selective sequential dump time is reduced to six
minutes compared to eighteen minutes on an HP-UX 11i v2 system. The selective compressed
dump time is reduced to three minutes compared to legacy HP-UX 11i v2 time of thirteen
minutes.

The HP-UX 11i v3 dump time can be further reduced by configuring additional dump units.


Driver Capabilities
I/O support during dump is provided via a dump driver
• Each configured dump driver reports its concurrency capabilities to the dump
infrastructure
• These capabilities are
– Legacy
• No support for any of the new I/O parallelism features
– Reentrant
• Supports one I/O per HBA port
– Concurrent
• Supports multiple I/Os per HBA port, one I/O per device
In the initial release of HP-UX 11i v3, HP-provided dump drivers have the
following concurrency capabilities
• Legacy
– IDE dump driver
• Reentrant
– C8xx, MPT, TL, CISS, SASD, FCD dump drivers
• Concurrent
– FCD dump driver


I/O support during dump is provided via a dump driver. Each configured dump driver reports its
concurrency capabilities to the dump infrastructure. In legacy mode, there is no support for any
of the new I/O parallelism features. The reentrant capability supports one I/O per HBA port.
The concurrent capability supports multiple I/Os per HBA port and one I/O per device.

In the initial release of HP-UX 11i v3, the HP-provided dump drivers have the following concurrency capabilities. The IDE dump driver is legacy. The C8xx, MPT, TL, CISS, SASD, and FCD dump drivers are reentrant. The FCD dump driver additionally supports concurrent operation.


Multiple Dump Units and Driver Capabilities

Multiple Dump Units


Examples
• Sequential Dump (6 dump units, 8 dump devices)
• Compressed Dump (3 Dump Units, 4 dump devices)

Legend:
= available CPUs
= CPUs used during dump
Configured dump devices:
= device path via “legacy” driver
= device path via “reentrant” driver
= device path via “concurrent” driver
= HBA port through which the dump device is configured


The sequential dump example uses six CPUs to create six dump units with eight dump devices.
The four reentrant devices are visible through a total of three HBA ports, limiting them to three
dump units. The two concurrent devices each have their own dump unit. Any legacy devices, of
which two exist in this example, have to be together in a single dump unit.

The size of memory to be dumped and the corresponding size of the dump devices can also
affect the dump units created. For example, if the total size of memory to be dumped is 480
GBytes, then each of the six dump units would be given approximately 80 GBytes to dump,
requiring that the disk space assigned to each dump unit be at least that large. If in the
sequential example above the two concurrent devices were each only 60 GBytes in size, while
each of the other devices were 100G in size, then the two concurrent devices would need to be
in a single dump unit and the number of dump units reduced to five.
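This size constraint can be sketched numerically. The Python below is an illustration only (the real algorithm also honors driver capability and HBA groupings); it simply coalesces undersized units until every unit has enough device space for its share of memory:

```python
def assign_units(unit_sizes_gb, mem_to_dump_gb):
    """Sketch of the dump-unit size constraint: each unit must have
    enough disk space for its equal share of the memory to dump, so
    undersized units are merged (reducing the unit count and raising
    each remaining unit's share) until every unit fits.  Simplified
    for illustration; not the actual partitioning algorithm."""
    units = [[s] for s in unit_sizes_gb]
    while len(units) > 1:
        share = mem_to_dump_gb / len(units)
        small = [u for u in units if sum(u) < share]
        if len(small) < 2:
            break
        units.remove(small[0])
        units.remove(small[1])
        units.append(small[0] + small[1])   # merge two undersized units
    return units

# The 480 GB example above: six units, two of them only 60 GB,
# end up as five units after the two 60 GB devices are merged.
print(len(assign_units([100, 100, 100, 100, 60, 60], 480)))
```

Running the example reproduces the reduction from six dump units to five described in the text.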

The compressed dump example has three dump units, which is the maximum number of dump
units achievable with sixteen CPUs, since each dump unit in compressed dump requires five
CPUs. The four dump devices shown have similar properties with respect to reentrant and
concurrent drivers, as described for the sequential example.


Concurrent Dump Scalability - Linear

Less than ½ hr to take 1TB Full Dump



With concurrency turned on, a 1 TB compressed full dump with four dump units took 51 minutes, compared to 198 minutes in a single dump unit with concurrency turned off. This is very close to one quarter of the time, as expected with four dump units. Extrapolating to eight dump units implies about 25 minutes for a full 1TB dump. Concurrent dump scalability has been shown to be very close to linear with respect to the number of dump units.
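The scaling arithmetic from these measurements can be checked directly. Note the eight-unit figure is the extrapolation stated in the text, not a measurement:

```python
# Measured: 198 min for a 1 TB compressed full dump with a single
# dump unit, 51 min with four dump units.
single_unit = 198.0            # minutes, concurrency off
four_units = 51.0              # minutes, four dump units
speedup = single_unit / four_units         # close to the ideal 4x
eight_unit_estimate = single_unit / 8      # projected, near-linear trend
print(round(speedup, 1), round(eight_unit_estimate))
```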


Concurrent Dump Benefits


Significantly reduces the dump time
• To one-half or one-quarter based upon dump configurations
Performance scalability
• Almost linear with respect to number of dump units



Networking Module
ONC+ 2.3
LDAP
Auto Port Aggregation & LAN Monitor
CIFS
DLPI & VLAN
Internet Services
Miscellany


ONC+ 2.3 (NFS 2.3 (NFSv4 protocol, secure NFS), AutoFS 2.4, CacheFS 2.3, NIS 2.3 Server, Lock Manager 2.3, LIBNSL 2.3, NIS+ obsolescence)
LDAP (Netscape Directory Server, LDAP-UX Integration, NIS/LDAP gateway, NIS+/LDAP migration)
Auto Port Aggregation & LAN Monitor
CIFS
Data Link Provider Interface (DLPI) & Virtual LAN (VLAN)
Internet Services update (DHCP, tftp, telnet, sendmail 8.13, BIND 9.3)
PPPoE IPv6 support
HP-UX Web Server Suite performance improvements

Networking Module
ONC+ 2.3 SubModule
NFS 2.3
NFSv4 protocol
Secure NFS
AutoFS 2.4
CacheFS 2.3
NIS 2.3 Server
Lock Manager 2.3
RPC
LIBNSL 2.3
NIS+ obsolete

ONC+ 2.3 (NFS 2.3 (NFSv4 protocol, secure NFS), AutoFS 2.4, CacheFS 2.3, NIS 2.3 Server, Lock Manager 2.3, LIBNSL 2.3, NIS+ obsolescence)


Open Network Computing ONC+ 2.3


• Technologies and services that help administrators implement
distributed applications in a heterogeneous distributed
computing environment
• Tools to administer clients and servers
• Consists of many components, or subsystems
– NFS
– AutoFS
– CacheFS
– NIS
– RPC
– Network Lock Manager
– Network Status Monitor


Open Network Computing (ONC) consists of technologies and services that help administrators
implement distributed applications in a heterogeneous distributed computing environment. It also
provides tools to administer clients and servers. ONC consists of many components, or
subsystems, including NFS, AutoFS, CacheFS, NIS, RPC, Network Lock Manager and Network
Status Monitor.


ONC+ 2.3 Changes on HP-UX 11i v3


AutoFS/Automounter
• Store AutoFS maps using LDAP name service
• Browse list of potential mount points in an indirect AutoFS map without mounting FSs
• Configure AutoFS through the /etc/default/autofs file
• Use new startup/shutdown script for product
• Support for NFSv4, SecureNFS, and IPv6
Cache File System supports long file names and shared locks
Library RPC supports several new data types and IPv6
Network File System (NFS) Services
• New user mode daemon
– Generates and validates API security tokens
– Maps GSSAPI principal names to local user and group ids
• Secure NFS supports Kerberos through GSSAPI
• NFS access using a firewall
PCNFSD daemon is multithreaded, supports shadow password and secure RPC
Network Information Service (NIS) supports
• Shadow mode
• Enabling DNS forwarding mode
• Long uname, hostname, and username
NIS+ is obsolete


The following ONC components have changes in the initial release of HP-UX 11i v3.

The AutoFS/Automounter is updated with support for the LDAP name service to store AutoFS maps. It also has the ability to browse the list of potential mount points in an indirect AutoFS map without mounting the file systems. AutoFS can be configured through the /etc/default/autofs file. There is a new startup/shutdown script for the product, which is no longer controlled by the NFS client startup/shutdown script. Finally, there is also support for NFSv4, Secure NFS, and IPv6.

New features of the Cache File System, CacheFS, include long file name support and support for
shared locks.

The RPC Library routines support several new data types, add support for IPv6, and more.

Network File System, or NFS, Services provides numerous enhancements. There is a new user mode daemon that generates and validates API security tokens and maps GSSAPI principal names to local user and group IDs. NFS has additional security mechanisms, such as Secure NFS, which supports Kerberos through GSSAPI. There is also the ability for NFS access using a firewall.

The pcnfsd daemon, which is multithreaded, supports shadow passwords and Secure RPC. The wtmp entries can hold usernames up to the PCNFSD protocol limitation of 32 characters, and client hostnames and printer names up to the PCNFSD protocol limitation of 64 characters.

Network Information Service (NIS) provides several new features including support for shadow
mode and support for enabling DNS forwarding mode. It also supports long uname, hostname,
and username.

NIS+ is obsolete.


Network File System (NFS)


• Provides transparent access to files on the network
• Shares a directory with other hosts on the network
• NFS client mounts the NFS server's directory
– NFS client users see directory as part of the local file system


Network File System (NFS) provides transparent access to files on the network. An NFS server
makes a directory available to other hosts on the network by “sharing” the directory. An NFS
client provides access to the NFS server's directory by “mounting” the directory. To users on the
NFS client, the directory appears as a part of the local file system.


New NFS Features in HP-UX 11i v3 (1 of 2)


NFS Version 4 Protocol is supported on both client and server
• To use NFSv4, the nfsmapid daemon must be running on both the client and
server
• NFSv4 server delegation is disabled by default
– If enabled, access is not provided to the shared file system
• NFSv4 client delegations support requires nfs4cbd daemon to be running
nfsmapid maps NFS v4 owner and owner_group identification attributes
to/from local UIDs and GIDs
• Mapping domain is the DNS domain
– Modify /etc/default/nfs file to set to different domain
NFS supports new security mechanisms
• Secure RPC that supports Kerberos through GSSAPI
• GSSAPI supports Kerberos, Kerberos with Integrity, and Kerberos with Privacy
• To use Secure NFS with Kerberos, the gssd daemon must be running
• The share command can export file systems with multiple security modes
• The mount command allows specification of the security mode


There are many new features with NFS on HP-UX 11i v3.

NFS Version 4 Protocol is supported on both the client and server. To use NFSv4, the nfsmapid
daemon must be running on both the client and server. Note that NFSv4 server delegation is
disabled by default. If enabled, access is not provided to the shared file system. For the NFS
client to support NFSv4 Delegations, the nfs4cbd daemon must be running.

The nfsmapid feature that maps NFS Version 4 owner and owner_group identification attributes
to/from local UID and GID numbers is supported. Mapping domain is the DNS domain, but it
can be set to a different domain by modifying the /etc/default/nfs file.

Additional security mechanisms, such as Secure RPC that supports Kerberos through GSSAPI, are
now supported. GSSAPI supports Kerberos, Kerberos with Integrity, and Kerberos with Privacy.
To use Secure NFS with Kerberos, the gssd daemon must be running. The share command can
now export file systems with multiple security modes. The mount command now enables you to
specify the security mode.
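As a sketch of the commands above, a file system might be shared with more than one security mode and then mounted with a specific mode. The server name, paths, and the chosen modes are placeholders; see share_nfs(1M), mount_nfs(1M), and nfssec(5) for the supported mode names.

```shell
# On the NFS server: offer both AUTH_SYS and Kerberos v5
# (mode names per nfssec(5)); clients may use either.
share -F nfs -o sec=sys:krb5,ro /export/data

# On the NFS client: request the Kerberos mode explicitly.
mount -F nfs -o sec=krb5 server1:/export/data /mnt/data
```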


New NFS Features in HP-UX 11i v3 (2 of 2)


NFS Access using a Firewall is now supported
• Either use the NFSv4 protocol and open port 2049 and port 111 (rpcbind)
through the firewall
• Or configure a fixed port for statd and mountd, and open port 2049 and
port 111 (rpcbind)
• Or use the configured fixed ports for mountd and statd, and open port 4045 for lockd, to
support NFS v2 and NFS v3
Use the share command to share directories with NFS clients
• Replaces the exportfs command, which is now a script that calls the share command for the
NFS file type
NFS mount supports client side failover on read-only mounted filesystems
• NFS mount accepts an NFS URL defined by RFC 2224 or an IPv4 or an IPv6 address where
square brackets enclose the IPv6 address
The nsquery feature supports ipnodes lookup requests and provides support to look up IPv6
data in the backend libraries
Manipulation and viewing of ACLs over an NFS mount point is supported
• ACL manipulation does not fail (ENOTSUP) over an NFS mount point
Use kctune tool for manipulating the NFS kernel variables, or tunables
• Replaces using adb
• Use kctune to tune the NFS server and NFS client parameters
– Changes made to parameters are persistent across a reboot, patch installation, or kernel
regeneration


NFS access through a firewall is now supported. To access NFS through a firewall, either use
the NFSv4 protocol and open port 2049 and the rpcbind port 111 through the firewall; or
configure a fixed port for statd and mountd and open port 2049 and port 111 (rpcbind); or use
the configured fixed ports for mountd and statd and open port 4045 for lockd, to support
NFS v2 and NFS v3.
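As a sketch of the fixed-port scenario, the statd and mountd ports could be pinned in the NFS defaults file so that only a handful of ports need to be opened. The variable names below are assumptions for illustration; verify the exact names supported on your release in nfs(4).

```shell
# /etc/default/nfs -- illustrative fixed-port settings
# (variable names are assumptions; check nfs(4))
MOUNTD_PORT=4002
STATD_PORT=4003
# Then open 2049 (NFS), 111 (rpcbind), 4002, 4003, and
# 4045 (lockd) through the firewall for NFS v2/v3 clients.
```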

The share command, used to share directories with NFS clients, replaces the exportfs command.
The exportfs command is now a script that calls the share command for the NFS file type.

NFS mount supports client side failover on read-only mounted file systems. NFS mount accepts
an NFS URL defined by RFC 2224 or an IPv4 or an IPv6 address where square brackets enclose
the IPv6 address.

The nsquery feature supports ipnodes lookup requests and provides support to look up IPv6 data
in the backend libraries.

Manipulation and viewing of ACLs over an NFS mount point is supported, and ACL manipulation
does not fail (ENOTSUP) over an NFS mount point.

The adb tool is replaced by kctune tool for manipulating the NFS kernel variables. The kctune
tool helps you tune the NFS server and NFS client parameters. Changes made to the parameters
are persistent across a reboot, patch installation, or kernel regeneration.
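For example, the NFS tunables can be listed and set with kctune. The tunable name shown below is an assumption for illustration; check the kctune output on your system for the exact names.

```shell
# List kernel tunables whose names mention NFS:
kctune | grep -i nfs

# Set one persistently (survives reboot and kernel regeneration);
# the tunable name here is illustrative:
kctune nfs3_bsize=32768
```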


Changes to NFS Commands in HP-UX 11i v3


NFS commands changes
• New options to spray command
– -d specifies time interval in microseconds before sending next
packet
– -t specifies the class of transports
• The setoncenv command
– Displays NFS configuration variables, NFS public/private
kctune variables, and subsystem specific variables
– Modifies contents of several default and config files


There are several changes to NFS commands in HP-UX 11i v3.

The spray command provides the following new command options: -d and -t. The -d option
specifies the time interval in microseconds before the next packet is sent. The -t option specifies
the class of transports.
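For example, the new options might be combined as follows. The target hostname is a placeholder, and -t takes a transport class such as netpath.

```shell
# Pause 100 microseconds between packets (-d) and use the
# netpath class of transports (-t):
spray -d 100 -t netpath nfsserver1
```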

The nfs environment configuration, setoncenv, command displays all NFS configuration
variables, NFS public/private kctune variables, and subsystem specific variables. It can modify
the contents of /etc/default/nfs, /etc/default/autofs, /etc/default/keyserv,
/etc/default/nfslogd, /etc/rc.config.d/nfsconf, and /etc/rc.config.d/namesvrs.


Changes to NFS Daemons in HP-UX 11i v3 (1 of 2)


Pcnfsd daemon changes
• Is multithreaded
• Supports shadow password and Secure RPC
• Pcnfsd protocol changes
– Limits username entries to 32 characters, client hostname entries
to 64 characters, and printer names to 64 characters
– All successful authentication requests are logged in the wtmps
database
New user mode daemon – gssd
• Generates and validates API security tokens
• Maps GSSAPI principal names to local user and group IDs
Now only one nfsd process runs!
• Nfsd daemon is now multithreaded


There are several changes to NFS daemons in HP-UX 11i v3.

The pcnfsd daemon is multithreaded and supports shadow passwords and Secure RPC. The
pcnfsd protocol limits username entries in the wtmps database to 32 characters, client
hostname entries to 64 characters, and printer names to 64 characters. All successful
authentication requests are logged in the wtmps database.

There is a new user mode daemon, gssd, that generates and validates API security tokens. It also
maps the GSSAPI principal names to the local user and group IDs.

The nfsd daemon is now multithreaded, so a single nfsd process runs on systems where NFS is
enabled; customers no longer see multiple nfsd processes running.


Changes to NFS Daemons in HP-UX 11i v3 (2 of 2)


Lockd daemon is now a threaded kernel daemon
• Now uses a fixed port number of 4045
Mountd and statd daemons
• Can be configured for a fixed port number for RPC transport endpoint
• NFS Authentication service added to mountd daemon
– Sets access rights of the client attempting to access the NFS server
The -l option does not provide its original functionality of overriding the default log file
• The mountd and statd daemons ignore the -l option
– The mountd.log or statd.log log files can now be found at /var/nfs/
• Specifying the -l option with the lockd daemon sets the listen queue on the lockd transport endpoint
New nfslogd daemon for operational logging to NFS server
• Generates activity log by analyzing RPC operations on server
• Daemon not enabled by default
The nfs4cbd daemon supports the NFSv4 Delegation feature
Keyserv daemon changes
• Use of the -D debugging option creates a default log file
• Enable use of default keys for nobody
– Use the new -e option
– Use the default parameter setting in the new /etc/default/keyserv file
Biod daemon removed from system!
• Asynchronous I/O is handled through kernel threads per mount point


The lockd daemon is now a threaded kernel daemon and its port number is fixed at 4045.

The mountd and statd daemons are now threaded and can be configured to support a fixed port
number for the RPC transport endpoint. The NFS Authentication service is added to the mountd
daemon, and the service sets the access rights of the client attempting to access the NFS server.

The -l option used with the lockd, mountd, and statd daemons does not provide its original
functionality of overriding the default log file and is not supported. If you specify the -l option
with the mountd or statd daemon, the option is ignored. The logfile (mountd.log or statd.log) can
now be found at the following fixed location /var/nfs/. If you specify the -l option with the lockd
daemon, the listen queue is set on the lockd transport endpoint.

A new daemon, nfslogd, supports operational logging to the NFS server. It generates the activity
log by analyzing RPC operations processed by the NFS server. This daemon is not enabled by
default.

The nfs4cbd daemon provides support for the NFSv4 Delegation feature.

The keyserv daemon is now multithreaded. When keyserv is started with the -D option to turn on
debugging mode, a default log file (/var/nfs/keyserv.log) is created. There are two new ways to
enable the use of default keys for nobody: use the new -e option, or use the default parameter
setting in the new /etc/default/keyserv file.

The biod daemon is removed from the system. Asynchronous I/O is now handled through kernel
threads per mount point instead of by the biod daemon.


Changes to NFS Files in HP-UX 11i v3


New default configuration file for NFS services
• /etc/default/nfs
• Contains parameter values to set default behavior of various NFS commands and
daemons in NFS Services
• Using new default NFS Services configuration file instead of
/etc/rc.config.d/nfsconf
– Behavior of NFS daemons remains the same regardless of the way the
daemons are started
Script or command line
New nfs security file
• /etc/nfssec.conf
• Provides a list of all valid and supported NFS security modes.
New default configuration file for keyserv
• /etc/default/keyserv
• Contains default parameter values to set use of default keys for nobody
/etc/dfs/dfstab file replaces /etc/exports
• Format of /etc/dfs/dfstab is different from /etc/exports
• If you have an existing parser application for /etc/exports
– Use the exp2dfs tool to convert the /etc/exports file to the /etc/dfs/dfstab file


There are several new NFS related files.

The new default configuration file for NFS services (/etc/default/nfs) contains the parameter
values to set the default behavior of various NFS commands and daemons in NFS Services. If
the new default NFS Services configuration file is used instead of /etc/rc.config.d/nfsconf, the
behavior of the NFS daemons remains the same regardless of the way the daemons are started
(script or command line).

The new nfs security file (/etc/nfssec.conf) provides a list of all valid and supported NFS security
modes.

The new default configuration file for keyserv (/etc/default/keyserv) contains the default
parameter values to set the use of default keys for nobody.

The /etc/exports file is replaced by /etc/dfs/dfstab. The format of /etc/dfs/dfstab is different
from /etc/exports. If you have an existing parser application for the /etc/exports file, the
application fails on HP-UX 11i v3 as the /etc/exports file is not supported. The application can
also fail if the /etc/exports file is moved from a system running an older version of HP-UX to a
system running HP-UX 11i v3. Use the exp2dfs tool to convert the /etc/exports file to the
/etc/dfs/dfstab file.
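The kind of conversion that exp2dfs performs can be pictured with a small POSIX-shell sketch. This is not the exp2dfs tool itself; option names that differ between exportfs and share (such as access=) still need manual review.

```shell
# Turn each "/dir -options" line of an old /etc/exports file
# into a dfstab-style "share" command line.
exports_to_dfstab() {
  while read -r dir opts; do
    case "$dir" in ''|'#'*) continue ;; esac   # skip blanks/comments
    if [ -n "$opts" ]; then
      echo "share -F nfs -o ${opts#-} $dir"    # drop the leading "-"
    else
      echo "share -F nfs $dir"
    fi
  done
}

# Example conversion of one exports entry:
echo '/usr/share/man -ro,access=client1:client2' | exports_to_dfstab
# prints: share -F nfs -o ro,access=client1:client2 /usr/share/man
```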


More changes to NFS in HP-UX 11i v3 (1 of 2)


Must configure LDAP database to store and retrieve keys
• Set the publickey entry in /etc/nsswitch.conf to ldap
• Change due to obsolescence of NIS+ database
Must start rpc.lockd with the -C option on all pre-HP-UX 11i v3 systems
• Ensures that consistency is maintained on a client system when a file lock is
cancelled
• Use if you have systems running different versions of HP-UX in your network
While creating key pair for either remote host or local host using newkey
command
• Previous releases prompted for both the local root login password and hostname’s
root login password
• HP-UX 11i v3 prompts only for hostname’s root login password or local root login
password
– Depending on which host the key pair is being created for
Mounts with invalid options are ignored with a warning message instead of
an error


With the obsolescence of the NIS+ database, users must now configure an LDAP database to store
and retrieve keys. To use LDAP, you must set the publickey entry in /etc/nsswitch.conf to ldap.

If you have systems running different versions of HP-UX in your network, you must start rpc.lockd
with the -C option on all pre-HP-UX 11i v3 systems to ensure that consistency is maintained on a
client system when a file lock is cancelled.

In previous releases, while creating a key pair for either the remote host or the local host using
the newkey command, the user is prompted for both the local root login password and
hostname’s root login password. With HP-UX 11i v3, the user is prompted only for the
hostname’s root login password or the local root login password, depending on which he or she
is creating the key pair for.

Mounts with invalid options are ignored with a warning message instead of an error.


More changes to NFS in HP-UX 11i v3 (2 of 2)


Sharing NFS file system using -rw or -ro option can take a hostname for a
parameter
• If the -w=<hostname> syntax is used and the NFS server uses DNS
– Specify the fully qualified hostname or the client fails to mount the NFS server
Attempting to unmount a shared local file system now returns an EBUSY error
• The local file system remains mounted
– Until all shared directories within the local file system are unshared
Behavior of rpc.statd and rpc.lockd daemons
• During system startup and shutdown their behavior is the same as earlier HP-UX
releases
• Using the startup scripts to start or stop the NFS client or NFS server
– The statd or lockd daemons are not stopped
– Use the lockmgr startup script to start or stop the statd or lockd daemons
The multithreaded pcnfsd and keyserv daemons both provide better
performance
• However, memory consumption is impacted by the number of threads created
and the system configuration


Sharing an NFS file system using the -rw option or the -ro option can take a hostname for a
parameter. If the -w=<hostname> syntax is used and the NFS server uses DNS, you must specify
the fully qualified hostname or the client fails to mount the NFS server.

An attempt to unmount a shared local file system now returns an EBUSY error, and the local file
system remains mounted until all shared directories within the local file system are unshared.

During system startup and shutdown, the behavior of the rpc.statd and the rpc.lockd daemons is
the same as in earlier HP-UX releases. However, if you use the startup scripts to start or stop the
NFS client or NFS server, the statd or lockd daemons are not stopped. Use the lockmgr startup
script to start or stop the statd or lockd daemons.
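For example, the dedicated script can bounce the lock-manager daemons without touching the rest of the NFS client or server; the path shown follows the usual HP-UX rc layout.

```shell
# Stop and restart statd and lockd via their own script:
/sbin/init.d/lockmgr stop
/sbin/init.d/lockmgr start
```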

The multithreaded pcnfsd and keyserv daemons both provide better performance. However, for
both daemons, memory consumption is affected by the number of threads created and by your
system configuration.


NFS Documentation and Obsolescence Notes


NFS Documentation
• NFS commands, daemons and features are documented in several man pages
– pcnfsd (1M), spray (3N), sprayd (1M), share_nfs (1M), share (1M), exportfs
(1M)
– keyserv (1M), newkey (1M), chkey (1) and getpublickey (3N)
– biod (1M), mount_nfs (1M), nfsd (1M), gssd (1M), nsquery (1M)
– rpc.lockd (1M), rpc.mountd (1M), rpc.statd (1M), setoncenv (1M)
– nfs(4), nfslogd (1M), nfssec.conf(4), nfsmapid (1M), nfs4cbd (1M), nfs (7),
nfssec (5)
• Visit http://docs.hp.com/en/netcom.html#NFS%20Services/
– NFS Services Administrator’s Guide
– ONC+ Release Notes (HP-UX 11i v3)
Obsolescence Notes
• Discontinued trusted mode in pcnfsd, newkey, and chkey
• Discontinued the nisplus database type as an option
• Deprecated -l option used with lockd, mountd, and statd daemons
• Obsoleted the /etc/rc.config.d/nfsconf
– Replaced by default NFS Services Configuration (/etc/default/nfs) file
• Use of adb to change values of kernel parameters is no longer supported


NFS commands, daemons, and features are documented in several man pages. For further
information, see the following manpages.
• pcnfsd (1M)
• spray (3N), sprayd (1M)
• keyserv (1M), newkey (1M), chkey (1) and getpublickey (3N)
• share_nfs (1M), share (1M), exportfs (1M)
• biod (1M), mount_nfs (1M)
• nfsd (1M)
• rpc.lockd (1M), rpc.mountd (1M), rpc.statd (1M)
• setoncenv (1M)
• nfs(4), nfslogd (1M), nfssec.conf(4), nfsmapid (1M), nfs4cbd (1M), nfs (7), nfssec (5)
• gssd (1M)
• nsquery (1M)

In addition, see the NFS Services Administrator’s Guide and the ONC+ Release Notes for HP-UX
11i v3. Both are available at http://docs.hp.com/en/netcom.html#NFS%20Services.

There are several obsolescence notes regarding NFS on HP-UX 11i v3. Trusted mode support in
pcnfsd, newkey, and chkey is discontinued. The nisplus database type as an option is
discontinued in newkey, chkey, and keylogin commands, and getpublickey()/getsecretkey()
function calls in libnsl. The -l option used with lockd, mountd, and statd daemons is deprecated
in this release. The /etc/rc.config.d/nfsconf file is obsoleted and replaced by the default NFS
Services configuration file (/etc/default/nfs). And, the use of adb to change the values of kernel
parameters is no longer supported.


AutoFS 2.4 on HP-UX 11i v3 (1 of 2)


AutoFS/Automounter
• Mounts directories automatically when users or processes request access
to them
• Unmounts directories automatically if they remain idle for a specified
period of time
New in HP-UX 11i v3
• Ability to configure AutoFS through the /etc/default/autofs file
• New startup/shutdown script for AutoFS
– AutoFS no longer controlled by NFS client startup/shutdown script
• Support for NFS v4, Secure NFS, and IPv6
• Provides improved performance
– Concurrent mounting/unmounting allowed


AutoFS/Automounter mounts directories automatically when users or processes request access to
them. AutoFS also unmounts the directories automatically if they remain idle for a specified
period of time.

There are many new features in AutoFS for HP-UX 11i v3. AutoFS now provides the ability to
configure AutoFS through the /etc/default/autofs file. Refer to the autofs(4) manpage for details.

AutoFS provides a new startup/shutdown script. AutoFS is no longer controlled by the NFS
client startup/shutdown script.

There is support for NFS v4, Secure NFS, and IPv6.

And, AutoFS provides improved performance through improved multithreading in the AutoFS
daemon that allows concurrent mounting and unmounting.
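As a sketch, two commonly adjusted entries in the new configuration file might look like this. The names follow the Solaris-derived defaults and are shown for illustration; confirm them against autofs(4) on your system.

```shell
# /etc/default/autofs -- illustrative settings (see autofs(4))
AUTOMOUNT_TIMEOUT=600      # seconds a file system may stay idle
                           # before AutoFS unmounts it
AUTOMOUNTD_VERBOSE=TRUE    # log automountd status messages
```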


AutoFS 2.4 on HP-UX 11i v3 (2 of 2)


Additionally new in HP-UX 11i v3
• Support for LDAP name service to store AutoFS maps
• The ability to browse the list of potential mount points in an
indirect AutoFS map without mounting the filesystems
• A new option to disable LOFS mounts, required for some
MC ServiceGuard configurations
• Support for managing CIFS file mounts
Documentation
• Man pages
– autofs (4), automount (1M), and automountd (1M)
• NFS Services Administrator’s Guide: HP-UX 11i version 3
– http://docs.hp.com/en/netcom.html#NFS%20Services/

For customers who may be migrating from HP-UX 11i v1 systems, there are additional new
features. These include support for LDAP name service to store AutoFS maps. There is the ability
to browse the list of potential mount points in an indirect AutoFS map without mounting the file
systems. There is a new option to disable LOFS mounts, which is required for some MC
ServiceGuard configurations. And, there is support for managing CIFS file mounts.

Documentation
For further information, see the autofs (4), automount (1M), and automountd (1M) manpages.
Also, refer to Chapter 2 of NFS Services Administrator’s Guide: HP-UX 11i version 3 at
http://docs.hp.com/en/netcom.html#NFS%20Services.


Cache File System - CacheFS


CacheFS is a general purpose file system caching mechanism
• Improves NFS server performance and scalability by reducing server and network load
How does CacheFS work?
• Data is cached on local disk when it is read from an NFS mounted file system
– Before CacheFS file system can be mounted, the cache directory on a local file system must be
configured
• Subsequent read requests are satisfied from the local disk cache
– Unless the original data has been modified on server
• CacheFS maintains consistency with the back file system using a consistency checking
model similar to that of NFS
– It polls for changes in file attributes
How is performance improved?
• Local disk caching of remote NFS-served file systems reduces network traffic
– Local disk access is faster than remote file system access
• Reduced access requests to the server increases the server's performance and allows more
clients to access the server
Performance improvements are dependent on the type of file system access
• Good for file systems where data is read more than once
• No impact on write performance or if data is read only once
CacheFS on HP-UX 11i v3 was ported from Solaris ONC+2.3 code
• Provides new features compared to previous versions of HP-UX


The Cache File System (CacheFS) is a general purpose file system caching mechanism that
improves NFS server performance and scalability by reducing server and network load.
CacheFS performs local disk caching of remote NFS-served file systems, which reduces the
network traffic. Clients, especially on slow links such as PPP, notice an increase in performance
because local disk access is faster than remote file system access. Reduced access requests to
the server increases the server's performance and allows more clients to access the server.
Individual client machines become less reliant on the server, thereby decreasing overall server
load, which leads to an overall increase in performance on both client and server side.

As soon as the data is read from an NFS mounted file system, it is cached in the local disk and
subsequent read requests are satisfied from the local disk cache, unless the original data has
been modified on server. By default, CacheFS maintains consistency with the back file system
using a consistency checking model similar to that of NFS, which polls for changes in file
attributes. CacheFS can be used to cache NFS-mounted or automounted NFS file systems. Note
that before a CacheFS filesystem can be mounted, the cache directory on a local file system must
be configured. Please refer to “NFS Services Administrator's Guide” for details on administering
CacheFS.

CacheFS performance improvements are dependent on the type of file system access. It suits file
systems where data is read more than once. It has no impact on write performance or if data is
read only once.

The version of CacheFS available in HP-UX 11i v3 is CacheFS+2.3, which has been ported from
Solaris ONC+2.3 code. The ported code provides a few new features in addition to that of
CacheFS+1.2 which was used in the previous versions of HP-UX.
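A minimal CacheFS setup, assuming an NFS server named nfsserver1 and locally chosen paths, might look like the following; see cfsadmin(1M) and mount_cachefs(1M) for the option details.

```shell
# 1. Create the cache directory on a local file system:
cfsadmin -c /local/cache

# 2. Mount the NFS file system through the cache:
mount -F cachefs -o backfstype=nfs,cachedir=/local/cache \
    nfsserver1:/export/data /mnt/data
```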


CacheFS 2.3 on HP-UX 11i v3 (1 of 3)


New cachefspack command
• Packs files and file systems in the cache
• Sets up and maintains files in the cache
– Allows unpacking of files
New weakconst option
• Verifies cache consistency with the client copy of file attributes and delays
committing changes to the server
• Results in improved performance
Demandconst option fully implemented
• Consistency check succeeds only if file system was mounted with
“demandconst” option
• cfsadmin -s <directory> command prints an error message and returns a
non-zero value when run on an invalid directory, non-existing mount
point, or a cachefs mount point not mounted with demandconst option
Error message improvements
• Print to stderr instead of stdout
• Contain the command name


CacheFS 2.3 on HP-UX 11i v3 introduces a new command, cachefspack. It packs files and file
systems in the cache. It also sets up and maintains files in the cache. It is used to force a file to
be entirely cached locally. This utility provides greater control over the cache, allowing the user
to decide on which files need to be always available in cache. This utility also allows users to
unpack a file, which makes it become a candidate for garbage collection the next time the
collector is run.
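For example, a file could be pinned in the cache and later released; the path is a placeholder (see cachefspack(1M)).

```shell
# Pack a file so it is always held in the cache:
cachefspack -p /mnt/data/tools/analyze

# Unpack it, making it a garbage-collection candidate again:
cachefspack -u /mnt/data/tools/analyze
```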

CacheFS 2.3 introduces a new option, weakconst, that verifies the cache consistency with the
NFS's client copy of file attributes and delays committing of changes to the server. The new
weakconst option, when used instead of the default option, results in improved response times of
CacheFS.

Additionally, CacheFS 2.3 provides the full implementation of the demandconst option. In
previous releases, users were able to issue a consistency check using “cfsadmin -s
<mount_point>” even when the file system was not mounted with the “demandconst” option. This
problem has been fixed in this release, and the consistency check succeeds only if the file system
was mounted with the “demandconst” option. An error message is printed otherwise.

The cfsadmin -s <directory> command prints an error message and returns a non-zero value
when run on an invalid directory, non-existing mount point, or a cachefs mount point not
mounted with demandconst option. In earlier releases, it returned 0.

The CacheFS error messaging has been improved. Error messages now print to standard error
instead of to standard output. The error messages also now contain the command name.


CacheFS 2.3 on HP-UX 11i v3 (2 of 3)


Does not execute fsck automatically
• If a cache in the cache directory is deleted using cfsadmin -d cache_ID
<cache_directory>
– Must run fsck explicitly on cache directory before attempting to mount
cached file system using this cache directory
– If fsck is not run, the mount fails with an error message
Full CacheFS 2.3 functionality is planned for a future release and may
include
• Disconnected feature
• Caching access control list
• cachefslog and cachefswssize commands
Provides support for shared locks
Performance improvement when mounting a CacheFS file system
using a cache directory with a lot of cached data


To improve mount command performance, fsck is no longer executed automatically. This means
that if a cache in the cache directory is deleted using cfsadmin -d cache_ID <cache_directory>,
fsck must now be run explicitly on the cache directory before attempting to mount a cached file
system using this cache directory. If fsck is not run, the mount fails with the following error
message: “mount -F cachefs: mount failed No space left on device”.

Full CacheFS 2.3 functionality on HP-UX 11i v3 is planned for a future release and may include
the disconnected feature, the caching access control list, and the cachefslog and cachefswssize
commands.

CacheFS supports shared locks.

There is improvement in the time taken to mount a CacheFS file system using a cache directory
with a lot of cached data.


CacheFS 2.3 on HP-UX 11i v3 (3 of 3)


CacheFS is a DLKM
• Automatically loaded into kernel when used
• User may unload module if it is not going to be used
Support for changing mount options without deleting cache
• For example, when changing from default options to “non-shared” or
from “noconst” to “demandconst”
• Can mount a CacheFS file system with the “non-shared” option when
updating data/reading a modified data
– After updating the cache, change the mount option back to
“write-around” without deleting the cache
Supports large files and file systems
• Allows creation of cache directory in huge file systems
• Allows caching files larger than 2 GB
Supports interface expansion
• 256 character hostnames
• Large PIDs
• 64-byte long file names


In HP-UX 11i v3, CacheFS is a DLKM module. This allows for less downtime if a patch has to be
applied to the product. This also allows the user to unload the module if it is not used for some
period, thereby conserving system resources. By default, the state of the cachefs module is set to
“auto”, so the module is automatically loaded into the kernel whenever it is used.

CacheFS supports changing mount options without deleting the cache. There is no need to delete
the cache when switching mount options, for example, when changing from default options to
“non-shared” or from “noconst” to “demandconst”. This helps users change the mount options
depending on the circumstances. For example, users will be able to mount a CacheFS file system
with the “non-shared” option when updating data/reading a modified data and after updating
the cache, change the mount option back to “write-around” without deleting the cache.
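Switching modes as described above is just an unmount and remount with the new option; the cache directory and its contents are reused. Paths and the server name are placeholders.

```shell
# Remount with the non-shared option for an update session:
umount /mnt/data
mount -F cachefs -o backfstype=nfs,cachedir=/local/cache,non-shared \
    nfsserver1:/export/data /mnt/data

# Afterwards, switch back to the default write-around mode:
umount /mnt/data
mount -F cachefs -o backfstype=nfs,cachedir=/local/cache \
    nfsserver1:/export/data /mnt/data
```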

CacheFS supports large files and large file systems, that is, files and file systems greater than
2 GB in size. This allows administrators to create a cache directory in a huge file system and
also allows users to cache files larger than 2 GB. File system sizes have been qualified at
32 TB and will be increased based on customer needs.

CacheFS 2.3 supports interface expansion. It provides for large hostname support by allowing
hostnames of up to 256 characters. It is also large PID enabled. And, it supports 64-byte long
file names.


Using CacheFS 2.3 to Benefit


Use for file systems with man pages and programs
• Read often and rarely changed
Single system licensed applications with unlimited users
• Install application on server
• Access binaries through CacheFS mount
– Binary files cached on users’ local disk
Cachefspack files and data that need to be permanently cached
Put database files into single shared file system
• Use CacheFS mount to access data on compute hosts
– Reduces network traffic to database server
CAD applications
• Store master copies on server and cached copies on user systems


Here are some examples of how CacheFS can be used.

CacheFS is useful in cases where data is read-only. Hence good choices for cached file systems
may include man pages and executable programs, which are read multiple times and rarely
modified.

It would also be useful in cases where an application license permits installation on only a single
system, but allows use by unlimited users. In such cases, it would be useful to install
the application on a server and to access the binaries through CacheFS mount. This would allow
all the users to cache the binary files on their local disk and to use the application as if it was
installed on the local system. Also, since it is installed on only a single system, patching or
updating the application is quick, with little impact on users; the only overhead occurs when
the application is cached the first time it is accessed.

If there are files/data that are required to be cached permanently, such files can be packed
using the “cachefspack” utility. This ensures that the data will always be present in cache and
will not be removed by the garbage collector.
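
As a quick sketch (the mount point path is hypothetical and only for illustration), packing a tree
and checking its packing status might look like this; the guard keeps the script harmless on
systems without CacheFS:

```shell
# Hypothetical CacheFS mount point; adjust to your environment.
target=/cfs_mnt/mnt/app

if command -v cachefspack >/dev/null 2>&1; then
    cachefspack -p "$target"               # pack the tree so it stays cached
    status=$(cachefspack -i "$target")     # report packing information
else
    status="cachefspack not available on this system"
fi
echo "$status"
```

On an HP-UX 11i v3 client this keeps the packed files resident in the cache; packing rules for
whole sets of files can also be kept in a packingrules file (see the packingrules manpage).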

In some environments, extensive pattern searching is done on a set of databases whose indexes
are updated infrequently. In this scenario, putting these database files into a single shared file
system and using a CacheFS mount to access the data on the compute hosts can cut execution
time dramatically and also prevent high network traffic to the database server.

Another application could be in a CAD environment, where master copies of drawings can be
stored on a server and cached copies used from user systems.

March 2007 Availability-125



CacheFS 2.3 Common Issues


File system disk usage should be less than the threshold number of blocks
• Otherwise the mount fails with an error
• Specified as "threshblocks" in the cfsadmin command
– Default value for "threshblocks" is 85%
• Example
– # bdf /cfs_mnt
– Filesystem kbytes used avail %used Mounted on
– /dev/vg00/cfs 516096 462658 50105 90% /cfs_mnt
– # mount -F cachefs -o backfstype=nfs,cachedir=/cfs_mnt/cache_dir
$SERVER:/tmp /cfs_mnt/mnt
– mount -F cachefs: mount failed No space left on device
Create cache directory in a file system dedicated only for caching of data
Documentation
• mount_cachefs (1M), cfsadmin (1M), cachefsstat (1M), cachefspack (1M),
umount_cachefs (1M), fsck_cachefs (1M), and packingrules (4M)
• ONC+ Release Notes and the NFS Services Administrator’s Guide
– http://docs.hp.com/en/netcom.html#NFS%20Services

126 March 2007

Here are a couple of hints to avoid common setup issues.

The file system disk usage should be less than the threshold number of blocks, which is specified
as "threshblocks" in the cfsadmin command; otherwise the mount fails with an error. Note that
the default value for "threshblocks" is 85%.

Here is an example where the mount fails because disk usage is 90%:


# bdf /cfs_mnt
Filesystem kbytes used avail %used Mounted on
/dev/vg00/cfs 516096 462658 50105 90% /cfs_mnt
#
# mount -F cachefs -o backfstype=nfs,cachedir=/cfs_mnt/cache_dir
$SERVER:/tmp /cfs_mnt/mnt
mount -F cachefs: mount failed No space left on device
#

It is recommended that the cache directory be created in a file system which is dedicated only
for caching of data.
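
The check above can be sketched as a small script. A simulated bdf output line is used so the
example is self-contained; on an HP-UX system you would parse the real bdf output instead:

```shell
# Simulated "bdf /cfs_mnt" output line from the example above.
bdf_line="/dev/vg00/cfs 516096 462658 50105 90% /cfs_mnt"

# Extract the %used column and compare with the threshblocks default (85%).
used_pct=$(echo "$bdf_line" | awk '{ sub(/%/, "", $5); print $5 }')
thresh=85

if [ "$used_pct" -ge "$thresh" ]; then
    echo "usage ${used_pct}% >= threshblocks ${thresh}%: cachefs mount would fail"
else
    echo "usage ${used_pct}% < threshblocks ${thresh}%: ok to create the cache here"
fi
# prints: usage 90% >= threshblocks 85%: cachefs mount would fail
```

Running such a check before creating the cache directory avoids the "No space left on device"
mount failure shown above.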

For further information about the CacheFS product, please refer to the mount_cachefs (1M),
cfsadmin (1M), cachefsstat (1M), cachefspack (1M), umount_cachefs (1M), fsck_cachefs (1M),
and packingrules (4M) man pages.

Also visit http://docs.hp.com/en/netcom.html#NFS%20Services to review the ONC+ Release
Notes and the NFS Services Administrator’s Guide.

March 2007 Availability-126



Network Information Service - NIS 2.3 Server (1 of 2)


Provides simple network lookup service
• Consists of databases and processes
NIS Server version is 2.3 and NIS client version is 1.2
Changes to namesvrs file
• NIS+ and DNS related variables and DNS entries removed
– Now contains information related to NIS
• New SHADOW_MODE variable to enable shadow mode
Provides support for ipnodes through the /etc/nsswitch.conf file
“yp*” commands and daemons changes
• NIS server daemon ypserv option enables DNS forwarding
• New options in the ypserv, yppasswdd, makedbm, and ypmake commands
• Unsupported options used with ypserv or rpc.yppasswdd are ignored and
daemon is started
Default host nickname used by ypcat and ypwhich
• Was hosts.byaddr in previous releases
– Other commands use hosts.byname
• Default map used by ypcat and ypwhich is now hosts.byname for consistency
– Different usage message with –x option

127 March 2007

The Network Information Service (NIS) provides a simple network lookup service consisting of
databases and processes. On HP-UX 11i v3, the NIS Server version is 2.3 and NIS client
version is 1.2. The 2.3 NIS client will be available in a future release.

On HP-UX 11i v3, NIS has removed the NIS+ and DNS related variables and the DNS entries
from the namesvrs file; the file now contains only NIS-related information. NIS supports shadow
mode if SHADOW_MODE, a new variable in the namesvrs file, is enabled. Additionally, NIS
provides support for ipnodes through the /etc/nsswitch.conf file.

NIS provides support for enabling DNS forwarding mode using an option to the NIS server
daemon ypserv. It provides new options in the ypserv, yppasswdd, makedbm, and ypmake
commands. If an unsupported option is used with ypserv or rpc.yppasswdd, the options are
ignored and the daemon is started.

On previous versions, the default host nickname used by the ypcat and ypwhich commands was
hosts.byaddr, while other commands used hosts.byname. In HP-UX 11i v3, the default map used
by ypcat and ypwhich has been changed to hosts.byname for consistency. This change in the
default host nickname for ypcat and ypwhich results in a change in display: the usage message
displayed is different when you use the –x option with the ypcat and ypwhich commands.
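
For example, on an NIS client the change can be observed as follows; the guard keeps the
sketch harmless on hosts without the NIS tools installed:

```shell
# On HP-UX 11i v3 the "hosts" nickname used by ypcat and ypwhich now
# resolves to hosts.byname (it was hosts.byaddr in earlier releases).
default_map="hosts.byname"

if command -v ypcat >/dev/null 2>&1; then
    # Dumps the same map that a plain "ypcat hosts" now uses by default.
    ypcat "$default_map" || echo "NIS domain not bound on this host"
else
    echo "ypcat not available on this system"
fi
```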

March 2007 Availability-127



Network Information Service - NIS 2.3 Server (2 of 2)


The rpc.yppasswdd daemon
• Starting it without the -D option displays usage message
• -D option and the passwd arguments are mutually exclusive
• A shadow file found in the same directory as the passwd file is assumed to contain the
password information
• Arguments after -m are passed to make(1) after password changes
NIS supports
• Multi-homed node
• IPv6 data
• Interface expansion
– For long uname, long hostname, long username
NIS protocol version 1 (NIS v1) is deprecated in this release
• Will be obsoleted in a future HP-UX release
• HP recommends moving to the next protocol version of NIS
Documentation
• Manpages for domainname (1), ypcat (1), ypmatch (1), ypwhich (1), yppasswd (1),
yppasswdd (1M), ypset (1M), makedbm (1M), ypinit (1M), ypmake (1M), yppoll (1M),
yppush (1M), ypserv (1M), ypxfr(1M), rpc.nisd_resolv (1M), ypclnt (3C), and ypfiles (4)
• ONC+ Release Notes and the NFS Services Administrator’s Guide
– http://docs.hp.com/en/netcom.html#NFS%20Services

128 March 2007

With HP-UX 11i v3, if the rpc.yppasswdd daemon is started without the -D option, a usage
message is displayed; if the daemon is started with the -D option, the usage message is no
longer displayed. The -D option and the passwd arguments are mutually exclusive. A shadow
file found in the same directory as the passwd file is assumed to contain the password
information. Arguments after -m are passed to make(1) after password changes.

NIS provides support for multi-homed node information in the NIS maps. NIS also supports IPv6
data.

NIS also provides interface expansion support for long uname, long hostname, and long
username.

NIS protocol version 1 (NIS v1) is deprecated in this release and will be obsoleted in a future
HP-UX release. HP recommends that you move to the next protocol version of NIS.

For further information, see the manpages for domainname (1), ypcat (1), ypmatch (1), ypwhich
(1), yppasswd (1), yppasswdd (1M), ypset (1M), makedbm (1M), ypinit (1M), ypmake (1M),
yppoll (1M), yppush (1M), ypserv (1M), ypxfr(1M), rpc.nisd_resolv (1M), ypclnt (3C), and
ypfiles (4).

Information on NIS 1.2 enhanced features can be found at
http://docs.hp.com/en/netcom.html#NFS%20Services, where the ONC+ Release Notes and the
NFS Services Administrator’s Guide are also available.

March 2007 Availability-128



Lock Manager 2.3 (KLM)


On HP-UX 11i v3, the locking mechanism contains both user space and
kernel space components, referred to as KLM, kernel lock manager
Lock manager updated from version 1.2 to version 2.3
• User space only components updated to kernel based and user space commands
• Shared lock support for NFS clients
• Supports use of synchronous RPC to communicate with the server
– Asynchronous RPC procedures continue to work with ONC+1.2 clients
• Support for selection of transport depending on the file system mount
• KLM 2.3 lockd is multi-threaded
• Supports retransmission capability for blocked lock requests
– Blocking thread periodically wakens to retransmit the blocked lock request
• Supports new security containment in HP-UX
Locking performance gains
Locking reliability

129 March 2007

In all previous releases, the NFS locking mechanism on HP-UX, called the Kernel Lock Manager
(KLM) or just Lock Manager for short, ran in user space, supported protocol versions 2 and 3,
and was implemented using Sun's version 1.2. In HP-UX 11i v3, it is based on Sun's
implementation of ONC+ 2.3, so the locking mechanism contains both user space and kernel
space components that are collectively referred to as KLM.

The lock manager has been updated from version 1.2 to version 2.3. The earlier user-space-only
components have been updated to kernel-based components plus user space commands. There
is additional support for shared locks: share lock support was implemented at the VFS layer,
and KLM 2.3 added shared lock support for NFS clients.

There is additional support for the use of synchronous RPC to communicate with the server.
Asynchronous RPC procedures continue to be supported to work with ONC+1.2 clients.

There is additional support for the selection of transport depending on the file system mount.

The lockd daemon is multi-threaded, which results in performance improvements.

The lock manager on HP-UX 11i v3 supports the retransmission capability for blocked lock
requests. The blocking thread periodically wakens to retransmit the blocked lock request.

Finally, new security containment features in HP-UX are supported by the lock manager.
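
Because the lock manager registers its services with the RPC portmapper, a quick health check
from a client can be sketched as follows (the hostname is an assumption for illustration, and the
script degrades gracefully where rpcinfo is absent):

```shell
# Hypothetical NFS server to probe; replace with a real hostname.
host=localhost

if command -v rpcinfo >/dev/null 2>&1; then
    # nlockmgr is the network lock manager service; rpc.statd provides the
    # status monitoring that lock recovery depends on.
    lock_status=$(rpcinfo -t "$host" nlockmgr 2>/dev/null \
        || echo "nlockmgr not responding on $host")
else
    lock_status="rpcinfo not available on this system"
fi
echo "$lock_status"
```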

March 2007 Availability-129



RPC Command and Daemon Changes


RPC daemon changes
• rquotad returns quota information on a user for a file system mounted via
NFS from a remote machine
• rpcstatd returns performance statistics
• rusersd responds to queries from the rusers command
RPC command changes
• rpcinfo reports the status of RPC servers
• rup shows the status of the hosts on the local network
• rusers reports a list of users logged on to the remote machines
• rpcgen generates C code to implement an RPC protocol
RPC command/daemon pair changes
• rwall and rpc.walld send messages to all users on the network
• spray and rpc.sprayd send a specified number of packets to a host
• on and rpc.rexd execute a command on remote host with environment
similar to local

130 March 2007

The 11.31 release of HP-UX contains changes that impact the ONC+ RPC commands, daemons,
and libraries.

The rquotad daemon returns quota information on a user for a file system mounted via NFS from
a remote machine.
The rpcstatd daemon returns performance statistics obtained from the kernel.
The rusersd daemon responds to queries from the rusers command.
The rpcinfo command reports the status of RPC servers.
The rup command shows the status of the hosts on the local network.
The rusers command reports a list of users logged on to the remote machines.
The rpcgen utility generates C code to implement an RPC protocol.
The rwall and rpc.walld command and daemon pair sends messages to all users on the network.
The spray and rpc.sprayd command and daemon pair sends a specified number of packets to a
host.
The on and rpc.rexd command and daemon pair executes a command on a remote host with an
environment similar to the local one.

March 2007 Availability-130



RPC Library Changes (1 of 3)


Remote Procedure Call (RPC) library routines
• Allow programs to make procedure calls on other machines across a network
• All require including rpc.h header file
• Support new data types for RPC program, version, procedure, port, and protocol
numbers
Use of RPC and External Data Representation (XDR) routines require linking
with the libnsl library
New types of operations for rpc_control
• RPC_SVC_THRMAX_SET specifies maximum number of threads
• RPC_SVC_THRMAX_GET retrieves maximum number of threads
• RPC_SVC_THRTOTAL_GET retrieves total number of currently active threads
• RPC_SVC_THRCREATES_GET gets total number of threads created by RPC library
• RPC_SVC_THRERRORS_GET obtains number of errors at time of thread creation
• RPC_SVC_USE_POLLFD sets number of file descriptors to unlimited
• RPC_SVC_CONNMAXREC_SET sets max buffer size required to send and receive
data
• RPC_SVC_CONNMAXREC_GET gets max buffer size required to send and
receive data

131 March 2007

Library routines for Remote Procedure Calls (RPC) allow programs to make procedure calls on
other machines across a network. All RPC routines, and routines that take a netconfig structure,
require that the header rpc.h be included. Applications using RPC and External Data
Representation (XDR) routines must be linked with the libnsl library.

RPC library routines now support new data types for the RPC program, version, procedure, port,
and protocol numbers.

There are several new types of operations for rpc_control. They are:
RPC_SVC_THRMAX_SET to specify the maximum number of threads
RPC_SVC_THRMAX_GET to retrieve the maximum number of threads
RPC_SVC_THRTOTAL_GET to retrieve the total number of currently active threads
RPC_SVC_THRCREATES_GET to retrieve the total number of threads created by RPC library
RPC_SVC_THRERRORS_GET to retrieve the number of errors at time of thread creation in the RPC
library
RPC_SVC_USE_POLLFD sets the number of file descriptors to unlimited
RPC_SVC_CONNMAXREC_SET is a non-blocking I/O enhancement for TCP services to specify
the maximum buffer size required to send and receive data.
RPC_SVC_CONNMAXREC_GET is a non-blocking I/O enhancement for TCP services to receive
the maximum buffer size required to send and receive data.

March 2007 Availability-131



RPC Library Changes (2 of 3)


SVCGET_XID is a new operation for svc_control()
• Retrieves transaction id of connection-oriented (vc) and
connectionless (dg) transport service calls
svc_fd_negotiate_ucred() is a new function
• Informs underlying loop back transport that it wishes to
receive user credentials for local calls, including those over IP
transport
New library routines for timed client creation
• New routines are clnt_create_timed, clnt_create_vers_timed,
and clnt_tp_create_timed
• Similar to existing client creation routines
• Except they take the timeout parameter to specify the
maximum amount of time allowed for each transport class

132 March 2007

SVCGET_XID, a new operation for svc_control(), retrieves the transaction id of
connection-oriented (vc) and connectionless (dg) transport service calls.

A new function, svc_fd_negotiate_ucred(), informs the underlying loopback transport that the
service wishes to receive user credentials (ucreds) for local calls, including those over IP
transport.

There are new library routines for timed client creation, which are similar to the existing client
creation routines except that they take a timeout parameter to specify the maximum amount of
time allowed for each transport class. The new routines are clnt_create_timed,
clnt_create_vers_timed, and clnt_tp_create_timed.

March 2007 Availability-132



RPC Library Changes (3 of 3)


Many new types of operation for clnt_control()
• CLSET_IO_MODE() and CLGET_IO_MODE() to set/get I/O blocking modes for TCP clients
– Use RPC_CL_BLOCKING() and RPC_CL_NON-BLOCKING() to set the I/O mode settings
• CLSET_FLUSH_MODE() and CLGET_FLUSH_MODE() to set/get the flush mode
– CLSET_FLUSH_MODE() can only be used for non-blocking I/O mode
• Accepts arguments for best effort or blocking flushes
• CLFLUSH() to flush pending requests
– Can only be used for non-blocking I/O mode
– Accepts arguments for best effort or blocking flushes
• CLSET_CONNMAXREC_SIZE to specify the maximum buffer size
• CLGET_CONNMAXREC_SIZE to retrieve the maximum buffer size
• CLGET_CURRENTREC_SIZE to return size of pending requests stored in the buffer for non-blocking I/O
mode
• CLGET_SERVER_ADDR / CLGET_SVC_ADDR to retrieve the server’s address
• CLSET_PROG and CLGET_PROG to set/get the program numbers
Two new header files
• clnt_stat.h
• rpc_com.h
HP recommends using the new data types instead of u_long and long

See rpc(3N) and associated manpages

133 March 2007

There are several new types of operation for clnt_control(). They are:
CLSET_IO_MODE() and CLGET_IO_MODE() to set and get I/O blocking modes for TCP clients.
Use RPC_CL_BLOCKING() and RPC_CL_NON-BLOCKING() to set the I/O mode settings.
CLSET_FLUSH_MODE() and CLGET_FLUSH_MODE() to set and get the flush mode.
CLSET_FLUSH_MODE() can only be used for non-blocking I/O mode and accepts arguments for
best effort or blocking flushes.
CLFLUSH() to flush pending requests. This operation can only be used for non-blocking I/O mode
and accepts arguments for best effort or blocking flushes.
CLSET_CONNMAXREC_SIZE to specify the maximum buffer size.
CLGET_CONNMAXREC_SIZE to retrieve the maximum buffer size.
CLGET_CURRENTREC_SIZE to return the size of pending requests stored in the buffer for
non-blocking I/O mode.
CLGET_SERVER_ADDR / CLGET_SVC_ADDR to retrieve the server’s address.
CLSET_PROG and CLGET_PROG to set and retrieve the program numbers.

There are two new header files. They are clnt_stat.h and rpc_com.h

HP recommends that you use the new data types introduced with this release. These new data
types replace u_long and long. While u_long and long data types continue to be supported, you
receive a compiler warning if you continue to use them.

For documentation, see the rpc(3N) manpage and the manpages mentioned therein.

March 2007 Availability-133



NIS+ Obsolete
NIS+ is not NIS
• Completely different product
NIS+ is a distributed database system
• Maintain commonly used configuration information on a master server
– Propagate information to all the hosts in the network
• Maintain configuration information for many hosts in a set of distributed
databases
– Read or modify databases from any host in the network
NIS+ is difficult to administer
• Requires dedicated system administrators trained in NIS+ administration
• Very different from NIS administration
• NIS+ databases not automatically backed up to flat files
NIS+ is obsolete as of HP-UX 11i v3
• HP recommends customers migrate to LDAP.
Documentation
• For information on how to migrate to LDAP
– See NIS+ to LDAP Migration Guide
• http://docs.hp.com/en/J4269-90054/J4269-90054.pdf

134 March 2007

The Network Information Service Plus (NIS+) is an entirely different product than NIS, not an
enhancement to NIS. It is a distributed database system that allows the user to maintain
commonly used configuration information on a master server and propagate the information to
all the hosts in the network. NIS+ maintains configuration information for many hosts in a set of
distributed databases. With the proper credentials and access permissions, a user can read or
modify these databases from any host in the network. Common configuration information, which
would have to be maintained separately on each host in a network without NIS+, can be stored
and maintained in a single location and propagated to all of the hosts in the network.

The disadvantage of NIS+ is that it is difficult to administer. It requires dedicated system
administrators trained in NIS+ administration, which is very different from NIS administration.
Also, the NIS+ databases are not automatically backed up to flat files; the system administrator
must create and maintain a backup strategy for NIS+ databases, which includes dumping them
to flat files and backing up the files.

Due to the declining demand for NIS+, HP is discontinuing NIS+. The last HP-UX release on
which NIS+ was released is HP-UX 11i v2. Starting with HP-UX 11i v3, NIS+ is no longer
supported. HP recommends customers migrate to LDAP.

For information on how to migrate to LDAP, see the NIS+ to LDAP Migration Guide located at
http://docs.hp.com/en/J4269-90054/J4269-90054.pdf

March 2007 Availability-134



Networking Module
LDAP and Directory Servers

135 March 2007

This section includes LDAP-UX Integration, NIS+/LDAP migration, and Directory Servers on
HP-UX.

March 2007 Availability-135



LDAP-UX Integration
Uses the Lightweight Directory Access Protocol (LDAP) to centralize user, group and
network information management in an LDAP directory
LDAP-UX Integration includes
• LDAP-UX Client Services
– Simplifies HP-UX system administration by consolidating account, group and other
configuration information into a central LDAP directory server
– Works with a variety of LDAP v3 capable directory servers
– Fully tested with Red Hat Directory Server and the Microsoft Windows 2000/2003 Active
Directory Server
• LDAP-UX Client Administration Tools and Migration Scripts
– LDAP-UX Client administration tools can help you to manage data in an LDAP directory server.
• Mozilla LDAP C Software Development Kit (SDK) subcomponents
– Contains set of LDAP APIs to allow building LDAP-enabled clients
– Enables clients to connect to LDAP v3-compliant servers and perform the LDAP functions
LDAP-UX Integration product B.04.00.10 is included on HP-UX 11i v3 release
• Provides defect fixes in addition to the new features provided in version B.04.00.02
Refer to documentation http://docs.hp.com/en/internet.html
• LDAP-UX Client Services B.04.00 Administrator’s Guide, edition 3
• LDAP-UX Client Services B.04.00 with Microsoft Windows 2000/2003 Active Directory
Server Administrator’s Guide
• LDAP-UX Client Services B.04.00.02 Release notes

136 March 2007

LDAP-UX Integration uses the Lightweight Directory Access Protocol (LDAP) to centralize user,
group and network information management in an LDAP directory.

LDAP-UX Integration includes the LDAP-UX Client Services, LDAP-UX Client Administration Tools
and Migration Scripts, and Mozilla LDAP C Software Development Kit (SDK) subcomponents.

LDAP-UX Client Services simplifies HP-UX system administration by consolidating account, group
and other configuration information into a central LDAP directory server. LDAP-UX Client Services
software works with a variety of LDAP v3 capable directory servers and is fully tested with Red
Hat Directory Server and the Microsoft Windows 2000/2003 Active Directory Server.

LDAP-UX Client administration tools can help you to manage data in an LDAP directory server.

The Mozilla LDAP C SDK contains a set of LDAP Application Programming Interfaces (APIs) to
allow you to build LDAP-enabled clients. Using the functionality provided with the SDK, you can
enable your clients to connect to LDAP v3-compliant servers and perform the LDAP functions.

The LDAP-UX Integration product B.04.00.10 is included in the HP-UX 11i v3 release. It provides
defect fixes in addition to the new features provided in version B.04.00.02.

Refer to the following documentation available at http://docs.hp.com/en/internet.html


• LDAP-UX Client Services B.04.00 Administrator’s Guide, edition 3
• LDAP-UX Client Services B.04.00 with Microsoft Windows 2000/2003 Active Directory
Server Administrator’s Guide
• LDAP-UX Client Services B.04.00.02 Release notes

March 2007 Availability-136



NIS+/LDAP Migration
For information on how to migrate to LDAP
• NIS+ to LDAP Migration Guide
– http://docs.hp.com/en/J4269-90054/J4269-90054.pdf

137 March 2007

For information on how to migrate to LDAP, see the NIS+ to LDAP Migration Guide located at
http://docs.hp.com/en/J4269-90054/J4269-90054.pdf

March 2007 Availability-137



Directory Servers on HP-UX 11i v3 (1 of 2)


Netscape Directory Server (NDS) and Red Hat Directory Server
(RHDS) are two separate products
• Red Hat Directory Server for HP-UX (NSDirSv7) B.07.10.20 is included in
the HP-UX 11i v3 release
• RHDS 7.1 provides more functionality than NDS 6.21 does, which is on
the HP-UX 11i v3 release
HP provides an industry-standard centralized directory service to build
the intranet or extranet on
• Red Hat servers and other directory-enabled applications use the directory
service as a common, network-accessible location for storing shared data,
such as user and group identification, server identification, and access
control information
• Red Hat Directory Server can be extended to support entire enterprise
with a global directory service that provides centralized management of
the enterprise’s resource information
Documentation
• See http://www.docs.hp.com/en/internet.html for the Red Hat Directory
Server B.07.10.20 for HP-UX Release Notes and other documents

138 March 2007

Red Hat Directory Server for HP-UX (NSDirSv7) B.07.10.20 is included in the HP-UX 11i v3
release. Netscape Directory Server (NDS) and Red Hat Directory Server (RHDS) are two
separate products; RHDS 7.1 provides more functionality than NDS 6.21, which is also
available on the HP-UX 11i v3 release.

HP provides an industry-standard centralized directory service to build your intranet or extranet
on. Your Red Hat servers and other directory-enabled applications use the directory service as a
common, network-accessible location for storing shared data, such as user and group
identification, server identification, and access control information. In addition, the Red Hat
Directory Server can be extended to support your entire enterprise with a global directory
service that provides centralized management of your enterprise’s resource information.

For more information, refer to the following documents available at
http://www.docs.hp.com/en/internet.html
Red Hat Directory Server B.07.10.20 for HP-UX Release Notes
Red Hat Directory Server 7.10 Installation Guide
Red Hat Directory Server 7.10 Configuration, Command, and File Reference
Red Hat Directory Server 7.10 Deployment Guide
Red Hat Directory Server 7.10 Administrator’s Guide
Red Hat Directory Server 7.10 Schema Reference
Red Hat Directory Server 7.10 Plug-In Programmer’s Guide
Red Hat Directory Server 7.10 Gateway Customization Guide
Red Hat Directory Server 7.10 DSML Gateway

March 2007 Availability-138



Directory Servers on HP-UX 11i v3 (2 of 2)


Red Hat Directory Server new features for security, memory and performance
• Windows User Synchronization
– Synchronizes changes in groups, user entries, attributes, and passwords between Red Hat
Directory Server and Microsoft Active Directory and Windows NT4 Server in a process similar
to replication
• Get Effective Right Control
– Allows LDAP client to request access control permissions set on each attribute within an entry
• Administrators can find and control the access rights set on an individual entry
• Wide-Area Network Replication
– Achieves much higher performance over high-delay network paths by not waiting for
acknowledgements before sending updates
• Fractional Replication
– A way of replicating a database without replicating all information in it
• Allows an administrator to select a set of attributes that will not be replicated
• Password Change Extended Operation
– Supports the password change extended operation
• File system Replica Initialization
– Adds capability to initialize a replica using the database files from the supplier server,
avoiding the need to rebuild the consumer server database, and can be done at essentially the
speed of the raw network between the two servers by transferring the files
– Performance improvement is significant where the servers contain gigabytes of data

139 March 2007

Red Hat Directory Server B.07.10.20 contains defect fixes in addition to new features provided
in Red Hat Directory Server B.07.10.00 for HP-UX. These new features for security, memory and
performance are listed.

Windows User Synchronization synchronizes changes in groups, user entries, attributes, and
passwords between Red Hat Directory Server and Microsoft Active Directory and Windows NT4
Server in a process similar to replication.

Get Effective Right Control allows an LDAP client to request the access control permissions set
on each attribute within an entry, allowing administrators to find and control the access rights
set on an individual entry.

Wide-Area Network Replication achieves much higher performance over high-delay network
paths by not waiting for acknowledgements before sending updates, allowing more changes to
be relayed more quickly.

Fractional Replication introduces a way of replicating a database without replicating all of the
information in it. This feature allows an administrator to select a set of attributes that will not be
replicated.

Password Change Extended Operation supports the password change extended operation as
defined in RFC 3062. This allows users to change their passwords using a suitable client,
according to industry standards.

File system Replica Initialization adds the capability to initialize a replica using the database
files from the supplier server, avoiding the need to rebuild the consumer server database, and
can be done at essentially the speed of the raw network between the two servers by transferring
the files. Where the servers contain gigabytes of data, the improvement in performance is
significant.

March 2007 Availability-139



Networking Module
Auto Port Aggregation (APA) & LAN Monitor
Common Internet File System (CIFS)
DLPI and VLAN
Internet Services
Web Server Suite

140 March 2007

This section covers Auto Port Aggregation and LAN Monitor, CIFS, DLPI and VLAN. It also covers
Internet Services and the Web Server Suite.

March 2007 Availability-140



Auto Port Aggregation (APA) and LAN Monitor


HP APA/LM is a networking based kernel product
• Used to aggregate multiple networking links to form a single logical link
– Called fat pipe, link aggregate or a trunk
– Provides a logical grouping of two or more physical ports into a single “fat pipe”
– This port arrangement provides more data bandwidth than would otherwise be available
• Provides automatic link failure detection and recovery
• Offers optional support for load balancing of network traffic across all of the links in the
aggregation
– Enables building large bandwidth “logical” links into the server that are highly available and
completely transparent to the client and server applications
Provides link aggregation, or trunking
• An aggregation of two or more network physical ports into a single logical pipe
– Transparent to upper layers
– Upgrades to higher bandwidth with existing infrastructure
– Link Aggregation can be achieved either manually or automatically
• Automatic aggregation is achieved through Cisco's PAgP protocol or with the IEEE 802.3ad Link
Aggregation Control Protocol.
Provides various advantages
• Bandwidth scalability, high availability, and load balancing

141 March 2007

HP Auto Port Aggregation (APA) is a software product that creates link aggregates, often called
“trunks,” which provide a logical grouping of two or more physical ports into a single “fat pipe.”
This port arrangement provides more data bandwidth than would otherwise be available.

Two additional features are automatic link failure detection and recovery and optional support
for load balancing of network traffic across all of the links in the aggregation. These features
enable you to build large-bandwidth “logical” links into the server that are highly available and
completely transparent to the client and server applications.

HP APA/LM is a networking based kernel product, which runs on HP-UX platforms. It provides
link aggregation, or trunking, which is, basically, an aggregation of two or more network
physical ports into a single logical pipe. This is transparent to upper layers, and a solution to
upgrade to higher bandwidth with existing infrastructure. Link Aggregation can be achieved
either manually or automatically. Automatic aggregation is achieved through Cisco's PAgP
protocol or with the IEEE 802.3ad Link Aggregation Control Protocol.

HP APA/LM product is used to aggregate multiple networking links to form a single logical link
called "fat" pipe, link aggregate or a trunk. The APA/LM product provides various advantages
such as, bandwidth scalability, high availability, and load balancing.
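The aggregate-creation step described above can be sketched with the new nwmgr CLI. The following is an illustrative, HP-UX 11i v3-only sketch: the physical port PPAs (1 and 2), the aggregate instance (900), and the attribute spellings are assumptions to be verified against nwmgr_apa(1M).

```shell
# Illustrative sketch only -- PPAs 1 and 2 and instance 900 are assumed
# values; verify the exact attribute names in nwmgr_apa(1M).

# Create link aggregate lan900 from two physical ports in MANUAL mode:
nwmgr -a -S apa -I 900 -A links=1,2 -A mode=MANUAL

# Display the aggregate and its member links:
nwmgr -v -c lan900 -S apa
```

With automatic aggregation (PAgP or LACP), the ports would instead negotiate aggregate membership with the switch, so no explicit links list is needed on the host side.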


APA/LM on HP-UX 11i v3


Supports Single System High Availability (SSHA) features, such as Online Deletion of NICs
Supports the creation of failover groups
• Provides a failover capability for links
• Upon a link failure, LAN Monitor automatically migrates traffic to an active,
standby link
Provides enhanced APA management tool
Supports existing and new networking technologies
Support for enabling HP-UX VLAN over APA aggregates and LAN Monitor
failover groups.
Several utilities support HP-UX APA operations
• HP System Management Homepage
• New HP SMH Terminal User Interface
• New nwmgr CLI
Support for LACP (IEEE 802.3ad) fast timeout
Support for Nortel's SMLT (Split Multi-Link Trunking) technology
LAN Monitor co-existence with Serviceguard
Provides IPv6 support over aggregates


There are many new features of APA/LM on HP-UX 11i v3.

APA supports Single System High Availability (SSHA) features, such as Online Deletion of NICs.

HP APA supports the creation of failover groups, providing a failover capability for links. In the
event of a link failure, LAN Monitor automatically migrates traffic to an active, standby link.

This release provides an enhanced APA management tool. APA has support for existing and
new networking technologies. It has support for enabling HP-UX VLAN over Auto-Port
Aggregation (APA) aggregates and LAN Monitor failover groups.

Several utilities support HP-UX APA operations. These are the HP System Management
Homepage, the new HP SMH Terminal User Interface, and the new command line interface,
nwmgr, which we will cover shortly.

APA supports LACP (IEEE 802.3ad) fast timeout. It supports Nortel's SMLT (Split Multi-Link
Trunking) technology. There is LAN Monitor co-existence with Serviceguard. And APA provides
IPv6 support over aggregates.
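As a hedged sketch of the failover-group feature, a LAN Monitor group might be created the same way; the LAN_MONITOR mode name, PPAs, and instance number here are assumptions, not verified syntax.

```shell
# Illustrative sketch only -- values and mode name are assumptions.
# Group lan1 (active) and lan2 (standby) into failover group lan901:
nwmgr -a -S apa -I 901 -A links=1,2 -A mode=LAN_MONITOR

# On a link failure, LAN Monitor migrates traffic to the standby port;
# the group status can be inspected with:
nwmgr -v -c lan901 -S apa
```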


APA/LM Customer Benefits


VLAN over APA
• Support for HP-UX VLANs over APA aggregates and LAN Monitor failover groups meets
customer requirement to combine the traffic isolation benefits of VLAN technology with the
high availability and high bandwidth capabilities of the APA and LAN Monitor technology
NCWeb support for HP-UX APA
• Offers the customer a better, web-based, GUI experience
nwmgr support for HP-UX APA
• CLI will result in increased customer satisfaction with HP-UX I/O tools having a common look
and feel
Performance benefits
• Online Deletion support improves system and network availability
• LACP fast timeout helps HP-APA to detect link failures more quickly
A link aggregate can span more than one switch with SMLT support
• Removes switches as a single point of failure
A LAN Monitor failover group can be used to provide a local NIC failover solution for HP
Serviceguard
Documentation
• nwmgr (1M), nwmgr_apa (1M), and apa (7) manpages
• “HP APA Performance and Scalability” White Paper at http://docs.hp.com/
• HP-UX APA Administrator’s Guide at HP-UX 11i v3 at
http://docs.hp.com/en/netcom.html#Auto%20Port%20Aggregation%20%28APA%29


The new HP-UX APA features at HP-UX 11i v3 offer the following customer benefits.

On releases prior to HP-UX 11i v3, HP-UX VLANs were supported only over physical interfaces.
HP-UX 11i v3 adds support for HP-UX VLANs over APA aggregates and LAN Monitor failover groups.
This meets the customer requirement to combine the traffic isolation benefits of VLAN technology
with the high availability and high bandwidth capabilities of the APA and LAN Monitor
technology on HP-UX servers.

NCWeb support for HP-UX APA offers the customer a better, web-based GUI experience.
NCWeb is a new GUI that is part of HP SMH. Also, there is a new command line interface called
nwmgr that supports HP-UX APA. Being part of the new nwmgr CLI will result, over time, in
increased customer satisfaction, with HP-UX I/O tools having a common look and feel.

HP APA's support of Online Deletion will improve system and network availability. LACP fast
timeout helps HP APA detect link failures more quickly. With support of SMLT, a link
aggregate can span more than one switch, which removes switches as a single point of
failure. A LAN Monitor failover group can be used to provide a local NIC failover solution for HP
Serviceguard.

For further information, see the following:


Manpages: nwmgr (1M), nwmgr_apa (1M), and apa (7)
White Papers: “HP APA Performance and Scalability” White Paper at
http://docs.hp.com/en/7662/new-apa-white-paper.pdf
Documents: HP-UX APA Administrator’s Guide at HP-UX 11i v3 at
http://docs.hp.com/en/netcom.html#Auto%20Port%20Aggregation%20%28APA%29


Common Internet File System (CIFS)


CIFS is the networking protocol developed by Microsoft for sharing files and printers
HP CIFS Server is an application suite on HP-UX
• Provides the File, Printing and Browsing Services to the CIFS Clients
• Can be configured as Primary Domain Controller in an NT-domain to provide Directory and
Authentication services
• Avoids data corruption by providing lock interoperability among CIFS Clients, NFS Clients
and local HP-UX processes
HP CIFS Client
• Enables the mounting of a shared (exported) file-system on a CIFS File Server onto a mount-
point on HP-UX
• Allows all the applications running on a HP-UX box to transparently access a mounted CIFS
share
• Is a DLKM
– Avoids reboot during the installation, update and removal of the product.
PAM-NTLM is a PAM module
• Allows logins on a UNIX host to be authenticated by CIFS hosts that use NTLM protocol
• See pam.conf manpage
HP CIFS Products support the interface expansion
• Allow large PIDs and long uname and hostname.


CIFS is the native networking protocol on Microsoft Windows operating systems for sharing files and
printers. The HP CIFS products for HP-UX provide a wide range of integration strategies for HP-UX and
Windows. CIFS stands for the Common Internet File System. It is synonymous with SMB, an
acronym for Server Message Block. It is packaged with all flavors of Microsoft Windows and is the
default protocol used on those platforms.
The HP CIFS Client enables the HP-UX host to mount directories shared by remote CIFS servers. (These
may be Windows, HP-UX, and other server platforms on which CIFS has been implemented.) The HP CIFS
Server enables the HP-UX host to provide access to its own shared directories by remote CIFS clients.
(Again, these may be Windows, HP-UX, and other CIFS clients). It emulates Windows file and print
services. The HP CIFS Client bundle also includes PAM-NTLM, a “pluggable authentication module” that
allows HP-UX logins to be authenticated by a centralized service on a CIFS domain. HP CIFS Client is
updated to version A.02.02.01 with support for MS Distributed File System (DFS) and DLKM feature and
other changes.
HP CIFS Server is an application suite on HP-UX that primarily provides the File, Printing and Browsing
Services to the CIFS Clients. It can be configured as Primary Domain Controller in an NT-domain to
provide Directory and Authentication services. HP CIFS Server with CIFS Extended File System avoids data
corruption by providing lock interoperability among CIFS Clients, NFS Clients and local HP-UX processes.
The CIFS Client enables the mounting of a shared (exported) file-system on a CIFS File Server onto a
mount-point on HP-UX. This allows all the applications running on a HP-UX box to transparently access a
mounted CIFS share. HP CIFS Client with the kernel component developed as a DLKM avoids reboot
during the installation, update and removal of the product.
PAM stands for Pluggable Authentication Module. It validates login sessions on a UNIX host. PAM
broadly describes the mechanism by which such programs, or modules, can be "plugged into" the native
login verification procedures on the Unix host. The modules can be "stacked" to operate interdependently.
NTLM is the Windows "NT LAN Manager" authentication protocol. It is a challenge-response system in
which no password is transmitted on the network. It is the default authentication mechanism used on
Microsoft Windows NT servers and earlier; later Windows versions support both Kerberos and NTLM.
PAM-NTLM is a PAM module that allows logins on a UNIX host to be authenticated by CIFS hosts that use
the NTLM protocol, usually, Windows NT servers. Further information can be found in the HP-UX
manpages; see pam.conf.
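As a hedged illustration of PAM stacking, an /etc/pam.conf fragment enabling PAM-NTLM for login might look like the lines below; the module filenames and the stacking choice are assumptions, and the authoritative entries are in the HP CIFS Client documentation and the pam.conf manpage.

```
# Hypothetical /etc/pam.conf fragment -- module paths are assumptions.
# Try NTLM authentication against a CIFS domain controller first;
# fall back to standard UNIX authentication if that fails.
login  auth  sufficient  /usr/lib/security/libpam_ntlm.1
login  auth  required    /usr/lib/security/libpam_unix.1
```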
HP CIFS Products support the interface expansion of allowing large PIDs and long uname and hostname.


HP CIFS Server A.02.03 on HP-UX 11i v3


Supports non-blocking, asynchronous request/reply behavior
Provides better scalability in large domain environments and on high latency
networks
Improves file locking interoperability between CIFS clients and NFS clients
• Supports CIFS file locks to interoperate with accesses from NFS clients and other HP-UX processes
• Supports file locking interoperation for record locks, share mode locks and opportunistic locks
• CFSM (CIFS File System Module) DLKM used to enforce the file locks held by CIFS clients
– Off by default and can be enabled on a per file system basis
– Enabling this functionality prevents the possibility of file corruption
• Allows for performance enhancing opportunistic locks to be safely used
• Creates CFSM specific log file
New cfsmutil utility controls logging level, log file location and
maximum log file size
– The kernel modules are part of an HP-UX 11i v3 only product, which is part of the CIFS Server
bundle
CIFS Server supports Trivial Database (TDB) Memory Map
• Enabling configuration parameter “use mmap” takes advantage of fixed size memory map access
• The mechanism and number of TDB files using memory-mapped access has been tuned
• Supports UFC and allows memory-mapped TDB files to fall back to the file I/O operations
– In case of memory-mapped failures, e.g. low on memory resource or exceeding fixed memory map
size


HP CIFS Server is a SMB/CIFS-based product on HP-UX systems. It enables HP-UX systems to interoperate with PC
clients running Microsoft Windows NT, XP, 2000/2003, and UNIX systems running the CIFS client and thus provides
a fully integrated network of UNIX and Windows systems running as clients, servers, or both. HP CIFS Server is
updated to 3.0f version A.02.03. It has a redesign of Winbind code, file locking interoperation functionality, support
for long user and group names, and support for TDB Memory Map.
HP CIFS Server 3.0f version A.02.03 is included in the HP-UX 11i v3 release; it is based on Samba 3.0.22. It supports
idmap-rid. The idmap-rid facility with winbind support provides a unique mapping of Windows SIDs to local UNIX
UIDs and GIDs throughout a domain without requiring an LDAP directory. There is support for publishing printers in an
MS Windows 2000/2003 ADS domain.
The winbind code has been re-designed to support the non-blocking, asynchronous request/reply behavior with the
exception of user and group enumeration. With this new enhancement, winbind provides better scalability in large
domain environments and on high latency networks.
HP CIFS Server A.02.03 for HP-UX 11i v3 includes new functionality to improve file locking interoperability between
CIFS clients and NFS clients. To protect customers from data corruption, CIFS Server provides file locking
interoperation enhancements that support the CIFS file locks to interoperate with accesses from NFS clients and other
HP-UX processes. HP CIFS Server supports file locking interoperation for record locks, share mode locks and
opportunistic locks. A new DLKM known as CFSM (CIFS File System Module) can be used to enforce the file locks held
by CIFS clients. This functionality is off by default and can be enabled on a per file system basis. Enabling this
functionality prevents the possibility of file corruption due to concurrent file accesses from both CIFS and NFS, and
allows for performance enhancing opportunistic locks to be safely used. The kernel modules are part of an HP-UX 11i
v3 only product, which is part of the CIFS Server bundle. If CFSM is used, a new CFSM specific log file is created,
and a new utility called cfsmutil can be used to control the logging level, log file location and maximum log file size.
There is long user and group name support. HP CIFS Server supports HP-UX user and group names of up to
256 bytes.
There is TDB Memory Map Support. This release supports performance enhancements which include enabling the
configuration parameter “use mmap” to take advantage of fixed size memory map access of CIFS's Trivial Database
(TDB) files. The mechanism and number of TDB files using memory-mapped access has been tuned according to the
OS (HP-UX 11i v3 PA-RISC or 11i v3 Itanium-based). In case of memory-mapped failures, such as being low on memory
resource or exceeding the fixed memory map size, HP CIFS Server for HP-UX 11i v3 supports Unified File Cache and
allows memory-mapped TDB files to fall back to file I/O operations.
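As a hedged sketch, the TDB memory-map behavior described above is driven by a single smb.conf parameter; the [global] placement follows standard Samba convention, but treat the fragment as illustrative rather than a verified HP CIFS Server configuration.

```
# Hypothetical smb.conf fragment -- enable memory-mapped TDB access:
[global]
    use mmap = yes
```

If memory mapping fails (for example, when memory is low), the server falls back to file I/O for the affected TDB files, as described above.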
For more information, refer to the following documentation.
The Official Samba-3 HOWTO and Reference Guide and the Samba-3 by Example Samba books are provided with
the HP CIFS Server product. They are available at /opt/samba/docs/samba-HOWTO-Collection.pdf and
/opt/samba/swat/help.
The HP CIFS Server 3.0f Administrator’s Guide version A.02.03 and the HP CIFS Server 3.0f Release Note version
A.02.03 can be found at http://docs.hp.com/en/netcom.html and navigating to CIFS.


HP CIFS Client on HP-UX 11i v3


Supports MS Distributed File System
• Enables system administrators to build single, hierarchical view of multiple file servers and
file server shares on their network
• Unites files on different computers into a single name space
• Is a Dynamically Loadable Kernel Module (DLKM)
• Supports both static binding and dynamic loading
• Installation, removal, and update of the HP CIFS Client do not require a system re-boot
Several new configuration parameters
New cifslogout -a (all) option allows users to log out from all current CIFS login sessions
• Useful in environments that use CIFS Client's Distributed File System feature (DFS)
– Where several CIFS logins can be created in the background as users traverse a DFS tree
Uses 32-bit error codes with the CIFS servers by default
Several logging enhancements
• Help HP diagnose potential problems customers encounter using this software
The sockMode, sockOwner and sockGroup parameters are no longer configurable


HP CIFS Client has MS Distributed File System (DFS) Support. DFS for the Microsoft Windows Server
operating systems is a network server component that enables system administrators to build a single,
hierarchical view of multiple file servers and file server shares on their network. DFS is a means for uniting
files on different computers into a single name space. HP CIFS Client A.02.02.01 is available on the HP-
UX 11i v3 release. This release of the HP CIFS Client contains several enhancements and defect fixes in
addition to the new features provided in HP CIFS Client A.02.02.
HP CIFS Client is a Dynamically Loadable Kernel Module (DLKM). The kernel component of the HP CIFS
Client is implemented as a dynamically linked kernel module to support both static binding and dynamic
loading. With DLKM support, installation, removal, and update of the HP CIFS Client do not require a
system re-boot.
There are several new configuration parameters that are fully documented in the HP CIFS Client
Administrators’ Guide. They are mnttabPrefix, allowBackslashesInPaths, fileCreateMask, oldUdbEncrypt,
preventCreationEnable, preventCreationPattern, unixExtensions, and smbOverTCP.
HP CIFS Client A.02.02.01 contains the following enhancements and changes.
There is a new cifslogout -a (all) option that allows users to log out from all current CIFS login sessions.
This is particularly useful in environments which use the CIFS Client's Distributed File System feature (DFS),
wherein several CIFS logins can be created in the background as users traverse a DFS tree.
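A client session using these commands might look like the transcript below; the server name, share, mount point, and user are hypothetical, and the argument order should be confirmed in the cifslogin(1) and cifsmount(1M) manpages.

```shell
# Illustrative transcript only -- fileserv, aduser, and the paths are
# made-up names; check cifslogin(1)/cifsmount(1M) for exact arguments.
cifslogin fileserv aduser              # authenticate to the CIFS server
cifsmount //fileserv/projects /mnt/projects
cifslist                               # show current logins and mounts
cifsumount /mnt/projects
cifslogout -a                          # end ALL current CIFS logins
```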
The CIFS Client now uses 32-bit error codes with the CIFS servers by default, rather than the older DOS
error class.
Several logging enhancements have been made in this release. These are mainly intended to help HP
engineers diagnose potential problems customers encounter using this software.
The sockMode, sockOwner and sockGroup parameters are no longer configurable. The values of these
parameters are sockMode = 0666, sockOwner = root, sockGroup = root.
For more information, refer to the following documentation, which can be found at
http://docs.hp.com/en/netcom.html (navigate to CIFS):
• HP CIFS Client A.02.02 Administrator's Guide
• HP CIFS Client A.02.02 Release Note
• HP CIFS Client A.02.02.01 Release Note
Also see the following manpages:
• cifsclient (1M)
• cifsdb (1M)
• cifslist(1)
• cifslogin (1), cifslogout(1)
• cifsmount (1M), cifsumount (1M)
• mount_cifs (1M), umount_cifs (1M)


HP Data Link Provider Interface (DLPI)


DLPI is an industry standard definition for message communications to STREAMS-based
network interface drivers
• Provides core link layer infrastructure for networking drivers
• Provides extensions enabling feature rich and high performance networking stacks on HP-UX
On HP-UX 11i v3, DLPI
• Enables Online Deletion (OLD) of I/O card instances claimed by LAN drivers without a
system reboot
• Enables LAN drivers as DLKMs
• NOSYNC STREAMS synchronization level for improved performance and scalability for
high speed links
• Changes for non-root users
– Users of HP DLPI RAW mode service must be granted PRIV_NETRAWACCESS privilege
– Users that transmit or receive IPv4, IPv6 or ARP packets must be granted PRIV_NETADMIN
privilege
– Users that reset hardware statistics must be granted PRIV_NETADMIN privilege
– See HP-UX 11i Security Containment Administrator's Guide at http://docs.hp.com.
DLPI Programmer’s Guide is posted at http://docs.hp.com


HP Data Link Provider Interface (DLPI) has enhancements that include NOSYNC STREAMS
synchronization level for improved performance and scalability for high speed links, online
deletion (OLD) of I/O card instances, and dynamic loading and unloading of LAN drivers
without reboot.

Data Link Provider Interface (DLPI) is an industry standard definition for message communications
to STREAMS-based network interface drivers. The HP implementation of the DLPI standard
conforms to the DLPI Version 2.0 specification. HP DLPI provides the core link layer infrastructure
for networking drivers and provides various extensions that enable feature rich and high
performance networking stacks on HP-UX.

On HP-UX 11i v3, DLPI enables Online Deletion (OLD) of I/O card instances claimed by LAN
drivers without a system reboot. It also enables LAN drivers as Dynamically Loadable Kernel
Modules (DLKMs), adding High Availability (HA) capabilities so that LAN drivers may be
dynamically loaded or unloaded without a system reboot.

There is a NOSYNC feature. In earlier versions of HP DLPI, the STREAMS synchronization used by
HP DLPI and other modules in the networking stack on HP-UX restricted the scalability of high
bandwidth links such as HP APA aggregates and 10 Gigabit. HP DLPI now uses the new
NOSYNC STREAMS synchronization level to provide vastly improved performance and
scalability for high speed links such as HP APA aggregates. For more details, see the HP Auto
Port Aggregation Performance and Scalability White Paper, posted at
http://docs.hp.com/en/7662/new-apa-white-paper.pdf

There are several changes for non-root execution of DLPI applications. Using HP DLPI RAW
mode service requires the PRIV_NETRAWACCESS privilege. Transmitting or receiving IPv4, IPv6
or ARP packets requires the PRIV_NETADMIN privilege. And various system administration tasks,
like resetting hardware statistics, now also require the PRIV_NETADMIN privilege. For information on
how these privileges may be granted to the affected applications, see HP-UX 11i Security
Containment Administrator's Guide, available at http://docs.hp.com.
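For illustration, granting one of these fine-grained privileges to a non-root DLPI application could use the Security Containment setfilexsec(1M) tool; the binary path, option letters, and privilege spelling below are assumptions to be verified against the Security Containment guide.

```shell
# Hypothetical example -- path and exact option/privilege spellings are
# assumptions. Allow a packet-capture tool to use DLPI RAW mode:
setfilexsec -P NETRAWACCESS /opt/tools/bin/pktcap

# Inspect the privileges now attached to the binary:
getfilexsec /opt/tools/bin/pktcap
```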


HP Virtual LAN (VLAN) on HP-UX 11i v3


HP's DLPI implementation delivers the Virtual LAN (VLAN) features based on
IEEE 802.1p/Q standards
• A VLAN is a logical or virtual network segment spanning multiple physical
network segments
• Provides ability to extend the network VLAN implementation into the host in the
form of VLAN interfaces
• VLAN interfaces allow configuring applications to utilize the traffic isolation
features of VLANs
HP VLAN on HP-UX 11i v3
• Support for HP-UX VLANS over APA aggregates or LAN-monitor failover groups
– Combines traffic isolation benefits of VLAN technology with HA and high
bandwidth capabilities of APA/LM technology
• SMH-Network Interfaces Configuration tool can configure HP-UX VLAN
– Offers a better, web-based GUI experience for performing administrative tasks
• New nwmgr command can be used to perform HP-UX VLAN operations and to
manage all LAN-based and IB-based Network interfaces.
Documentation
• vlan(7), nwmgr_vlan(1M), and nwmgr(1M) manpages
• VLAN white papers at http://docs.hp.com


New features in HP-UX VLAN on HP-UX 11i v3 include support for HP-UX VLANS over APA aggregates or
LAN-monitor failover groups, SMH-Network Interface Configuration support for Web-based VLAN
configuration, and nwmgr support for HP-UX VLANS.

DLPI is a de-facto STREAMS-based networking standard providing APIs for user space and kernel space
applications to access the data-link layer (layer 2 in the OSI model). The HP implementation of the DLPI
standard also provides various extensions that enable feature rich and high performance networking
stacks on HP-UX. HP's DLPI implementation delivers the Virtual LAN (VLAN) features based on IEEE
802.1p/Q standards.

A Virtual LAN (VLAN) is a logical or virtual network segment that can span multiple physical network
segments. The main benefit of a host-based VLAN product like HP-UX VLAN is the ability to extend the
network VLAN implementation into the host in the form of VLAN interfaces. VLAN interfaces let you
configure applications to utilize the traffic isolation features of VLANs.

There is support for enabling HP-UX VLAN over Auto-Port Aggregation (APA) aggregates or LAN-Monitor
failover groups.

The SMH-Network Interfaces Configuration tool may be used to configure HP-UX VLAN. This offers a
better, web-based GUI experience for performing administrative tasks. The new nwmgr command can be
used to perform HP-UX VLAN operations and to manage all LAN-based and IB-based network
interfaces.

Prior to HP-UX 11i v3, HP-UX VLANs were supported only over physical interfaces. At HP-UX 11i v3,
support for HP-UX VLANs over APA aggregates or LAN Monitor failover groups combines the traffic isolation benefits
of VLAN technology with the high availability and high bandwidth capabilities of APA/LM technology on
HP-UX servers.
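Creating a VLAN interface over an existing APA aggregate might be sketched as follows; the VLAN instance (5000), VLAN ID (20), and underlying aggregate (PPA 900) are assumptions, with nwmgr_vlan(1M) as the authority for the real attribute names.

```shell
# Illustrative sketch only -- instance, VLAN ID, and PPA are assumptions.
# Create VLAN interface lan5000 with VLAN ID 20 over aggregate lan900:
nwmgr -a -S vlan -I 5000 -A vlanid=20 -A ppa=900

# Display the new VLAN interface:
nwmgr -v -c lan5000 -S vlan
```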

Refer to the vlan(7), nwmgr_vlan(1M), and nwmgr(1M) manpages. There are also VLAN white papers at
http://docs.hp.com. They are “Planning and Implementing VLANs with HP-UX” and “HP-UX VLAN
Administrator’s Guide”.


Internet Services on HP-UX 11i v3


IS delivers and supports essential networking services for interoperating in a
network based on the TCP/IP framework
• All IS products are part of base OS
These networking services include
• FTP
• R-commands such as rcp, rlogin, and remsh
• Mailers such as mailx, elm, and sendmail
• DNS/BIND
• Routing services such as gated, mrouted and ramD
IS product is split into multiple smaller products based on functionality
• DHCPv6, DHCPv4, sendmail, NameService, Gated-Mrouted, RAMD, FTP, NTP,
TCPWrappers, SLP
– Products can be deselected during installation or be removed individually
• Except for sendmail, which is required and always loaded
– SLP is optional and will only be installed if the user specifically selects it
See HP-UX Internet Services Administrator’s Guide
• http://docs.hp.com/en/netcom.html#Internet%20Services


Internet Services delivers and supports the networking services considered essential to HP-UX
system administrators interoperating in a network based on the TCP/IP framework. These
networking services include FTP; r-commands such as rcp, rlogin, and remsh; mailers such as
mailx, elm, and sendmail; DNS/BIND; and routing services such as gated, mrouted and ramD.

The Internet Services product is split into multiple products based on product functionality. All of
these products are part of the HP-UX base operating system in HP-UX 11i v3. According to the
new architecture of the Internet Services suite of products, the Internet Services product is split
into multiple smaller products: DHCPv6, DHCPv4, sendmail, NameService, Gated-
Mrouted, RAMD, FTP, NTP, TCPWrappers, and SLP. These products can be deselected during
installation or can be removed individually, except for sendmail, which is required and always
loaded. Sendmail cannot be deselected during installation. Also, SLP is optional; therefore, it will
only be installed if the customer specifically selects it.
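Because the suite is split into separately removable products, an administrator could list and remove individual products with the standard SD-UX tools; the product tag below is an assumption, so list the actual tags first.

```shell
# List installed products to find the exact Internet Services tags:
swlist -l product | more

# Remove one optional product by its tag (tag shown is an assumption):
swremove TCPWrappers
```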

There is an HP-UX Internet Services Administrator’s Guide, available at


http://docs.hp.com/en/netcom.html#Internet%20Services


Internet Services Products Updates in HP-UX 11i v3


DHCP server supports the PXE protocol
• Can support any PXE client now
• Itanium-based firmware comes with PXE clients
All IS products support 256 bytes of username/hostname/uname
• Except for DHCP and rwhod because of protocol limitations
• Long uname/hostname/username enables interoperation with other vendor OSs
Networking parts of libc are UNIX 2003 compliant
• Eases the source code migration across different UNIX derivatives.
IPv6 support is added to tftp
Support of IPv6 routing protocols like RIPng, BGP+ and IS-IS
• Eases customer deployment of HP-UX in IPv6 environment
Telnet uses error management tool
New sendmail version, Sendmail-8.13 includes
• TLS support, Milter API's, new maps like socket and dns
• LDAP URL support, better security
Support for SAFeR Plus
Support of Multimedia Streaming protocol framework
• Helps customers build their own Media Server Application
• Other web release products like DHCPv6 and RAMPIPv6 are also available.


The various Internet Services products on HP-UX 11i v3 have been updated.

The DHCP server supports the PXE protocol, so that it can support any PXE client now. The
Itanium-based firmware comes with PXE clients.

All of the Internet Services products support 256 bytes of username/hostname/uname, except
for the DHCP and rwhod products, due to protocol limitations. Enabling long
uname/hostname/username in HP-UX will make it possible to interoperate with other vendor
OSs.

The networking parts of libc are UNIX 2003 compliant. This should ease the source code
migration across different UNIX derivatives.

IPv6 support is added to tftp. There is support of IPv6 routing protocols like RIPng, BGP+ and IS-
IS. Support of IPv6 routing protocol will help ease customer deployment of HP-UX in IPv6
environment.

Telnet uses the error management tool, which should reduce support calls.

The new version of sendmail, sendmail-8.13, includes TLS support, Milter API's, new maps like
socket and dns, LDAP URL support and better security.

There is support for SAFeR Plus.

There is support for the Multimedia Streaming protocol framework, which will help customers build
their own Media Server Application. Other web release products like DHCPv6 and RAMPIPv6
are also available.


PPPv6 (PPPoE) on HP-UX 11i v3


PPPoE allows connecting multiple hosts at a remote location through
the same customer access device
• Reduces cost of providing dial-up services using Point-to-Point Protocol
(PPP)
HP-UX 11i v3 PPPoE
• Handles IPv6 datagrams in addition to IPv4 datagrams
• Provides all the required connectivity to end-users from a remote network
• Several defect fixes
• IPv6 support.
See Documents at http://docs.hp.com
• Installing and Administering PPP
• PPPoE/v6 Administrator’s Guide
• HP-UX PPP Enhancements - PPPoE and PPPv6 for TOUR 2.0
• HP-UX PPP Enhancements - PPPoE and PPPv6 for TOUR 1.0


HP-UX PPPv6 PPPoE allows you to connect multiple hosts at a remote location through the same
customer access device, reducing the cost of providing dial-up services using Point-to-Point
Protocol (PPP). The key function of the HP-UX PPPv6 software is to handle IPv6 datagrams in
addition to IPv4 datagrams and to provide all the required connectivity to end-users from a
remote network.

There are several defect fixes and there is IPv6 support.

The following documents are available at


http://docs.hp.com/en/netcom.html#InternetTransport:
• Installing and Administering PPP
• PPPoE/v6 Administrator’s Guide
• HP-UX PPP Enhancements - PPPoE and PPPv6 for TOUR 2.0
• HP-UX PPP Enhancements - PPPoE and PPPv6 for TOUR 1.0


HP Web Server Suite v2.16 on HP-UX 11i v3


HP-UX Web Server Suite, version 2.16, is a free product available for the HP-UX platform
• Contains key software products necessary to deploy, manage, and implement a mission
critical web server
• Components are HP-UX Apache-based Web Server, HP-UX Tomcat-based Servlet Engine,
HP-UX Webmin-based Admin, and HP-UX XML Web Server Tools
Apache is upgraded to 2.0.58.00
Tomcat is upgraded to 5.5.9.04
Webmin is upgraded to 1.070.08
HP-UX XML Web Server Tools is unchanged for the initial release of HP-UX 11i v3
Documentation
• See HP-UX Web Server Suite release notes for details
• Bundled documentation, such as Release Notes, Admin Guides, User Guides, Migration
Guides and FAQs, install into /opt/hpws/hp_docs
• Browse to http://yourserver.com/hp_docs on the appropriate port
– For Webmin on port 10000 the URL should be http://yourserver.com:10000/hp_docs
Latest information is on the product Web site at http://www.hp.com/go/webserver


The HP-UX Web Server Suite, version 2.16, is a free product available for the HP-UX platform. It contains
key software products necessary to deploy, manage, and implement a mission critical web server. The
following components can be separately installed: HP-UX Apache-based Web Server, HP-UX Tomcat-
based Servlet Engine, HP-UX Webmin-based Admin, and HP-UX XML Web Server Tools.

HP enhances the software in the areas of performance, encryption, reliability, customization, and
administration. HP ensures the suite of products works together with the HP-UX 11.x operating environment.
Additionally Oracle, BEA, Siebel and other application vendors have developed application plug-ins for
the HP-UX Web Server Suite. The different components of the HP-UX Web Server Suite demonstrate
leadership in the following areas: Reliability, Availability, Serviceability, Internet and Web Application
Services, Scalability, Directory and Security Services.

HP-UX Web Server Suite, version 2.16, includes the following changes. Apache is upgraded to
2.0.58.00. HP-UX Tomcat-based Servlet Engine provides customers with Java-based extensions for dynamic
content generation via Servlets and JavaServer Pages (JSPs). Tomcat is upgraded to 5.5.9.04. HP-UX
Webmin-based Admin is a configuration and administration GUI with extensive enhancements for the HP-
UX Apache-based Web Server. Webmin is upgraded to 1.070.08. HP-UX XML Web Server Tools is a
collection of Java-based XML tools used for XML parsing, stylesheet/XSL processing, web publishing,
and image translation, drawn from the open source projects Xerces, Xalan, Cocoon, FOP, and Batik. The HP-UX
XML Web Server Tools is unchanged for the initial release of HP-UX 11i v3. These components are based
on software developed by the Apache Software Foundation (http://apache.org) and Webmin
(http://www.webmin.com/).

Please see the HP-UX Web Server Suite release notes for details. Also, bundled documentation, such as
Release Notes, Admin Guides, User Guides, Migration Guides and FAQs, install into
/opt/hpws/hp_docs. You can access these documents through the HP-UX Apache-based Web Server, HP-
UX Tomcat-based Servlet Engine, or HP-UX Webmin-based Admin by browsing to
http://yourserver.com/hp_docs on the appropriate port. For example, for Webmin on port 10000 the
URL should be http://yourserver.com:10000/hp_docs. Also, note that shared documentation, such as
Migration Guides and FAQs, are located at /opt/hpws/hp_docs and are included in the HP-UX Webmin-
based Admin product. The latest information can also be found on the product Web site:
http://www.hp.com/go/webserver


Security Module
Security Overview
Bastille 3.0
Identity Management
Security Containment
Auditing
Encrypted Volume and File System
Host Intrusion Detection System 4.0
IPFilter
IPSec
OpenSSL
Secure Shell

The Security module will begin with a security overview. Then, we will cover Bastille 3.0,
Identity Management, Security Containment, Auditing, Encrypted Volume and File System,
Host Intrusion Detection System (HIDS) 4.0, IPFilter, IPSec, OpenSSL, and Secure Shell.


Security Overview


Customer Examples
Customer Situation:
Enterprise customers:
• Take security threats seriously
• Must comply with regulations and audits
• Have to deal with increasing identity management/remote device connectivity issues
As a result, pain points include:
• Compromised productivity when security is breached
  – Unplanned downtime is costly
  – Failed compliance audits
  – Bad publicity
• Potential loss of shareholder value

Key Benefits:
• Layered security with in-depth protection
• Lower operational costs – security built into the OE
• Reduced risk and increased adherence to compliance standards
• Reduced time to implementation and lower IT costs
• Fully supported security under HP-UX 11i support agreement
• Easy to download and implement

HP-UX reduces risk from threats, simplifies identity management, and
is part of the compliance solution


Computer security is a hot topic in today’s business world. Enterprise customers take security
threats seriously. They must deal with increasing identity management and remote device
connectivity issues. And, they must comply with regulations and audits. As a result, businesses
face several new dilemmas and losses resulting from breached security. Productivity is
compromised because unplanned downtime is always costly. Failed compliance audits are
costly, as is bad publicity. Potential shareholder loss is also a worry for publicly traded
companies.

In this section, we will see how HP-UX reduces risk from threats, simplifies identity management,
and is part of the compliance solution. There are many key security benefits on HP-UX 11i v3.
There is layered security with in-depth protection. Since security is built into the OEs, there is
lower operational and IT costs because there is reduced implementation time for security
measures. There are also additional security features that are easy to download and implement.
Security is fully supported under the HP-UX 11i support agreement. All of these lead to reduced
risk and increased adherence to compliance standards.


Security Directions – An Integrated Approach


What if HP-UX 11i v3 were a bank?

Network Security:
• IPSec
• Secure Shell
• SSL
• Kerberos
• AAA Server
• Directory (LDAP)

Intrusion Prevention:
• IPFilter
• Bastille
• PAM authentication
• LDAP-UX integration
• Security Bulletin currency (SPC)
• Install-time Security

Intrusion Detection & Analysis:
• Host-Based IDS
• Stack Buffer Overflow Protection
• NEW: Audit in standard & trusted mode

Mitigation:
• Compartments
• RBAC
• Fine-grained Privileges
• NEW: Encrypted Volume & FS
• Trusted Comp. Services


This slide illustrates the integrated approach that HP-UX 11i v3 takes toward managing security.

Network security is like an armored car that safely transports valuable resources over public
thoroughfares, which is analogous to private data traveling through the Internet. HP-UX network
security features include IPSec, Secure Shell, SSL, Kerberos, AAA Server, and LDAP.

Intrusion prevention is like a bank where all of the valuables are kept securely locked away and
access is only provided to those with codes, keys, and proper identification and authorization.
This is analogous to only allowing access to computers for properly identified and authenticated
users and, then, only allowing access to the programs that they are allowed to use. HP-UX
intrusion prevention features include IPFilter, Bastille, PAM authentication, LDAP-UX integration,
Security Bulletin currency (SPC) (Security Patch Check), and Install-time Security.

Intrusion detection and analysis is analogous to having both physical monitoring systems and
audits in a bank. In HP-UX, there are host-based intrusion detection systems and stack buffer
overflow protection, which prevents one flawed subsystem from overwriting, or accessing,
another’s private data. New in HP-UX is the ability to audit in both standard and trusted mode.

Finally, mitigation is concerned with lessening the severity of any security breach.
Compartmentalization limits the valuables that an intruder can access, analogous to having
multiple vaults in a bank. RBAC stands for Role Based Access Control, a systematic way
of granting users in different roles access to only the programs appropriate to those roles.
HP-UX 11i v3 also implements fine-grained privileges. There is a new ability to encrypt file
systems and volumes. And, HP-UX provides trusted computing services.


Common Criteria Certification


What is it?
• An independent analysis of a product (e.g. HP-UX) and the set of security features that it provides
• Internationally recognized security standard
• Composed of 2 major elements:
  – Protection Profile: The set of requirements that the product meets
  – Assurance Level (EAL): How rigorously the product was designed, developed, documented, tested, and certified
Is HP-UX certified?
• HP-UX 11i v1 is certified at EAL4-CAPP (Controlled Access Protection Profile)
• HP-UX 11i v2 is certified at EAL4+ CAPP, RBACPP (Role-based Access Control Protection Profile)
• HP-UX 11i v3 certification is imminent


Common Criteria certification is an independent analysis of a product, such as the HP-UX
operating system, and the set of security features that it provides. It is an internationally
recognized security standard that is composed of two major elements. A Protection Profile is the
set of requirements that the product meets. And, the Evaluation Assurance Level, or EAL, is how
rigorously the product was designed, developed, documented, tested, and certified.

HP-UX 11i v1 is certified at EAL4-CAPP (Controlled Access Protection Profile). HP-UX 11i v2 is
certified at EAL4+ CAPP, RBACPP (Role-based Access Control Protection Profile). And, HP-UX 11i
v3 certification is imminent.


Increasing Threat Velocity


Four Factors at work
• More Vulnerabilities Discovered = More Attacks Reported
• Evolving set of players (bad guys)
• Attacks Sooner and Faster
• Attacks: ease-of-use & sophistication

[Figure: threat players plotted by resources versus effectiveness, ranging from
individuals and hacker groups, through terrorist networks, hostile-state sponsored
espionage, corporate espionage, organized crime, and industrial-state sponsored
espionage, up to advanced information warfare states, with insiders cutting across
the scale.]

There are four major factors at work to increase security threat velocity. These include the fact
that as more vulnerabilities are discovered, more attacks are reported. There is an ever-evolving
set of malicious hackers who create attacks sooner and faster. Sophisticated malicious hackers
are also creating attacks that are easier to use by non-sophisticated hackers like script kiddies.


Information Security Maturity

[Figure: security program maturity curve over time, through four phases
(Blissful Ignorance, Awareness Phase, Corrective Phase, and Operations Excellence
Phase), holding roughly 30%, 50%, 15%, and 5% of organizations respectively.
Stages along the curve: Establish (or Re-Establish) Security Team; Initiate
Strategic Program; Assess Current State; Develop New Policy Set; Institute
Processes; Design Architecture; Conclude "Catch-Up" Projects; Track Technology
and Business Change; Continuous Process Improvement.
NOTE: Population distributions represent typical, large Global 2000-type
organizations. Source: Gartner, May 2, 2006]

The figure shows the relative maturity of organizations with regard to security programs. Key
stages are marked along the maturity curve, starting with an assessment of the current state and
followed by the establishment of a security team. As maturity increases, the initiation of a
strategic program marks the beginning of the first significant improvement in information
security. As part of this program, domains and the associated trust levels (or security baselines)
are created that will characterize a mature and useful program.

Organizations achieve genuine progress when they design an architecture that supports
domains and trust. Instituting processes moves organizations out of the ad hoc, reactive mode
and into the proactive mode. Processes are repeatable, survivable, measurable and can be
improved on an ongoing basis.

At the top of the curve, organizations have reached a maturity level where they have organized
all of the processes and can focus on improving good practice. Gartner estimates that 80
percent of organizations are in the initiation phase, or the lowest levels of this maturity, and that
15 percent are working hard to mature their processes. The figure indicates that 5 percent of
organizations have achieved the highest levels of maturity, but Gartner estimates that less than 5
percent of organizations have achieved this level in practice.


Motivation for Multiple Defenses


Protect against known security defects
• Security Bulletin Currency
– Security Patch Check->Software Assistant
• SWA is HP Confidential
Protect against unknown vulnerabilities
• Configuration Hardening and Lockdown with Bastille and Install-Time
Security
– Turn off unneeded services
– Tighten needed services
• System Minimization
– Only install what you need
• Reduces need to monitor and apply security bulletins
– Ignite/UX: Always Installed / Default Installed / Selectable
– HP-UX 11i v3 significantly reduces the size of Always Installed
• HP Confidential
Ability to detect and respond to attack
• Protect->Detect->React -> Intrusion Detection System


There are many reasons to have multiple defenses in HP-UX. For example, to protect
against known security defects, HP-UX provides security bulletin currency checking; the Security
Patch Check tool is evolving into the Software Assistant (SWA). SWA is HP Confidential.

To protect against unknown vulnerabilities, HP-UX provides configuration hardening and
lockdown through the Bastille and Install-Time Security systems, which turn off unneeded services
and tighten security on needed services. Another way to protect against unknown vulnerabilities is to
reduce the size of the system by only installing required packages. This results in a reduced need
to monitor and apply security bulletins. Ignite-UX has packages categorized as Always Installed,
Default Installed, or Selectable. HP-UX 11i v3 significantly reduces the size of the Always
Installed packages. This information is HP Confidential.

HP-UX also provides an Intrusion Detection System, giving users the ability to detect and
respond to an attack.


Bastille 3.0


The first security feature that we will cover is Bastille 3.0.


Bastille on HP-UX Overview


Bastille is a security hardening and lockdown tool
• Provides customized lockdown on a system-by-system basis
• Originally developed by open source community
• Works with Install-Time Security and Security Patch Check tools
HP-UX Bastille
• Bastille is the first to provide an intuitive wizard-style interface
– Easy to use by non-security experts
• Allows even inexperienced security administrators to make appropriate
security decisions and tradeoffs
• Wizard educates about security issues.
– Saves experienced users time with its supported and tested
configuration changes
• Locks down the HP-UX operating system
– Configures daemons and system settings to be more secure.
– Optimizes the tradeoff between security, usability and functionality
(e.g. better security with service disabled)
– Incorporates features from a number of popular processes and
checklists

HP-UX Bastille is a security hardening/lockdown tool which can be used to enhance the security
of the HP-UX operating system. It provides customized lockdown on a system-by-system basis by
encoding functionality similar to the Bastion Host and other hardening/lockdown checklists.

Bastille was originally developed by the open source community for use on Linux systems. HP is
contributing by providing Bastille on HP-UX. This tool, along with Install-Time Security (ITS) and
Security Patch Check (SPC), introduces new, out-of-the-box security functionality. HP-UX Bastille
has been available on the HP-UX 11i v2 OEs since September 2004 on the 11.23 PI release.

Bastille is the first security tool to provide an intuitive wizard-style interface, which is easy to use
by non-security experts. It allows even inexperienced security administrators to make appropriate
security decisions and tradeoffs. The wizard also educates and provides information about
security issues. Bastille saves experienced users time with its supported and tested configuration
changes.

HP-UX Bastille locks down the HP-UX operating system. It configures daemons and system
settings to be more secure. It optimizes the tradeoff between security, usability and functionality.
If a service is disabled, it cannot provide a security hole or vulnerability. Bastille incorporates
features from a number of popular processes and checklists.


Benefits of Bastille on HP-UX


Efficient / Easy to use
• Bastille can easily configure security for a variety of uses
– Mail or Web server
– Management server
• Systems Insight Manager has default configuration
– General purpose locked down server
Replication of lockdown config across multiple machines
• Direct use of the configuration engine
– --os option
Lockdown-ratchet
• Prevents security “accidents”
• Will not unlock (loosen) system unless -r revert function is used
• Will add additional incremental security
Educational
• Interface educates the security novice
• Speeds up the expert


Bastille is both easy and efficient to use. It can easily configure security for a variety of uses,
such as a mail server, web server, or general purpose locked down server. Systems Insight
Manager also comes with a default Bastille configuration to assist in creating a secure
management server.

Bastille allows for replication of lockdown configurations across multiple machines through
direct use of the configuration engine; see the --os option.

Bastille provides a lockdown-ratchet to prevent security “accidents”. It will not unlock, or loosen,
a system unless the -r revert function is used, but it will add additional incremental security.

Finally, Bastille also provides an educational interface for the security novice. And, it helps
speed up the expert.
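The replication and ratchet behavior above can be sketched as a short session. This is a hedged example: the config file location and the -b batch flag follow the HP-UX Bastille documentation, but verify both against bastille(1M) on your release.

```shell
# Copy a tested lockdown policy from a reference system into Bastille's
# standard config location, then apply it non-interactively:
rcp refserver:/etc/opt/sec_mgmt/bastille/config /etc/opt/sec_mgmt/bastille/config
bastille -b

# Revert the system to its pre-Bastille state -- the only supported way
# to loosen a locked-down system:
bastille -r
```

Because of the lockdown-ratchet, re-applying a weaker config will not loosen settings; only the revert shown above does.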


Bastille 3.0 Includes Bastille 2.1 Features


Configures:
• System daemons, kernel, OS settings, network, and software
• Removes risk associated with unused features
Disables:
• Unneeded services, such as echo and finger
• Lowers patch urgency for disabled products
Creates:
• chroot “jails”
• Provides additional security layer for Internet services such as web and Domain Name Service (DNS)
Provides Safety Net (Revert):
• Bastille configuration can revert to the Pre-Bastille state
Checks Bulletin Currency Level:
• Configures Security Patch Check to run automatically
Firewalls:
• Configures Simple IPFilter firewall
• Comprehensive “deny all” inbound


Since Bastille first came out on HP-UX 11i v2, it has gone through updated versions. HP-UX 11i
v3 has Bastille 3.0 which also includes key features from Bastille 2.1 that are new since the
initial Bastille release.

Bastille configures system daemons, kernel, OS settings, network, and software. It removes risk
associated with unused features by disabling unneeded services, such as echo and finger. It
lowers patch urgency for disabled products. It creates chroot “jails”, and it provides an
additional security layer for Internet services such as web and Domain Name Service (DNS).

Bastille provides a safety net that allows the user to revert to the Pre-Bastille state. It configures
Security Patch Check to run automatically and checks the Bulletin Currency Level. Bastille also
works with firewalls. It configures Simple IPFilter firewall and provides a comprehensive “deny
all” inbound rule set.
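To give a feel for the “deny all” inbound policy, a generic IPFilter rule set of that shape looks like the following. This is an illustrative sketch, not the exact rules Bastille generates:

```shell
# /etc/opt/ipf/ipf.conf -- minimal deny-all-inbound sketch
# Allow all outbound traffic and keep state so replies are let back in:
pass out quick all keep state
# Block (and log) any unsolicited inbound traffic:
block in log all
```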


Bastille 3.0 on HP-UX 11i v3 (1 of 2)


Bastille drift analysis
• Lockdown drift reporting
– Reports when configuration does not match policy
• Is system still locked down conformant to intended policy?
– Visibility into undone hardening
• Useful for regulatory compliance
– Assists with regulatory SOX compliance
Improved ease of use
• No need to re-run Bastille config
– Re-running risked breaking system if changes had been intentional
Bastille Assessment (hardening reporting)
• What’s currently vulnerable that Bastille can harden?
• Detect if hardening configuration was tampered with
• Detect if system config activities loosened hardening


With the HP-UX 11i v3 release, HP-UX Bastille, version 3.0.x, includes several new
enhancements, capabilities, features, and benefits. These represent additional items that Bastille
is able to lock down, usability improvements, and a new ability for Bastille to ensure that each
cluster node has a consistent set of security settings.

A new feature called bastille_drift analysis is able to report when a system's hardening/lockdown
configuration no longer matches the policy that was applied with the Bastille config. This drift
report provides visibility into undone hardening to allow a planned response without risking
unexpected system breakage. It also assists with regulatory SOX compliance.

Now, it is easy to tell whether a system's hardening configuration remains consistent with what
was applied, without risking system changes. Previously, you would need to re-run the Bastille
config and risk breaking the system if changes had been intentional, which is impractical on
production systems.

Bastille 3.0 has a Bastille Assessment feature for hardening reporting. It runs an assessment to
discover what is currently vulnerable that Bastille can harden. Bastille makes it easier to detect
if the system hardening configuration has been tampered with, which enables planned
remediation. It is also easier to detect any unintentional side effects of system configuration
activities. For example, it can determine whether installing new software or patches loosened the
hardening configuration.
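A hypothetical session tying the assessment and drift features together. The command names come from the Bastille 3.0 documentation, but the exact flags are assumptions; check bastille(1M) and bastille_drift(1M):

```shell
# Report what is currently vulnerable that Bastille could harden,
# without changing the system:
bastille --assess

# Save a named baseline right after lockdown is applied...
bastille_drift --save_baseline post_lockdown

# ...and later report any drift of the live configuration from it:
bastille_drift --from_baseline post_lockdown
```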


Bastille 3.0 on HP-UX 11i v3 (2 of 2)


Acceptance of ICMP (ping) requests in Sec20MngDMZ level
• Greater compatibility with discovery and monitoring frameworks
Additional new Bastille hardening questions
Documentation
• HP-UX System Administrator's Guide: Security Management
• bastille (1M) manpage
– Add /opt/sec_mgmt/share/man/ to MANPATH
• Bastille User’s Guide
• HP-UX Bastille Web site
• HP-UX 11i v2 Installation and Update Guide
– Details on effects of levels
Support through HP's IT Resource Center’s HP-UX Security Forum


Bastille 3.0's acceptance of ICMP echo (ping) requests in the Sec20MngDMZ level allows for
greater compatibility with the discovery and monitoring done by management frameworks.

Bastille 3.0 has additional new hardening questions.

Though Bastille does not directly affect performance, its configuration of IPFilter settings (host-
based firewall) can cause a slight network performance decrease.

For additional information:

• HP-UX System Administrator's Guide: Security Management, available online at
  http://docs.hp.com/en/oshpux11iv3.html (specifically, Chapter 10)
• bastille (1M) manpage (add /opt/sec_mgmt/share/man/ to MANPATH)
• Bastille User’s Guide, delivered in /opt/sec_mgmt/bastille/docs/user_guide.txt
• HP-UX Bastille Web site at
  http://www.software.hp.com/portal/swdepot/displayProductInfo.do?productNumber=B6849AA
• HP-UX 11i v2 Installation and Update Guide, available online at
  http://docs.hp.com/en/oshpux11iv3.html

Support is also offered through HP's IT Resource Center’s HP-UX Security Forum at
http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FCategoryHome%2F1%2C%2C155%2C00.html&admit=-682735245+1157685896487+28353475


Bastille and Systems Insight Manager (SIM)


Systems Insight Manager (SIM)
• Default configuration
– SIM Central Management Server Bastille config file
Systems Insight Manager Integration
• Execute a Bastille config file on SIM managed nodes
• Run a Bastille Assessment
• Create a Named Security Baseline
• Compare Drift to Named Baseline


Bastille 3.0 is highly integrated with HP Systems Insight Manager, or SIM. SIM ships with a
default SIM Central Management Server Bastille config file. Bastille provides a tested SIM
CMS Policy: a pre-built hardened configuration for the HP Systems Insight Manager (SIM)
Central Management Server (CMS).

Bastille and SIM integration allow execution of a Bastille config file on SIM managed nodes.
SIM can be used to run a Bastille Assessment, create a Named Security Baseline, and compare
Drift to Named Baseline.


Using Bastille
Interactive use with X11 interface
• For better security, secure Bastille’s X11 traffic by tunneling it over ssh,
using application locally, or avoiding X11 by using either the SIM or
Ignite-UX integrations


This slide shows a screen shot of Bastille’s X11-based GUI. For better security, secure Bastille’s
X11 traffic by tunneling it over ssh, using the application locally, or avoiding X11 by using either
the SIM or Ignite-UX integrations.
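For instance, tunneling the GUI over ssh can be as simple as the following (the hostname is a placeholder, and this assumes X11 forwarding is enabled in the remote sshd configuration):

```shell
# -X forwards X11 inside the encrypted ssh channel, so the Bastille GUI
# displays locally without exposing raw X11 traffic on the network:
ssh -X root@managed-node bastille
```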


HP-UX Identity Management


In this section, we will look at Identity Management on HP-UX 11i v3.


Identity Management is High Priority

http://www.csoonline.com/poll/results.cfm?poll=3080


Customers are clearly motivated by identity management. Identity Management has
become an assumed part of the enterprise IT and security infrastructure. The total Identity
Management market size is over $2 billion in products and services. According to Forrester
Research 2006, Fortune 1000 companies will continue their identity management build-out, with
more than 60% purchasing product within 12 months. Additionally, over 50% are spending at
least $250K on products, over 15% are spending more than $1M on products, and the market is
growing at over 25%. The directory market size in products and services was $669M in 2005
and is projected to double by 2009.


Identity Management
“The management of the identity life cycle of entities”

• Data Repositories: Maintaining identity information
• Provisioning Services: Allocating or removing identity information across multiple repositories
• Authentication Services: Providing a centralized source for identity assurance and policy management
• Access Control (Authorization) Services: Providing a centralized source for defining rights to access various resources
• Federation Services: Providing single sign-on across multiple organizations by transfer of identity, assurances and attributes
• Auditing Services: Tracking user actions within an organization

[Graphic: identity storage and retrieval at the hub of these services. Courtesy of The Burton Group ©2003]



When we talk about Identity Management we are usually referring to a lifecycle of an identity
that begins with provisioning, integrates authentication and authorization systems and eventually
ends with de-provisioning, while maintaining an audit trail throughout the process.

As you can see, there are many components that make up this lifecycle, but we will focus on
those that are available free on HP-UX and directly integrate with system authentication and
authorization. The three areas we will explore are Data Repositories, Authentication Services,
and Access Control (Authorization) Services.


Where Does HP fit?


OpenView Solutions
• Select Access – Access Control /Authentication Service
• Select Audit – Auditing Service
• Select Federation – Federation Service
• Select Identity – Provisioning Service
Not free


The HP OpenView Identity Management product suite covers all of the Identity Management
components listed on the previous slide. These products are available on HP-UX as well as on
other operating systems, but they are not free.


Where Does HP-UX fit?


Data Repositories
• Red Hat Directory Server (LDAP Server)
• LDAP-UX Integration (LDAP Client)
Authentication Services
• HP-UX Identity Management Integration (HP IdMI)
• HP-UX Select Access for IdMI (HP SA-IdMI)
Authorization Services
• HP-UX Identity Management Integration (HP IdMI)
• HP-UX Select Access for IdMI (HP SA-IdMI)
• HP-UX Role Based Access Control (HP RBAC)
All are free on HP-UX


This sub-module will focus on the three areas that are important to HP-UX Identity Management.

For identity storage and retrieval, Red Hat Directory Server and LDAP-UX Integration provide
the server- and client-side LDAP data repository services.

On the authentication and authorization side, HP-UX Identity Management Integration and
HP-UX Select Access for HP-UX Identity Management support the client and server
authentication and authorization services, respectively. There is also HP-UX Role Based Access
Control for delegated system administration control.

All of these products are available free of charge on HP-UX.


LDAP Features of HP-UX


Red Hat Directory Server
• LDAP (Lightweight Directory Access Protocol) based Directory Server
• Centralized management of people and their “profiles”
• Open source based (Fedora)
• Multi Master replication
• SSL/TLS Support
• HP Supported

LDAP-UX Client Integration
• LDAP client for HP-UX
• Retrieve user, group and other system information from LDAP directory server
• LDAP authentication with LDAP v3 compliant directory servers


Directory servers have become a commodity. The Red Hat Directory Server is an open source
based directory server: Red Hat open sourced the old Netscape Directory Server through the
Fedora project after purchasing it from AOL. HP provides full support for it, from customer
contact to source code maintenance and delivery, and HP has a very close working relationship
with Red Hat.

The LDAP-UX Client Integration product allows HP-UX identities to be stored in and authenticated
against an LDAP directory server. The LDAP-UX client can be used with any LDAP v3 compliant
directory server, not just the Red Hat Directory Server. LDAP is probably the repository most
often used in Identity Management deployments. Both client and server products are available
on HP-UX.

HP Select Access for HP-IdMI


Select Access for HP-IdMI
• Centralized
  – Access control management
  – Authentication authority
  – Policy decision point
• Easy to use policy builder GUI
• No-fee version of the HP OpenView Select Access
  – For use with HP-IdMI only

[Figure: HP-UX Systems A, B, and C each run a Policy Enforcement Point that
consults Select Access for HP-IdMI; the identity store and security policy
reside in a Red Hat Directory Server on HP-UX.]


HP Select Access allows you to centralize your security policy management for resources within
the enterprise. The resources that Select Access can control access to could be just about
anything, from web enabled applications and web services to HP-UX resources.

Select Access provides an easy to use point and click management GUI called the Policy Builder
that allows user access rights to be changed with as little as one click.

HP Select Access for HP-IdMI is a no-fee version of the OpenView Select Access product that can
be used only with the HP-UX Identity Management Integration product to provide authentication
and authorization services for HP-UX specific resources only.

If you want to use Select Access to protect other resources, the full, for-pay version must be used.


HP-UX Identity Management Integration - IdMI

• Enables HP-UX system authentication and authorization integration with Select Access
• HP RBAC (Role Based Access Control) integration with Select Access
• Centralized HP-UX authentication and access control
• GUI configuration through SA Policy Builder

[Figure: an HP-UX client running the HP-IdMI Policy Enforcement Point queries
HP Select Access for HP-IdMI; the identity store and security policy reside
in a Red Hat Directory Server on HP-UX.]


HP-UX Identity Management Integration is the client component that allows the HP-UX operating
system to integrate authentication and authorization with Select Access, which can be either HP
SA for HP-IdMI or the standard OpenView Select Access product.

HP-UX IdMI allows users to be authenticated against Select Access through the PAM Select
Access module. Additionally, a plugin for HP Role Based Access Control, or RBAC, lets system
management authorization decisions be made based on authorizations created in Select Access.

The Select Access Policy builder GUI, shown on the upper right in the slide, is used to define
access rights for system authentication. Authorizations are enforced using HP RBAC.


HP Role Based Access Control - RBAC


• Delegated administration capabilities
– Without providing root access
– Without complicated rule configuration
• Integration with centralized policy management or local policy database
• Wrapper commands
– For RBAC un-aware programs
• Access Control Policy Switch API
– Integrates policy decision making into applications

[Slide diagram: an authorization request flows from the application to the policy
database, and an authorization reply flows back before the command executes]

177 March 2007

HP RBAC is an alternative to the traditional "all-or-nothing" root user model, which grants
permissions to the root user for all operations, and denies permissions to non-root users for
certain operations.

HP RBAC allows you to delegate subsets of root privileges to non-root users without giving them
root access.

HP RBAC is installed with pre-defined authorizations that allow for a quick deployment out of the
box. A wrapper command can be used for legacy applications that are not RBAC aware so that
users can run those commands with the appropriate privileges without giving them direct access
to those privileges.

HP RBAC uses a local configuration for authorization decisions, but can be redirected to Select
Access when deployed with HP-UX Identity Management Integration.
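
As a sketch of how a local HP RBAC policy might be built, the commands below create a role, an authorization, and a command mapping. The role name, user name, and exact option spellings here are illustrative assumptions; consult the roleadm(1M), authadm(1M), and cmdprivadm(1M) manpages for the authoritative syntax.

```shell
# Create a role and assign it to a user (names are examples)
roleadm add NetAdmin
roleadm assign ifox NetAdmin

# Define an authorization and grant it to the role
authadm add hpux.network.nfs.export
authadm assign NetAdmin hpux.network.nfs.export

# Map a privileged command to the authorization it requires,
# to be run with real and effective uid 0
cmdprivadm add cmd=/usr/sbin/exportfs op=hpux.network.nfs.export ruid=0 euid=0
```

These commands edit the local RBAC databases under /etc/rbac; when HP-UX IdMI is deployed, the authorization check itself is redirected to Select Access instead.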


HP-UX Administrator Scenario


IT call logged because user’s home directories are not being
exported from NFS Server

Remediation
• Login to NFS Server
• Export home directory file system
178 March 2007

In this scenario, a trouble ticket has been dispatched to an HP-UX administrator. The problem
reported is that users' home directories are no longer available. The administrator must log in to
the NFS server that serves the users' home directories and export the file system that holds
them.

The next few slides show how the components that we have covered come into play during the
administrator’s tasks to resolve this problem.


Login: Identity
User ifox logs into the NFS
server as their standard, non-
root, user account
User account information is
retrieved from the Red Hat
Directory Server via LDAP-UX

179 March 2007

Part of the login process on any HP-UX system is identifying a valid user, even before attempting
to authenticate the user. When the user attempts to log in, HP-UX retrieves the user information
from the Red Hat Directory Server using the LDAP-UX client.


Login: Authentication and Authorization


User authenticated by Select
Access if
✓ security policy allows it and
✓ correct password is supplied
PAM_SA/HP-UX IdMI
authenticates user based on
supplied password

180 March 2007

After we have identified the user, he or she is prompted for a password. PAM and the Identity
Management Integration product are then used to authenticate the user against Select Access.

Select Access first determines, based on the resource, whether the user is allowed access. In this
example, the user is allowed access to the resource, shown by the check mark at the intersection
of the user’s name and the resource, which in this case is hpux.pam.auth.sshd.

There are a couple of things to note. First, at the intersection of the user and resource, pointed to
by the lower green arrow, there is a grey check mark and a yellow box underneath it. This
indicates that access to the resource was inherited. In this case access was inherited based on a
role pointed to by the lower red arrow.

Now look at the intersection of the hpux.pam.auth resource on the left and the Select Auth
column on the top: the head icon is colored in. This means that in order to access the “resource”,
login via SSH, the user must perform an authentication.

At this point in the login process, the user has successfully proved he or she knows the password.
Now the system must determine if the user is AUTHORIZED to login. Just like the previous slide,
look to see if the user has access to a specific resource. This time the resource is
hpux.pam.account.sshd.

Looking at the intersection between the user and the resource (see the upper green arrow), notice
the gray check box indicating access is allowed and that it is inherited from the user's Role, HP-UX
Net Admin, indicated by the upper red arrow.

Finally, when you look at the intersection of the hpux.pam.account resource on the left and the
Select Auth Column on the top, the head is not colored in. This indicates that authentication is
not required to gain access to this resource.


Role Authorization
After a successful login the user now uses the privrun wrapper (RBAC)
command to execute the privileged exportfs program to export the
/home filesystem

181 March 2007

Now that the user is logged in as a non-root user, he or she needs to execute the exportfs
command, which normally requires root access.

The first attempt to execute the command fails with the message “cannot execute”. The user must
use the HP RBAC privrun wrapper command to execute the command with elevated privileges.
However, the administrator should not want just any user to be able to use privrun to execute
the exportfs command. So, before privrun will execute the command, it needs to determine if the
user is authorized.


Role Authorization (1 of 2)
#-----------------------------------------------------------------------------------------------
# Command            :Args: Authorizations                   :U/GID :Cmpt :Privs :Auth :Flags
#-----------------------------------------------------------------------------------------------
/usr/sbin/setboot    :dflt :(hpux.admin.boot.config,*)       :0/0// :dflt :dflt :dflt :
/usr/sbin/mkboot     :dflt :(hpux.admin.boot.make,*)         :0/0// :dflt :dflt :dflt :

/sbin/ipfs           :dflt :(hpux.network.filter.config,*)   :0/0// :dflt :dflt :dflt :
/sbin/ipfstat        :dflt :(hpux.network.filter.readstat,*) :0/0// :dflt :dflt :dflt :
/usr/sbin/exportfs   :dflt :(hpux.network.nfs.export,*)      :0/0// :dflt :dflt :dflt :

privrun looks up exportfs


command in local
command-priv database
Required authorization is
hpux.network.nfs.export

182 March 2007

The privrun wrapper program first looks up the command to be executed in a local database to
gather several pieces of information related to the request. Initially, privrun will find out what
authorization the user must have in order to execute the command. Then, it needs to know how
the command should be executed.

In this example, the user must have the hpux.network.nfs.export authorization. Then, the
command will be executed with a uid and gid of 0, which is root.


Role Authorization (2 of 2)

privrun (via HP-UX IdM)


sends authorization
request for the user to
Select Access
Select Access consults
security policy
• Replies with ALLOW or
DENY
183 March 2007

Using the plugin provided by HP-UX Identity Management Integration, the privrun command
sends a query to Select Access asking if the user jfox is allowed access to the
hpux.network.nfs.export authorization.

Looking at the Select Access Policy builder, notice that at the intersection of the user and the
resource is a gray check mark indicating the user is allowed, and that the access is inherited, in
this case, from the Role HP-UX NetAdmin, which has been granted access to all hpux.network
authorizations.

The privrun wrapper receives an ALLOW reply from Select Access and executes the exportfs
command as defined in the local database with any parameters the user supplied.

In this example privrun is executed three times. The first time determines which file systems are
exported, none in this case. The second time, it exports the /home file system. The third time it
verifies that the /home file system has been successfully exported.
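
The three invocations described above would look something like the following from the user's shell. The exact exportfs arguments are an assumption (behavior depends on the server's /etc/exports setup); see exportfs(1M) and privrun(1M).

```shell
# Run the privileged command through the RBAC wrapper:
privrun exportfs          # list currently exported file systems (none yet)
privrun exportfs /home    # export the /home file system
privrun exportfs          # verify that /home is now exported
```

Each invocation triggers a fresh authorization check, so access can be revoked centrally at any time.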


HP-UX Identity Management

[Slide diagram: HP-UX systems use HP IdMI/RBAC for authentication and authorization
against the Select Access Validator; the Validator reads Policy & Identities, and
LDAP-UX reads the HP-UX User Identity Store, from the Red Hat Directory Server]

184 March 2007

This slide graphically illustrates the HP-UX Identity Management triangle and the relationships
between the components covered in this sub-module.


SAFeR
Fine Grained Privileges
Compartments

185 March 2007

SAFeR is the Security Containment component of HP-UX. On HP-UX 11i v3, it offers Fine Grained
Privileges and Compartments.


Security Containment on HP-UX 11i v3 (1 of 3)


Security Containment Features on HP-UX 11i v3
• Compartments
• Fine-grained privileges
• Provide a highly secure operating environment
• No application modification required
• Both technologies are part of the core
Compartments
• Isolate unrelated resources on a system
– Prevents catastrophic system damage if one compartment is penetrated
• An application, including its binaries, data files and communication channels
used, has restricted access to resources outside its compartment
– Enforced by the kernel and cannot be overridden unless specifically
configured to do so
– If the application is compromised, it will not be able to damage other parts of
the system
• Optional feature
• Turning it on may result in a performance loss depending on how the
compartment rules are configured
– Typical loss is around 10% for non-trivial rule setup

186 March 2007

HP-UX 11i Security Containment provides two core technologies, compartments and fine-grained
privileges. Together, these components provide a highly secure operating environment without
requiring applications to be modified. Fine-grained privileges and compartments are now part of
the core.

Compartments isolate unrelated resources on a system to prevent catastrophic system damage if
one compartment is penetrated. When configured in a compartment, an application, including
its binaries, data files and communication channels used, has restricted access to resources
outside its compartment. This restriction is enforced by the HP-UX kernel and cannot be
overridden unless specifically configured to do so. If the application is compromised, it will not
be able to damage other parts of the system because it is isolated by the compartment
configuration.

The compartment feature is optional. Turning it on may result in a performance loss depending
on how the compartment rules are configured. A typical loss is around 10% for non-trivial rule
setup.
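
Enabling and activating compartments might look like the following sketch. The commands are real HP-UX 11i v3 tools, but option behavior should be confirmed against the cmpt_tune(1M) and compartments(4) manpages; compartment rule files themselves live under /etc/cmpt.

```shell
# Enable the compartments feature in the kernel
# (a reboot is required for the change to take effect)
cmpt_tune -e

# After defining rule files under /etc/cmpt, load (or reload) them
setrules

# Query whether compartments are currently enabled
cmpt_tune -q
```

Because rule evaluation happens in the kernel on each access, the complexity of the loaded rules is what drives the performance cost noted above.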


Security Containment on HP-UX 11i v3 (2 of 3)


Fine-Grained Privileges
• Traditional “all or nothing” administrative privileges
– Based on the effective UID of the process that is running
– If the process is running with the effective uid of 0, it is running
as root and is granted all privileges
• With fine-grained privileges
– Processes are granted only privileges needed for task
– Optionally granted only for the time needed to complete the
task
• Privilege-aware applications
– Elevate privilege to the required level for the operation
– Lower it after the operation completes

187 March 2007

Traditional UNIX operating systems grant “all or nothing” administrative privileges based on the
effective UID of the process that is running. If the process is running with the effective uid of 0, it
is running as root and is granted all privileges. With fine-grained privileges, processes are
granted only the privileges needed for the task and, optionally, only for the time needed to
complete the task. Applications that are privilege-aware can elevate their privilege to the
required level for the operation, and lower it after the operation completes.

Applications developed to use the fine-grained privilege feature are more secure than those
developed to the simpler, monolithic privilege model (for example, one requiring an effective
uid of 0). Customers can compartmentalize applications so that they use only pre-defined files,
IPCs, and network interfaces. Using Secure Resource Partitions, or SRP, a compartment can also
be restricted from using too many resources, where resources can be CPUs, disk bandwidth, etc.
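
For a binary that is not privilege-aware, fine-grained privileges can be attached to the executable itself. The sketch below uses setfilexsec(1M) and getfilexsec(1M); the privilege name, flags, and path are illustrative assumptions — check the manpages and privileges(5) for the exact option letters and privilege spellings.

```shell
# Grant only the privilege needed to bind reserved network ports
# to a hypothetical daemon binary (path is an example)
setfilexsec -a NETPRIVPORT /opt/myapp/bin/netdaemon

# Inspect the security attributes now recorded for the binary
getfilexsec /opt/myapp/bin/netdaemon
```

The point of the design is least privilege: the daemon gets NETPRIVPORT and nothing else, instead of the full power of uid 0.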


Security Containment on HP-UX 11i v3 (3 of 3)


Fine-Grained Privileges Continued
• Applications using fine-grained feature are more secure
– Compartmentalize applications to use only pre-defined files, IPCs, and
network interfaces
– Using Secure Resource Partitions, or SRP, a compartment can also be
restricted from using too many resources, where resources can be
CPUs, disk bandwidth, etc.
• The features are compatible
– Fine-grained privilege is implemented such that applications developed
to monolithic privilege model do not see any behavioral difference
• Fine-grained privilege is part of the kernel
– Cannot be turned off
– No performance loss
Documentation
• privileges(3), compartments(4), compartments(5), cmpt_tune(1M)
manpages

188 March 2007

The features are compatible: e.g., the fine-grained privilege is implemented such that
applications developed to the monolithic privilege model do not see any behavioral difference.

Fine-grained privilege is part of the kernel. It cannot be turned off. There is no performance loss.

For further information see the privileges(3), compartments(4), compartments(5), and


cmpt_tune(1M) manpages.


Auditing

189 March 2007

This sub-module covers auditing on HP-UX 11i v3.


Auditing on HP-UX 11i v3 Overview


Auditing system
• Records instances of access by subjects to objects
• Acts as a deterrent against system abuses
– Detects any (repeated) attempts to bypass the protection mechanism
– Detects any misuses of privileges
• Exposes potential security weaknesses in the system
Auditing System on HP-UX 11i v3 compared to HP-UX 11i v1 September 2005 overview
• Auditing subsystem now works without converting the system to trusted mode
• Standard mode audit user selection information is stored in a per-user configuration user
database
– Similar to /tcb in Trusted Mode
• The userdbset command specifies which users are to be audited in standard mode
HP-UX Auditing System on HP-UX 11i v3 compared to HP-UX 11i v2 June 2006
• Standard Mode Auditing is part of core products
• Multi-threaded kernel audit daemon
– Dedicated to logging data into a configurable number of files for better performance
• Collected audit data are more comprehensive
Refer to userdb(4) manpage

190 March 2007

The purpose of the auditing system is to record instances of access by subjects to objects and to
allow detection of any (repeated) attempts to bypass the protection mechanism and any misuses
of privileges, thus acting as a deterrent against system abuses and exposing potential security
weaknesses in the system.

The auditing system has been enhanced in a number of ways.

Enhancements to the HP-UX Auditing System on HP-UX 11i v3 compared to HP-UX 11i v1
September 2005 include the following. The auditing subsystem now works without
converting the system to trusted mode. In standard mode, the audit user selection information is
stored in a per-user configuration user database, which is similar to /tcb in Trusted Mode. The
userdbset command specifies which users are to be audited in standard mode.

Enhancements to the HP-UX Auditing System on HP-UX 11i v3 compared to HP-UX 11i v2 June
2006 include the following. Standard Mode Auditing is now part of the core products. A multi-
threaded kernel audit daemon is now dedicated to logging data into a configurable number of
files for better performance. Collected audit data are more comprehensive.

Refer to the userdb(4) manpage.
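
In standard mode, selecting a user for auditing might look like the following. The username is an example; AUDIT_FLAG is the userdb attribute described in userdbset(1M) and userdb(4).

```shell
# Mark a user for auditing in the standard-mode user database
# (replaces the trusted-mode audusr command)
userdbset -u opadmin AUDIT_FLAG=1

# Display the user's current userdb attributes to confirm
userdbget -u opadmin
```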


Auditing on HP-UX 11i v3


Compared to HP-UX 11i v1 0509 (1 of 2)
Auditing works without converting the system to trusted mode
In standard mode, audit user selection information is stored in a per-user
configuration user database
• Similar to /tcb in Trusted Mode
• userdbset command specifies which users are to be audited
– Equivalent to audusr command in trusted mode
– Refer to the userdbset (1M) manpage.
• Refer to the userdb(4) manpage
New audit tags for audit identity
• Uniquely identifies each login session and responsible user
• For applications that use PAM for authentication and session management
– pam_hpsec PAM module transparently sets audit tag info
• Refer to pam_hpsec(5) manpage
New libsec routines: getauduser() and setauduser()
• Similar to getaudid() and setaudid() system calls
• Manage the audit tags
• Refer to getauduser(3), setauduser(3), and audit(5) manpages
191 March 2007

The auditing system on HP-UX 11i v3 has been enhanced in a number of ways compared to HP-
UX 11i v1 September 2005.

The auditing subsystem now works without converting the system to trusted mode.

The standard mode audit user selection information is stored in a per-user configuration user
database, which is similar to /tcb in Trusted Mode. Refer to the userdb(4) manpage. The
userdbset command specifies which users are to be audited in standard mode. This functionality
is equivalent to the audusr command in trusted mode. Refer to the userdbset (1M) manpage.

A more flexible form of audit identity called audit tags is introduced. It uniquely identifies each
login session and responsible user. For applications that use PAM for authentication and session
management, the pam_hpsec PAM module transparently handles the setting of the audit tag
information. Refer to the pam_hpsec (5) manpage.

Two new libsec routines, getauduser() and setauduser(), are similar to the getaudid() and
setaudid() system calls. The new libsec routines manage the audit tags. Refer to the getauduser
(3), setauduser (3), and audit (5) manpages.


Auditing on HP-UX 11i v3


Compared to HP-UX 11i v1 0509 (2 of 2)
Multi-threaded kernel audit daemon logs data
• Uses a configurable number of files for better performance
• See -N option in audsys(1M) manpage
Collected audit data
• More comprehensive
• Unified data source for both C2 level auditing and HIDS/9000 product
– But configured differently
• Audisp output
– Modified to be more self-descriptive and more friendly to text process tools
• Memory consumption for audit data is configurable
– See audit_memory_usage(5) and diskaudit_flush_interval(5) manpage
Audit overflow monitor daemon
• Can auto-switch audit trails
• Runs an external command at each auto-switch point
• See audomon(1M) manpage
Audit events or profiles can be customized
• See audit.conf(4) manpage
Audit system tries to track current working and root directory for each process
• Reports the full path name of a given file
• See audit_track_paths(5) manpage

192 March 2007

A multi-threaded kernel audit daemon is now dedicated to logging data into a configurable
number of files for better performance. See the -N option in the audsys(1M) manpage.

Collected audit data are more comprehensive. The data source for both C2 level auditing and the
HIDS/9000 product is now unified, but they are configured differently. Audisp output is modified
to be more self-descriptive and friendlier to text processing tools. Memory consumption for audit
data is now configurable. See the audit_memory_usage(5) and diskaudit_flush_interval(5) manpages.

The audit overflow monitor daemon is now capable of auto-switching audit trails and running an
external command at each auto-switch point. See the audomon(1M) manpage.

Audit events or profiles can be customized. See audit.conf (4) manpage.

Audit system now tries to track the current working and root directory for each process and
reports the full path name of a given file. See audit_track_paths (5) manpage.
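
Putting a few of these options together, starting the audit system with multiple writer files might look like this. The sizes, paths, and thread count are examples; verify the exact option semantics in audsys(1M).

```shell
# Start auditing; with -N 2 the audit trail is a directory
# containing two files written in parallel for throughput
audsys -n -N 2 -c /var/.audit/trail1 -s 10240

# Or force single-file compatibility mode for legacy tooling
# (compatibility mode is slated for obsolescence after 11i v3)
audsys -n -N 0 -c /var/.audit/trail1.log -s 10240
```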


Auditing on HP-UX 11i v3


Compared to HP-UX 11i v2 0606
Standard mode auditing is now part of core
Multi-threaded kernel audit daemon logs data
• Uses a configurable number of files for better performance
• See -N option in audsys(1M) manpage
Collected audit data
• More comprehensive
• Audisp output
– Modified to be more self-descriptive and more friendly to text process tools
• Memory consumption for audit data is configurable
– See audit_memory_usage(5) and diskaudit_flush_interval(5) manpage
Audit overflow monitor daemon
• Can auto-switch audit trails
• Runs an external command at each auto-switch point
• See audomon(1M) manpage
Audit events or profiles can be customized
• See audit.conf(4) manpage
Audit system tries to track current working and root directory for each process
• Reports the full path name of a given file
• See audit_track_paths(5) manpage

193 March 2007

The auditing system on HP-UX 11i v3 has been enhanced in a number of ways compared to HP-
UX 11i v2 June 2006.

Standard Mode Auditing is now part of core products.

A multi-threaded kernel audit daemon is now dedicated to logging data into a configurable
number of files for better performance. See the -N option in the audsys(1M) manpage.

Collected audit data are more comprehensive. Audisp output is modified to be more self-
descriptive and friendlier to text processing tools. Memory consumption for audit data is now
configurable. See the audit_memory_usage(5) and diskaudit_flush_interval(5) manpages.

The audit overflow monitor daemon is now capable of auto-switching audit trails and running an
external command at each auto-switch point. See the audomon(1M) manpage.

Audit events or profiles can be customized. See audit.conf (4) manpage.

The audit system now tries to track the current working and root directory for each process, and
reports the full path name of a given file. See the audit_track_paths(5) manpage.


Auditing on HP-UX 11i v3


Impact and Compatibility
Impact
• Run auditing without converting system to trusted mode
• Must modify applications or scripts that process audisp output data
– Due to changes in audisp output
• Each audit trail is identified as a directory instead of a file if running in regular mode
– Modify applications or scripts that handle audit trails as files to handle it as a directory
– Or force audit system to use compatibility mode using -N 0 option
• See -N option in audsys(1M) manpage
• Compatibility mode will be obsoleted in any future releases after HP-UX 11i v3
• Audit overflow management now requires less manual intervention
– See -X option in audomon (1M) manpage
• Write script to run at each auto-switch point to archive/backup audit trails
• Less performance impact when turning on auditing
Compatibility
• Audit commands audsys, audisp, audevent and audomon work the same with added new
options
• The userdbset(1M) command
– Used to configure audit user in standard mode
• Instead of audusr(1M), which still works in trusted mode

194 March 2007

The previous two slides presented quite a long list of changes to the auditing system on HP-UX
11i v3. Here is a summary of the primary impact of these changes.

HP-UX 11i v3 customers may run auditing without converting the system to trusted mode.

Because of the difference in audisp output, applications or scripts that process audisp output
data need to be modified.

Since each audit trail is now identified as a directory instead of a file if running in regular mode,
users must modify applications or scripts that handle audit trails as files to handle the audit trail
as a directory. Or, users can force the audit system to use compatibility mode using -N 0 option.
Refer to the audsys(1M) manpage.

Audit overflow management now requires less manual intervention. See the -X option in the
audomon(1M) manpage.

Users may write a script to run at each auto-switch point to archive/backup audit trails.

Finally, there is less performance impact when turning on auditing.

Compatibility
The audit commands audsys, audisp, audevent and audomon still work the same way with a few
new options added.

The userdbset(1M) command is used to configure audit user in standard mode instead of
audusr(1M), which still works in trusted mode.


Auditing on HP-UX 11i v3


Performance and Documentation
Performance Impacts
• Multi-threaded kernel audit daemon logs data into a configurable number
of files
– See –N option in audsys(1M) manpage
– Results in improved performance
• Audit system tracks current working and root directory for each process
– Results in a slight degradation in performance
– See audit_track_paths(5) manpage
• Maximum specified memory consumption for storing audit data
– See audit_memory_usage(5)
• How often kernel audit daemon flushes audit data onto disk.
– See diskaudit_flush_interval(5) manpage
Documentation
• audit(5), audsys(1M), audevent(1M), audisp(1M), audomon(1M),
audusr(1M), audit.conf(4), getauduser(3), setauduser(3), pam_hpsec(5)
manpages
195 March 2007

There are a couple of performance impacts to note due to auditing changes on HP-UX 11i v3.

A multi-threaded kernel audit daemon is now dedicated to logging data into a configurable
number of files. See the –N option in the audsys(1M) manpage. This results in better performance.

The audit system now tracks the current working and root directory for each process. This results
in a slight degradation in performance. See the audit_track_paths(5) manpage.

Performance is also impacted by the maximum specified memory consumption for storing audit
data and how often kernel audit daemon flushes audit data onto disk. See audit_memory_usage
(5) and diskaudit_flush_interval (5) manpages.

Documentation
For further information, refer to the following manpages: audit (5), audsys (1M), audevent (1M),
audisp (1M), audomon (1M), audusr (1M), audit.conf (4), getauduser (3), setauduser (3),
pam_hpsec (5).


Auditing on HP-UX 11i v3 Obsolescence


• HP-UX 11i v3 is last release to support trusted systems functionality
– Including those for auditing
• E.g. audusr command
– Compatibility mode (i.e., -N 0) and -x option for audsys
• Solely supported for backward compatibility
• Will be obsoleted in any future releases after HP-UX 11i v3
• Several auditable system call names are being deprecated in HP-UX 11i v3
– putpmsg(), setcontext(), nsp_init(), exportfs(), t64migration(), privgrp()
– In HP-UX 11i v3, audevent and audisp still take them as valid arguments but perform no action
on these names
– After HP-UX 11i v3, audevent and audisp will reject these names with errors
• Several auditable system calls were not being documented
– They are renamed in HP-UX 11i v3
– In HP-UX 11i v3, audevent and audisp still take them as valid arguments and map them to
their new names
– After HP-UX 11i v3, audevent and audisp will reject these names with errors
• [gs]etaudid(), [gs]etevent, and audctl() are provided purely for backward compatibility
– New applications use [gs]etauduser(), audevent command, and audsys command instead
– See setauduser(3), audevent(1M), and audsys(1M) manpages

196 March 2007

HP-UX 11i v3 will be the last release to support trusted systems functionality including those for
auditing (e.g., audusr command). Compatibility mode (i.e., -N 0) and -x option for audsys are
solely supported for backward compatibility and will be obsoleted in any future releases after
HP-UX 11i v3.

The following auditable system call names are being deprecated in HP-UX 11i v3: putpmsg(),
setcontext(), nsp_init(), exportfs(), t64migration(), privgrp(). In HP-UX 11i v3, audevent and
audisp still take them as valid arguments but perform no action on these names. After HP-UX 11i
v3, audevent and audisp will reject these names with errors.

The following auditable system calls were not being documented, and they are being renamed
in HP-UX 11i v3: utssys(), _set_mem_window(), toolbox(), modadm(), spuctl(), __cnx_p2p_ctl(),
__cnx_gsched_ctl(), mem_res_grp(), lchmod(), socket2(), socketpair2(), ptrace64(), ksem_open(),
ksem_close(), ksem_unlink(). In HP-UX 11i v3, audevent and audisp still take them as valid
arguments and map them to their new names. After HP-UX 11i v3, audevent and audisp will
reject these names with errors.

[gs]etaudid() is provided purely for backward compatibility. HP recommends that new
applications use [gs]etauduser() instead. See the setauduser(3) manpage.

[gs]etevent is provided purely for backward compatibility. HP recommends that new applications
use audevent command to get events and system calls that are currently being audited. See
audevent (1M) manpage.

audctl() is provided purely for backward compatibility. HP recommends that new applications
use audsys command to configure the auditing system. See audsys (1M) manpage.


Encrypted Volume
& File System (EVFS)
(Pre-enablement)

197 March 2007

This module describes the Encrypted Volume and File System, which is pre-enabled on HP-UX
11i v3.


Enterprise Security Levels

System

System
Data Storage Securing the Data
Encrypt Data at-rest

Enterprise

Securing the Perimeter Securing the Network Infrastructure Securing the System
Firewall Directory and Authentication Services System Firewall
VPN Communication security Hardened OS, Containment,
AAA (Network) Intrusion Detection and Response Host IDS, Crypto

198 March 2007

This slide illustrates the various levels or rings of security in an enterprise environment. The
outermost ring is the perimeter. Firewalls, VPNs, and AAA help to secure the edge of the
enterprise from the rest of the world. Within the enterprise, the network infrastructure must also
be secured. Directory and Authentication Services, communication security, and network
intrusion detection and response systems provide security to the enterprise network infrastructure.

The enterprise network infrastructure consists of many systems, each of which must be secured.
This can be achieved by using a hardened operating system, a system firewall, containment, a
host IDS, and cryptography. Then within the system itself, the data must be protected. This is the
last line of defense. In this module, we will learn about Encrypted Volume and File System, or
EVFS, and how EVFS can secure the enterprise’s data.


Data Protection
Need reliable way to mitigate exposure of data to risk
OS-based data encryption solution
• Encrypt data at rest
• Retain existing storage devices
– No storage upgrades, migration, or modification
Data is stored on the storage device
• Traditional in clear text
• Encrypted
– With EVFS
– Using cryptography keys
– With recovery agents

199 March 2007

The last line of security defense requires a reliable mechanism to mitigate exposure of sensitive
data to risk. Encrypted Volume and File System, or EVFS, is an OS-based data encryption
solution. It encrypts the data at rest. With EVFS, the enterprise can retain its existing storage
devices. There are no required storage upgrades, migration, or modification.

Traditionally, data is stored on a storage device in clear text. But, with EVFS, the data is
encrypted using cryptography keys with recovery agents.


Why EVFS?
Continued digitization of the world
• Exponential growth in the volume of electronic documents, e-mail, IM and
other business sensitive information
• Increasingly more sensitive data stored on storage media
• Need to protect digital assets against outside and inside unauthorized
and malicious access
Legislative pressure for privacy protection
• Federal legislation
– HIPAA, GLBA, Sarbanes-Oxley Act
• California SB-1386 & State of Washington SB-6043
• Four other states have passed laws that will go into effect soon
• Other states considering similar legislation
• Compliance a big IT driver
Negative publicity
• Headline news
– Lost tapes
– Compromised customer data

With the continued digitization of the world, there is an ongoing exponential growth in the
volume of electronic documents, e-mail, IM and other business sensitive information. Increasingly
more sensitive data is stored on storage media. There is a critical need to protect digital assets
against outside and inside unauthorized and malicious access.

Additionally, there is legislative pressure for privacy protection. Federal legislation includes
HIPAA, GLBA, and the Sarbanes-Oxley Act, or SOX. The California SB-1386 breach-notification
law has an escape clause: compromised data must be communicated only if it is unencrypted.
Besides the federal legislation, California and the state of Washington have passed laws. Four
other states have passed laws that will go into effect soon. Other states are considering similar
legislation. Compliance with these laws is a big IT driver.

Recently, there has been negative publicity about lost and stolen personal data. Here are some
examples. A Lexis-Nexis database was compromised in Europe with 310,000 records affected.
ChoicePoint had 100,000 consumer profiles stolen. Bank of America lost tapes, with a third
party implicated. CitiGroup (CitiFinancial) lost backup tapes in transit with UPS, containing 3.9
million customer financial records. Non-compliance results in very bad publicity, from which it
may be impossible to recover!

EVFS may not satisfy all regulations. And, it is not a panacea to regulatory woes. However, it
can help by encrypting the data with minimal impact to current enterprise operations.


Technology Trends
Data privacy and protection are key enablers for end-to-
end secure electronic commerce and collaboration
Protection of data in transit well established
Protection of data at rest the next logical step
• According to Gartner Information Security Hype Cycle data-
at-rest protection is in the early adoption phase
Extension of data protection scheme
• Perimeter, network, host, data
Convergence of technologies
• PKI interoperability
• Other technologies on the horizon – Trusted Computing
Group Trusted Platform Module (TPM) based protection of
encryption keys

Data privacy and protection are key enablers for end-to-end secure electronic commerce and
collaboration. The protection of data in transit is well established. (Think of armored cars
carrying money!) So, the protection of data at rest is the next logical step. According to Gartner
Information Security Hype Cycle, data-at-rest protection is in the early adoption phase.

It is an extension of the data protection scheme that we saw illustrated in the first slide – protect
the perimeter, network, host, and data. There is a convergence of technologies including Public
Key Infrastructure, or PKI, interoperability. Other technologies are on the horizon, such as
Trusted Computing Group Trusted Platform Module (TPM) based protection of encryption keys.


Encrypted Volume & File System (EVFS)


Objective
• Ensure confidentiality and integrity of critical business
information
• Protect ‘data-at-rest’ against unauthorized use
Advantages
• Transparent to applications
• Supports long-term data retrieval for auditing purpose
• Robust and flexible encryption key management
• Design centered on high-availability environments


The objective of EVFS is to ensure the confidentiality and integrity of critical business information
and to protect ‘data-at-rest’ against unauthorized use.

The advantages of EVFS are that it is transparent to applications and that it supports long-term
data retrieval for auditing purposes. Additionally, it has robust and flexible encryption key
management. And, its design is centered on high-availability environments.


Encrypted Volume & File System


Two ways to encrypt data on disk
Per Volume Basis
• Encryption at volume granularity
– Entire volume is encrypted
– One symmetric (encryption) key for entire volume
Per File Basis
• Encryption at file granularity
– Encryption by individual file
– One symmetric (encryption) key for each file


There are two ways to encrypt data on disk: per-volume and per-file.

Encryption at volume granularity means that the entire volume is encrypted, with one symmetric
encryption key for the entire volume. Encryption at file granularity means that encryption is
done on individual files, with one symmetric encryption key for each file.

Key creation is transparent at file level. EVFS can convert individual files to EFS and can default
individual files to EFS. There are also automatic, propagation tools available.


How It Works: Volume Encryption

[Slide diagram: data at rest. A user process or application performs clear reads and writes through the file system on mounted volumes; VolumeA holds unencrypted (clear) data, while VolumeB holds encrypted data.]

This slide shows how volume encryption works.

The data on volumeA is unencrypted. When the data goes from the disk through the I/O systems
and into the file system, it is clear and unencrypted.

The data on volumeB is encrypted. When this data traverses the same path to the file system, it
remains encrypted. The EVFS part of the file system decrypts the data for clear presentation to
the user process or application.


How It Works: File Encryption

[Slide diagram: data at rest. On a mounted volume (VolumeA), a clear file is accessed through the normal ugo permissions, while an encrypted file is accessed through its Encryption Meta Data (EMD).]

This slide illustrates how file encryption works. On a mounted volume, a clear file is accessed
through the normal ugo permissions, while access to an encrypted file additionally requires the
user's key to process its Encryption Meta Data (EMD).


High Level EVFS Architecture


EVFS Product Modules
• EVFS Pseudo-Disk Driver
• Stackable Filesystem Module
• EVFS Tools
Encrypted Volume System
• I/O must traverse EVFS Pseudo Driver
Encrypted File System
• I/O must traverse Stackable FS
Volumes can be
• LVM, VxVM, Raw Volume

[Slide diagram: the EVFS tools issue ioctl system calls to the EVFS pseudo-disk driver, while non-EVFS-aware applications use the existing VFS system calls through the Stackable FS Module and the file systems; the EVFS pseudo-disk driver sits between the file systems and the volume manager, above the physical disks.]

This slide shows the EVFS architecture and product modules. It illustrates where they fit in within
the existing volume management, file system, and I/O infrastructure. The EVFS tools reside at
the user level. We will look at some EVFS commands shortly.

Non-EVFS-aware applications (that is, most applications) continue doing their reads, writes,
and so on as normal, going through the regular virtual file system layer. New in HP-UX 11i v3
are stackable file system modules, of which EVFS is one. This is what handles the encryption
aspects of EVFS for encrypted files.

The EVFS tools communicate with the EVFS pseudo-disk driver using ioctl() system calls. Then this
pseudo driver communicates to the volume manager. The EVFS pseudo-disk driver must handle
the encryption aspects of EVFS for the encrypted volume. (Note that this works for all types of
volumes, LVM, VxVM, or plain raw volumes.)


More EVFS Architecture Details


Data encryption keys
• Encryption Meta Data – EMD field
• Stored in file/volume with data
User-Access encryption keys
• Loaded prior to data access
• At mount time for volume encryption
• At login time for file encryption
Encryption Authentication
• Addition to Unix access control
• Like ACL within EMD
Private/Public keys
• For protection of encryption keys
• Enables key sharing and recovery

[The architecture diagram from the previous slide is repeated alongside these bullets.]

There are several keys involved with EVFS: the symmetric bulk encryption key, the
public/private key pair for key sharing, and a key passphrase to protect the private key.

Data encryption keys have an Encryption Meta Data, or EMD, field, which is stored in the file or
volume with the data.

There are User-Access encryption keys that are loaded prior to data access. This is done at
mount time for volume encryption and at login time for file encryption.

Encryption Authentication is an addition to normal HP-UX access control. It is like an ACL within
EMD.

Private/Public key pairs are used to protect the encryption of the data access keys. These enable
key sharing and recovery.

Encrypted file open and create operations have a performance impact; however, the read/write
buffer cache minimizes the actual cryptography latency, so the impact is barely perceptible.
There is also a one-time penalty for key generation.


Detailed Architecture
[Slide diagram: detailed EVFS architecture. User space contains the EVFS user interface for key management, a user key database (with network access), and an EVFS interface for applications (EAC attributes). In kernel space, the system call interface and VFS sit above the Stacked Encryption File System (SEFS), which is layered over the physical file systems (hfs, vxfs) and remote file systems (NFS, CIFS); below these sit the volume encryption pseudo-driver, the volume manager (LVM, VxVM), and the physical disk driver. Supporting pieces include user authentication, a user key table, a file system and volume key table, a file key daemon, and an encryption proxy server. The EMD is stored with the file data on the file system, volume, swap, or dump device. The legend distinguishes EVFS components from non-EVFS components.]

This slide shows a more detailed, kernel-level view of the EVFS implementation. Those of you
familiar with HP-UX internals will notice the new layers at the file system and volume
management levels. There is also the key infrastructure.


EVFS Deployment Scenarios


Secure Direct Attached Storage

[Slide diagram: HP-UX servers attached directly to storage over pSCSI and FC connectivity; both clear and encrypted data flow across the connection.]

This and the following three slides illustrate different EVFS deployment scenarios. With secure
direct attached storage, both clear and encrypted data flow between the server and the storage
over pSCSI and FC.


EVFS Deployment Scenarios


Secure SAN Storage
[Slide diagram: HP-UX servers and storage connected through a SAN over Fibre Channel connectivity; both clear and encrypted data traverse the SAN.]


This slide shows a secure SAN. Both clear and encrypted data flow between the storage units
and the SAN. Clear data flows between the SAN and clear servers, while encrypted data flows
between the SAN and protected servers.


EVFS Deployment Scenarios


Secure NAS Appliance

[Slide diagram: HP-UX servers connected to a NAS appliance over the LAN using iSCSI, NFS, and CIFS connectivity; both clear and encrypted data traverse the LAN.]


Here is the same clear and encrypted data flow diagram for data flowing between HP-UX
servers and a NAS Appliance over the LAN.


EVFS Deployment Scenarios


Secure Backup and Disaster Recovery

[Slide diagram: a server connected to shared storage; from the shared storage, both clear and encrypted data flow to tape backup and to disaster recovery storage.]


Lastly, this slide illustrates data flow between a server and shared storage. From the shared
storage, data goes to tape backup and to disaster recovery storage.


EVFS Set Up
Key creation
• Generate user public/private key pair (e.g. RSA-1024)
– evfspkey keygen -c rsa-1024 -k rootkey
• Generate recovery key if needed (e.g. RSA-2048)
– evfspkey keygen -c rsa-2048 -r recovkey
• Generate private key passphrase
– evfspkey passgen -k rootkey
Auto-boot setup
• Set up EVFS environment so that user/admin is not prompted for a private key password
– Edit /etc/evfs/evfstab file
– Add “v /dev/evfs/evol1 /dev/vg00/lvol1 root.rootkey”
– Note that auto-boot without a pass phrase can weaken security (when TPM support is introduced, the risk can be mitigated)
• EVFS rc scripts will auto-enable encrypted volumes so file systems can be mounted at mount time


EVFS use requires key creation. Generate a user public/private key pair (e.g. RSA-1024) with
evfspkey keygen -c rsa-1024 -k rootkey. Generate a recovery key if needed (e.g. RSA-2048)
with evfspkey keygen -c rsa-2048 -r recovkey. Generate a private key passphrase with
evfspkey passgen -k rootkey.

Auto-boot setup is supported. Set up the EVFS environment so that the user or admin is not
prompted for a private key password. To do this, edit /etc/evfs/evfstab and add the line
“v /dev/evfs/evol1 /dev/vg00/lvol1 root.rootkey”. Note that auto-boot without a
passphrase can weaken security. (When Trusted Platform Module (TPM) support is introduced,
this risk can be mitigated.) The EVFS rc scripts will then auto-enable encrypted volumes so that
file systems can be mounted at boot time.

Note that keys must be duplicated across systems. A future enhancement will be LDAP-enabled
key storage and retrieval. Also, auto-boot security will be addressed by TPM.
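
The steps above can be collected into a single setup sketch. This is illustrative only: the key names (rootkey, recovkey) and the volume names in the evfstab entry are the slide's examples, and the commands must be run by a user authorized to manage EVFS.

```shell
# Generate the user public/private key pair (RSA-1024) and an optional
# recovery key (RSA-2048); key names here are illustrative
evfspkey keygen -c rsa-1024 -k rootkey
evfspkey keygen -c rsa-2048 -r recovkey

# Generate the passphrase that protects the private key
evfspkey passgen -k rootkey

# For auto-boot (no passphrase prompt), add the volume to /etc/evfs/evfstab.
# Note: auto-boot without a passphrase can weaken security.
echo "v /dev/evfs/evol1 /dev/vg00/lvol1 root.rootkey" >> /etc/evfs/evfstab
```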


EVFS Commands Overview

• Key Management → evfspkey
• EVFS Subsystem Admin → evfsadm
• Encrypted Volume Admin → evfsvol
• Encrypted File System Admin → evfsfile

[Slide diagram: the four commands run in user space and communicate with the EVFS subsystem in the kernel.]


The EVFS command line user interface consists of four commands.

• evfspkey is used for key maintenance: to generate, manage, and protect keys.
• evfsadm initializes and prepares the EVFS data structures. It is executed once.
• evfsvol is used to tune and manage encrypted volumes. Use it to add users, recover volumes,
and perform backups.
• evfsfile is used to tune and manage an encrypted file system. It is similar to evfsvol, except
that it operates at the file system level.


EVFS High Level Operations


Create keys (evfspkey)
• Generate user public/private key pair
• Generate recovery key
• Generate private key passphrase
Create encrypted volume (evfsvol)
• Auto-boot setup possible
Note that auto-boot can weaken security (in the future, TPM
can mitigate risk)
• Create file system (if needed)
Initialize and activate encrypted volume access
• Initialize EVFS subsystem (evfsadm)
• Enable encrypted volume (evfsvol)
• Map a raw volume or mount a file system (evfsvol)

Use the evfspkey command to create keys. It can generate the user public/private key pair,
generate a recovery key, and generate the private key passphrase.

The evfsvol command is used to create an encrypted volume. Auto-boot setup is possible; note
that auto-boot can weaken security (in the future, TPM can mitigate this risk). You can then
create a file system on the volume, if needed.

To initialize and activate encrypted volume access, first initialize the EVFS subsystem using
evfsadm. Then enable the encrypted volume with evfsvol. Finally, map a raw volume or mount
a file system with evfsvol.
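
As a sketch, the three operational stages above map to commands like the following; the volume name evol1, the key name rootkey, and the mount point are assumptions carried over from the setup example:

```shell
# One-time initialization of the EVFS subsystem
evfsadm start

# Enable the encrypted volume (loads the volume key)
evfsvol enable evol1

# Then either use the raw encrypted volume directly, or mount a
# file system that was created on /dev/evfs/evol1
mount -F vxfs /dev/evfs/evol1 /mnt/secure   # mount point is illustrative
```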


Data Conversion (Encrypted Volume)


In-line encryption of non-encrypted data not supported in
initial phases
Protection of non-encrypted data easily achieved by
creating an encrypted copy
• Quiesce data by preventing updates to data
• Backup the data – safety only (not necessary)
• Choose desired copying mechanism: cp command, disk
mirroring, backup/restore
• Transfer unprotected data to encrypted volume
• Sanity check data on the encrypted volume
• Switch application to access data on the encrypted volume


In-line encryption of non-encrypted data is not supported in the initial phases.

However, protection of non-encrypted data is easily achieved by creating an encrypted copy.
First, quiesce the data by preventing updates to it. Next, back up the data; this step is not
required, but is a safety precaution. Then choose the desired copying mechanism: the cp
command, disk mirroring, or backup/restore. Transfer the unprotected data to an encrypted
volume and perform a sanity check of the data on the encrypted volume. Lastly, switch the
application to access the data on the encrypted volume.

A future version will support in-line data conversion.
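
A minimal sketch of the encrypted-copy procedure, assuming the clear data lives in /data/clear and an encrypted volume is already enabled and mounted at /data/secure (both paths are illustrative):

```shell
# 1. Quiesce: stop the application so no updates occur during the copy
# 2. (Optional) Back up the clear data as a safety precaution
# 3. Copy the data onto the encrypted volume, preserving permissions
cp -rp /data/clear/* /data/secure/

# 4. Sanity-check the copy before switching over
diff -r /data/clear /data/secure && echo "copy verified"

# 5. Repoint the application at /data/secure, then retire the clear copy
```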


Other Considerations
EVFS effectively introduces an additional access control
mechanism
• Ownership on a per-user basis, based upon user
private/public keys
• Sharing granted by the owner
• Super user privileges cannot circumvent EVFS access control
– Some vulnerabilities exist that will be addressed in future
releases
– RBAC and Secure Resource Partitions can be used to mitigate
some of the risks
Procedural implications
• Process for data recovery; escrow agents


EVFS effectively introduces an additional access control mechanism. It allots ownership on a per-
user basis, based upon user private/public keys. It allows sharing granted by the owner. Super
user privileges cannot circumvent EVFS access control.

Some vulnerabilities exist that will be addressed in future releases. RBAC and Secure Resource
Partitions can be used to mitigate some of the risks.

The procedural implication is that a process for data recovery, possibly involving escrow
agents, must be defined.


Data Base Protection (Volume Encryption)


Creation of new encrypted data base
• Create DB logical volume
– lvcreate -l 1024 -n oracle vg00
• Map original DB volume into encrypted volume
– evfsvol map /dev/vg00/oracle
• Create encrypted volume disk layout
– evfsvol create -k rootkey eoracle
• Add the following to /etc/evfs/evfstab
– v /dev/evfs/eoracle /dev/vg00/oracle
Initialize EVFS subsystem
• evfsadm start
Activation/deactivation of encrypted volume
• evfsvol enable eoracle
• evfsvol disable eoracle


This slide shows how to use volume encryption to protect a database. Set up a new volume by
creating the database logical volume using lvcreate. Set up volume encryption by using evfsvol
map to map the original database volume into an encrypted volume. Use evfsvol create to lay
out the EMD and create the encrypted volume disk layout; the rootkey specified here is the user
key that protects the volume's symmetric encryption key. Add the appropriate line to
/etc/evfs/evfstab.

Then initialize the EVFS subsystem with evfsadm start. Finally, activate the encrypted volume
with evfsvol enable. This is similar to a mount and can be automated with an rc script. At this
point, the key is loaded.
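
Putting the slide's steps together, an end-to-end sketch for an encrypted database volume might look like this; the volume and key names are the slide's examples:

```shell
# Create the logical volume that will hold the database
lvcreate -l 1024 -n oracle vg00

# Map it into EVFS and lay out the encryption metadata (EMD)
evfsvol map /dev/vg00/oracle
evfsvol create -k rootkey eoracle

# Make the mapping persistent across reboots
echo "v /dev/evfs/eoracle /dev/vg00/oracle" >> /etc/evfs/evfstab

# Start the EVFS subsystem and activate the volume
evfsadm start
evfsvol enable eoracle
```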


Data Base Protection (Volume Encryption)


Creation of file system on encrypted volume
• Create and map
• Add the following to /etc/fstab
– /dev/evfs/elvol1 /opt/ vxfs delaylog 0 2
• Enable encrypted volume
– evfsvol enable elvol1
• Overlay file system on encrypted volume
– newfs -F vxfs /dev/evfs/relvol1
• Mount encrypted file system
– mount -F vxfs /dev/evfs/elvol1


To create a file system on an encrypted volume, perform the create and map commands as on
the last slide. Add the appropriate line to /etc/fstab and enable the encrypted volume. Now
overlay the file system on the encrypted volume using newfs; this writes into the encrypted
volume. Note that a user without a key cannot even run an ls command against an encrypted
volume.

Volume encryption encrypts both data and metadata. File encryption does not encrypt
metadata; it encrypts only the file data.

File System Encryption (File Encryption)


Creation of new encrypted file system
• Create logical volume
– lvcreate -l 1024 -n lvol1 vg00
• Map original logical volume into encrypted volume
– evfsvol map /dev/vg00/lvol1
• Edit /etc/evfs/evfstab
– f /dev/evfs/elvol1 /dev/vg00/lvol1
Initialize EVFS subsystem
• evfsadm start


This slide shows how to set up file system (per-file) encryption. To create a new encrypted file
system, first create a logical volume using lvcreate. Then map the original logical volume into
the encrypted volume using evfsvol map, and add the appropriate line to /etc/evfs/evfstab.
Finally, initialize the EVFS subsystem with evfsadm start.


File System Encryption (File Encryption)


Creation of file system on encrypted volume
• Create and map
• Edit /etc/fstab
– /dev/evfs/elvol1 /opt/ vxfs delaylog 0 2
• Enable encrypted volume
– evfsvol enable elvol1
• Overlay file system on encrypted volume
– newfs -F vxfs /dev/evfs/relvol1
• Mount encrypted file system
– mount -F vxfs -o stackfs=efs /dev/evfs/elvol1
• Log in to a secure session
– evfsfile enable


To create a file system on an encrypted volume, do the create and map steps, and add the line
to /etc/fstab. Then enable the encrypted volume and overlay the file system on it with newfs.
Mount the encrypted file system with the stackfs=efs option. Finally, log in to a secure session
and run evfsfile enable.
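
A sketch of the per-file encryption sequence, combining the two slides; the stackfs=efs mount option directs I/O through the stackable EFS module, and the mount point /opt is the slide's example:

```shell
# Create and map the volume, and record it (type 'f') in evfstab
lvcreate -l 1024 -n lvol1 vg00
evfsvol map /dev/vg00/lvol1
echo "f /dev/evfs/elvol1 /dev/vg00/lvol1" >> /etc/evfs/evfstab

# Start EVFS, enable the volume, and mount with the stackable EFS module
evfsadm start
evfsvol enable elvol1
mount -F vxfs -o stackfs=efs /dev/evfs/elvol1 /opt

# Begin a secure session so per-file keys can be loaded
evfsfile enable
```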


Encryption Access Control


EVFS effectively introduces an additional access control
mechanism
• Ownership on a per-user basis, based upon user private/public
keys
• Sharing granted by the owner
• Super user privileges cannot circumvent EVFS access control
Sharing of protected data
• Give user johndoe access to evol1
– evfsvol add -e evol1 -u johndoe
• Give user johndoe encryption ownership to evol1
– evfsvol assign -e evol1 -u johndoe
• Recover an encrypted volume when its owner has left the company
(a recovery agent is just like another user)
– evfsvol assign -e evol1 -u root -r recoverykeyfile
Procedural considerations
• Management of and process for data recovery; escrow agents


EVFS effectively introduces an additional access control mechanism. It gives ownership on a
per-user basis, based upon user private/public keys, and allows sharing granted by the owner.
Super user privileges cannot circumvent EVFS access control: root must have a key to gain
access, and there is no automatic root access.

This example shows how to share protected data. Give user johndoe access to evol1 by using
evfsvol add -e evol1 -u johndoe. Then give user johndoe encryption ownership of evol1 by
using evfsvol assign -e evol1 -u johndoe. To recover an encrypted volume when its owner has
left the company (a recovery agent is just like another user) and change ownership of the
volume, use evfsvol assign -e evol1 -u root -r recoverykeyfile.

So far, this design is proprietary; a future release may integrate with PKI digital certificates.
Sites must design procedures to ensure keys are never lost, because data encrypted with a lost
key cannot be recovered.
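
The sharing and recovery commands above can be sketched as a short session; johndoe, evol1, and recoverykeyfile are the illustrative names from the slide:

```shell
# Grant johndoe access to the encrypted volume
evfsvol add -e evol1 -u johndoe

# Transfer encryption ownership of the volume to johndoe
evfsvol assign -e evol1 -u johndoe

# If the owner leaves, a recovery agent reassigns ownership (here, to root)
evfsvol assign -e evol1 -u root -r recoverykeyfile
```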


Backup of Protected Data


Encryption meta data backup
• Volume encryption: Copy metadata backup file
– cp /var/evfs/emd/* …
• File encryption: cp myfile …
Backup of encrypted volume in encrypted/clear forms
• Enable encrypted volume and use regular backup tools
• Disable encrypted volume and use regular backup tools
Import and export of encrypted volumes
• Similar to LVM vgexport/vgimport
• Removes encrypted volume from EVFS subsystem and environment:
evfsvol export eoracle
• Brings encrypted volume into EVFS subsystem and environment:
evfsvol import /dev/vg00/oracle
Backup of encrypted files
• Disable EVFS encryption and use regular file backup tools


This slide describes how to back up encrypted data.

To back up the encryption metadata: for volume encryption, copy the metadata backup file,
e.g. cp /var/evfs/emd/* … For file encryption, just cp myfile …

To back up an encrypted volume in clear form, enable the volume and use regular backup
tools; to back it up in encrypted form, disable the volume and use regular backup tools.

The import and export of encrypted volumes is similar to LVM vgexport/vgimport. evfsvol
export eoracle removes the encrypted volume from the EVFS subsystem and environment;
evfsvol import /dev/vg00/oracle brings it back in.

To back up encrypted files, disable EVFS encryption and use regular file backup tools.
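
As a sketch, the metadata backup and the volume export/import described above look like this; the destination directory /backup/emd is an assumption:

```shell
# Back up the volume encryption metadata (EMD) files
mkdir -p /backup/emd
cp /var/evfs/emd/* /backup/emd/

# Remove an encrypted volume from this system's EVFS environment...
evfsvol export eoracle

# ...and bring it back in (for example, on another system that has the keys)
evfsvol import /dev/vg00/oracle
```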


HP-UX EVFS Delivery Plan


Phased delivery of features
• Pre-enablement at initial HP-UX 11i v3 release
• File Level and Volume encryption
– Subsequent HP-UX 11i v3 release
Delivery Methods
• Product bits will be delivered via the web initially
• Subsequently delivered on HP-UX Application Release media


At the initial HP-UX 11i v3 release, EVFS is pre-enabled. Initially, the product bits will be
delivered via the web at http://software.hp.com; subsequently, EVFS will appear on the HP-UX
Application Release media.


Host Intrusion Detection System (HIDS) 4.0


HIDS (Host Intrusion Detection System) Release 4.0


HIDS (Host Intrusion Detection System) Release 4.0


Host-based security product
• Enables security admins to monitor, detect, and respond to attacks
HIDS 4.0 changes from version 3.1
• Alert aggregation reduces alert volume by aggregation of alerts
– Lasts until process(es) terminate or certain amount of time has passed
• Monitoring only critical files reduces alert volume
– For example, /etc/passwd, /etc/shadow are monitored instead of
monitoring all /etc
• System templates support new properties to specify critical user names
• Template properties that specify user IDs support specification of both user
IDs and user names
• New idscor option (-t) measures rate of events generated by a system and
monitored by HIDS
– Refer to the HIDS Tuning and Sizing document to determine the impact
of HIDS on memory and CPU consumption


HP-UX Host Intrusion Detection System (HIDS) Release 4.0 is a host-based HP-UX security product
for HP computers running HP-UX 11i. HP-UX HIDS Release 4.0 enables security administrators to
proactively monitor, detect, and respond to attacks targeted at specific hosts. Since there are
many types of attacks that can bypass network-based detection systems, HP-UX HIDS Release
4.0 complements existing network-based security mechanisms, bolstering enterprise security.

The following are the features new from HIDS version 3.1 on HP-UX 11i v1.

HIDS 4.0 supports a new feature called alert aggregation that can significantly reduce the alert
volume for a monitored system. When enabled, alerts that are generated by a process or a
group of related processes are aggregated until the processes terminate, or a certain amount of
time elapses. It also reduces alert volume by monitoring only critical files. The template property
values of the file related preconfigured groups and templates have been modified to monitor
only the core critical files. For example, only certain files in the /etc directory such as
/etc/passwd and /etc/shadow are monitored instead of monitoring the entire directory.

In earlier releases, the system templates (login/logout and su) hard coded root and ids as being
critical for determining alerts with high severity. Since applications like HP-UX Security
Containment support the assignment of root privileges to several users, HIDS supports
configuration of critical users. The system templates support new template properties to specify
the critical user names. Additionally, the template properties that specify user IDs such as
priv_uid_list now support the specification of both user IDs and user names.

A new idscor option (-t) is used to measure the rate of events generated by a system and
monitored by HIDS. Knowing the event rate, one can refer to the HIDS Tuning and Sizing
document to determine the impact of HIDS on memory and CPU consumption.


HIDS Release 4.1 and Documentation


HIDS version 4.1 should release soon after the initial HP-UX 11i v3
release
• New automated configuration tool will simplify and automate HIDS
deployment and maintenance
• New duplicate alert suppression feature will reduce alert volumes
– Suppresses HIDS alerts that are considered duplicates
• Reducing costs
• New reporting feature generates reports in html and pdf
– Contain a summary of alerts and errors reported by HIDS agents
Documentation
• Several manpages in /opt/ids/share/man/man1m after installing HIDS
• In Internet and Security Solution section at http://docs.hp.com
– HP-UX Host Intrusion Detection System Release Notes
– HP-UX Host Intrusion Detection System Administrator’s Guide
• Information about HP OpenView Operations SMART Plug-in for HP-UX
HIDS is available at
– http://openview.hp.com/products/spi/spi_ids/index.html


HIDS version 4.1 is scheduled to release shortly after the initial HP-UX 11i v3 release. This
version of HIDS will contain the following new features. An automated configuration tool will
simplify and automate HIDS deployment and maintenance. A duplicate alert suppression feature
reduces alert volumes by suppressing HIDS alerts that are considered duplicates thereby
reducing costs. And, a new reporting feature generates reports in html and pdf that contain a
summary of alerts and errors reported by HIDS agents.

The following manpages are available in /opt/ids/share/man/man1m after installing HP-UX
HIDS 4.0.
IDS_checkAdminCert (1M)
IDS_checkAgentCert (1M)
IDS_checkInstall (1M)
IDS_genAdminKeys (1M)
IDS_genAgentCerts (1M)
IDS_importAgentKeys (1M)
idsadmin (1M)
idsagent (1M)
idsgui (1M)
ids.cf (5)

The HP-UX Host Intrusion Detection System Release Notes and the HP-UX Host Intrusion Detection
System Administrator’s Guide are available on http://docs.hp.com in the Internet and Security
Solutions section. Additionally, information about the HP OpenView Operations SMART Plug-in
for HP-UX HIDS is available at http://openview.hp.com/products/spi/spi_ids/index.html


IPFilter
IPSec



IPFilter on HP-UX 11i v3


Provides system firewall capabilities by filtering IP packets to control
traffic in and out of a system
HP-UX IPFilter version A.03.05.13 new features
• Filters on X.25 interfaces and 10GigE interfaces
• Not enabled by default
– Installed but not configured
– When enabled, relevant module placed in networking stack
– Use of Bastille/ITS with Sec20MngDMZ or Sec30DMZ install time
security levels automatically enables
• Enabling does not require a reboot
– Does have brief network outage
– HP ServiceGuard and other timing sensitive applications should
schedule an appropriate time to enable HP-UX IPFilter
• Use kctune to tune the ipl_buffer_sz, ipl_suppress, ipl_logall, and
cur_iplbuf_sz tunables
– Not ndd

The security product, HP-UX IPFilter version A.03.05.13, provides system firewall capabilities by
filtering IP packets to control traffic in and out of a system.

HP-UX IPFilter version A.03.05.13 on HP-UX 11i v3 is functionally equivalent to IPFilter on prior
HP-UX releases. Additionally, it has new features and enhancements. For example, IPFilter can
now filter on X.25 interfaces and 10GigE interfaces.

HP-UX IPFilter is not enabled in the networking stack by default, and, therefore, is not providing
filtering security. It is installed but not configured. When the user enables HP-UX IPFilter, the
relevant module will be part of the networking stack. However, if Bastille/ITS is used with the
Sec20MngDMZ or Sec30DMZ install time security levels, then HP-UX IPFilter will be
automatically enabled. Enabling HP-UX IPFilter does not require a reboot but does involve a brief
network outage. HP ServiceGuard customers or anyone running timing sensitive applications
should schedule an appropriate time to enable HP-UX IPFilter.

An obsolescence note is that the tunable parameters ipl_buffer_sz, ipl_suppress, ipl_logall, and
cur_iplbuf_sz are now tuned using the kctune command and not ndd.
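The ndd-to-kctune change can be illustrated with a short command sequence. This is a sketch for an HP-UX 11i v3 system; the value shown for ipl_buffer_sz is a hypothetical example, not a recommendation.

```
# Query the current value of an IPFilter tunable (formerly tuned via ndd)
kctune ipl_buffer_sz
kctune cur_iplbuf_sz

# Set a new value; kctune reports whether the change takes effect immediately
kctune ipl_buffer_sz=16384
```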

For further information, see the following manpages.


ipf (4) packet filtering kernel interface
ipf (5) IP packet filter rule syntax
ipf (8) alters packet filtering kernel’s internal lists
ipl (4) data structure for IP packet log device
ipmon (8) monitors /dev/ipl for logged packets
ipstat (8) reports on packet filter statistics and filter list
iptest (1) test packet rules with arbitrary input
In addition, see the HP-UX IPFilter version A.03.05.13 Administrator’s Guide and HP-UX IPFilter
A.03.05.13 Release Notes available at http://docs.hp.com/en/internet.html#HP-UX%20IPFilter
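As a concrete illustration of the rule syntax documented in ipf(5), a minimal rule set might look like the following. The interface name lan0 and the address are hypothetical, and this is a sketch rather than a recommended policy.

```
# Example IPFilter rule set
# Block all inbound traffic on lan0 by default ...
block in on lan0 all
# ... but allow inbound ssh to this host, keeping state for the replies
pass in quick on lan0 proto tcp from any to 192.0.2.10/32 port = 22 keep state
# Allow all outbound traffic, keeping state
pass out quick on lan0 all keep state
```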


IPSec on HP-UX 11i v3 Overview


Provides infrastructure for secure communications over IP
networks between systems and devices that implement the
IPsec protocol suite
• Provides authentication, integrity, and confidentiality
• Supported on host systems in host-to-host and host-to-gateway
topologies
• Provides security in internal networks
• Provides VPN solutions across public Internet communication
• Secure packets between gateway or proxy application
servers that are publicly accessible and backend application
servers


HP-UX IPSec A.02.01.01 provides an infrastructure to allow secure communications
(authentication, integrity, confidentiality) over IP networks between systems and devices that
implement the IPsec protocol suite.

HP-UX IPSec is supported on host systems in host-to-host and in host-to-gateway topologies. You
can use HP-UX IPSec to provide security in internal networks and to provide Virtual Private
Network (VPN) solutions across public Internet communication. You can also use HP-UX IPSec to
secure packets between gateway or proxy application servers that are publicly accessible and
backend application servers.


Benefits of IPSec on HP-UX 11i v3 (1 of 2)


• Adheres to all IPSec standards
• IKE (Internet Key Exchange) for automated key generation
• Provides for data privacy, data integrity and authentication
– Application-transparent
• High-speed encryption
– Throughput as high as 91.95 Mb/s in a 100 Mb/s topology
• Dynamic data encryption key management using IKE
• Proven multi-vendor interoperability
– Interoperates with over 25 other vendor implementations
• E.g. Cisco, Microsoft, Linux


IPSec has many benefits. It adheres to all relevant IPSec standards, including IKE (Internet Key
Exchange) for automated key generation. It provides for data privacy, data integrity and
authentication in an application-transparent manner. It does high-speed encryption, with
throughput for encrypted data transmission as high as 91.95 Mb/s in a 100 Mb/s topology. It
does dynamic data encryption key management using IKE. It has proven multi-vendor
interoperability and interoperates with over 25 other vendor implementations, including Cisco,
Microsoft, and Linux.


Benefits of IPSec on HP-UX 11i v3 (2 of 2)


• Provides host-based authentication using pre-shared keys and
digital certificates
• Supports HP-UX Mobile IPv6 and ServiceGuard
• Easy CLI configuration supports batch-mode configuration
• Flexible, packet-based configuration
• Configuration test utility
• Diagnostic and monitoring
– Logging and audit trail for accountability and intrusion alerts
• Host-based IPsec topologies


IPSec provides host-based authentication using pre-shared keys and digital certificates. IPSec
supports HP-UX Mobile IPv6 and ServiceGuard. There are several powerful and flexible
management utilities. The easy-to-use Command-Line Interface (CLI) configuration supports batch-
mode configuration. There is flexible, packet-based configuration. There is a configuration test
utility. There are diagnostic and monitoring tools that include logging and audit trail for
accountability and intrusion alerts. Finally, there are host-based IPsec topologies.


Changes to IPSec on HP-UX 11i v3


Software bundle name is now IPsec instead of J4256AA
No dependencies on TOUR or HP-UX Transport patches
Customers using versions of HP-UX IPSec prior to A.02.01
• Must use the ipsec_migrate utility to migrate configuration data
Performance
• HP-UX Performance White Paper contains performance statistics and information
for HP-UX IPSec on HP-UX 11i v2
– Customers will experience similar performance on HP-UX 11i v3 systems
Documentation
• See manpages in the 1M section
– ipsec_admin, ipsec_config, ipsec_config_add, ipsec_config_batch,
ipsec_config_delete, ipsec_config_export, ipsec_migrate, ipsec_policy,
ipsec_report
• HP-UX IPSec version A.02.01 Administrator’s Guide (J4256-90015)
• HP-UX IPSec version A.02.01.01 Release Notes (J4256-90022)
• HP-UX IPSec Performance and Sizing White Paper
• Using OpenSSL Certificates with HP-UX IPSec A.02.01


There are only a couple of small differences in the IPSec A.02.01.01 version for HP-UX 11i v3
from the A.02.01 and A.02.01.01 versions for 11i. The software bundle name is now IPsec
instead of J4256AA. There are no dependencies on TOUR or HP-UX Transport patches.
Customers using versions of HP-UX IPSec prior to A.02.01 must use the ipsec_migrate utility to
migrate configuration data.

The HP-UX Performance White Paper contains performance statistics and information for HP-UX
IPSec on HP-UX 11i v2. Customers will experience similar performance on HP-UX 11i v3
systems.

For further information, see the following manpages in the 1M section: ipsec_admin,
ipsec_config, ipsec_config_add, ipsec_config_batch, ipsec_config_delete, ipsec_config_export,
ipsec_migrate, ipsec_policy, and ipsec_report.

In addition, see the following documents, available at
http://docs.hp.com/en/internet.html#IPSec.
HP-UX IPSec version A.02.01 Administrator’s Guide (J4256-90015)
HP-UX IPSec version A.02.01.01 Release Notes (J4256-90022)
HP-UX IPSec Performance and Sizing White Paper
Using OpenSSL Certificates with HP-UX IPSec A.02.01


OpenSSL
Secure Shell


OpenSSL on HP-UX 11i v3


Default version is OpenSSL A.00.09.08d
• Toggle script changes default versions
– /opt/openssl/switchversion.sh
– Changing the default version makes A.00.09.08d command line features
unavailable
• However, the OpenSSL A.00.09.08d libraries are still supported
New features
• Supports hardware ENGINES
– 4758cca, aep, atalla, chil, cswift, gmp, nuron, sureware, ubsec
• Supports elliptic curve cryptography protocols
– Elliptic Curve Crypto (ECC)
– Elliptic Curve Diffie-Hellman (ECDH)
– Elliptic Curve Digital Signature Algorithm (ECDSA)
• Supports X.509 and X.509v3 certificates


OpenSSL on HP-UX 11i v3 is updated to version A.00.09.08d with support in the default version for
several hardware ENGINES, support for elliptic curve cryptography, and EVP, the library that
provides a high-level interface to cryptographic functions.

OpenSSL A.00.09.08d is based on the open source OpenSSL 0.9.7l and 0.9.8d products
installed in the /opt/openssl/0.9.8 and /opt/openssl/0.9.7 directories. The default version of
OpenSSL enabled in HP-UX 11i v3 is OpenSSL A.00.09.08d. A toggle script switchversion.sh is
available in /opt/openssl. Use this script to change the default version of OpenSSL between
OpenSSL A.00.09.08d and OpenSSL A.00.09.07l.
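A sketch of toggling the default version might look like the following; the exact behavior and options of switchversion.sh may differ, so treat the sequence as illustrative.

```
# Check which OpenSSL version is currently the default
openssl version

# Toggle the default between OpenSSL A.00.09.08d and A.00.09.07l
/opt/openssl/switchversion.sh

# Confirm the change
openssl version
```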

OpenSSL on HP-UX 11i v3 supports the following hardware ENGINES: 4758cca, aep, atalla,
chil, cswift, gmp, nuron, sureware, and ubsec. This version of OpenSSL also supports elliptic
curve cryptography. The public key elliptic curve cryptography protocols supported by OpenSSL
A.00.09.08d are the Elliptic Curve Crypto (ECC), the Elliptic Curve Diffie-Hellman (ECDH)
protocol, and the Elliptic Curve Digital Signature Algorithm (ECDSA).
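As a hedged example, the elliptic curve support can be exercised from the command line roughly as follows; prime256v1 is one of the named curves reported by the -list_curves option, and the file names are arbitrary.

```shell
# List the named curves this OpenSSL build supports
openssl ecparam -list_curves

# Generate an EC private key on the prime256v1 curve, then extract
# the matching public key (usable for ECDSA signatures)
openssl ecparam -name prime256v1 -genkey -out eckey.pem
openssl ec -in eckey.pem -pubout -out ecpub.pem
```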

Note that only OpenSSL A.00.09.08d has hardware ENGINE support libraries and elliptic
curve cryptography. If you change the default version of OpenSSL, the openssl A.00.09.08d
command line features will not be available. However, there is support for OpenSSL
A.00.09.08d libraries.

OpenSSL A.00.09.07l and OpenSSL A.00.09.08d support X.509 and X.509v3 certificates.
OpenSSL A.00.09.07l and OpenSSL A.00.09.08d also contain a few defect fixes.

For more information, refer to the OpenSSL A.00.09.07l/A.00.09.08d Release Notes at
http://docs.hp.com under the section 'Internet and Security Solutions'. Another source of
information is the OpenSSL Changelog at http://www.openssl.org/news/changelog.html


Secure Shell on HP-UX 11i v3 (1 of 2)


HP-UX Secure Shell A.04.40 is based on the public domain OpenSSH 4.4p1
Client/server architecture
• Supports SSH-1 and SSH-2 protocols
• Provides secured remote login, file transfer, and remote command execution
Version A.04.30 introduced
• Usage of TCP Wrappers that support IPv6
• An sftponly solution in a chroot environment.
Version A.04.40 introduced
• Conditional configuration in sshd_config file using 'Match' directive
– Allows user to selectively override some configuration options
• If specific criteria based on user, group, hostname or address are met
• New directives added sshd_config
– ForceCommand forces execution of specified command regardless of user
request
• Similar to command='...' option accepted in ~/.ssh/authorized_keys.
• Useful in conjunction with new 'Match' directive
– 'PermitOpen' directive
• Mirrors the permitopen='...' authorized_keys option
• Allows fine-grained control over port-forwardings that user is allowed to establish


HP-UX Secure Shell A.04.40 is based on the public domain OpenSSH 4.4p1. The client/server
architecture supports the SSH-1 and SSH-2 protocols and provides secured remote login, file
transfer, and remote command execution.

There are several new features in HP-UX Secure Shell A.04.40.003 as compared to
A.04.20.009 on HP-UX 11i v2. Version A.04.30 introduced HP-UX Secure Shell’s usage of TCP
Wrappers that support IPv6. Also, this version provided an sftponly solution in a chroot
environment. The rest of the new features were introduced in Secure Shell version A.04.40.

Conditional configuration in the sshd_config file using the 'Match' directive was implemented.
This allows the user to selectively override some configuration options if specific criteria based
on user, group, hostname or address are met.

A ForceCommand configuration directive was added to sshd_config(5). It is similar to the
command='...' option accepted in ~/.ssh/authorized_keys. It forces the execution of the
specified command regardless of what the user requested. This is very useful in conjunction with
the new 'Match' directive. Additionally, a 'PermitOpen' directive was also added to
sshd_config(5). This mirrors the permitopen='...' authorized_keys option, allowing fine-grained
control over the port-forwardings that a user is allowed to establish.
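These directives combine naturally in an sshd_config fragment such as the following; the group name, network, and sftp-server path are hypothetical and vary by installation.

```
# Settings inside a Match block apply only when its criteria are met
Match Group sftponly
    # Force an sftp-only session regardless of what the user requested
    ForceCommand /usr/libexec/sftp-server

Match Address 192.0.2.0/24
    # Allow these clients to forward only to this one destination
    PermitOpen localhost:8080
```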


Secure Shell on HP-UX 11i v3 (2 of 2)


Version A.04.40 features continued
• Enables optional logging of transactions to sftp-server
• Added 'ExitOnForwardFailure' option to cause ssh
– Exits with a non-zero exit code when requested port forwardings are
not established
• Extended sshd_config 'SubSystem' declarations to allow specification of
command-line arguments
• Replaced all integer overflow susceptible invocations of malloc and
realloc with overflow-checking equivalents
• Modified ssh to record port numbers for hosts stored in
~/.ssh/known_hosts when a non-standard port has been requested
Documentation
• HP-UX Secure Shell Getting Started Guide
• HP-UX Secure Shell A.04.40.003 Release Notes
• sshd_config(5), ssh_config(5), and ssh(1) manpages

This version enables optional logging of transactions to sftp-server. It added an
'ExitOnForwardFailure' option to cause ssh(1) to exit with a non-zero exit code when requested
port forwardings are not established. It extended sshd_config 'SubSystem' declarations to allow
the specification of command-line arguments. It replaced all integer overflow susceptible
invocations of malloc(3) and realloc(3) with overflow-checking equivalents. Lastly, it modified ssh
behavior so that ssh now records port numbers for hosts stored in ~/.ssh/known_hosts when a
non-standard port has been requested. HP-UX Secure Shell A.04.40.003 also contains some
defect fixes.
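The last change means a host contacted on a non-standard port is recorded in ~/.ssh/known_hosts in the [host]:port form. A small portable sketch of pulling the host and port back out of such an entry (the key data below is a placeholder, not a real key):

```shell
# known_hosts entry written after "ssh -p 2222 server.example.com"
entry='[server.example.com]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA...'

hostport=${entry%% *}   # first whitespace-separated field: [host]:port
host=${hostport#\[}     # strip the leading "["
host=${host%%]*}        # strip the "]:port" suffix
port=${hostport##*:}    # everything after the last ":"

echo "host=$host port=$port"
```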

Refer to the HP-UX Secure Shell Getting Started Guide and the HP-UX Secure Shell A.04.40.003
Release Notes available on http://docs.hp.com in the 'Internet and Security Solutions' section.
Also, refer to the sshd_config(5), ssh_config(5), and ssh(1) manpages.


ServiceGuard Module
ServiceGuard
ECMT
SGeSAP
SGeRAC


The ServiceGuard module includes changes to ServiceGuard, ECMT, SGeRAC, and SGeSAP on
HP-UX 11i v3.


ServiceGuard on HP-UX 11i v3


ServiceGuard is a high availability software product
• Protects mission critical applications from wide variety of hardware and software
failures
ServiceGuard version A.11.17.01 supports
• Persistent DSF naming
– Refer to intro(7) manpage and HP-UX System Administrator’s Guide to
understand capabilities of new persistent device-file (DSF) naming protocol
• Dynamic multipathing
• Large PID
• 39-character hostname
olrad -C command identifies NICs that are part of the ServiceGuard cluster
configuration
SMH has a new ServiceGuard Manager GUI plug-in
• Previous standalone ServiceGuard Manager GUI is also still supported
RS232 serial line as cluster heartbeat is no longer supported
VxVM 3.5 is no longer supported, but VxVM 4.1 is supported
Cluster File System and Cluster Volume Manager are not supported in the
initial release of ServiceGuard 11.17.01 on HP-UX 11i v3
• Customers who need CFS or CVM should not upgrade to HP-UX 11i v3 until CFS
and CVM are available on that platform

HP ServiceGuard is a high availability software product for protecting mission critical
applications from a wide variety of hardware and software failures.

HP ServiceGuard on HP-UX 11i v3 is updated to version A.11.17.01 with support for persistent
device special file (DSF) naming and dynamic multipathing, large PID, and 39-character
hostname. Customers upgrading to HP-UX 11i v3 should read the intro(7) manpage and the
relevant sections of HP-UX System Administrator’s Guide to understand the capabilities of the
new persistent device-file (DSF) naming protocol.
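On an HP-UX 11i v3 system, the mapping between the new persistent DSFs and the legacy DSFs referenced by an existing cluster configuration can be reviewed with ioscan before migrating; the device name below is hypothetical.

```
# Show the persistent-to-legacy DSF mapping for all mass storage devices
ioscan -m dsf

# Show the mapping for a single persistent DSF
ioscan -m dsf /dev/disk/disk4
```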

The HP-UX olrad -C command identifies networking interfaces (NICs) that are part of the
ServiceGuard cluster configuration.

A new ServiceGuard Manager GUI is delivered as a plug-in to HP System Management
Homepage (HP SMH). The old (standalone) ServiceGuard Manager GUI is also still supported.

Support for RS232 serial line as cluster heartbeat and support for version 3.5 of Veritas
Volume Manager (VxVM) from Symantec are obsolete and no longer supported with
ServiceGuard 11.17.01 on HP-UX 11i v3. However, VxVM 4.1 is supported.

Veritas Cluster File System (CFS) and Cluster Volume Manager (CVM) from Symantec are not
supported in the initial release of ServiceGuard 11.17.01 on HP-UX 11i v3. Customers who
need CFS or CVM should not upgrade to HP-UX 11i v3 until CFS and CVM are available on that
platform.

Go to http://docs.hp.com and navigate to High Availability to find informative documents such
as Managing ServiceGuard (13th Edition), HP ServiceGuard Version A.11.17 on HP-UX 11i v3
Release Notes, and HP ServiceGuard Quorum Server Version A.02.00 Release Notes.


HP-UX Cluster Solutions Architecture

[Figure: HP-UX cluster solutions architecture. Two alternative cluster administration layers run
on HP-UX: the HP "stack" built around HP ServiceGuard and the Symantec "stack" built around
VERITAS VCS. Both sit on the VxCFS modules: Messaging (GAB), Lower Level Transport (LLT),
and lock managers (GLM). These modules run over VxFS (which includes the core CFS code)
and VxVM/CVM, all on HP-UX.]


HP ServiceGuard or Veritas VCS is required for clustering and for managing cluster membership
and applications migration and failover. HP ServiceGuard is better integrated with HP’s
technology, e.g. WLM and AE administration tools.


Enterprise Cluster Master Toolkit (ECMT) Version 4.0 with SG 11.17.01 on HP-UX 11i v3
ECMT is a set of templates and scripts
• Allows configuring ServiceGuard packages for the HP Internet servers as
well as for third party database management systems
• Unified set of high availability tools is being released on HP-UX 11i v3
HP-UX 11i v3 has ECMT version 4
• ECMT version 3 added
– Support for VERITAS Cluster File System (CFS) in a ServiceGuard
A.11.17 environment.
– Enhancements to toolkit README files in ECMT bundle
• ECMT version 4 added
– Support for ServiceGuard 11.17.01 (non-CFS) for
• CIFS
• Tomcat and Apache
• Oracle 10g R2
Oracle Toolkit cannot handle password protected listeners
Refer to Enterprise Cluster Master Toolkit Version B.04.00 Release
Notes at http://docs.hp.com


The Enterprise Cluster Master Toolkit (ECMT) is a set of templates and scripts that allow you to
configure ServiceGuard packages for the HP Internet servers as well as for third party database
management systems. This unified set of high availability tools is being released on HP-UX 11i
v3.

HP-UX 11i v3 has ECMT version 4. ECMT Version B.03.00 included support for VERITAS Cluster
File System (CFS) in a ServiceGuard A.11.17 environment and enhancements to the README
file of each toolkit in the ECMT bundle.

ECMT Version B.04.00 with SG 11.17.01 on HP-UX 11i v3 additionally includes support for
ServiceGuard 11.17.01 (non-CFS) for CIFS, Tomcat and Apache, and Oracle 10g R2.

Oracle Toolkit cannot handle password protected listeners. For detailed information on this issue
and defect fixes, refer to the Enterprise Cluster Master Toolkit Version B.04.00 Release Notes.

Customers migrating from HP-UX 11i v1 will notice additional enhancements. These include
enhancements to the Oracle Toolkit to support scripts for Oracle 9i and 10g database
applications and assurance that all scripts are owned and executed by root. Performance
Enhancement occurs at package start-up by checking for the availability of the DB instance and
returning a success/failure code. If the instance cannot be successfully accessed, a failure will
be returned to ServiceGuard’s package manager to halt any additional attempts to bring up the
package.


HP ServiceGuard Extension for SAP 4.5 (SGeSAP) on HP-UX 11i v3 (1 of 2)
Extends ServiceGuard's failover cluster capabilities to SAP application environments
Provides a flexible framework of package templates to easily define cluster packages that
protect ABAP-only, JAVA-only, double-stack and add-in installations of SAP components
• Includes SAP R3, mySAP, SAP liveCache and SAP Netweaver 2004(s) based applications
• Also clusters underlying Oracle and MaxDB databases
Supports SAP enqueue replication technologies for both ABAP and JAVA-based SAP
applications
• Can create double-stack configurations with two replications for a single SAP system
• Supports SAP Netweaver 2004’s naming convention for replication service
• Supports up to 16 nodes in a replication cluster
• Automation feature implements a follow-and-push failover behavior
Optional SGeSAP dispatcher service monitor observes health of a SAP dispatcher process
• Can use regular health polling of dispatcher to retrieve an updated mapping of OS
processes to SAP work process type
– E.g. batch, dialog, update and spool
• Mapping information can be used by other tools
– Can be piped into HP Workload Manager (WLM) to allow workload rules to be defined with
sub-instance granularity


HP ServiceGuard Extension for SAP B.04.50 (SGeSAP) extends HP ServiceGuard's failover
cluster capabilities to SAP application environments. SGeSAP provides a flexible framework of
package templates to easily define cluster packages that protect ABAP-only, JAVA-only, double-
stack and add-in installations of SAP components. This includes SAP R3, mySAP, SAP liveCache
and SAP Netweaver 2004(s) based applications. SGeSAP also clusters underlying Oracle and
MaxDB databases.

SGeSAP B.04.50 supports SAP enqueue replication technologies for both ABAP and JAVA-
based SAP applications. It is now possible to create double-stack configurations with two
replications for a single SAP system. Support for the SAP Netweaver 2004’s naming convention
for replication services was added. SGeSAP supports up to 16 nodes in a replication cluster.
The automation feature implements a follow-and-push failover behavior.

The optional SGeSAP dispatcher service monitor continuously observes the health of a SAP
dispatcher process. It is possible to use the regular health polling of the dispatcher to retrieve an
updated mapping of OS processes to SAP work process types like batch, dialog, update and
spool. This mapping information can be used by other tools. Currently it can be piped into HP
Workload Manager (WLM) to allow workload rules to be defined with sub-instance granularity.


HP ServiceGuard Extension for SAP 4.5 (SGeSAP) on HP-UX 11i v3 (2 of 2)
Delivered on the AR only
• Installed in staging area
• Existing package directories must be updated with the new templates
manually
• Customers upgrading from SGeSAP 3.11 or older must change their
SGeSAP package configuration files to a new format
Initial release of SGeSAP B.04.50 on HP-UX 11i v3 does not support
• DB/2 database failover
• HP ServiceGuard Cluster File System
• Future releases will include support for SGCFSRAC
• UPCC support will be added in SG 11.18 release timeframe
Documentation
• Managing ServiceGuard Extension for SAP
• ServiceGuard Extension for SAP B.04.50 Release Notes


SGeSAP is delivered on the AR only and is installed in a staging area. Existing package
directories must be updated with the new templates manually. Customers upgrading from
SGeSAP 3.11 or older must change their SGeSAP package configuration files to a new format.

The initial release of SGeSAP B.04.50 on HP-UX 11i v3 does not support DB/2 database
failover or HP ServiceGuard Cluster File System. Future releases will include support for
SGCFSRAC. UPCC support will be added in the SG 11.18 release timeframe.

The user guide Managing ServiceGuard Extension for SAP and the ServiceGuard Extension for
SAP B.04.50 Release Notes are available from http://docs.hp.com/


ServiceGuard Module
SGeRAC


The ServiceGuard submodule focuses on changes to SGeRAC on HP-UX 11i v3.


HP ServiceGuard Extension for RAC (SGeRAC) on HP-UX 11i v3 Overview
HP ServiceGuard Extension for RAC
• Formerly ServiceGuard OPS Edition
Allows a group of servers to be configured as a highly
available cluster that supports Oracle Real Application
Clusters (RAC)
• Integration offers best aspects of HP's enterprise clusters and
RAC
– High availability, data integrity, flexibility, scalability and
reduced database administration costs


HP ServiceGuard Extension for RAC (formerly ServiceGuard OPS Edition) is updated to version
A.11.17.01 with many new features that are detailed on the next slide.

HP ServiceGuard Extension for RAC (formerly ServiceGuard OPS Edition) version A.11.17.01
allows a group of servers to be configured as a highly available cluster that supports Oracle
Real Application Clusters (RAC). The integration of the product offers the best aspects of HP's
enterprise clusters and RAC: high availability, data integrity, flexibility, scalability and reduced
database administration costs.


Classification of Failures
A system running an Oracle database server may become
unavailable from unplanned outages
Internal Failures
• Not visible to Oracle clients
• Client connection is not interrupted
• Protected by redundant components
• Examples: network failure, storage link failures
External Failures
• Visible to Oracle clients
• Client connection is interrupted
• Need to restore services as quickly as possible
• Examples: node failures, database instance failures
Goal of SGeRAC is to minimize interruption to Oracle clients


A system running an Oracle database server may become unavailable from various unplanned
outages. With a properly configured SGeRAC and RAC cluster, we can minimize the impact
to the Oracle clients. Failures that can be completely masked from the client are internal
failures. External failures are failures that will cause client connection interruptions.


SGeRAC and Oracle 10g RAC Concept


RAC uses a multiple access point paradigm
• Oracle instances on multiple nodes
• Concurrent access to same database
• Clients connect to service from available instances
Oracle Clusterware (OC)
• Management of database and resources
• Management of ASM, if configured
SGeRAC extends ServiceGuard clustering for RAC
• Cluster/group membership to OC through GMS
• Shared storage access (CFS, CVM, SLVM, ASM over SLVM)
• Local LAN failover
• Storage Link failover
– SLVM pvlinks
– CVM dynamic multi-path

SGeRAC and RAC use a multiple access point paradigm. The access point is typically an IP
address and a port number.

For Oracle clients, the access point is the connect descriptor, which resolves to an IP address
and port number.

Oracle Clusterware (OC) resources include database instances, services, virtual IP addresses
(VIPs), and listeners. It supports VIP local failover and remote failover.

SGeRAC extends ServiceGuard clustering for RAC. Node monitoring is accomplished through
heartbeats. It supports LAN failover with network sensor. It supports storage link failover using
SLVM pvlinks and CVM dynamic multi-path. It supports concurrent shared storage access through
SLVM, CVM, CFS, and ASM over SLVM.


Fast Service Recovery


Restores service to Oracle clients
Combination of client and server configurations
Client side
• Clients are configured with multiple connect descriptors
• Clients connect with the first connect descriptor
• If the connect fails, try the next connect descriptor
Server side
• IP address failover from failed node to avoid TCP/IP timeout
• Surviving node accepts new connection on its IP address


For fast service recovery, the clients should be configured to be able to connect to multiple
service points. The server needs to fail over the IP address so that new connections can avoid
the TCP/IP connect timeout.
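On the client side, 'multiple connect descriptors' can be sketched as an Oracle Net alias whose address list enables failover; the host names, port, and service name below are hypothetical.

```
# tnsnames.ora entry: if the first address fails, the client tries the next
MYDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = vip-node-a)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = vip-node-b)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )
```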


10g RAC Redundant Configuration


[Figure: 10g RAC redundant configuration. Clients connect over a public TCP/IP network
through primary and standby switches. Node A and Node B each run a listener bound to a
virtual IP (VIP A and VIP B), a 10g RAC instance, server processes, Oracle Clusterware (OC),
and SGeRAC. Heartbeat plus public traffic runs over primary and standby links, a private
heartbeat runs through primary and standby heartbeat switches, and each node has redundant
storage links to the shared database.]

This slide illustrates an Oracle 10g RAC Redundant Configuration.

The 10g RAC model uses a single VIP per node. The Oracle Clusterware (OC) relies on the
platform for high-availability. The preferred set up is to have ServiceGuard primary and standby
with the network monitored by ServiceGuard and ServiceGuard local LAN failover.

APA is also supported; it provides a single pipe and is load balanced, but it has a single point
of failure at the switch. APA/Hot Standby is also supported to provide an active and a standby link.

Note that the RIP and VIP cannot be on the same network, and that the VIP is managed by the
OC. However, OC does not support multiple subnets for the VIP. (As a side note, 9i RAC uses
FIP and allows multiple subnet support for multiple instances.)

There is a VIP local failover lag after an SG local LAN failover, but there is no lag on APA and
APA/Hot Standby failover.


Network for Cluster Communications


General principle
• All interconnect traffic go on same network
– ServiceGuard will monitor the network
– SG can resolve interconnect failures by cluster reconfiguration
Not always possible to place all interconnect traffic on same network
• Very high RAC cache fusion traffic may require a separate network for
RAC-IC
• IB is not supported by CFS/CVM
– RAC-IC IB traffic is on a separate network
• SGeFF requires dual SG heartbeat network
– RAC-IC does not support multiple network for HA purposes
• In these cases, there will be a longer time to recover network failures


The general principle for a network for cluster communications is to have all interconnect traffic
go on the same network so that ServiceGuard will monitor the network and resolve interconnect
failures by cluster reconfiguration. But, sometimes, it is not possible to place all interconnect
traffic on the same network.

For example, a system may have very high RAC cache fusion traffic, so a separate network for
RAC-IC may be needed.
Another case is that Infiniband (IB) is not supported by CFS/CVM, so the RAC-IC IB traffic is on
a separate network. Another example is that SGeFF requires a dual SG heartbeat network, and
RAC-IC does not support multiple networks for HA purposes.

For these cases, the customer will see a longer time to recover some network failures unless
special logic is developed.


Types of Traffic
ServiceGuard heartbeat and communications traffic (SG-HB)
• Supported on multiple subnet networks
CSS-HB traffic and communications traffic for OC
• Uses a single logical connection
• Does not run over multiple subnet networks
RAC-IC
• RAC instance peer to peer traffic and communications for GCS and GES
• Network HA is provided by the HP-UX platform
GAB/LLT (CFS/CVM)
• Veritas cluster heartbeat and communications traffic
• GAB/LLT communicates over link level protocol (DLPI)
– Runs over all SG-HB networks, including primary and standby links
• GAB/LLT is not supported over InfiniBand or APA


There are many types of traffic in this complex environment.

ServiceGuard heartbeat and communications traffic (SG-HB) is supported on multiple subnet
networks. CSS-HB traffic and communications traffic for OC uses a single logical connection and
does not run over multiple subnet networks. RAC-IC is RAC instance peer-to-peer traffic and
communications for GCS and GES. In RAC-IC, network HA is provided by the HP-UX platform.
Finally, there is GAB/LLT (CFS/CVM), the Veritas cluster heartbeat and communications traffic.
GAB/LLT communicates over a link-level protocol (DLPI) and runs over all SG-HB networks,
including primary and standby links. However, GAB/LLT is not supported over InfiniBand or APA.


Light and Moderate (Single)


Timeouts
• SG-HB, CSS-HB (default), RAC-IC (default)

[Diagram: Node A and Node B connected by a private primary network (lan1 to lan1) carrying
SG-HB, CSS-HB, and RAC-IC traffic, and by a private standby network (lan2 to lan2).]


HA is provided by primary and standby pairs. The SG reconfiguration time is less than the CSS
HB timeout. The CSS reconfiguration time is less than the RAC IMR timeout. The general
guideline is that the CSS timeout should be greater than the larger of 180 seconds and 25 times
the SG NODE_TIMEOUT.
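As a rough worked example of this guideline (the 180-second floor and the 25x multiplier come from the guideline above; the NODE_TIMEOUT value itself is hypothetical), the recommended CSS timeout can be computed with plain shell arithmetic:

```shell
# Sketch only: recommended CSS timeout = max(180 s, 25 * SG NODE_TIMEOUT).
node_timeout=8                     # hypothetical SG NODE_TIMEOUT in seconds
scaled=$((25 * node_timeout))
if [ "$scaled" -gt 180 ]; then
    css_timeout=$scaled
else
    css_timeout=180
fi
echo "Recommended CSS timeout: ${css_timeout} seconds"
# prints: Recommended CSS timeout: 200 seconds
```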


Heavy Cache Fusion Traffic (Multiple)


Timeouts
• SG-HB, CSS-HB (default)
• RAC-IC (default)
Subnet monitoring
[Diagram: Node A and Node B carry RAC-IC traffic on a private primary network (lan1 to lan1)
with a private standby (lan2 to lan2), and SG-HB and CSS-HB traffic on a private primary
network (lan3 to lan3) with a private standby (lan4 to lan4).]


HA is provided by primary and standby pairs. All heartbeat traffic remains on the same network.
Without subnet monitoring, double network failures are discovered by timeout. For example,
consider the result of both lan1 and lan2 failing.


Faster Failover (SGeFF and SGeRAC)


Dual SG heartbeat required
Two nodes only
Subnet monitoring
[Diagram: Node A and Node B carry CSS-HB and RAC-IC traffic on a private primary network
(lan1 to lan1) with a private standby (lan2 to lan2), and SG-HB traffic on two private primary
networks (lan3 to lan3 and lan4 to lan4).]

This is a suggested configuration. Two SG heartbeats on two primary networks allow faster
failover. Any combination of dual SG heartbeat networks is supported.
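A dual-heartbeat layout like this one might be expressed in the cluster ASCII configuration file roughly as follows (node names and IP addresses are hypothetical; the keywords are the standard Serviceguard cluster-configuration keywords):

```
NODE_NAME nodeA
  NETWORK_INTERFACE lan3
    HEARTBEAT_IP 192.168.3.1
  NETWORK_INTERFACE lan4
    HEARTBEAT_IP 192.168.4.1

NODE_NAME nodeB
  NETWORK_INTERFACE lan3
    HEARTBEAT_IP 192.168.3.2
  NETWORK_INTERFACE lan4
    HEARTBEAT_IP 192.168.4.2
```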


Storage
CVM / CFS 4.1
• Simple package dependency
• Multi-node package
• Single OC HOME; Single ORACLE HOME; Single archive designation
• DMP; Mirroring; Online reconfiguration
• Many DG/CFS storage options
CVM
• DMP; Mirroring; Online reconfiguration
• Two CVM DG options
• Several DG/local FS storage options
SLVM
• PVlinks, mirroring, and SNOR
• Two SLVM VG options
• Several VG/local FS storage options
ASM over SLVM
• New in 10g


CVM / CFS 4.1 storage has simple package dependency and a multi-node package. It supports
Single OC HOME, Single ORACLE HOME, Single archive designation, and DMP, mirroring,
and online reconfiguration. Storage options with CVM / CFS 4.1 include DG/CFS for OC
HOME, DG/CFS for OCR and Voting Disk, DG/CFS for Oracle HOME, DG/CFS for Oracle
RAC Instance Data files with one DG per instance, and DG/CFS for flash recovery / archive
designation.

CVM supports DMP, mirroring, and online reconfiguration. CVM supports CVM DG for OCR
and Voting Disk, CVM DG for Oracle RAC instance data files with one DG per instance,
DG/local FS for OC HOME, and DG/local FS for Oracle HOME. DG/local FS for Flash
Recovery / Archive logs can be placed in a failover package so that the node performing Oracle
recovery can have access to all archive logs.

SLVM supports PVlinks, mirroring, and SNOR. SLVM supports SLVM VG for OCR and Voting
Disk, SLVM VG for Oracle RAC instance data files with one VG per instance, VG/local FS for
OC HOME, and VG/local FS for Oracle HOME. VG/local FS for Flash Recovery / Archive logs
can be placed in the failover package so that the node performing Oracle recovery can have
access to all archive logs.

Another option is ASM over SLVM, which we will cover next.


ASM over SLVM


ASM introduced by Oracle in 10g
Alternative to platform file systems and volume managers
• Multi-path support
• Supports same names on all nodes which eases ASM configuration
• Protects ASM data against inadvertent overwrites from nodes
inside/outside the cluster
ASM disk groups must be raw logical volumes managed by SLVM
ASM supports
• Datafiles, control files, online and archive redo log files
ASM does not support
• Oracle binaries, trace files, audit files, alert logs, backup files, export
files, tar files, core files
• Oracle cluster registry devices and quorum device (voting disk)
• Application binaries and data


ASM on HP-UX has been supported since Oracle 10g became available on HP-UX in two
configurations that do not require SG/SGeRAC. They are the non-clustered single-instance
Oracle and the Oracle single-instance and RAC databases running in a pure OC environment.

There are some restrictions on SG/SGeRAC A.11.17 support of ASM. ASM support is available
only for RAC databases with SG/SGeRAC A.11.17. ASM disk group members must be raw
logical volumes managed by SLVM. ASM is not supported with SGeRAC for any of these
disaster-tolerance solutions: MetroClusters, ContinentalClusters, or Extended Clusters. The
Extended Cluster-like configuration in which ASM is used for inter-site mirroring, with ASM disk
group members being un-mirrored SLVM raw LVs, is also not supported.

ASM over SLVM provides multi-path support. It supports the same names on all nodes which
eases ASM configuration. And, ASM over SLVM protects ASM data against inadvertent
overwrites from nodes inside/outside the cluster.

There are a couple of disadvantages of ASM over SLVM. Additional configuration and
management tasks are imposed by the extra layer of volume management. There is a small
performance impact from the extra layer of volume management. SLVM has some restrictions in
the area of online reconfiguration, for example adding a new disk using SNOR.

SLVM VGs for ASM DGs present raw LVs to ASM that resemble raw disks as far as possible. The
LV restrictions include that each LV must occupy the usable space of one single PV, and the LV
should not be striped or mirrored, span multiple PVs, or share a PV with other LVs. There is a
finite timeout for each SLVM LV; for example, the value = total physical paths to the PV * PV
timeout.
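To make that timeout formula concrete (both input values below are hypothetical examples, not documented defaults):

```shell
# Sketch only: SLVM LV timeout = total physical paths to the PV * PV timeout.
paths_to_pv=4        # hypothetical: four physical paths to the PV
pv_timeout=30        # hypothetical: PV timeout of 30 seconds
lv_timeout=$((paths_to_pv * pv_timeout))
echo "LV timeout: ${lv_timeout} seconds"
# prints: LV timeout: 120 seconds
```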


Packages
SG packages
• Strongly recommended use of packages
• Use to synchronize SG/SGeRAC and OC/RAC
– Start and stop synchronization
– Resource access synchronization
– SGeRAC -> Storage -> OC -> FIP -> RAC
A.11.17 on HP-UX 11i v3
• New storage model for CFS (MNPs)
• Consider using package integration framework
– Uniform method to coordinate operations of the combined software
stack.
Made possible by Oracle’s improved support for
• On-demand startup and shutdown of OC and RAC
• Invocation of OC commands from customer-developed scripts


The framework provides a uniform, easy-to-manage, and intuitive method to coordinate the
operation of this combined software stack of RAC and SGeRAC, across the full range of storage
management options supported by SGeRAC.

SGeRAC provides cluster membership to CSS and clustered storage to meet the needs of OC
and the RAC database instance. This includes OCR and Voting disks on SLVM, CVM, or CFS,
and OC HOME on CFS; and RAC database files on SLVM, CVM, ASM over SLVM, and CFS.
SGeRAC manages the database and associated resources, such as the DB instance, services,
VIP, and listener. It also manages the ASM instance, if configured.

The user must make sure that the combined stack starts up and shuts down in the proper
sequence. It is possible to automate the start up and shut down sequences, if desired. The
storage required by OC must be available before OC starts, and similarly, the storage required
by RAC database must be available before RAC instance starts. On shutdown, this sequence is
reversed.

Traditionally, the package is used to encapsulate the start and stop of storage. A.11.17
introduced the MNP model for CFS.


Package Integration Framework


SG/SGERAC A.11.17
Simple Dependency and Multi-node Package Framework
• Configure OC as one MNP
• Configure RAC database MNPs to depend on OC MNP
• Simple Dependency and Multi-node Package are not for general purpose
use at A.11.17
Reduces total package count
• Simplifies SGeRAC package configuration
Combined stack must start up and shut down in proper sequence
• Can automate start up and shut down sequences
• Storage required by OC must be available before OC starts
• Storage required by RAC database must be available before RAC
instance starts
• On shutdown, this sequence is reversed


Note that Simple Dependency and Multi-node Package are not for general-purpose use at
A.11.17. Use them only for package integration.

The user must make sure that the combined stack starts up and shuts down in the proper
sequence. It is possible to automate the start up and shut down sequences, if desired. The
storage required by OC must be available before OC starts, and similarly, the storage required
by RAC database must be available before RAC instance starts. On shutdown, this sequence is
reversed.

Traditionally, the package is used to encapsulate the start and stop of storage. A.11.17
introduced the MNP model for CFS.
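As an illustrative sketch, the "RAC database MNP depends on OC MNP" arrangement could look roughly like this in a package ASCII configuration (the package and dependency names are hypothetical):

```
PACKAGE_NAME          rac_db_mnp
PACKAGE_TYPE          MULTI_NODE

# Simple dependency: run this MNP only where the OC MNP is up.
DEPENDENCY_NAME       oc_up
DEPENDENCY_CONDITION  oc_mnp = UP
DEPENDENCY_LOCATION   SAME_NODE
```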


Resource Management by SGeRAC and OC


[Diagram: SG/SGeRAC Package Manager object services start, stop, and monitor the SGeRAC
MNPs. An optional Oracle Clusterware MNP starts, stops, and monitors Oracle Clusterware and
its storage; a RAC DB MNP, linked by a simple MNP dependency, starts, stops, and monitors the
RAC DB instances and, optionally, the ASM instance; Oracle Clusterware in turn starts, stops,
and monitors the database node apps (e.g., Virtual IP) and the database storage.]

This slide depicts resource management of SGeRAC and OC.

OC is monitored by init. The DB instance is monitored by OC.

MNP monitoring is done so that the MNP status reflects the status of what the MNP has started.
For example, if a DB instance is halted outside of the MNP, then the MNP halts.


HP ServiceGuard Extension for RAC (SGeRAC) on HP-UX 11i v3

New Features
• 16 nodes with SLVM
• Version rolling upgrade support
• Monitoring of RAC instances
• Faster group reconfiguration for Oracle instance crash
• Support configuration through SG GUI Manager
• Extended Campus Cluster support
• IP over IB (InfiniBand) support
• Oracle 10g RAC support
• Online node addition with SLVM
• SLVM single node online volume reconfiguration support
• SGeFF support in SLVM configuration
• Capacity Expansion support (large PID, 39 characters in hostname, 64 characters
in username)
• DRD safe and DRD useful
• Support native multi-pathing and new device file naming scheme
• Transition to new version by performing a rolling upgrade


SGeRAC on HP-UX 11i v3 supports 16 nodes with SLVM, version rolling upgrade, monitoring of
RAC instances, and faster group reconfiguration for an Oracle instance crash. It also supports
configuration through SG GUI Manager. It provides Extended Campus Cluster support. It
enables IP over IB (InfiniBand) and Oracle 10g RAC support.

SGeRAC on HP-UX 11i v3 supports online node addition with SLVM and SLVM single node
online volume reconfiguration. There is SGeFF support in SLVM configuration.

SGeRAC supports the interface capacity expansion thereby allowing large PIDs, 39 characters
in a hostname, and 64 characters in a username.

SGeRAC is DRD safe and DRD useful. And, it supports native multi-pathing and the new device
file naming scheme.

Additionally, customers migrating from HP-UX 11i v1 September 2005 will have Oracle 9i RAC
support, Quorum Server Support, IPv6 Support, MCOE support, and Administration and
configuration support through SG GUI Manager.

Since its release on HP-UX 11i v2, SGeRAC has been updated to version A.11.17.01 on HP-UX
11i v3 and now has many new features. If you want to use the new features, the transition to the
new version can be done by performing a rolling upgrade.

ServiceGuard Extension for RAC A.11.17 on HP-UX 11i v3 Release Notes details the new
features. Additionally, look for the SGeRAC manual and manpages updates.


Additional Information
http://haweb.cup.hp.com/SGeRAC/
http://haweb.cup.hp.com/SGeRAC/RAC-HowTo/
http://haweb.cup.hp.com/Support
http://haweb.cup.hp.com/ATC/Applications/ISVM/Oracle/index.html
http://docs.hp.com/hpux/ha


Refer to documents at the following URLs.


http://haweb.cup.hp.com/SGeRAC/
http://haweb.cup.hp.com/SGeRAC/RAC-HowTo/
http://haweb.cup.hp.com/Support
http://haweb.cup.hp.com/ATC/Applications/ISVM/Oracle/index.html
http://docs.hp.com/hpux/ha


Section Summary
This section described
• Features that ensure system and services availability
– Networking
– Security
• Secured Availability topics
– Montecito Processor
– Recovery
– Networking
– Security
– ServiceGuard


In this section, we covered features that ensure system and services availability, such as
Networking and Security products. Additionally, we explored other Secured Availability topics.
These included additional Montecito Processor features, recovery subsystems and tools, and
ServiceGuard.


Learning check



Learning Check Q1

1. Which provider has been added in the new release of SysFaultMgmt?
a. CPU instance provider
b. Memory instance provider
c. Environmental instance provider
d. EMS Wrapper provider



Learning Check Q2

2. Which management application has added support for SysFaultMgmt in the new release?
a. HP Systems Insight Manager
b. System Management Homepage
c. HP OpenView Operations
d. Global Workload Manager



Learning Check Q3

3. Which CPU provider configuration field has been added in the new release of SysFaultMgmt?
a. Architecture revision
b. Processor speed
c. Firmware ID
d. Serial number



Learning Check Q4

4. To which file do all of the SysFaultMgmt providers write any errors that occur?
a. sfm.log
b. FMLoggerconfig.xml
c. fmControl.log
d. sfm_error.log



Learning Check Q5

5. How can you subscribe to indications from the EMS wrapper provider?
a. CLI only for HP SIM and SMH
b. GUI only for HP SIM and SMH
c. Both CLI and GUI for HP SIM and SMH
d. Both CLI and GUI for HP SIM only



Overview and Updated Providers Learning Check Questions


1. Which provider has been added in the new release of
SysFaultMgmt?
a. CPU instance provider
b. Memory instance provider
c. Environmental instance provider
d. EMS Wrapper provider
2. Which management application has added support for
SysFaultMgmt in the new release?
a. HP Systems Insight Manager
b. System Management Homepage
c. HP OpenView Operations
d. Global Workload Manager
3. Which CPU provider configuration field has been added in the
new release of SysFaultMgmt?
a. Architecture revision
b. Processor speed
c. Firmware ID
d. Serial number
4. To which file do all of the SysFaultMgmt providers write any errors
that occur?
a. sfm.log
b. FMLoggerconfig.xml
c. fmControl.log
d. sfm_error.log
5. How can you subscribe to indications from the EMS wrapper
provider?
a. CLI only for HP SIM and SMH
b. GUI only for HP SIM and SMH
c. Both CLI and GUI for HP SIM and SMH
d. Both CLI and GUI for HP SIM only


Lab activity


See Lab Guide.



HP-UX 11i v3 Delta Simplified Management

Simplified Management
Section 6

© 2007 Hewlett-Packard Development Company, L.P.


The information contained herein is subject to change without notice

This section covers applications enabling Simplified Management of HP-UX 11i v3 (11.31)
systems. This section includes HP System Management Homepage (SMH), Event Manager
(EVWeb), and Event Monitoring Service (EMS).


Section Objectives
Upon completion of this section, you will be able to
• Describe and perform HP-UX 11i v3 system management
tasks using the HP System Management Homepage (SMH)
– Transition from SAM to SMH and other tools
• Describe and use the Event Manager (EVM)
• Describe and use the Event Monitoring Service (EMS)
• Describe and use the Unified Command Line Interface


This section includes the following modules.

• The System Management Module includes SMH and its applications, such as ncweb
and nwmgr, EVweb, the graphics manager, and SAM. It also has a WBEM module
including new and legacy providers.

• The Event Manager (EVM) Module describes EVM, which is a general mechanism for
posting and distributing events from any part of the operating system to any interested party.
Cell OL* will generate EVM events. EVM CIM provider generates WBEM indications. This
module includes Process Set Manager (ProcSM), which is a facility that uses EVM and by
which commands, utilities, and applications can find processes and be notified of their
termination.

• The Event Monitoring Service (EMS) Module describes EMS, which is an infrastructure for
monitoring HP-UX systems. It includes MIB, Disk (LVM), RDBMS (Oracle, Informix), and the
Unified Command Line Interface.

The Labs in this section include the following:


• The SMH Lab lets the student use SMH instead of SAM to configure systems manually.
• There is an optional EVM Lab to practice using command-line utilities to post and handle
events from shell scripts and the command line. The student may use the configurable event
logger to have full control over which events are logged.
• The optional EMS Lab allows the student to use EMS to monitor HP-UX system resources.
Practice using the high availability EMS monitors: MIB, Disk (LVM), RDBMS (Oracle,
Informix).


System Management

SMH
SMH Applications and Tools
SAM
WBEM


The System Management Module includes System Management Homepage, including several of
its associated applications and tools. This module also includes SAM and WBEM.


System Management
System Management Homepage


This sub-module covers HP System Management Homepage. It includes two new tools for
network management, ncweb and nwmgr. It also includes EVWeb.


HP System Management Homepage (HP SMH)


Overview
Web-based system administration tool for managing HP-UX 11i
• Provides Web-based systems management functionality
• At-a-glance monitoring of system component health
• Consolidated log viewing
• High performance UI that responds rapidly
– Provides significant performance improvements over SAM
• Provides Terminal User Interfaces
SMH for HP-UX provides many key customer benefits
• Host based authentication and tight integration with existing security infrastructure
• Management tools that consume minimal system resources
• Includes “start on demand” capabilities
– System resources are not used when tool is not in use
• Highly responsive user interface supporting “access from anywhere” via a browser
• Usable “out of the box” (default installed) by root with no user configuration
• Seamless, secure integration with HP System Insight Manager (HP SIM)
• WARNING: You must use either the command sequence or HP SMH to perform any operation
that SMH supports
– Attempting to start an operation with commands and completing it with SMH can result in
errors and possibly corrupt data or data structures


HP System Management Homepage (HP SMH) is the system administration tool for managing
HP-UX 11i. HP SMH provides Web-based systems management functionality, at-a-glance
monitoring of system component health, and consolidated log viewing. SMH has a Web-based
interface that consolidates and simplifies single system management for HP servers on Windows,
Linux, and HP-UX operating systems. HP SMH has a high performance UI that responds rapidly.
It also provides significant performance improvements over its predecessor SAM. HP SMH also
provides Terminal User Interfaces (TUIs). (Note SMH is the replacement for SAM, which is
deprecated in this release.)

SMH for HP-UX provides many key customer benefits. It has host based authentication and tight
integration with existing security infrastructure. Its management tools consume minimal system
resources. SMH includes “start on demand” capabilities so that system resources are not used
when the tool is not in use. It has a highly responsive user interface supporting “access from
anywhere” via a browser. It is usable “out of the box” (default installed) by root with no user
configuration. And, it provides seamless, secure integration with HP System Insight Manager (HP
SIM).

WARNING: You must use either the command sequence or HP SMH to perform any operation
that HP SMH supports. Attempting to start an operation with commands and completing it with
HP SMH can result in errors and possibly corrupt data or data structures.


HP System Management Homepage (HP SMH) on HP-UX 11i v3
SMH is updated to version A.2.2.5
• New Web-based solutions for Networking and Communications
– ncweb
• ServiceGuard complex management
– sgmgr
• Many more system management tools
• Defect fixes
• Minor roll of earlier versions
– A.2.2.1 and A.2.2.3 released for HP-UX 11i v1 0509 and HP-UX 11i v2
0606 respectively
Documentation
• smh(1M), hpsmh(1M) and smhstartconfig(1M) manpages
• Documents at http://docs.hp.com
– Navigate to Network and Systems Management System Administration
• HP System Management Homepage Release Notes
• HP System Management Homepage Installation Guide
• “Next generation single-system management on HP-UX 11i v2 (B.11.23)” white
paper
• HP SMH Online Help included with the product


On HP-UX 11i v3, HP System Management Homepage is updated to version A.2.2.5. As an
infrastructure for integrating system management tools, there is no difference between HP SMH
for previous releases of HP-UX 11i and HP SMH for HP-UX 11i v3. The main new features in
SMH are the new Web-based solutions for Networking and Communications (ncweb) and
ServiceGuard complex management (sgmgr). Many more system management tools are
integrated in HP SMH for HP-UX 11i v3, in addition to defect fixes. This SMH version is a minor
roll of earlier versions (A.2.2.1 and A.2.2.3) released for previous HP-UX 11i releases (HP-UX
11i v1 0509 and HP-UX 11i v2 0606).

HP System Management Homepage manpages included with the product are smh(1M), hpsmh(1M), and
smhstartconfig(1M). For further information, see the following documents, available at
http://docs.hp.com by navigating to Network and Systems Management System Administration.
The documents are HP System Management Homepage Release Notes, HP System Management
Homepage Installation Guide, “Next generation single-system management on HP-UX 11i v2
(B.11.23)” white paper, and HP SMH Online Help included with the product.


SMH TUI Shows SMH Functional Area Launchers


This screen shot of the SMH TUI shows the different SMH functional area launchers. In the
following sub-module, we will cover several of these in more detail.


SMH GUI on HP-UX 11i v3


This is a screen shot of the SMH GUI page that you are dropped into after logging in through
the SMH login screen.


System Management

SMH Applications and Tools


This sub-module covers several of the functional areas within SMH, including fsweb, secweb,
and the Xserver graphics manager.


Disk & File Systems Administration with fsweb


Disks and File Systems administration tool – fsweb
• Provides a Web-based GUI for File System and Disks system administration tasks
– HP SMH provides an extended GUI to manage the Disks and File Systems tasks for HP-UX
• Provides a new TUI to replace the legacy SAM interface
– Completely replaces File System and Disks functional area in SAM
• Supports the new mass storage stack
• Provides additional Logical Volume Manager (LVM) support
fsweb tool is the primary interface for File System and Disks system administration tasks
• Can be launched from the HP System Management Homepage (SMH) and HP SIM
• Can also be launched using the fsweb command
fsweb supports system administration tasks
• Management of logical volumes and volume groups, disk management tasks, and file system
tasks
fsweb supports several file systems
• Cache File System (CFS), Compact Disc File System (CDFS), Common Internet File System
(CIFS), Hierarchical File System (HFS), Network File System (NFS), Veritas File System (VxFS)
fsweb bundle is FileSystems
• FileSystems bundle is available on the Operating Environment DVD and the Applications
DVD
• HP recommends that you install FileSystems


The Disks and File Systems administration tool, fsweb, provides a Web-based graphical user
interface (GUI) for File System and Disks system administration tasks. In addition, a new TUI
replaces the legacy SAM interface. Fsweb supports the new mass storage stack. It provides
additional Logical Volume Manager (LVM) support and has other new functionalities.

The Disks & File Systems (fsweb) tool is the primary interface for File System and Disks system
administration tasks. The tool provides both web-based Graphical User Interface (GUI) and Text
User Interface (TUI). The Disks & File Systems tool can be launched from the HP System
Management Homepage (HP SMH) and HP Systems Insight Manager (HP SIM). In the HP-UX 11i
v3 release, the tool can also be launched using the fsweb command.

The Disks & File Systems tool supports system administration tasks, including management of
logical volumes and volume groups, disk management tasks, and file system tasks. The tool
supports these file systems: Cache File System (CFS), Compact Disc File System (CDFS), Common
Internet File System (CIFS), Hierarchical File System (HFS), Network File System (NFS), and
Veritas File System (VxFS).

In the 11i v3 release, the Disks & File Systems tool completely replaces the File System and Disks
functional area in the HP-UX Systems Administration Manager (SAM) application. HP SAM is
deprecated in the HP-UX 11i v3 (B.11.31) release. HP SMH provides an extended GUI to
manage the Disks and File Systems tasks for HP-UX.

The name of the fsweb bundle is FileSystems. The FileSystems bundle is available on the
Operating Environment DVD and the Applications DVD. When you install an HP-UX 11i v3
Operating Environment, HP recommends that you install FileSystems.


The fsweb Tool on HP-UX 11i v3


Provides significant performance improvements over SAM
Better reliability and ease of use
Improved visualization
Command preview
New fsweb features on HP-UX 11i v3
• New Text User Interface (TUI) replaces the legacy SAM interface
• Directly launch the Disks & File Systems (fsweb) tool with fsweb command
• fsweb supports new mass storage stack
• Provides additional Logical Volume Manager (LVM) support
• Provides Volume Group Distribute and Undistribute functionality for ServiceGuard support
• Provides Disk Operations
– Set and display disk attributes
– Set the view for a mass storage stack
– View disk statistics
– Set the disk device ID
• Supports localized versions of the tool in different European and Asian languages.
Documentation
• Disks and File Systems Online Help
• fsweb (1M), sam (1M) and smh (1M) manpages


The HP-UX 11i v1 Operating Environments do not support the Disks & File Systems (fsweb) tool.
The first release of the Disks & File Systems (fsweb) tool was in the HP-UX 11i v2 December
2005 release. The HP-UX 11i v3 release provides significant performance improvements over
SAM, better reliability, ease of use, improved visualization, command preview, and new
features.

There are several differences between the first release and the HP-UX 11i v3 Enterprise Release.
A new Text User Interface (TUI) replaces the legacy SAM interface. The fsweb command can
directly launch the Disks & File Systems (fsweb) tool. It supports the new mass storage stack. It
provides additional Logical Volume Manager (LVM) support. It provides Volume Group (VG)
Distribute and Undistribute functionality for ServiceGuard support. It provides Disk Operations
such as setting and displaying disk attributes, setting the view for a mass storage stack, viewing
disk statistics, and setting the disk device ID. Finally, it also supports localized versions of the tool
in different European and Asian languages.

For additional information, consult the fsweb(1M), sam(1M), and smh(1M) man pages. Also, you
may access the Disks and File Systems Online Help.


Compatibility Issues with fsweb


In the TUI, the tasks are available under different headings such as
Disks, Volume Groups, Logical Volumes, and File Systems
• Unlike in legacy SAM, user must perform each task separately by
navigating to appropriate heading
– In legacy SAM, user could navigate to Disks and create volume
groups, logical volumes and even file systems
– In new TUI, user must navigate to Volume Groups to create a volume
group, navigate to Logical Volumes to create a logical volume, and so
on
Features unsupported in this release
• Disk Array maintenance
• Hot spare administration
• Converting VG to VxVM disk group
• Replacing Hot Pluggable Disk
• Configuring Swap functionality is not supported


There are a few known compatibility issues with fsweb on this release. In HP-UX 11i v3, a new
Text User Interface (TUI) is provided in place of the legacy SAM. In the TUI, the tasks are
available under different headings such as Disks, Volume Groups, Logical Volumes, and File
Systems. This means that, unlike in the legacy SAM, the user must perform each task separately
by navigating to the appropriate heading. For example, in the legacy SAM, the user could
navigate to Disks and create volume groups, logical volumes and even file systems. In the new
TUI, the user must navigate to Volume Groups to create a volume group, navigate to Logical
Volumes to create a logical volume, and so on.

Additionally, Disk Array maintenance is not supported in the first release of HP-UX 11i v3. This
release of fsweb does not support hot spare administration, converting a VG to a VxVM disk
group, or replacing a Hot Pluggable Disk. Finally, configuring Swap functionality is not
supported.

March 2007 Management-12


HP-UX 11i v3 Delta Simplified Management

SMH TUI View of fsweb Area

13 March 2007

This slide illustrates how the different areas within the Disks and File System functional area are
divided. This screen shot was taken by using the smh command and following it with the f
command to enter the Disks and File Systems area. If you use the fsweb command instead of
smh->f, then you will see the same screen except the second line of the header does not have the
“SMH->” in front of the “Disks and File Systems”.

March 2007 Management-13


HP-UX 11i v3 Delta Simplified Management

SMH GUI View of fsweb Area

14 March 2007

This slide shows the SMH GUI view of the fsweb area. Notice the four tabs for File Systems,
Logical Volumes, Volume Groups, and Disks. In this screen shot, the File Systems area is
displayed.

March 2007 Management-14


HP-UX 11i v3 Delta Simplified Management

Security Attributes Configuration Tool - secweb


Easy-to-use tool for configuring system-wide and per-user values of security
attributes of local and Network Information Service (NIS) users
• Provides both web-based GUI and Text User Interface (TUI)
• Launch the tool from SMH, SIM, or by using the secweb command
Features of secweb
• Configure system-wide values of security attributes from System Defaults tab
• Configure per-user values of security attributes of local users from Local Users tab
• Configure per-user values of security attributes of NIS users from NIS Users tab
• Preview commands that support the GUI actions prior to execution
• View lock information of security attributes of local and NIS users
• Supports the long user name
Documentation
• secweb(1M), sam(1M), and smh(1M) man pages
• Security Attributes Configuration Online Help

15 March 2007

The HP-UX Security Attributes Configuration tool (secweb) is an easy-to-use tool for configuring
system-wide and per-user values of security attributes of local and Network Information Service
(NIS) users. The tool provides both web-based Graphical User Interface (GUI) and Text User
Interface (TUI). You can launch the tool from HP System Management Homepage (SMH) or HP
Systems Insight Manager (SIM), or by using the secweb command.

There are several features of the HP-UX Security Attributes Configuration tool. It is used to
configure system-wide values of security attributes from the System Defaults tab. It can configure
per-user values of security attributes of local users from the Local Users tab. The secweb tool also
can configure per-user values of security attributes of NIS users from the NIS Users tab. It allows
the user to preview commands that support the GUI actions prior to execution. It also allows
users to view lock information of security attributes of local and NIS users. Finally, secweb
supports long user names.

For more information, refer to the secweb(1M), sam(1M), and smh(1M) man pages. There is also
a Security Attributes Configuration Online Help.

March 2007 Management-15


HP-UX 11i v3 Delta Simplified Management

SMH secweb Area

16 March 2007

This slide shows the secweb, or Security Attributes Configuration, entry point from within SMH.
From here, you may click on either Local Users or System Defaults.

March 2007 Management-16


HP-UX 11i v3 Delta Simplified Management

Secweb’s SMH view of Local Users

17 March 2007

This slide shows a view from within the Local Users tab of the security attributes configuration
area. The root user has been clicked on to display that there are no per user exceptions for root.

March 2007 Management-17


HP-UX 11i v3 Delta Simplified Management

XServer Graphics Manager


Xserver configuration integrated into SMH
• Xserver is an intermediary between client applications and local hardware and input
devices
• Graphics Display Manager
– Xserver’s configuration tool
– Available via the HP SMH Functional Area Launcher interface
– Enables graphics administration capabilities in a manner consistent with the overall HP SMH
infrastructure
Graphics Display Manager
• Provides same functionality and features as the “Display” module previously available via
SAM
– Use smh command and type g to enter the Display area
• Choose to enter either Monitor or XServer configuration areas
• Graphics Display Manager uses the same OBAM based module associated with the old
“Display” component in SAM
• Customers can change various capabilities of the available graphics devices and the
associated Xserver
– For example, a user can direct the Xserver to use different visuals, enable/disable Xserver
extensions, configure multi-display options, and change monitor resolutions
• No functions are directly accessible by customer or ISV applications
– Applications are not affected
Man pages
• graphdiag(1), setmon(1), X(1), Xf86(1), Xhp(1), and Xserver(1)

18 March 2007

The Xserver product is a component of the X Window System. The Xserver product acts as an
intermediary between client applications and local hardware and input devices. Xserver’s
configuration tool is available via the HP SMH Functional Area Launcher interface. Xserver’s
integration into HP SMH enables graphics administration capabilities in a manner consistent with
the overall HP SMH infrastructure.

The Graphics Display Manager component, also known as the Xserver configuration tool, is
launched by SMH to configure graphics capabilities on HP-UX 11i v3. This component provides
the same functionality and features as the “Display” module previously available via SAM on
earlier releases of HP-UX. Because the ObAM framework is present in HP-UX 11i v3, the
Graphics Display Manager uses the same module associated with the old “Display” component
in SAM.

This product provides the same behavior and functionality as seen on previous HP-UX releases.
The PA-RISC and Itanium-based architectures offer different graphics devices with differing
capabilities, but those devices are compatible with their behavior on previous HP-UX releases.

Using the Graphics Display Manager on HP-UX 11i v3, customers can change various
capabilities of the available graphics devices and the associated Xserver. For example, a user
can direct the Xserver to use different visuals, enable/disable Xserver extensions, configure multi-
display options, and change monitor resolutions.

Features available in previous HP-UX releases in the “Display” module within SAM continue to
work under the Graphics Display Manager via SMH in HP-UX 11i v3. The Display Manager
does not have any functions directly accessible by customer or ISV applications, so applications
are not affected.

Refer to the graphdiag(1), setmon(1), X(1), Xf86(1), Xhp(1), and Xserver(1) man pages for
further information.

March 2007 Management-18


HP-UX 11i v3 Delta Simplified Management

System Management
EVWeb

19 March 2007

This sub-module covers the Event Viewer and Subscription Manager (EVWeb) in HP-UX 11i v3.

March 2007 Management-19


HP-UX 11i v3 Delta Simplified Management

Event Viewer and Subscription Manager (EVWeb)


Provides tools for viewing events generated on managed HP-UX system
Provides tools for managing subscriptions, or actions on events, for events that can occur
on a single HP-UX system
• Interface for defining action on events occurring on a system
• Facilitates subscribing for WBEM events
Is a WBEM consumer for event archive and email
• Archived events are displayed in the event viewer
Integrates with other SMH web applications
• Improves usability and enhances customer experience
• Consists of CLI and browser based GUI that is integrated with SMH
Has man pages and online help
Has applicable features for a single managed system
• Does not offer a single system view in any cluster environment
Packaged as a part of SysFaultMgmt bundle
• SFMIndicationProvider is not available
– Use EVWEB Event Viewer to view equivalent indications
• HP threshold indications equivalent to indications generated by HA Monitors are supported
– Use EVWeb Event Viewer to view HP threshold indications
Does not include EVWeb Log Viewer

20 March 2007

Event Viewer and Subscription Manager Tool (EVWeb) is new to the HP-UX 11i v3 Release. EVWeb
provides tools for viewing events and managing subscriptions, or actions on events, for the events that can
occur on a single HP-UX system. It is also a WBEM consumer for event archive and email.

EVWeb provides an interface to define action on events occurring on a system. It also integrates with
other SMH web applications, so as to improve usability and enhance customer experience. EVWeb
consists of a CLI and a browser based GUI that is integrated with SMH. It has an event archive, man
pages, and online help.

EVWeb facilitates subscribing for WBEM events and viewing the events generated on the managed
HP-UX system.

Using the event subscription administration facility, a user can subscribe to events that can occur on a
system; using the event viewer, the user can view the events that have occurred on the system. Events
can be archived in the event archive, and these archived events are displayed in the event viewer.
Both of these features apply to a single managed system, but do not offer a single-system view in
any cluster environment.

EVWeb is integrated with SMH, the single-system manageability portal, and hence provides a
browser-based GUI to the user. It also provides a CLI interface that works independently of SMH
and is available locally on every HP-UX 11i v3 server.

EVWeb also provides an interface for other SMH web applications to seamlessly integrate the EVWeb
functionality into their application.

EVWeb is packaged as a part of SysFaultMgmt.

EVWeb on HP-UX 11i v3 does not support the Log Viewer or Throttle Configuration functionality.

March 2007 Management-20


HP-UX 11i v3 Delta Simplified Management

EVWEB Architecture
[Architecture diagram: on the client, a web browser (through SMH) and a shell prompt connect to
the server. On the server, the EVWEB GUI plug-in and the EVWEB CLI sit on top of the EVWEB
command library (Log Viewer, Subscription Management, Event Viewer), backed by DBLib and the
Event Archive database. A CIM/XML handler feeds the Event Archive and Email consumers; HP
WBEM Services (CIMOM) hosts the SysFaultMgmt providers (SFM Indication Provider,
ThrottlingConfig Provider, EMS Wrapper Provider, and others), and the Email consumer delivers
mail through an SMTP server to the user's mailbox. The components marked in the figure,
including the Log Viewer and its Log DB/Miner, are not in the initial 11.31 release.]
21 March 2007

This slide shows the EVWeb architecture. EVWeb may be used from a GUI or a CLI on the
client. The interface to the server is through SMH for the GUI and through the EVWeb CLI on the
server. EVWeb consists of the EVWeb GUI plug-in and the EVWeb CLI. Both of these interface
with components of the EVWeb command library.

The command library components include the log viewer, event viewer, and subscription
management. The log and event viewers are connected to database components external to
EVWeb and to the EVWeb event archive database. The subscription management uses WBEM
Services. There are also System Fault Management providers that interface with WBEM
CIMOM. The information from the providers goes through the CIM handler to the EVWeb event
consumers. The Event Archive Consumer stores the event into the Event Archive Database. And,
the Email consumer sends an email about the event.

March 2007 Management-21


HP-UX 11i v3 Delta Simplified Management

EVWeb Integration with Management Applications


Evweb integrates with SMH directly
Evweb integrates with SIM through SMH
• HP SIM has links to SMH of the systems under it
– This way evweb can be reached from SIM

22 March 2007

Evweb integrates with SMH directly. Evweb integrates with SIM through SMH. For example, HP
SIM has links to SMH of the systems under it. This way evweb can be reached from SIM.

March 2007 Management-22


HP-UX 11i v3 Delta Simplified Management

EVWeb GUI is integrated with HP SMH – Subscription


Administration
Access subscription administration via Tools tab
Allows users to view, create, delete and modify new and
already existing subscriptions on the local system
• Use either GUI or command line
Subscription is characterized by criteria and destination
• The default subscription that is readily available is flagged as an
HP Advised subscription
• Destination for such subscriptions cannot be changed
– It can be modified to have more destinations, like syslog and
an email address
– The syslog destination is newly allowed in HP-UX 11i v3

23 March 2007

The EVWeb Subscription administrator allows users to view, create, delete, and modify new and
already existing subscriptions on the local system. All of these operations can be performed through
both the GUI and the command line. A subscription is characterized by criteria and a destination. The
default subscription that is readily available is flagged as an HP Advised subscription. Though the
destination for such subscriptions cannot be changed, it can be modified to have more destinations,
like syslog and an email address. The syslog destination is newly allowed in HP-UX 11i v3. An
administrator can create a subscription that applies to one device, or define the subscription criteria
so that it applies to all devices. Similarly, if the administrator wants the matching event to reach more
than one destination, all such destinations can be defined in one subscription creation request. To
learn more about how to create and modify subscriptions, refer to the section on Feature Highlights
of SFM.

March 2007 Management-23


HP-UX 11i v3 Delta Simplified Management

SMH -> Tools -> Evweb -> Subscription Admin

24 March 2007

From within SMH, click on Tools. From within Tools, you can find the Evweb Subscription
Administration entry point.

March 2007 Management-24


HP-UX 11i v3 Delta Simplified Management

Evweb Subscription Administration

25 March 2007

The only current subscription is the HP_General Filter@1_V1 on the system shown.

March 2007 Management-25


HP-UX 11i v3 Delta Simplified Management

EVWeb GUI integrated with HP SMH – Event Viewer


Access event viewer via Logs tab
Allows user to view, search and delete events from local
event archive
Wide range of intuitive and user friendly search criteria is
provided
• Event age, date-time, hardware device, event ID, text
• A combination of one or more such criteria can be
specified at a time to narrow down the search results
– /opt/sfm/bin/evweb eventviewer -L -e eq:4 -a 10:dd -s
desc:eventid

26 March 2007

The EVWEB GUI is integrated with HP SMH, where event viewer and subscription administration
can be accessed via Logs and Tools tab respectively.

The EVWeb event viewer allows user to view, search and delete events from local event archive.
A wide range of intuitive and user friendly search criteria is provided that includes event age,
date-time, hardware device, event ID and text. A combination of one or more such criteria can
be specified at a time to narrow down the search results, as shown by the command line
example below: /opt/sfm/bin/evweb eventviewer -L -e eq:4 -a 10:dd -s desc:eventid.

March 2007 Management-26


HP-UX 11i v3 Delta Simplified Management

SMH Logs Screen

27 March 2007

This is a screen shot of the Logs part of SMH. In the upper left, you will see the Evweb Event
Viewer.

March 2007 Management-27


HP-UX 11i v3 Delta Simplified Management

Evweb Event Viewer in SMH GUI

28 March 2007

This system is squeaky clean! This is a screen shot of the Evweb Event Viewer in the Logs section
of SMH. You can see that there are zero events of each of the types.

March 2007 Management-28


HP-UX 11i v3 Delta Simplified Management

EVWeb Event Viewer Advanced Search

29 March 2007

This is a screen shot of the EVWeb Event Viewer Advanced Search page.

March 2007 Management-29


HP-UX 11i v3 Delta Simplified Management

EVWeb Event Viewer Search by Event Category

30 March 2007

There is an Event Category drop down menu. As you can see, there are several event categories
to choose from if you would like to narrow down your search.

March 2007 Management-30


HP-UX 11i v3 Delta Simplified Management

EVWEB CLI – Command Summary


evweb list
• List the event categories in these classes
– HP_AlertIndication, HP_EVMEventIndication, HP_ThresholdIndication
evweb subscribe
• Create, modify, or delete subscriptions
• List and view the details of subscriptions
evweb eventviewer
• View the events summary list
• View event details
• Search events
• Delete events
See evweb(1), evweb_list(1), evweb_subscribe(1), and
evweb_eventviewer(1) man pages
• Note the underscore in the man page and not in the command!

31 March 2007

EVWeb has a command line interface, too.

The evweb list command lists the event categories in the HP_AlertIndication,
HP_EVMEventIndication, and HP_ThresholdIndication classes. The evweb subscribe is used to
create, modify, or delete subscriptions. It can also be used to list and view the details of
subscriptions. The evweb eventviewer command is used to view the events summary list, view
event details, search events, and delete events.

Note that the evweb logviewer is not supported in HP-UX 11i v3.

See the evweb(1), evweb_list(1), evweb_subscribe(1), and evweb_eventviewer(1) man pages. Note
the underscore in the man page names but not in the commands!
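The subcommands above can be illustrated with a short session on an HP-UX 11i v3 system. Only invocations shown elsewhere in this module are used; the meanings given for the eventviewer flags are inferred from the earlier search example and should be verified against the evweb_eventviewer(1) man page:

```shell
# Full list of event categories (shown on the next slide):
evweb list -v

# Manage subscriptions; see evweb_subscribe(1) for the
# create/modify/delete/list options:
evweb subscribe

# Search the local event archive with combined criteria; the -L, -e, -a,
# and -s flags are taken from the earlier search example:
/opt/sfm/bin/evweb eventviewer -L -e eq:4 -a 10:dd -s desc:eventid
```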

March 2007 Management-31


HP-UX 11i v3 Delta Simplified Management

Evweb list output

32 March 2007

Type the evweb list -v command to get a full list of the event categories.

March 2007 Management-32


HP-UX 11i v3 Delta Simplified Management

System Management
ncweb and nwmgr

33 March 2007

This sub-module covers the ncweb and nwmgr networking tools in HP-UX 11i v3.

March 2007 Management-33


HP-UX 11i v3 Delta Simplified Management

Network Interfaces Configuration and Network


Services Configuration (ncweb) (1 of 2)
ncweb is an enhanced version of SAM-NCC
• Launch ncweb from the HP System Management Homepage
(SMH)
• SAM-NCC is not available on HP-UX 11i v3 and onwards
ncweb is a new web-based network interfaces
configuration tool
• Configure various network interfaces
– Auto Port Aggregation (APA), Network Interface Cards (NIC),
Virtual LANs (VLAN), and Remote Direct Memory Access
(RDMA)
• Configure various network services
– Bootable devices, DHCPv6, DNS, Hosts, Network News,
Networked File Systems, X.25, SNA, System Access and Time
ncweb has a web-based GUI and a TUI
34 March 2007

The System Administration Manager Networking and Communications functional area, SAM-NCC, is
not available on HP-UX 11i v3 and onwards. The Network Interfaces Configuration tool and the
Network Services Configuration tool, together called ncweb for short, form an enhanced
replacement for SAM-NCC. You can launch ncweb from the HP System Management Homepage (SMH).

Ncweb is a new web-based network interfaces configuration tool to configure various network
interfaces, including Auto Port Aggregation (APA), Network Interface Cards (NIC), Virtual LANs
(VLAN), and Remote Direct Memory Access (RDMA). It is also used to configure various network
services, like bootable devices, DHCPv6, DNS, Hosts, Network News, Networked File Systems,
X.25, SNA, System Access and Time, on an HP-UX system. Those ncweb services previously
supported by SAM-NCC that are still X-based are launched through SMH.

Ncweb has a web-based graphical user interface and a terminal user interface.

March 2007 Management-34


HP-UX 11i v3 Delta Simplified Management

Network Interfaces Configuration and Network


Services Configuration (ncweb) (2 of 2)
SMH plug-ins are available after installing this product
• Network Interfaces Configuration tool
– Configure APA, NIC, RDMA, VLAN, and X.25 interfaces
• Network Services Configuration tool
– Configure various network services
• Tool to share and unshare local file systems from a Network Services
Configuration HP SMH plug-in
• Presents new look and feel for the Network Interfaces Configuration TUI
and Share/Unshare File Systems TUI
ncweb supports
• IPv6 configuration
– Configuring IPv6 address over VLAN and APAs
• Creating Fail over Groups in APA subsection
• Configuring Default, Host, and Net Routes
ncweb provides
• Preview button to view the command line equivalent for a task
• GUI and TUI Online Help
35 March 2007

There are two HP System Management Homepage (SMH) plug-ins available after installing this
product. The Network Interfaces Configuration tool is for configuring APA, NIC, RDMA, VLAN,
and X.25 interfaces. And, the Network Services Configuration tool is for configuring various
network services. There is a new web-based tool to share and unshare local file systems from a
Network Services Configuration HP SMH plug-in. (This functionality was previously called Export
Local File Systems.) It presents a new look and feel for the Network Interfaces Configuration
terminal user interface and the Share/Unshare File Systems terminal user interface. The NFS client
mounts all the configured file systems and, at boot time, enables mounting of all the configured
file systems.

Ncweb supports IPv6 configuration, configuring IPv6 addresses over VLANs and APAs, creating
failover groups in the APA subsection, and configuring Default, Host, and Net Routes.

Ncweb provides a Preview button to view the command line equivalent for a task. And, GUI and
TUI Online Help is integrated with the tool.

March 2007 Management-35


HP-UX 11i v3 Delta Simplified Management

Network Services Configuration - ncweb

36 March 2007

This is a screen shot of the Network Services Configuration area of ncweb in the SMH GUI. In
addition to the configuration areas displayed, a Time area is visible by scrolling down.

March 2007 Management-36


HP-UX 11i v3 Delta Simplified Management

Network Interfaces Configuration - ncweb

37 March 2007

This slide shows the Network Interfaces Configuration area for ncweb within SMH. From here,
you may click on Network Interface Cards or Virtual LANs to configure those types of interfaces.

March 2007 Management-37


HP-UX 11i v3 Delta Simplified Management

NIC area of ncweb in SMH

38 March 2007

This is a screen shot from within the NIC area. When lan0 is selected (click in circle on left), the
details of the lan0 interface are displayed. Also, on the right, additional actions become
available.

March 2007 Management-38


HP-UX 11i v3 Delta Simplified Management

Unified Command Line Interface (UCLI) Value


Improves the Total Customer Experience (TCE) for I/O CLIs
• Provides a uniform CLI syntax
• Provides a common method to execute common operations
• Provides a consistent naming convention
• Provides consistent help text and usage information
Reduces development and support costs for I/O CLIs
Conforms to HP defined specifications

39 March 2007

The Unified Command Line Interface (UCLI) improves the Total Customer Experience (TCE) for
I/O and networking CLIs. It provides a uniform CLI syntax, a common method to execute
common operations, a consistent naming convention, and consistent help text and usage
information. Additionally, the UCLI reduces development and support costs for the CLIs, and it
conforms to HP-defined specifications.

March 2007 Management-39


HP-UX 11i v3 Delta Simplified Management

Unified Command Line Interface on HP-UX 11i v3


Network interface management command – nwmgr
• Single tool for performing all network interface-related tasks
• Based on the Unified I/O CLI (UCLI) specifications
• Provides unified CLI management of NW interfaces
– LAN physical interfaces
• Ethernet NICs
– Logical interfaces
• APA and VLAN
• Used to manage IB-based network interfaces
– All RDMA-based interfaces
• IB, RNICs
Ability to make persistent configuration changes
Scriptable output
40 March 2007

The network interface management command, referred to as nwmgr, is based on the Unified
I/O CLI (UCLI) specifications. The command is used to manage LAN-based and IB-based
network interfaces. The LAN-based interfaces include LAN physical interfaces and logical
interfaces such as APA and VLAN. The IB interfaces include all RDMA-based interfaces.
Customers can use this single tool to perform all tasks related to network interfaces.

March 2007 Management-40


HP-UX 11i v3 Delta Simplified Management

Network Interface Management Command Line


Interface (nwmgr) – New on HP-UX 11i v3
Network Interface Management Command Line Interface - nwmgr
• Based on the Unified I/O CLI (UCLI) specifications
• Provides unified management of LAN (e.g., Ethernet NICs, APA, VLANs)
and RDMA (e.g., IB and RNICs)
nwmgr manages LAN-based and IB-based network interfaces
• LAN-based interfaces include LAN physical interfaces and logical
interfaces such as APA and VLAN
• IB interfaces including all RDMA-based interfaces
• Supports functionalities of lanadmin, lanscan, linkloop, lan*conf, and itutil
commands
– These commands are deprecated in HP-UX 11i v3
– Users should migrate to nwmgr as it will replace them in future release
Customers can use this single tool for performing all network interface-
related tasks

41 March 2007

There is a new Network Interface Management Command Line Interface. The nwmgr command
is used to manage LAN-based and IB-based network interfaces. The LAN-based interfaces
include LAN physical interfaces and logical interfaces (APA, VLAN). The IB interfaces include all
RDMA-based interfaces.

The network interface management command referred to as nwmgr (version 1.00.00), is based
on the Unified I/O CLI (UCLI) HP defined specifications. Customers can use this single tool for
performing all network interface-related tasks.

The nwmgr command includes support for the unified management of LAN (e.g., Ethernet NICs,
APA, and VLANs) and RDMA (e.g., IB and RNICs).

It supports the functionalities of the lanadmin, lanscan, linkloop, lan*conf, and itutil commands.
These commands are being deprecated in HP-UX 11i v3. Users should start getting familiar with
nwmgr as it will replace them in a future HP-UX release.

March 2007 Management-41


HP-UX 11i v3 Delta Simplified Management

The nwmgr Command Features


Supports the functionalities of and replaces
• Existing LAN commands
– lanadmin, lanscan, linkloop
• APA-specific commands
– lanqueryconf, lancheckconf, lanapplyconf, landeleteconf, and apamonitor
• Infiniband command
– itutil
Supports updating saved values in the configuration files
Supports modifying the current attribute values
Supports creation and deletion of logical interfaces
Supports displaying information about the interface
Has an operation preview
Provides scriptable and readable output formats
Supports specifying multiple attributes in a single command
Provides a statistics monitor
Refer to the nwmgr(1) man page
42 March 2007

The nwmgr interface is intended to replace the existing LAN commands such as lanadmin,
lanscan, and linkloop. It also replaces the APA-specific commands such as lanqueryconf,
lancheckconf, lanapplyconf, landeleteconf, and apamonitor. Finally, it also replaces the
Infiniband utility, itutil.

Additionally, nwmgr supports updating saved values in the configuration files. It supports
modifying the current attribute values. It allows for creating and deleting logical interfaces.
Nwmgr supports displaying information about the interfaces. It provides an operation preview.

Nwmgr provides scriptable and readable output formats. It has support for specifying multiple
attributes in a single command. Finally, it provides a statistics monitor.

For more information, refer to the nwmgr(1) man page.
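The nwmgr invocations documented in this module can be collected into a short session. These run only on an HP-UX 11i v3 system, output varies by configuration, and any flags beyond these should be checked against nwmgr(1):

```shell
# Show the subsystems supported by nwmgr on this system:
nwmgr -h -S all

# With no arguments, list all network interfaces: physical NICs, virtual
# interfaces (VLANs, APA aggregates and failover groups), and RDMA interfaces:
nwmgr

# Verbose (-v) details for one class instance (-c), here lan0; this matches
# the information ncweb's NIC area shows in SMH:
nwmgr -v -c lan0
```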

March 2007 Management-42


HP-UX 11i v3 Delta Simplified Management

nwmgr Architecture
[Architecture diagram: in user space, nwmgr and the LAN Services work with LAN shared libraries
(HP drivers, IHV drivers, VLAN, APA) and LAN configuration files, and with the nwmgr RDMA
library and the IT API. In the kernel, LAN operations pass through the HP or native DLPI layer,
and IB/RDMA operations pass through the RDMA stack.]
43 March 2007

This slide illustrates the nwmgr architecture. At the center of the drawing are nwmgr and the
LAN Services. There are several LAN shared libraries. These include HP drivers, IHV
drivers, VLAN and APA. There are also LAN related configuration files. nwmgr interfaces with
these LAN services to manage LAN operations going through the HP or native DLPI routines in
the HP-UX kernel. nwmgr interfaces with the RDMA library and the IT API to access the RDMA
stack in the kernel to manage IB/RDMA operations.

March 2007 Management-43


HP-UX 11i v3 Delta Simplified Management

nwmgr on HP-UX 11i v3

44 March 2007

This slide illustrates using nwmgr. Using nwmgr -h -S all displays the subsystems supported by
nwmgr on the system. The nwmgr command without any arguments displays all the network
interfaces in the system, including physical LAN interfaces (NICs), virtual LAN interfaces (VLANs
and APA aggregates and failover groups), and RDMA interfaces.

March 2007 Management-44


HP-UX 11i v3 Delta Simplified Management

Using nwmgr to get details on a LAN

45 March 2007

Here is an example of using nwmgr to get verbose (-v) details on the class instance (-c) lan0.
Compare this output to that shown in the screen shot of the “NIC area of ncweb in SMH” slide
above where the lan0 was clicked. All of the information should match.

March 2007 Management-45


HP-UX 11i v3 Delta Simplified Management

Customer Benefits of ncweb and nwmgr


New management framework
• New nwmgr command based on UCLI
• New SMH plug-in NCWeb (GUI + TUI)
Provides a uniform CLI syntax
• Common method to execute common operations
• Consistent naming convention
• Consistent help text and usage information
nwmgr and NCWeb
• Unify management of LAN, IB, RNICs
• Unify management of physical and virtual interfaces
nwmgr provides readable and scriptable output
NCWeb provides CLI Preview
46 March 2007

There are many benefits to the new ncweb and nwmgr tools. HP has provided customers with a
new network management framework.

The new nwmgr command based on the UCLI provides a uniform CLI syntax. It gives customers a
common method to execute common operations. There is a consistent naming convention, and
consistent help text and usage information. nwmgr also provides readable and scriptable output.

The new SMH plug-in NCWeb has both a GUI and a TUI. NCWeb provides a CLI Preview.

Both nwmgr and NCWeb unify management of LAN, IB, RNICs and unify management of
physical and virtual interfaces.

March 2007 Management-46


HP-UX 11i v3 Delta Simplified Management

System Management
SAM Changes in HP-UX 11i v3

47 March 2007

This sub-module covers changes to SAM in HP-UX 11i v3.

March 2007 Management-47


HP-UX 11i v3 Delta Simplified Management

SAM and SMH


SAM merely launches SMH in HP-UX 11i v3
• Displays deprecation message
• Runs SMH automatically
SMH (System Management Homepage)
• Replaces SAM going forward
• Launches GUI or TUI from command line
– Which one is launched depends on whether the DISPLAY variable is set
and whether a usable web browser is found
• If DISPLAY is set and web browser is found, GUI is launched in web
browser
• Otherwise TUI is launched
• /usr/sbin/smh
• Helps new HP-UX customers
– Single management tool strategy for both web and TUI
• Helps to migrate customers from SAM to SMH

48 March 2007

SAM continues to exist in HP-UX 11i v3; however, when used, it displays a deprecation
message. Then it launches SMH automatically. So, while you may type “sam” at the prompt, you
are really using smh. It is better to type “smh”.

SMH is an acronym for System Management Homepage. SMH replaces SAM going forward. It
is located at /usr/sbin/smh. It launches either the GUI web interface or the TUI from the
command line. Whether the GUI or the TUI is launched depends on whether the DISPLAY variable is
set and whether a usable web browser is found. If the DISPLAY variable is set and a web browser is
found, the GUI is launched in a web browser; otherwise, the TUI is launched.

SMH helps new HP-UX customers by providing a single management tool strategy for both web
and TUI access. It also helps migrate customers from SAM to SMH.
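The launch behavior described above can be sketched in portable shell. This is an
illustration of the wrapper logic only, not the real /usr/sbin/sam code; the deprecation
wording and the browser probe are assumptions (the real tool checks for any usable browser).

```shell
# Illustrative sketch of the 11i v3 sam wrapper: print a deprecation
# note, then hand off to SMH as a GUI or TUI depending on the environment.
sam_wrapper() {
  echo "NOTE: sam is deprecated on HP-UX 11i v3; starting smh." >&2
  if [ -n "$DISPLAY" ] && command -v mozilla >/dev/null 2>&1; then
    echo "launching SMH GUI in web browser"
    # exec /usr/sbin/smh -w        # real wrapper hands control to SMH here
  else
    echo "launching SMH TUI"
    # exec /usr/sbin/smh
  fi
}
DISPLAY=""
sam_wrapper
```

With DISPLAY unset, the sketch takes the TUI branch, matching the behavior described above.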


Changes in HP-UX 11i v3 SAM


SMH Functional Area Launcher (FAL) is CMENU based
• Earlier SAM was using OBAM FAL
Most SMH FAL applications such as File Systems and Users & Groups
are converted to web-based tools
• These applications do not support X11-based Graphical User Interface
OBAM is supported on HP-UX 11i v3
• Some Legacy applications still use it
X11-based Graphical User Interface is supported on HP-UX 11i v3
• Some legacy applications still use X11-based GUI
– Terminals & Modems and Trusted Systems
Some functional areas are obsolete based on
• Analysis of customer data on usage
• Alternate easy ways to perform the same tasks in command line
• Migrate customers toward SMH and HP SIM
49 March 2007

The main objective of SAM on HP-UX 11i v3 is to replace the existing SAM command and FAL
and to give a single point of entry for both the SMH GUI and the System Management TUI. Another
objective was to integrate the OBAM-based TUI and GUI with the sysmgmt cmenu and to provide a
launch point for SMH applications.

The SMH Functional Area Launcher (FAL) is CMENU based, whereas the earlier SAM was using
OBAM FAL.

Most SMH FAL applications, such as File Systems and Users & Groups, are converted to web-
based tools. These applications do not support X11-based Graphical User Interface.

OBAM is supported on HP-UX 11i v3 because some Legacy applications still use it.

The X11-based Graphical User Interface is supported on HP-UX 11i v3 because some legacy
applications still use it. These include Terminals & Modems and Trusted Systems.

Some functional areas are obsolete based on the analysis of customer data on usage. Another
factor contributing to an obsolescence decision was the availability of alternate easy ways to
perform the same tasks from the command line. Finally, obsolescence migrates customers toward
SMH and HP SIM.


HP-UX 11i v3 SMH/SAM Contents, 1 of 2


Application Name              Graphical User Interface            Terminal User Interface

Auditing & Security           X11-based (no change from           OBAM (no change from
                              previous releases)                  previous releases)

Accounts for Users & Groups   Web-launch (UGweb)                  CMENU framework

Disks & File Systems          Web-launch (FSweb)                  CMENU framework

Peripheral Devices            Web-launch (PDweb)                  CMENU framework

Kernel Configuration          Web-launch (KCweb)                  CMENU framework

Networking & Communication    Web-launch (NCweb) for network      CMENU framework for network
                              interfaces configuration; X11-      interfaces configuration; OBAM-
                              based for network services          based for network services
                              configuration                       configuration

50 March 2007

This slide and the next show the applications supported by SAM (SMH) on HP-UX 11i v3. This
two-slide table also provides notes on the GUI and TUI interfaces for each supported application.


HP-UX 11i v3 SMH/SAM Contents, 2 of 2


Application Name                    Graphical User Interface        Terminal User Interface

Printers & Plotters                 X11-based (no change from       OBAM (no change from
                                    previous releases)              previous releases)

Security Attributes Configuration   Web-launch (Secweb)             CMENU framework

Resource Manager                    X11-based (no change from       OBAM (no change from
                                    previous releases)              previous releases)

Software Management                 X11-based (no change from       Uses both OBAM & CMENU
                                    previous releases)              framework

Partition Manager                   Web-launch (parmgr)             N/A

Display                             X11-based (no change from       OBAM (no change from
                                    previous releases)              previous releases)
51 March 2007


New features in HP-UX 11i v3 SMH


• Ease of operation
– Navigation is through simple menus
• Single Key option in SMH FAL to launch individual applications
– For example: k – will launch Kernel Configuration tool
• Option to launch individual Web-applications from SMH FAL
– For example: w – will launch the web page of the application
highlighted
• Command line options to launch the functional areas
– For example: ugweb to launch Users & Groups
– Another example: pdweb launches peripheral devices tool
• Command preview provided to show Sys Admins the command being
executed for a particular operation
• Significant performance improvements
– Backend is light-weight
– Commands are enhanced
• Ability to launch man pages and help files online
52 March 2007

There are several features to note as you compare SAM to SMH on HP-UX 11i v3.

SMH provides ease of operation, allowing navigation through simple menus. There is a single-key
option in the SMH FAL to launch individual applications; for example, pressing k launches the
Kernel Configuration tool. There is also an option to launch individual web applications from the
SMH FAL; for example, pressing w launches the web page of the highlighted application.

There are command line options to launch the functional areas. For example, ugweb launches
Users & Groups, and pdweb launches the Peripheral Devices tool.

A command preview shows System Administrators the command being executed for a particular
operation.

SMH provides significant performance improvements. The backend is light-weight. And, many of
the commands are enhanced.

Finally, SMH provides the ability to launch man pages and help files online.
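The single-key dispatch described above can be sketched as a small case statement. Only the
k (Kernel Configuration) and w (web page) bindings come from the text; the function itself is
an illustration, not SMH source code.

```shell
# Sketch of the SMH FAL single-key dispatch (illustrative only).
fal_key() {
  case "$1" in
    k) echo "launching Kernel Configuration (kcweb)" ;;
    w) echo "opening web page of the highlighted application" ;;
    *) echo "no action bound to key '$1'" ;;
  esac
}
fal_key k
```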


Running SAM on HP-UX 11i v3

53 March 2007

Executing SAM as superuser, or root, will launch the new System Management Homepage and
will print the deprecation message. If it runs into issues, for example it cannot find a usable
browser or the DISPLAY variable is set incorrectly, you will also get a helpful error message that
gives you the URL to paste into a browser window.


Running SAM on HP-UX 11i v3 with Login

54 March 2007

When you run SAM (or SMH) on HP-UX 11i v3 and your DISPLAY environment variable is set
accurately, SAM/SMH will look for a usable running browser.

Note that if you execute the command in user mode, you will be presented with a login screen
first.

Once logged in, you will arrive at the SMH Home page, which is displayed in this slide. You
may then click on “Tools” to go to the Tools page, which is displayed in the next slide.

Note: If there are any issues with opening a browser for you, SAM/SMH will tell you the exact
URL that you can type in a browser window that you open. This URL is http://<machine
name>:2301/?chppath=Tools. In this case, you will also be presented with a login screen but
will then be taken straight to the Tools menu. (There is no need to click on “Tools” as above.)


Running SAM on HP-UX 11i v3 with DISPLAY as root

55 March 2007

When you run SAM (or SMH) on HP-UX 11i v3 and your DISPLAY environment variable is set
accurately, SAM/SMH will look for a usable running browser. If you are root and everything
went fine, you will be brought immediately to the SMH Tools page, which is displayed above.


Command Line Options Differences


On HP-UX 11i v2
• /usr/sbin/sam [-display display] [-f login] [-r]
• -display → If not specified, the value of the DISPLAY environment variable is checked
• -f → Available for the root user only
• -r → Restricted SAM builder
On HP-UX 11i v3
• /usr/sbin/sam [ -f <login> | -r ]
– -f is available for root user only
– -r is for restricted SAM/SMH user; restricted SAM builder is the rsam functionality
• /usr/sbin/smh [ -F | -w | -r ]
– -w
• Launch the SMH web page with Security Warning
• First available browser on the system is used to launch SMH
– -F
• Force flag and bypass security risk message
• Launch the SMH web application with no security warnings
• By default, it will launch SMH
– -r is for restricted SAM/SMH user
By default, if no options are specified, SMH TUI will be run

56 March 2007

This slide summarizes the command line options differences between SAM on HP-UX 11i v2 and
SAM/SMH on HP-UX 11i v3.


Additional SAM Notes


• If DISPLAY environment variable is not set, TUI will launch
• Executing with locales other than C results in warning
message about potential data corruption
– Set LANG environment variable to C
• Privileges set from TUI do not apply to GUI
– Use of –r
– GUI has different way of setting privileges

57 March 2007

There are a couple of additional facts to note.

If your DISPLAY environment variable is not set, then the TUI will launch. This is the same as on
previous releases.

Executing with locales other than C results in a warning message about potential data corruption.
You should set your LANG environment variable to C.
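For example, the locale can be forced to C in the shell before launching the tool (standard
shell syntax; nothing here is HP-UX specific):

```shell
# Force the C locale before starting SAM/SMH, as recommended above.
LANG=C
export LANG
echo "locale set to $LANG"
```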

Since the GUI has a different way of setting privileges, privileges set from the TUI do not apply
to the GUI. Note this in regards to the use of the –r option.


Obsolete SAM/SMH Functional Areas in HP-UX 11i v3

Obsolete Functional Area                     How to use the functionality on HP-UX 11i v3

Backup & Recovery                            Use the fbackup(1m) and frecover(1m) commands

Run SAM on remote systems                    Use HP SIM

Process Management                           Use the ps(1), top(1), nice(1), and crontab(1)
                                             commands

Terminals & Modems (available through        Use command line options: edit the /etc/inittab
the Peripheral Devices FAL)                  file and use the /usr/sam/lbin/gettty commands

UPS (available through the Peripheral        Use command line options: edit the /etc/inittab
Devices FAL)                                 file and use the /usr/sam/lbin/gettty commands

Routine tasks                                Use the cron(1m) and top(1) commands

Performance monitors                         Partly replaced in HP SIM; use the vmstat(1),
                                             sar(1m), iostat(1m), and ipcs(1) commands

Note: Refer to sam(1m) and smh(1m) man pages for details


58 March 2007

This table summarizes the obsolete SAM/SMH functional areas in HP-UX 11i v3.

It also lists the HP-recommended ways to access the same functionality.
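A few of the command-line replacements named in the table are standard and can be
demonstrated directly (fbackup/frecover and HP SIM are HP-UX specific and are not shown):

```shell
# Portable examples of the replacement commands named above.
ps -ef | head -5                       # Process Management: list processes
crontab -l 2>/dev/null \
  || echo "no crontab for this user"   # Routine tasks: inspect cron entries
```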


System Management
Web Based Enterprise Management (WBEM)

59 March 2007

This sub-module provides an overview of WBEM on HP-UX 11i v3. It also covers WBEM
providers.


HP WBEM Services on HP-UX 11i v3


Platform and resource independent Distributed Management Task Force (DMTF) standard
• Defines both a common model, or description, and protocol, or interface, for monitoring
and controlling a diverse set of resources.
HP WBEM Services for HP-UX version A.02.05.01 is the HP-UX implementation of the
DMTF WBEM standard for release on HP-UX 11i v3
• Based on The Open Group (TOG) Pegasus Open Source Software (OSS) project
• OpenPegasus 2.5 source base and CIM Schema 2.9
• http://www.openpegasus.org/
Allows customers to manage their HP-UX systems
• Provide integrated solutions that optimize a customer’s infrastructure for greater operational
efficiency
Key differences on WBEM on HP-UX 11i v3 from previous HP-UX 11i releases
• Association providers
• Internationalization support for CIM operations
• CIM Schema Upgrade
• Out-of-Process Support
• Run-as-Requestor Support
• Certificate Based Authentication
• Email and Syslog indication handlers

60 March 2007

Web-Based Enterprise Management (WBEM) is a platform and resource independent Distributed
Management Task Force (DMTF) standard that defines both a common model, or description,
and protocol, or interface, for monitoring and controlling a diverse set of resources. (See
http://www.dmtf.org/ for information on DMTF.)

The HP WBEM Services for HP-UX version A.02.05.01 is the HP-UX implementation of the DMTF
WBEM standard for release on HP-UX 11i v3. This product is based on The Open Group (TOG)
Pegasus Open Source Software (OSS) project. (See http://www.openpegasus.org/ for more
information.)

HP WBEM Services for HP-UX version A.02.05 is a major update to the HP WBEM Services for
HP-UX version A.02.00 currently released with HP-UX 11i v1 and HP-UX 11i v2. HP WBEM
Services for HP-UX version A.02.05 is based on OpenPegasus 2.5 source base and CIM
Schema 2.9, whereas the HP WBEM Services for HP-UX version A.02.00 released with the
September 2005 11i v1 OE is based on OpenPegasus 2.3.1 and 2.4.2 source bases and CIM
Schema 2.7.

The following are key differentiators from WBEM on earlier HP-UX releases to WBEM Services
on HP-UX 11i v3.
• Association providers
• Internationalization support for CIM operations
• CIM Schema Upgrade
• Out-of-Process Support
• Run-as-Requestor Support
• Certificate Based Authentication
• Email and Syslog indication handlers


HP WBEM Services 2.05 on HP-UX 11i v3


New default that has the WBEM Provider run as the user who issued
the management request
• “Run-As-Requestor”
• Prior to this release, all WBEM Providers executed in a privileged context.
• Could break backward compatibility for certain types of Providers.
To remedy this situation, developers have the following alternatives.
• To continue running their Provider in a privileged context
– Explicitly register their Provider to run in a “Privileged User” context
• Configuration file change only
• To support running in the “Requestor” context
– Write Provider to allow multiple instances of the Provider to run at the
same time in different user contexts
– In some cases, the Provider may need to coordinate the actions of the
Provider instances
– Only perform privileged operations if those operations are only
expected/required to succeed when invoked by a user who already
has the necessary privileges

61 March 2007

Starting with the A.02.05 release, HP WBEM Services for HP-UX supports an option that allows
a WBEM Provider (i.e., the management instrumentation) to run as the user who issued the
management request. Prior to this release, all WBEM Providers executed in a privileged context.
With the release of HP-UX 11i v3, WBEM Providers will, by default, be invoked in the context of
the user requesting an operation (i.e., “Run-As-Requestor”). This default setting can break
backward compatibility for certain types of Providers. This means that existing Providers that run
in the user context of the CIM Server may break.

To remedy this situation, developers have the following alternatives.

To continue running their Provider in a privileged context, developers will need to explicitly
register their Provider to run in a “Privileged User” context. This is a configuration file change
and should not require a change to the Provider library. That is, developers will not be required
to recompile/relink their providers to continue running in a privileged context. To register their
Provider to run in a “Privileged User ” context, developers need to modify the
PG_ProviderModule instance definition in their Provider Registration mof by changing the
InterfaceVersion from “2.1.0” to “2.5.0” and adding the new property UserContext = 2.

To support running in the “Requestor” context, developers need to ensure that their Provider has
been written to allow multiple instances of the Provider to run at the same time in different user
contexts. In some cases, the Provider may need to coordinate the actions of the Provider
instances. In cases where the Provider is simply a “pass-through” to a managed resource, no
coordination may be necessary. In addition, providers running in the “Requestor” context must
only perform privileged operations if those operations are only expected/required to succeed
when invoked by a user who already has the necessary privileges.
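The registration change described above might look like the following mof fragment. The module
name is hypothetical and other required properties are elided; only the InterfaceVersion change
and the UserContext property come from the text.

```
// Sketch of a Provider Registration mof keeping a provider in the
// "Privileged User" context on HP WBEM Services A.02.05.
instance of PG_ProviderModule
{
    Name = "MyProviderModule";      // hypothetical module name
    ...                             // other properties unchanged
    InterfaceVersion = "2.5.0";     // changed from "2.1.0"
    UserContext = 2;                // 2 = Privileged User context
};
```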


HP WBEM Services 2.05 Documentation


Many man pages available
• cimmof (1), cimprovider (1), osinfo (1), wbemexec (1),
cimauth (1M), cimconfig (1M), cimserver (1M), ssltrustmgr
(1M), cimserverd (8), cimtrust (1M)
Visit http://docs.hp.com, go to Network and Systems
Management, then to HP WBEM Services
• HP WBEM Services for HP-UX Release Notes
• HP WBEM Services Software Developer’s Kit Release Notes
• HP WBEM Services for HP-UX System Administrator’s Guide
• HP WBEM Services Software Developer’s Kit for HP-UX
Provider and Client Developer’s Guide

62 March 2007

For further information, see the HP WBEM Services for HP-UX manpages included with product:
cimmof (1)
cimprovider (1)
osinfo (1)
wbemexec (1)
cimauth (1M)
cimconfig (1M)
cimserver (1M)
ssltrustmgr (1M)
cimserverd (8)
cimtrust (1M)

In addition, see the following documents, available at http://docs.hp.com (navigate to Network
and Systems Management, then to HP WBEM Services for HP-UX):
HP WBEM Services for HP-UX Release Notes
HP WBEM Services Software Developer’s Kit Release Notes
HP WBEM Services for HP-UX System Administrator’s Guide
HP WBEM Services Software Developer’s Kit for HP-UX Provider and Client Developer’s Guide


New WBEM Providers on HP-UX 11i v3 (1 of 2)


HP-UX WBEM Fibre Channel Provider
• Get information about HP-UX Fibre Channel HBAs on the system
• Updated to version HP-UX 11i v3.01
• All functionalities for association classes are implemented
HP-UX WBEM File System Provider:
• Makes file system information available
• Instruments HPUX_HFS, HP_LOFS, HP_CDFS, HP_VxFS, HP_NFS,
HP_MountPoint and HPUX_Mount classes
HP-UX WBEM IOTree Provider
• Get information about HP-UX IOTree host-bus adapters (HBAs)
• Displays information about all slots on HP-UX 11i v3 system
HP-UX WBEM LAN Provider for Ethernet Interfaces
• Is a CIM Provider for Ethernet-based LAN technologies on HP-UX
• Determine all Ethernet LAN links available on the system, registered and
known to HP-UX DLPI, and collect information

63 March 2007

There are several new WBEM Providers on HP-UX 11i v3.

The HP-UX WBEM Fibre Channel Provider is updated to version HP-UX 11i v3.01. All
functionalities for association classes are implemented. Client applications can use this provider
to get information about HP-UX Fibre Channel HBAs on the system. All functionalities for
association classes were not implemented in June 2006 HP-UX 11i v2 release, but are
implemented in HP-UX 11i v3 release.

The HP-UX WBEM File System Provider makes file system information available. It instruments the
HPUX_HFS, HP_LOFS, HP_CDFS, HP_VxFS, HP_NFS, HP_MountPoint and HPUX_Mount classes.

The HP-UX WBEM IOTree Provider is used by client applications to get information about HP-UX
IOTree host-bus adapters (HBAs) on the system. The IOTree provider displays information about
all the slots on an HP-UX 11i v3 system, but displays information only about hot-pluggable slots
on an HP-UX 11i v2 system.

The HP-UX WBEM LAN Provider for Ethernet Interfaces is a CIM Provider for Ethernet-based LAN
technologies on HP-UX. Client applications can use this provider to determine all Ethernet LAN
links available on the system, registered and known to HP-UX DLPI, and to collect information
about them. The LAN Provider uses CIM Schema v2.7 and supports the following classes:
• HPUX_EthernetPort subclassed from CIM_EthernetPort
• HPUX_EthernetLANEndpoint subclassed from CIM_LANEndpoint
• HPUX_EthernetPortImplementsLANEndpoint subclassed from CIM_PortImplementsEndpoint


New WBEM Providers on HP-UX 11i v3 (2 of 2)


HP-UX WBEM SCSI Provider
• Get information about HP-UX SCSI host-bus adapters (HBAs) on the
system
• Updated to version HP-UX 11i v3.01
LAN Provider product
• Used to collect information about the Ethernet links on the system
Utilization Provider version A.01.05.00.x is a lightweight daemon
(utild)
• Records system-utilization data on a 5-minute interval
• Data recorded includes CPU, memory, disk, and network utilization
HP-UX vPar Provider
• Extracts information about virtual partitions on a system
• Read-only provider
HP-UX WBEM Online Operations Service Provider
• Not currently supported
– Intended to support features in future releases of HP-UX 11i v3

64 March 2007

The HP-UX WBEM SCSI Provider is used by Client Applications to get information about HP-UX
SCSI host-bus adapters (HBAs) on the system. It is updated to version HP-UX 11i v3.01.

The LAN Provider product is new. You can use WBEM-based clients to access the LAN Provider
and collect information about the Ethernet links on your system.

The Utilization Provider version A.01.05.00.x is a lightweight daemon, utild, that records
system-utilization data on a 5-minute interval. The data recorded includes CPU, memory, disk,
and network utilization. This product also includes a WBEM provider for access to the data.

The HP-UX vPar Provider is a HP WBEM Services for HP-UX provider for extracting information
about virtual partitions on a system. As it is a read-only provider, clients cannot modify virtual
partition configurations. This provider can be used through a Web-based Enterprise
Management (WBEM) interface.

The HP-UX WBEM Online Operations Service (OLOS) Provider is not currently supported, but it
is intended to support features in future releases of HP-UX 11i v3. This provider is delivered as
the olosProvider product as part of the SysMgmtMin bundle. To disable the OLOS Provider, use
the following command: cimprovider -d -m HP_OLOSProviderModule. For details, see the
cimprovider (1) manpage.
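The disable command can be wrapped in a guard so the example degrades gracefully on systems
without WBEM Services; the wrapper function is an illustrative addition.

```shell
# Disable the OLOS provider module where the WBEM tools exist.
disable_olos() {
  if command -v cimprovider >/dev/null 2>&1; then
    cimprovider -d -m HP_OLOSProviderModule
  else
    echo "cimprovider not available on this system"
  fi
}
disable_olos
```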


Event Manager (EVM)

65 March 2007

This module covers the new Event Manager, EVM. It is new to HP-UX on HP-UX 11i v3. It was
ported from Tru64. EVM is built to recognize cell OL* events among others. EVM can send
information to other management tools. For example, the EVM CIM provider generates WBEM
indications. Other tools also use EVM; for example, ProcSM is a facility that uses EVM and by
which commands, utilities, and applications can find processes and be notified of their
termination.


Event Manager (EVM) – New on HP-UX 11i v3


General mechanism for posting and distributing events from any part
of the operating system to any interested party
Comprehensive event management system
• Works in cooperation with other event mechanisms
• Enables event information to be accessed in a uniform manner
– Regardless of method used to post or store it
• Enables posting, receiving, storing, retrieving and monitoring events
EVM event is the basis for all EVM operation
• Used to transport and store event information from many sources
Provides programming and user-level tools
• For creation, display and management of events
Consists of kernel components, user libraries, system daemon, and
command line utilities
Event Manager-Common Information Model (EVM-CIM) Provider is
introduced
66 March 2007

Event Manager (EVM) is a general mechanism for posting and distributing events from any part
of the operating system to any interested party. It is a comprehensive event management system
that works in cooperation with other event mechanisms. EVM enables event information to be
accessed in a uniform manner, regardless of the method used to post or store it. It enables you to
post, receive, store, retrieve and monitor events.

Its design follows the UNIX philosophy of providing a number of simple tools, each of which
performs one task well and can be used as a building block in producing more complex tools.
EVM is controlled through user-editable configuration files.

EVM implements its own event structure, referred to as an EVM event, which is the basis for all
EVM operation. The EVM event is used to transport and store event information from many
sources. EVM provides programming and user-level tools for the creation, display and
management of events.

EVM consists of kernel components, user libraries (libevm.so), system daemons, and a set of
command line utilities.

Finally, the Event Manager-Common Information Model (EVM-CIM) Provider is introduced.


EVM Features
Enables users and applications to post, receive, store, retrieve, and monitor events
• Common format for events
• Single access point for event information
• Real-time event monitoring
Supports creation and customization of event channels
• Event selection with simple filtering language
Facilitates users to select summary or detailed event data
• Retrieves stored events from evmlog.
Provides users with a full set of command line utilities that can be used to post, monitor,
retrieve and format events
Configurable event logger that allows full control over which events are logged
• Optimizes storage space used by identical events
Configurable event forwarding
• Enables automatic notification to other system entities, or authorized processes, of selected
events
Supports configurable authorization for posting or accessing events
Supports log file management that automatically archives and purges log files
Provides application programming interface (API) library for programmers to handle
events

67 March 2007

The EVM system offers many features. It enables users and applications to post, receive, store,
retrieve, and monitor events. It provides a common format for events. EVM provides a single
point of focus for the multiple channels, such as log files, through which system components
report event and status information. It provides real-time event monitoring.

EVM supports the creation and customization of event channels. Event selection is accomplished
using a simple filtering mechanism. EVM then gives the user the choice of summary-line or
detailed-view event data, including online explanations. It retrieves stored events from evmlog.

EVM provides users with a full set of command line utilities that can be used to post and handle
events from shell scripts and the command line. It also supports a configurable event logger
that enables you to control event logging; the logger additionally optimizes the storage space
used by identical events.

EVM has a configurable event forwarding that enables you to automatically notify other system
entities, or authorized processes, of selected events.

EVM supports configurable authorization for posting or accessing events. It supports log file
management that automatically archives and purges log files. And, it provides an application
programming interface (API) library for programmers to use to handle events.


EVM Main Components


EVM Daemon (evmd)
• Receives events from posting clients and distributes them to subscribing clients
EVM Logger
• Receives events from the daemon
• Writes events to each of the logs if the filter string matches
• evmlogger command serves as an event forwarding agent
– Use to configure to take an action when required
– EVM daemon automatically starts the logger
EVM Channel Manager
• Executes the periodic functions defined for any channel
– For example, initiating nightly event log cleanup activity
• EVM daemon starts the channel manager (evmchmgr) automatically
EVM Command Line Utilities
• Use to administer the Event Manager system and to post or obtain events
EVM Application Programming Interface library, libevm.so
• Contains extensive range of event management functions
– These enable programs to post events, send requests and notifications to the
daemon, or receive responses and information from the daemon

68 March 2007

There are five main components of EVM.

The EVM Daemon (evmd) receives events from posting clients and distributes them to the
subscribing clients, that is, clients that have subscribed for the events. It handles the distribution
of event packages to subscribing processes.

The EVM Logger receives events from the daemon and writes them to each of the logs if the filter
string matches. The evmlogger command also serves as an event forwarding agent that you can
configure to take an action when required. The Event Manager daemon automatically starts the
logger.

The EVM Channel Manager executes the periodic functions defined for any channel. For
example, the EVM Channel Manager can initiate nightly event log cleanup. The Event Manager
daemon starts the channel manager (evmchmgr) automatically.

The EVM Command Line Utilities are provided by EVM to administer the Event Manager system
and to post, monitor, retrieve, sort, and display events.

The EVM Application Programming Interface, or Event Manager API library, libevm.so, contains
an extensive range of event management functions. The API functions enable programs to post
events, send requests and notifications to the daemon, or receive responses and information from
the daemon.


EVM Architecture

69 March 2007

This slide illustrates the key relationships between these components and EVM’s clients. The EVM
daemon picks up events from both posting clients and kernel posters. The daemon interfaces with
the channel manager, logger, and get server. There are both subscribing and retrieving clients.


EVM Functionality
Provides flexible infrastructure that can be used as an event distribution channel by
• Internal development groups at HP
• Independent software vendors
• Customer-application developers
• Existing event channels
Event notification, or just event
• Mechanism used to pass event information
• Component generating the event is known as the event poster
– Event-posting mechanism is a one-way communication channel
– Allows poster to communicate information to any entity that cares to receive it
• Not necessary for poster to know which entities, if any, are interested in receiving an event
– Some event may never be subscribed to
Event subscriber
• Entity with an interest in receiving event information is known as an event subscriber
– Might include system administrators, other software components, or ordinary users
Events can be posted and subscribed to by any process
• Same process can be both a poster and a subscriber
• Ability to post and receive specific events is governed by security authorizations

70 March 2007

EVM provides a centralized means of posting, distributing, storing, and reviewing event
information, regardless of the event channel used by individual event posters and without
requiring existing posters to change how they interact with their current channels. EVM makes
event information more accessible to system administrators than was previously possible, and it
provides a flexible infrastructure that can be used as an event distribution channel by internal
development groups at HP, independent software vendors, customer-application developers, and
existing event channels.

The mechanism used to pass event information is known as an event (or event notification), and
the component generating the event is known as the event poster. EVM's event-posting
mechanism is a one-way communication channel. It is intended to allow the poster to
communicate information to any entity that cares to receive it. It is not necessary for the poster to
know which entities, if any, are interested in receiving an event being posted.

An entity that expresses an interest in receiving event information is known as an event
subscriber. Depending on the event, subscribers might include system administrators, other
software components, or ordinary users. It is quite possible that some events will never be
subscribed to.

Events can be posted and subscribed to by any process, and the same process can be both a
poster and a subscriber. However, in all cases, the ability to post and receive specific events is
governed by security authorizations.


EVM Command Line Utilities


evmpost
• Accepts a file or a stream of events in text format
• Posts them to the Event Manager daemon for distribution
evmshow
• Accepts one or more Event Manager events
• Outputs them in the specified format
evmsort
• Reads a stream of events
• Sorts the events according to the specified criteria
evmwatch
• Subscribes to the specified events
• Outputs them as they arrive
evmget
• Retrieves stored events from a configured set of log files and event channels
– Uses channel-specific retrieval functions
evmreload
• Posts control events that instruct EVM components to reload the configuration files
– Must use this command to load the new configuration when you modify any EVM
configuration file


There are six command line utilities.

The evmpost command accepts a file or a stream of events in text format, and posts them to the
Event Manager daemon for distribution.

The evmshow command accepts one or more Event Manager events and outputs them in the
specified format.

The evmsort command reads a stream of events, and sorts the events according to the specified
criteria.

The evmwatch command subscribes to the specified events and outputs them as they arrive.

The evmget command retrieves stored events from a configured set of log files and event
channels, using channel-specific retrieval functions.

The evmreload command posts control events, which instruct the Event Manager components to
reload the configuration files. You must use this command to load the new configuration
whenever you modify any EVM configuration file.
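These utilities are built to be combined in shell pipelines. The sketch below is illustrative rather than authoritative: the event name is invented, the text format and the -t template items follow the Tru64 UNIX EVM documentation from which the HP-UX port derives, and the EVM invocations are commented out because they require a running EVM daemon. Verify the details against the evmpost and evmshow man pages on an 11i v3 system.

```shell
# A minimal event in evmpost's text format (event name is hypothetical):
cat > /tmp/sample.evt <<'EOF'
event { name myco.myapp.env.started  priority 200 }
EOF

# Typical invocations -- commented out because they need a running EVM daemon:
# evmpost /tmp/sample.evt                  # post the event for distribution
# evmget | evmshow -t '@timestamp @@'      # retrieve stored events, one per line
# evmget | evmsort | evmshow               # sort the stream before display
# evmwatch | evmshow &                     # print new events as they arrive
# evmreload                                # after modifying any EVM config file
```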


Life Cycle of an EVM Event


Software component detects that something has happened
• Component creates EVM event package
– Contains information about what happened
• Posts the event by sending it to the EVM daemon
EVM daemon
• Receives the event
• Enhances it with information from its template event database
• Distributes events to any subscribing clients that have registered an
interest in this event
Subscribing clients
• Receive the event
• Examine the information it contains
• Take any appropriate action
• If the EVM logger is a subscriber, the event is stored in the EVM logfile
Sys Admin may run the evmget command or the event viewer to
retrieve the event from the logfile and display its contents
Eventually channel cleanup purges logfile

The following sequence illustrates the typical life cycle of an EVM event.

A software component detects that something interesting has happened.

The component creates an EVM event package containing information about what happened.

The component posts the event by sending it to the EVM daemon.

The EVM daemon receives the event, and enhances it with information from its template event
database.

The daemon distributes the events to any subscribing clients that have registered an interest in
this event.

The subscribing clients receive the event, examine the information it contains, and take any
appropriate action. If the EVM logger is a subscriber, the event is stored in the EVM logfile.

The system administrator may run the evmget command or the event viewer to retrieve the event
from the logfile and display its contents.

Eventually the logfile is purged by a channel cleanup function.


Classes of Clients
Posting clients
• Pass event information to EVM for distribution
Subscribing clients
• Request notification when EVM receives event information from posting
clients
• Can select events that they are interested in by using event filters
Service clients
• Request configurable services
• Event retrieval service
– Used by evmget command to retrieve events from logfiles
• The evmget command is also referred to as a retrieving client
Events can be distributed within the kernel and posted to user space
for distribution to user-level subscribing clients


EVM provides services to several classes of clients. There are posting clients that pass event
information to EVM for distribution. Subscribing clients request notification when EVM receives
event information from posting clients. Subscribers can select the set of events in which they are
interested by the use of event filters. Service clients request configurable services. The only
service defined by the current version of EVM is the event retrieval service, which is used by the
evmget command to retrieve events from log files. The evmget command is also referred to as a
retrieving client. Events can be distributed within the kernel and posted to user space for
distribution to user-level subscribing clients.
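Subscribers narrow the event stream with filter strings. The sketch below assumes the Tru64 EVM filter grammar carried over in the port (square-bracketed keyword tests combined with & and |); the event name is a placeholder, and the client invocations are commented out because they need a live daemon.

```shell
# Select events under a (hypothetical) name prefix at priority 300 or higher:
FILTER='[name myco.myapp.*] & [prio >= 300]'

# A retrieving client and a subscribing client would pass it with -f:
# evmget   -f "$FILTER" | evmshow   # retrieval service used by evmget
# evmwatch -f "$FILTER" | evmshow   # live subscription
printf '%s\n' "$FILTER"
```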


EVM and Cell OL*


Cell OL* generates EVM events and WBEM indications
• During Cell OL* operations
• During Dynamic Resource Reconfiguration operations
Events are generated
• At beginning of online addition and deletion operations
– Progress indicators during online addition and deletion
operations
• At completion of online addition and deletion operations
Events and indications reflect granularity of operation and
specific resources being added or deleted


Cell OL* generates EVM events and WBEM indications during Cell OL* operations and Dynamic
Resource Reconfiguration operations. Events are generated at the beginning of online addition
and deletion operations, as progress indicators while those operations run, and at their
completion. The EVM events and indications reflect the granularity of the operation and the
specific resources being added or deleted.


More on EVM
Essential Services Monitor (ESM)
• Keeps EVM highly available
– Restarts the daemons if ESM loses its connection to EVM
• ESM is a single program
– Two entries in the /etc/inittab file
– Depends on ProcSM
HP-UX 11i v3 includes an EVM CIM provider that
generates WBEM indications from EVM events


The Essential Services Monitor (ESM) keeps EVM highly available by restarting the
daemons if ESM loses its connection to EVM. ESM is a single program, a makefile, and two
entries in the /etc/inittab file. ESM is necessary because the respawn functionality in inittab is
insufficient for the needs of EVM. ESM depends on ProcSM, which is covered shortly.

HP-UX 11i v3 includes an EVM CIM provider that generates WBEM indications from EVM
events.


EVM on HP-UX versus Tru64 UNIX


HP-UX 11i v3 includes EVM core ported from Tru64
• Base kernel module allows events to be posted and subscribed to within the kernel
• Part of the base HP-UX kernel
– Provides API for posting events from within the kernel
– Provides API for posting events from within the kernel at interrupt level
– Provides pseudo-device through which the EVM daemon reads kernel-generated events
– Provides API for subscribing to events from within the kernel
– Provides supporting code to create, manipulate, match, and validate event
data
Port does not include
• TruCluster kernel EVM module
– Distributes cluster-wide events to all TruCluster members
• Binlog support
• misclog support
• evmviewer
• syslogd may not be instrumented to forward events to EVM


EVM single system functionality was ported from Tru64 UNIX Version 5.1B to HP-UX 11i v3.

The Tru64 UNIX Event Manager includes two kernel components. The base kernel EVM module
that allows events to be posted and subscribed to within the kernel was ported to HP-UX. The
TruCluster kernel EVM module that distributes cluster-wide events to all TruCluster members was
not ported to HP-UX.

The base kernel EVM module is part of the base HP-UX kernel. It provides the API for posting
events from within the kernel, the API for posting events from within the kernel at interrupt level, a
pseudo-device through which the EVM daemon reads kernel-generated events, the API for
subscribing to events from within the kernel, and supporting code to create, manipulate, match,
and validate event data.

There are a few components that are not supported. These include binlog, misclog, and
evmviewer. Also, note that syslogd may not be instrumented to forward events to EVM.


EVM Documentation
Event Manager man page
• evm(5)
– Includes references to other EVM-related manpages
See documents available at http://docs.hp.com
• Event Manager Administrator’s Guide
• Event Manager Programmer’s Guide


For further information, see the Event Manager man page, evm(5), which includes references to
most of the other EVM-related man pages. In addition, see the Event Manager Administrator’s
Guide and the Event Manager Programmer’s Guide, available at http://docs.hp.com.


Process Set Manager (ProcSM) Overview


DLKM that monitors the state of vital processes that are registered with it
• New on HP-UX 11i v3
• Consists of APIs and KPIs
– For creating a category, listing categories, and removing a category
– For adding members to a category, listing members, and removing members from a
category
• System call for funneling all the user space calls to kernel space
Facility for applications to find processes and be notified of their termination
• Important processes can register with the facility
– Register themselves
– Be registered by others
• Enables monitoring the state of a registered process
• Upon termination, an EVM event is generated that can be received by those user space
processes that need to be informed
Can limit the number of instances of an application
Provides reliable means to get notified when registered process terminates
• Improves performance over traditional method of using ‘ps’ and ‘grep’
Each process to be monitored is registered as a member of a category
• A category is a collection of members, or processes
• APIs to create categories and members within a category
• Programs use APIs to register themselves with ProcSM


Process Set Manager, or ProcSM, is a facility by which commands, utilities, and applications
can find processes and be notified of their termination. Important processes can register with the
facility. They may register themselves, or be registered by others. Upon termination, an EVM
(Event Manager) event is generated that can be received by those user space processes that
need to be informed. ProcSM can also be used for limiting the number of instances of an
application.

ProcSM provides facilities to monitor the state of a registered process. It provides facilities to limit
the number of instances of a program. It also provides a reliable means to get notified when a
registered process terminates. This gives a performance improvement over the traditional
method of using ‘ps’ and ‘grep’.

ProcSM consists of APIs and KPIs for creating a category, listing categories, and removing a
category. It also has APIs and KPIs for adding members to a category, listing members, and
removing members from a category. Finally, there is a system call that funnels all the
user-space calls to kernel space.

ProcSM is a loadable kernel subsystem that monitors the state of vital processes that are
registered with it. Each process to be monitored is registered as a member of a category. A
category is a collection of members, or processes. ProcSM provides APIs to create categories
and members within a category. Programs use these APIs to register themselves with ProcSM.


ProcSM Functionality (1 of 2)
For each process, ProcSM maintains
• PID, flags, saved argument vector
• User-modifiable state string that has two predefined values
– running or exited
– Data is in-memory
• Not persistent across boot
Rules for monitoring of processes
• After process exit, termination status continues to be available until the "pid"
wraps around or the category member limit is exceeded
• When a process registers
– If an entry with the same pid exists, it will be reused
– If there is no entry with the same pid and the number of entries in the category
has reached the limit, any exited process’s entry in the target category will be
reused by the new instance of the program
• A specific process is registered in only one category
– However there can be one or more processes in a category
• Example: create a category named “httpd”
– Multiple httpd processes on a host can register under the category named “httpd”


For each process, ProcSM maintains the pid, process flags, saved argument vector, and a
user-modifiable state string that has two predefined values: running and exited. This data is
kept in memory and is not persistent across boot.

Monitoring of processes by ProcSM is subject to several rules. After a process exits, its
termination status continues to be available until the "pid" wraps around or the category
member limit is exceeded. When a process registers with ProcSM and an entry with the same pid
exists, that entry is reused. If there is no entry with the same pid and the number of entries
in the category has reached the limit, any exited process’s entry in the target category is
reused by the new instance of the program. A specific process is registered in only one
category; however, there can be one or more processes in a category. For example, one could
create a category named “httpd”, and multiple httpd processes on a host could register under
it.


ProcSM Functionality (2 of 2)
ProcSM uses EVM to generate events whose names are well known
• Other users and system entities use these well known names to subscribe for the events
• ProcSM generates an event when processes are registered or when they exit
• Application no longer has to poll to see if some important process has failed
– It can be notified instantly via the exit event
• Events can be used by other subsystems to synchronize with services that they are
dependent upon
– Instead of using unreliable methods, like sleeping for some number of seconds
Several activities result in ProcSM posting events which are available for subscription
• Creation/deletions of a Category
• Addition/deletion of a process or member
• Termination of a process
ProcSM can be used to prevent multiple copies of the same application from running
• Easier for applications needing event notification
• procsm_catadd() API takes an additional ‘maxmemcnt’ argument
– Maximum number of processes that is permitted to register in the category specified by
catnam
– If application uses procsm_memadd() routine during its startup to register itself with ProcSM,
the number of instances of this application will be maxmemcnt


ProcSM uses the Event Manager (EVM) to generate events whose names are well known. Other
users and system entities use these well known names to subscribe for the events. For example,
ProcSM generates an event when processes are registered or when they exit. An application no
longer has to poll to see if some important process has failed because it can be notified instantly
via the exit event. These events can also be used by other subsystems to precisely synchronize
with the services that they are dependent upon, instead of using unreliable methods, like
sleeping for some number of seconds.

Several activities result in ProcSM posting events, which are available for subscription. These
include the creation/deletion of a category, the addition/deletion of a process or member, and
the termination of a process.

ProcSM can be used to prevent multiple copies of the same application from running. This
eliminates the hoops that application code must jump through to determine whether another copy
of the same program is already running, for example, lock files or socket binds. The
procsm_catadd() API is modified to take an additional ‘maxmemcnt’ argument. This count is the
maximum number of processes permitted to register in the category specified by catnam. Hence,
as long as the application uses the procsm_memadd() routine during its startup to register
itself with ProcSM, the number of instances of the application is limited to maxmemcnt.
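The exit events replace polling. The contrast can be sketched as follows: the polling loop uses kill -0 against a stand-in background process, rather than ps and grep, so that it is runnable anywhere, and the ProcSM event name in the subscription line is a placeholder rather than a documented name.

```shell
# Traditional approach: poll until the watched process disappears.
sleep 1 &                              # stand-in for an important daemon
pid=$!
while kill -0 "$pid" 2>/dev/null; do
    sleep 1                            # the unreliable "sleep and re-check" loop
done
echo "process $pid has exited"

# Event-driven replacement: subscribe once and be told immediately.
# (PLACEHOLDER event name; requires a running EVM daemon.)
# evmwatch -f '[name sys.unix.procsm.proc.exit]' | evmshow -t '@timestamp @@'
```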


Event Monitoring Service (EMS)


This module describes changes to EMS in HP-UX 11i v3.


Event Monitoring Service (EMS)


Framework for monitoring system resources
• Configures, checks resource status, and sends notification
when configured conditions are met
• Provides a common interface for monitor configuration and
event notification
EMS monitors
• Provide help primarily with fault and resource management
• Are designed for use in high availability environments
Documentation
• Using Event Monitoring Service
• Using High Availability Monitors
• http://docs.hp.com


The Event Monitoring Service (EMS) is a framework for monitoring system resources, which
includes configuring, checking resource status, and sending notification when configured
conditions are met.

EMS provides a common interface for monitor configuration and event notification. EMS
monitors provide help primarily with fault and resource management. EMS monitors are
designed for use in high availability environments.

Refer to the EMS documentation “Using Event Monitoring Service” and “Using High Availability
Monitors” available at
http://docs.hp.com/en/ha.html#Event%20Monitoring%20Service%20and%20HA%20Monitors.


EMS Resource Monitors


Applications written to gather and report information about
specific resources on the system
Resource monitors
• Provide a list of resources that can be monitored
• Provide information about the resources
• Monitor the resources they support
• Provide values to the EMS API for notification
EMS framework evaluates the data to determine if an event
has occurred
• If an event has occurred, the EMS API sends notification in
the appropriate format to the configured target(s)


EMS resource monitors are applications written to gather and report information about specific
resources on the system. Each resource monitor provides a list of resources that can be
monitored, provides information about those resources, monitors the resources it supports, and
supplies values to the EMS API for notification.

The EMS framework evaluates the data to determine if an event has occurred. If an event has
occurred, the EMS API sends notification in the appropriate format to the configured target(s).


EMS Architecture


This slide illustrates the EMS architecture. Systems whose resources are being monitored (for
example, systemA and systemB) are registered with the system running the EMS monitors (for
example, systemC). Named pipes run between the registrars and the monitors. The monitors can
watch different resources on each monitored system.


EMS on HP-UX 11i v3


Supports new High Availability EMS Monitors
• MIB Objects Monitors
– Set of MIB monitors for monitoring system resources, network interfaces, package
dependencies and cluster resources
– RFCs 2619 and 2621 describe the MIB objects for HP-UX AAA Server
• Since HP-UX AAA Server performs both authentication and accounting functions, some of the MIB
objects return duplicated information
• RADIUS MIB objects are sent to EMS
• Disk Monitor
– Used for monitoring physical and logical volumes that are configured by LVM
– LVM commands replace ioctl calls made in LVM Disk monitor
• These ioctl calls are not supported
• RDBMS Monitor
– Used for monitoring Oracle and Informix databases and database servers
Sends WBEM indications
• Uses a wrapper WBEM provider
• View using the EVWEB tool
GUI continues to be an X-based application
• Integrates with SMH
Supports large PIDs, long hostnames and uname


On HP-UX 11i v3, EMS supports three High Availability EMS Monitors: MIB (Management
Information Base) objects from AAA Security, Disk (LVM), and RDBMS (Oracle, Informix).

The EMS MIB Monitor is actually a set of MIB monitors for monitoring system resources, network
interfaces, package dependencies and cluster resources. RFCs 2619 and 2621 describe the
MIB objects for HP-UX AAA Server. Since the HP-UX AAA Server performs both authentication
and accounting functions, some of the MIB objects return duplicated information. RADIUS MIB
objects are sent to EMS in HP-UX 11i v3.

The EMS Disk Monitor is used for monitoring physical and logical volumes that are configured
by LVM. Ioctl calls made in the LVM Disk monitor are replaced with LVM commands since these
ioctl calls are not supported.

The EMS RDBMS monitor is used for monitoring Oracle and Informix databases and database
servers.

In addition to the existing notification methods that EMS supports, EMS is now enhanced to send
WBEM indications. The WBEM indications are provided using a wrapper WBEM provider and can be
viewed from the EVWEB tool.

On HP-UX 11i v3, the EMS GUI continues to be an X-based application, and it integrates with
SMH. It also supports large PIDs, long hostnames, and long uname values.


Section Summary
This section described
• System management tools
• Event Manager
• Event Monitoring Service


This section covered the Simplified Management features of HP-UX 11i v3. The topics included
the system management tools, the Event Manager, and the Event Monitoring Service.


Learning check


See the Lab Guide.


Lab activity


See the Lab Guide.
