
Veritas™ Services and Operations Readiness Tools


Risk Assessment Checklist Report for

Product: Storage Foundation HA


Platform: Linux
Product version: 6.2.x
Product component: All
Check category: All

Summary
VCS HAFD apache configuration file
VCS HAFD apache owner
VCS HAFD application checksum
VCS HAFD application program
VCS HAFD application user
ASL and APM consistency
Boot volumes completion
Campus cluster allsites volumes
Campus cluster auto reattach daemon state
Campus cluster config copy availability
Campus cluster disk with site tag
Campus cluster host site tag
Campus cluster site-consistency
Concat volumes with multiple disks
Concat volumes across disk arrays
Dynamic Multi-Pathing (DMP) driver paths state
Dynamic Multi-Pathing (DMP) restore daemon state
Dynamic Multi-Pathing (DMP) restore daemon consistency
Encapsulated SAN boot disk
Dynamic Multi-Pathing (DMP)/DDL enclosure claim
Mirrored volumes with a single disk array
Full fsck required
Sufficient memory for full fsck
Mirrored volumes in FSS disk group
Removable VxFS File System checkpoints
File System disk layout version
Unmounted VxFS File System(s)
HBA controller state
VxVM Volume Manager disk error
Hot relocation daemon state
Verify I/O fencing for cluster
SFHA Kernel module consistency
Mirrored volumes with no mirrored DRL
License keys Consistency
LLT links full duplex setting
LLT links high priority and private link
LLT link count
Mirrored volumes with single disk controller
Package Consistency
RAID-5 volumes with unmirrored log
Volume Replicator (VVR) RVG and RLINK state check
Root mirror validity
Fencing Configuration
Fencing CP server Configuration
LLT Link jumbo frame setting
LLT Link MTU check
LLT configuration - cluster ID
LLT links cross-connection
LLT Link speed autonegotiation and MAC address check
SF Oracle RAC ODM configuration
SF Oracle RAC Oracle integration - library linking check
System architecture type
VCS HAFD mount point configuration
VCS HAFD mount point existence
VCS HAFD VxFS license
VCS HAFD NFS lock directory
VCS HAFD NFS configuration
VCS HAFD NIC
VCS ToleranceLimit
VCS HAFD Oracle home
VCS HAFD Oracle owner
VCS HAFD Oracle PFile
VCS HAFD process program
VCS HAFD share directory exists
VCS HAFD triggers checksum
VCS HAFD triggers program
VCS disk connectivity
VCS duplicate disk group name
VCS free swap space
VCS GAB Startup Configuration Check
VCS LLT startup configuration check
VCS OS version and patch
VCS Cluster ID
VCS ClusterAddress
VCS ClusterService OnlineRetryLimit
VCS configuration
VCS GAB jeopardy
VCS Cluster read-only status
VCS SysName
VCS unique cluster name
VCS Disabled resources
VCS frozen groups
VCS virtual host attributes
Detach policy for shared Disk group with A/P arrays
Campus cluster disk group fail policy
Campus cluster disk detach policy
Non-CDS disk group
Number of disk group configuration backup copies
Number of disk group configuration copies
Disk group spare space
Verify support package
Fragmented VxFS File System
Verify software patch level
Mirrored-stripe volumes
Volume Replicator SRL protection
Volume Replicator (VVR) Storage Replicator Log (SRL)
Volume Replicator (VVR) SRL striped and mirrored
Volume Manager system name
Volume Replicator (VVR) network bandwidth limit
Volume Replicator (VVR) consistent data and Storage Replicator Log volume names
Volume Replicator (VVR) consistent data and SRL volumes
VCS IfconfigTwice attribute
VCS NetworkHosts attribute


Time synchronization
Temporary license keys
Volume Replicator version consistency
VxVM configuration backup daemon state
VCS critical resources
VCS HAFD disks
VCS HAFD VxVM license
VCS HAFD disk UDID comparison
VCS HAFD VxVM components
VCS HAFD DNS Domain Information Groper (DIG)
VCS HAFD DNS keyfile
VCS HAFD DNS master
VCS faulted agents
VCS faulted resources
VCS HAFD IP device
VCS HAFD IP route
VCS HAFD VxFS FsckOpt
VCS HAFD mount point availability
Campus cluster volume read policy
Input/output accelerators for databases
VxFS File System intent log size
Input/output access time
Input/output fragment
Input/output wait state
DRL size
Mirrored volumes without Dirty Region Log (DRL)
SmartIO feature awareness
Disk group configuration database
Underutilized disks
File System old Storage Checkpoint
VxFS File System utilization
VxFS volume and file system size
Multi-volume storage tiering
Unused volume components
Storage Foundation thin provisioning
Unused volumes

VCS HAFD apache configuration file

Check category: Availability

Check description: Checks whether HttpDir and ConfFile exist.

Check procedure:

Checks whether the specified HttpDir directory is a valid directory on the target cluster systems.

Checks whether the specified ConfFile file exists on the target cluster systems.

Check recommendation: Make sure that HttpDir is a valid directory and that the ConfFile file exists on the clustered system.

Learn More...
Action taken for Apache agent

VCS HAFD apache owner

Check category: Availability

Check description: Determines whether the resource owner defined by the User agent attribute exists.

Check procedure:

Checks whether the user has a valid UNIX login on the clustered system.

Check recommendation: Make sure that the User agent attribute specifies a valid UNIX account on the clustered system. To determine if the user account exists, run the following command:
# /usr/bin/id user_name
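The same existence test can be sketched in Python using the standard library's password-database lookup; the function name is mine, and this is only an illustration of what `/usr/bin/id` verifies, not how the agent implements it:

```python
import pwd

def unix_account_exists(user_name: str) -> bool:
    """Return True if user_name has an entry in the system password
    database, i.e. a valid UNIX login (the same fact /usr/bin/id checks)."""
    try:
        pwd.getpwnam(user_name)
        return True
    except KeyError:
        return False
```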

Learn More...
Action taken for Apache agent

VCS HAFD application checksum


Check category: Availability

Check description: Compares the checksums of the application executable files on all cluster nodes. The node on which the application is currently running provides the canonical copy, so the check is skipped on that node. On all other cluster nodes, the check fails if the checksums differ. If the application is not running (ONLINE) on any node, the check is skipped on all nodes.

Check procedure:

Fetches the list of applications configured for monitoring on the target cluster systems.

Checks whether the binaries specified for the StartProgram, StopProgram and MonitorProcess attributes of the application resource exist on the target cluster systems.

Verifies that the checksums of the specified binaries are the same as the checksums on the system where the group is online.

Check recommendation: The checksum of each executable file should be the same on all nodes. Identify the definitive and correct executable files on the node where the application is running, and then synchronize the files on the remaining failover nodes.
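The comparison logic can be sketched as follows; SHA-256 and the function names are my choices for illustration, not the checksum scheme the check actually uses:

```python
import hashlib
from pathlib import Path

def file_checksum(path: str) -> str:
    """SHA-256 of the file contents (stand-in for the product's checksum)."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def checksums_consistent(canonical_path: str, other_paths: list[str]) -> bool:
    """True when every copy matches the canonical copy, which in the real
    check lives on the node where the application is ONLINE."""
    canonical = file_checksum(canonical_path)
    return all(file_checksum(p) == canonical for p in other_paths)
```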

Learn More...
Attributes required for configuring Application agent

VCS HAFD application program

Check category: Availability

Check description: Checks whether the application binaries exist and are executable.

Check procedure:

Fetches the list of application binaries that are specified for monitoring in the configuration on the target cluster systems.

Checks whether the application binaries exist on the target cluster systems.

Verifies whether the application binaries are executable on all the target cluster systems.

Check recommendation: Make sure that the scripts specified in the cluster configuration exist and that they are executable on all systems in the cluster.

Learn More...
Attributes required for configuring Application agent

VCS HAFD application user

Check category: Availability

Check description: Checks whether the application user account exists.

Check procedure:

Retrieves the user information for the applications configured on the target cluster systems.


Verifies that the user information is valid and that the user exists.

Check recommendation: Make sure that the application user has a valid UNIX login, and that the account is enabled for shell access.

Learn More...
Attributes required for configuring Application agent

ASL and APM consistency

Check category: Availability

Check description: Checks the Array Support Library (ASL) and Array Policy Module (APM) levels across systems in a single cluster.

Check procedure:

Identifies systems that are part of the same cluster and that have the same version of Storage Foundation / InfoScale installed.

Discovers the ASLs and the APMs installed on the systems.

Checks whether the ASL and APM versions are consistent across all systems in the cluster.

Check recommendation: Ensure that the ASL and APM versions are consistent for all systems in a cluster that have the same operating system and Storage Foundation / InfoScale version, and are connected to the same storage enclosure.

Learn More...
Introduction to ASLs
Configuring Array Policy Modules

Boot volumes completion

Check category: Availability

Check description: Checks the completeness of boot volumes, and verifies that the plex size is at least equal to the volume size.

Check procedure:

Determines the boot disk group that is present on the system.

Obtains the list of volumes residing in the boot disk group.

Checks whether the plex size of the volumes in the boot disk group is equal to the volume size.

Check recommendation: Fix the boot configuration for the boot volumes in the system boot disk group; the plex size should be at least equal to the volume size.
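The plex-completeness test in the procedure above amounts to a size comparison per plex; the data layout below is assumed for illustration, whereas the real check reads it from the VxVM configuration of the boot disk group:

```python
def incomplete_boot_volumes(volumes: dict[str, dict]) -> list[str]:
    """Given {volume: {"size": blocks, "plex_sizes": [blocks, ...]}},
    return the volumes having any plex smaller than the volume itself."""
    return [
        name
        for name, info in volumes.items()
        if any(plex < info["size"] for plex in info["plex_sizes"])
    ]
```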

Learn More...
Booting root volumes
Information on boot-time volume configurations


Campus cluster allsites volumes

Check category: Availability

Check description: The check fails if the volumes that have the allsites flag set do not have complete plexes allocated from all the configured sites.

Check procedure:

Verifies that allsites volumes have complete plexes on all the configured sites.

Verifies that the subdisks of the plex of a site are from the disks on the same site.

Does not apply to volumes in the disk groups that do not have any sites configured.

Check recommendation:

Make sure that the allsites volumes have a plex on the configured sites.

Make sure that the subdisks are site-confined.

Learn More...
How to create allsites volume

Campus cluster auto reattach daemon state

Check category: Availability

Check description: The check fails if the vxattachd daemon is not running. When the faulted disks are reconnected, this daemon automatically reattaches the affected sites.

Check procedure:

Verifies that the vxattachd daemon is running.

Check recommendation: Make sure that vxattachd and vxrestored are running.

Campus cluster config copy availability

Check category: Availability

Check description: Checks whether all sites have recent configuration copies and log copies, distributed in a balanced manner across the sites. Each site should have at least one configuration copy and one log copy.

Check procedure:

Verifies that each site has at least one configuration copy and one log copy.

Verifies that the configuration copies and log copies are distributed across the sites.


Check recommendation: Make sure that the configuration and log copies are distributed in a balanced manner across the sites.

Learn More...
How to handle conflicting configuration copy

Campus cluster disk with site tag

Check category: Availability

Check description: The check fails if the disks are not tagged with valid sites that are configured in the disk group.

Check procedure:

Verifies that the disks in a disk group that has sites configured are not untagged.

Verifies that the disks in a disk group that has sites configured are tagged with a valid site.

Verifies that all the disks from an enclosure that are in the disk group belong to the same site.

Check recommendation:

Make sure that the disks are tagged with a valid site that is configured in the disk group.

Make sure that the disks from an enclosure are tagged with the same site.

Learn More...
How to configure sites for storage

Campus cluster host site tag

Check category: Availability

Check description: Checks whether the hosts are tagged with a site tag which is a configured site in all imported disk groups.
Note: This check does not apply if the host does not have an imported disk group.

Check procedure:

Verifies that the host has been tagged with a site.

Verifies that all the imported disk groups have the host site as a configured site.

Check recommendation: Make sure that you tag the host with a valid site and that the imported disk groups have the host site as a configured site.

Learn More...
How to configure sites for hosts

Campus cluster site-consistency

Check category: Availability


Check description: The check fails if the Volume Manager (VxVM) objects are not correctly configured to maintain site-consistency.

Check procedure:

Validates that all the volumes (both data and log volumes) that have the site-consistency flag set have at least one complete plex on each site configured in the disk group.

Validates that a DCO log with a version >= 20 is attached to the site-consistent volumes.

Validates that DCO volumes have site-consistency set and are enabled.

The check does not apply if no sites are configured in the disk group or if the disk group has site-consistency=off.

Check recommendation:

Make sure to attach a DCO log with a version >= 20 to the site-consistent volumes.

Make sure that the site-consistent volumes have a complete plex in all the configured sites.

Make sure that the state of the plexes of the site-consistent volumes matches the state of the site.

Learn More...
How to configure site consistency on a disk group
How to configure site consistency on a volume

Concat volumes with multiple disks

Check category: Availability

Check description: Checks for non-mirrored concatenated volumes consisting of multiple disks.

Check procedure:

Generates a list of the non-mirrored concatenated volumes on the system.

Checks whether the concatenated volumes consist of multiple disks.

Check recommendation: It is recommended that you mirror the concatenated volumes. That way, if one of the disks in the volume fails, you will not lose access to the data.

Learn More...
Adding a mirror to a volume
Creating a concatenated-mirror volume
Creating a mirrored-concatenated volume

Concat volumes across disk arrays

Check category: Availability


Check description: Checks for concat volumes whose LUNs span two or more disk arrays.

Check procedure:

Generates a list of the concatenated volumes on the system.

Checks whether the subdisks that constitute the concatenated volume are coming from different storage array ports and enclosures.

Check recommendation: Reconfigure the volume(s) so that all LUNs on the volume are exported by a single storage array. When a concat volume spans two or more arrays, failure of any one array brings the entire volume offline. Therefore, it is recommended that all LUNs in a concatenated volume reside on a single array.

The high-level procedure is as follows:

1. Decide on which array you want the volume to reside (referred hereafter as Array1).

2. Identify and record the name and size of the LUN(s) to be replaced, that is, those LUN(s) exported by arrays other than Array1.

3. Export new, or unused existing, LUN(s) to the server from Array1. Each LUN will typically be the same size and redundancy as the LUN to be replaced.

4. Initialize those LUNs as VxVM disks, typically with the vxdisksetup command.

5. Use the vxdg adddisk command to add those VxVM disks to the diskgroup.

6. Use the vxconfigbackup command to back up the diskgroup configuration.

7. Use the vxsd mv command to move the contents of the old LUN onto the new LUN. This command operates online while the volume remains in use.

8. Optionally, remove the replaced LUN(s) from the diskgroup.

Learn More...
vxdisksetup: man page
vxdg: man page
vxconfigbackup: man page
vxsd: man page

Dynamic Multi-Pathing (DMP) driver paths state

Check category: Availability

Check description: Checks whether all Dynamic Multi-Pathing (DMP) paths to the array are in the active/enabled state.

Check procedure:

Identifies all the DMP subpaths for all the disks available on the system.

Checks whether any of the subpaths are disabled.


Check recommendation: Enable all the disabled controllers on the system. You can do so using the DMP commands provided in the documentation linked below.

Learn More...
Displaying paths controlled by the controller
Enabling I/O for paths, controllers, or array ports

Dynamic Multi-Pathing (DMP) restore daemon state

Check category: Availability

Check description: Verifies that the Dynamic Multi-Pathing (DMP) restore daemon is running on the system.

Check procedure:

Verifies that the Dynamic Multi-Pathing (DMP) restore daemon is running on the system.

Check recommendation: Automatic path failback stops if the DMP restore daemon is stopped. You are advised to start the DMP path restoration daemon. Note that it is not a user-land daemon that you can find using the ps command, but a kernel thread that you can see only through the vxdmpadm command. Click the documentation links below for details of using the vxdmpadm command.

Learn More...
How to configure DMP path restoration policies?
Displaying the status of DMP path restoration thread
Stopping the DMP path restoration thread

Dynamic Multi-Pathing (DMP) restore daemon consistency

Check category: Availability

Check description: Verifies that the Dynamic Multi-Pathing (DMP) restore daemon is configured consistently across all systems in a cluster.

Check procedure:

Determines which systems are part of the same cluster.

Verifies that the DMP restore daemon attributes are consistent across all systems in the cluster.

Check recommendation: Ensure that the DMP restore daemon is configured consistently for all systems in a cluster that are connected to the same enclosure and run the same Storage Foundation / InfoScale version.

Learn More...
How to configure DMP path restoration policies?
Displaying the status of DMP path restoration thread
Stopping the DMP path restoration thread

Encapsulated SAN boot disk

Check category: Availability

Check description: Checks whether the disk arrays on which the boot disks reside are listed in the hardware compatibility list (HCL) as supporting storage area network (SAN) bootability. Note: This check is only performed on Linux and Solaris systems.

Check procedure:


Identifies the boot disk group present in the system.

Gets the list of disks in the boot disk group.

Identifies the disk array to which the disks in the boot disk group belong.

Verifies in the HCL whether the disk array supports SAN bootability.

Check recommendation: Disks in the boot disk group do not support SAN bootability. The disk arrays on which the disks in the boot disk group reside should be listed in the HCL as supporting SAN bootability.

Dynamic Multi-Pathing (DMP)/DDL enclosure claim

Check category: Availability

Check description: Checks that attached enclosures are properly claimed by Device Discovery Layer (DDL) and Dynamic Multi-Pathing (DMP)

Check procedure:

Determines the array type of attached enclosures, the enclosure names, and the list of enclosures that are not claimed by any Array Support Library (ASL).

Determines all the active Array Policy Modules (APMs) installed on the system and the array types supported by the active APMs.

Verifies the presence of any APMs installed on the system with the same array type as that of the enclosures attached to the system.

Check recommendation: Ensure that enclosures connected to the system are properly claimed by DDL, and that the ASL/APM appropriate for each enclosure is installed on the system.

Learn More...
Introduction to ASLs
Displaying enclosure information
Configuring Array Policy Modules
Download ASL/APM

Mirrored volumes with a single disk array

Check category: Availability

Check description: Verifies whether mirroring is done with disks coming from a single disk array.

Check procedure:

Identifies all the mirrored volumes present on the system.

Verifies whether the subdisks that constitute the mirrored volume are coming from different enclosures.


In the case of layered volumes (concat-mirror/stripe-mirror), this check verifies the sub-volumes.

Check recommendation: Ensure that the mirror copies are placed across storage enclosures; to do so, move the subdisks in one data plex to storage from a different enclosure using the following command:

# vxsd mv <old sub-disk> <new sub-disk>

This arrangement ensures continued availability of the volume if either of the enclosures becomes unavailable.

Learn More...
How to create mirrors across enclosures
vxassist: manual page

Full fsck required

Check category: Availability

Check description: Checks whether any VxFS File Systems mounted on the system are marked for a full file system check (fsck).

Check procedure:

Checks whether the File System(VxFS) package is installed on the system.

Identifies all available File Systems(VxFS) on the system, and determines whether the full fsck flag is set for each one.

Check recommendation: Repairing file systems that require a full fsck is a time-consuming operation; the time required is proportional to the size of the file system. These file systems will require a full fsck before they can be mounted again; plan any downtime accordingly.

Learn More...
fsck: manual page

Sufficient memory for full fsck

Check category: Availability

Check description: Checks whether enough memory is available on the system for a full file system check (fsck) to run on a mounted VxFS File System.

Check procedure:

Identifies any mounted VxFS File Systems on the system.

For each of the file systems, determines the number of file system blocks, inodes and allocation units.

Collects system information such as the available physical and swap memory.

Calculates the memory required to perform a full fsck on each file system.

Determines whether the required memory is available on the system.


Check recommendation: You do not have enough physical and virtual memory to run a full file system check (fsck) of this file system. In most scenarios, the file system recovers by replaying its intent log, avoiding the need for a full fsck. In rare circumstances, however, a full fsck is required. If you are not at or above Storage Foundation 5.0MP3, consider upgrading; 5.0MP3 and higher have reduced memory requirements for a full fsck. Alternatively, you can add physical or virtual memory. Physical memory is faster; using swap space increases the time to complete the check.

Learn More...
fsck: manual page

Mirrored volumes in FSS disk group

Check category: Availability

Check description: Confirms that mirrored volumes created in a Flexible Storage Sharing (FSS) disk group are not based on local storage from a single host in the FSS cluster. Otherwise the host becomes a single point of failure for the volume.

Check procedure:

Checks whether VxVM and VCS are installed.

Checks the version of VxVM and VCS.

Verifies whether the platform is other than HP-UX.

Checks whether FSS disk groups exist.

Verifies whether mirrored volumes are created in FSS disk groups.

Verifies whether the mirrored volumes that are created in an FSS disk group are based on the storage from more than one host in the FSS cluster.

Check recommendation: The mirrored volumes in an FSS disk group should be created based on the storage from more than one host to maintain high availability.

Learn More...
About Flexible Storage Sharing

Removable VxFS File System checkpoints

Check category: Availability

Check description: Checks whether all the Storage Checkpoints of the mounted VxFS File Systems are removable.

Check procedure:

Identifies the mounted VxFS File Systems on the system.

Checks whether the Storage Checkpoints of each of the mount points were created with the removable attribute.


Check recommendation: The Storage Checkpoints are not removable. In most configurations, Storage Checkpoints should be removable so that they can be deleted automatically in the event of an out-of-space error on the primary data. To create a removable Storage Checkpoint, enter:
# fsckptadm -r create <checkpoint> <mountpoint>

Learn More...
Removable Storage Checkpoints
Creating a Storage Checkpoint
Space management considerations

File System disk layout version

Check category: Availability

Check description: Checks whether the File System disk layout version is supported with the installed Storage Foundation / InfoScale version, and whether the file system size is close to the maximum supported by the disk layout version and block size.

Check procedure:

Determines the Storage Foundation / InfoScale version installed on the system.

Determines the size of the file systems mounted on the system.

Checks whether the file system disk layout version is compatible with the Storage Foundation / InfoScale version installed on the system

Checks whether the file system size is close to the maximum supported by the disk layout version and block size.

Check recommendation: The recommendations are summarized in the following two cases:

i) Case 1: Ensure that the disk layout version of any VxFS File Systems mounted on the system is supported by the installed Storage Foundation / InfoScale version. Note that once you upgrade a disk layout, you cannot downgrade it. Refer to the upgrade recommendations below.

ii) Case 2: Ensure that the VxFS File Systems mounted on the system are not approaching the maximum size allowed by their disk layout version and block size. If they are, it is recommended that you upgrade the file system disk layout version. Refer to the following table to view the maximum file system size supported by various disk layout versions:

FS block size (in K) | Maximum FS size (in TB), disk layout Version 5 | Maximum FS size (in TB), later disk layout versions
1 | 4 | 32
2 | 8 | 64
4 | 16 | 128
8 | 32 | 256
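The table lends itself to a simple lookup; the 90% "close to the limit" threshold below is my own assumption for illustration, not the product's actual cutoff:

```python
# Maximum VxFS file system size (TB) by block size (KB), per the table above.
MAX_FS_SIZE_TB = {
    # block size KB: (disk layout version 5, later layout versions)
    1: (4, 32),
    2: (8, 64),
    4: (16, 128),
    8: (32, 256),
}

def near_size_limit(fs_size_tb: float, block_size_kb: int,
                    old_layout: bool, threshold: float = 0.9) -> bool:
    """True when the file system is within `threshold` of its maximum size."""
    limit = MAX_FS_SIZE_TB[block_size_kb][0 if old_layout else 1]
    return fs_size_tb >= threshold * limit
```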

It is recommended that you upgrade the file system disk layout version in the following cases:

i) When the system has any Storage Checkpoints. On older version file systems, Storage Checkpoints take a long time to create and can make file system recovery after a crash slower.


ii) When the system does not have too many inodes. In an upgrade to a higher file system disk layout version, every inode is modified.

iii) When the system is running Storage Foundation Cluster File System, it is recommended that you upgrade the file system disk layout version, because the newer version provides significant performance gains.

Learn More...
About disk layouts
Upgrading VxFS disk layout versions

Unmounted VxFS File System(s)

Check category: Availability

Check description: Checks for unmounted VxFS File System(s) that have entries in the file system table and valid underlying devices. Note: The following file system tables are referred to in the check: Linux: /etc/fstab; AIX: /etc/filesystems; HP-UX: /etc/fstab; Solaris: /etc/vfstab

Check procedure:

Reads the VxFS File System entries in the fstab file.

Checks whether these file systems are mounted.

For any file system that is not mounted, checks whether an underlying volume exists.

Check recommendation: It is recommended that you remove any stale entries and delete the corresponding underlying volumes to reclaim space.

Learn More...
Editing the fstab file
Unmounting the file system
Removing a volume

HBA controller state

Check category: Availability

Check description: Checks whether the system has any disabled HBA controllers.

Check procedure:

Discovers all the HBA controllers that are present on the system.

Identifies any disabled HBA controllers.

Check recommendation: Enable all the disabled HBA controllers on the system.

Learn More...
Displaying information about I/O controllers
Enabling I/O controllers

VxVM Volume Manager disk error


Check category: Availability

Check description: Checks for VxVM Volume Manager disks in error state.

Check procedure:

Identifies the disks under the control of the VxVM Volume Manager.

Checks whether any of the disks are in an error state.

Check recommendation: Inspect the hardware configuration to confirm that the hardware is functional and configured properly.

Learn More...
Displaying disk information
Taking a disk offline
Removing and replacing disks
Disks under VxVM control
Adding a disk to VxVM
Enabling a disk

Hot relocation daemon state

Check category: Availability

Check description: Checks whether the hot-relocation daemon is running on the system and whether any disks in the disk group are marked as spare. Note: The check will pass if there are no spare disks, irrespective of the vxrelocd daemon state.

Check procedure:

Verifies whether the vxrelocd daemon is running on the system.

For each disk group on the system, verifies whether at least one disk in the disk group is marked as a spare disk.

Check recommendation: The hot relocation feature increases overall availability in case of disk failures. You may either keep hot relocation enabled to use this functionality, or turn it off. Recommendations are:

Case 1: The spare flag is set to ON for at least one disk in the disk group and vxrelocd is not running: it is recommended that you start the vxrelocd daemon. In case of disk failure, the hot relocation feature can then try to use the disks marked as spare.

Case 2: The spare flag is set to ON for at least one disk in the disk group and vxrelocd is running: In case of disk failure, hot relocation may occur.

You can start vxrelocd using the following command:

# nohup vxrelocd root &

Learn More...
What is hot relocation?
How hot relocation works
Displaying spare disk information

Verify I/O fencing for cluster


Check category: Availability

Check description: Verifies whether I/O fencing is properly configured for the cluster.

Check procedure:

Determines the fencing mode using the 'vxfenadm' command and verifies whether the fencing mode is set to SCSI-3, SYBASE, or Customized.

Check recommendation: Either I/O fencing is not running on the system or the fencing mode is not configured properly. It is recommended that you configure I/O fencing with the fencing mode set as per product requirements when using SFCFS, to avoid VxFS file system corruption.

Learn More...
Understanding I/O Fencing
I/O Fencing for SFCFS

SFHA Kernel module consistency

Check category: Availability

Check description: Checks that the SFHA Kernel modules loaded across all the nodes in a cluster are consistent.

Check procedure:

Identifies the SFHA Kernel modules loaded on all the nodes in a cluster.

Verifies that the SFHA Kernel modules loaded are consistent across all the nodes in the cluster.

Check recommendation: Ensure that the SFHA Kernel modules loaded on all the nodes in a cluster are consistent. Inconsistent SFHA Kernel modules can cause errors during application fail-over.

Mirrored volumes with no mirrored DRL

Check category: Availability

Check description: Checks for mirrored volumes that do not have a mirrored Dirty Region Log (DRL).

Check procedure:

Identifies any mirrored volumes present on the system that are larger than the configurable threshold size on the system.

Checks whether the identified volumes have a mirrored DRL.

Check recommendation: Ensure that you mirror the DRL for faster read operation during recovery after a disk failure. A mirrored DRL also ensures that the DRL remains available if the disk where the DRL resides fails.

Learn More...
How to create a Volume with DRL enabled
Adding traditional DRL logging to a mirrored volume
Determining if DRL is enabled in a volume
Disabling and re-enabling DRL


License keys Consistency

Check category: Availability

Check description: This check compares the license key, license key type, and license key version across cluster nodes and highlights any inconsistencies. The check does not currently compare the individual feature bits in the license keys. This can result in the following conditions: 1. The check does not currently distinguish between Enterprise-level license keys for Storage Foundation for Oracle and Storage Foundation HA for Oracle. This can result in a check that passes even when the license key enablement is not identical. 2. The check does not currently detect whether VVR is license key enabled across all cluster nodes. This can result in a check that passes even when the license key enablement is not identical. 3. The check does currently distinguish between Storage Foundation + VCS, and Storage Foundation HA, even when the features enabled by the license keys are identical. This results in a check that fails even when the license key enablement is identical.

Check procedure:

Identifies the license keys, license key types and license key versions installed on all the nodes in a cluster.

Verifies that the license keys, license key types and license key versions installed are consistent across all the nodes in the cluster.

Check recommendation: Ensure that license keys, license key types, and license key versions installed on all the nodes in a cluster are consistent. Inconsistent license keys, license key types, and license key versions can cause errors in application fail-over.

LLT links full duplex setting

Check category: Availability

Check description: Checks whether all the LLT links in the system are full-duplex. This check is skipped for bonded interfaces.

Check procedure:

Identifies all the LLT links configured on the system.

Checks whether all the LLT links are full duplex on the system.

Check recommendation: All the LLT links configured in the system should be full duplex.
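
As a quick manual spot-check on Linux, the LLT link devices can be read from /etc/llttab and each one fed to ethtool. The sketch below parses a sample llttab; the node name, cluster ID, and the eth1/eth2 device names are invented for illustration.

```shell
# List the NIC names configured as LLT links in an llttab-style file.
llt_links() {
  # llttab link lines look like: link eth1 eth1 - ether - -
  awk '$1 == "link" || $1 == "link-lowpri" {print $2}' "$1"
}

# Sample llttab (hypothetical values); on a live node read /etc/llttab instead.
cat > /tmp/llttab.sample <<'EOF'
set-node node01
set-cluster 1042
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -
EOF

llt_links /tmp/llttab.sample
# On a live node, check each device's duplex setting, e.g.:
#   for dev in $(llt_links /etc/llttab); do ethtool "$dev" | grep Duplex; done
```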

LLT links high priority and private link

Check category: Availability

Check description: Checks for the availability of LLT high-priority links and verifies if they are in the private network.

Check procedure:

Verifies whether the LLT Links are configured on a private link and as high priority by using the lltstat command.

Check recommendation: It is recommended that you have at least two high-priority LLT links in the private network, which is required to configure a highly available cluster.

LLT link count

Check category: Availability

Check description: Checks if a minimum of two links are configured as LLT links.

Check procedure:

Identifies all the LLT links configured on the system.

Checks whether the number of LLT links is greater than or equal to two.

Check recommendation: To ensure high availability, you must have a minimum of two links configured as LLT links.
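
The link count can be verified with a one-line parse of /etc/llttab. This hedged sketch runs against a sample file that deliberately has only a single link configured; low-priority links are not counted here, which is an assumption of the sketch rather than the official check logic.

```shell
# Count high-priority LLT links in an llttab-style file.
llt_link_count() {
  awk '$1 == "link" {n++} END {print n+0}' "$1"
}

# Sample llttab with an insufficient single link (hypothetical values).
cat > /tmp/llttab2.sample <<'EOF'
set-node node01
set-cluster 1042
link eth1 eth1 - ether - -
EOF

n=$(llt_link_count /tmp/llttab2.sample)
[ "$n" -ge 2 ] || echo "only $n LLT link(s) configured; at least 2 required"
```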

Mirrored volumes with single disk controller

Check category: Availability

Check description: Checks for mirrored volumes whose plexes (mirrors) are on the same disk controllers.

Check procedure:

Identifies the mirrored volumes on the system.

For each mirrored volume, identifies the present subdisks and the disk controller through which it is visible.

Checks whether the volume is mirrored across different disk controllers.

Check recommendation: The mirrored volumes are not mirrored across controllers. A single controller failure compromises the volume. Create a new plex on a different disk controller. Attach the new plex to the volume (for a total of three plexes). Detach one of the two original plexes.

Learn More...
How to create mirrors across controllers
vxassist : manual page

Package Consistency

Check category: Availability

Check description: Checks that packages installed across all the nodes in a cluster are consistent.

Check procedure:

Identifies the packages installed on all the nodes in a cluster.

Verifies that the package installed and its version are consistent across all the nodes in the cluster.

Check recommendation: Ensure that packages installed on all the nodes in a cluster are consistent and package versions are identical. Inconsistent packages can cause errors in application fail-over.

RAID-5 volumes with unmirrored log

Check category: Availability

Check description: Checks for large RAID-5 volumes with size greater than !param!HC_CHK_RAID5_LOG_VOL_SIZE!/param! (set in sortdc.conf) that do not have mirrored RAID-5 logs.

Check procedure:

Checks whether Volume Manager is installed on the system.

Retrieves all the RAID-5 volumes on the system larger than the threshold size defined in the sortdc configuration file.

Verifies whether the large RAID-5 volumes have mirrored logs.

Check recommendation: It is recommended to create a mirrored RAID-5 log for each large RAID-5 volume. A mirror of the RAID-5 log protects against loss of logging information due to disk failure.

Learn More...
Creating RAID-5 Volumes
Adding a RAID-5 Log

Volume Replicator (VVR) RVG and RLINK state check

Check category: Availability

Check description: Checks whether the Replicated Volume Group (RVG) and its corresponding Replication Link (RLINK) are in a stopped, paused, or failed state. These states indicate that Volume Replicator (VVR) replication is not occurring appropriately.

Check procedure:

The check identifies all the Replicated Volume Groups (RVGs) and the corresponding Replication Links (RLINKs).

The check determines whether RVGs and RLINKs are in the ACTIVE state.

Check recommendation: Make sure that the Replicated Volume Group (RVG) and its corresponding Replication Link (RLINK) are in the ACTIVE state.

Learn More...
Volume Replicator Administrator's Guide

Root mirror validity

Check category: Availability

Check description: Checks to see that the root mirrors are set up correctly.

Check procedure:

Identifies the root volumes of the boot disk group present on the system.

Checks whether the root volumes are mirrored.

Verifies that the root volumes are not spanned across subdisks and are at the same level of redundancy.

Verifies whether the boot disk group has a ghost subdisk present on the Solaris platform if at least one partition spans sector zero.

Check recommendation: The root volumes of the boot disk group are not mirrored properly. It is recommended that you fix the mirroring of the root volumes.

Learn More...
Rootability
Encapsulating and mirroring the root disk
Restrictions on using rootability
Booting root volumes

Fencing Configuration

Check category: Availability

Check description: Checks whether the fencing module is configured properly on all nodes in the cluster.

Check procedure:

Identifies the product installed - Storage Foundation for Oracle RAC (SF Oracle RAC) or Storage Foundation Cluster File System for Oracle RAC (SFCFSRAC).

In the case of SF Oracle RAC, checks whether I/O Fencing is enabled.

In the case of SFCFSRAC, checks whether I/O Fencing is disabled.

Check recommendation: You must configure fencing for SF Oracle RAC. It is recommended to configure fencing in enabled mode for SF Oracle RAC and in disabled mode for SFCFSRAC.

Fencing CP server Configuration

Check category: Availability

Check description: Checks configuration for coordination point server based fencing on all nodes in the cluster.

Check procedure:

Checks ping status for coordination point server from all nodes in the cluster.

Check recommendation: You must configure fencing for Storage Foundation for Oracle RAC (SF Oracle RAC). It is recommended to configure coordination point server based fencing for SF Oracle RAC.

LLT Link jumbo frame setting

Check category: Availability

Check description: Checks whether all the Low Latency Transport (LLT) links in the Storage Foundation for Oracle RAC (SF Oracle RAC) node have an MTU size between 1500 and 9000 bytes.

Check procedure:

Identifies all the LLT links configured in the system.

Checks whether all the LLT links have the same jumbo frame sizes.

Check recommendation: All the LLT links in the node should have an MTU size between 1500 and 9000 bytes.

LLT Link MTU check

Check category: Availability

Check description: Checks whether all the Low Latency Transport (LLT) links in the Storage Foundation for Oracle RAC (SF Oracle RAC) cluster have the same MTU size.

Check procedure:

Identifies all the LLT links configured in the system.

Checks whether all the LLT links have the same MTU.

Check recommendation: All the nodes in the cluster should have the same MTU size.

LLT configuration - cluster ID

Check category: Availability

Check description: Checks whether the cluster ID is identical on all nodes in VCS and SF Oracle RAC clusters.

Check procedure:

Determines the cluster ID in each system in the SF Oracle RAC cluster.

Checks whether the cluster ID is identical across all nodes in the SF Oracle RAC cluster.

Check recommendation: The cluster IDs should be identical on all nodes in the SF Oracle RAC cluster.

LLT links cross-connection

Check category: Availability

Check description: Checks whether the LLT links in the system are cross-connected.

Check procedure:

Identifies all the LLT links configured on the system.

Checks whether the LLT links are cross-connected (multiple links connected to a single switch or connected directly).

Check recommendation: It is recommended that the LLT links should not be cross-connected.

LLT Link speed autonegotiation and MAC address check

Check category: Availability

Check description: Checks speed autonegotiation and MAC address settings for all links.

Check procedure:

Verifies if the speed settings for all LLT links are the same in the system.

Verifies if the autonegotiation settings for all LLT links are the same in the system.

Verifies if the MAC addresses of all LLT links in the system are unique.

Check recommendation: All links should have the same speed and autonegotiation settings, and unique MAC addresses.

SF Oracle RAC ODM configuration

Check category: Availability

Check description: Checks whether the ODM is configured in cluster mode.

Check procedure:

Checks whether GAB port d is up on the node.

Checks if /dev/odm is mounted and ODM is configured in cluster mode.

Check recommendation: It is recommended to configure ODM in cluster mode for SF Oracle RAC.

SF Oracle RAC Oracle integration - library linking check

Check category: Availability

Check description: Checks if Oracle libraries are properly linked.

Check procedure:

Verifies the presence of Oracle provided libraries - libskgxn, libskgxp (Oracle 10g only), and libodm.

Checks whether these libraries are linked with Veritas-provided libraries.

Check recommendation: It is recommended that on SF Oracle RAC, the Oracle-provided libraries libskgxn, libskgxp (Oracle 10g only), and libodm are linked with the Veritas-provided libraries.

System architecture type

Check category: Availability

Check description: Checks whether the system architecture type is identical across the SF Oracle RAC cluster.

Check procedure:

Determines the system architecture type of each system in the SF Oracle RAC cluster.

Verifies whether the system architecture type is the same across all nodes in the cluster.

Check recommendation: All nodes in a cluster must use identical system architectures.

Time synchronization

Check category: Availability

Check description: Checks whether the date and time are synchronized across the cluster.

Check procedure:

Determines the date and time set on each system in the SF Oracle RAC cluster.

Verifies whether the date and time are synchronized across all nodes in the cluster.

Check recommendation: It is recommended that the date and time settings are identical on all cluster nodes.
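
A rough way to quantify clock skew is to collect `date +%s` from each node (for example over ssh) and compare the spread. The helper below is a sketch using made-up epoch values; in a real cluster, NTP or an equivalent service should keep the spread near zero.

```shell
# Print the maximum clock skew, in seconds, among the supplied epoch times.
max_skew() {
  # args: epoch seconds collected from each node, e.g. via `ssh node date +%s`
  printf '%s\n' "$@" | sort -n |
    awk 'NR == 1 {min = $1} {max = $1} END {print max - min}'
}

# Hypothetical samples from three nodes; skew here is 3 seconds.
max_skew 1700000005 1700000002 1700000004
```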

Temporary license keys

Check category: Availability

Check description: Checks for temporary product license keys that are about to expire.

Check procedure:

Identifies products with temporary license keys.

Checks whether there is a permanent license key installed for that product.

Check recommendation: Ensure that valid license keys are installed for Storage Foundation / InfoScale products.

Volume Replicator version consistency

Check category: Availability

Check description: Checks that the same version of Volume Replicator (VVR) is installed on all VVR nodes.

Check procedure:

Determines whether Volume Manager (VxVM) is installed on all the nodes.

Verifies that VVR has the same version across all the nodes.

Check recommendation: Make sure that VxVM is installed on all the nodes, the license key for VVR is enabled, and the VVR versions are the same on all the nodes.

Learn More...
Volume Replicator Administrator's Guide

VxVM configuration backup daemon state

Check category: Availability

Check description: Checks whether the vxconfigbackupd daemon is running on the system.

Check procedure:

Checks whether the vxconfigbackupd daemon is running on the system.

Check recommendation: It is recommended that you start the vxconfigbackupd daemon. The vxconfigbackupd daemon monitors changes to the configuration of Volume Manager (VxVM), and stores any output in the configuration directory. This assists in recovering lost or corrupt disk groups/volumes when you need to restore their configuration. Restart the vxconfigbackupd daemon by running the following command: # /etc/vx/bin/vxconfigbackupd &

Learn More...
About vxconfigbackupd daemon
vxconfigbackupd: manual page

VCS critical resources

Check category: Availability

Check description: Checks whether any VCS resource is marked as non-critical (Critical=0).

Check procedure:

Retrieves the details of all the resources of the configured groups.

Verifies that at least one of the resources is marked as critical.

Check recommendation: A group cannot failover to an alternate system unless it has at least one resource marked as critical. Therefore, to ensure successful failover, log in as root and execute the following command to set the affected resources to critical:
# hares -modify resource_name Critical 1

Learn More...
About critical and non-critical resources
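
Non-critical resources can be surveyed by filtering saved `hares -display` output for Critical=0. The sketch below parses a hypothetical capture; the resource names and the four-column layout are invented for illustration, not taken from a live cluster.

```shell
# Print resource names whose Critical attribute is 0 in saved hares output.
noncritical() {
  # expects lines of: <resource> Critical <scope> <value>
  awk '$2 == "Critical" && $NF == "0" {print $1}' "$1"
}

# Hypothetical capture of: hares -display -attribute Critical
cat > /tmp/hares.sample <<'EOF'
app_res   Critical  global  0
ip_res    Critical  global  1
mnt_res   Critical  global  0
EOF

noncritical /tmp/hares.sample
# Fix each reported resource as root: hares -modify resource_name Critical 1
```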

VCS HAFD disks

Check category: Availability

Check description: Checks whether all the disks in the VxVM disk group are visible on the cluster node.

Check procedure:

Fetches all the disk groups configured on the target cluster nodes.

Discovers all the disks in the disk group.

Check recommendation: Make sure that all VxVM disks have been discovered. Do the following:
1. Run an operating system-specific disk discovery command such as lsdev (AIX), ioscan (HP-UX), fdisk (Linux), or format or devfsadm (Solaris).
2. Run the following command:
# vxdctl enable

Learn More...
Verifying the disk visibility using vxfenadm utility
Disk Group agent notes

VCS HAFD VxVM license

Check category: Availability

Check description: Checks for valid Volume Manager (VxVM) licenses on the cluster systems.

Check procedure:

Uses the vxlicrep command to verify whether a valid Volume Manager (VxVM) license exists on the target cluster system.

Check recommendation: Use the /opt/VRTS/bin/vxlicinst utility to install a valid VxVM license key.

Learn More...
Installing a VCS license using vxlicinst utility
Troubleshooting for validating license keys
Disk Group agent notes

VCS HAFD disk UDID comparison

Check category: Availability

Check description: On the local system where the DiskGroup resource is offline, it checks whether the unique disk identifiers (UDIDs) for the disks match those on the other cluster systems.

Check procedure:

Determines the UDID of the disks in the disk group on the local cluster system and system where the disk group is online.

Checks whether the discovered UDIDs of the disks match.

Check recommendation: Make sure that the UDIDs for the disks on the cluster nodes match. To find the UDID for a disk, enter the following command:
# vxdisk -s list disk_name
Note: The check does not handle SRDF replication. In case of SRDF replication, the user should make use of the 'clearclone=1' attribute (SFHA 6.0.5 onwards) to clear the clone flag and update the disk UDID.

Learn More...
Disk Group agent notes

VCS HAFD VxVM components

Check category: Availability

Check description: Verifies that all the disks in the disk group in a campus cluster have site names. Also verifies that all volumes on the disk group have the same number of plexes on each site in the campus cluster.

Check procedure:

Verifies whether the disk group has the proper campus cluster configuration.

Verifies whether the disk group is online anywhere in the cluster.

Fetches the plex and volume information of all the disk group resources that are configured on target cluster systems.

Verifies that the plexes and volumes are the same on all sites of the campus cluster.

Check recommendation: Make sure that the site name is added to each disk in a disk group. To verify the site name, enter the following command:
# vxdisk -s list disk_name
On each site in the campus cluster, make sure that all volumes on the disk group have the same number of plexes. To verify the plex and subdisk information created on a disk group, enter the following command:
# vxprint -g disk_group

Learn More...
Setting up a campus cluster configuration
Disk Group agent notes

VCS HAFD DNS Domain Information Groper (DIG)

Check category: Availability

Check description: Checks if the dig binary is present and is executable on the system.

Check procedure:

Fetches the details of the DNS name server using the dig tool.

Dynamic zone updates are done using the nsupdate command as per the configuration of the DNS resource on the target cluster systems.

Check recommendation: Make sure that the dig binary is present in at least one of the following locations:
* /usr/bin/dig
* /bin/dig
* /usr/sbin/dig

To make the dig binary executable, enter the following command:
# chmod +x dig_binary_path

Learn More...
Action taken for DNS agent
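
The location test above generalizes to a small helper that returns the first executable candidate from a list of paths. The /tmp/fake_dig stand-in below is purely illustrative; on a real node you would pass the three dig paths listed in the recommendation.

```shell
# Print the first path in the argument list that exists and is executable.
first_executable() {
  for p in "$@"; do
    [ -x "$p" ] && { printf '%s\n' "$p"; return 0; }
  done
  return 1
}

# Demonstrate with a stand-in binary (hypothetical; not a real dig).
fake=/tmp/fake_dig
: > "$fake"
chmod +x "$fake"
first_executable /nonexistent/dig "$fake"
# Real usage: first_executable /usr/bin/dig /bin/dig /usr/sbin/dig
```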

VCS HAFD DNS keyfile

Check category: Availability

Check description: Checks whether the Transaction Signature (TSIG) key file that is specified in the cluster configuration exists, is readable, and is not a zero-byte file.

Check procedure:

Retrieves the keyfile details from the DNS resource configuration on the target cluster systems.

Verifies that the specified keyfile exists and is not a zero-byte file.

Check recommendation: Make sure that the TSIG key file exists and is a non-zero sized file. To make the file readable, enter the following command:
# chmod +r absolute_key_file_path

Learn More...
Action taken for DNS agent

VCS HAFD DNS master

Check category: Availability

Check description: Checks if stealth masters can reply to a Start of Authority (SOA) query for the configured domain.

Check procedure:

Retrieves the details about the DNS master server from the DNS resource configuration on the target cluster systems.

Verifies that the DNS master server is configured properly and is reachable from target cluster systems.

Check recommendation: Make sure that you configure the StealthMasters and Domain attributes with the correct values, and that the following command works properly:
# dig @stealth_master -t SOA domain_name

Learn More...
Action taken for DNS agent

VCS faulted agents

Check category: Availability

Check description: Checks if any VCS agents have faulted and are not running.

Check procedure:

Retrieves the state of the agents on the target system nodes.

Verifies that agents are running and not faulted.

Check recommendation: VCS resources that belong to a type whose agent has faulted are not monitored. To restart the agent, as root do the following:
1. Start the agent: # haagent -start Agent -sys node
2. Confirm that the agent has restarted by:
i. Checking the engine log: /var/VRTSvcs/log/engine_A.log
ii. Running: # ps -ef | grep Agent

Learn More...
Troubleshooting resources

VCS faulted resources

Check category: Availability

Check description: Checks whether any VCS resources are in a FAULTED state.

Check procedure:

Fetches the resources from all the systems specified for executing the check.

Determines whether the resources are in a FAULTED state.

Check recommendation: A group cannot failover to a system where the VCS resource has faulted. Fix the problem and use the following command to clear the resource state:
# hares -clear resource -sys node

Learn More...
Managing resource faults
Configure Restart Limit attribute for resource
State transitions for a resource

VCS HAFD IP device

Check category: Availability

Check description: Checks whether the network interface that is specified in the cluster configuration exists on the system.

Check procedure:

Retrieves the information for the device on which the IP resources are configured.

Verifies that the network device that is specified exists on the target cluster systems.

Check recommendation: In the cluster configuration, make sure you specify the correct network device.

Learn More...
Attributes required for configuring IP agent

VCS HAFD IP route

Check category: Availability

Check description: Checks whether the route to the IP address exists on the network interface specified in the cluster configuration.

Check procedure:

Fetches the route details from the configured IP resources.

Verifies that the route exists for the specified IP address on the associated network device.

Check recommendation: On the associated network device, add the route to the specified IP address.

Learn More...
Attributes required for configuring IP agent

VCS HAFD VxFS FsckOpt

Check category: Availability

Check description: Checks whether a valid fsck policy has been specified for all the Mount resources that are in the offline state, so that fsck runs automatically before the file system is mounted.

Check procedure:

Retrieves the details of FsckOpt attribute for all the Mount resources.

Verifies that the value is set to either -Y or -N.

Check recommendation: Set the FsckOpt attribute for the affected Mount resource to either -Y (fix errors during fsck) or -N (do not fix errors during fsck).

Learn More...
Mount attributes

VCS HAFD mount point availability

Check category: Availability

Check description: Checks whether the specified mount point is available for mounting after failover happens.

Check procedure:

Fetches the mount point location specified in the mount resource configuration.

Checks whether the mount point is already mounted.

Check recommendation: If the mount point is mounted, unmount it. Enter the following command: # umount mount_point.

Learn More...
VxFS file system lock
Mount agent notes

VCS HAFD mount point configuration

Check category: Availability

Check description: Verifies that the available mount point is not configured to mount a file system when the system starts.

Check procedure:

Fetches the mount point location that is specified in the mount resource configuration.

Checks whether there is an entry in the fstab file for the specified mount point.

Check recommendation: On a cluster node, make sure that the operating system-specific file system table file does not contain an entry for the configured mount point. The file system table files are: /etc/filesystems (AIX), /etc/fstab (HP-UX and Linux), and /etc/vfstab (Solaris).

Learn More...
Samples of Configuration
Mount agent notes
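
Checking the file system table by hand amounts to matching the second field of fstab against the configured mount point. The sketch below runs against a sample fstab; the device names and the /data mount point are invented for illustration.

```shell
# Return success if the given mount point appears as field 2 of an fstab file.
in_fstab() {
  # $1: fstab-style file, $2: mount point
  awk -v mp="$2" '$0 !~ /^#/ && $2 == mp {found = 1} END {exit !found}' "$1"
}

# Sample fstab (hypothetical entries); on Linux check /etc/fstab instead.
cat > /tmp/fstab.sample <<'EOF'
/dev/sda1  /      ext4  defaults  1 1
/dev/vx/dsk/datadg/datavol  /data  vxfs  defaults  0 0
EOF

in_fstab /tmp/fstab.sample /data && echo "/data is in fstab; remove the entry" \
  || echo "ok"
```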

VCS HAFD mount point existence

Check category: Availability

Check description: Checks whether the specified mount point existing on a cluster node is available for mounting.

Check procedure:

Fetches the mount point location specified in the mount resource configuration.

Verifies that the mount point location exists on the target cluster node.

Check recommendation: Create the specified mount point, and make sure that it is not in use.

Learn More...

Offlining mount resource
Mount agent notes

VCS HAFD VxFS license

Check category: Availability

Check description: Checks whether the File System (VxFS) installed on the cluster system where the Mount resource is currently offline has a valid license.

Check procedure:

Retrieves the list of target cluster systems that require a VxFS license.

Verifies whether a valid license exists on the target cluster systems.

Check recommendation: Use the /opt/VRTS/bin/vxlicinst utility to install a valid VxFS license on the target cluster systems.

Learn More...
Action taken for Mount agent

VCS HAFD NFS lock directory

Check category: Availability

Check description: Checks whether the lock directory specified in the cluster configuration is on shared storage.

Check procedure:

Retrieves the lock directory location from the LocksPathName attribute of the NFSRestart resource configured on the target cluster systems.

Verifies that the lock directory exists on shared storage.

Check recommendation: Make sure that the directory specified in the LocksPathName attribute is on shared storage.

Learn More...
Action taken for NFSRestart agent

VCS HAFD NFS configuration

Check category: Availability

Check description: Verifies that the NFS server does not start automatically when the system starts.

Check procedure:

Retrieves the NFS server details from the NFSRestart resource that is configured on the target cluster systems.

Verifies that the NFS server is disabled in the rc.config.d init file.

Check recommendation: In the system configuration file, disable the NFS server so the NFS daemons do not start when the system boots. On Solaris, ensure that the svcadm command does not start the NFS daemon when the system boots.

Learn More...
Action taken for NFSRestart agent

VCS HAFD NIC

Check category: Availability

Check description: Checks whether the UP flag is set for the network interface specified in the cluster configuration.

Check procedure:

Fetches the network interface details that are specified in the configuration of the target cluster.

Verifies that the device has been flagged as ONLINE.

Check recommendation: Make sure that you configure the Device attribute of the NIC resource type to a network interface that is configured on the system and has the UP flag set. To set the UP flag on a configured device, use the following commands:
Linux: # ip link set device_name up
Solaris/AIX/HP-UX: For IPv4: # ifconfig device_name inet up; For IPv6: # ifconfig device_name inet6 up

Learn More...
Attributes required for configuring NIC agent
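
The UP flag can be confirmed by parsing the flags field of `ip link` output. The sketch below works on a captured sample line rather than a live interface; the eth1 name and the surrounding text are illustrative.

```shell
# Return success if an `ip link show` line carries the UP flag inside <...>.
nic_is_up() {
  # UP must be a whole token: preceded by '<' or ',' and followed by ',' or '>'
  # (this avoids matching the UP inside LOWER_UP).
  printf '%s\n' "$1" | grep -q '[<,]UP[,>]'
}

# Captured sample line (hypothetical); live check: ip link show dev eth1
sample='2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP'
nic_is_up "$sample" && echo "eth1 is UP" \
  || echo "bring eth1 up: ip link set eth1 up"
```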

VCS ToleranceLimit

Check category: Availability

Check description: Checks whether the ToleranceLimit attribute has been set for the VCS NIC resource type.

Check procedure:

Retrieves the value of the ToleranceLimit attribute for the VCS NIC, MultiNICA, and MultiNICB resource types.

Verifies that the ToleranceLimit attribute is set to a non-zero number.

Check recommendation: Setting the ToleranceLimit to a non-zero value prevents false failover in the case of a spurious network outage. To set the ToleranceLimit attribute for the NIC resource type, log in as root and enter the following command:
# hatype -modify NIC ToleranceLimit n
where n > 0.
Because this command prevents an immediate failover and may compromise the high availability of the affected resource groups, only use this command when necessary.

Learn More...
About the ToleranceLimit attribute

VCS HAFD Oracle home

Check category: Availability

Check description: Checks whether the ORACLE_HOME directory location specified in the cluster configuration exists on the system.

Check procedure:

Retrieves the ORACLE_HOME directory location specified in the cluster configuration.

Verifies that ORACLE_HOME directory exists on the target cluster systems and is mounted properly.

Check recommendation: Ensure that the target cluster system is configured to mount ORACLE_HOME.

Learn More...
Virtual Firedrill actions for Oracle agent

VCS HAFD Oracle owner

Check category: Availability

Check description: Checks whether the user ID (UID) and group ID (GID) of the owner specified in the Oracle owner attribute match the UID and GID of the owner on the VCS node.

Check procedure:

Verifies that the Oracle home directory exists on the target cluster systems.

Verifies that the UID/GID for owner is the same on the target cluster systems and on the system where the group is online.

Check recommendation: Make sure that the UID and GID of the Oracle owner match those specified for the owner on the VCS node.

Learn More...
Virtual Firedrill actions for Oracle agent

VCS HAFD Oracle PFile

Check category: Availability

Check description: Verifies that the parameter file that is specified in the Oracle agent PFile or SPFile attribute exists.

Check procedure:

Retrieves the location of the parameter file from the Oracle resource configuration on the target cluster systems.

Verifies that the parameter file exists on the target cluster systems.

Check recommendation: Make sure that the parameter file (PFile or SPFile) that is specified in the cluster configuration exists.

Learn More...
Virtual Firedrill actions for Oracle agent

VCS HAFD process program

Check category: Availability

Check description: Identifies and logs the application's checksum. The application is defined in the PathName attribute.

Check procedure:

Compares the checksum of the specified program with the checksum of the program on the online system.

Verifies that the program has executable permissions set.

Check recommendation: Make sure that the script specified in the PathName attribute value exists, and that it is executable on all systems in the cluster.

Learn More...
Attributes required for configuring Process agent

VCS HAFD share directory exists

Check category: Availability

Check description: Checks if the path specified by the PathName attribute exists on the cluster node. If the path does not exist locally, the check verifies whether a Mount resource with a corresponding mount point is available to ensure that the path is on shared storage.

Check procedure:

Retrieves the details of the shared directory specified in the Share resource configuration on the target cluster systems.

Verifies that the shared directory exists on the target cluster systems.

Check recommendation: Make sure that the shared directory specified in the Share resource configuration exists either locally or through a Mount resource with a corresponding mount point.

Learn More...
Share agent notes
Action taken for File Share agent

VCS HAFD triggers checksum

Check category: Availability

Check description: Checks if the checksums of the VCS triggers are the same on all the nodes of the cluster.

Check procedure:

Checks if the trigger location and triggers exist in /opt/VRTSvcs/bin/triggers on the target nodes.

Verifies that the binaries of the triggers are the same across all the target cluster nodes.

Check recommendation: Verify that the specified binaries in /opt/VRTSvcs/bin/triggers are identical on all nodes in the cluster.

Learn More...
About VCS event triggers
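
One way to compare triggers across nodes is to collect a sorted md5sum listing from each node's /opt/VRTSvcs/bin/triggers and diff the listings. The sketch simulates two nodes with temporary directories and an invented postonline trigger; on real systems you would gather the listings over ssh.

```shell
# Produce a sorted checksum listing for every file under a directory.
cksum_dir() { ( cd "$1" && find . -type f -exec md5sum {} \; | sort ); }

# Simulate trigger directories from two nodes (hypothetical contents).
mkdir -p /tmp/nodeA/triggers /tmp/nodeB/triggers
printf 'echo online\n' > /tmp/nodeA/triggers/postonline
printf 'echo online\n' > /tmp/nodeB/triggers/postonline

cksum_dir /tmp/nodeA/triggers > /tmp/a.sum
cksum_dir /tmp/nodeB/triggers > /tmp/b.sum
diff /tmp/a.sum /tmp/b.sum && echo "trigger checksums match"
```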

VCS HAFD triggers program

Check category: Availability

Check description: Checks if installed VCS triggers are executable.

Check procedure:

Checks if the triggers exist in /opt/VRTSvcs/bin/triggers on the target nodes.

Verifies that the binaries in /opt/VRTSvcs/bin/triggers are executable by the root user.

Check recommendation: Ensure that the triggers installed in /opt/VRTSvcs/bin/triggers are executable by the root user.

Learn More...
About VCS event triggers
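
Trigger scripts lacking the owner-execute bit can be located with find. The demo directory and the resfault file below are stand-ins for /opt/VRTSvcs/bin/triggers and a real trigger name; the fix for each reported file is chmod u+x.

```shell
# Simulate a triggers directory containing a non-executable trigger.
mkdir -p /tmp/triggers.demo
printf '#!/bin/sh\n' > /tmp/triggers.demo/resfault
chmod 644 /tmp/triggers.demo/resfault

# List files that lack the owner-execute bit
# (on a real node: find /opt/VRTSvcs/bin/triggers -type f ! -perm -u+x).
find /tmp/triggers.demo -type f ! -perm -u+x
```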

VCS disk connectivity

Check category: Availability

Check description: Checks whether all the disks are visible to all the nodes in a cluster.

Check procedure:

Fetches the shared disks configured for the cluster systems.

Validates that the shared storage is visible for all the cluster systems.

Check recommendation: Make sure that all the disks are connected to all the nodes in a cluster. Run operating system-specific disk discovery commands such as lsdev (AIX), ioscan (HP-UX), fdisk (Linux), or devfsadm (Solaris).

If the disks are not visible, connect the disks to the nodes.

Learn More...
VCS behavior on loss of storage connectivity
Configuring PanicSystemOnDGLoss attribute of DiskGroup

VCS duplicate disk group name

Check category: Availability

Check description: Checks whether duplicate disk groups are configured on the specified nodes.

Check procedure:

Fetches the disk group names configured for the cluster systems.

Verifies that the disk groups on a particular target cluster system are unique and no duplicate disk group names are configured.

Check recommendation: To facilitate successful failover, make sure that there is only one disk group name configured for the specified node. To list the disk groups configured on a system, enter the following command:

# vxdg list

Learn More...
Disk Group agent notes
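
Duplicate names are easy to spot once the `vxdg list` names from every node are gathered into one file. This sketch uses an invented list; on a live cluster you would append each node's output to the file before checking.

```shell
# Print any disk group name that appears more than once in the combined list.
dup_dg_names() { sort "$1" | uniq -d; }

# Hypothetical combined `vxdg list` names from all nodes.
cat > /tmp/dglist.sample <<'EOF'
datadg
appdg
datadg
EOF

dup_dg_names /tmp/dglist.sample
```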

VCS free swap space

Check category: Availability

Check description: Checks if free swap space is below the threshold value specified in the sortdc.conf file (the HC_VFD_CHK_FREE_SWAP_THRESHOLD parameter).

Check procedure:

Fetches the free swap space present on the cluster system.

Checks whether the free swap space is less than the threshold value specified in the HC_VFD_CHK_FREE_SWAP_THRESHOLD parameter.

Check recommendation: Increase the swap space by adding an additional swap device.
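
A minimal sketch of the comparison performed by this check, assuming a Linux /proc/meminfo layout and a hypothetical 512 MB threshold (the real threshold comes from the HC_VFD_CHK_FREE_SWAP_THRESHOLD parameter in sortdc.conf; the meminfo lines are sample data):

```shell
# Sample /proc/meminfo contents stand in for the live file.
meminfo='SwapTotal:       4194304 kB
SwapFree:         262144 kB'
threshold_kb=$((512 * 1024))   # hypothetical 512 MB threshold, in kB

# Extract the free swap figure and compare it against the threshold.
free_kb=$(printf '%s\n' "$meminfo" | awk '/^SwapFree:/ {print $2}')
if [ "$free_kb" -lt "$threshold_kb" ]; then
    verdict="WARN: free swap ${free_kb} kB is below ${threshold_kb} kB"
else
    verdict="OK"
fi
echo "$verdict"
```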

Learn More...
About the HostMonitor daemon

VCS GAB Startup Configuration Check

Check category: Availability

Check description: Checks if the GAB_START entry in the GAB configuration file is set to 1.

Check procedure:

If the GAB_START entry in the GAB configuration file is set to 0, the check reports failure.

Check recommendation: Make sure that the GAB_START entry in the GAB configuration file is set to 1 so that the GAB module is enabled and starts after a system reboot.
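
A sketch of this check against a sample configuration file; on Linux the live file is typically /etc/sysconfig/gab, though the path can vary by distribution and release:

```shell
# Create a sample GAB configuration file with startup disabled.
gabconf=$(mktemp)
printf 'GAB_START=0\nGAB_NUMNODES=2\n' > "$gabconf"

# The check passes only if GAB_START is explicitly set to 1.
if grep -q '^GAB_START=1' "$gabconf"; then
    verdict="OK: GAB starts at boot"
else
    verdict="FAIL: set GAB_START=1"
fi
echo "$verdict"
rm -f "$gabconf"
```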

VCS LLT startup configuration check

Check category: Availability

Check description: Checks if the LLT_START entry in the LLT configuration file is set to 1.

Check procedure:

If the LLT_START entry in the LLT configuration file is set to 0, the check reports failure.

Check recommendation: Make sure that the LLT_START entry in the LLT configuration file is set to 1 so that the LLT module is enabled and starts after a system reboot.

VCS OS version and patch

Check category: Availability

Check description: Checks whether the nodes in a cluster have the same operating system, operating system version, and operating system patch level. These must be identical on all systems in a VCS cluster.

Check procedure:

Determines the OS, OS version, and the OS patches installed on the cluster systems.

Checks whether the OS, OS version, and OS patches on all cluster systems are the same.

Check recommendation: Use operating system-specific commands to verify that the nodes in a cluster have the same operating system, version, and patch level. For example: "uname -a" (Linux, Solaris), "oslevel" (AIX).

Learn More...
Late Breaking News for Cluster Server Management Console 5.1
Format for engine version
VCS system requirements & Support Matrix

VCS Cluster ID

Check category: Availability

Check description: Checks if the VCS cluster ID is a non-zero value.

Check procedure:

Fetches the cluster ID N from the target VCS node, by parsing the /etc/llttab file for the string set-cluster N.

Reports failure if the cluster ID is 0.

Check recommendation: In the /etc/llttab file, set the VCS cluster ID to a unique, non-zero integer less than or equal to 65535.
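
The parsing step described above can be sketched as follows, with sample /etc/llttab contents (the node name and link line are illustrative):

```shell
# Sample /etc/llttab contents with an invalid (zero) cluster ID.
llttab='set-node node1
set-cluster 0
link eth1 eth-00:11:22:33:44:55 - ether - -'

# Extract N from the "set-cluster N" line and validate the 1-65535 range.
cid=$(printf '%s\n' "$llttab" | awk '$1 == "set-cluster" {print $2}')
if [ "$cid" -ge 1 ] && [ "$cid" -le 65535 ]; then
    verdict="OK: cluster ID $cid"
else
    verdict="FAIL: cluster ID $cid must be 1-65535"
fi
echo "$verdict"
```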

Learn More...
Configuring the basic cluster

VCS ClusterAddress

Check category: Availability

Check description: The ClusterAddress attribute is a prerequisite for GCO. This check verifies that the ClusterAddress cluster attribute is set, and that it matches the virtual IP address configured in the ClusterService service group.

Check procedure:

Executes the command "haclus -value ClusterAddress -localclus" on the target VCS node to get the virtual address assigned to the cluster.

Compares this value against the value of the Address attribute of the webip IP resource of the cluster.

Reports failure if they are not identical.

Check recommendation: Set the value of the Address attribute of the webip resource to that of the ClusterAddress cluster attribute. As root:
1. Get the value of the ClusterAddress cluster attribute:
# haclus -value ClusterAddress -localclus
2. Modify the Address attribute of the webip resource:
# haconf -makerw
# hares -modify webip Address address
# haconf -dump -makero
where address is the output of the first command.

Learn More...
Cluster setup

VCS ClusterService OnlineRetryLimit

Check category: Availability

Check description: If the ClusterService service group is configured, it verifies that its OnlineRetryLimit is set.

Check procedure:

Retrieves the configuration details of the ClusterService service group.

Verifies that the OnlineRetryLimit is set for the ClusterService service group.

Check recommendation: Set the OnlineRetryLimit for the ClusterService service group. Enter:
# hagrp -modify ClusterService OnlineRetryLimit N
where N >= 1

Learn More...
About the OnlineRetryLimit attribute

VCS configuration

Check category: Availability

Check description: Checks whether the existing cluster configuration in the directory specified by the HC_VFD_CHK_VCS_CONFIG_DIR parameter is valid on the target system.

Check procedure:

Fetches the VCS configuration located at the directory specified by the HC_VFD_CHK_VCS_CONFIG_DIR parameter in the sortdc.conf file.

Verifies whether the configuration is valid using the hacf -verify command on the specified configuration directory.

Check recommendation: Fix the VCS configuration located at the directory specified by the HC_VFD_CHK_VCS_CONFIG_DIR parameter in the sortdc.conf file on the target node.

Learn More...
Creating entry points in scripts
About configuring VCS

VCS GAB jeopardy

Check category: Availability

Check description: Checks if any of the private links in the cluster are in the jeopardy state.

Check procedure:

Checks the network connectivity of the target host with the other hosts in the cluster.

Checks if any of the private links are in JEOPARDY state.

Check recommendation: 1. Determine the connectivity of this node with the remaining nodes in the cluster. Enter:
# /sbin/lltstat -nvv
If the status is DOWN, this node cannot see that link to the other node(s).
2. Restore connectivity through this private link.
3. Verify that connectivity has been restored. Enter:
# /sbin/gabconfig -a | /bin/grep jeopardy
If this command does not have any output, the link has been restored.

Learn More...
About cluster membership

VCS Cluster read-only status

Check category: Availability

Check description: Checks if the VCS configuration is read-only.

Check procedure:

Executes the command "haclus -value ReadOnly -localclus" on the target VCS node to verify if the configuration is closed (ReadOnly=1).

Reports failure if the configuration is not ReadOnly (ReadOnly=0).

Check recommendation: Close the cluster and save any configuration changes. As root, execute:
# haconf -dump -makero

Learn More...
Cluster Attributes

VCS SysName

Check category: Availability

Check description: The VCS system name defined in the /etc/VRTSvcs/conf/sysname file must be identical to the node name defined in the /etc/llttab file by the set-node directive. This check also verifies that /etc/llthosts is consistent across all nodes of the cluster.

Check procedure:

Gets the system name defined in the /etc/VRTSvcs/conf/sysname file of the target VCS node.

Gets the node name nodename from the target VCS node, by parsing the /etc/llttab file for the string set-node nodename.

Reports failure if the system name and the node name are not identical.

Check recommendation: Make the contents of /etc/VRTSvcs/conf/sysname identical to the node name defined in the /etc/llttab file against the set-node directive.
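
The comparison in the procedure above can be sketched as follows, using sample file contents in place of /etc/VRTSvcs/conf/sysname and /etc/llttab (names are illustrative):

```shell
# Sample contents of /etc/VRTSvcs/conf/sysname and /etc/llttab.
sysname='node1'
llttab='set-node node1
set-cluster 42'

# Pull the node name from the set-node directive and compare.
lltnode=$(printf '%s\n' "$llttab" | awk '$1 == "set-node" {print $2}')
if [ "$sysname" = "$lltnode" ]; then
    verdict="OK: names match ($sysname)"
else
    verdict="FAIL: sysname=$sysname set-node=$lltnode"
fi
echo "$verdict"
```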

Learn More...
How VCS identifies the local system

VCS unique cluster name

Check category: Availability

Check description: Checks if each cluster that is discovered in the set of input nodes has a unique name.

Check procedure:

Fetches the cluster names of all the clusters that are discovered in the input cluster nodes.

Verifies that the cluster names are unique.

Check recommendation: Cluster names should be unique. Change the cluster names if you plan to set up the Global Cluster Option (GCO) between clusters that currently have identical cluster names.

Learn More...
Setting up a global cluster

VCS Disabled resources

Check category: Availability

Check description: Checks whether any VCS resource has been disabled.

Check procedure:

Executes hares -display -localclus 2>/dev/null and parses the output.

Reports failure if a resource is not enabled (Enabled = 0).

Check recommendation: Enable the VCS resource. Log in as root and execute the following commands:
# haconf -makerw
# hares -modify resource_name Enabled 1
# haconf -dump -makero

Learn More...
Adding, deleting, and modifying resource attributes

VCS frozen groups

Check category: Availability

Check description: Checks whether any VCS service group with an enabled resource has been persistently frozen.

Check procedure:

Executes the command "hagrp -list Frozen=1 -localclus" on the target VCS node to get a list of persistently frozen service groups.

Reports failure on these service groups, because they cannot fail over.

Does not report on the service groups that have been temporarily frozen (TFrozen=1).

Check recommendation: Enable all VCS resources in the service group. As root:
1. Enable VCS resources in the service group:
# haconf -makerw
# hares -modify resource_name Enabled 1
2. Unfreeze the VCS service group:
# hagrp -unfreeze group_name -persistent
# haconf -dump -makero

Learn More...
Freezing and unfreezing service groups

VCS virtual host attributes

Check category: Availability

Check description: Checks if the values of the VCS resource attributes for virtual hosts or addresses exist locally in the /etc/hosts file of the system. This ensures name resolution in case of network connectivity loss to the DNS server.

Check procedure:

Checks if a particular VCS resource is of a type that has an attribute that accepts a virtual hostname or an IP address.

Reports failure if the attribute value cannot be found locally in the /etc/hosts file of that system.

Check recommendation: Add the value of specified VCS resource attributes to the system /etc/hosts file.
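
A sketch of the local-resolution lookup described above, with sample /etc/hosts contents (the virtual IP address and host name are illustrative, not from the report):

```shell
# Sample /etc/hosts contents and a virtual IP configured in VCS.
hostsfile='127.0.0.1 localhost
192.168.10.5 app-vip'
vip='192.168.10.5'

# Pass if the address appears in the first column of any hosts entry.
if printf '%s\n' "$hostsfile" | awk -v ip="$vip" '$1 == ip {found=1} END {exit !found}'; then
    verdict="OK: $vip present in /etc/hosts"
else
    verdict="FAIL: add $vip to /etc/hosts"
fi
echo "$verdict"
```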

Learn More...
Virtual IPs

Detach policy for shared Disk group with A/P arrays

Check category: Best practices

Check description: Checks whether the disk group's detach policy is set to global when the shared disk group consists of disks from an A/P disk array.

Check procedure:

Determines if there are any shared disk groups on the system.

Determines the disk detach policy set for the shared disk groups on the system.

Checks whether the shared disk group has any disks from an A/P disk array.

Checks whether the shared disk detach policy for the disk group is set to global.

Check recommendation: When Dynamic Multi-Pathing (DMP) is used to manage multipathing on A/P arrays, set the detach policy to global. This ensures that all the nodes correctly coordinate their use of the active path.

Learn More...
Click here for the reference from the VxVM Administrator's Guide
How to set the disk detach policy on a shared diskgroup?
About disk array types

Campus cluster disk group fail policy

Check category: Best practices

Check description: Checks whether the dgfailpolicy is set to 'leave' for all site-consistent disk groups with a version prior to 170. This is the recommended policy for site-consistent disk groups.

Check procedure:

Verifies that the site-configured disk groups have dgfailpolicy set to 'leave'. This is the recommended policy for campus clusters.

Check recommendation: Make sure that the dgfailpolicy of the site configured disk groups is 'leave'.

Learn More...
What is connectivity policy for shared disk group
How to set disk group failure policy on a shared disk group

Campus cluster disk detach policy

Check category: Best practices

Check description: Checks whether the diskdetachpolicy is set to 'Global' for all site-consistent disk groups. This is the recommended policy for site-consistent disk groups.

Check procedure:

Verifies that the diskdetachpolicy for all site-consistent disk groups is set to 'Global'. This is the recommended policy for site-consistent disk groups.

Check recommendation: Make sure that the diskdetachpolicy for all the site-consistent disk groups is 'Global'.

Learn More...
What is connectivity policy for shared disk group

Non-CDS disk group

Check category: Best practices

Check description: On systems with VxVM version 4.0 or greater and disk group version 110 or greater, checks whether any disks in a disk group are not configured using the Cross-platform Data Sharing (CDS) feature.

Check procedure:

Identifies the format of the disks in the disk groups present on the system.

Checks whether the format of the disks is compatible with CDS.

Check recommendation: Ensure that any disk groups present on the system have disks that are configured as portable and compatible with CDS. This enables portability and migration of data between platforms.

Learn More...
Cross-Platform Data Sharing (CDS) Administrator's Guide
vxcdsconvert: manual page

Number of disk group configuration backup copies

Check category: Best practices

Check description: Checks if each disk group has the optimum number of configuration backup copies and whether the disks have enough space for them.

Check procedure:

Checks if any of the configuration backup (CBR) copies are corrupt.

Checks if the number of CBR copies is less than the required count.

Checks if enough space is available to store the CBR copies.

Check recommendation: A disk group configuration backup failure can fall into one of six cases:

Case I : The VxVM backup config copy count should be set to at least 5. For details on increasing the number of configuration backup copies, refer to the technote.

Case II : The backup directory is missing from the host. You should recreate the directory and restart the vxconfigbackupd daemon. For details, refer to the technote.

Case III : The binconfig file is empty in the disk group configuration backup copy; that is, the configuration backup copy is invalid. You should restart the vxconfigbackupd daemon.

For details, refer to the technote.

Case IV : None of the backup config copies have a valid binconfig file; all the disk group backup config copies need to be recreated. You should restart the vxconfigbackupd daemon. For details, refer to the technote.

Case V : The number of backup config copies is less than the default specified. Consider increasing the number of disk group backup config copies. For details, refer to the technote.

Case VI : Not enough free space is available to store the configuration backup copies.

Learn More...
How to change the number of configuration copies

Number of disk group configuration copies

Check category: Best practices

Check description: Checks if each disk group has enough configuration copies. Volume Manager (VxVM) stores metadata in the private regions of the LUNs that comprise a disk group, rather than on the server. Therefore, each disk group is independent and portable, which is ideal for performing server migrations, array-based replication, or off-host processing via snapshots. For redundancy, VxVM keeps copies of this metadata on multiple LUNs. Although VxVM manages the number of copies automatically, in some situations the number of copies may drop below VxVM's recommended target. For example, this can happen when you use array-based snapshots. While single-LUN disk groups are a valid configuration, they pose an availability risk because VxVM can only keep one copy of the metadata.

Check procedure:

Determines the number of disks present in each disk group.

Verifies that each disk group has the optimal number of configuration and log copies.

Check recommendation: Increase the number of configuration copies.

Learn More...
How to change the number of configuration and log copies of a disk group?

Disk group spare space

Check category: Best practices

Check description: Checks whether the disk group has enough spare disk space available for hot-relocation to occur in case of a disk failure.

Check procedure:

Verifies whether the vxrelocd daemon is running on the system.

Identifies the disk groups on the system.

For each disk group, calculates the total space available for hot-relocation.

Calculates whether the total space available in the disk group is enough for a hot-relocation to occur.

Check recommendation: The disk group(s) do not have enough spare space available for hot-relocation if a disk fails. Make sure that the disk group(s) have enough disk space available for hot-relocation. It is recommended that you designate additional disks as hot-relocation spares.
To add a disk to a disk group, enter:
# vxdg -g [diskgroup] adddisk disk=diskname
To designate a disk as a hot-relocation spare, enter:
# vxedit [-g diskgroup] set spare=on diskname

Learn More...
About hot-relocation
How hot-relocation works
Configuring a system for hot-relocation

Verify support package

Check category: Best practices

Check description: Verify whether the VRTSspt package is present on the system.

Check procedure:

Checks whether the VRTSspt package is installed on the system.

Check recommendation: The VRTSspt package is not installed on the system. It is recommended that you install the VRTSspt package, which provides a group of support tools. These tools do not run unless they are invoked by root.

Learn More...
How to download the VRTSspt package
How to use VRTSexplorer
How to collect a metasave from a mounted file system

Fragmented VxFS File System

Check category: Best practices

Check description: Checks VxFS File System fragmentation. If a file system is too fragmented, the check recommends defragmentation.

Check procedure:

Identifies the mounted VxFS File Systems on the system.

Checks whether the file systems are fragmented.

Check recommendation: You should defragment the file systems to improve performance and reduce recovery time.

Defragmentation creates I/O and uses CPU, so you should schedule defragmentation during periods of low system activity. A conservative approach is to defragment one file system at a time, that is, to wait for one defragmentation to complete before starting the next. Defragmentation time varies depending on factors such as the file system size and the number of files. Defragmentation can run for extended periods.

Learn More...

Monitoring fragmentation

Verify software patch level

Check category: Best practices

Check description: Checks whether the installed Storage Foundation / InfoScale products are at the latest software patch level.

Check procedure:

Identifies all the Storage Foundation / InfoScale products installed on the system.

Verifies whether the installed products have the latest software versions that are available for download.

Check recommendation: To avoid known risks or issues, it is recommended that you install the latest available versions of the Storage Foundation / InfoScale products.

Mirrored-stripe volumes

Check category: Best practices

Check description: Checks for mirrored-stripe volumes and recommends re-layout to improve redundancy and enhance recovery time after a failure.

Check procedure:

Identifies the mirrored-stripe volumes present on the system.

Verifies whether the size of the volumes is greater than the threshold value.

Check recommendation: Ensure that the size of any mirrored-stripe volumes on the system is smaller than the expected default. Reconfigure large mirrored-stripe volumes as striped-mirror volumes to improve redundancy and enhance recovery time after a failure.

Learn More...
Striping plus mirroring
Mirroring plus striping
Creating a striped-mirror volume
Converting between layered and non-layered volumes

Volume Replicator SRL protection

Check category: Best practices

Check description: Checks whether the Storage Replicator Log (SRL) protection is set to DCM (Data Change Map). This prevents full resynchronization of the Replicated Data Set (RDS) if the SRL overflows.

Check procedure:

Determines all the Replicated Volume Groups (RVGs) on the host.

Checks if the SRL protection is set to "dcm" or "autodcm" mode.

Check recommendation: Make sure that DCM logging is used to prevent full resynchronization of the Replicated Data Set (RDS) if the SRL overflows. To set the SRL protection to autodcm, enter the following command:

# /opt/VRTS/bin/vradmin <diskgroup> set <local_rvgname> <sec_hostname> srlprot=autodcm

Learn More...
Volume Replicator Administrator's Guide

Volume Replicator (VVR) Storage Replicator Log (SRL) space reserved check

Check category: Best practices

Check description: Checks whether the disks used for the SRL volume are marked as reserve for dedicated use. This ensures that only the SRL volume occupies the disk(s).

Check procedure:

Finds all the disks used by the SRL volume in a particular Replicated Volume Group (RVG).

Determines whether these disks are marked as reserve.

Check recommendation: The check determines whether the disks used for the SRL volume are marked as reserve, to avoid accidental use of these disks by other volumes. This ensures that only the SRL volume occupies the disk(s).

Learn More...
Volume Replicator Administrator's Guide

Volume Replicator (VVR) SRL striped and mirrored

Check category: Best practices

Check description: Checks whether the Storage Replication Log (SRL) volume is striped and mirrored for optimum performance and availability.

Check procedure:

Determines all the Replicated Volume Groups (RVGs) on the host.

Checks if the SRL volume is both striped and mirrored.

Check recommendation: If you do not use a hardware RAID solution, make sure that the SRL volume is striped and mirrored for optimum performance and availability. To create a stripe-mirror volume, enter the following command:

# /opt/VRTS/bin/vxassist -g <diskgroup> make <SRL volume name> <size> layout=stripe-mirror

Learn More...
Volume Replicator Administrator's Guide

Volume Manager system name

Check category: Best practices

Check description: Checks that the value in the hostid field of the /etc/vx/volboot file is the same as the value returned by the hostname command.

Check procedure:

Retrieves the value stored in the hostid field of the /etc/vx/volboot file and the output of the hostname command.

Compares the values to ensure that they match.

Check recommendation: The hostid value in the /etc/vx/volboot file does not match the output of the hostname command. To change the hostid:
1. Make sure the system does not have any deported disk groups. After you update the hostid in the /etc/vx/volboot file to match the hostname, the hostid recorded on the deported disk groups may no longer match the hostid in the /etc/vx/volboot file. Importing these disk groups would fail.

2. Run the vxdctl init command. This command does not interrupt any Volume Manager services and is safe to run in a production environment.
# vxdctl init

Unless you have a strong reason to have two different values, the hostid in the /etc/vx/volboot file should match the hostname. That way, the hostid is unique within a given domain and Storage Area Network.

At boot time, Volume Manager (VxVM) auto-imports the disk groups whose hostid value in the disk headers matches the hostid value in the /etc/vx/volboot file. Keep the hostid value unique so that no other host in a given domain and SAN can auto-import the disk group; otherwise, data may be corrupted.
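
The comparison this check performs can be sketched as follows. The volboot lines and host names below are illustrative sample data, not the exact on-disk format; on a live system you would read the real /etc/vx/volboot and run hostname:

```shell
# Sample /etc/vx/volboot contents (illustrative) and a mismatched hostname.
volboot='volboot 3.1
hostid node1'
current_hostname='node2'

# Read the hostid field and compare it with the hostname.
vx_hostid=$(printf '%s\n' "$volboot" | awk '$1 == "hostid" {print $2}')
if [ "$vx_hostid" = "$current_hostname" ]; then
    verdict="OK: hostid matches hostname"
else
    verdict="FAIL: hostid=$vx_hostid hostname=$current_hostname"
fi
echo "$verdict"
```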

Learn More...
How to check for system name consistency
Click here to learn from a technote

Volume Replicator (VVR) network bandwidth limit

Check category: Best practices

Check description: Checks whether bandwidth throttling is enabled and asks the user to verify that this is the intended configuration. If the bandwidth limit is set to none, VVR uses all the available network bandwidth (that is, bandwidth throttling is disabled).

Check procedure:

Determines all the Replicated Volume Groups (RVGs) on the host.

Checks if bandwidth throttling is enabled as an intended configuration.

Check recommendation: Make sure that bandwidth throttling is enabled as an intended configuration. To set the bandwidth limit to a particular value, enter the following command:

# /opt/VRTS/bin/vradmin <diskgroup> set <local_rvgname> <sec_hostname> bandwidth_limit=<value>

Learn More...
Volume Replicator Administrator's Guide

Volume Replicator (VVR) consistent data and Storage Replicator Log (SRL) volume names.

Check category: Best practices

Check description: Checks if the data volume names and SRL volume names are the same across all nodes in a Replicated Data Set (RDS).

Check procedure:

The check reads all the data and SRL volume names in a Replicated Data Set (RDS).

The check verifies that the names are identical across all nodes.

Check recommendation: Make sure that the data volume names and SRL volume names are the same in a Replicated Data Set (RDS).

Learn More...
Volume Replicator Administrator's Guide

Volume Replicator (VVR) consistent data and SRL volume sizes.

Check category: Best practices

Check description: Checks if the data volume and SRL volume sizes are the same across all nodes in a Replicated Data Set (RDS).

Check procedure:

The check reads all the data and SRL volume sizes in a Replicated Data Set (RDS).

The check verifies that the data sizes and SRL volume sizes are identical across all nodes.

Check recommendation: Make sure that the data and SRL volume sizes are the same in a Replicated Data Set (RDS).

Learn More...
Volume Replicator Administrator's Guide

VCS IfconfigTwice attribute

Check category: Best practices

Check description: Checks whether the IfconfigTwice attribute for the VCS IP resource type is set to 1. Setting the attribute to 1 ensures that when an IP address is brought online or failed over, the system sends multiple Address Resolution Protocol (ARP) packets to the network clients. Sending multiple packets reduces client connectivity problems after a failover event.

Check procedure:

Make sure that you set IfconfigTwice to 1. To set IfconfigTwice for the IP resource type, log in as root and enter the following commands:

# haconf -makerw

# hares -modify res_name IfconfigTwice 1

# haconf -dump -makero

Note: This attribute only applies to Solaris and HP-UX platforms.

Check recommendation: Make sure that the IfconfigTwice attribute is set to a value of 1 or larger.

Learn More...
IPMultiNICA attributes

VCS NetworkHosts attribute

Check category: Best practices

Check description: Checks whether the NetworkHosts attribute for the VCS NIC resource type has been configured. This attribute specifies the list of hosts to ping to determine whether the network is active. If you do not specify this attribute, the agent must rely on the NIC broadcast address, which can flood the network with broadcast traffic.

Check procedure:

Retrieves the value of the NetworkHosts attribute for the VCS NIC, MultiNICA, and MultiNICB resource types.

Verifies that a list of IP addresses which can be pinged quickly is set for the NetworkHosts attributes.

Check recommendation: Make sure you configure the NetworkHosts attribute with a list of IP addresses that can be pinged to determine whether the network is active. To set the NetworkHosts attribute for the NIC resource, log in as root and enter the following command:
# hares -modify res_name NetworkHosts ip_addresses
where ip_addresses is a space-separated list of IP addresses.

Learn More...
NIC attributes

Campus cluster volume read policy

Check category: Performance

Check description: Checks whether the read policy of the volumes is set to 'site read'. This is the recommended policy for campus clusters.

Check procedure:

Verifies that the read policy of the volumes in a site-configured disk group is 'site read'.

Verifies that the volumes have a complete plex on the site that is tagged to host.

Check recommendation: Make sure that the read policy of the volumes in the site-configured disk groups is 'site read'.

Learn More...
What is volume read policy
How to change read policy for volume
How to set siteread policy for volume

Input/output accelerators for databases

Check category: Performance

Check description: Checks whether the Oracle database is using the Veritas Extension for Oracle Disk Manager (ODM) for the VxFS File System. ODM is an interface developed by Oracle and Veritas. ODM improves Oracle database performance on file systems.

Check procedure:

Checks if the VRTSodm package is present.

Checks if the ODM feature is licensed.

Checks if Oracle 9i or later is present.

Checks if libodm is present.

Checks if Oracle ODM is linked to VRTSodm.

Check recommendation: You can use the ODM Extension for the VxFS File System to provide better I/O performance to applications that use Oracle databases. ODM is recommended over the Quick I/O (QIO) or Concurrent I/O (CIO) features of the file system. Currently, this check does not check for QIO/CIO usage.

Learn More...
Using Veritas Extension for Oracle Disk Manager

VxFS File System intent log size

Check category: Performance

Check description: If the disk layout is version 6 or later, this check compares the size of the VxFS File System and the size of the intent log. If the intent log is undersized compared to the VxFS file system, the report recommends resizing the intent log to meet the standards. Veritas File System (VxFS) uses an intent log to ensure file system integrity while maintaining performance. When VxFS creates a file system, it chooses the intent log size. The larger the VxFS file system, the larger the intent log. However, when you grow a file system, the intent log size does not change. This can result in a log size smaller than the recommended default. Metadata-intensive workloads can benefit from a larger intent log, because if a high number of transactions come in a short period of time, the log fills and must be flushed before accepting new transactions. Metadata-intensive workloads usually add, delete, append to, or truncate files; change file names, permissions, directories, access control lists (ACLs), or owners; or otherwise modify file metadata. Database workloads which pre-allocate large files and only read and write to those files are not usually metadata-intensive (unless you use Storage Checkpoints), so they are less likely to see performance benefits from a larger log. You can resize a VxFS file system when it is mounted, online, and actively receiving I/O by using the vxresize command.

Check procedure:

For each mounted VxFS File System with disk layout version 6 or greater, determines both the intent log size and the VxFS file system size.

Compares the actual intent log size with the recommended intent log size.

Check recommendation: The VxFS file system(s) have undersized intent logs, which can impact performance. It is recommended that you increase your intent log size to meet the standards.

Learn More...
Intent log size
fsadm_vxfs: manual page

mkfs_vxfs: manual page

Input/output access time

Check category: Performance

Check description: Checks a volume's I/O access times and identifies volumes with I/O access times greater than the user-defined HC_CHK_IO_ACCESS_MS parameter in the sortdc.conf file.

Check procedure:

For each disk group, collects the vxstat command output.

Calculates the I/O access times for each of the volumes in the disk group.

Checks whether the access time is less than the threshold value.

Check recommendation: It is recommended that you work to improve I/O access times. Verify that multiple volumes do not use the same underlying disks. Perform an online relayout to enhance performance, or check for hardware configuration problems by comparing the iostat output with the vxstat output.
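
The per-volume calculation in the procedure above can be sketched as follows. The column layout (volume, read ops, write ops, read ms, write ms) and the 20 ms threshold are illustrative only; real vxstat output differs in layout and the real threshold comes from HC_CHK_IO_ACCESS_MS:

```shell
# Hypothetical threshold and vxstat-style sample counters.
threshold_ms=20
stats='vol01 1000 500 9000 12000
vol02 200 100 12000 3000'

# Flag volumes whose mean ms per I/O operation exceeds the threshold.
slow=$(printf '%s\n' "$stats" | awk -v t="$threshold_ms" '
{
    avg = ($4 + $5) / ($2 + $3)        # total ms / total operations
    if (avg > t) printf "%s %.1f\n", $1, avg
}')
echo "$slow"
```

With the sample data, vol01 averages 14 ms per operation and passes, while vol02 averages 50 ms and is flagged.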

Learn More...
Performing online relayout
vxstat: manual page

Input/output fragment

Check category: Performance

Check description: Checks whether the Volume Manager (VxVM) buffer size set on the system is greater than or equal to the recommended threshold value.

Check procedure:

Determines the value of the vol_maxio parameter in the VxVM kernel for the system.

Checks whether the value is less than the default threshold value.

Check recommendation: The vol_maxio parameter on the system is less than the default threshold value. It is recommended that you increase the value of the vol_maxio parameter.

Learn More...
VxVM maximum I/O size

Input/output wait state

Check category: Performance

Check description: Checks whether the system is experiencing I/O waits (that is, blocked processes).

Check procedure:

Collects the vmstat command output and determines the total number of processes blocked for I/O.

Check recommendation: Ensure that this system does not have one or more kernel threads/processes that are blocked waiting for I/O resources. Narrow down the cause of the blocking using iostat or similar tools.

DRL size

Check category: Performance

Check description: Checks whether the Dirty Region Log (DRL) size of any volume deviates from the default DRL size.

Check procedure:

Determines the DRL size for each volume on the system.

Checks whether the DRL size deviates from the default DRL size.

Check recommendation: The dirty region log (DRL) size of the volume differs from the default DRL size. Make sure this is the intended configuration. A larger DRL can degrade performance for random writes, because DRL updates increase when the region size is small. A smaller DRL increases the region size, which in turn increases the recovery time after a crash. In general, the larger the region size, the better the performance of an online volume with a legacy DRL. It is recommended that you use the default DRL size.
To remove the existing DRL and add a new one, enter:
# vxassist -g <diskgroup> remove log <volume>
# vxassist -g <diskgroup> addlog <volume>
You can run these commands when the volume is unmounted and does not have any IO activity.

Learn More...
What is DRL?
About dirty region logging
More about DRLs

Mirrored volumes without Dirty Region Log (DRL)

Check category: Performance

Check description: Checks for mirrored volumes that do not have a DRL.

Check procedure:

Identifies any mirrored volumes present on the system that are larger than the configurable threshold size on the system.

Checks whether a DRL is present in the identified volumes.

Check recommendation: Ensure that you create a DRL for any large mirrored volumes. A DRL tracks those regions that have changed, and helps speed up the recovery of the volume after a system crash. The DRL uses the tracking information to recover only those portions of the volume that need to be recovered. Without a DRL, recovery requires copying the full content of the volume between its mirrors; this process is lengthy and I/O intensive.

Learn More...
How to add DRL to a mirrored volume
Enabling FastResync on a volume
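The selection logic of this check (mirrored, above the size threshold, no DRL) can be sketched as a filter. The input format below is an assumed "name mirrors size_mb drl_count" summary, not real vxprint output; in practice you would derive these fields from vxprint.

```shell
# find_unlogged_mirrors MIN_SIZE_MB < volume_summary
# Flags volumes with two or more mirrors, at or above the size
# threshold, that have no DRL.
find_unlogged_mirrors() {
    awk -v m="$1" '$2 + 0 >= 2 && $3 + 0 >= m + 0 && $4 + 0 == 0 { print $1 }'
}

# Illustrative summary: name mirrors size_mb drl_count
printf 'webvol 2 20480 1\ndbvol 2 51200 0\ntmpvol 1 1024 0\n' |
    find_unlogged_mirrors 10240   # prints: dbvol
```

Each volume the filter prints is one you would then give a DRL with vxassist addlog, as described above.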


SmartIO feature awareness


Check category: Performance

Check description: Checks whether Solid State Drives (SSDs) or flash drives are attached to the server. It also recommends the right version of the Storage Foundation and High Availability / InfoScale software that provides the SmartIO feature, which brings better performance, reduced storage costs, and better storage utilization.

Check procedure:

Checks whether VxVM is installed.

Checks the version of installed VxVM.

Verifies whether the platform is other than HP-UX.

Checks whether SSDs or Flash drives are attached to the system.

Verifies whether the SmartIO feature is in use.

Check recommendation: The recommendation is summarized in the following cases:

Case 1: SSDs or flash drives are detected on the Linux system with a Storage Foundation software version earlier than 6.1 installed. It is recommended that you upgrade the Storage Foundation software to version 6.1 or higher, which enables you to use the SmartIO feature. SmartIO improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers.

Case 2: SSDs or flash drives are detected on the AIX/Solaris system with a Storage Foundation software version earlier than 6.2 installed. It is recommended that you upgrade the Storage Foundation software to version 6.2 or higher, which enables you to use the SmartIO feature. SmartIO improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers.

Case 3: SSDs or flash drives are detected on the Linux system with Storage Foundation software version 6.1 installed, but the SmartIO feature is not in use. It is recommended that you use the SmartIO feature, which improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers. Please refer to the documentation link(s).

Case 4: SSDs or flash drives are detected on the system with Storage Foundation software version 6.2 or higher installed, but the SmartIO feature is not in use. It is recommended that you use the SmartIO feature, which improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers. Please refer to the documentation link(s).

Case 5: Storage Foundation software version 6.2 or higher is found on the AIX/Linux/Solaris system without any SSDs or flash drives. SSDs or flash drives are more efficient since they provide faster data access and have a smaller footprint than traditional spinning disks. The data center uses solid-state technologies in many form factors: in-server, all-flash arrays, all-flash appliances, and mixed with traditional HDD arrays. Each form factor offers a different value proposition. SSDs also have many connectivity types: PCIe, FC, SATA, and SAS. It is recommended that you use the SmartIO feature, which offers data efficiency on your SSDs through I/O caching; this improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers.

Case 6: Storage Foundation software version 6.1 is found on the Linux system without any SSDs or flash drives. SSDs or flash drives are more efficient since they provide faster data access and have a smaller footprint than traditional spinning disks. The data center uses solid-state technologies in many form factors: in-server, all-flash arrays, all-flash appliances, and mixed with traditional HDD arrays. Each form factor offers a different value proposition. SSDs also have many connectivity types: PCIe, FC, SATA, and SAS. It is recommended that you use the SmartIO feature, which offers data efficiency on your SSDs through I/O caching; this improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers.

Case 7: Storage Foundation software version 6.1 is found on the AIX/Solaris system without any SSDs or flash drives. It is recommended that you upgrade the Storage Foundation software to version 6.2 or higher and use the SmartIO feature, which offers data efficiency on your SSDs through I/O caching; this improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers. SSDs or flash drives are more efficient since they provide faster data access and have a smaller footprint than traditional spinning disks. The data center uses solid-state technologies in many form factors: in-server, all-flash arrays, all-flash appliances, and mixed with traditional HDD arrays. Each form factor offers a different value proposition. SSDs also have many connectivity types: PCIe, FC, SATA, and SAS.

Learn More...
SmartIO for Solid State Drives
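On Linux, the SSD-detection step of this check can be approximated by reading the sysfs rotational flag. The sketch below builds a fake sysfs-like tree so it runs anywhere; on a real system you would point it at /sys/block.

```shell
# list_nonrotational DEVDIR
# Lists block devices whose rotational flag is 0 (SSD/flash) by
# walking a sysfs-style layout under DEVDIR (normally /sys/block).
list_nonrotational() {
    for dev in "$1"/*; do
        [ -f "$dev/queue/rotational" ] || continue
        if [ "$(cat "$dev/queue/rotational")" = "0" ]; then
            basename "$dev"
        fi
    done
}

# Build an illustrative sysfs-like tree:
tmp=$(mktemp -d)
mkdir -p "$tmp/sda/queue" "$tmp/nvme0n1/queue"
echo 1 > "$tmp/sda/queue/rotational"      # spinning disk
echo 0 > "$tmp/nvme0n1/queue/rotational"  # flash device

list_nonrotational "$tmp"   # prints: nvme0n1
rm -rf "$tmp"
```

If this prints any devices on a host without SmartIO configured, cases 3 and 4 above apply.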

Disk group configuration database

Check category: Utilization

Check description: Checks whether the disk group's configuration database free space is reaching a critically low point: less than the HC_CHK_CONF_DB_FULL_SIZE_PRCNT parameter, which is set in the sortdc.conf file.

Check procedure:

Identifies the disk groups on the system.

Checks whether the percentage of used space in the configuration database is greater than the threshold value.

Check recommendation: The configuration database of the disk group(s) is too full. When the percentage of used space for the configuration database exceeds the threshold value, it is recommended that you split the disk group(s). To split the disk group, enter:
# vxdg split <source-diskgroup> <target-diskgroup> <object>
<object> can be a volume or a disk. For more information, see the Learn More links.

Learn More...
Displaying disk group information
How to split a diskgroup
How to move disk between diskgroups
VxVM Administrator's Guide
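The threshold comparison can be sketched by extracting the permlen and free fields from the config line that `vxdg list <diskgroup>` prints. The sample line below is illustrative; field names and ordering can vary by VxVM version.

```shell
# config_db_used_pct < vxdg_list_output
# Computes the percentage of the disk group configuration database in
# use from the "permlen" and "free" fields of the config line.
config_db_used_pct() {
    awk 'match($0, /permlen=[0-9]+/) {
            permlen = substr($0, RSTART + 8, RLENGTH - 8)
         }
         match($0, /free=[0-9]+/) {
            free = substr($0, RSTART + 5, RLENGTH - 5)
         }
         END { if (permlen + 0 > 0) printf "%d\n", (permlen - free) * 100 / permlen }'
}

# Illustrative config line (fields may vary by version):
printf 'config:  seqno=0.1234 permlen=1280 free=128 templen=10 loglen=192\n' |
    config_db_used_pct   # prints: 90
```

A result above the HC_CHK_CONF_DB_FULL_SIZE_PRCNT threshold is what triggers the split recommendation above.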

Underutilized disks

Check category: Utilization

Check description: Checks whether the disk is underutilized. It lists the disks whose percentage of used space is lower than the user-defined parameter HC_CHK_DISK_USAGE_PERCENT, which is set in the sortdc.conf file.

Check procedure:

Determines the size of the disks in each of the disk groups (excluding disks that are marked as spare or coordinator).


Calculates the used disk space and compares it to the threshold value.

Check recommendation: Underutilized disks were found. It is recommended that you use all the storage disk(s) available to the system.

Learn More...
Removing disks
Removing a disk with subdisks
Removing a disk with no subdisks
Removing a disk from VxVM control

File System old Storage Checkpoint

Check category: Utilization

Check description: Checks for VxFS File System Storage Checkpoints that are older than the HC_CHK_FS_OLD_CHECKPOINT_DAYS_O… parameter, which is set in the sortdc.conf file.

Check procedure:

Identifies all the VxFS File System mount points present on the system.

Identifies all the mounted Storage Checkpoints in the mount points, and checks whether they are older than the threshold value.

Check recommendation: It is recommended that you delete any old VxFS File System Storage Checkpoints that you no longer require in order to free up storage space.

Learn More...
How to remove a Storage Checkpoint
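The age test itself is a simple "older than N days" scan. The sketch below uses plain files as stand-ins for checkpoints (real Storage Checkpoints are listed with the checkpoint administration commands, not find); `touch -d` and the depth options assume GNU coreutils/findutils.

```shell
# old_checkpoints DAYS DIR
# Lists entries directly under DIR whose modification time is older
# than DAYS days -- a stand-in for finding stale checkpoints.
old_checkpoints() {
    find "$2" -mindepth 1 -maxdepth 1 -mtime +"$1"
}

# Illustrative stand-in tree:
tmp=$(mktemp -d)
touch "$tmp/fresh_ckpt"
touch -d '40 days ago' "$tmp/stale_ckpt"   # GNU touch syntax

old_checkpoints 30 "$tmp"   # prints only the stale_ckpt path
rm -rf "$tmp"
```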

VxFS File System utilization

Check category: Utilization

Check description: Checks VxFS File System utilization. It lists the VxFS File Systems whose percentage of used space is less than the user-defined parameter HC_CHK_FS_USAGE_PERCENT_MIN or more than HC_CHK_FS_USAGE_PERCENT_MAX. These parameters are set in the sortdc.conf file.

Check procedure:

Identifies the used space for all the VxFS file systems on the system.

Checks whether the percentage of used space is greater than the threshold value.

Check recommendation: The VxFS file system(s) listed in the output are either under-utilized or over-utilized. It is recommended that you shrink the under-utilized file system and its volume, and use the freed space elsewhere. It is better to defragment the over-utilized VxFS file systems and to add extra storage to them.

Learn More...
Shrinking a file system
Extending a file system using fsadm
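The two-threshold classification can be sketched over df-style output. The column positions and sample are illustrative; the real check reads the thresholds HC_CHK_FS_USAGE_PERCENT_MIN and HC_CHK_FS_USAGE_PERCENT_MAX from sortdc.conf.

```shell
# classify_fs_usage MIN_PCT MAX_PCT < df_output
# Labels each mount point as under- or over-utilized against the two
# thresholds; mounts within the band are not reported.
classify_fs_usage() {
    awk -v lo="$1" -v hi="$2" 'NR > 1 {
        pct = $5; sub(/%/, "", pct)
        if (pct + 0 < lo + 0)      print $6, "under-utilized"
        else if (pct + 0 > hi + 0) print $6, "over-utilized"
    }'
}

# Illustrative df -k style sample:
sample='Filesystem 1K-blocks Used Available Use% Mounted
/dev/vx/dsk/dg/vol1 1048576 52428 996148 5% /data1
/dev/vx/dsk/dg/vol2 1048576 996147 52429 95% /data2'

printf '%s\n' "$sample" | classify_fs_usage 10 90
# prints:
#   /data1 under-utilized
#   /data2 over-utilized
```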

VxFS volume and file system size


Check category: Utilization

Check description: Checks whether the size of any VxFS file system differs from the size of its underlying volume, and whether the size difference is greater than the user-defined parameter HC_CHK_FS_VOLS_SIZE_DIFF_THRESHOLD, which is set in the sortdc.conf file.

Check procedure:

Identifies the mounted VxFS file systems and determines their size.

Identifies the volumes or volume set used to mount each of the file systems and determines its size.

Compares the size of the file system with the size of the underlying volume or volume set, and checks whether the size difference is greater than the threshold HC_CHK_FS_VOLS_SIZE_DIFF_THRESHOLD, which is set in the sortdc.conf file.

Check recommendation: To make the best use of volume space, the file system should be the same size as the volume or volume set.
The failure can be summarized in one of the following cases:

Case I: The file system size is less than the underlying volume size by more than the threshold parameter HC_CHK_FS_VOLS_SIZE_DIFF_THRESHOLD.
You should either grow the file system using the fsadm command or shrink the volume using the vxassist command.

Case II: The file system is larger than the underlying volume. This can happen when an incorrect command (vxassist, which shrinks only the volume) was used instead of vxresize, which resizes the volume and the file system together.

Run the following commands.


To grow the file system:
# fsadm [-F vxfs] [-b <newsize>] [-r rawdev] mount_point
To shrink the volume:
# vxassist -g <mydg> shrinkby <vol> <len>
or
# vxassist -g <mydg> shrinkto <vol> <newlen>

Learn More...
fsadm_vxfs: manual page
vxassist: manual page
vxresize: manual page
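The decision between the two cases can be sketched as a plain size comparison (sizes in KB; the threshold argument stands in for HC_CHK_FS_VOLS_SIZE_DIFF_THRESHOLD):

```shell
# check_fs_vol_size FS_KB VOL_KB THRESHOLD_KB
# Compares file system and volume sizes the way the check does and
# prints the suggested action.
check_fs_vol_size() {
    fs="$1"; vol="$2"; thr="$3"
    if [ "$fs" -gt "$vol" ]; then
        echo "file system larger than volume: investigate (possible bad shrink)"
    elif [ $((vol - fs)) -gt "$thr" ]; then
        echo "grow file system with fsadm or shrink volume with vxassist"
    else
        echo "sizes within threshold"
    fi
}

check_fs_vol_size 1000000 1048576 10240
# prints: grow file system with fsadm or shrink volume with vxassist
```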

Multi-volume storage tiering

Check category: Utilization

Check description: For multi-volume file systems with storage tiering, checks whether any tier is full or low on space.

Check procedure:

Identifies the VxFS File Systems mounted on the system.

Checks whether the file system is mounted on a volume set.


For each file system, collects the list of volumes, storage tiers, and space utilization per storage tier.

Warns when low space is detected in any of the storage tiers.

Check recommendation: The storage tier for the multi-volume file system has little available space left. It is recommended that you add more volumes to the tier using the vxvoladm command.

Learn More...
About Multi Volume Filesystem
About Volume Sets
Creating and Managing Volume Sets
Creating Multi-volume Filesystem
About Dynamic Storage Tiering
vxvoladm: manual page
Learn from technote
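The per-tier warning reduces to a threshold test on each tier's used-space percentage. The "tier used_pct" input below is an illustrative summary, not actual command output:

```shell
# low_space_tiers THRESHOLD_PCT < tier_report
# Prints the tiers whose used-space percentage exceeds the threshold.
low_space_tiers() {
    awk -v t="$1" '$2 + 0 > t + 0 { print $1 }'
}

# Illustrative summary: tier_name used_pct
printf 'tier1 97\ntier2 40\n' | low_space_tiers 90   # prints: tier1
```

Each tier printed is one that would need extra volumes added with vxvoladm.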

Unused volume components

Check category: Utilization

Check description: Checks for unused objects (such as plexes and volumes present in a disk group) and violated objects (such as disabled, detached, or dissociated plexes, stopped or disabled volumes, disabled logs, and volumes needing recovery).

Check procedure:

For each of the disk groups on the system, generates a list of volumes along with their states and kernel states.

Checks for the stopped volumes, disabled volumes, and the volumes that require recovery.

Identifies the volume plexes and checks for disabled or detached plexes, disabled logs, dissociated plexes, and failed plexes.

Check recommendation: The disk groups on the system contain unused or violated objects. It is recommended that you either remove these objects or recover them to a healthy state.

Learn More...
Displaying volume and plex states
Recovering an unstartable mirrored volume
Recovering an unstartable volume with a disabled plex in the RECOVER state
Forcibly restarting a disabled volume
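The state scan in the procedure can be sketched as a filter over a per-volume state summary. The "name state kernel-state" input is an assumed simplification of vxprint output, not its raw format:

```shell
# find_violated_volumes < volume_states
# Flags volumes whose state or kernel state indicates trouble
# (DISABLED, DETACHED, or NEEDSYNC).
find_violated_volumes() {
    awk '$2 ~ /DISABLED|DETACHED|NEEDSYNC/ || $3 ~ /DISABLED|DETACHED/ { print $1 }'
}

# Illustrative summary: name state kernel_state
printf 'vol01 ACTIVE ENABLED\nvol02 DISABLED DISABLED\nvol03 NEEDSYNC ENABLED\n' |
    find_violated_volumes
# prints:
#   vol02
#   vol03
```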

Storage Foundation thin provisioning

Check category: Utilization

Check description: Determines whether the storage enclosure on your system is ready to use the Storage Foundation / InfoScale thin provisioning feature.

Check procedure:

Checks whether the storage enclosure connected to the system supports thin provisioning and is certified by Storage Foundation / InfoScale.

Checks whether there are any VxFS file Systems not on thin provisioned storage that can potentially be migrated.


Determines the Storage Foundation / InfoScale version installed on the system.

Checks which Storage Foundation / InfoScale features are licensed and configured.

Check recommendation: The recommendations are:


Case I: The VxFS file system resides on thin provisioned storage, but the system does not have a Storage Foundation Enterprise / InfoScale Storage license. You need a Storage Foundation Enterprise / InfoScale Storage license to mirror the storage.

Case II: The VxFS file system storage enclosure appears to support thin provisioning, but the Storage Foundation / InfoScale software does not detect it as thin provisioned storage. Ensure you have the correct Array Support Libraries (ASLs) installed for this storage.

Case III: Your system appears to be attached to a storage enclosure that supports thin provisioning, and the necessary Storage Foundation / InfoScale license is installed; however, the VxFS file system does not reside on thin provisioned storage. Possible reasons are:
a) Thin provisioning is not used.
b) Thin provisioning is not enabled on the storage enclosure.
c) Your version of the storage enclosure may be an old version that does not support thin provisioning.

For reasons (a) or (b), check the thin provisioning support with your storage vendor.

For reason (c), if you are considering migrating to thin provisioned storage, consider the following:

Read the white paper on thin provisioning listed in the Learn More section below.

Ensure that your system has the required Storage Foundation Enterprise / InfoScale Storage license installed.

Ensure that you have enabled the SmartMove feature in Storage Foundation / InfoScale. To do so, set the variable usefssmartmove=yes.
Ensure that you have the necessary level of Storage Foundation 5.0 MP3 RP installed before you turn on the SmartMove feature.

Upgrade to Storage Foundation 5.0 MP3 RP1 for access to the full feature set of thin provisioning supported in Storage Foundation / InfoScale.

Note: If you have 5.0 MP3 with HF1, you may not need RP1.

Case IV: The VxFS file system's disk group version is less than 110. It is recommended that you upgrade to disk group version 110 or later.

Case V: The Storage Foundation / InfoScale thin provisioning feature does not support the storage enclosure on which the VxFS file system resides.

Learn More...
White Paper on: Storage Foundation / InfoScale and Thin Provisioning
About Stop Buying Storage
Volume Manager Administrator's Guide
Visit Patch Central: SF 5.0MP3RP1
Visit Patch Central: SF 5.0MP3HF1

Unused volumes


Check category: Utilization

Check description: Checks for unused volumes on hosts with no mounted file systems and no input/output.

Check procedure:

Identifies the volumes present on the system.

Checks whether any file systems are mounted or I/O is running.

Check recommendation: If the volume is not in use, consider removing it to reclaim storage.

Learn More...
How to remove a volume
Accessing a Volume
Creating a Filesystem on a volume
Mounting the Filesystem
