VERITAS Volume Manager Troubleshooting
IES-410
Student Guide
This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and
decompilation. No part of this product or document may be reproduced in any form by any means without prior written authorization of
Sun and its licensors, if any.
Third-party software, including font technology, is copyrighted and licensed from Sun suppliers.
Sun, Sun Microsystems, the Sun Logo, Solaris, StorEdge, Sun Enterprise, SunSolve, Sun Enterprise Network Array, JumpStart, OpenBoot,
Solstice, Sun BluePrints, and Solstice DiskSuite are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other
countries.
All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and
other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.
UNIX is a registered trademark in the U.S. and other countries, exclusively licensed through X/Open Company, Ltd.
RESTRICTED RIGHTS: Use, duplication, or disclosure by the U.S. Government is subject to restrictions of FAR 52.227-14(g)(2)(6/87) and
FAR 52.227-19(6/87), or DFAR 252.227-7015 (b)(6/95) and DFAR 227.7202-3(a).
DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS, AND
WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY
INVALID.
Please
Recycle
Copyright 2003 Sun Microsystems Inc., 901 San Antonio Road, Palo Alto, California 94303, United States. All rights reserved.
Table of Contents
About This Course ........................................................... Preface--xiii
Course Goals...................................................................... Preface--xiii
Course Map......................................................................... Preface--xiv
Topics Not Covered.............................................................Preface--xv
How Prepared Are You?................................................... Preface--xvi
Introductions ..................................................................... Preface--xvii
How to Use Course Materials ........................................ Preface--xviii
Conventions .........................................................................Preface--xix
Icons .............................................................................Preface--xix
Typographical Conventions ......................................Preface--xx
Introducing the VERITAS Volume Manager Software
Architecture ......................................................................................1-1
Objectives ........................................................................................... 1-1
Relevance............................................................................................. 1-2
Additional Resources ........................................................................ 1-3
Introducing Storage Management................................................... 1-4
Host-Based Storage Management........................................... 1-4
Controller-Based Storage Management................................. 1-5
Comparison of Storage Management Methods.................... 1-6
Exploring VxVM Software and Storage Management ................. 1-7
Relationship to the Operating System Environment ........... 1-7
Configuration Database ........................................................... 1-8
Device Discovery Layer (DDL) ............................................... 1-9
Drivers and Daemons............................................................... 1-9
VxVM Software Support Files.............................................. 1-11
Examining VxVM Software Objects.............................................. 1-20
Physical Disks......................................................................... 1-21
VxVM Software Disks ............................................................ 1-21
Disk Groups ............................................................................. 1-23
Subdisks.................................................................................... 1-24
Plexes ........................................................................................ 1-25
Volumes.................................................................................... 1-26
VxVM Software Layered Volume Objects........................... 1-27
Course Goals
The VERITAS Volume Manager Troubleshooting course introduces you to the
VERITAS Volume Manager (VxVM) software and its functions.
Course Map
The following course map enables you to see the general topics and the
modules for that topic area in reference to the course goal.
[Course map diagram listing the topic areas and their modules: Architecture (The VERITAS Volume Manager Software Architecture), Availability Management, Problem Management (Troubleshooting Tools and Utilities), and Release Management (Upgrading the VxVM Software)]
Refer to the Sun Educational Services catalog for specific information and
registration.
Introductions
Now that you have been introduced to the course, introduce yourself to
the other students and the instructor, addressing the following items:
● Name
● Company affiliation
● Title, function, and job responsibility
● Experience related to topics presented in this course
● Reasons for enrolling in this course
● Expectations for this course
Conventions
The following icons and typographical conventions are used in this course
to represent various training elements and alternative learning resources.
Icons
Note – Indicates additional information that can help but is not crucial to
understanding the concept being described. Examples of notational
information include keyboard shortcuts and minor system adjustments.
Caution – Indicates a risk of injury from heat or hot surfaces.
Typographical Conventions
Courier is used for the names of commands, files, and directories, as well
as on-screen computer output. For example:
Use ls -al to list all files.
system% You have mail.
Courier bold is used for characters and numbers that you type. For
example:
system% su
Password:
Palatino italics is used for book titles, new words or terms, or words that
are emphasized. For example:
Read Chapter 6 in the User’s Guide.
You must be root to do this.
Objectives
Upon completion of this module, you should be able to:
● Describe the two storage management methodologies
● Describe the relationship between the VxVM software and the
Solaris™ Operating Environment
● Identify and describe the major components of the VxVM software
configuration database
● Identify and define all the VxVM software objects
● Describe the resynchronization process
● Describe how the VxVM software identifies disks under control
● Describe the different plex states
● List the features that are newly supported, and those no longer
supported, in the VxVM software version 3.2
Relevance
Additional Resources
[Figure: Host-based storage management. A user process accesses data through the VxVM software virtual layer (the volume manager) and device drivers on the host bus adapter; the storage consists of disks/slices/LUNs (for example, 9-Gbyte and 3-Gbyte disks) in a JBOD enclosure with an interface adapter board.]
JBOD disks also can be found in Sun’s midrange servers, such as the Sun
Enterprise™ 3500 and Sun Enterprise 250 servers. These disks are not
considered by the VxVM software to be enclosure-based JBODs because
they are configured differently. This difference is addressed later in this
module.
[Figure: Controller-based storage management. A user process accesses data through the host bus adapter to a storage subsystem; the RAID manager, running on the controller with cache and RAID hardware, presents LUNs (for example, a 9-Gbyte LUN built from 3-Gbyte disks/slices).]
This section describes how the VxVM software is used for storage management.
[Figure: VxVM software architecture. Applications and databases access volumes through the DMP driver in the kernel; the vxconfigd daemon, with its DDL, discovers the enclosures and arrays.]
Configuration Database
The VxVM software configuration database stores all disk and volume
configuration data. The following apply to the configuration database:
● Database access is managed through the /dev/vx/config device.
● Accesses are executed serially.
● Initial volume configurations are downloaded to the kernel through
this device.
● The vxconfigd daemon updates the database to reflect changes to
the configuration of VxVM software objects.
Kernel Drivers
Note – The vxconfigd logging and error messages are discussed in detail
in Module 4, “Troubleshooting Tools and Utilities.”
If the VxVM software driver binaries are corrupted, copy the proper
architecture version of the binary into the corrupted driver binary file to
correct the problem. This process is presented in Module 5, “Recovering
Boot and System Processes.”
The /sbin directory holds the binaries for vxconfigd and additional
VxVM software commands. Notice the per-release vxconfigd binaries in
this example list output.
2960 -r-xr-xr-x 1 root sys 1499404 Nov 21 23:03 vxconfigd
2800 -r-xr-xr-x 1 root sys 1421748 Aug 15 2001 vxconfigd.SunOS_5.6
2832 -r-xr-xr-x 1 root sys 1439184 Aug 15 2001 vxconfigd.SunOS_5.7
2960 -r-xr-xr-x 1 root sys 1499404 Nov 21 23:03 vxconfigd.SunOS_5.8
864 -r-xr-xr-x 1 root sys 432736 Nov 21 17:18 vxdg
256 -r-xr-xr-x 1 root sys 116100 Nov 21 17:18 vxdmpadm
The /etc/init.d directory contains the VxVM software startup scripts,
including the following:
● vxnm-vxnetd
● vxvm-reconfig
● vxvm-recover
● vxvm-shutdown
● vxvm-startup1
● vxvm-startup2
● vxvm-sysboot
The /etc/rc*.d directories contain the hard links to the VxVM software
startup scripts in /etc/init.d, as follows:
● The /etc/rc0.d directory contains the following links:
● K10vmsa-server
● K99vxvm-shutdown
● The /etc/rc1.d directory contains the K10vmsa-server link.
● The /etc/rcS.d directory contains the following links:
● S35vxvm-startup1
● S85vxvm-startup2
● S86vxvm-reconfig
● The /etc/rc2.d directory contains the following links:
● S94vxnm-host_infod
● S94vxnm-vxnetd
● S95vxvm-recover
● S96vmsa-server
The /opt directory contains the VxVM software man pages, documents,
the Storage Administrator GUI binaries, and licensing utilities. The /opt
directory contains the following:
VRTS:
man
VRTS/man:
man1m man3x man4 man7
VRTSlic:
bin
VRTSlic/bin:
vxliccheck vxlicense
VRTSvxvm:
docs
VRTSvxvm/docs:
admin.pdf hwnotes.pdf igbook.pdf tshoot.pdf vmsaguide.pdf
admin.ps hwnotes.ps igbook.ps tshoot.ps vmsaguide.ps
VRTSspt:
FS README.VRTSspt VRTSexplorer VVR
VRTSspt/FS:
MetaSave VxBench
VRTSspt/VRTSexplorer:
README fw salr vcs vrtsisp
VRTSexplorer gcm samba vfr vsap
bin.SunOS isis spc visnC vvr
dbed lib spcs visnS vxfs
dbed1 main.SunOS spnas vmsa vxld
edition.dro ndmp sybed1 vnas vxvm
edition.sybed sal txpt vras
VRTSvmsa:
bin jre vmsa
VRTSvmsa/bin:
autostart gen_params pkg_params server_params vmsa vmsa_server
VRTSvmsa/jre:
bin COPYRIGHT jre_config.txt lib LICENSE.ps
VRTSvmsa/vmsa:
java os_properties properties server
The /etc/vx directory holds hundreds of files and subdirectories that are
used to hold key information about the VxVM software and the server
configuration on which it is installed.
Note – Due to the large number of files, this section lists those files and
directories of most importance to the recovery of the VxVM software. A
full list of files is available in Appendix A.
● SENA0 – This file lists all disks in the enclosure named SENA0
detected by the VxVM software device discovery. An example
of the SENA0 file is shown here.
::::::::::::::
SENA0
::::::::::::::
c2t25d0
c2t16d0
c2t26d0
c2t10d0
c2t5d0
c2t0d0
c2t19d0
● The /etc/vx/reconfig.d/disk.d directory lists subdirectories that
hold pre- and post-encapsulation information for encapsulated disks,
as seen in these list outputs.
./vx/reconfig.d/disk.d:
total 6
2 drwxr-xr-x 3 root other 512 Mar 29 16:01 .
2 drwxr-xr-x 6 root sys 512 Mar 29 16:06 ..
2 drwxr-xr-x 2 root other 512 Mar 29 16:01 c0t0d0
./vx/reconfig.d/disk.d/c0t0d0:
total 12
2 drwxr-xr-x 2 root other 512 Mar 29 16:01 .
2 drwxr-xr-x 3 root other 512 Mar 29 16:01 ..
2 -rw-r--r-- 1 root other 9 Mar 29 16:01 dmname
2 -rw-r--r-- 1 root other 933 Mar 29 16:01 newpart
2 -rw-r--r-- 1 root other 7 Mar 29 16:01 primary_node
2 -rw-r--r-- 1 root other 452 Mar 29 16:01 vtoc
● The ./c0t0d0 directory contains files that describe present and prior
configuration information about the encapsulated boot disk. The file
named vtoc holds the boot disk’s pre-encapsulation vtoc and can be
used to recover the original configuration for the boot disk if
unencapsulation fails.
Note – The procedure for using the vtoc file to unencapsulate the boot
disk is addressed in Module 2, “Encapsulating Disks.”
● The /etc/vx/reconfig.d/saveconf.d directory holds saved copies of
system configuration files, as seen in this list output.
./vx/reconfig.d/saveconf.d/etc:
total 14
2 drwxr-xr-x 2 root root 512 Mar 29 16:08 .
2 drwxr-xr-x 3 root root 512 Mar 29 17:15 ..
2 -rw-r--r-- 1 root root 239 Mar 29 16:08 dumpadm.conf.new
2 -rw-r--r-- 1 root root 239 Mar 29 16:08 dumpadm.conf.orig
2 -rw-r--r-- 1 root root 140 Mar 29 16:08 dumpadm.out.orig
4 -rw-r--r-- 1 root other 2002 Mar 29 15:41 system
● The /etc/vx directory also holds a file called /etc/vx/volboot.
This is the VxVM software bootstrap file. It is an ASCII file that
adheres to a very strict format and should not be edited. This file has
the following characteristics:
● It is 512 bytes in length, including padding.
● It is updated by using the vxdctl command; never edit it directly.
● The volboot file holds the VxVM software host identifier
hostid. This is usually the Solaris OE node name, not the
hardware hostid. Keep in mind the following concepts:
● The VxVM software hostid does not have to match the
server’s node name, which can cause confusion.
● The hostid is used to establish disk and disk group
ownership.
● If two or more servers can access the same disks using the
same bus, the VxVM software hostid ensures that the two
hosts do not interfere with each other when accessing the
VxVM software disks.
● The volboot file can also contain a list of simple disks for
rootdg. Refer to the ‘‘VxVM Software Disks’’ on page 1-21 for
more information.
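The 512-byte characteristic can be checked from the shell. The sketch below is illustrative only: the path and file contents are hypothetical stand-ins (never hand-edit or recreate the real /etc/vx/volboot; use the vxdctl command). It simply shows that a padded volboot-style file reports exactly 512 bytes:

```shell
# Hypothetical stand-in for /etc/vx/volboot; the real file is maintained
# solely by vxdctl and must never be edited by hand.
f=/tmp/volboot.sample
printf 'volboot 3.1 0.1 hostid myhost\n' > "$f"   # invented sample contents
# Pad the file out to 512 bytes with NULs, mimicking the padding rule:
pad=$((512 - $(wc -c < "$f")))
dd if=/dev/zero bs=1 count="$pad" >> "$f" 2>/dev/null
wc -c < "$f"                                      # reports 512
```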
The /var/vxvm/tempdb directory holds the backup copy of the
configuration database for each configured disk group, as seen in this
list output.
/var/vxvm/tempdb
total 34
2 drwxr-xr-x 2 root root 512 Mar 29 17:15 .
2 drwxr-xr-x 3 root sys 512 Mar 29 17:15 ..
18 -rw-r--r-- 1 root root 8704 Mar 29 17:15 DGa
6 -rw-r--r-- 1 root root 2560 Mar 29 17:15 DGb
6 -rw-r--r-- 1 root root 3072 Mar 29 17:15 rootdg
The /dev/vx directory contains the logical volume device files that are
used to access the VxVM software objects. These files are shown here in a
list output.
/dev/vx
clust dmp dsk iod rdmp task trace
config dmpconfig info netiod rdsk taskmon
Figure 1-4 illustrates the relationship between physical and virtual VxVM
software objects.
[Figure 1-4: Physical objects and virtual objects. Physical disks, each divided into a private region and a public region, become VxVM software disks; subdisks on those disks build plexes, the plexes build a volume, and all of the objects reside in a disk group.]
Physical disks are storage devices where data is ultimately stored.
Physical disks, or physical objects, are identified by the Solaris OE using a
unique identifier called a ctd number. The components of this identifier are:
● c – The system controller or host bus adapter number
● t – A Small Computer System Interface (SCSI) target identifier
● d – A device or logical unit number
● s – A slice or partition (appended when a specific slice is addressed)
The VxVM software uses a drive ctd number for identification of the
physical device when it is brought under VxVM software control.
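As a quick illustration of the naming scheme, the components of a ctd name can be pulled apart with standard shell parameter expansion. The device name below is an example, not a reference to any particular system:

```shell
# Split an example ctd device name into its four components.
dev=c1t3d0s2
c=${dev#c};  c=${c%%t*}    # controller/host bus adapter number
t=${dev#*t}; t=${t%%d*}    # SCSI target identifier
d=${dev#*d}; d=${d%%s*}    # device or logical unit number
s=${dev##*s}               # slice or partition
echo "controller=$c target=$t device=$d slice=$s"
```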
Initialized Disks
Initialized disks are reformatted with either one or two partitions, and all
data is destroyed. The partitions are used to store the VxVM software
configuration and data areas called private and public regions.
The public region uses the remaining space available on the physical disk
to store subdisks. The public region has the following characteristics:
● It is usually slice 4.
● It is used for data storage.
● The region is maintained by the VxVM software commands.
● It is assigned vtoc tag number 14 for identification purposes.
Encapsulated Disks
Disk Groups
A named collection of VxVM software disks that share a common
configuration is called a disk group. Common configuration refers to a set of
records that provide detailed information about related VxVM software
objects, their connections and attributes. This configuration information is
stored in the private region of the VxVM software disks. A backup copy
for each configured disk group is stored in /var/vxvm/tempdb.
Disk groups are virtual objects and have the following characteristics:
● The default disk group is rootdg.
● Additional disk groups can be created on the fly.
● Disk group names are a maximum of 31 characters in length.
● Disk groups can be renamed.
● Disk groups are versioned.
● Disk groups allow grouping of the VxVM software disks into logical
collections.
● Disk groups can be moved from one system to another with an
import and deport process.
● Volumes created within a specific disk group can use only the VxVM
software disks that are a member of that disk group.
● Volumes and disks can be moved among disk groups.
Note – Moving volumes and disks among disk groups using an early
version of the VxVM software was a risky procedure. The VxVM software
version 3.2 has new options for the vxdg command to move the VxVM
software objects among disk groups and to split and join disk groups.
These new options require a special license.
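The deport and import steps can be sketched as follows. The vxdg function below is a stub that only echoes its arguments so the example runs anywhere; on a live system, remove the stub so the real vxdg command is used. The disk group name datadg is hypothetical:

```shell
# Stub standing in for the real vxdg command (remove on a live system).
vxdg() { echo "would run: vxdg $*"; }

vxdg deport datadg   # on the source host: release ownership of the group
vxdg import datadg   # on the target host: take ownership of the group
```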
Subdisks
Subdisks are contiguous blocks of space. Subdisks provide the basic
building blocks for the VxVM software plexes and volumes, creating the
VxVM software basic unit of storage allocation. Subdisks are virtual
objects.
Subdisk names are based on the VxVM software disk name where they
reside, appended with an incremental numeric identifier. Figure 1-5 on
page 1-25 illustrates how subdisk names are derived.
[Figure 1-5: Subdisk naming. Subdisks disk01-01 and disk01-03 reside in the public region of the VxVM software disk disk01, alongside its private region.]
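The naming rule can be sketched in a few lines of shell. The disk name and the two-digit suffix format are taken from the examples in this module (disk01-01, disk01-03):

```shell
# Generate subdisk names: the VxVM disk name plus an appended
# incremental numeric identifier.
dmname=disk01
i=1
while [ "$i" -le 3 ]; do
    printf '%s-%02d\n' "$dmname" "$i"   # disk01-01, disk01-02, disk01-03
    i=$((i + 1))
done
```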
Plexes
The VxVM software virtual objects built from subdisks are called plexes. A
plex consists of one or more subdisks located on one or more VxVM
software disks.
Plexes are:
● Also known as submirrors
● Mid-tier building blocks of the VxVM software volumes
● Named based on the name of the volume for which it is a submirror,
plus an appended incremental numeric identifier
● Organized using the following methods:
● Concatenation
● Stripe (RAID 0)
● Mirror (RAID 1)
● Striping with parity (RAID 5)
[Figure: Plex naming. Subdisks disk01-01 and disk01-03, in the public region of disk01, form plex vol01-01, the first plex (submirror) of volume vol01; the rest of the public region is free space.]
Volumes
Volumes are virtual devices (virtual objects) that appear to be physical
disks to applications, databases and file systems. Volumes are the VxVM
software’s top-tier virtual objects. Although volumes appear to be
physical disks, they do not share the limitations of physical disks.
[Figure: Plexes and volumes. Plex vol01-01 is built from subdisks disk01-01, disk01-02, and disk01-03; a volume contains plexes whose subdisks can come from several disks, such as disk01-01 through disk01-03 and disk02-01 through disk02-03.]
Figure 1-8 illustrates how a layered volume differs from a standard
volume.
[Figure 1-8: Standard volume compared with a layered volume. In a RAID 0+1 volume, the volume mirrors two striped plexes built directly from subdisks (SD). In a RAID 1+0 (layered) volume, the volume contains a striped plex whose columns are subvolumes; each subvolume is a mirror of subplexes built from subdisks.]
Resynchronizing Volumes
The VxVM software ensures that all data stored redundantly in
mirrored or RAID 5 volumes remains in a consistent state. Data is written
in parallel to the plexes of a mirrored volume, and data and parity are
written together to a RAID 5 volume, so the data remains consistent
unless a system crash or physical disk failure interrupts the writes.
If that occurs, data can become inconsistent or unsynchronized.
System failures are not the only reason data can become unsynchronized.
Data can become inconsistent during maintenance procedures when a
mirrored plex or RAID 5 element is taken offline. If data becomes
inconsistent between mirrored plexes or between a RAID 5 volume’s data,
use a volume resynchronization to correct the problem.
Dirty Flag
The VxVM software keeps track of data synchronization operations using
a flag called the dirty flag. When data is written to a volume, the volume
is marked as dirty until it stops or all writes are completed and the data
in the volume's plexes is identical.
Volumes that do not have the dirty flag reset require volume
resynchronization when started or during a system reboot.
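The protocol behind the dirty flag can be sketched as follows. This is conceptual shell, not VxVM code: the point is only that the flag is raised before any plex is written and cleared after all plexes hold identical data, so a crash between those two steps leaves the flag set and forces a resynchronization at the next volume start:

```shell
# Conceptual model of the dirty-flag protocol (not VxVM code).
dirty=0
plex1='' ; plex2=''
write_volume() {
    dirty=1                              # raise the flag before writing
    plex1=$1                             # write to the first plex...
    plex2=$1                             # ...and to its mirror
    [ "$plex1" = "$plex2" ] && dirty=0   # all writes complete and identical
}
write_volume 'data-block'
echo "dirty=$dirty"                      # a clean run ends with dirty=0
```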
Resynchronization Process
The resynchronization (resync) process depends on the type of volume
started.
There are two types of logs: RAID 5 and DRL. This section contains a
detailed description of these logs.
RAID 5 Logs
RAID 5 logs perform the following tasks:
● RAID 5 logs protect data and parity calculations from system
crashes.
Degraded RAID 5 volumes are not restarted during a reboot; they
must be started manually. Attaching a RAID 5 log does not alter this
fact.
● RAID 5 parity is used only when data needs to be rebuilt. It is only
written, never checked.
Run the vxr5check utility regularly to check parity.
● RAID 5 logs appear as a plex with a plex state of LOG.
Dirty Region Logging (DRL)
● DRLs are not supported on core system volumes, such as the /, /usr,
and /var volumes.
● Dirty region logging is usually implemented as a log plex.
Note – The active state is the most common state for plexes on a well-
running system.
● Stale – If there is a possibility a plex does not have the complete and
current volume contents, this plex is placed in a stale state.
Additionally, if I/O errors occur on a plex, the kernel stops using
and updating the plex, and the operation sets the state of the plex to
stale.
● To reattach the stale plex to the volume, synchronize the data,
and set the plex to the active state, use the following command:
# vxplex -g diskgroup att volumename plexname
● To force a plex into the stale state, use the following command:
# vxplex -g diskgroup det plexname
● Offline – The following command detaches a plex from its volume
and changes the plex state to offline:
# vxmend -g diskgroup off plexname
Although the detached plex is associated with the volume, the
changes to the volume are not reflected to the plex while it is in the
offline state.
● To set the plex state to stale and begin recovering data after the
vxvol start operation, use the following command:
# vxplex -g diskgroup att volumename plexname
● Temp – A utility sets the plex state to temp at the start of an
operation, and also sets the plex to an appropriate state at the end of
the operation.
For example, attaching a plex to an enabled volume requires copying
volume contents to the plex before it can be fully attached. If the
system goes down for any reason, a temp plex state indicates the
operation is incomplete; a subsequent vxvol operation starts to
disassociate plexes in the temp state.
[Figure: Plex state transitions. A plex is created, data is resynchronized (vxplex att), the volume is shut down (vxvol stop), the plex is put online (vxmend on), or an uncorrectable I/O failure occurs. PS = plex state; PKS = plex kernel state.]
When an uncorrectable error happens, the following plex states are used
to manage recovery:
1. An uncorrectable I/O failure occurs, and the PS transitions to the
iofail state. The PKS transitions to the detached state.
2. Repairs are effected, and the plex is reattached to the volume. This
causes a data resynchronization to run. Once the data in the plex is
updated, the PS transitions to active, and the PKS transitions to
enabled.
The plex is now usable by the volume.
Use the vxprint utility to view plex states and their transitions. If a small
plex is resynchronizing, vxprint might not show the transition states
because they can happen too fast for vxprint to report.
Device Discovery
Device discovery is separated from the base VxVM software functionality
into a separate layer. Previously, the VxVM software discovered block
storage devices by scanning the Solaris OE device tree using the vxiod
daemon. This strategy assumed that the Solaris OE device tree remained
static or changed very little. When changes occurred, command line
utilities were used to update the VxVM software configuration.
With the growth of disk subsystem vendors and storage area networks
implemented within storage environments, the previous strategy proved
to be inadequate. The current device discovery facility is designed to
allow the dynamic addition of new storage subsystems without
modification to the VxVM software modules. The VxVM software DDL
discovers all disks that are visible to the Solaris OE.
The DDL is part of the vxconfigd daemon, and performs the following:
● Discovers all block storage devices connected to a host
● Probes using SCSI commands from user space to determine the
following:
● Type of disks
● Number of paths
● Attributes
● Executes from the vxdisk utility when a vxdisk scan command is
performed
Figure 1-11 illustrates DDL components and the relationship to the VxVM
software kernel.
[Figure 1-11: DDL components. Applications and databases access volumes through the DMP driver in the kernel; the vxconfigd daemon, with its DDL, discovers the enclosures and arrays.]
Libraries are included with the VxVM software for the most popular
brands of arrays. If a new array is created by a vendor, the VxVM
software only needs an updated device discovery library to recognize the
new array. The new library can be added dynamically.
Support can be added dynamically for the following types of disk arrays:
● Active/Active
● Active/Passive (Auto-trespass mode)
● Active/Passive (LUN group failover)
● JBODs
JBOD Support
The device discovery facility can detect multi-pathed disk storage devices
that do not belong to a disk array but are capable of being multi-pathed
by DMP. Use the vxddladm utility to add or remove JBODs.
Disks must have a unique serial number that can be read through a SCSI
inquiry or the mode_sense command to be detected correctly.
DDL Administration
A sample output from the execution of the vxddladm command with the
listsupport option is shown here. This option lists all disk enclosures
currently supported by the system’s DDL.
bash-2.03# vxddladm listsupport
LIB_NAME ARRAY_TYPE VID PID
=======================================================================
libvxap.so A/A SUN AP_NODES
libvxatf.so A/A VERITAS ATFNODES
libvxeccs.so A/A ECCS all
libvxemc.so A/A EMC SYMMETRIX
libvxhds.so A/A HITACHI OPEN-*
libvxhitachi.so A/PG HITACHI DF350
Note – See the vxddladm man page for command syntax options.
Device Naming
Enclosure-based and operating system (OS)-independent naming changes
the way storage devices are identified by the VxVM software and
provides the following benefits:
● Naming is independent of the OS.
This mitigates device naming confusion in multi-OS environments.
The device name format is based on the name of the enclosure. Use the
following format to name devices:
logical_enclosure_name_#
Default disk names are based on the vendor identification (ID). For
example, disks in a Sun StorEdge T3 multi-path array named purple0 by
the VxVM software have the following names:
purple0_1
purple0_2
purple0_3
Notice that the DMP metanodes now use devices named SENA0_xyz. A
long listing of the directory shows that c1t0d0 and SENA0_0 are the same
device: compare the major and minor numbers.
bash-2.03# ls -las /dev/vx/dmp
total 12
10 drwxr-xr-x 2 root other 5120 Jul 5 09:17 .
2 drwxr-xr-x 6 root other 512 Jun 8 09:49 ..
0 brw------- 1 root other 68, 2 Jul 5 09:05 SENA0_0
...
0 brw------- 1 root other 68, 2 Jun 8 09:41 c1t0d0
...
To enable this support within the Sun StorEdge T3 array, configure the
mp_support parameter as follows:
● Single-host environment – Set mp_support to rw.
● Multi-host environment – Set mp_support to std.
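The mp_support parameter is changed on the array itself. The following is a hypothetical T3 command-line session; the sys command and prompt shown are assumptions based on common Sun StorEdge T3 administration, so verify the exact syntax against the array documentation:

```
t300:/:<1> sys mp_support rw
```

In a multi-host environment, substitute std for rw.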
The following example command shows how to use the ordered storage
allocation option:
# vxassist -g testdg -o ordered make vol01 1g layout=mirror-stripe ncol=3 \
disk02 disk04 disk06 disk01 disk03 disk05
Encapsulating Disks
Objectives
Upon completion of this module, you should be able to:
● Describe the encapsulation and unencapsulation processes for data
and boot (system) disks
● Encapsulate a data disk
● Unencapsulate a data disk
● Encapsulate a boot disk
● Recover the loss of the encapsulated disk in a boot disk mirrored pair
● Describe best practice partition configurations for boot disk
encapsulation
● Unencapsulate a boot disk, including a compact disc read-only
memory (CD-ROM)
● Perform a successful vxunroot process on a boot disk mirror,
including a mirror that has a replaced encapsulated disk
● Recover an encapsulated boot disk that failed the vxunroot process
Relevance
Additional Resources
When the encapsulation process is finished, the encapsulated disk has two
partitions. These are usually partition 6, used for the public region, and
partition 7, used for the private region. Figure 2-1 illustrates this
partitioning.
[Figure 2-1: Encapsulation partitioning. The original slices 0 through 4 and the free space are mapped into a public region (slice 6) and a private region (slice 7); slice 2 continues to represent the entire disk.]
Although the VxVM software repartitions the disk, the original data
resides in the same blocks it occupied prior to encapsulation. The data
is not moved or overwritten. The VxVM software is responsible for
performing the translation necessary to make the data available as VxVM
software volumes.
Pre-Encapsulation df -k Command
This example output from the df -k command displays the file systems
currently mounted from disk c1t3d0.
# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t0d0s0 7670973 1536449 6057815 21% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 1209520 16 1209504 1% /var/run
swap 1209520 16 1209504 1% /tmp
Notice that the beginning sector of unallocated space is the last sector of
partition 4 plus 1.
The print option on the format utility partition menu shows the present
partition configuration of the disk. Print this information and save a copy
in a safe place to use if the disk needs to be unencapsulated. Note that
there are at least two unused partitions available in the partition table
(partitions 6 and 7).
partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)
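One common way to save this information on Solaris is to capture the VTOC with prtvtoc, so it can later be restored if the disk must be unencapsulated. In the sketch below, prtvtoc is stubbed with a shell function so the example is portable; on a real Solaris system, delete the stub and the real /usr/sbin/prtvtoc is used. The disk name is an example:

```shell
# Stub standing in for /usr/sbin/prtvtoc (delete on a real Solaris system).
prtvtoc() { echo "* sample vtoc for $1"; }

disk=c1t3d0
backup="/tmp/${disk}.vtoc"
prtvtoc "/dev/rdsk/${disk}s2" > "$backup"   # save the partition table
cat "$backup"
```

On a live system, the matching restore step would be along the lines of fmthard -s /tmp/c1t3d0.vtoc /dev/rdsk/c1t3d0s2; verify against the fmthard man page before use.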
Use this operation to convert one or more disks to use the Volume Manager.
This adds the disks to a disk group and replaces existing partitions
with volumes. Disk encapsulation requires a reboot for the changes
to take effect.
More than one disk or pattern may be entered at the prompt. Here are
some disk selection examples:
c1t3d0
4. Enter the name of the disk group for this encapsulated disk to join.
In the following example the disk group name is storagedg.
Answer all prompts as appropriate for this particular encapsulation
procedure:
Which disk group [<group>,list,q,?] (default: rootdg) storagedg
A new disk group will be created named storagedg and the selected
disks will be encapsulated and added to this disk group with
disk names that will be specified interactively.
c1t3d0
c1t3d0
A new disk group storagedg will be created and the disk device c1t3d0 will
be encapsulated and added to the disk group with the disk name storage01.
Caution – The size of the private region of all member disks in a disk
group must be the same.
The encapsulation will require two or three reboots which will happen
automatically after the next reboot. To reboot execute the command:
This will update the /etc/vfstab file so that volume devices are
used to mount the file systems on this disk device. You will need
to update any other references such as backup scripts, databases,
or manually created swap devices.
This example illustrates how the VxVM software views this disk prior to
completion of the encapsulation procedure. After the encapsulation
process is completed, compare this example to the same information for
the encapsulated disk.
Select an operation to perform: list
Device: c1t3d0s2
devicetag: c1t3d0
type: sliced
flags: online error private autoconfig
errno: Disk is not usable <--- Disk is not usable
Multipathing information:
numpaths: 2
c1t3d0s2 state=enabled
c2t3d0s2 state=enabled
Goodbye.
A system reboot is required before the disk is available for use by the
VxVM software.
#
# Storage
#
/dev/vx/dsk/storagedg/fs1 /dev/vx/rdsk/storagedg/fs1 /fs1 ufs 1 yes logging
/dev/vx/dsk/storagedg/fs2 /dev/vx/rdsk/storagedg/fs2 /fs2 ufs 2 yes logging
/dev/vx/dsk/storagedg/fs3 /dev/vx/rdsk/storagedg/fs3 /fs3 ufs 3 yes logging
/dev/vx/dsk/storagedg/fs4 /dev/vx/rdsk/storagedg/fs4 /fs4 ufs 4 yes logging
#NOTE: volume rootvol (/) encapsulated partition c1t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c1t0d0s1
#NOTE: volume fs1 (/fs1) encapsulated partition c1t3d0s0
#NOTE: volume fs2 (/fs2) encapsulated partition c1t3d0s1
#NOTE: volume fs3 (/fs3) encapsulated partition c1t3d0s3
#NOTE: volume fs4 (/fs4) encapsulated partition c1t3d0s4
Post-Encapsulation df -k Command
/dev/vx/dsk/storagedg/fs1
2055705 2073 1991961 1% /fs1
/dev/vx/dsk/storagedg/fs3
2055705 2073 1991961 1% /fs3
/dev/vx/dsk/storagedg/fs4
2055705 2073 1991961 1% /fs4
The following example of the prtvtoc command shows that the c1t3d0
partition table was modified to have only two partitions (partitions 6 and
7). Partition 6 is the public region and partition 7 is the private region.
This partitioning scheme is indicative of encapsulated disks. Even though
this disk partition table was modified, the data from the original four
partitions occupy the same blocks as they did prior to the encapsulation
process. The data was not moved or otherwise disturbed. The data
remains in this configuration unless the volumes are grown or converted
to a striped layout. This is an important distinction to remember if this
disk is ever unencapsulated.
bash-2.03# prtvtoc /dev/rdsk/c1t3d0s2
* /dev/rdsk/c1t3d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 133 sectors/track
* 27 tracks/cylinder
* 3591 sectors/cylinder
* 4926 cylinders
* 4924 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 17682084 4294963705 17678492
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
2 5 01 0 17682084 17682083
6 14 01 0 17682084 17682083
7 15 01 17678493 3591 17682083
The following example of the format utility prints the new partition table
for c1t3d0. Notice that partition 6 overlaps the entire disk, including
partition 7, which is the private region.
partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)
The vxdisk list command in the following example shows that c1t3d0
is under the management of the VxVM software.
bash-2.03# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 sliced rootdisk rootdg online
c1t1d0s2 sliced - - error
c1t2d0s2 sliced - - error
c1t3d0s2 sliced storage01 storagedg online
c1t4d0s2 sliced - - online
c1t5d0s2 sliced - - online
c1t6d0s2 sliced - - error
c1t16d0s2 sliced - - error
c1t17d0s2 sliced - - error
c1t18d0s2 sliced - - error
c1t19d0s2 sliced - - error
c1t20d0s2 sliced - - online
c1t21d0s2 sliced - - online
c1t22d0s2 sliced rootmirror rootdg online
The dg File
The dg file in the following example identifies the disk group to which
this encapsulated disk belongs.
bash-2.03# more *
::::::::::::::
dg
::::::::::::::
storagedg
::::::::::::::
The dmname file in the following example lists the VxVM software disk
name of the encapsulated disk.
::::::::::::::
dmname
::::::::::::::
storage01
::::::::::::::
The following example of the newpart file displays the new partition
information for this encapsulated disk. Additionally, it lists all the VxVM
software commands executed to successfully complete the encapsulation
process.
::::::::::::::
newpart
::::::::::::::
# volume manager partitioning for drive c1t3d0
0 0x0 0x000 0 0
1 0x0 0x000 0 0
2 0x5 0x201 0 17682084
3 0x0 0x000 0 0
4 0x0 0x000 0 0
5 0x0 0x000 0 0
6 0xe 0x201 0 17682084
7 0xf 0x201 17678493 3591
#vxmake vol fs1 plex=fs1-%%00 usetype=gen
#vxmake plex fs1-%%00 sd=storage01-B0,storage01-%%00
#vxmake sd storage01-%%00 disk=storage01 offset=0 len=4197878
#vxmake sd storage01-B0 disk=storage01 offset=17678492 len=1 putil0=Block0 comment="Remap of block 0"
#vxvol start fs1
#rename c1t3d0s0 fs1
#vxmake vol fs2 plex=fs2-%%01 usetype=gen
#vxmake plex fs2-%%01 sd=storage01-%%01
#vxmake sd storage01-%%01 disk=storage01 offset=4197878 len=4197879
#vxvol start fs2
#rename c1t3d0s1 fs2
#vxmake vol fs3 plex=fs3-%%02 usetype=gen
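Note the offsets in these vxmake commands: the fs1 subdisk is one block shorter than the original slice 0, and the fs2 subdisk offset is one less than the original slice 1 start. The example figures suggest that subdisk offsets within the public region are shifted by one block, consistent with block 0 being protected and remapped to the end of the disk (storage01-B0). A quick check against the saved vtoc figures:

```shell
# Figures copied from the newpart and vtoc files in this example.
FS1_SD_LEN=4197878      # vxmake sd storage01-%%00 ... len=4197878
ORIG_S0_SIZE=4197879    # original slice 0 size
FS2_SD_OFFSET=4197878   # vxmake sd storage01-%%01 ... offset=4197878
ORIG_S1_START=4197879   # original slice 1 start
# Both differ by exactly one block: block 0 is protected and remapped.
echo $((ORIG_S0_SIZE - FS1_SD_LEN)) $((ORIG_S1_START - FS2_SD_OFFSET))
```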
The primary_node file in the following example lists the original c#t#d#
name of the encapsulated disk.
::::::::::::::
primary_node
::::::::::::::
c1t3d0
::::::::::::::
The following example of the vtoc file saves the original partition table of
the encapsulated disk. In earlier versions of the VxVM software, this file
was modified and used with the fmthard command to re-partition
disks during the unencapsulation process. The vxmksdpart command
now performs this function.
::::::::::::::
vtoc
::::::::::::::
#THE PARTITIONING OF /dev/rdsk/c1t3d0s2 IS AS FOLLOWS:
#SLICE TAG FLAGS START SIZE
0 0x0 0x200 0 4197879
1 0x0 0x200 4197879 4197879
2 0x5 0x201 0 17682084
3 0x0 0x200 8395758 4197879
4 0x0 0x200 12593637 4197879
5 0x0 0x000 0 0
6 0x0 0x000 0 0
7 0x0 0x000 0 0
If there is not enough free space on the disk and additional free space
cannot be configured, the situation can be alleviated by temporarily
encapsulating the non-conforming disk without a private region, then
mirroring the encapsulated disk volumes to another disk. Once the data is
mirrored to a real VxVM software disk, the encapsulated plexes are
detached, leaving the data on the mirror. At that point, mirror the mirror
disk to another VxVM software disk to complete the operation.
5. Mirror the volumes to a disk that has enough space to mirror both
volumes. In the following example, the volumes are mirrored to a
disk named disk01.
vxassist -g <diskgroup> mirror NPdisk05vol layout=nostripe alloc="disk01"
vxassist -g <diskgroup> mirror NPdisk06vol layout=nostripe alloc="disk01"
6. When the mirroring process is complete, remove the original side of
the mirror. This removes the disk that does not have a private region
configured. Type the following:
vxplex -g <diskgroup> -o rm dis NPdisk05vol-01
vxplex -g <diskgroup> -o rm dis NPdisk06vol-01
7. Remove the old disks from the disk group and return them to their
original state:
vxdg -g <diskgroup> rmdisk NPdisk05
vxdg -g <diskgroup> rmdisk NPdisk06
vxdisk rm c0t5d10s5
vxdisk rm c0t5d10s6
partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)
Power user – Alternatively, for disks with multiple mirrors, the vxplex
command can be looped:
bash-2.03# for i in 1 2 3 4
> do
> vxplex -o rm dis fs$i-02
> done
3. Use the vxprint command to verify that all mirrors are detached:
bash-2.03# vxprint -g storagedg -ht
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)
When this step is completed, all volumes hosted by this disk are
unmounted, as shown in the following example:
bash-2.03# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/rootvol 7670973 1646148 5948116 22% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 1176080 16 1176064 1% /var/run
swap 1176096 32 1176064 1% /tmp
6. Stop all applications using data on these volumes prior to executing
the remaining steps.
7. Edit the /etc/vfstab file by changing the mount statements to
reflect partitions instead of volumes:
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
#/dev/dsk/c1d0s2 /dev/rdsk/c1d0s2 /usr ufs 1 yes -
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/vx/dsk/swapvol - - swap - no -
/dev/vx/dsk/rootvol /dev/vx/rdsk/rootvol / ufs 1 no logging
swap - /tmp tmpfs - yes -
#
# Storage
#
/dev/dsk/c1t3d0s0 /dev/rdsk/c1t3d0s0 /fs1 ufs 1 yes logging
/dev/dsk/c1t3d0s1 /dev/rdsk/c1t3d0s1 /fs2 ufs 1 yes logging
/dev/dsk/c1t3d0s3 /dev/rdsk/c1t3d0s3 /fs3 ufs 1 yes logging
/dev/dsk/c1t3d0s4 /dev/rdsk/c1t3d0s4 /fs4 ufs 1 yes logging
#NOTE: volume rootvol (/) encapsulated partition c1t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c1t0d0s1
8. Mount the partitions. Use the mountall command:
bash-2.03# mountall
bash-2.03# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/rootvol 7670973 1646148 5948116 22% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 1174552 16 1174536 1% /var/run
swap 1174568 32 1174536 1% /tmp
/dev/dsk/c1t3d0s1 2055705 2073 1991961 1% /fs2
/dev/dsk/c1t3d0s3 2055705 2073 1991961 1% /fs3
/dev/dsk/c1t3d0s4 2055705 2073 1991961 1% /fs4
/dev/dsk/c1t3d0s0 2055705 2073 1991961 1% /fs1
It is now safe to start applications that use data contained on these
partitions.
11. Remove the public and private partitions using the format utility.
Type the following:
partition> print
Current partition table (unnamed):
Total disk cylinders available: 4924 + 2 (reserved cylinders)
[Figure: encapsulated boot disk and root mirror layouts. On a two-slice boot disk, slice 0 (/) and slice 1 (swap) receive overlay partitions for rootvol (0) and swapvol (1), the public region spans the disk, and the private region is created in the free space (slice 4). On a four-slice boot disk, /usr (slice 3) and /var (slice 4) receive overlay partitions as well, with slice 6 as the public region and slice 7 as the private region. The mirror disk carries the rootvol and swapvol plexes, with its private region in slice 3 or slice 4.]
Volume Restrictions
The rootvol, swapvol, and usr volumes have restrictions that other
encapsulated volumes do not have. These restrictions are:
● The root volume rootvol must be a member of the rootdg disk
group.
● The rootvol, swapvol, and usr volumes must have the following
specific minor numbers:
● rootvol = 0
● swapvol = 2
● usr = 3
● The rootvol, swapvol, usr, and var volumes use restricted mirrors
that have overlay partitions created for them. Overlay partitions
occupy the same disk blocks as the restricted mirror and are used to
boot the system prior to the availability of the vxconfigd daemon.
● The rootvol, swapvol, usr, var, and opt volumes cannot be grown or
spanned, and cannot occupy a plex that has multiple non-contiguous
subdisks. All data associated with encapsulated system partitions
must reside in contiguous blocks of space.
● When mirroring the boot disk, the mirror disk must be large enough
to hold all plexes on that disk or mirroring fails for one or more
volumes on the encapsulated boot disk.
● The rootvol, swapvol, and usr volumes cannot have a dirty region
log (DRL) attached.
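One way to check the contiguity restriction is to count the subdisk records in each system volume's plex from vxprint -ht output. This awk filter is a hypothetical sketch; the sd record layout is taken from the vxprint headers shown in this module (sd NAME PLEX DISK DISKOFFS ...):

```shell
# Count subdisks attached to a given plex in `vxprint -htg rootdg` output
# read from stdin. A rootvol plex should show one data subdisk plus the
# Block0 remap subdisk.
count_subdisks() {
    plex=$1
    awk -v p="$plex" '$1 == "sd" && $3 == p { n++ } END { print n + 0 }'
}
# Usage: vxprint -htg rootdg | count_subdisks rootvol-01
```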
Encapsulating the boot disk using the vxdiskadm command requires that
a minimum of one disk be configured into rootdg during the vxinstall
processing. This disk is initialized for use as the mirror for the
encapsulated boot disk. Boot disk encapsulation using the vxdiskadm
command is identical to the process outlined for data disks described in
‘‘Data Disk Encapsulation Process’’ on page 2-8. The difference between
the two processes is in the number of reboots. Data disk encapsulation
requires a single reboot, while boot disk encapsulation performs multiple
reboots.
Pre-Encapsulation prtvtoc Command
The prtvtoc command output of the example system boot disk shows
the disk having two partitions and no free unpartitioned space. Boot
disks, unlike data disks, do not require any unpartitioned free space for
encapsulation. In this situation, space for the private region is taken from
the swap partition.
In this example, the space problem was rectified by reducing the size of
the swap partition, leaving sufficient unpartitioned free space for use by
the encapsulation process. Additional swap space can be configured using
swap files to supplement the reduced swap partition size. The corrected
partition map now shows unallocated space.
bash-2.03# prtvtoc /dev/rdsk/c1t0d0s2
* /dev/rdsk/c1t0d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 133 sectors/track
* 27 tracks/cylinder
* 3591 sectors/cylinder
* 4926 cylinders
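The amount of space that must be freed from swap is small: one cylinder for the private region on this geometry. As a sketch of the arithmetic (values from the example geometry; variable names are illustrative):

```shell
# One cylinder on this disk = 3591 sectors of 512 bytes. Shrinking swap by
# at least this much leaves room for the private region.
SECTORS_PER_CYL=3591
BYTES_PER_SECTOR=512
PRIV_BYTES=$((SECTORS_PER_CYL * BYTES_PER_SECTOR))
echo "private region needs $PRIV_BYTES bytes ($((PRIV_BYTES / 1024)) Kbytes)"
```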
The /etc/vfstab file displays the new devices to mount and to fsck
after the encapsulation in this example. Notice that comments at the end
of this example file describe the pre-encapsulation device configuration of
the / and swap partitions.
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
#/dev/dsk/c1d0s2 /dev/rdsk/c1d0s2 /usr ufs 1 yes -
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/vx/dsk/swapvol - - swap - no -
/dev/vx/dsk/rootvol /dev/vx/rdsk/rootvol / ufs 1 no logging
swap - /tmp tmpfs - yes -
#NOTE: volume rootvol (/) encapsulated partition c1t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c1t0d0s1
Post-Encapsulation prtvtoc Command
Additionally, there are two overlay partitions for the / and swap partitions
that block-for-block match the original / and swap partition locations.
This is an important distinction. The overlay partitions are used during
initial boot processing prior to the initialization of the vxconfigd
daemon. Because the data in an encapsulated disk is not overwritten, the
overlay partitions map to the proper data blocks. This allows the system
to access the rootvol and swapvol volumes without an active vxconfigd
process.
bash-2.03# prtvtoc /dev/rdsk/c1t0d0s2
* /dev/rdsk/c1t0d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 133 sectors/track
* 27 tracks/cylinder
* 3591 sectors/cylinder
* 4926 cylinders
* 4924 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 15581349 4279385947 4294967295
* 17682084 4292866561 15581348
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 0 15581349 15581348 <-- Overlay partition for /
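The block-for-block claim can be verified by comparing overlay partition 0 in this listing with slice 0 in the original vtoc file shown later in this module; both start at sector 0 with the same sector count:

```shell
# Overlay partition 0 (post-encapsulation prtvtoc) vs. original slice 0
# (saved vtoc file). Figures copied from the examples in this module.
OVERLAY0_START=0;  OVERLAY0_COUNT=15581349
ORIG_S0_START=0;   ORIG_S0_SIZE=15581349
[ "$OVERLAY0_START" -eq "$ORIG_S0_START" ] &&
[ "$OVERLAY0_COUNT" -eq "$ORIG_S0_SIZE" ] &&
echo "overlay partition 0 maps block-for-block onto the original /"
```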
The following example of the print selection from the format utility
shows another view of the encapsulated disk partitioning scheme.
partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)
Notice in the following example output that each system volume (root
and swap) is now mirrored using a VxVM software disk called
rootmirror. The example rootmirror disk provides the following
plexes:
● rootvol - plex = rootvol-02 / sd = rootmirror-01
● swapvol - plex = swapvol-02 / sd = rootmirror-02
bash-2.03# vxprint -ht -g rootdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
Example output from the vxdisk list command shows both the
encapsulated boot disk (rootdisk) and its mirror (rootmirror). Both
disks must be members of the rootdg disk group.
bash-2.03# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 sliced rootdisk rootdg online
c1t1d0s2 sliced - - error
c1t2d0s2 sliced - - error
c1t3d0s2 sliced - - error
c1t6d0s2 sliced - - error
c1t16d0s2 sliced - - error
c1t17d0s2 sliced - - error
c1t18d0s2 sliced - - error
c1t19d0s2 sliced - - error
c1t20d0s2 sliced - - online
c1t21d0s2 sliced - - online
c1t22d0s2 sliced rootmirror rootdg online
The following example of the prtvtoc command output for the boot disk
displays an encapsulation partition scheme with overlay partitions for /
and swap. Notice the starting and stopping sectors for each overlay
partition.
bash-2.03# prtvtoc /dev/rdsk/c1t0d0s2
* /dev/rdsk/c1t0d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 133 sectors/track
* 27 tracks/cylinder
* 3591 sectors/cylinder
* 4926 cylinders
* 4924 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 15581349 4279385947 4294967295
* 17682084 4292870152 15584939
* 16637103 1041390 17678492
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 0 15581349 15581348
1 3 01 15584940 1052163 16637102
2 5 00 0 17682084 17682083
3 14 01 0 17682084 17682083
4 15 01 17678493 3591 17682083
The following example shows the output from the prtvtoc command for
the root mirror disk. Notice that it not only mirrors the public and private
regions of the boot disk, but additionally creates overlay partitions for /
and swap. The partitions are configured in this manner, so the mirror disk
can be used to boot the system if the primary fails. The difference in the
starting sector for overlay partition 0 is because this is a VxVM software-
initialized disk which has a private region configured within the first two
cylinders of the disk. This causes the offset to sector 7182 for the overlay
of partition 0.
bash-2.03# prtvtoc /dev/rdsk/c1t22d0s2
* /dev/rdsk/c1t22d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 133 sectors/track
* 27 tracks/cylinder
* 3591 sectors/cylinder
* 4926 cylinders
* 4924 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
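The offset described above is exactly two cylinders of this geometry:

```shell
# On the rootmirror disk, the first two cylinders are reserved ahead of
# the data, so overlay partition 0 starts at 2 * 3591 = 7182.
SECTORS_PER_CYL=3591
RESERVED_CYLS=2
OVERLAY0_START=$((RESERVED_CYLS * SECTORS_PER_CYL))
echo "overlay partition 0 starts at sector $OVERLAY0_START"
```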
The format utility print option for the example rootdisk shows the
boot disk slicing on cylinder boundaries.
partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)
partition>
The following example output from the format utility print option is for
the rootmirror disk. Notice how the private region (slice 3) occupies
cylinder 0. The public region (slice 4) and overlay partition 0 start at
cylinder 2. Contrast this with the output for the encapsulated boot disk.
partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)
File List
bash-2.03# ls -las
total 12
2 drwxr-xr-x 2 root other 512 Apr 26 19:17 .
2 drwxr-xr-x 4 root other 512 Apr 24 15:29 ..
2 -rw-r--r-- 1 root other 9 Apr 26 19:17 dmname
2 -rw-r--r-- 1 root other 827 Apr 26 19:17 newpart
2 -rw-r--r-- 1 root other 7 Apr 26 19:17 primary_node
2 -rw-r--r-- 1 root other 452 Apr 26 19:17 vtoc
The dmname file in this example lists the VxVM software disk name of the
encapsulated disk.
bash-2.03# more *
::::::::::::::
dmname
::::::::::::::
rootdisk
::::::::::::::
The following example of the newpart file displays the new partition
information for this encapsulated disk. Additionally, it lists all the VxVM
software commands executed to successfully complete the encapsulation
process.
::::::::::::::
newpart
::::::::::::::
# volume manager partitioning for drive c1t0d0
0 0x2 0x200 0 15581349
1 0x3 0x201 15584940 1052163
2 0x5 0x200 0 17682084
3 0xe 0x201 0 17682084
4 0xf 0x201 17678493 3591
5 0x0 0x000 0 0
6 0x0 0x000 0 0
7 0x0 0x000 0 0
#vxmake vol rootvol plex=rootvol-%%00 usetype=root logtype=none
#vxmake plex rootvol-%%00 sd=rootdisk-B0,rootdisk-%%00
#vxmake sd rootdisk-%%00 disk=rootdisk offset=0 len=15581348
#vxmake sd rootdisk-B0 disk=rootdisk offset=17678492 len=1 putil0=Block0 comment="Remap of block 0"
#vxvol start rootvol
#rename c1t0d0s0 rootvol
#vxmake vol swapvol plex=swapvol-%%01 usetype=swap
#vxmake plex swapvol-%%01 sd=rootdisk-%%01
#vxmake sd rootdisk-%%01 disk=rootdisk offset=15584939 len=1052163
#vxvol start swapvol
#rename c1t0d0s1 swapvol
The primary_node file in this example lists the original c#t#d# name of
the encapsulated disk.
::::::::::::::
primary_node
::::::::::::::
c1t0d0
::::::::::::::
The following example of the vtoc file saves the original partition table of
the encapsulated disk. In earlier versions of the VxVM software, this file
was modified and used with the fmthard command to re-partition
disks during the unencapsulation process. The vxmksdpart command
now performs this function.
::::::::::::::
./vtoc
::::::::::::::
#THE PARTITIONING OF /dev/rdsk/c1t0d0s2 IS AS FOLLOWS:
#SLICE TAG FLAGS START SIZE
0 0x2 0x200 0 15581349
1 0x3 0x201 15584940 1052163
2 0x5 0x200 0 17682084
3 0x0 0x000 0 0
4 0x0 0x000 0 0
5 0x0 0x000 0 0
6 0x0 0x000 0 0
7 0x0 0x000 0 0
This section addresses the following two methods of bringing a boot disk
under VxVM software management using the Sun Enterprise Services
best practice recommendations:
● Manual process using the command line
● Scripted process using the Sun Enterprise Installation Services (EIS)
CD-ROM
Note – The Sun Enterprise Services best practices for boot disk
management are based on guidelines from the Sun BluePrints OnLine
document Toward a Reference Configuration for VxVM Managed Boot Disks,
part number 806-6197-10.
Note – If the boot disk was sliced with swap as the first slice, reverse the
order of mirroring for the / and swap slices.
Name Tag
UNASSIGNED 0x00
BOOT 0x01
ROOT 0x02
SWAP 0x03
USR 0x04
BACKUP 0x05
STAND 0x06
VAR 0x07
HOME 0x08
ALTSCTR 0x09
CACHE 0x0a
VxVM PRIVATE REGION 0x0f
VxVM PUBLIC REGION 0x0e
# prtvtoc /dev/rdsk/c1t1d0s2
* /dev/rdsk/c1t1d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 133 sectors/track
* 27 tracks/cylinder
* 3591 sectors/cylinder
* 4926 cylinders
* 4924 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 3591 3591 7181
* 4205061 4290769417 7181
* 17682084 4281490273 4205060
* 9455103 8226981 17682083
*
Note – In this example, the capture was modified to show the /var slice
as not built. This was done to help show how to use the vxmksdpart
command to create overlay partitions. The VxVM software version 3.2
creates overlay partitions for /, swap, and /var, so you do not need
to execute any additional commands. If /opt or other non-system
partitions are defined on the boot disk, use vxmksdpart to define those
partitions.
In this example, an overlay partition must be built for the /var slice.
The subdisk information listed in Table 2-3 is needed as input to the
vxmksdpart command.
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 3591 3591 7181
* 4205061 4290769417 7181
* 17682084 4281490273 4205060
* 9455103 8226981 17682083
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 7182 4197879 4205060
1 3 01 4205061 1052163 5257223
2 5 00 0 17682084 17682083
3 15 01 0 3591 3590
4 14 01 7182 17674902 17682083
6 7 00 5257224 4197879 9455102
10. Set the dump device to a non-VxVM software disk, if available. If
such a disk is not available, use the boot disk. Type the following:
# dumpadm -d /dev/dsk/c0t0d0s1
11. Create the OpenBoot PROM device aliases, if needed. Build the
aliases using the eeprom commands nvedit or nvalias at the
OpenBoot PROM prompt.
bash-2.03# init 6
7. After the reboot completes, check the devices used for the / and
swap partitions. Verify that these partitions use non-VxVM software
objects:
bash-2.03# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t0d0s0 7670973 1697985 5896279 23% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 655784 16 655768 1% /var/run
swap 655792 24 655768 1% /tmp
bash-2.03# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c1t0d0s1 118,145 16 1052144 1052144
4. Detach all plexes associated with the rootmirror disk. Use the
following script:
bash-2.03# vxprint -qhtg rootdg -s | grep -i rootmirror | awk '{print $3}' > /rmsub.plex
Note – This process is similar to the one used to restore partitions for data
disk unencapsulation.
Caution – There must be at least one disk in rootdg, or the VxVM software
does not start. Do not remove the disk that is used as the root mirror.
11. Verify that the boot disk is removed from VxVM software control:
bash-2.03# vxprint -qhtg rootdg
dg rootdg default default 0 1019673916.1025.lowtide
Note – If the disk is re-encapsulated, these lines are added correctly by the
process, so there is no harm done by removing them.
Note – The VxVM software does not start. It can be started manually once
the system is booted.
19. Re-write the VTOC of the disk so that hard partitions are again
defined for the root file systems.
There are several ways to put the hard partitions back into the
VTOC, including using the fmthard command on a modified
/etc/vx/reconfig.d/disk.d/c#t#d#/vtoc file, using the format
utility to partition the disk manually, or using the vxmksdpart
command. The simplest method, however, is to use the vxedvtoc
command.
a. When the VxVM software encapsulates a disk, it makes a record
of the old vtoc of the disk. This file is stored for each disk in
/etc/vx/reconfig.d/disk.d/c#t#d#. It is stored in a VxVM
software-specific format, so it cannot be used as an argument
for the fmthard command unless it is modified. The vxedvtoc
command is similar to the fmthard command except that it can
read this vtoc file and write that vtoc to a disk. The command
takes the following form:
vxedvtoc -f filename devicename
b. Assuming that the boot disk is c0t0d0, run the command as
follows:
# /etc/vx/bin/vxedvtoc -f /etc/vx/reconfig.d/disk.d/c0t0d0/vtoc /dev/rdsk/c0t0d0s2
# THE ORIGINAL PARTITIONING IS AS FOLLOWS:
#SLICE TAG FLAGS START SIZE
0 0x0 0x200 0 0
1 0x0 0x200 0 0
2 0x5 0x201 0 8794112
3 0x0 0x200 0 0
4 0x0 0x200 0 0
5 0x0 0x200 0 0
6 0xe 0x201 0 8794112
7 0xf 0x201 8790016 4096
# THE NEW PARTITIONING WILL BE AS FOLLOWS :
#SLICE TAG FLAGS START SIZE
0 0x0 0x200 0 2048000
1 0x0 0x200 2048000 2048000
2 0x5 0x201 0 8794112
3 0x0 0x201 4096000 2048000
4 0x0 0x201 6144000 2048000
5 0x0 0x200 0 0
6 0x0 0x200 0 0
7 0x0 0x200 0 0
DO YOU WANT TO WRITE THIS TO THE DISK ? [Y/N] :y
In this example, the comment markers were removed from the /export
entry and the data volume /somevol entry:
# vi /etc/vfstab
/dev/dsk/c0t0d0s1 - - swap - no -
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
/dev/dsk/c0t0d0s5 /dev/rdsk/c0t0d0s5 /usr ufs 1 no -
/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /var ufs 1 no -
/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /export ufs 2 yes -
swap - /tmp tmpfs - yes -
/dev/vx/dsk/datadg/somevol /dev/vx/rdsk/datadg/somevol /somevol ufs 2 yes -
21. Start all volumes.
# /usr/sbin/vxvol startall
22. Issue a mountall command to mount the now uncommented
volumes.
# mountall
At this point, the root disk is completely free of VxVM software control.
The VxVM software daemons are started, and all system file systems
should be mounted.
15. Reboot the system for changes to take effect. Type the following:
# reboot
Once rebooted, the system comes up in an unencapsulated state with
/, /usr, /var, and swap mounted.
Data Disks
Data disks fail unencapsulation if any of the following is true:
● Any encapsulated partition was grown or has a modified layout.
● The encapsulated disk failed and was replaced.
● The encapsulated disk was non-conforming and was unencapsulated
using the procedure in ‘‘Encapsulating a Non-Conforming Disk’’ on
page 2-20.
If the data on the disk must be removed from the VxVM software control,
back up the data and restore it to a non-VxVM software disk. Data must
be restored because the encapsulated disk’s original mapping of partitions
to blocks within the public region changes when the disk is replaced and
synced. This prevents the vxmksdpart command from properly mapping
subdisks within the public region to physical partitions.
Boot Disks
The issues described in this section affect boot disk encapsulation.
Do not grow any encapsulated boot disk file systems. This causes
unencapsulation to fail. The only recovery option is to back up and restore
the boot disk from tape.
Unlike data disks, if an encapsulated boot disk fails and is replaced, the
disk can be unencapsulated. This is because the /, /usr, /var, and swap
partitions and any other system partitions (such as /opt or /home)
encapsulated on the boot disk are preserved by the mirror. When the
failed boot disk is replaced and re-synced, these preserved and overlay
partitions are copied to the replacement disk and are then available to
successfully unencapsulate the disk.
Depending on the original configuration of the boot disk and the method
of mirroring, the final unencapsulated partition scheme can change from
the pre-encapsulation configuration, as follows:
● The /, /usr, /var, and swap partitions – There should be no
problems with these partitions; the disk unencapsulates successfully
using both scripted and manual unencapsulation methods. In
particular –
● On a two-slice boot disk (/ and swap partitions) – The
unencapsulated partition scheme is identical to pre-
encapsulation except that slice 0 is moved back one or two
cylinders from the start of the disk. Modifications to the
/etc/vfstab file are unnecessary if manually unencapsulated.
● On three- or four-slice boot disks (/, /usr, /var, and swap
partitions) – The unencapsulated partition scheme is different
from the pre-encapsulated scheme. The /usr and /var
partitions are relocated to partition 6 and 7 if originally
configured in partitions 3 and 4.
Scripted unencapsulation using the vxunroot command
successfully unencapsulates the boot disk and modifies the
/etc/vfstab file to reflect the new location of the /usr and
/var partitions. Manual unencapsulation requires manual
modification of the /etc/vfstab file to reflect the new
locations of the /usr and /var partitions.
● The /, /usr, /var, and swap partitions plus /opt or /home – The
partition scheme can change, depending on the unencapsulation
method:
● Scripted unencapsulation using vxunroot works with no
problems. The final partition scheme is changed from the
original, which is reflected in the /etc/vfstab file. The system
boots with no problems.
● Manual unencapsulation can be confusing if the original (pre-
encapsulation) partition scheme is not known. Determining
where the original /, /usr, /var, and swap partitions were
located is easy. Output from the format utility delineates those
partitions. The confusing part is determining which of the
preserved system partitions (such as /opt and /home) is which
if the encapsulated boot disk has both.
Additionally, manual unencapsulation procedures must be
modified to reflect the following:
● The vxmksdpart and the vxedvtoc commands are not
necessary because the disk is already partitioned. All
system partitions (/, /usr, /var, and swap) and
encapsulated system partitions (/opt and /home) are
visible. Additionally, the data these commands use is
invalid.
● Use the format utility only to remove the public or private
region partitions.
● Edit the /etc/vfstab file to reflect the new locations for
any relocated partitions. If /usr was originally in slice 4
and now occupies slice 6, this must be reflected in the
/etc/vfstab file prior to a system reboot.
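As an example of the /etc/vfstab edit described above: if /usr was
originally in slice 4 and now occupies slice 6, its entry changes as
follows (the c0t0d0 device name is illustrative):

```
# Before unencapsulation (hypothetical device):
/dev/dsk/c0t0d0s4  /dev/rdsk/c0t0d0s4  /usr  ufs  1  no  -
# After unencapsulation, /usr is in slice 6:
/dev/dsk/c0t0d0s6  /dev/rdsk/c0t0d0s6  /usr  ufs  1  no  -
```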
Note – If the Sun Enterprise Services best practices boot disk management
processes are followed, the partitioning concerns described in
‘‘Encapsulated Disk Was Replaced’’ on page 2-73 are eliminated.
Preparation
To prepare for this exercise:
● Identify four disks in addition to the boot disk to use as mirror and
data disks.
● Make sure that the boot disk has the /, swap, /usr, /var, and /opt
partitions.
If the boot disk is not configured this way, have the instructor
perform a re-flash installation on your system using the proper boot
disk configuration for this lab.
● Ask your instructor for the location of the VxVM software packages,
patches, and supporting Solaris OE software.
● Have paper and writing instruments for taking notes.
7. Mirror the encapsulated boot disk using Sun Enterprise Services best
practices procedure (see ‘‘Examining Sun Enterprise Services Best
Practices for VxVM Software-Managed Boot Disks’’ on page 2-48).
Assign the mirror disk a VxVM software disk name of rootmirror.
8. While the boot disk is being mirrored, answer the following
questions:
a. Describe the difference between the pre- and post-encapsulation
boot disk partition configuration. Use the contents of
/bootdisk_capture as a guide.
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
b. What directory contains both pre- and post-encapsulation
configuration information about the boot disk?
________________________________________________________
________________________________________________________
________________________________________________________
c. State the purpose of the following files:
● /etc/vx/reconfig.d/state.d/root-done
_____________________________________________________
_____________________________________________________
● /etc/vx/reconfig.d/state.d/install-db
_____________________________________________________
_____________________________________________________
● /etc/vfstab.prevm
_____________________________________________________
_____________________________________________________
9. After the boot disk is mirrored, use vxprint to capture the post-
mirror configuration. Copy this information to the
/bootdisk_capture file for use later in this lab.
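Step 9 can be performed with a command such as the following, where
vxprint -ht prints the complete configuration record hierarchy:

```
# vxprint -ht >> /bootdisk_capture
```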
Caution – Do not continue the lab without re-mirroring the boot disk. The
next task requires that the boot disk be mirrored.
4. Create mount points for each of the partitions on the data disk and
verify that the new file systems mount successfully.
5. Update the /etc/vfstab file to auto-mount these file systems
during system reboot.
6. Verify that the file systems mount from the /etc/vfstab file.
7. Use prtvtoc, the df command, and the format utility to capture
pre-encapsulation and mount information to the
/datadisk_capture file. Also capture the contents of the
/etc/vfstab file. This information is used later in this lab
exercise.
8. Encapsulate this disk using the vxdiskadm utility.
While encapsulating this disk, you are asked for a disk group for this
disk to join. Create a new disk group called datadg, or use rootdg.
9. After the system reboots, verify that the encapsulation was
successful using the df -k and vxprint commands.
10. Mirror the encapsulated data disk using a disk you identified for this
purpose and the vxdiskadm utility.
11. While the data disk is being mirrored, answer the following
questions:
a. What are the differences between the pre- and post-
encapsulation partition configuration for this disk?
________________________________________________________
________________________________________________________
________________________________________________________
b. How many reboots did the system execute?
________________________________________________________
________________________________________________________
________________________________________________________
c. Using output from the execution of a df -k command, contrast
the differences in the devices used to mount the newly
encapsulated file systems.
________________________________________________________
________________________________________________________
________________________________________________________
Exercise Summary
Task 1 Solutions
Complete the following steps:
1. Open a text editor such as vi and capture the following pre-
encapsulation boot disk information:
● The prtvtoc value
● The format utility partition print
● The df -k output
● Contents of the /etc/vfstab file
Save the file as /bootdisk_capture. The information in this file is
used later in this lab exercise.
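The capture in step 1 can be scripted. The sketch below appends labeled
command output to a single file; the lab stores it at /bootdisk_capture,
but the path is parameterized here so the sketch is portable, and the
prtvtoc line is commented out because the device name depends on your
system:

```shell
# Step-1 capture as a script. The lab stores the file at /bootdisk_capture;
# CAPTURE is parameterized here so the sketch can run anywhere.
CAPTURE=${CAPTURE:-./bootdisk_capture}
: > "$CAPTURE"
capture() {
    # Label each block of output, and note (rather than abort on) failures.
    echo "=== $* ===" >> "$CAPTURE"
    "$@" >> "$CAPTURE" 2>&1 || echo "(command failed)" >> "$CAPTURE"
}
capture df -k
capture cat /etc/vfstab
# On the lab system, also capture the VTOC and the format partition table:
# capture prtvtoc /dev/rdsk/c0t0d0s2
```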
2. Install the VxVM software release 3.2 packages.
3. Install all VxVM software release 3.2 patches as indicated by the
instructor.
4. Use vxinstall to configure the system as follows:
a. Do not use enclosure-based naming.
b. Select custom install.
c. Encapsulate the boot disk as follows:
1. Assign it a VxVM software disk name of rootdisk
(default).
2. Accept the default private region size.
d. Leave all other disks alone.
5. Reboot the system.
6. Open /bootdisk_capture using a text editor and capture the
following post-encapsulation boot disk information:
● The prtvtoc value
● The format utility partition print
● The df -k value
There are two new disk partitions for the private and public regions. Additionally,
/opt was encapsulated into the disk’s public region and is no longer visible. The
/, swap, /usr, and /var partitions are still visible as overlay partitions.
b. What directory contains both pre- and post-encapsulation
configuration information about the boot disk?
● /etc/vx/reconfig.d/state.d/install-db
This file defines by its presence that vxinstall was not executed. It prevents the
VxVM software daemons from starting.
● /etc/vfstab.prevm
This file holds a copy of the pre-boot disk encapsulation /etc/vfstab file
contents.
9. After the boot disk is mirrored, use vxprint to capture the post-
mirror configuration. Copy this information to the
/bootdisk_capture file for use later in this lab.
Task 2 Solutions
Complete the following steps:
1. Unencapsulate the boot disk by using the vxunroot command.
The procedure for this process is found in ‘‘Examining Sun
Enterprise Services Best Practices for VxVM Software-Managed Boot
Disks’’ on page 2-48, but can be successfully used for a five-slice boot
disk as described in ‘‘Unencapsulating a Boot Disk Using the
vxunroot Utility’’ on page 2-57.
To unencapsulate a five-slice boot disk, use the following command
syntax in place of that listed in the procedure to execute the loop to
remove the mirrors:
# for i in `cat /rmsub.plex`
> do
> vxplex -g rootdg -o rm dis $i
> done
This substitution results in full removal of the plexes and their
subdisks.
2. Was the unencapsulation successful?
It should be.
3. How do you know, and what commands do you use to verify this?
Task 3 Solutions
Complete the following steps:
1. Encapsulate the boot disk using the vxdiskadm utility menu
selection 6. Use the following configuration:
a. Assign it a VxVM software disk name of rootdisk (default).
b. Do not configure it as a spare disk.
c. Accept all other defaults.
2. Once the encapsulation process is complete and the system has
rebooted, mirror the boot disk using the Sun Enterprise Services best
practice procedure (see ‘‘Examining Sun Enterprise Services Best
Practices for VxVM Software-Managed Boot Disks’’ on page 2-48).
Assign the mirror disk a VxVM software disk name of rootmirror.
Task 4 Solutions
Complete the following steps:
1. Unencapsulate the boot disk using the manual procedure when
booted from CD-ROM described in ‘‘Unencapsulating When Booted
From the CD-ROM’’ on page 2-64. Be sure to recover the
encapsulated /opt partition using the vxedvtoc command outlined
in this procedure.
Be patient—this procedure reboots the system multiple times.
2. Was the unencapsulation successful?
It should be.
Caution – Do not continue the lab without re-mirroring the boot disk. The
next task requires that the boot disk be mirrored.
Task 5 Solutions
Complete the following steps:
1. Select a disk to be used as a data (non-root) disk for encapsulation.
2. Use the format utility to create the following partition configuration
for this disk –
● Create two partitions minimum (512 megabyte maximum size).
● Leave partitions 6 and 7 unallocated.
3. Build file systems on each of the partitions.
4. Create mount points for each of the partitions on the data disk and
verify that the new file systems mount successfully.
5. Update the /etc/vfstab file to auto-mount these file systems
during system reboot.
6. Verify that the file systems mount from the /etc/vfstab file.
7. Use prtvtoc, the df command, and the format utility to capture
pre-encapsulation and mount information to the
/datadisk_capture file. Also capture the contents of the
/etc/vfstab file. This information is used later in this lab
exercise.
8. Encapsulate this disk using the vxdiskadm utility.
While encapsulating this disk, you are asked for a disk group for this
disk to join. Create a new disk group called datadg, or use rootdg.
9. After the system reboots, verify that the encapsulation was
successful using the df -k and vxprint commands.
10. Mirror the encapsulated data disk using a disk you identified for this
purpose and the vxdiskadm utility.
11. While the data disk is being mirrored, answer the following
questions:
a. What are the differences between the pre- and post-
encapsulation partition configuration for this disk?
The post-encapsulated disk has only partitions 6 and 7. The original partitions
were encapsulated in slice 6, the public region. Slice 7 is the private region.
b. How many reboots did the system execute?
One.
c. Using output from the execution of a df -k command, contrast
the differences in the devices used to mount the newly
encapsulated file systems.
The /etc/vfstab file now uses the VxVM software volumes as devices to
mount and fsck. The original device information was saved as comments at the
end of the file.
12. Wait for the mirror process to complete before proceeding to the next
task.
Task 6 Solutions
Complete the following steps:
1. Unencapsulate the data disk using the procedure outlined in
‘‘Unencapsulating Data Disks’’ on page 2-23.
2. Was the unencapsulation successful?
It should be.
3. If the unencapsulation was not successful, what do you think
went wrong?
Nothing should have gone wrong; if any problems occurred, list
them.
Task 7 Solutions
Complete the following steps:
1. Select a disk to be used as a data (non-root) disk for encapsulation.
2. Use the format utility to create the following partition configuration
for this disk:
● Create two partitions, minimum, using all the available space
on the disk. Do not leave any free space.
● Leave at least one partition unused.
3. Build file systems on each of the partitions.
4. Create mount points for each of the partitions on the data disk, and
verify that the new file systems successfully mount.
5. Update the /etc/vfstab file to auto-mount these file systems
during system reboot.
6. Verify that the file systems mount from the /etc/vfstab file.
7. Use prtvtoc, the df command, and the format utility to capture
pre-encapsulation and mount information to the
/datadisk_capture file. Also capture the contents of the
/etc/vfstab file.
8. Encapsulate this disk using the procedure in ‘‘Encapsulating a Non-
Conforming Disk’’ on page 2-20.
Objectives
Upon completion of this module, you should be able to:
● Define dynamic multi-pathing (DMP), and explain how it enhances
the availability and accessibility of VxVM software-managed
storage devices
● Explain how DMP identifies disks in both pre- and post-version 3.2
of the VxVM software
● Describe how to install and verify DMP
● Enable and disable multi-pathing to selected disks and controllers
● Administer DMP by using the vxdmpadm utility
● Perform start restore and stop restore functions
● Identify common DMP problems
Relevance
Additional Resources
[Figure: DMP paths – user I/O from applications passes through the file
system and the kernel to the target disk driver (sd), and reaches a
Hitachi Data Systems array over two paths: /dev/dsk/c1t1d0s0 (Path 1)
and /dev/dsk/c2t1d0s0 (Path 2).]
Load Balancing
Load balancing is the function which attempts to maximize I/O
throughput by using the full bandwidth of all paths. Although the goal is
the same, load balancing is implemented differently depending on which
version of the VxVM software is used.
Versions of the VxVM software earlier than 2.5.4 allow the user to
disable DMP. As of version 2.5.4, DMP can no longer be disabled;
although some systems still permit disabling it, doing so causes
vxconfigd to fail.
Load balancing in the VxVM software version 3.0 is achieved by using the
balanced path policy for the active/active-type disk arrays only, and not for
active/passive-type disk arrays.
The method used to balance random I/Os across all the available paths is
to send all I/Os that start within a single 64-kilobyte block range down
the same path. I/Os that start within other 64-kilobyte ranges on the disk
are routed through a different path.
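The range-to-path mapping can be sketched as follows. This is an
illustration of the policy described above, not the actual vxdmp
implementation:

```shell
# Illustration only: all I/Os that start within the same 64-kilobyte
# range are sent down the same path, and successive ranges rotate
# across the available paths.
REGION_KB=64
NUM_PATHS=2
path_for_offset_kb() {
    echo $(( ($1 / REGION_KB) % NUM_PATHS ))
}
path_for_offset_kb 0      # offsets 0-63 KB all use path 0
path_for_offset_kb 63
path_for_offset_kb 64     # the next 64-KB range moves to path 1
```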
For each device path, the Solaris OE creates an entry in the dev_info tree.
DMP scans the dev_info tree and creates metanodes in the /dev/vx tree.
The VxVM software uses the metanodes to access the disk.
In contrast, DMP creates only one metanode per LUN. That is, DMP
identifies each LUN and creates a single metanode for it, so a disk with
multiple paths is seen as a single disk by the VxVM software.
For example, an array with five LUNs has five metanodes. If there are
four paths from the host to the array, the host has 20 entries under
/dev/dsk tree, and only five entries under /dev/vx/dmp tree.
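The relationship between path entries and metanodes can be illustrated
by grouping path-level device names by their target and disk numbers.
This is a simplification with hypothetical names; DMP actually
identifies LUNs by interrogating the devices, not by parsing names:

```shell
# Hypothetical names: two LUNs (t0d0, t1d0) seen through four
# controllers, giving eight path-level entries under /dev/dsk.
paths="c1t0d0 c2t0d0 c3t0d0 c4t0d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0"
# Strip the controller portion; what remains identifies the LUN, so the
# count of unique remainders is the number of DMP metanodes.
printf '%s\n' $paths | sed 's/^c[0-9]*//' | sort -u | wc -l
```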
In the following example, the same disks are managed by DMP, which
arbitrarily chooses one of the two paths and creates a single device entry.
To see the multiple paths to individual disks, use the vxdmpadm command.
bash-2.03# ls -las /dev/vx/rdmp/*t0d0s0
0 crw------- 1 root other 74, 0 May 21 15:34 /dev/vx/rdmp/c1t0d0s0
The following example shows output from the vxdisk list command to
illustrate how the VxVM software sees the same disk shown in the
previous examples.
bash-2.03# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 sliced rootdisk rootdg online
c1t3d0s2 sliced - - error
c1t4d0s2 sliced - - online
c1t5d0s2 sliced - - online
c1t6d0s2 sliced - - error
c1t16d0s2 sliced - - error
c1t18d0s2 sliced - - error
c1t19d0s2 sliced - - error
c1t22d0s2 sliced rootmirror rootdg online
With the release of the VxVM software version 3.1, DMP is always
installed and cannot be disabled; without it, the VxVM software does
not start. The process of selectively disabling DMP on a per-controller
or per-drive basis is described in ‘‘Enabling and Disabling DMP’’ on
page 3-10.
One way to verify that the DMP driver is installed correctly and is not
corrupt is to compare the size of the vxdmp driver file to that of the
matching vxdmp.OS_version file in the same directory. Both files should
be the same size, as seen in this example. The /kernel/drv directory
holds the 32-bit drivers, and /kernel/drv/sparcv9 holds the 64-bit
drivers.
bash-2.03# pwd
/kernel/drv
bash-2.03# ls -las vxdmp*
640 -rw-r--r-- 1 root sys 314156 Nov 21 2001 vxdmp
608 -rw-r--r-- 1 root sys 297920 Aug 15 2001 vxdmp.SunOS_5.6
608 -rw-r--r-- 1 root sys 300980 Aug 15 2001 vxdmp.SunOS_5.7
640 -rw-r--r-- 1 root sys 314156 Nov 21 2001 vxdmp.SunOS_5.8
4 -rw-r--r-- 1 root sys 1026 Aug 15 2001 vxdmp.conf
bash-2.03# pwd
/kernel/drv/sparcv9
bash-2.03# ls -las vxdmp*
800 -rw-r--r-- 1 root sys 393968 Nov 21 2001 vxdmp
768 -rw-r--r-- 1 root sys 380840 Aug 15 2001 vxdmp.SunOS_5.7
800 -rw-r--r-- 1 root sys 393968 Nov 21 2001 vxdmp.SunOS_5.8
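The comparison shown in the listings above can be scripted. A minimal
sketch; the file names in the comments are from the example and vary by
OS release:

```shell
# Report whether two driver files are the same size, as the vxdmp
# integrity check above requires.
same_size() {
    if [ "$(wc -c < "$1")" -eq "$(wc -c < "$2")" ]; then
        echo "OK: $1 and $2 match"
    else
        echo "WARNING: size mismatch between $1 and $2"
    fi
}
# On the lab system (Solaris 8 in the example):
# same_size /kernel/drv/vxdmp /kernel/drv/vxdmp.SunOS_5.8
# same_size /kernel/drv/sparcv9/vxdmp /kernel/drv/sparcv9/vxdmp.SunOS_5.8
```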
Additionally, with the release of the VxVM software version 3.1, DMP is
a mandatory component of the VxVM software and must be operational,
or the VxVM software is disabled. This is because new features
introduced in the VxVM software version 3.2, such as enclosure-based
naming, require DMP to function.
DMP Terminology
When you are disabling DMP on a device or group of devices, the prevent
operation executes first, and the suppress operation executes second.
When you are enabling DMP, the un-suppress operation occurs first and
the allow operation occurs second.
When you are installing the VxVM software, the vxinstall utility
performs the initial VxVM software setup tasks. One of these tasks is to
prevent multi-pathing and suppress devices from the VxVM software’s
view.
To disable DMP using the vxinstall utility, use option 3, Prevent Multi-
pathing/Suppress devices from VxVMs view, from the main vxinstall
menu. The following example shows the process used in vxinstall.
Volume Manager Installation
Menu: VolumeManager/Install
You will now be asked if you wish to use Quick Installation or Custom
Installation. Custom Installation allows you to select how the Volume
Manager will handle the installation of each disk attached to your
system.
If you want to exclude any devices from being seen by VxVM or not be
multipathed by VxDMP then use the Prevent multipathing/Suppress
devices from VxVMs view option, before you choose Custom
Installation
or Quick Installation.
If you do not wish to use some disks with the Volume Manager, or if
you wish to reinitialize some disks, use the Custom Installation
option. Otherwise, we suggest that you use the Quick Installation
option.
Options 1 through 4 on the vxinstall utility main menu are used for
suppression operations. Options 5 through 7 are used for prevention
operations. Option 8 lists suppressed or non-multi-pathed devices. To use
these options, select the menu option for the specific operation and follow
the scripted prompts.
Use the vxdiskadm utility menu items 17 and 18 to include and exclude
DMP operations. Selecting option 17, Prevent multipathing/Suppress
devices from VxVM’s view, produces the following output:
Exclude Devices
Menu: VolumeManager/Disk/ExcludeDevices
This operation might lead to some devices being suppressed from VxVM’s view
or prevent them from being multipathed by vxdmp (This operation can be
reversed using the vxdiskadm command).
To use these options, select the menu option for the specific operation and
follow the scripted prompts. A system reboot is required to activate the
changes.
The devices selected in this operation will become visible to VxVM and/or
will be multipathed by vxdmp again. Only those devices which were previously
excluded can be included again.
To use these options, select the menu option for the specific operation and
follow the scripted prompts. A system reboot is required to activate the
changes.
#
controllers
#
product
#
pathgroups
#
Caution – Do not edit these files manually. Use vxdiskadm to make all
updates to these files. If it is necessary to edit these files manually, be
very careful, and delete only the lines needed to return an inoperable
system to service.
To view help for the vxdmpadm command, use the man pages or enter:
# vxdmpadm help
Controllers : None
VID:PID : None
VID:PID : None
Pathgroups : None
----------
The interval option specifies, in seconds, the time interval after which
the daemon thread checks the failed paths. The default is 300 seconds
(5 minutes). To force an immediate path check instead of waiting for the
interval to expire, use the following command:
# vxdctl enable
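To change the polling interval itself, stop and restart the restore
daemon. For example, to check failed paths every 60 seconds (vxdmpadm
start restore syntax as documented for the VxVM software version 3.2;
verify against the man page on your system):

```
# vxdmpadm stop restore
# vxdmpadm start restore interval=60
```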
Preparation
To prepare for this exercise:
● The VxVM software 3.2 must be installed and operational.
● The boot disk must be encapsulated and mirrored.
● A second disk group must exist other than rootdg with a configured
volume that is formatted and mounted.
● Open a window, and execute an iostat while performing the lab
exercises to see the effect on pathing as paths are removed and
added. Use the following script to run the iostat command:
# while true
> do
> iostat -xcnm
> sleep 2
> done
Note – Re-size the window to fit the output of the iostat display.
View the contents of these files to see the changes that were made.
_____________________________________________________________
5. Un-suppress and allow DMP operations on the disk excluded in the
first two tasks of this lab exercise.
What commands did you use, and in what order did you execute
them?
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
View the contents of the two exclude files to see what changes were
made.
_____________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Exercise Summary
Task 1 Solutions
Complete the following steps:
1. Use the vxdiskadm utility to disable multi-pathing to one VxVM
software disk on your lab team’s system. Do not use the boot disk for
this task.
What vxdiskadm main menu option did you use?
_____________________________________________________________
How did you activate the change you made?
Reboot.
2. Use the vxdiskadm utility to suppress the secondary paths to the
disk on which you just prevented DMP operations.
What vxdiskadm main menu option did you use?
Option 17, Prevent multipathing/Suppress devices from VxVM’s view.
3. View the DMP multi-pathing status to verify that you were
successful in the execution of the previous tasks.
What commands did you use?
Use vxdiskadm menu option 18 to include both disk paths in DMP operations.
View the contents of the two exclude files to see what changes were
made.
Task 2 Solutions
Complete the following steps:
1. Use the vxdmpadm utility to list all controllers seen by the Solaris OE.
What format of the vxdmpadm command did you use?
Task 3 Solutions
Use the vxdmping script to view the serial number of a disk of your
choice.
Objectives
Upon completion of this module, you should be able to:
● Describe the troubleshooting tools and utilities available for the
VxVM software
● Enable vxconfigd logging from the command line and the VxVM
software startup scripts
● Reference a VxVM failure to the error messages section of the VxVM
software troubleshooting manual
● Use the VxVM software debugging and information-gathering tools
and utilities
Relevance
Additional Resources
Logging Errors
The VxVM software provides extensive logging capabilities that include
the following error logging mechanisms:
● vxconfigd logging
● /var/adm/messages file
● root mail
There are nine levels of debug logging. Level 1 is minimal logging, and
level 9 logging involves writing all debug and error messages to the
selected log file. Logging is enabled by running the vxconfigd command
from the command line, or by modifying the
/etc/init.d/vxvm-sysboot script.
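A command-line example (option syntax per the vxconfigd(1M) man page
for the VxVM software version 3.2; verify the exact flags on your
system — the -k option kills and restarts the daemon, and -x selects the
debug level):

```
# vxconfigd -k -x 9
```

To make the setting persistent, add the same -x option to the vxconfigd
invocation in the /etc/init.d/vxvm-sysboot script.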
Note – DMP devices, metanodes, and other information about DMP were
described in Module 3, “Managing Dynamic Multi-Pathing.”
Note – The action recommendation is None because the error was not
caused by the VxVM software. This is a hardware problem, and there are
no VxVM software commands that can fix the error.
4. Find the major and minor numbers, and use the following version of
the ls command to find the cxtxdx number:
# ls -laRL /dev/dsk | grep “118, 48”
brw-r----- 1 root sys 118, 48 Jun 7 23:47 c2t1d0s0
The failed path is c2t1d0s0. Map this path to a specific piece of
hardware using step 3 of the procedure described on page 4-7.
To debug errors, use the messages generated by the VxVM software. A list
of error messages is located in the VERITAS Volume Manager™ 3.2
Troubleshooting Guide in the “Error Messages” section for the version of the
VxVM software installed on the system.
Note – The FS and VVR tools require additional licenses and are not part
of the basic VxVM software. They are not described in this guide.
/etc/vx/diag.d/config.d/sparcv7:
total 418
2 drwxr-xr-x 2 root sys 512 Jun 8 09:19 .
2 drwxr-xr-x 4 root sys 512 Jun 8 09:19 ..
114 -r-xr-xr-x 1 root sys 57440 Aug 15 2001 vxautoconfig.SunOS_5.6
114 -r-xr-xr-x 1 root sys 57708 Aug 15 2001 vxautoconfig.SunOS_5.7
114 -r-xr-xr-x 1 root sys 57824 Aug 15 2001 vxautoconfig.SunOS_5.8
24 -r-xr-xr-x 1 root sys 11960 Aug 15 2001 vxdevwalk.SunOS_5.6
/etc/vx/diag.d/config.d/sparcv9:
total 348
2 drwxr-xr-x 2 root sys 512 Jun 8 09:19 .
2 drwxr-xr-x 4 root sys 512 Jun 8 09:19 ..
0 -r-xr-xr-x 1 root sys 0 Aug 15 2001 vxautoconfig.SunOS_5.6
140 -r-xr-xr-x 1 root sys 71408 Aug 15 2001 vxautoconfig.SunOS_5.7
140 -r-xr-xr-x 1 root sys 71560 Aug 15 2001 vxautoconfig.SunOS_5.8
0 -r-xr-xr-x 1 root sys 0 Aug 15 2001 vxdevwalk.SunOS_5.6
32 -r-xr-xr-x 1 root sys 15416 Aug 15 2001 vxdevwalk.SunOS_5.7
32 -r-xr-xr-x 1 root sys 15368 Aug 15 2001 vxdevwalk.SunOS_5.8
/etc/vx/diag.d/macros.d:
total 58
2 drwxr-xr-x 3 root other 1024 Jun 8 09:31 .
2 drwxr-xr-x 5 root other 512 Jun 8 09:31 ..
4 -rwxrwxr-x 1 root sys 1975 Aug 15 2001 dmp
2 -rwxrwxr-x 1 root sys 94 Apr 4 19:36 dmp_cpuiocount
2 -rwxrwxr-x 1 root sys 99 Apr 4 19:36 dmp_cpuiocount_next
0 -rwxrwxr-x 1 root sys 0 Apr 4 19:36 dmp_cpuiocount_zero
2 -rwxrwxr-x 1 root sys 435 Apr 4 19:36 dmp_ctlr
2 -rwxrwxr-x 1 root sys 69 Apr 4 19:36 dmp_ctlr_list_next
2 -rwxrwxr-x 1 root sys 72 Apr 4 19:36 dmp_ctlr_path_next
2 -rwxrwxr-x 1 root sys 285 Apr 4 19:36 dmp_dev_list
2 -rwxrwxr-x 1 root sys 82 Apr 4 19:36 dmp_dev_list_next_dmpnode
2 -rwxrwxr-x 1 root sys 364 Apr 4 19:36 dmp_dmpnode
2 -rwxrwxr-x 1 root sys 448 Apr 4 19:36 dmp_dmpnode_next
2 -rwxrwxr-x 1 root sys 78 Apr 4 19:36 dmp_dmpnode_next_ptr
2 -rwxrwxr-x 1 root sys 104 Apr 4 19:36 dmp_dmpnode_path_next
2 -rwxrwxr-x 1 root sys 52 Aug 15 2001 dmp_dmpopencount
2 -rwxrwxr-x 1 root sys 63 Aug 15 2001 dmp_end_dev_list_ctlrs
2 -rwxrwxr-x 1 root sys 66 Aug 15 2001 dmp_end_dev_list_dmpnodes
2 -rwxrwxr-x 1 root sys 50 Aug 15 2001 dmp_end_dmp_nodes
2 -rwxrwxr-x 1 root sys 150 Aug 15 2001 dmp_errq_buf
2 -rwxrwxr-x 1 root sys 40 Aug 15 2001 dmp_opath_next
2 -rwxrwxr-x 1 root sys 117 Apr 4 19:36 dmp_opaths
2 -rwxrwxr-x 1 root sys 582 Apr 4 19:36 dmp_path
2 -rwxrwxr-x 1 root sys 289 Apr 4 19:36 dmp_print_dev_list_ctlrs
2 -rwxrwxr-x 1 root sys 298 Aug 15 2001 dmp_print_dev_list_dmpnodes
2 -rwxrwxr-x 1 root sys 193 Apr 4 19:36 dmp_print_errq
2 -rwxrwxr-x 1 root sys 121 Apr 4 19:36 dmp_print_errq_next
2 -rwxrwxr-x 1 root sys 25 Apr 4 19:36 dmp_print_errq_null
2 drwxr-xr-x 2 root other 1024 Jun 8 09:31 sparcv9
/etc/vx/diag.d/macros.d/sparcv9:
total 56
2 drwxr-xr-x 2 root other 1024 Jun 8 09:31 .
2 drwxr-xr-x 3 root other 1024 Jun 8 09:31 ..
4 -rwxrwxr-x 1 root sys 1964 Aug 15 2001 dmp
2 -rwxrwxr-x 1 root sys 97 Apr 4 19:36 dmp_cpuiocount
2 -rwxrwxr-x 1 root sys 99 Apr 4 19:36 dmp_cpuiocount_next
/etc/vx/diag.d/scripts:
total 42
2 drwxr-xr-x 3 root sys 512 Jun 8 09:20 .
2 drwxr-xr-x 5 root other 512 Jun 8 09:31 ..
2 drwxr-xr-x 2 root sys 512 Jun 8 09:20 fix_lib
8 -r-xr-xr-x 1 root sys 3541 Aug 15 2001 fixmountroot
6 -r-xr-xr-x 1 root sys 2382 Aug 15 2001 fixsetup
10 -r-xr-xr-x 1 root sys 4897 Aug 15 2001 fixstartup
12 -r-xr-xr-x 1 root sys 5409 Aug 15 2001 fixunroot
/etc/vx/diag.d/scripts/fix_lib:
total 16
2 drwxr-xr-x 2 root sys 512 Jun 8 09:20 .
2 drwxr-xr-x 3 root sys 512 Jun 8 09:20 ..
8 -r-xr-xr-x 1 root sys 3437 Aug 15 2001 fixdevsetup
4 -r-xr-xr-x 1 root sys 1409 Aug 15 2001 fixgetmajor
Tool Use
Script Use
Script Use
fixunroot Converts system files so that the files no longer
require the VxVM software to boot the root file
system. Also disables startup of the VxVM software,
so that future recovery of a mirrored root volume does
not cause corruption.
Caution – After running this script, use caution when
bringing up the VxVM software again. If the VxVM
software configuration retains a mirrored root
volume, starting the VxVM software can cause severe
corruption to the root file system.
Note – Not all files and commands described in this section are relevant to
the Solaris OE. Some of the information collected by the vxexplorer
utility reflects the UNIX cross-platform capabilities of the VxVM software.
Some files and command output reference versions of UNIX other than
the Solaris OE.
DMP Information
● VRAS
● SANPoint Control
● Information left by VERITAS Software Corporation support
NOTICE: This section will stop and restart the VxVM Configuration Daemon,
vxconfigd. This may cause your VxVA and/or VMSA session to exit.
This may also cause a momentary stoppage of any VxVM configuration
actions. This should not harm any data; however, it may cause some
configuration operations (e.g. moving subdisks, plex
resynchronization) to abort unexpectedly. Any VxVM configuration
changes should be completed before running this section.
If you are using EMC PowerPath devices with VERITAS Volume Manager,
you must run the EMC command(s) ’powervxvm setup’ (or ’safevxvm
setup’) and/or ’powervxvm online’ (or ’safevxvm online’) if this
script terminates abnormally.
Open this file and use the contents to aid in troubleshooting VxVM
software problems.
Note – The ./pkginfo_l file is a long list of all packages installed on the
system. Search through the file to find the VRTS packages.
d. Verify that all correct modules are loaded for the VRTS
packages installed. If the modules are not loaded, there may
have been problems upgrading the VxVM software, or the
system was not rebooted after the modules were installed. The
following example shows a list of installed modules.
bash-2.03# grep vx ./modinfo
19 101e9005 ffa88 74 1 vxio (VxVM 3.2t_p2.5 I/O driver)
21 102d5440 17428 68 1 vxdmp (VxVM 3.2t_p2.5: DMP Driver)
22 102ea888 83f 75 1 vxspec (VxVM 3.2t_p2.5 control/status
e. Verify the correct licenses are installed by viewing the
vxlicense_p file.
bash-2.03# more ./vxlicense_p
.
.
.
use-nvramrc?=true
nvramrc=devalias test2 /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713df01,0:a
devalias test1 /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w210000203713fc9f,0:a
Intrlv. Intrlv.
Brd Bank MB Status Condition Speed Factor With
--- ----- ---- ------- ---------- ----- ------- -------
0 0 256 Active OK 60ns 1-way
Bus Freq
Brd Type MHz Slot Name Model
--- ---- ---- ---------- -------------------------------- -----------------
1 SBus 25 2 DOLPHIN,sci
1 SBus 25 3 SUNW,hme
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
2 5 00 0 17682084 17682083
3 15 01 0 3591 3590
4 14 01 7182 17674902 17682083
i. Check the ./etc directory, which contains a copy of the
contents of the /etc directory of the live system. An example is
as follows:
bash-2.03# ls -las ./etc
total 640
16 drwxr-xr-x 12 root other 2426 Jul 3 06:01 .
16 drwxr-xr-x 9 root other 2294 Jul 3 06:27 ..
16 -rw-r--r-- 1 root other 12 Jun 8 05:43 defaultrouter
16 drwxr-xr-x 2 root sys 308 Jun 7 23:12 dfs
16 -rw-r--r-- 1 root other 12 Jul 3 06:01 err.out
16 -rw-r--r-- 1 root root 8 Jun 7 23:47 hostname.hme0
16 -rw-r--r-- 1 root root 1 Jun 7 23:47 hostname6.hme0
16 -r--r--r-- 1 root sys 97 Jun 8 05:43 hosts
16 drwxr-xr-x 2 root sys 1223 Jun 8 04:52 inet
32 -rw-r--r-- 1 root other 12200 Jul 3 06:00 ls_l_rc
16 -rw-r--r-- 1 root sys 1731 Jun 8 09:20 name_to_major
16 -rw-r--r-- 1 root root 8 Jun 7 23:47 nodename
16 -rw-r--r-- 1 root sys 780 Jun 8 07:34 nsswitch.conf
16 -r--r--r-- 1 root root 5036 Jun 8 09:28 path_to_inst
16 -rwxr--r-- 1 root sys 2792 Jan 5 2000 rc0
16 drwxr-xr-x 2 root sys 2558 Jun 8 09:32 rc0.d
16 -rwxr--r-- 1 root sys 3177 Jan 5 2000 rc1
16 drwxr-xr-x 2 root sys 2347 Jun 8 09:23 rc1.d
16 -rwxr--r-- 1 root sys 2885 Jan 5 2000 rc2
16 drwxr-xr-x 2 root sys 3456 Jun 8 09:23 rc2.d
16 -rwxr--r-- 1 root sys 2341 Jan 5 2000 rc3
16 drwxrwxr-x 2 root sys 650 Jun 8 08:11 rc3.d
16 -rwxr--r-- 1 root sys 2792 Jan 5 2000 rc5
16 -rwxr--r-- 1 root sys 2792 Jan 5 2000 rc6
32 -rwxr--r-- 1 root sys 9973 Jan 5 2000 rcS
16 drwxr-xr-x 2 root sys 3275 Jun 8 09:32 rcS.d
16 drwxr-xr-x 3 root sys 181 Jun 7 23:11 rcm
16 -r--r--r-- 1 root sys 184 Dec 18 2001 release
16 -r--r--r-- 1 root sys 3701 Jun 8 06:50 services
16 -rw-r--r-- 1 root sys 1001 Jun 7 23:12 syslog.conf
16 drwxr-xr-x 2 root other 182 Jul 3 06:00 syslog.d
16 -rw-r--r-- 1 root root 2161 Jun 8 09:49 system
16 -rw-r--r-- 1 root root 2161 Jun 8 09:49 system.GOOD
16 -rw-r--r-- 1 root other 2161 Jun 21 17:15 system.sav
16 -rw-r--r-- 1 root other 2161 Jun 24 14:39 system_06242002
16 -rw-r--r-- 1 root root 728 Jun 8 11:23 vfstab
16 -rw-r--r-- 1 root other 415 Jun 8 09:41 vfstab.prevm
16 drwxr-xr-x 9 root other 1483 Jul 3 06:01 vx
.
.
.
testautoconfig output
binding_name: ssd
node_name: ssd
node name: ssd
node addr: w220000203713f582,0
parent_name: sf
instance: 14
minor: a, ddi_block:wwn
binding_name: ssd
node_name: ssd
node name: ssd
node addr: w220000203713f643,0
parent_name: sf
instance: 15
minor: a, ddi_block:wwn
binding_name: ssd
node_name: ssd
node name: ssd
node addr: w220000203713e0b9,0
.
.
.
vxdevwalk output
SUNW,Ultra-Enterprise :: id=-268264420
driver properties:
pm-hardware-state value=6e6f2d73 75737065 6e642d72 6573756d 6500
ascii=no-suspend-resume.
system properties:
relative-addressing value=00000001
MMU_PAGEOFFSET value=00001fff
MMU_PAGESIZE value=00002000
PAGESIZE value=00002000
Driver packages :: id=-268251420
Driver packages/terminal-emulator :: id=-268212908
/dev/rdsk/c1t0d0s2
Vendor id : SEAGATE
Product id : ST19171FCSUN9.0G
Revision : 177E
Serial Number : 9831X06256
/dev/rdsk/c1t16d0s2
Vendor id : SEAGATE
Product id : ST19171FCSUN9.0G
Revision : 177E
Serial Number : 9831X07536
.
.
.
The dmp.out file is large and contains the output of all DMP debugging
and information-gathering commands executed by vxexplorer. Use
shell commands to parse this file for the information needed to verify the
existence of a DMP problem.
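A minimal sketch of that approach follows. The excerpt written to /tmp/dmp.out and the state=disabled keyword are invented for illustration; substitute the strings that actually appear in your dmp.out file.

```shell
# Stand-in excerpt, invented for this sketch; a real dmp.out is much larger.
printf 'path c1t22d0s2 state=disabled\npath c2t22d0s2 state=enabled\n' > /tmp/dmp.out

# Count, then list, the paths reported as disabled.
grep -c 'state=disabled' /tmp/dmp.out
grep 'state=disabled' /tmp/dmp.out
```

A nonzero count of disabled paths is a starting point for confirming a DMP problem; from there, search the file for the controller and device names involved.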
Caution – The set option can damage the configuration database (private
region) beyond repair if not used properly. Some of the information
presented in this section is for reference only.
dm disk01 - - - - SPARE
dm rootdisk - - - - -
dm rootmirror - - - - -
The dumplog option lists the group log copy information stored in the
private region. An example of the dumplog option is shown here.
# ./vxprivutil dumplog /dev/rdsk/c0t3d0s3
LOG #01
BLOCK 0: KLOG 0 : COMMIT tid=0.1352
BLOCK 0: KLOG 1 : DIRTY rid=0.1154
The list option provides output similar in format and content to the
vxdisk list command. This is a useful option to use when vxconfigd is
not running. An example is as follows:
# ./vxprivutil list /dev/rdsk/c0t3d0s3
diskid: 921260771.1030.stoli.veritas.com
group: name=rootdg id=921260769.1025.stoli.veritas.com
flags: private autoimport
hostid: stoli.veritas.com
version: 2.1
iosize: 512
public: slice=4 offset=0 len=673536
private: slice=3 offset=1 len=1151
update: time: 921263222 seqno: 0.17
headers: 0 248
configs: count=1 len=825
logs: count=1 len=125
tocblks: 0
tocs: 1/1150
Defined regions:
config priv 000017-000247[000231]: copy=01 offset=000000 enabled
config priv 000249-000842[000594]: copy=01 offset=000231 enabled
log priv 000843-000967[000125]: copy=01 offset=000000 enabled
Use the set option to change the attributes of a disk or disk group
without using vxconfigd.
Caution – Use the set option carefully. This option can cause irreversible
damage to the configuration database in a disk’s private region and
damage it beyond repair.
The following example shows when the set option is needed. In this
example, a new disk group initialization failed because disk c1t15d0 is
owned by rootdg.
# vxdg init newdg disk05=c1t15d0s2
vxvm:vxdg: ERROR: Device c1t15d0s2 appears to be owned by disk group rootdg.
Use this information to correlate the kernel space information with user
space entries by performing a long list on the /dev/dsk directory and
grep on the wwn or major and minor numbers of the listed device. For
example:
bash-2.03# ls -las /dev/dsk | grep 3f579
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s0 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:a
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s1 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:b
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s2 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:c
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s3 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:d
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s4 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:e
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s5 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:f
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s6 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:g
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s7 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:h
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s0 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:a
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s1 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:b
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s2 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:c
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s3 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:d
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s4 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:e
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s5 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:f
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s6 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:g
2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s7 ->
../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:h
Note – The output of this command can be very large. Consider
redirecting the command output to a file and using a text editor to search
the file contents.
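A sketch of that approach is shown below. The output file name is arbitrary, and /dev stands in for the Solaris /dev/dsk directory so the commands run anywhere; on the lab systems, list /dev/dsk and search for a wwn or a major and minor number instead.

```shell
# Redirect the long listing to a file, then search the file instead of
# scrolling through terminal output. The file name is arbitrary.
ls -las /dev > /tmp/dev_listing.out

# Confirm how many lines were captured, then search with grep or an editor.
wc -l < /tmp/dev_listing.out
```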
System administrators use these utilities to analyze panic dump files. The
VxVM software can, at some time, cause a system panic. It is helpful to
be able to analyze a panic dump file to determine whether the VxVM
software was responsible for the crash and, if so, which component of the
VxVM software failed.
Note – Analyzing panic dump files is beyond the scope of this class. Sun
Education provides classes that teach the use of system-level debugging
tools to analyze crash dump files.
Preparation
To prepare for this exercise, make sure that the VxVM software is installed
and operational.
Power user – The following task is for students who have the skills to
map a device tree entry to a physical hardware slot.
If you have access to SunSolve Online, pull the hardware mapping
for the system used by your lab group and map the physical device
address of the failing path to the cable that was disconnected.
What is the physical Peripheral Component Interconnect (PCI) or
SBus slot of the pulled cable?
_____________________________________________________________
8. Replace the pulled cable.
9. View any messages logged by the VxVM software as the path is
brought online.
_____________________________________________________________
_____________________________________________________________
10. Disable vxconfigd logging.
11. Close the real-time viewing window for the vxconfigd log.
12. Remove the vxconfigd log.
Exercise Summary
Task 1 Solutions
Complete the following steps:
1. Enable maximum vxconfigd logging from the command line. What
command did you use?
In /var/vxvm/vxconfigd.log.
3. View the contents of the log. Is there data present?
_____________________________________________________________
_____________________________________________________________
4. Disable vxconfigd logging. What command did you use?
Task 2 Solutions
Complete the following steps:
1. Enable maximum vxconfigd logging. What command did you use?
# tail -f /var/vxvm/vxconfigd.log
3. Open a new window and view the current messages file in real time.
What command did you use?
# tail -f /var/adm/messages
Power user – The following task is for students who have the skills to
map a device tree entry to a physical hardware slot.
If you have access to SunSolve Online, pull the hardware mapping
for the system used by your lab group and map the physical device
address of the failing path to the cable that was disconnected.
What is the physical Peripheral Component Interconnect (PCI) or
SBus slot of the pulled cable?
_____________________________________________________________
8. Replace the pulled cable.
Task 3 Solutions
Complete the following steps:
1. Install the VRTSspt package, if it is not already installed.
2. Run the VRTSexplorer utility using the procedure described in ‘‘The
vxexplorer Utility’’ on page 4-18.
3. Browse the output, and view the following files:
● hostid
● uname_a
● pkginfo_l – Search for the VxVM software packages and
verify the package levels and installation dates.
● vxlicense_p
● prtdiag
● eeprom
● Contents of various files under the ./dev subdirectory
● Contents of the various files under the ./vxvm subdirectory.
4. Run the vxprivutil command with the dumpconfig option on a
selected VxVM software disk. Pipe the output to the vxprint
command to display a vxprint -ht type of printout.
What command did you use?
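One hedged answer, assuming a lab disk at c1t4d0 with its private region on slice 3 (substitute a disk from your own configuration), is:
# ./vxprivutil dumpconfig /dev/rdsk/c1t4d0s3 | vxprint -D - -ht
The -D - option directs vxprint to read the configuration from standard input instead of querying vxconfigd.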
Objectives
Upon completion of this module, you should be able to:
● Describe the VxVM software system recovery processes
● Describe how the VxVM software is initialized during system boot
● Successfully troubleshoot boot problems that prevent the VxVM
software from starting
● Identify errors that prevent the VxVM software from functioning
● Use the correct recovery procedures to resolve initialization and
operational problems
● Correctly determine when to reinstall the VxVM software
● Identify the VxVM software errors, match these errors to a list of
known errors and successfully repair the problems
Relevance
Additional Resources
This module addresses errors that occur in the boot process and the
VxVM software functionality. Storage device errors are addressed in
Module 6, “Recovering Disk, Disk Group, and Volume Failures.”
/etc/rcS.d
Single-User Start-Script
Execution
Boot System
Ready
/etc/rc2.d
Multi-User Start-Script
Execution
Scripts that execute in single-user mode and affect the VxVM software
initialization are:
● /etc/rcS.d/S25vxvm-sysboot
● /etc/rcS.d/S35vxvm-startup1
● /etc/rcS.d/S40standardmounts
● /etc/rcS.d/S50drvconfig
● /etc/rcS.d/S60devlinks
● /etc/rcS.d/S85vxvm-startup2
● /etc/rcS.d/S86vxvm-reconfig
Scripts that execute in multi-user mode and affect the VxVM software
initialization are:
● /etc/rc2.d/S20sysetup
● /etc/rc2.d/S94vxnm-host_infod
● /etc/rc2.d/S94vxnm-vxnetd
● /etc/rc2.d/S95vxvm-recover
● /etc/rc2.d/S96vmsa-server
Figure 5-2 illustrates the actions started during the execution of the
/etc/rcS.d/S25vxvm-sysboot script.
The dev_info tree is a kernel structure that is built from device tree
information in the boot PROM. To view the dev_info tree, use the
/etc/vx/diag.d/vxdevwalk command.
h :: dev=118,151:block nodetype=ddi_block:wwn
a,raw :: dev=118,144:char nodetype=ddi_block:wwn
b,raw :: dev=118,145:char nodetype=ddi_block:wwn
c,raw :: dev=118,146:char nodetype=ddi_block:wwn
d,raw :: dev=118,147:char nodetype=ddi_block:wwn
e,raw :: dev=118,148:char nodetype=ddi_block:wwn
f,raw :: dev=118,149:char nodetype=ddi_block:wwn
g,raw :: dev=118,150:char nodetype=ddi_block:wwn
h,raw :: dev=118,151:char nodetype=ddi_block:wwn
The entries are matched by the dev_info tree entry, shown in the
following example.
Driver sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713fc9f,0 :: id=25 instance=18
Unmatched Entries
Use the vxdisk list command to display all user-space disk access
records. The following example shows typical output from the vxdisk
list command.
bash-2.03# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 sliced rootdisk rootdg online
c1t1d0s2 sliced rootmirror rootdg online
c1t2d0s2 sliced disk01 rootdg online spare
c1t3d0s2 sliced - - error
c1t4d0s2 sliced - - online
c1t5d0s2 sliced - - online
c1t6d0s2 sliced - - error
c1t16d0s2 sliced - - online
c1t17d0s2 sliced - - online
c1t18d0s2 sliced - - online
c1t19d0s2 sliced - - online
Note – Only disks that are members of imported disk groups have entries
in the DISK and GROUP columns.
Disk Ownership
This file must be 512 bytes in length, including padding characters. Do not
edit this file using vi or another text editor. If this file is corrupted, the
VxVM software initialization fails.
Boot Mode
The following flag files are checked. The files can affect the way in which
the VxVM software is initialized:
● /etc/vx/reconfig.d/state.d/install-db –
● This file is created during installation of the VxVM software
packages.
This file is also created if an installation of the VxVM software
is incomplete. In that case, if the boot disk is under the VxVM
software control, the system does not boot past single-user
mode.
● This file prevents vxconfigd from starting when the system
boots. The VxVM software starts if the boot disk is under the
VxVM software control, but it is crippled.
Note – Although vxconfigd may not be started, the vxdmp and vxspec
modules are started.
● /VXVM#.#.#-UPGRADE/.start_runed –
● The value #.#.# is the software level to which the system is
being upgraded, such as 3.1.1.
● This is a hidden file, created by the upgrade_start script. It is
removed when the upgrade is finished.
● This file prevents vxconfigd from starting even if the boot disk
is under the VxVM software control.
/etc/rcS.d/
S35vxvm-startup1
This script executes after the / and /usr volumes are available and makes
other volumes available that are needed early in the Solaris OE boot
sequence.
Special Volumes
Dump Device
The dump device is used to store core information when the system
panics. Dump device configuration is as follows:
● A swap device must be listed in the /etc/vfstab file.
● The swap device must be in rootdg.
● A physical partition must be available underneath the swap volume.
Swap files cannot be used as a dump device.
● The dump device is registered by adding and then removing the swap
device. The VxVM software does not have hooks for dumping, so the
swap device must be created prior to the creation of the dump
device.
The dump device must be the first swap device listed in the
/etc/vfstab file.
● Core file creation and recovery are performed outside of the VxVM
software operations.
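The registration sequence described above can be sketched as follows. The volume name swapvol is an assumption (it is the conventional name for the primary swap volume); verify it against your configuration before running these commands:
# swap -d /dev/vx/dsk/swapvol
# swap -a /dev/vx/dsk/swapvol
Deleting and re-adding the swap device causes the physical partition underneath the swap volume to be registered as the dump device.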
The S40standardmounts script sets the swap device. This script is not
part of the VxVM software. Swap devices have the following
characteristics:
● The VxVM software treats all file systems with file type swap in the
/etc/vfstab file as swap volumes.
● The primary swap volume must be:
● Usage type swap
● A real partition
● In rootdg
● Swap device size is limited to 2000 megabytes in the early releases of
the Solaris OE.
Figure 5-4 illustrates the processes started by the S50devfsadm script. This
script is not part of the VxVM software but is used to scan the device tree
and build new devices.
The dev_info tree is checked for new devices.
Flag Files
Flag files are queried for reconfiguration and reboot procedures. The
/etc/vx/reconfig.d/state.d file contains flag files set by prior
operations, as follows:
● Pre-recovery events that might have occurred
● Encapsulation procedures – Encapsulation requires a reboot and
creates flag files to delineate actions that need to be taken. If
encapsulation fails or is incomplete, the flag files must be removed
manually.
Note – The root_done flag tells the VxVM software that the boot disk is
under VxVM software control and that this startup script can exit without
any action.
Note – The boot disk device address depends on the system and on which
storage device is configured as the boot disk.
If the boot disk is under VxVM software control, check that the
boot-device is set to the proper VxVM software-generated device alias,
as follows:
boot-file: data not available
boot-device=vx-rootdisk
local-mac-address?=false
Power user – If the primary boot device fails and the system must be
rebooted, use the vx-rootmirror device alias to boot the system.
If the primary boot disk fails and a spare was configured in rootdg, then
after the spare disk replaces the failed boot disk and the failed volumes
have been hot-relocated, the VxVM software builds an additional device
alias to enable booting from the spare disk, as follows:
vx-rdgspare01 /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w220000203713f96d,0:a
Use this device alias if the system must be booted from the spare disk.
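For example, at the OpenBoot PROM prompt:
ok boot vx-rdgspare01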
Note – The spare disk device address depends on the system and on
which storage device is configured as the spare disk. Also, the spare disk
name is arbitrary and is reflected in the device alias name.
The last two lines of the /etc/system file are only present if the boot disk
is under VxVM software control. If these lines are not present and the
boot disk is under VxVM software control, the system will not boot.
Also, the forceload statements for drv/sd and drv/ssd must be present
in the /etc/system file if the boot device is one of these classes of disk. If
these forceload statements are missing or corrupted, the system
reboots recursively.
Recovery of this file requires editing the file to correct problems and, in
severe cases, can require booting from the CD-ROM.
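One recovery sketch is to boot interactively and supply a known-good copy of the file. The backup name shown here matches the system.GOOD copy that appeared in the earlier ./etc listing; the name on your system may differ:
ok boot -a
...
Name of system file [etc/system]: etc/system.GOOD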
Startable Volumes
If the boot disk is under the VxVM software control, all system volumes
(/, swap, /usr, and /var) must start. One reason a volume might not start
is the presence of a stale or unusable plex.
A stale plex is defined as a plex that has data which is inconsistent with
other mirrors of that volume. During the boot process, only the plexes on
the boot disk are accessed until the VxVM software is fully initialized and
a complete configuration for the volumes on the boot disk can be
obtained. If the data on the boot disk is stale, then the system must be
rebooted from an alternate boot disk which does not contain stale plexes.
If the boot-device nvram parameter is set, and aliases are set for both the
rootdisk and the rootmirror, the system can be rebooted using the
rootmirror disk.
Binary Files
If the boot disk is under the VxVM software control and a driver file is
corrupt, the system boot fails. The system administrator must perform a
basic or functional unencapsulation (refer to ‘‘Performing a Basic or
Functional Unencapsulation’’ on page 2-68 for details).
Library Files
If any of these library files are not accessible and the boot disk is
encapsulated, a message similar to the following is displayed:
Starting VxVM restore daemon...
VxVM starting in boot mode...
ld.so.1: vxconfigd: fatal: <missing library file name is displayed>: open failed: No
such file or directory
Killed
If you cannot boot from the root disk, you can try to repair the problem
using a network-mounted root file system or some other alternate root
file system. Again, see the Installation Guide for more details.
If this file is corrupted or deleted, recover the file from backups of the
affected system. If backups are not available, a reinstall may be necessary.
Power user – The recovery from errors resulting from the execution of
these scripts requires reading each individual script and determining
where the failure is within the script. This requires shell programming
expertise and a detailed understanding of the VxVM software commands
and support files. All of the VxVM software startup scripts are hard
linked from the /etc/init.d directory.
Preparation
To prepare for this exercise:
● The VxVM software must be installed and operational.
● The boot disk must be encapsulated and mirrored.
● There must be one additional disk group other than rootdg with at
least one configured, started and mounted volume.
● The instructor must give you the location and name of the break-
and-fix script.
b) Break it
s) Solution
h) Hint
m) Return to Main Menu
x) Exit
Tasks
Execute the break-and-fix script, and select bugs 1 through 10. List
problem resolution steps for each bug in the space listed below.
Problem 1:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 2:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 3:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 4:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 5:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 6:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 7:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 8:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 9:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 10:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
The lab is complete when all 10 bugs are fixed and your lab system is
operational, with the VxVM software and all volumes started and
accessible.
Exercise Summary
Task Solutions
The solutions for the tasks in the lab exercise are found in the break-and-
fix script and on SunSolve in the appropriate SRDB and INFODOC files.
Objectives
Upon completion of this module, you should be able to:
● Describe the VxVM software utilities, commands, and virtual devices
to help with recovery processes
● Identify and repair failed disks
● Perform disk recovery processes and procedures
● Identify and repair volume errors
● Identify and repair disk group failures
Relevance
Additional Resources
This module describes how to identify and recover from disk, disk group,
and volume failures using the VxVM software.
Use the following commands and utilities to display the operational state
of managed disks:
● vxprint
● vxdisk
● vxdiskadm
● vxstat
● The vmsa GUI administrative console
This section discusses these commands and their use in recovering from
full or partial disk failures.
Note in this example that the failed plex for volume rootvol is
rootvol-0. This volume has a failed subdisk named rootmirror-01. The
volume is unaffected because it is enabled and active.
The STATUS column displays the status of disks visible to the VxVM
software. A status of error does not always indicate a disk error; it
usually indicates that the disk is not under the VxVM software control.
Device: c1t22d0s2
devicetag: c1t22d0
type: sliced
flags: online error private autoconfig
errno: Device path not valid
Multipathing information:
numpaths: 2
c1t22d0s2 state=disabled
c2t22d0s2 state=disabled
This clearly shows that the rootdisk disk experienced a read failure
when accessing subdisks rootdisk-04 and -05.
failed disks:
rootdisk
failed plexes:
opt-01
var-01
failing disks:
rootdisk
This root mail message shows that disk03 failed. Plexes home-02,
fin-02, and hr-01 also failed; if a spare disk with sufficient space is
available for hot-relocation, these disks are relocated.
Note – For a problem of this type, check cabling or disk seating, or replace
the disk.
Partial disk failures are those errors that cause a region of a disk to fail,
affecting some but not all of the subdisks stored on the failed disk. Errors
of this type are usually caused by media errors, but do not ignore the
possibility of a cabling problem. Mail is sent to root describing the failed
plexes but not the failed disks or subdisks.
failed plexes:
opt-01
var-01
Use the vxstat command to determine the failed disk. This procedure is
described in ‘‘The vxstat Command’’ on page 6-7.
Replacing Disks
If a disk fails, it must be replaced. Use one of the following two methods
to replace a failed disk:
● The vxdiskadm utility
● The command line
Start
No
Yes
Yes
No
Hardware or
Solaris OE Problem
Figure 6-1 Recovery Process When VxVM Software Cannot See a Disk
When these procedures are complete, the disk device should be available
for use, unless there is a hardware or functional VxVM software problem.
After the problem with the volume is corrected, use the vxvol command
to restart the volume.
Use the vxvol command with the -f option to forcibly start a RAID 5
volume. Use the following syntax:
# vxvol -f start RAID_5_volume_name
2. Try to use the vxdg command to complete the move. Type the
following:
# vxdg recover disk_group_name
3. If the previous step does not work, try to reset the move flag as
follows:
# vxdg -o clean recover disk_group_name
4. If the previous step does not work, try to remove the move flag. Type
the following:
# vxdg -o remove recover disk_group_name
Preparation
To prepare for this exercise:
● Make sure that the VxVM software is installed and operational.
● The boot disk must be encapsulated and mirrored.
● There must be one additional disk group other than rootdg with at
least one configured, started, and mounted volume.
● You must have access to another system’s VxVM software-managed
storage devices, either through a storage area network (SAN) or by
reconfiguring shared storage attached to the two systems.
● The instructor must give you the location and name of the break-
and-fix script.
b) Break it
f) Solution
h) Hint
m) Return to Main Menu
x) Exit
Tasks
Execute the break-and-fix script, and select bugs 11 through 20. List
problem resolution steps for each bug in the space provided.
Problem 11:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 12:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 13:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 14:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 15:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 16:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 17:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 18:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 19:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 20:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
The lab is complete when all 10 bugs are fixed and your lab system is
operational, with the VxVM software and all volumes started and
accessible.
Exercise Summary
Task Solutions
The solutions for the tasks in the lab exercise are found in the break-and-
fix script and on SunSolve in the appropriate SRDB and INFODOC files.
Objectives
Upon completion of this module, you should be able to:
● Describe the processes used to upgrade the VxVM software from
release 3.1 to 3.2 and from 3.2 to 3.5
● Perform an upgrade of the VxVM software to release 3.5
● Read, review, and interpret release notes
● Identify the top three installation problems
● Identify bugs, find patches, and apply patches for the release of the
VxVM software installed
● Identify and resolve licensing issues
● Upgrade the Solaris OE while the VxVM software is installed on the
system
Relevance
Additional Resources
The scripted, manual, and pkgadd upgrade processes make provisions for
the encapsulated state of the boot disk. These procedures, as applied to
the boot disk, must be executed precisely, or the upgrade fails.
Although this is called a scripted upgrade, there are manual steps needed
to finish the upgrade process. A general description of the manual steps
needed is as follows:
1. Obtain and install any new license keys needed for the new release
of the VxVM software.
2. Make sure that any system-level file systems that are under VxVM
software control have at least one plex where they begin on a
cylinder boundary.
3. If installing any documentation or man page packages, /opt must
exist, be writable, and not be a symbolic link.
4. Boot the system to single-user mode.
5. Load and mount the VxVM software upgrade CD-ROM.
6. Execute the upgrade_start script.
7. Reboot to single-user mode.
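The manual steps above can be collected into a dry-run shell sketch. This is a sketch only: the CD-ROM device, the mount point, and the location of the upgrade_start script on the distribution media are assumptions, and RUN="echo" prints each command rather than executing it.

```shell
# Dry-run sketch of the scripted upgrade's manual steps (4-7 above).
# Set RUN="" only on a lab system with current backups.
RUN="echo"

scripted_upgrade() {
  $RUN reboot -- -s                                   # 4. boot to single-user mode
  $RUN mount -F hsfs -o ro /dev/dsk/c0t6d0s0 /cdrom   # 5. mount the upgrade CD-ROM (device is an assumption)
  $RUN /cdrom/scripts/upgrade_start                   # 6. prepare the encapsulated boot disk
  $RUN reboot -- -s                                   # 7. reboot to single-user mode
}
scripted_upgrade
```

License installation (step 1) and plex cylinder-boundary checks (step 2) remain manual verification tasks before any of these commands are run.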
The general steps to upgrade the VxVM software manually with a non-
encapsulated boot disk are:
1. Back up the system partitions.
2. Completely remove the VMSA software package. Make sure that the
/opt/VRTSvmsa directory is removed.
3. Install required Solaris OE patches.
4. Install the new VxVM and VMSA software packages.
5. Apply the latest VxVM and VMSA software patches for the release
installed.
6. Run the vxinstall command.
7. Import the original disk groups. Make sure that they are upgraded, if
necessary, to use any new features provided by the new release of
the VxVM software.
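The seven manual-upgrade steps above can be sketched the same way. Package names follow the VRTS convention from the text; the patch file names, media path, and disk group name are placeholders, and RUN="echo" keeps this a dry run.

```shell
# Dry-run sketch of the manual upgrade with a non-encapsulated boot disk.
RUN="echo"

manual_upgrade() {
  $RUN pkgrm VRTSvmsa                                # 2. remove the old VMSA package
  $RUN rm -rf /opt/VRTSvmsa                          #    make sure its directory is removed
  $RUN patchadd /var/tmp/required-os-patch           # 3. required Solaris OE patches (placeholder path)
  $RUN pkgadd -d /var/tmp/vxvm35 VRTSvxvm VRTSvmsa   # 4. new VxVM and VMSA packages (placeholder path)
  $RUN patchadd /var/tmp/latest-vxvm-patch           # 5. latest VxVM/VMSA patches (placeholder path)
  $RUN vxinstall                                     # 6. initialize the new release
  $RUN vxdg import datadg                            # 7. re-import original disk groups (example name)
}
manual_upgrade
```

Step 1, backing up the system partitions, must happen before any of this.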
Upgrading to release 3.5 using pkgadd has its advantages over using the
supplied upgrade scripts, and there are some disadvantages. Table 7-1
compares the two processes.
Note: The VRTSlic package might not be removable if a second
VRTSvxvm package is using it.
To upgrade a disk group, use the vxdg command with the following
syntax:
# vxdg upgrade disk_group_name
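For example, you might check a disk group's current version before upgrading it (a hedged sketch; datadg is a placeholder disk group name, and RUN="echo" prints the commands instead of running them):

```shell
# Dry-run sketch: vxdg list reports the disk group version; vxdg upgrade
# raises it to the highest version the installed VxVM release supports.
# An upgraded disk group cannot be imported by older VxVM releases, so
# confirm backups first.
RUN="echo"

upgrade_dg() {
  $RUN vxdg list datadg      # inspect the "version:" line in the output
  $RUN vxdg upgrade datadg   # upgrade only after confirming backups
}
upgrade_dg
```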
Release Notes
All upgrades and patches include release notes. Be sure to read the release
notes prior to performing any upgrade or patch. Release notes contain
valuable installation and operation information specific to the patch or
release of the VxVM software being installed.
Release notes are normally found in the base directory of the patch or
upgrade distribution and contain either release or notes in the file
name.
Licensing
When upgrading the VxVM software, licensing must be taken into
consideration. If upgrading from the VxVM software version 2.x to 3.x, a
version 3.x license key must be installed prior to the upgrade, or the
VxVM software does not start.
Preparation
To prepare for this exercise:
● All lab systems must have Solaris 8 OE flash or JumpStart installed,
with all supporting packages and patches.
● At a minimum, the VxVM software version 3.2 must be installed and
configured.
● The boot disk must be encapsulated.
● The VxVM software packages and patches for the new version must
be available either through network file system (NFS) mounts, ftp,
or local access.
● You must have access to all SunSolve and VERITAS Software
Corporation documents referenced in this module.
Tasks
Complete the following steps:
1. Upgrade the VxVM software to a newer release using one of the
following methods:
● Upgrade using the upgrade scripts located in the
/Package_Distribution_Path/scripts directory.
Reference the VERITAS Volume Manager™ 3.x Installation Guide
section “Upgrading VxVM on an Encapsulated Root Disk.” This
section provides a detailed, step-by-step procedure for using
the upgrade scripts.
● Upgrade using manual procedures.
Reference VERITAS Software Corporation TechNote ID 240006
for the procedures to perform the upgrade without using the
upgrade scripts.
Exercise Summary
SunSolve INFODOCs
This appendix contains the following SunSolveSM Online INFODOC
documents, available at:
http://sunsolve.Sun.COM/pub-cgi/search.pl?mode=advanced
● INFODOC 16051 – “How to ‘Encapsulate’ Disks With No Free Space
Using Volume Manager.” 22 March 2002.
● INFODOC 24663 – “Full and Basic/Functional Unencapsulation of a
Volume Manager Encapsulated Root Disk While Booted CDROM.”
22 March 2002.
INFODOC 16051
INFODOC ID: 16051
SYNOPSIS: How to 'Encapsulate' Disks With No Free Space Using Volume Manager
DETAIL DESCRIPTION:
To encapsulate a disk into Veritas Volume Manager or Sun Enterprise Volume Manager[TM],
you must have some free space on the disk so that Volume Manager can write a private
region to it. The private region is generally smaller than 2 Mbytes.
However, if you absolutely do not have any free space on the disk, and you can't free up
any, and you really want to get this data under Volume Manager control, you can work
around this by TEMPORARILY "encapsulating" one or more slices from a disk into Volume
Manager so that the data may be mirrored to another disk. Once the data is mirrored to
a "real" Volume Manager disk with a private and public region, you can then break the
mirror, leaving the data on the "real" Volume Manager disk.
SOLUTION SUMMARY:
Here is how to do it:
For each slice on the disk (excluding slice 2), run the following command. In this
example, only slices 5 and 6 have data on them.
vxdisk define c#t#d#s5 type=nopriv
vxdisk define c#t#d#s6 type=nopriv
Then add each of these "slices" as a disk in a disk group and give them a name. This
example names them NPdisk05 and NPdisk06.
vxdg -g <diskgroup> adddisk NPdisk05=c#t#d#s5
vxdg -g <diskgroup> adddisk NPdisk06=c#t#d#s6
Next we create a simple volume (not a file system, just a volume) on each of these new
"disks" that spans the entire "disk". To do this we first check to see what the max size
is for the volumes we are about to create. We're looking for the len value to then use
with the vxassist command to create the volumes.
vxdisk list NPdisk05 | grep public
public: slice=0 offset=0 len=8196096
vxdisk list NPdisk06 | grep public
public: slice=0 offset=0 len=9400320
With this info we create the volumes, naming them NPdisk05vol and NPdisk06vol:
vxassist -g <diskgroup> make NPdisk05vol 8196096 layout=nostripe alloc="NPdisk05"
vxassist -g <diskgroup> make NPdisk06vol 9400320 layout=nostripe alloc="NPdisk06"
The next step is to mirror the volumes, assuming that we are mirroring them to a disk
named disk01 that has enough space for both:
vxassist -g <diskgroup> mirror NPdisk05vol layout=nostripe alloc="disk01"
vxassist -g <diskgroup> mirror NPdisk06vol layout=nostripe alloc="disk01"
Once that is complete we then remove the original side of the mirror.
vxplex -g <diskgroup> -o rm dis NPdisk05vol-01
vxplex -g <diskgroup> -o rm dis NPdisk06vol-01
The final step is to remove the old disks from the disk group and return them to
their original state.
vxdg -g <diskgroup> rmdisk NPdisk05
vxdg -g <diskgroup> rmdisk NPdisk06
vxdisk rm c0t5d10s5
vxdisk rm c0t5d10s6
This leaves us with two concat volumes named NPdisk05vol and NPdisk06vol.
These volumes contain the data that was originally located on
c#t#d#s5 and c#t#d#s6.
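The whole nopriv workaround above, collected into one dry-run sketch for a single slice. The device name, disk group, volume length, and target mirror disk are example values from the INFODOC; RUN="echo" prints the commands rather than executing them.

```shell
# Dry-run sketch of migrating one full slice (no free space for a private
# region) onto a real Volume Manager disk, then releasing the slice.
RUN="echo"
DG=datadg       # assumption: target disk group
DEV=c1t3d0      # assumption: the disk with no free space

nopriv_migrate() {
  $RUN vxdisk define ${DEV}s5 type=nopriv                                # temporary "disk" from a raw slice
  $RUN vxdg -g $DG adddisk NPdisk05=${DEV}s5                             # add it to the disk group
  $RUN vxassist -g $DG make NPdisk05vol 8196096 layout=nostripe alloc=NPdisk05  # volume spanning the slice
  $RUN vxassist -g $DG mirror NPdisk05vol layout=nostripe alloc=disk01   # mirror to a real VM disk
  $RUN vxplex -g $DG -o rm dis NPdisk05vol-01                            # break off the original plex
  $RUN vxdg -g $DG rmdisk NPdisk05                                       # release the nopriv "disk"
  $RUN vxdisk rm ${DEV}s5
}
nopriv_migrate
```

Wait for the mirror resynchronization to finish before removing the original plex.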
INFODOC 24663
INFODOC ID: 24663
SYNOPSIS: Full and Basic/Functional Unencapsulation of a Volume Manager Encapsulated
Root Disk While Booted CDROM
DETAIL DESCRIPTION:
Overview:
This document explains the steps necessary to unencapsulate the root disk from Volume
Manager control. This document applies to both Sun Enterprise Volume Manager[TM] (SEVM)
2.x and Veritas Volume Manager (VxVM) 3.x.
This document is divided into two distinct sections. The first section describes full
unencapsulation while booted from a Solaris CDROM. This procedure should be used any
time it is necessary to completely remove the root disk from Volume Manager control and
bring the disk back to a pre-encapsulation state, including all partitions such as
/export and /opt.
The second section explains the steps to perform a Basic/Functional (BF) unencapsulation
while booted from a Solaris CDROM. Basic/Functional unencapsulation temporarily
unencapsulates the root disk so that troubleshooting of booting issues or other issues
can be done. BF unencapsulation gives you access to an unencapsulated /, swap, /usr, and
/var but no access to non "big-4" partitions.
SOLUTION SUMMARY:
Notes for Full Unencapsulation:
Under normal circumstances, if the system can be booted to at least single user mode, it
is recommended that the vxunroot command be used to unencapsulate root. A full
unencapsulation should be performed if the vxunroot command is not working for some
reason, or if the system cannot be booted and we want to completely remove Volume
Manager from having any control over the root disk.
You cannot perform a full unencapsulation and still maintain Volume Manager
functionality if the root disk is the ONLY disk in the rootdg diskgroup. If the root
disk is the only disk in rootdg, you can still unencapsulate, but Volume Manager will
not work until another disk is initialized into rootdg using vxinstall after the system
has been fully unencapsulated. Normally, if root is encapsulated, it is also mirrored,
which provides another disk in rootdg. However, always verify that there is at least
one other disk in rootdg before following this procedure, so that you know what to
expect once root is unencapsulated.
Also note that Volume Manager will allow you to create volumes using free space on the
root disk, after the root disk has been encapsulated. Volumes created post-encapsulation
like this do not have underlying hard partitions and therefore are not recoverable with
this procedure. If at all possible, make backups of any volumes created on the rootdisk
post-encapsulation before following this procedure. Once you are unencapsulated, if you
have free space and a free partition, you could newfs that partition and restore to it
from your backup.
Steps for Full Unencapsulation:
Bring the system to the OK prompt and insert a Solaris CD into the CDROM drive.
Then issue:
boot cdrom -s
Once booted from the CD-ROM, set your terminal type so that vi works correctly.
If TERM=sun doesn't work, TERM=vt100 often will.
TERM=sun;export TERM
Fsck your root filesystem:
fsck -y /dev/rdsk/c#t#d#s0
If fsck comes back cleanly, mount slice 0 to /a. If fsck cannot repair the root file
system, there are obviously a number of possibilities. This procedure does not attempt
to explain file system corruption or how to repair it beyond fsck. Fsck must come back
cleanly to continue and mount root.
mount /dev/dsk/c#t#d#s0 /a
Make a backup of /a/etc/system and then edit it:
cp /a/etc/system /a/etc/system.orig
vi /a/etc/system
Completely remove the following lines from the system file. If you re-encapsulate in the
future, these lines will be added back correctly so there is nothing to be lost by
removing them.
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
Make a backup of /a/etc/vfstab and then edit it:
cp /a/etc/vfstab /a/etc/vfstab.orig
vi /a/etc/vfstab
Edit the vfstab file back to its original state, pointing /, swap, /usr, and /var to
hard partitions on the disk (/dev/dsk and /dev/rdsk entries) rather than /dev/vx/
entries. Temporarily comment out all other /dev/vx volumes in the /a/etc/vfstab file
using the # character. This includes file systems such as /opt and /export, if they exist.
The original /etc/vfstab will look something like this, assuming root is c0t0d0:
Note: Columns have been aligned and spaces added for clarity.
---------------------------------------------------------------------------
/dev/vx/dsk/swapvol - - swap - no -
/dev/vx/dsk/rootvol /dev/vx/rdsk/rootvol / ufs 1 no -
/dev/vx/dsk/usr /dev/vx/rdsk/usr /usr ufs 1 no -
/dev/vx/dsk/var /dev/vx/rdsk/var /var ufs 1 no -
/dev/vx/dsk/export /dev/vx/rdsk/export /export ufs 2 yes -
swap - /tmp tmpfs - yes -
---------------------------------------------------------------------------
Now make sure Volume Manager does not start on the next boot:
touch /a/etc/vx/reconfig.d/state.d/install-db
This is important because, if the root disk contains mirrors and the system boots up,
the mirrors will be resynchronized, corrupting the changes we just made.
Remove the flag that tells Volume Manager that the root disk is encapsulated:
rm /a/etc/vx/reconfig.d/state.d/root-done
Reboot the system for changes to take effect:
reboot
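The CD-booted preparation steps above can be sketched as one dry-run script. The boot disk name is an example, the /etc/system and /etc/vfstab edits still have to be made by hand as described, and RUN="echo" prints the commands instead of executing them.

```shell
# Dry-run sketch of full-unencapsulation preparation while booted from
# a Solaris CD-ROM (boot cdrom -s).
RUN="echo"
BOOTDISK=c0t0d0    # assumption: example boot disk

full_unencap_prep() {
  $RUN fsck -y /dev/rdsk/${BOOTDISK}s0                  # root fsck must come back clean
  $RUN mount /dev/dsk/${BOOTDISK}s0 /a
  $RUN cp /a/etc/system /a/etc/system.orig              # then remove the rootdev lines by hand
  $RUN cp /a/etc/vfstab /a/etc/vfstab.orig              # then point /, swap, /usr, /var at hard partitions
  $RUN touch /a/etc/vx/reconfig.d/state.d/install-db    # keep Volume Manager from starting
  $RUN rm /a/etc/vx/reconfig.d/state.d/root-done        # clear the encapsulated-root flag
  $RUN reboot
}
full_unencap_prep
```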
When we reboot, the system comes up in a partially unencapsulated state with /, /usr,
/var, and swap mounted. Volume Manager does not start, but we can start it manually
once the system is booted.
To start Volume Manager, run the following commands:
rm /etc/vx/reconfig.d/state.d/install-db
vxiod set 10
vxconfigd -m disable
vxdctl enable
Now we can remove the volumes that existed on the encapsulated boot disk. They will
generally be rootvol, swapvol, usr, and var. They might also include home, opt, or
other non-standard root partitions. Use the command 'vxprint -htg rootdg' to list the
volumes in rootdg before removing them. Then, for each volume, run the command:
/usr/sbin/vxedit -rf rm <volume name>
Remove the rootdisk from rootdg now that it has no volumes, plexes, or subdisks. The
disk name is usually 'rootdisk':
/usr/sbin/vxdg rmdisk <disk name>
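The volume removal and disk removal above can be looped over the typical boot volumes. The list of volume names here is an assumption drawn from the text; verify the actual names with vxprint -htg rootdg first. RUN="echo" keeps this a dry run.

```shell
# Dry-run sketch: remove the boot volumes listed by vxprint -htg rootdg,
# then remove the now-empty rootdisk from rootdg.
RUN="echo"

remove_boot_volumes() {
  for vol in rootvol swapvol usr var; do   # typical names; opt or home may also exist
    $RUN /usr/sbin/vxedit -rf rm "$vol"
  done
  $RUN /usr/sbin/vxdg rmdisk rootdisk      # disk name is usually rootdisk
}
remove_boot_volumes
```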
The final step is to re-write the vtoc of the disk so that hard partitions are again
defined for the root file systems. There are several ways to put the hard partitions
back, including using fmthard on a modified /etc/vx/reconfig.d/disk.d/c#t#d#/vtoc file,
using format to manually repartition the disk, or using the vxmksdpart command. The
simplest method, however, is to use the vxedvtoc command, as explained below.
When Volume Manager encapsulates a disk, it makes a record of the old vtoc of the disk.
This file is stored for each disk in /etc/vx/reconfig.d/disk.d/c#t#d#. This file is
stored in a Volume Manager specific format, so it can't be used as an argument to
fmthard unless it is modified. The 'vxedvtoc' command is similar to fmthard but knows
how to read this vtoc file and write that vtoc to a disk. The command takes the form:
vxedvtoc -f <filename> <devicename>
Assuming that the boot disk is c0t0d0, we would now run the command:
/etc/vx/bin/vxedvtoc -f /etc/vx/reconfig.d/disk.d/c0t0d0/vtoc /dev/rdsk/c0t0d0s2
# THE ORIGINAL PARTITIONING IS AS FOLLOWS :
#SLICE TAG FLAGS START SIZE
0 0x0 0x200 0 0
1 0x0 0x200 0 0
2 0x5 0x201 0 8794112
3 0x0 0x200 0 0
4 0x0 0x200 0 0
5 0x0 0x200 0 0
6 0xe 0x201 0 8794112
7 0xf 0x201 8790016 4096
# THE NEW PARTITIONING WILL BE AS FOLLOWS :
#SLICE TAG FLAGS START SIZE
0 0x0 0x200 0 2048000
1 0x0 0x200 2048000 2048000
2 0x5 0x201 0 8794112
3 0x0 0x201 4096000 2048000
The corrected /etc/vfstab entries, pointing at hard partitions, will look something
like this:
---------------------------------------------------------------------------
/dev/dsk/c0t0d0s1 - - swap - no -
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
/dev/dsk/c0t0d0s5 /dev/rdsk/c0t0d0s5 /usr ufs 1 no -
/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /var ufs 1 no -
/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /export ufs 2 yes -
swap - /tmp tmpfs - yes -
---------------------------------------------------------------------------
Steps for Basic/Functional Unencapsulation:
Comment out (rather than remove) the rootdev lines in the /a/etc/system file:
**rootdev:/pseudo/vxio@0:0
**set vxio:vol_rootdev_is_volume=1
Make a backup of /a/etc/vfstab and then edit it:
cp /a/etc/vfstab /a/etc/vfstab.orig
vi /a/etc/vfstab
Edit the vfstab file back to its original state, pointing /, swap, /usr, and /var to
hard partitions on the disk (/dev/dsk and /dev/rdsk entries) rather than /dev/vx/
entries. Temporarily comment out all other /dev/vx volumes in the /a/etc/vfstab file
using the # character. This includes file systems such as /opt and /export, if they exist.
The original /etc/vfstab will look something like this, assuming root is c0t0d0: Note:
Columns have been aligned and spaces added for clarity.
---------------------------------------------------------------------------
/dev/vx/dsk/swapvol - - swap - no -
/dev/vx/dsk/rootvol /dev/vx/rdsk/rootvol / ufs 1 no -
/dev/vx/dsk/usr /dev/vx/rdsk/usr /usr ufs 1 no -
/dev/vx/dsk/var /dev/vx/rdsk/var /var ufs 1 no -
/dev/vx/dsk/export /dev/vx/rdsk/export /export ufs 2 yes -
swap - /tmp tmpfs - yes -
When we reboot, the system comes up in an unencapsulated state with /, /usr, /var,
and swap mounted.
At this point we have performed a Basic/Functional unencapsulation. This is not a
state that the system should be left in permanently; it is a state that is useful for
troubleshooting and system maintenance.
If problems with the system are resolved and you are ready to re-encapsulate, perform
the following:
touch /etc/vx/reconfig.d/state.d/root-done
rm /etc/vx/reconfig.d/state.d/install-db
cp /a/etc/vfstab.orig /a/etc/vfstab
cp /a/etc/system.orig /a/etc/system
reboot
Keywords: SEVM, VxVM, Volume Manager, encapsulation
APPLIES TO: Operating Systems/Solaris/Solaris 8, Operating Systems/Solaris/Solaris 7,
Operating Systems/Solaris/Solaris 2.6, Operating Systems/Solaris/Solaris 2.5.1,
Storage/Veritas, Storage/Volume Manager, AFO Vertical Team Docs/Storage
ATTACHMENTS:
This appendix contains flow charts which outline the processes used to
perform the following tasks:
● Encapsulate non-root or data disks
● Unencapsulate non-root or data disks
● Encapsulate boot disks
● Unencapsulate boot disks by using the vxunroot command
● Unencapsulate boot disks without using the vxunroot command
● Unencapsulate boot disks when the system is booted from a CD-ROM
● Perform a basic boot disk unencapsulation
● Perform the Sun Enterprise Services’ best practice procedures for
managing boot disks with the VxVM software
[Flowchart pages omitted: the original appendix renders the Disk Encapsulation
Flowchart (Rev-B, 07/18/2002) as 26 pages of flow diagrams that do not survive as
text. The recoverable content: selecting disks to encapsulate in the vxdiskadm
utility and naming the boot disk rootdisk; creating and mirroring volumes on nopriv
"disks" with vxassist and removing the original plexes with vxplex -o rm dis;
unencapsulating boot disks with vxunroot, manually, or while booted from CD-ROM
(editing /etc/system and /etc/vfstab, touching install-db, removing root-done,
restarting VxVM with vxiod, vxconfigd, and vxdctl, removing boot volumes with
vxedit -rf rm, and rewriting the vtoc with vxedvtoc or vxmksdpart); performing a
Basic/Functional unencapsulation and restoring the encapsulated state; and
re-mirroring the boot disk with vxrootmir, or reinitializing it with
vxdisksetup -i and vxdg adddisk, monitoring progress with vxtask list.]
Encapsulation of a boot disk that has only the /, swap, /usr, and /var
partitions is a straightforward process. The encapsulation process creates
the public and private regions needed by the VxVM software and an
overlap partition for each of the system partitions needed to boot the
system. Unencapsulation is also straightforward, as long as the
vxunroot utility is successful.
When a boot disk has other slices in addition to /, swap, /usr, and /var,
encapsulation is more complex. The main system partitions (/, swap,
/usr, and /var) are still visible, but the additional partitions, such as /opt
or /home, disappear. A similar result occurs during the data disk
encapsulation process. The data is not moved or overwritten; it is still
accessible by the VxVM software and is mounted as a VxVM software
volume.
[Figure omitted: encapsulated boot disk layout. Slice 0 (/), slice 1 (swap),
slice 3 (/usr), slice 4 (/var), and slice 5 (/opt) map to the rootvol, swapvol,
usr, var, and opt volumes through overlay partitions; slice 2 overlaps the full
disk, slice 6 serves as the public region, and slice 7 (formerly free space)
holds the private region. On the mirror disk the layout is preserved, with
slice 3 holding the private region, slice 4 the public region, and /opt
encapsulated in the public region.]
Pre-Encapsulation df -k Command
Example output from the df -k command delineates the physical devices
supporting currently mounted file systems.
bash-2.03# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t2d0s0 1018382 47206 910074 5% /
/dev/dsk/c1t2d0s3 2055705 772430 1221604 39% /usr
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
/dev/dsk/c1t2d0s4 2055705 956248 1037786 48% /var
swap 1222104 16 1222088 1% /var/run
swap 1222104 16 1222088 1% /tmp
/dev/dsk/c1t2d0s5 2055705 2133 1991901 1% /opt
Post-Encapsulation df -k Command
The following example of output from the df -k command shows that
the VxVM software volumes are used for the file systems provided by the
boot disk.
bash-2.03# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/rootvol 1018382 75398 881882 8% /
/dev/vx/dsk/usr 2055705 805992 1188042 41% /usr
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
/dev/vx/dsk/var 2055705 974276 1019758 49% /var
swap 1180368 16 1180352 1% /var/run
swap 1180424 72 1180352 1% /tmp
/dev/vx/dsk/opt 2055705 53686 1940348 3% /opt
Manually Unencapsulating
The manual unencapsulation process for a five-slice disk is similar to the
manual procedure described in ‘‘Manually Unencapsulating a Boot Disk’’
on page 2-60. The one difference in the process of unencapsulating a five-
slice boot disk is the recovery of the /opt partition; this procedure is
covered as part of the manual unencapsulation procedure.
This appendix contains an annotated boot process code list that identifies
when each of the VxVM software start-up scripts execute during the
system boot process.
{0} ok boot -v
Rebooting with command: boot -v
Boot device: /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713fc9f,0:a File and args: -
v
Size: 339096+90034+75930 Bytes
SunOS Release 5.8 Version Generic_108528-14 64-bit
Copyright 1983-2001 Sun Microsystems, Inc. All rights reserved.
Ethernet address = 8:0:20:7d:5d:60
mem = 262144K (0x10000000)
avail mem = 232955904
root nexus = 8-slot Sun Enterprise 4000/5000
sbus0 at root: UPA 0x2 0x0 ...
sbus0 is /sbus@2,0
sbus1 at root: UPA 0x3 0x0 ...
sbus1 is /sbus@3,0
socal0 at sbus1: SBus1 slot 0x0 offset 0x0 and slot 0x0 offset 0x10000 and slot 0x0
offset 0x20000 SBus level 3 sparc9 ipl 5
socal0 is /sbus@3,0/SUNW,socal@0,0
sf0 at socal0: socal_port 0
sf0 is /sbus@3,0/SUNW,socal@0,0/sf@0,0
sf1 at socal0: socal_port 1
sf1 is /sbus@3,0/SUNW,socal@0,0/sf@1,0
soc0 at sbus0: SBus0 slot 0xd offset 0x10000 Onboard device sparc9 ipl 5
soc0 is /sbus@2,0/SUNW,soc@d,10000
ssd0 at sf1: name w210000203713f582,0, bus address c7
ssd0 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f582,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f582,0 (ssd0) online
ssd2 at sf1: name w210000203713e0b9,0, bus address e2
ssd2 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713e0b9,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713e0b9,0 (ssd2) online
ssd1 at sf1: name w210000203713f643,0, bus address cc
ssd1 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f643,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f643,0 (ssd1) online
pseudo-device: devinfo0
devinfo0 is /pseudo/devinfo@0
VxVM general startup...
NOTICE: vxvm:vxio: Cannot open disk c1t3d0s2: kernel error 6
dump on /dev/dsk/c1t0d0s1 size 513 MB
NOTICE: vxvm:vxio: Cannot open disk c1t3d0s2: kernel error 6
Objectives
This appendix contains information related to VxVM software device
configuration tasks.
Relevance
Additional Resources
Layered Volumes
Prior to version 3.2, the VxVM software applied default rules when
assigning disks with the vxassist command. The default rules essentially
created the RAID stripe and then the mirror. VxVM software version 3.0
introduced layered volumes, allowing RAID 1 + RAID 0 (mirroring plus
striping) configurations. This feature is fully implemented in VxVM
software version 3.2 through the vxassist command's -o ordered option.
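The two orderings can be contrasted with a dry-run sketch. The disk group name, volume names, sizes, and column count are example values; RUN="echo" prints the commands rather than executing them.

```shell
# Dry-run sketch: vxassist layout keywords for mirror/stripe ordering.
RUN="echo"

layered_examples() {
  # Mirroring above striping (the pre-layered behavior):
  $RUN vxassist -g datadg make vol01 2g layout=mirror-stripe ncol=2
  # Striping above mirroring (a layered RAID 1 + RAID 0 volume):
  $RUN vxassist -g datadg make vol02 2g layout=stripe-mirror ncol=2
}
layered_examples
```

With the layered stripe-mirror layout, each column is mirrored beneath the stripe, so a single disk failure detaches only that column's mirror rather than an entire plex.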
Plex States
Plex state information reflects consistent or inconsistent configurations
and the state of those configurations.
Volume States
Volume states consist of the following:
● Clean – Volume is not started; kernel state is disabled, but plexes are
synchronized.
● Active – Volume is started; kernel state is enabled.
● Empty – Volume is not initialized; kernel state is disabled.
● Sync – Volume is in recovery mode; kernel state is enabled. Also
indicates the volume was recovered after boot; kernel state is disabled,
and plexes need to be resynchronized.
● Needsync – Volume requires resynchronization.
● Replay – Volume is in transient state as part of log replay (only valid
for RAID 5).
Disk States
The following example shows output of the vxdisk list command:
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 sliced disk06 - online
c0t2d0s2 sliced - - error
c1t0d0s2 sliced rootdisk rootdg online
c2t0d0s2 sliced - - error
c3t0d1s2 sliced - - online
c3t0d4s2 sliced rootmir rootdg online
c3t1d1s2 sliced newdg01 newdg online spare
c3t2d0s2 sliced newdg02 newdg online spare
c3t2d1s2 sliced disk01 rootdg online
c3t2d2s2 sliced newdg04 newdg online reserved
c3t3d0s2 sliced newdg05 newdg online nohotuse
c3t3d2s2 sliced newdg07 newdg online invalid
c3t3d3s2 sliced - - error
c3t4d0s2 sliced newdg09 newdg online failing
c3t4d3s2 sliced newdg12 newdg online altused
c3t5d2s2 sliced newdg15 newdg online
- - newdg03 newdg removed was:c3t2d1s2
- - newdg08 newdg removed was:c3t3d3s2
- - newdg11 newdg failed was:c3t4d2s2
Field Descriptions
This section contains an explanation of the fields in the vxprint output.
For additional information about the vxprint command, see the vxprint
man page.
● Record type dg:
● Disk group name
● Disk ID
● Record type dm:
● Record name
● Underlying disk access record
● Disk access record type (sliced, simple, or nopriv)
● Length of the disk’s private region
● Length of the disk’s public region
● Record type sd:
● Record name
● Associated plex, or dash (-) if the subdisk is dissociated
● Name of the disk media record used by the subdisk
● Device offset in sectors
● Subdisk length in sectors
● Plex association offset – Optionally, this value is preceded by
subdisk column number for subdisks associated to striped
plexes, LOG for log subdisks, or the putil[0] field if the
subdisk is dissociated. The putil[0] field can be non-empty to
reserve the subdisk space for non-volume uses. If the putil[0]
field is empty, it is a dissociated subdisk.
● Subdisk state string:
● ENA – The subdisk is usable.
● DIS – The subdisk is disabled.
● RCOV – The subdisk is part of a RAID-5 plex and has stale
content.
● DET – The subdisk is detached.
● KDET – The subdisk is detached in the kernel due to an
error.
For volumes, the output consists of the following fields, from left to right:
● Record type v:
● Record name
● Associated usage type
● Volume kernel state
● Volume utility state
● Volume length in sectors
● Volume read policy
● The preferred plex, if the read policy uses a preferred plex
● Record type dc:
● Record name
● Associated volume, or dash (-) if the data change object (DCO)
is dissociated
● Name of the DCO log volume, or dash (-) if no DCO log
volume is associated with the DCO object
● Record type sp:
● Record name
● Name of the volume which this snap record describes
● Name of the DCO with which this snap record is associated
● Record type rv:
● Record name
● Associated remote link (RLINK) object count
● Remote volume group (RVG) kernel state (derived from various
flags)
● RVG utility state
● RVG primary flag (primary or secondary)
In previous releases, when mirroring was used, the mirroring had to occur
above striping. In the VxVM software version 3.2, mirroring can occur
both above and below striping. Mirroring put below striping mirrors each
column of the stripe. If the stripe is large enough to have multiple
subdisks per column, each subdisk can be individually mirrored. Instead
of forming subdisks into plexes and then mirroring the plexes, RAID 1 + 0
provides a separate plex for each disk that is then mirrored individually.
Figure E-2 Striped Volume With Mirroring Below the Stripe (volume vol01
contains the striped plex vol01-03, which is built from two subvolumes,
vol01-S01 and vol01-S02; each subvolume mirrors a pair of subdisks)
Matching the diagram in Figure E-2 on page E-13 against the output of the
example vxprint command shows the following objects:
● Volume (v) – vol01
● Plex (pl) – vol01-03
● Subvolume – vol01-S01
● Subvolume – vol01-S02
These objects provide the stripe. Notice that even though mirroring is
implemented, it is not visible to the single upper layer plex. In essence, this
configuration is a single stripe that is mirrored.
There are two subvolumes with a total of four subdisks in the plex. From
the perspective of the plex, the capacity is equal to two subdisks. The
other subdisks provide the data redundancy. Underneath the plex, logical
subvolumes provide the mirroring as shown in Figure E-3.
Figure E-3 Striped Plex (beneath the plex, subvolume vol01-S01 contains
layered volume vol01-L01, which mirrors plexes vol01-P01 on Disk 04
and vol01-P02 on Disk 01; subvolume vol01-S02 contains vol01-L02,
which mirrors plexes vol01-P03 on Disk 02 and vol01-P04 on Disk 03)
This command allocates space for column 1 from disks on controllers c1,
for column 2 from disks on controller c2, and so on.
In this example, the disks in one plex are all attached to controller c1, and
the disks in the other plex are all attached to controller c2. If a controller
fails, only one side of the mirror is lost.
Striping Considerations
How best to perform striping is a subject of ongoing debate with no
single correct answer. This is partly because striping can either help or
hurt performance, depending on the workload, the number of stripes per
disk, the degree of concurrent access, and how efficiently the application
performs I/O.
Striping Characteristics
Online Re-Layout
Online re-layout allows reconfiguration of RAID layouts on currently
configured volumes. A volume can be re-laid out or converted to another
layout.
Only one plex is used if you are re-laying out to RAID 5. See the
vxassist man pages for additional information.
The vmsa GUI allows a user to grow either a volume or a file system.
What is resized depends on whether the user selects the volume or the
file system. Selecting the volume enables a resize menu entry that grows
only the volume. Selecting the file system enables a resize menu entry
that grows both the volume and the file system.
Multiply the value after bsize by the value after size to determine the
size of the file system in bytes. To translate bytes to sectors, divide that
number by 512.
1024 (bsize) x 20480 (size) / 512 = 40960 sectors
This file system size is 40960 sectors, which matches the volume size.
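The calculation above can be checked directly in the shell:

```shell
# Worked example of the file-system size calculation: bytes = bsize * size,
# sectors = bytes / 512. Values are taken from the example above.
bsize=1024
size=20480
sectors=$(( bsize * size / 512 ))
echo "$sectors sectors"   # prints: 40960 sectors
```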
3. Determine the largest size to which an existing volume can be grown.
Type the following:
# vxassist -g rootdg maxgrow vol01 disk01 disk02 disk03
Volume vol01 can be extended by 12244992 to 12285952 (5999Mb)
If you do not specify the disks, the vxassist command uses the
disks in the disk group.
4. Grow the volume to the required size. Type the following:
# vxassist -g rootdg growto vol01 10639360 disk01 disk02 disk03
This command only grows the volume. Remember to grow the file
system as well, if needed.
Note – The vxassist command also has a growby option. See the
vxassist man pages for more information.
Determine to which size to grow the file system. Usually, this is the size of
the volume. Proceed as follows:
1. Determine the size of the volume with the following command:
# vxprint -g rootdg -t vol01
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
v vol01 fsgen ENABLED ACTIVE 10639360 SELECT -
2. Grow the file system. Type the following:
# /usr/lib/fs/ufs/mkfs -F ufs -M /mnt /dev/vx/rdsk/rootdg/vol01 10639360
The mkfs command has an option to grow a file system instead of making
one. To use this option, the file system must be mounted. You must also
use the full path to the mkfs program; do not use /usr/sbin/mkfs.
Below are examples of grown file systems. Notice the change in the offset
positions in the first example and the change in the disk offset in the
second example.
Caution – Using the same target name as an existing deported disk group
destroys that group.
To move objects from one imported disk group to another, use the
following syntax:
vxdg [-o expand] move sourcedg targetdg [object ...]
For each of these vxdg commands, the option -o expand includes all
disks from volumes sharing subdisks.
For a complete list of options for the vxdg command, see the man pages.
The VxVM software uses the TUTIL0 and PUTIL0 fields to lock the
affected objects during transition.
2. Enter the following command to complete the move:
# vxdg recover new-dg
The PUTIL and TUTIL fields are used as locks on volumes, plexes,
subdisks, and disks during most types of configuration changes and
recoveries. For example, if attaching a plex to a volume, the plex TUTIL0
field is set to ATT. Once the attach is complete, the TUTIL0 field is cleared
automatically.
The PUTIL fields are permanent and stay set even after a reboot. The
TUTIL fields are temporary and do not survive a reboot.
To list the PUTIL and TUTIL fields, use the following command:
# vxprint -h vol01
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
v vol01 fsgen ENABLED 2050272 - ACTIVE ATT1 -
pl vol04-01 vol01 ENABLED 2050272 - ACTIVE - -
sd newdg02-02 vol04-01 ENABLED 2050272 0 - - -
pl vol01-01 vol01 ENABLED 2050272 - TEMPRMSD ATT -
sd newdg11-01 vol01-01 ENABLED 2050272 0 - - -
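Locked objects can be spotted by scanning for non-empty TUTIL0 fields. The following is a minimal sketch, assuming the vxprint -h column layout shown above (TUTIL0 in column 8); the locked_objects helper name is hypothetical:

```shell
# Print the name and TUTIL0 lock value of every record whose TUTIL0
# field is set, from `vxprint -h` style output on standard input.
locked_objects() {
  awk 'NR > 1 && NF >= 8 && $8 != "-" { print $2, $8 }'
}

# Example run: the volume and one plex are locked during a plex attach.
locked_objects <<'EOF'
TY NAME        ASSOC    KSTATE  LENGTH  PLOFFS STATE    TUTIL0 PUTIL0
v  vol01       fsgen    ENABLED 2050272 -      ACTIVE   ATT    -
pl vol04-01    vol01    ENABLED 2050272 -      ACTIVE   -      -
pl vol01-01    vol01    ENABLED 2050272 -      TEMPRMSD ATT    -
EOF
# prints: vol01 ATT and vol01-01 ATT
```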
Reconfiguration Considerations
The disk group move, split, and join features have the following
limitations:
● Disk groups involved in a move, split, or join must be version 90 or
greater.
● The reconfiguration must involve physical disks.
● Objects to be moved must not contain open volumes.
● Moved volumes are initially disabled following a disk group move,
split, or join. Use either vxrecover -m or vxvol startall to restart
the volumes.
● Data change objects (DCOs) and snap objects that are dissociated by
persistent fast resynchronization cannot be moved between disk
groups.
● Sun StorEdge Volume Replicator (VR) objects cannot be moved
between disk groups.
Hot-Relocation
Hot-relocation allows a system to relocate data automatically in a
redundant configuration, in the event of a subdisk failure. There are,
however, a number of restrictions for hot-relocation use:
● Subdisks must be in a redundant configuration (mirror or RAID 5).
● Space must be available to contain the recovered data.
● Hot-relocation fails if:
● Space is only available in the same plex of the mirror as the
failed subdisk.
● Space is only available on a plex that contains a RAID 5 log.
● The failed subdisk is in the same plex as a DRL log.
● RAID 5 logs or DRL logs are created, not relocated.
This section discusses the hot-relocation process and how to perform it.
Hot-Relocation Process
To execute relocation, the hot-relocation daemon vxrelocd handles four
distinct operations, in the following order:
1. Failure detection
2. Notification
3. Relocation
4. Recovery
Figure E-5 on page E-33 shows the interaction of various processes during
these procedures.
Figure E-5 Hot-Relocation Process Interaction (vxnotify reports the
detected failure to vxrelocd; vxrelocd determines the correct action,
notifies users through mailx, and determines available space; vxconfigd
accesses the disk and updates the configuration; vxassist builds the
replacement objects; vxrecover recovers the data)
Failure Detection
Notification
The vxrelocd daemon notifies users (by default, the root user) using the
mailx command, providing information about the failure and the status
of relocation and recovery. The file /etc/rc2.d/S95vxvm-recover
contains a list of users to notify.
Changes to this file take effect at the next reboot or when the vxrelocd
command is executed from the command line. To execute this command
from the command line, kill the running vxrelocd daemon first, but be
careful not to kill the daemon in the middle of a relocation process.
The notification message resembles the following:
Volume vol02 Subdisk disk04-02 relocated to disk06-03, but not yet recovered.
Relocation
To display the spare flag use either the vxdisk or vxprint command, as
shown in the following example:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 sliced rootdisk rootdg online
c1t12d0s2 sliced altboot rootdg online
c1t13d0s2 sliced newdg01 newdg online
c1t14d0s2 sliced newdg02 newdg online
c1t15d0s2 sliced newdg03 newdg online spare
If a spare flag is not set, the vxrelocd daemon uses available space in
the disk group to build the VxVM object. To exclude a disk from use in
hot-relocation, set the nohotuse flag as follows:
# /etc/vx/bin/vxedit -g disk_group set nohotuse=on disk_name
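The spare and nohotuse flags appear in the STATUS column of vxdisk list output, so a disk's hot-relocation eligibility can be summarized with a short filter. A minimal sketch, assuming the flag is the last field when present; the relocation_flags helper name is hypothetical:

```shell
# Classify each disk in `vxdisk list` style output by hot-relocation
# eligibility: "spare" disks are preferred relocation targets,
# "nohotuse" disks are excluded, and all others are eligible free space.
relocation_flags() {
  awk 'NR > 1 {
    flag = "eligible"
    if ($NF == "spare")    flag = "spare"
    if ($NF == "nohotuse") flag = "excluded"
    print $1, flag
  }'
}

# Example run against a subset of the listing above:
relocation_flags <<'EOF'
DEVICE       TYPE    DISK      GROUP   STATUS
c1t13d0s2    sliced  newdg01   newdg   online
c1t15d0s2    sliced  newdg03   newdg   online spare
c3t3d0s2     sliced  newdg05   newdg   online nohotuse
EOF
# prints one line per disk: eligible, spare, excluded
```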
Recovery
Recovery is the last step in the hot-relocation process. After new objects
are moved, the vxrelocd daemon calls the vxrecover command to
recover the data. Two fields are added to the DM record to identify the
original location of the object: the orig_dmname= and orig_dmoffset=
fields. These fields assist in manually restructuring the original object.
Hot-Relocation Configuration
The vxdiskadm utility has new options that support hot-relocation,
available as of the VxVM software version 3.1.
Alternatively, use the vxedit command to set the spare flag from the
command line. Use the following command:
# vxedit set spare=on disk_name
Unrelocating
If you try to manually move a relocated subdisk using the vxsd
command, the following message is displayed:
vxvm:vxsd: ERROR: Relocate trace information in subdisk disk04-01 not empty. Use -r to
retain or -d to discard it
Hot-Spares
The hot-spare process is similar to hot-relocation, but there are significant
differences between the two. The primary functional difference is the unit
of recovery after a failure: hot-relocation relocates individual subdisks,
whereas the hot-spare process relocates entire disks. With hot-relocation, a
subdisk failure no longer affects all volumes on the physical disk unless the
entire disk fails. Hot-relocation is the recovery process available in the
VxVM software, starting with version 2.3.