
Live Upgrade with Solaris Volume Manager (SVM) and Zones

By mhuff on Apr 23, 2008


As I mentioned in my previous post from October 25th, 2007, titled The Live Upgrade Experience, the Live Upgrade feature of the Solaris operating environment enables you to maintain multiple operating images on a single system. An image, called a boot environment or BE, represents a set of operating system and application software packages. The BEs might contain different operating system and/or application versions.
As part of this exercise I want to test Live Upgrade when using Solaris Volume Manager (SVM) to mirror the rootdisk on a system where Solaris Containers/Zones are deployed.
System Type Used in Exercise
SunFire 220R
2 x 450 MHz UltraSPARC-II processors with 4 MB of cache
2048 MB of memory
2 x internal 18 GB SCSI drives
Sun StorEdge D1000
Preparing for Live Upgrade
I'm starting with a freshly installed Solaris 10 11/06 release system; we will call its root file system the primary boot environment. I'll begin by logging into the root account and patching the system with the latest Solaris 10 Recommended Patch Cluster, downloaded via SunSolve.
I will also install the required patches from the former Sun InfoDoc 72099, now Sun InfoDoc 206844. This document provides information about the minimum patch requirements for a system on which Solaris Live Upgrade software will be used. As mentioned in my previous post, it is imperative that you ensure the target system meets these patch requirements before attempting to use Solaris Live Upgrade software on your system.
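For example, you can quickly check whether a particular patch from that InfoDoc is already installed with showrev(1M); the patch ID below is only a placeholder, so substitute the IDs listed in the InfoDoc. The command prints nothing if the patch is absent.
root@sunrise1 # showrev -p | grep 121430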
Verifying booted OS release
root@sunrise1 # cat /etc/release
                       Solaris 10 11/06 s10s_u3wos_10 SPARC
           Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                            Assembled 14 November 2006
Display and list the containers/zones
root@sunrise1 # zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 zoneA            running    /zones/zoneA                   native   shared
Abbreviations Used:
PBE: Primary boot environment
ABE: Alternate boot environment
To install the latest Solaris Live Upgrade packages, you use a script called liveupgrade20. The script runs silently and installs the latest Solaris Live Upgrade packages. If you run the following command without the -noconsole and -nodisplay options, you will see the GUI install tool. As you will see, I ran it with these options.
root@sunrise1 # pkgrm SUNWlur SUNWluu
root@sunrise1 # mount -o ro -F hsfs `lofiadm -a /solaris-stuff/solaris-images/s10u4/SPARC/solarisdvd.iso` /mnt
root@sunrise1 # cd /mnt/Solaris_10/Tools/Installers
root@sunrise1 # ./liveupgrade20 -noconsole -nodisplay
Note: This will install the following packages: SUNWluu, SUNWlur, and SUNWlucfg.
root@sunrise1 # pkginfo SUNWlucfg SUNWlur SUNWluu
application SUNWlucfg  Live Upgrade Configuration
application SUNWlur    Live Upgrade (root)
application SUNWluu    Live Upgrade (usr)
Introduction to Solaris Volume Manager (SVM)
Solaris Volume Manager (SVM) is included in Solaris; it allows you to manage large numbers of disks and the data on those disks. Although there are many ways to use Solaris Volume Manager, most tasks include the following:
Increasing storage capacity
Increasing data availability
Easing administration of large storage devices
How does Solaris Volume Manager (SVM) manage storage?
Solaris Volume Manager uses virtual disks to manage physical disks and their associated data. In Solaris Volume Manager, a virtual disk is called a volume. For historical reasons, some command-line utilities also refer to a volume as a metadevice.
From the perspective of an application or a file system, a volume is functionally identical to a physical disk. Solaris Volume Manager converts I/O requests directed at a volume into I/O requests to the underlying member disk.
Solaris Volume Manager volumes are built from disk slices or from other Solaris Volume Manager volumes. An easy way to build volumes is to use the graphical user interface (GUI) that is built into the Solaris Management Console. The Enhanced Storage tool within the Solaris Management Console presents you with a view of all the existing volumes. By following the steps in wizards, you can easily build any kind of Solaris Volume Manager volume or component. You can also build and modify volumes by using Solaris Volume Manager command-line utilities.
For example, if you need more storage capacity as a single volume, you could use Solaris Volume Manager to make the system treat a collection of slices as one larger volume. After you create a volume from these slices, you can immediately begin using the volume just as you would use any real slice or device.
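As a quick sketch of that idea (the d30 volume, the c1t3d0s0/c1t5d0s0 slices, and the mount point are made-up names, not disks used elsewhere in this exercise), you could concatenate two slices into one larger volume and then use it like any other device:
# Concatenate two slices into a single larger volume, build a UFS file system on it, and mount it
metainit d30 2 1 c1t3d0s0 1 c1t5d0s0
newfs /dev/md/rdsk/d30
mkdir -p /export/bigfs
mount /dev/md/dsk/d30 /export/bigfs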
On to Configuring Solaris Volume Manager (SVM)
In this exercise we will first create and set up RAID-0 metadevices for both the root (/) file system and the swap partition. Then, if all goes well, we will create the Alternate Boot Environment (ABE) with Live Upgrade. For the purpose of this exercise I had enough disk capacity to create the ABE on another set of disks. However, if you do not have enough disk capacity, you will have to break the mirrors off from the PBE and use those disks for your ABE.
Note: Keep in mind that if you are going to break the mirrors off from the PBE and you are going to mirror swap with the lucreate command, there is a bug with the attach and detach flags. See SunSolve for BugID 5042861, Synopsis: lucreate cannot perform SVM attach/detach on swap devices.
SVM Commands:
metadb(1M) create and delete replicas of the metadevice state database
metainit(1M) configure metadevices
metaroot(1M) setup system files for root (/) metadevice
metastat(1M) display status for metadevice or hot spare pool
metattach(1M) attach a metadevice
metadetach(1M) detach a metadevice
metaclear(1M) delete active metadevices and hot spare pools
c0t0d0s2 represents the first system disk (boot) also the PBE
c1t0d0s2 represents the second disk (mirror) also will be used for the PBE
c1t9d0s4 represents the disk where the zones are created for the PBE
c0t1d0s2 represents the first system disk (boot) also the ABE
c1t1d0s2 represents the second disk (mirror) also will be used for the ABE
c1t4d0s0 represents the disk where the zones are created for the ABE
Set up the RAID-0 metadevices (stripe or concatenation volumes) corresponding to the / file system and the swap space, and automatically configure the system files (/etc/vfstab and /etc/system) for the root metadevice.
Duplicate the label's content from the boot disk to the mirror disk for both the PBE and the ABE:
root@sunrise1# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
root@sunrise1# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
Create replicas of the metadevice state database:
Note: Option -f is needed because this is the first invocation/creation of the metadb(1M) state database replicas.
root@sunrise1# metadb -a -f -c 3 c0t0d0s7 c1t0d0s7 c0t1d0s7 c1t1d0s7
Verify meta databases:
root@sunrise1# metadb
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c0t0d0s7
     a    p  luo        16400           8192            /dev/dsk/c0t0d0s7
     a    p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c1t0d0s7
     a    p  luo        16400           8192            /dev/dsk/c1t0d0s7
     a        u         16              8192            /dev/dsk/c0t1d0s7
     a        u         8208            8192            /dev/dsk/c0t1d0s7
     a        u         16400           8192            /dev/dsk/c0t1d0s7
     a        u         16              8192            /dev/dsk/c1t1d0s7
     a        u         8208            8192            /dev/dsk/c1t1d0s7
     a        u         16400           8192            /dev/dsk/c1t1d0s7
Creation of metadevices:
Note: Option -f is needed because the file systems on the slices we want to initialize as metadevices are already mounted.
root@sunrise1# metainit -f d10 1 1 c0t0d0s0
d10: Concat/Stripe is setup
root@sunrise1# metainit -f d11 1 1 c0t0d0s1
d11: Concat/Stripe is setup
root@sunrise1# metainit -f d20 1 1 c1t0d0s0
d20: Concat/Stripe is setup
root@sunrise1# metainit -f d21 1 1 c1t0d0s1
d21: Concat/Stripe is setup
Create the first part of the mirror:
root@sunrise1# metainit d0 -m d10
d0: Mirror is setup
root@sunrise1# metainit d1 -m d11
d1: Mirror is setup
Make a copy of the /etc/system, and the /etc/vfstab before proceeding:
root@sunrise1# cp /etc/vfstab /etc/vfstab-beforeSVM
root@sunrise1# cp /etc/system /etc/system-beforeSVM
Change /etc/vfstab and /etc/system to reflect mirror device:
Note: The metaroot(1M) command is only necessary when mirroring the root file sy
stem.
root@sunrise1# metaroot d0
root@sunrise1# diff /etc/vfstab /etc/vfstab-beforeSVM
6,7c6,7
< /dev/md/dsk/d1      -                   -       swap    -       no      -
< /dev/md/dsk/d0      /dev/md/rdsk/d0     /       ufs     1       no      logging
---
> /dev/dsk/c0t0d0s1   -                   -       swap    -       no      -
> /dev/dsk/c0t0d0s0   /dev/rdsk/c0t0d0s0  /       ufs     1       no      logging
Note: Don't forget to edit /etc/vfstab in order to reflect the other metadevices. For example, in this exercise we mirrored the swap partition, so we have to add the following line to /etc/vfstab manually.
/dev/md/dsk/d1 - - swap - no -
Install the boot block code on the alternate boot disk:
root@sunrise1# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0
Reboot on the new metadevices (the operating system will now boot from the metadevices):
root@sunrise1# shutdown -y -g 0 -i 6
Attach the second part of the mirror:
root@sunrise1# metattach d0 d20
d0: submirror d20 is attached
root@sunrise1# metattach d1 d21
d1: submirror d21 is attached
Verify all:
root@sunrise1# metastat -p
d1 -m d11 d21 1
d11 1 1 c0t0d0s1
d21 1 1 c1t0d0s1
d0 -m d10 d20 1
d10 1 1 c0t0d0s0
d20 1 1 c1t0d0s0
root@sunrise1# metastat |grep %
Resync in progress: 41 % done
Resync in progress: 46 % done
Note: It would be a best practice to wait for the above resync of the mirrors to finish before proceeding!
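If you would rather script that wait than watch metastat by hand, a small sketch along these lines works (the polling interval is arbitrary):
# Poll metastat(1M) until no resync is reported, then continue
while metastat | grep "Resync in progress" > /dev/null
do
        sleep 60
done
echo Mirror resync complete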
Modify the system dump configuration:
root@sunrise1# mkdir /var/crash/`hostname`
root@sunrise1# chmod 700 /var/crash/`hostname`
root@sunrise1# dumpadm -s /var/crash/`hostname`
root@sunrise1# dumpadm -d /dev/md/dsk/d1
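To double-check the dump configuration afterwards, dumpadm(1M) with no arguments prints the current settings; on this setup the output should look roughly like the following (shown for illustration, exact wording may vary by release):
root@sunrise1# dumpadm
      Dump content: kernel pages
       Dump device: /dev/md/dsk/d1 (swap)
Savecore directory: /var/crash/sunrise1
  Savecore enabled: yes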
Copy of the /etc/vfstab showing the newly created metadevices for (/) and (swap):
root@sunrise1# cat /etc/vfstab
#device             device              mount            FS     fsck    mount   mount
#to mount           to fsck             point            type   pass    at boot options
#
fd                  -                   /dev/fd          fd     -       no      -
/proc               -                   /proc            proc   -       no      -
/dev/md/dsk/d0      /dev/md/rdsk/d0     /                ufs    1       no      logging
/dev/md/dsk/d1      -                   -                swap   -       no      -
/dev/dsk/c1t9d0s4   /dev/rdsk/c1t9d0s4  /zones           ufs    2       yes     logging
/dev/dsk/c1t2d0s0   /dev/rdsk/c1t2d0s0  /solaris-stuff   ufs    2       yes     logging
/devices            -                   /devices         devfs  -       no      -
ctfs                -                   /system/contract ctfs   -       no      -
objfs               -                   /system/object   objfs  -       no      -
swap                -                   /tmp             tmpfs  -       yes     -
Record the Path to the Alternate Boot Device
You'll need to determine the path to the alternate root device by using the ls(1) -l command on the slice that is being attached as the second submirror to the root (/) mirror.
root@sunrise1# ls -l /dev/dsk/c1t0d0s0
lrwxrwxrwx 1 root root 41 Nov 2 20:32 /dev/dsk/c1t0d0s0 -> ../../devices/pci@1f,4000/scsi@5/sd@0,0:a
Here you would record the string that follows the /devices directory: /pci@1f,4000/scsi@5/sd@0,0:a
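If you prefer not to copy that string by eye, a small sketch like the following strips everything up to /devices from the symlink target (the sed expression is mine, not from the original procedure):
root@sunrise1# ls -l /dev/dsk/c1t0d0s0 | sed 's!.*/devices!!'
/pci@1f,4000/scsi@5/sd@0,0:a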
Solaris Volume Manager users who are using a system with OpenBoot PROM (OBP) can use the OBP nvalias command to define a backup root device alias for the secondary root (/) mirror. For example:
ok nvalias rootmirror /pci@1f,4000/scsi@5/sd@0,0:a
Note: I needed to change the sd in the device path to disk.
Then, redefine the boot-device variable to reference both the primary and secondary submirrors, in the order in which you want them to be used, and store the configuration.
ok printenv boot-device
boot-device = rootdisk net
ok setenv boot-device rootdisk rootmirror net
boot-device = rootdisk rootmirror net
ok nvstore
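From the running Solaris system you can confirm the same setting with eeprom(1M); the output below is what I'd expect to see after the change, shown here just for illustration:
root@sunrise1# eeprom boot-device
boot-device=rootdisk rootmirror net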
In the event of a primary root disk failure, the system would automatically boot to the second submirror. Or, if you boot manually rather than using auto-boot, you would only enter:
ok boot rootmirror
Note: You'll want to do this to make sure you can boot from the submirror!
Now on to creating your ABE. Please note the following:
Now that we have successfully set up, configured, and booted our system with SVM, there are several ways to create the Alternate Boot Environment (ABE). You can use SVM commands such as metadetach(1M) and metaclear(1M) to break the mirrors, but for this exercise I had enough disks to create my ABE; a sketch of the break-the-mirrors alternative follows below.
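For completeness, here is a rough sketch of that alternative, which is not what I did in this exercise (the BE name altBE is just a placeholder). It detaches and clears the second root submirror so its slice can be handed to lucreate; swap is left out of the lucreate line because of the attach/detach bug noted earlier.
# Detach and delete the second root submirror, then reuse its slice for the ABE
metadetach d0 d20
metaclear d20
lucreate -C /dev/dsk/c0t0d0s2 -m /:/dev/dsk/c1t0d0s0:ufs -n altBE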
Please note, as mentioned above, that if you are going to have swap mirrored, there is a bug with lucreate(1M): lucreate(1M) cannot perform SVM attach/detach on swap devices. The BugID is 5042861.
If you do not have enough disk space, and you are not going to mirror swap, the Live Upgrade command lucreate will detach the mirrors, preserve the data, and create the ABE for you.
Note: Before proceeding you'll want to boot back to the rootdisk after testing booting from the mirror!
Live Upgrade commands to be used:
lucreate(1M) create a new boot environment
lustatus(1M) display status of boot environments
luupgrade(1M) installs, upgrades, and performs other functions on software on a
boot environment
luactivate(1M) activate a boot environment
lufslist(1M) list configuration of a boot environment
Since I will not be breaking the mirrors to create my ABE in this exercise, I've created a script called lu_create.sh to prepare the other disks for my ABE and create the ABE using the lucreate(1M) command.
The lucreate(1M) command has several flags; I will use the -C flag. The -C boot_device flag was provided for occasions when lucreate(1M) cannot figure out which physical storage device is your boot device. This might occur, for example, when you have a mirrored root device on the source BE on an x86 machine.
The -C option specifies the physical boot device from which the source BE is booted. Without this option, lucreate(1M) attempts to determine the physical device from which a BE boots. If the device on which the root file system is located is not a physical disk (for example, if root is on a Solaris Volume Manager volume) and lucreate(1M) is able to make a reasonable guess as to the physical device, you receive the following query:
Is the physical device devname the boot device for the logical device devname?
If you respond y, the command proceeds.
If you specify -C boot_device, lucreate(1M) skips the search for a physical device and uses the device you specify. A hyphen (-) with the -C option tells lucreate(1M) to proceed with whatever it determines is the boot device. If the command cannot find the device, you are prompted to enter it.
If you omit -C, or specify -C boot_device and lucreate(1M) cannot find a boot device, you receive an error message.
Use of the -C form is a safe choice, because lucreate(1M) either finds the correct boot device or gives you the opportunity to specify that device in response to a subsequent query.
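For illustration only, the hyphen form of the lucreate call used in my script below would look something like this (a sketch; in the script I used -C with an explicit device instead):
lucreate -C - -m /:/dev/md/dsk/d100:ufs \
 -m -:/dev/dsk/c0t1d0s1:swap \
 -m /zones:/dev/dsk/c1t4d0s0:ufs -n s10u4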
Copy of the lu_create.sh script
root@sunrise1# cat lu_create.sh
#!/bin/sh
# Created by Mark Huff Sun Microsystems Inc. on 4/08/08
#ScriptName: lu-create.sh
# This script will use Solaris 10 Live Upgrade commands to create an Alternate Boot Environment (ABE),
# and also create Solaris Volume Manager (SVM) metadevices.
lustatus
metainit -f d110 1 1 c0t1d0s0
metainit -f d120 1 1 c1t1d0s0
metainit d100 -m d110
# The following line will create the ABE with the zones from the PBE
lucreate -C /dev/dsk/c0t1d0s2 -m /:/dev/md/dsk/d100:ufs \
 -m -:/dev/dsk/c0t1d0s1:swap \
 -m /zones:/dev/dsk/c1t4d0s0:ufs -n s10u4
sleep 10
# The following lines will set up the metadevices for swap and attach the mirrors for / and swap
metainit -f d111 1 1 c0t1d0s1
metainit -f d121 1 1 c1t1d0s1
metainit d101 -m d111
metattach d100 d120
metattach d101 d121
lustatus
sleep 2
echo "The lufslist command will be run to list the configuration of a boot environment (BE). The output contains the disk slice, file system type, and file system size for each BE mount point. The output also notes any separate file systems that belong to a non-global zone inside the BE being displayed."
lufslist s10u4
luactivate s10u4
lustatus
sleep 10
echo Please connect to console to see reboot!!
sleep 2
#init 6
After the lu_create.sh script has completed, reboot. The system will then come up on the new ABE.
root@sunrise1# init 6
Once the system has booted on the new ABE, list the configuration of the boot environments:
root@sunrise1# lufslist s10u4
               boot environment name: s10u4
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem              fstype    device size   Mounted on      Mount Options
----------------------- -------- -------------  --------------  -------------
/dev/md/dsk/d100        ufs       13811814400   /               logging
/dev/dsk/c0t1d0s1       swap       4255727616   -               -
/dev/dsk/c1t4d0s0       ufs        4289863680   /zones          logging

root@sunrise1# lufslist d0
               boot environment name: d0
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem              fstype    device size   Mounted on      Mount Options
----------------------- -------- -------------  --------------  -------------
/dev/md/dsk/d0          ufs       13811814400   /               logging
/dev/md/dsk/d1          swap       4255727616   -               -
/dev/dsk/c1t9d0s4       ufs       36415636992   /zones          logging
At this point you're ready to run the luupgrade command; however, if you encountered problems with lucreate, you may find the lustatus(1M) utility very useful for seeing the state of the boot environments.
In my case everything went as planned. Here is the output of the lustatus(1M) utility from my server.
To display the status of the current boot environment:
root@sunrise1# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
d0                         yes      no     no        yes    -
s10u4                      yes      yes    yes       no     -
Rename the PBE to s10u3:
root@sunrise1# lurename -e d0 -n s10u3
To display the status of the current boot environment:
root@sunrise1# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10u3                      yes      no     no        yes    -
s10u4                      yes      yes    yes       no     -
Display and list the containers/zones:
root@sunrise1 # zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 zoneA            running    /zones/zoneA                   native   shared
To upgrade to the new Solaris release, you will use the luupgrade(1M) command with the -u option. The -s option identifies the path to the media.
In my case I already had the media mounted on /mnt, as shown above in the section Preparing for Live Upgrade.
I ran the date(1) command to record when I started the luupgrade(1M):
root@sunrise1# date
Wed Apr 23 12:00:00 EDT 2008
The command line would be as follows:
root@sunrise1# luupgrade -u -n s10u4 -s /mnt
The command generated the following output:
183584 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10u4>.
Determining packages to install or upgrade for BE <s10u4>.
Performing the operating system upgrade of the BE <s10u4>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <s10u4>.
Package information successfully updated on boot environment <s10u4>.
Adding operating system patches to the BE <s10u4>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <s10u4> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <s10u4> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <s10u4>. Before you activate boot
environment <s10u4>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <s10u4> is complete.
I ran the date(1) command to record when the luupgrade(1M) completed.
root@sunrise1# date
Wed Apr 23 15:43:00 EDT 2008
Note: The luupgrade took approximately 3 hours and 43 minutes.
We now must activate the newly create ABE by running the luactivate(1M) command.
root@sunrise1 # luactivate s10u4
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:
At the PROM monitor (ok prompt):
For boot to Solaris CD: boot cdrom -s
For boot to network: boot net -s
3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c0t1d0s2 /mnt
4. Run <luactivate> utility with out any arguments from the current boot

Boot new ABE
After booting the new ABE called s10u4, we will see that the new ABE has its new SVM metadevices along with the zones that were created when we ran the lu_create.sh script.
I suppose you could nvalias s10u4 at the OBP to its correct disk path, something like this:
ok nvalias s10u4 /pci@1f,4000/scsi@3/disk@1,0:a
or just boot the device path directly, in this case the following:
ok boot /pci@1f,4000/scsi@3/disk@1,0:a
Resetting ...
Sun Ultra 60 UPA/PCI (2 X UltraSPARC-II 450MHz), No Keyboard
OpenBoot 3.23, 2048 MB memory installed, Serial #14809682.
Ethernet address 8:0:20:e1:fa:52, Host ID: 80e1fa52.
Rebooting with command: boot /pci@1f,4000/scsi@3/disk@1,0:a
Boot device: /pci@1f,4000/scsi@3/disk@1,0:a File and args:
SunOS Release 5.10 Version Generic_120011-14 64-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
TSI: gfxp0 is GFX8P @ 1152x900
Hostname: sunrise1
Configuring devices.
Loading smf(5) service descriptions: 27/27
/dev/rdsk/c1t4d0s0 is clean
sunrise1 console login: root
Password:
Apr 23 18:13:48 sunrise1 login: ROOT LOGIN /dev/console
Last login: Tue Apr 22 14:28:09 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
You have mail.
Sourcing //.profile-EIS.....
root@sunrise1 # cat /etc/release
Solaris 10 8/07 s10s_u4wos_12b SPARC
Copyright 2007 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 16 August 2007

Now that the new ABE, s10u4, is booted, you'll see below that the new environment has the new metadevices (d100 for / and d101 for swap).
root@sunrise1 # more /etc/vfstab
#live-upgrade:<Tue Apr 22 12:14:27 EDT 2008> updated boot environment <s10u4>
#device             device              mount            FS     fsck    mount   mount
#to mount           to fsck             point            type   pass    at boot options
#
fd                  -                   /dev/fd          fd     -       no      -
/proc               -                   /proc            proc   -       no      -
/dev/md/dsk/d100    /dev/md/rdsk/d100   /                ufs    1       no      logging
#live-upgrade:<Tue Apr 22 12:14:27 EDT 2008>:<s10u4># /dev/md/dsk/d1 - - swap - no -
/dev/md/dsk/d101    -                   -                swap   -       no      -
/dev/dsk/c1t4d0s0   /dev/rdsk/c1t4d0s0  /zones           ufs    2       yes     logging
/dev/dsk/c1t2d0s0   /dev/rdsk/c1t2d0s0  /solaris-stuff   ufs    2       yes     logging
/devices            -                   /devices         devfs  -       no      -
ctfs                -                   /system/contract ctfs   -       no      -
objfs               -                   /system/object   objfs  -       no      -
swap                -                   /tmp             tmpfs  -       yes     -

root@sunrise1 # df -h
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d100 13G 4.7G 7.8G 38% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 4.6G 1.4M 4.6G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
fd 0K 0K 0K 0% /dev/fd
swap 4.6G 48K 4.6G 1% /tmp
swap 4.6G 48K 4.6G 1% /var/run
/dev/dsk/c1t4d0s0 3.9G 614M 3.3G 16% /zones
As you can see, the new BE also has the container/zone:
root@sunrise1 # zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 zoneA            running    /zones/zoneA                   native   shared
Log into the zone in the new BE
root@sunrise1 # zlogin zoneA
[Connected to zone 'zoneA' pts/1]
Last login: Mon Apr 21 16:15:41 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# cd /
# ls -al
total 1004
drwxr-xr-x 18 root root 512 Apr 23 15:41 .
drwxr-xr-x 18 root root 512 Apr 23 15:41 ..
drwx------ 3 root root 512 Apr 21 16:15 .sunw
lrwxrwxrwx 1 root root 9 Apr 22 12:00 bin -> ./usr/bin
drwxr-xr-x 12 root root 1024 Apr 23 17:44 dev
drwxr-xr-x 73 root sys 4096 Apr 23 17:47 etc
drwxr-xr-x 2 root sys 512 Nov 23 09:58 export
dr-xr-xr-x 1 root root 1 Apr 23 17:46 home
drwxr-xr-x 7 root bin 5632 Apr 23 15:03 lib
drwxr-xr-x 2 root sys 512 Nov 22 22:32 mnt
dr-xr-xr-x 1 root root 1 Apr 23 17:46 net
drwxr-xr-x 6 root sys 512 Dec 18 12:15 opt
drwxr-xr-x 54 root sys 2048 Apr 23 13:58 platform
dr-xr-xr-x 81 root root 480032 Apr 23 18:26 proc
drwxr-xr-x 2 root sys 1024 Apr 23 14:16 sbin
drwxr-xr-x 4 root root 512 Apr 23 15:41 system
drwxr-xr-x 5 root root 396 Apr 23 17:48 tmp
drwxr-xr-x 41 root sys 1024 Apr 23 15:34 usr
drwxr-xr-x 43 root sys 1024 Apr 23 15:41 var
# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
qfe0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 192.168.1.27 netmask ffffff00 broadcast 192.168.1.255
Success!! Give it a try!!
