
Document 1396382.1

https://support.oracle.com/epmos/faces/DocumentDisplay?_adf.ctrl-state=...

List of currently unsupported Live Upgrade (LU) configurations (Doc ID 1396382.1)


Modified: Feb 13, 2014 Type: HOWTO

In this Document
Goal
Solution
Configurations unsupported by Solaris Live Upgrade Software
1. Executing lucreate in single-user mode. Reference Sun CR: 7076785/Bug # 15734117 (Fixed in 121430-84 - SPARC and 121431-85 - x86)
2. Mount point of a filesystem configured inside a non-global zone is a descendant of the zonepath mount point. Reference Sun CR: 7073468/Bug # 15732329
3. zfs dataset of a filesystem configured inside a non-global zone is a descendant of the zonepath dataset. Reference Sun CR: 7116952/Bug # 15758334
4. All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the same dataset as the root file system.
5. No descendent file systems allowed in /opt. Reference: Sun CR 7153257/Bug # 15778555 (Fixed in 121430-84 - SPARC and 121431-85 - x86)
6. Changing the mountpoints of rpool and rpool/ROOT from default values where rpool is a zpool containing BEs. Reference Sun CR: 7119104/Bug # 15759701
7. Zones residing on the top level of a dataset. Reference Sun CR: 6867013/Bug # 15579467
8. Adding a filesystem to a non-global zone through /etc/vfstab.
9. Using a separate /var ZFS file system for non-global zones. Reference Sun CR: 6813647/Bug # 15546791
10. Creating an alternate boot environment on an SVM soft partition.
11. Using SVM disksets with non-global zones. Reference Sun CR: 7167449/Bug # 15790545
12. Excluding ufs/vxfs based zones with a zfs root pool. Reference Sun CR: 7141482/Bug # 15769912
13. Excluding the zone path or filesystems embedded within the NGZ.
14. Zones on a system with a Solaris clustered environment and Ops Center.
15. lucreate fails if the canmount property of a zfs dataset in the root hierarchy is not set to "noauto".
16. LU operations within the non-global zones fail.
17. All subdirectories of an NGZ zonepath that are part of the OS must be in the same dataset as the zonepath.
18. Issue regarding 'delegated datasets in Zones'. Reference Sun CR: 7382554/Bug # 17382554
19. lucreate fails with mount issues against zones with delegated datasets
20. Live Upgrade will consider only file systems mounted in zones using 'zonecfg> add fs'
Glossary
References

APPLIES TO:
Solaris SPARC Operating System - Version 10 3/05 to 10 1/13 U11 [Release 10.0]
Solaris x64/x86 Operating System - Version 10 3/05 to 10 1/13 U11 [Release 10.0]
Information in this document applies to any platform.

GOAL


This document describes which Live Upgrade (LU) configurations are currently not supported. We expect this list to change quite frequently, so please check it again before an SR is closed as "configuration is unsupported".

SOLUTION
Configurations unsupported by Solaris Live Upgrade Software

1. Executing lucreate in single-user mode.

Not all services and utilities are available in single-user mode. An alternate boot environment created in single-user mode will therefore also lack those services and utilities and will not be a complete BE. In addition, the results of running lucreate in single-user mode are unpredictable.

Reference Sun CR: 7076785/Bug # 15734117 (Fixed in 121430-84 - SPARC and 121431-85 - x86)

2. Mount point of a filesystem configured inside a non-global zone is a descendant of the zonepath mount point.

PBE with the below sample zone configuration:
zfs create rootpool/ds1
zfs set mountpoint=/test1 rootpool/ds1
zfs create rootpool/ds2
zfs set mountpoint=/test1/test2 rootpool/ds2
zonecfg -z zone1
> create
> set zonepath=/test1
> add fs
> set dir=/soft
> set special=/test1/test2
> set type=lofs
> end
> exit
zoneadm -z zone1 install
zoneadm -z zone1 boot

lucreate will fail because the special value of zone1 is /test1/test2, which is a descendant of the zonepath /test1.

Reference Sun CR: 7073468/Bug # 15732329

3. zfs dataset of a filesystem configured inside a non-global zone is a descendant of the zonepath dataset.

PBE with the below sample zone configuration:
zfs create zonepool/zone2
zfs create -o mountpoint=legacy zonepool/zone2/data
zonecfg -z zone2
> create
> set zonepath=/zonepool/zone2
> add fs
> set dir=/data
> set special=zonepool/zone2/data
> set type=zfs
> end
> exit
zoneadm -z zone2 install
zoneadm -z zone2 boot

lucreate will fail because the special value of zone2 is zonepool/zone2/data, which is a descendant of the zonepath dataset zonepool/zone2.
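A configuration that avoids this limitation keeps the zone's data dataset outside the zonepath dataset hierarchy. The following is only a sketch; the dataset name zonepool/zone2data is illustrative and does not appear in this document:

```shell
# Sketch: create the data dataset as a sibling of the zonepath dataset
# (zonepool/zone2) rather than as its descendant; names are illustrative
zfs create -o mountpoint=legacy zonepool/zone2data
zonecfg -z zone2
> add fs
> set dir=/data
> set special=zonepool/zone2data
> set type=zfs
> end
> exit
```

Because zonepool/zone2data is not a descendant of the zonepath dataset, lucreate should not hit CR 7116952.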


Reference Sun CR: 7116952/Bug # 15758334

4. All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the same dataset as the root file system.

Example:
rpool/ROOT/BE     /
rpool/ROOT/BE/opt /opt
rpool/ROOT/BE/usr /usr

The above configuration is NOT supported, since /opt and /usr come as part of the OS image. Creating an ABE from such a PBE configuration will fail. Whereas

rpool/ROOT/BE     /
rpool/ROOT/BE/var /var

is supported, /var being the exception.

Please see http://docs.oracle.com/cd/E23823_01/html/819-5461/zfsboot-2.html, which states the following:

Oracle Solaris OS Components - All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the same dataset as the root file system. In addition, all OS components must reside in the root pool, with the exception of the swap and dump devices.

Therefore, the -D option can't be used with /opt or its descendants and should be used for non-OS-critical file systems only. From the lucreate man page: While the -D option is mainly intended for specifying a separate dataset for /var, it can also be used for other non-OS-critical file systems. For example, you can create a separate dataset for /data under the root dataset in a ZFS root BE.
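As a sketch of the -D usage described above (BE and dataset names are illustrative, and behavior may vary by LU patch level; check the lucreate man page on your system):

```shell
# Supported: a separate dataset for /var in the new ZFS root BE
lucreate -n zfsBE -D /var

# Supported: a separate dataset for a non-OS-critical file system
lucreate -n zfsBE -D /data

# NOT supported: /opt is OS-critical and must stay in the root dataset
# lucreate -n zfsBE -D /opt     (fails; see item 5 below)
```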

5. No descendent file systems allowed in /opt.

Live Upgrade does not allow non-OS components to be descendants of /opt or /usr. For example, if you try to create a new dataset with -D /opt/ems, it fails with:
/opt/ems is not allowed to be a separate file system in ZFS
luconfig: ERROR: altrpool/ROOT/envB/opt/ems cannot be used for file system /opt/ems in boot environment envB.

/opt is considered an OS-critical file system from the LU perspective and must reside in the root pool. There can't be a separate dataset for /opt or its descendants. See the Oracle Solaris OS Components note and the lucreate man page excerpt quoted under item 4 above.

Reference: Sun CR 7153257/Bug # 15778555 (Fixed in 121430-84 - SPARC and 121431-85 - x86)


6. Changing the mountpoints of rpool and rpool/ROOT from their default values, where rpool is a zpool containing BEs.

The default values are: the rpool mountpoint is set to /rpool, and the rpool/ROOT mountpoint is set to legacy. If the root pool mountpoints are changed from these defaults and an ABE is created, the behavior of the ABE is unpredictable.

Reference Sun CR: 7119104/Bug # 15759701

7. Zones residing on the top level of a dataset.

If the ZFS root pool resides on one pool (say rpool) and a zone resides on the top-level dataset of a different pool (say newpool) mounted on /newpool, i.e. zonepath=/newpool, lucreate will fail. This limitation is documented in the Oracle Solaris ZFS Administration Guide (Solaris 10 8/11, S10U10), Chapter 5 "Installing and Booting an Oracle Solaris ZFS Root File System", "Supported ZFS with Zone Root Configuration Information (at Least Solaris 10 5/09)":

The following ZFS and zone path configuration is not supported: Live upgrade cannot be used to create an alternate BE when the source BE has a non-global zone with a zone path set to the mount point of a top-level pool file system. For example, if zonepool pool has a file system mounted as /zonepool, you cannot have a non-global zone with a zone path set to /zonepool.

Reference Sun CR: 6867013/Bug # 15579467

8. Adding a filesystem to a non-global zone through /etc/vfstab.

Adding a filesystem to an NGZ in the following way is not supported: entering the following lines in the global zone's

/etc/vfstab:

/dev/dsk/c1t0d0s4 /dev/rdsk/c1t0d0s4 /export/zones                 ufs 1 yes
/dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 /export/zones/zone1/root/data ufs 1 no  -

where /export/zones/zone1 is the zone root for NGZ zone1, and /export/zones/zone1/root/data would be a data filesystem in NGZ zone1. lucreate will fail. Instead, use the "zonecfg add fs" feature, which is supported by LU:
zonecfg -z zone1
create
set zonepath=/export/zones/zone1
add fs
set dir=/data
set special=/dev/dsk/c1t1d0s0
set raw=/dev/rdsk/c1t1d0s0
set type=ufs
end
exit

9. Using a separate /var ZFS file system for non-global zones.

Using a separate /var ZFS file system for non-global zones is not supported in Solaris 10. (Solaris 11 uses separate /var ZFS file systems for non-global zones by default.)

Reference Sun CR: 6813647/Bug # 15546791
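A minimal sketch of the configuration this item rules out on Solaris 10 (pool, zone, and dataset names are illustrative):

```shell
# NOT supported on Solaris 10: a dedicated ZFS dataset serving as the
# non-global zone's /var (names below are illustrative)
zfs create rpool/zones/zone1
zfs create -o mountpoint=/zones/zone1/root/var rpool/zones/zone1/var
```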


10. Creating an alternate boot environment on an SVM soft partition.

With an SVM soft partition as below:
# metastat d100
d100: Soft Partition
    Device: c0t3d0s0
    State: Okay
    Size: 8388608 blocks (4.0 GB)
        Device     Start Block  Dbase  Reloc
        c0t3d0s0             0  No     Yes

        Extent     Start Block  Block count
             0               1      8388608

Device Relocation Information:
Device   Reloc  Device ID
c0t3d0   Yes    id1,sd@THITACHI_DK32EJ-36NC_____432G8732

creating an ABE on this soft partition d100 would fail, i.e.:
lucreate -n svmBE -m /:/dev/md/dsk/d100:ufs

11. Using SVM disksets with non-global zones.

Using Solaris Volume Manager disksets with non-global zones is not supported in Solaris 10, i.e.:
# metastat -s datads -p
datads/d60 -m datads/d160 datads/d260 1
datads/d160 1 2 c0t1d0s1 c0t3d0s1 -i 256b
datads/d260 1 2 c0t1d0s0 c0t3d0s0 -i 256b

Diskset metadevice "datads/d60" is used for non-global zones in /etc/vfstab:


# cat /etc/vfstab
...
/dev/md/datads/dsk/d60 /dev/md/datads/rdsk/d60 /zones ufs 1

Trying to create a new boot environment with submirror "datads/d260" fails:


# lucreate -n svmBE -m /:/dev/md/dsk/d10:ufs,mirror -m /:/dev/dsk/c0t1d0s0:detach,attach,preserve \
  -m /zones:/dev/md/datads/dsk/d66:ufs,mirror -m /zones:/dev/md/datads/dsk/d260:detach,attach,preserve

with error messages


ERROR: cannot check device name </dev/md/datads/dsk/d66> for device path abbreviation
ERROR: cannot determine if device name </dev/md/datads/dsk/d66> is abbreviated device path

or
ERROR: option <detach> metadevice <d260> not a component of a metadevice: </dev/md/datads/dsk/d260>
ERROR: cannot validate file system </zones> option <detach> devices </dev/md/datads/dsk/d260>

Reference Sun CR: 7167449/Bug # 15790545

12. Excluding ufs/vxfs based zones with a zfs root pool.

When you have a zfs root pool with ufs/vxfs file system based zones and a ZFS ABE is created using Live Upgrade, the zones get merged into the zfs root pool. The -m option is not supported while migrating from ZFS to ZFS; it is only supported while migrating from UFS to ZFS. Even while migrating from UFS to ZFS, Live Upgrade cannot preserve the UFS/VxFS file systems of the PBE's zones. These file systems get merged into the zfs root pool.
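As a sketch of the distinction drawn above (BE and pool names are illustrative): when the target BE is ZFS, lucreate takes only the destination pool, so there is no way to direct zone file systems onto separate UFS/VxFS devices:

```shell
# ZFS target: only the destination root pool can be specified;
# zone file systems from the PBE are merged into this pool
lucreate -n zfsBE -p rpool
```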


Reference Sun CR: 7141482/Bug # 15769912

13. Excluding the zone path or filesystems embedded within the NGZ.

If there are NGZs on the system with either of the following embedded within them:
1. ufs/zfs/lofs filesystems
2. zfs datasets

then:
1. During lucreate, excluding/including the zonepath is not supported.
2. During lucreate, excluding/including the filesystems embedded within the zones is not supported.

E.g., if there is a zone called zone1 as:
zonecfg -z zone1
> create
> set zonepath=/zones/zone1
> add fs
> set dir=/soft
> set special=/test1/test2
> set type=lofs
> end
> exit

then

lucreate -n abe -c pbe -x /zones/zone1 -x /test1 -x /test1/test2

is not supported. This also applies to the -f, -x, -y, -Y, and -z options of the lucreate command.

14. Zones on a system with a Solaris clustered environment and Ops Center.

If there are zones on a system with a Solaris clustered environment and Ops Center running, the zones might fail to boot while booting into the alternate boot environment. This cannot be fixed by LU, but there is a workaround: after running luactivate <ABE> and before "init 6", run
# cacaoadm stop -I scn-agent
# svcadm disable -st zones

If any zones are in the "mounted" state, run


# zoneadm -z <zonename> unmount
# init 6

15. lucreate fails if the canmount property of a zfs dataset in the root hierarchy is not set to "noauto".

On a system with a ZFS root, if the canmount property of a dataset in the root hierarchy is not "noauto", e.g.:
bash-3.2# zfs get canmount
NAME                PROPERTY  VALUE   SOURCE
rpool               canmount  on      local
rpool/ROOT          canmount  on      default
rpool/ROOT/pbe      canmount  noauto  local
rpool/ROOT/pbe/ds1  canmount  on      default

Here the canmount property of the zfs dataset "rpool/ROOT/pbe/ds1" is "on", therefore lucreate will fail. Setting it with "zfs set canmount=noauto rpool/ROOT/pbe/ds1" avoids this.

16. LU operations within the non-global zones fail.

Execution of any LU command within a non-global zone is unsupported.

17. All subdirectories of an NGZ zonepath that are part of the OS must be in the same dataset as the zonepath.
-- from zonecfg export, fs:

6 of 7

3/1/2014 10:54 AM

Document 1396382.1

https://support.oracle.com/epmos/faces/DocumentDisplay?_adf.ctrl-state=...

dir: /opt          --- NOT SUPPORTED: /opt in a separate dataset
special: zone1/opt
raw not specified
type: zfs
options: []

See Doc 1530512.1 for more info: Lucreate fails ERROR: failed to mount file system < > on </.alt.tmp.b-ELb.mnt/opt>

18. Issue regarding 'delegated datasets in Zones'.

lucreate fails with mount issues against zones with delegated datasets (CR 17382554). The issue is described in the Oracle Solaris ZFS Administration Guide (http://docs.oracle.com/cd/E19253-01/819-5461/gbbst/index.html): if you are using Oracle Solaris Live Upgrade to upgrade your ZFS BE with non-global zones, first remove any delegated datasets.

Reference Sun CR: 7382554/Bug # 17382554

19. lucreate fails with mount issues against zones with delegated datasets.

lucreate fails with mount issues against zones with delegated datasets (CR 7382554). If you are using Oracle Solaris Live Upgrade to upgrade your ZFS BE with non-global zones, first remove any delegated datasets. The issue is described in the Oracle Solaris ZFS Administration Guide (http://docs.oracle.com/cd/E26505_01/html/E37384/gayov.html#gbbst).
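The removal step recommended above can be sketched as follows (the zone name zone1 and dataset name zonepool/delegated are illustrative); the dataset can be re-added with 'add dataset' after the upgrade:

```shell
# Sketch: remove the delegated dataset from the zone configuration
# before running lucreate (names are illustrative)
zonecfg -z zone1
> remove dataset name=zonepool/delegated
> exit
```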

20. Live Upgrade will consider only file systems mounted in zones using 'zonecfg> add fs'.

If file systems are not added to a zone using
zonecfg> add fs

these file systems won't be considered by Live Upgrade while creating the new boot environment. File systems which are mounted inside zones without using 'zonecfg> add fs' will not be mounted inside the newly created boot environment.

Glossary:
BE - Boot environment
PBE - Primary boot environment
ABE - Alternate boot environment

