
Zones Overview

The Solaris Zones partitioning technology is used to virtualize operating system services and provide an isolated and secure environment for running applications. Zones can be built on any machine that is running the Solaris 10 release. A zone is a virtualized operating system environment created within a single instance of the Solaris Operating System.

When to Use Zones


Zones are ideal for environments that consolidate a number of applications on a single server. The cost and complexity of managing numerous machines make it advantageous to consolidate several applications on larger, more scalable servers.

Types of zones
Zones come in two flavors:

Global zone - Global zones manage hardware resources and are the administrative domain for local zones.
Local zones - Virtualized Solaris execution environments that look and feel just like a normal standalone Solaris installation.
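From the global zone, zoneadm shows both flavors in a single listing. The sketch below is illustrative only; the local zone name and path are placeholders, not taken from this system:

zoneadm list -cv
  ID NAME             STATUS     PATH
   0 global           running    /
   1 myzone           running    /zones/myzone_root

The global zone is always ID 0 with path /; every other entry is a local (non-global) zone.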

Local zones come in the following types:

Sparse Root Zone - Contains a read/write copy of a portion of the file system that exists on the global zone. Other file systems are mounted read-only from the global zone as loopback virtual file systems. When a sparse root zone is created, the global administrator selects which file systems to share with the sparse root zone in addition to the default read-only file systems: /usr, /lib, /sbin, and /platform. All packages that are installed on the global zone are available to the sparse root zone; a package database is created and all files in the mounted file system are shared with the zone.
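In zonecfg output, a sparse root zone is recognizable by inherit-pkg-dir entries for those default directories; a representative fragment (the zone name is a placeholder, but the inherited directories match the zonecfg info output shown later in this document):

zonecfg -z myzone info
...
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr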

Whole Root Zone - Contains a read/write copy of the entire file system that exists on the global zone. When a whole root zone is created, all packages that are installed on the global zone are available to the whole root zone; a package database is created and all files are copied onto the whole root zone for the dedicated and independent use of the zone.

Branded Zone - Supports an operating environment that differs from the global zone's Solaris 10 release. For example, you can run Solaris 8, Solaris 9, or Linux applications in a branded zone.
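The difference is set at creation time in zonecfg: the default template inherits the four read-only directories (sparse root), while create -b starts from a blank configuration in which nothing is inherited (whole root). A minimal sketch, with a placeholder zone name, path, and address:

zonecfg -z wholezone
zonecfg:wholezone> create -b
zonecfg:wholezone> set zonepath=/zones/wholezone_root
zonecfg:wholezone> add net
zonecfg:wholezone:net> set address=192.168.1.50
zonecfg:wholezone:net> set physical=e1000g0
zonecfg:wholezone:net> end
zonecfg:wholezone> commit
zonecfg:wholezone> exit

Branded zones are configured the same way but from a brand-specific template, which is only available once the corresponding brand packages are installed.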

Zone features used in this environment

Resource Management Overview

Solaris resource management features enable you to treat workloads individually. You can do the following:
- Restrict access to a specific resource
- Offer resources to workloads on a preferential basis
- Isolate workloads from each other

Shared-IP and Exclusive-IP

Shared-IP: Zones on the same system can bind to the same network port by using the distinct IP addresses associated with each zone or by using the wildcard address. The applications are also prevented from monitoring or intercepting each other's network traffic, file system data, or process activity.

Exclusive-IP: If a zone needs to be isolated at the IP layer on the network, for example by being connected to different VLANs or different LANs than the global zone and other non-global zones, then for security reasons the zone can be given an exclusive IP instance. An exclusive-IP zone can be used to consolidate applications that must communicate on different subnets that are on different VLANs or different LANs.
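Both of these features are configured through zonecfg. The fragment below is only an illustrative sketch; the zone name, interface, and share value are placeholders, and the zones documented later on this host use the default shared-IP type:

zonecfg -z myzone
zonecfg:myzone> set ip-type=exclusive
zonecfg:myzone> add net
zonecfg:myzone:net> set physical=e1000g1
zonecfg:myzone:net> end
zonecfg:myzone> add rctl
zonecfg:myzone:rctl> set name=zone.cpu-shares
zonecfg:myzone:rctl> add value (priv=privileged,limit=20,action=none)
zonecfg:myzone:rctl> end
zonecfg:myzone> commit
zonecfg:myzone> exit

With ip-type=exclusive the zone gets its own IP instance and the IP address is configured inside the zone rather than in the net resource. The zone.cpu-shares resource control shown here is the same one that appears in the wlsdva01 configuration later in this document.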

Global zone details

Hostname:       idolprb00
OS version:     Solaris 10 8/07 s10s_u4wos_12b SPARC
Hardware type:  sun4v, SPARC Enterprise T5120
Memory size:    65408 Megabytes
CPUs:           32 of 1167 MHz SUNW,UltraSPARC-T2
Zones list:     idolprb01 idolprb02 idolprb03 idolprb04 idolprb05
                idolprb06 idolprb07 idolprb08 idolprb09 idolprb10
Disk details:   1. c1t0d0   2. c1t1d0

Details disk 1: c1t0d0

partition>
Current partition table (original):
Total disk cylinders available: 65533 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm    3751 - 16879       27.35GB    (13129/0/0)  57347472
  1       swap    wu       0 -  3750        7.81GB     (3751/0/0)  16384368
  2     backup    wm       0 - 65532      136.49GB    (65533/0/0) 286248144
  3 unassigned    wm       0                   0          (0/0/0)         0
  4 unassigned    wm   30947 - 65532       72.04GB    (34586/0/0) 151071648
  5       home    wm   16880 - 21568        9.77GB     (4689/0/0)  20481552
  6       home    wm   21569 - 26257        9.77GB     (4689/0/0)  20481552
  7       home    wm   26258 - 30946        9.77GB     (4689/0/0)  20481552

Details disk 2: c1t1d0

partition> p
Current partition table (original):
Total disk sectors available: 286590942 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm             256       136.66GB        286590942
  1 unassigned    wm               0              0                0
  2 unassigned    wm               0              0                0
  3 unassigned    wm               0              0                0
  4 unassigned    wm               0              0                0
  5 unassigned    wm               0              0                0
  6 unassigned    wm               0              0                0
  8   reserved    wm       286590943         8.00MB        286607326

File System, mount points, swap details, pool list

/etc/vfstab

#device             device              mount             FS      fsck  mount    mount
#to mount           to fsck             point             type    pass  at boot  options
#
fd                  -                   /dev/fd           fd      -     no       -
/proc               -                   /proc             proc    -     no       -
/dev/dsk/c1t0d0s1   -                   -                 swap    -     no       -
/dev/dsk/c1t0d0s0   /dev/rdsk/c1t0d0s0  /                 ufs     1     no       logging
/devices            -                   /devices          devfs   -     no       -
ctfs                -                   /system/contract  ctfs    -     no       -
objfs               -                   /system/object    objfs   -     no       -
swap                -                   /tmp              tmpfs   -     yes      -
sharefs             -                   /etc/dfs/sharetab sharefs -     no       -

df -h

Filesystem                      size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0                27G    12G    15G    45%    /
/devices                          0K     0K     0K     0%    /devices
ctfs                              0K     0K     0K     0%    /system/contract
proc                              0K     0K     0K     0%    /proc
mnttab                            0K     0K     0K     0%    /etc/mnttab
swap                             16G   1.4M    16G     1%    /etc/svc/volatile
objfs                             0K     0K     0K     0%    /system/object
sharefs                           0K     0K     0K     0%    /etc/dfs/sharetab
/platform/SUNW,SPARC-Enterprise-T5120/lib/libc_psr/libc_psr_hwcap2.so.1
                                 27G    12G    15G    45%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T5120/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                                 27G    12G    15G    45%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                                0K     0K     0K     0%    /dev/fd
swap                             16G   168K    16G     1%    /tmp
swap                             16G   104K    16G     1%    /var/run
intpool                         134G    79K   1.9G     1%    /intpool
intpool/idolprb01_root          8.0G   5.1G   2.9G    65%    /intpool/idolprb01_root
intpool/idolprb02_root          5.0G   1.2G   3.8G    25%    /intpool/idolprb02_root
intpool/idolprb03_root          5.0G   1.2G   3.8G    24%    /intpool/idolprb03_root
intpool/idolprb04_root          5.0G   955M   4.1G    19%    /intpool/idolprb04_root
intpool/idolprb05_root          5.0G   864M   4.2G    17%    /intpool/idolprb05_root
intpool/idolprb06_root          5.0G   1.2G   3.8G    25%    /intpool/idolprb06_root
intpool/jrulesprb01_data_clone  134G   291M   1.9G    14%    /intpool/jrulesprb01_data_clone
intpool/jrulesprb01_root        8.0G    20K   8.0G     1%    /intpool/jrulesprb01_root
intpool/jrulesprb02_data_clone  134G   110M   1.9G     6%    /intpool/jrulesprb02_data_clone
intpool/jrulesprb02_root        8.0G    20K   8.0G     1%    /intpool/jrulesprb02_root
intpool/jrulesprb03_data_clone  134G    88K   1.9G     1%    /intpool/jrulesprb03_data_clone
intpool/jrulesprb03_root        8.0G    20K   8.0G     1%    /intpool/jrulesprb03_root
intpool/templatezone_root       4.0G   336M   3.7G     9%    /intpool/templatezone_root
intpool2                        100G   1.3G    23G     6%    /intpool2
intpool2/idolprb07_root         5.0G   2.0G   3.0G    40%    /intpool2/idolprb07_root
intpool2/idolprb08_root         5.0G   632M   4.4G    13%    /intpool2/idolprb08_root
intpool2/idolprb09_root         5.0G   2.0G   3.0G    40%    /intpool2/idolprb09_root
intpool2/idolprb10_root         5.0G   659M   4.4G    13%    /intpool2/idolprb10_root
intpool2/idolprb11_root         5.0G   2.0G   3.0G    40%    /intpool2/idolprb11_root
intpool/idolprb01_data_clone    134G   492M   1.9G    21%    /intpool/idolprb01_data_clone
intpool/idolprb02_data_clone    134G   1.2G   1.9G    39%    /intpool/idolprb02_data_clone
intpool/idolprb03_data_clone    134G   122M   1.9G     7%    /intpool/idolprb03_data_clone
intpool/idolprb04_data_clone    134G    26K   1.9G     1%    /intpool/idolprb04_data_clone
intpool/idolprb05_data_clone    134G    26K   1.9G     1%    /intpool/idolprb05_data_clone
intpool/idolprb06_data_clone    134G    83M   1.9G     5%    /intpool/idolprb06_data_clone
intpool2/idolprb07_data_clone   100G    33M    23G     1%    /intpool2/idolprb07_data_clone
intpool2/idolprb08_data_clone   100G    19K    23G     1%    /intpool2/idolprb08_data_clone
intpool2/idolprb09_data_clone   100G    21M    23G     1%    /intpool2/idolprb09_data_clone
intpool2/idolprb10_data_clone   100G    21K    23G     1%    /intpool2/idolprb10_data_clone
intpool2/idolprb11_data_clone   100G    23M    23G     1%    /intpool2/idolprb11_data_clone

SWAP DETAILS

swapfile             dev  swaplo   blocks     free
/dev/dsk/c1t0d0s1   32,9      16 16384352 16384352

zpool list

NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
intpool    136G  13.1G   123G     9%  ONLINE  -
intpool2   101G  8.48G  92.8G     8%  ONLINE  -

Details of zpool

  pool: intpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        intpool     ONLINE       0     0     0
          c1t1d0    ONLINE       0     0     0

  pool: intpool2
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        intpool2    ONLINE       0     0     0
          c1t0d0s4  ONLINE       0     0     0
          c1t0d0s5  ONLINE       0     0     0
          c1t0d0s6  ONLINE       0     0     0
          c1t0d0s7  ONLINE       0     0     0

Network information, hosts file information, NetBackup information

ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb01
        inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb02
        inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb03
        inet 127.0.0.1 netmask ff000000
lo0:4: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb04
        inet 127.0.0.1 netmask ff000000
lo0:5: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb05
        inet 127.0.0.1 netmask ff000000
lo0:6: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb06
        inet 127.0.0.1 netmask ff000000
lo0:7: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb07
        inet 127.0.0.1 netmask ff000000
lo0:8: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb08
        inet 127.0.0.1 netmask ff000000
lo0:9: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb09
        inet 127.0.0.1 netmask ff000000
lo0:10: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb10
        inet 127.0.0.1 netmask ff000000
lo0:11: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone idolprb11
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 142.134.170.93 netmask ffffff00 broadcast 142.134.170.255
        ether 0:21:28:0:cc:1c
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb01
        inet 142.134.170.94 netmask ffffff00 broadcast 142.134.170.255
e1000g0:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb02
        inet 142.134.170.220 netmask ffffff00 broadcast 142.134.170.255
e1000g0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb03
        inet 142.134.170.222 netmask ffffff00 broadcast 142.134.170.255
e1000g0:4: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb04
        inet 142.134.170.221 netmask ffffff00 broadcast 142.134.170.255
e1000g0:5: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb05
        inet 142.134.170.223 netmask ffffff00 broadcast 142.134.170.255
e1000g0:6: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb06
        inet 142.134.170.229 netmask ffffff00 broadcast 142.134.170.255
e1000g0:7: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb07
        inet 142.134.170.230 netmask ffffff00 broadcast 142.134.170.255
e1000g0:8: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb08
        inet 142.134.170.231 netmask ffffff00 broadcast 142.134.170.255
e1000g0:9: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb09
        inet 142.134.170.232 netmask ffffff00 broadcast 142.134.170.255
e1000g0:10: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb10
        inet 142.134.170.233 netmask ffffff00 broadcast 142.134.170.255
e1000g0:11: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone idolprb11
        inet 142.134.170.234 netmask ffffff00 broadcast 142.134.170.255
e1000g3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.150.93 netmask ffffff00 broadcast 192.168.150.255
        ether 0:21:28:0:cc:1f

/etc/hosts

root@idolprb00:/root> more /etc/hosts
#
# Internet host table
#
::1             localhost loghost
127.0.0.1       localhost loghost
142.134.170.93  idolprb00 idolprb00.aliant.icn
192.168.150.93  idolprb-nb00b0
#
# NetBackup Server
192.168.150.251 pukake
192.168.150.13  nbumaster-nb00m0
# OVO Server
172.29.208.34   nbsjovo-vip nbsjovo-vip.aliantnm.private

netbackup/bp.conf

SERVER = nbumaster-nb00m0
SERVER = nbumedia-nb00m0
SERVER = pukake
CLIENT_NAME = idolprb00
CONNECT_OPTIONS = pukake 0 2 1
DEFAULT_CONNECT_OPTIONS = 0 1 1
MEDIA_SERVER = pukake

Procedure used at Bell Aliant to create zones

In the SAS Weblogic environment, the developers will sometimes want a container destroyed and rebuilt. The system was built so we could do this quickly and easily. There is a templatezone installed on each T2000; they should all be identical. The purpose of this container is to be the base configuration of all local containers. The templatezone should not be in a running state. It is designed to sit there in an installed state for cloning. You may, however, have to boot it for maintenance purposes, such as the addition of a new account that will be part of all future base configurations. In this example, I've captured the commands from a request I had to recreate wlsdva01. The basic steps are:

1. Confirm the container definition is saved.
2. Delete the container definition.
3. Delete the related ZFS filesystems.
4. Create ZFS filesystems for the new container and set ZFS properties.
5. Create the new container from the stored template file.
6. Clone the templatezone into the new container definition.
7. Boot the new container, log in to the console, and configure it.
8. Set the mountpoint property of the /data filesystem inside the new container.

Each section is broken out in the following pages. If you had to create a new container, you could simply start at step 4 in the above list and modify an existing container template for step 5.
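For a brand-new container, the only extra work is producing its template for step 5; a minimal sketch, assuming you copy and edit an existing template (the new container name used here is a placeholder):

cd /sanpool/templates
cp wlsdva01.template wlsdva11.template
# Edit wlsdva11.template and change the zonepath, the net address, the comment
# attribute, and the dataset name to match the new container, then continue
# from step 4 with the new container name.

The file written by zonecfg export is just a series of zonecfg commands, so a copied template can be adjusted with any text editor.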

Step 1 - Save the container definition

In most cases the container configuration template will already be on the server in /intpool/templates or /sanpool/templates. I've included the steps to create an export of the container configuration in case it isn't.

root@wlsdva00:/root> cd /sanpool/templates
root@wlsdva00:/sanpool/templates> ls -al
total 43
drwxr-xr-x   2 root     sys           15 Oct 24 15:27 ./
drwxr-xr-x  25 root     sys           25 Mar  4 09:25 ../
-rwxr-xr-x   1 root     root         431 Oct  4 13:52 create_zfs_fs.ksh*
-rwxr-xr-x   1 root     root         297 Oct 18 14:31 create_zfs_fs_2.ksh*
-rw-r--r--   1 root     root         447 Oct 24 15:27 template.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva01.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva02.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva03.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva04.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva05.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva06.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva07.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva08.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva09.template
-rw-r--r--   1 root     root         494 Oct 24 15:27 wlsdva10.template

root@wlsdva00:/sanpool/templates> zonecfg -z wlsdva01 info
zonename: wlsdva01
zonepath: /sanpool/wlsdva01_root
autoboot: false
pool:
limitpriv:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
net:
        address: 142.134.125.246
        physical: e1000g0
rctl:
        name: zone.cpu-shares
        value: (priv=privileged,limit=500,action=none)
attr:
        name: comment
        type: string
        value: "PreProd Container 1"
dataset:
        name: sanpool/wlsdva01_data

root@wlsdva00:/sanpool/templates> zonecfg -z wlsdva01 export -f _wlsdva01.template
root@wlsdva00:/sanpool/templates> ls -al
total 45
drwxr-xr-x   2 root     sys           16 Mar  4 09:55 ./
drwxr-xr-x  25 root     sys           25 Mar  4 09:25 ../
-rw-r--r--   1 root     root         493 Mar  4 09:55 _wlsdva01.template
-rwxr-xr-x   1 root     root         431 Oct  4 13:52 create_zfs_fs.ksh*
-rwxr-xr-x   1 root     root         297 Oct 18 14:31 create_zfs_fs_2.ksh*
-rw-r--r--   1 root     root         447 Oct 24 15:27 template.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva01.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva02.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva03.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva04.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva05.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva06.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva07.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva08.template
-rw-r--r--   1 root     root         493 Oct 24 15:27 wlsdva09.template
-rw-r--r--   1 root     root         494 Oct 24 15:27 wlsdva10.template
root@wlsdva00:/sanpool/templates> diff _wlsdva01.template wlsdva01.template
root@wlsdva00:/sanpool/templates>

The diff returned no output, so the stored wlsdva01.template already matches the running configuration.

Step 2 - Delete the existing container definition

Once you're sure that you have the zonecfg information backed up, go ahead and halt and delete the container.

root@wlsdva00:/root> zoneadm list -cv
  ID NAME             STATUS     PATH
   0 global           running    /
   2 wlsdva02         running    /sanpool/wlsdva02_root
   .
   .
   .
  10 wlsdva10         running    /sanpool/wlsdva10_root
  40 wlsdva01         running    /sanpool/wlsdva01_root
   - templatezone     installed  /sanpool/templatezone_root
root@wlsdva00:/root> zoneadm -z wlsdva01 halt
root@wlsdva00:/root> zoneadm list -cv
  ID NAME             STATUS     PATH
   0 global           running    /
   2 wlsdva02         running    /sanpool/wlsdva02_root
   .
   .
   .
  10 wlsdva10         running    /sanpool/wlsdva10_root
   - templatezone     installed  /sanpool/templatezone_root
   - wlsdva01         installed  /sanpool/wlsdva01_root
root@wlsdva00:/root> zonecfg -z wlsdva01 delete -F
root@wlsdva00:/root> zoneadm list -cv
  ID NAME             STATUS     PATH
   0 global           running    /
   2 wlsdva02         running    /sanpool/wlsdva02_root
   .
   .
   .
  10 wlsdva10         running    /sanpool/wlsdva10_root
   - templatezone     installed  /sanpool/templatezone_root
root@wlsdva00:/root>

Step 3 - Delete the related ZFS filesystems

Once the container definition is gone, you can destroy the ZFS filesystems that were used for the container.

IMPORTANT: Once you destroy a ZFS filesystem, you cannot get it back! There is no confirmation prompt!

root@wlsdva00:/root> zfs list | grep wlsdva01
sanpool/wlsdva01_data             510M   514M   510M  /data
sanpool/wlsdva01_data@backup       33K      -   510M
sanpool/wlsdva01_data_clone      4.34M  42.1G   510M  /sanpool/wlsdva01_data_clone
sanpool/wlsdva01_root            3.30G   713M  3.30G  /sanpool/wlsdva01_root
root@wlsdva00:/root> zfs destroy sanpool/wlsdva01_root
root@wlsdva00:/root> zfs destroy sanpool/wlsdva01_data_clone
root@wlsdva00:/root> zfs destroy sanpool/wlsdva01_data@backup
root@wlsdva00:/root> zfs destroy sanpool/wlsdva01_data
root@wlsdva00:/root>

The sanpool/wlsdva01_data@backup and sanpool/wlsdva01_data_clone objects are created as part of the backup process. When we recreate these filesystems for the new container, these objects will not be created.
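Before running the destroys, it is worth double-checking exactly which datasets match the container name so nothing else gets caught by a typo; a small sanity check along these lines (not part of the captured session):

# Everything printed here is what the zfs destroy commands in this step remove
zfs list | grep wlsdva01_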

Step 4 - Create new ZFS filesystems for the new container

There is a create_zfs_fs.ksh script in the /sanpool/templates/ and /intpool/templates/ directories. These are the scripts that were used to create the ZFS filesystems for the initial containers. Do not run them to create all of the filesystems again; use them as a template or for disaster recovery purposes. You have to create a zonename_root and a zonename_data ZFS filesystem and set the quota and reservation accordingly. The reservation property reserves the specified space in the ZFS pool; the quota property specifies the limit on the space the filesystem can use.

root@wlsdva00:/sanpool/templates> cat create_zfs_fs.ksh
#!/usr/bin/ksh

POOLNAME=sanpool

#for zone in wlsdva01 wlsdva02 wlsdva03 wlsdva04 wlsdva05 wlsdva06 wlsdva07 wlsdva08 wlsdva09 wlsdva10
for zone in templatezone
do
        zfs create ${POOLNAME}/${zone}_root
        zfs set reservation=4G ${POOLNAME}/${zone}_root
        zfs set quota=4G ${POOLNAME}/${zone}_root
        zfs create ${POOLNAME}/${zone}_data
        zfs set reservation=1G ${POOLNAME}/${zone}_data
        zfs set quota=1G ${POOLNAME}/${zone}_data
done

root@wlsdva00:/sanpool/templates> zfs create sanpool/wlsdva01_data
root@wlsdva00:/sanpool/templates> zfs create sanpool/wlsdva01_root
root@wlsdva00:/sanpool/templates> zfs set reservation=4G sanpool/wlsdva01_root
root@wlsdva00:/sanpool/templates> zfs set reservation=1G sanpool/wlsdva01_data
root@wlsdva00:/sanpool/templates> zfs set quota=4G sanpool/wlsdva01_root
root@wlsdva00:/sanpool/templates> zfs set quota=1G sanpool/wlsdva01_data
root@wlsdva00:/sanpool/templates>
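If you want to confirm that the properties took effect before moving on, zfs get can report them in one command; this is a suggested sanity step, not part of the original procedure:

# Both filesystems should show the 4G/1G quota and reservation set above
zfs get quota,reservation sanpool/wlsdva01_root sanpool/wlsdva01_data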

Step 5 - Create the new container from the saved configuration

Now that the old container has been wiped and you've created new filesystems, it's time to rebuild. This step is pretty easy. The container will show as configured.

root@wlsdva00:/sanpool/templates> zonecfg -z wlsdva01 -f wlsdva01.template
root@wlsdva00:/sanpool/templates> zoneadm list -cv
  ID NAME             STATUS     PATH
   0 global           running    /
   2 wlsdva02         running    /sanpool/wlsdva02_root
   3 wlsdva03         running    /sanpool/wlsdva03_root
   4 wlsdva04         running    /sanpool/wlsdva04_root
   5 wlsdva05         running    /sanpool/wlsdva05_root
   6 wlsdva06         running    /sanpool/wlsdva06_root
   7 wlsdva07         running    /sanpool/wlsdva07_root
   8 wlsdva08         running    /sanpool/wlsdva08_root
   9 wlsdva09         running    /sanpool/wlsdva09_root
  10 wlsdva10         running    /sanpool/wlsdva10_root
   - templatezone     installed  /sanpool/templatezone_root
   - wlsdva01         configured /sanpool/wlsdva01_root
root@wlsdva00:/sanpool/templates>

Step 6 - Clone the templatezone into the new container definition

Now you'll see the real purpose of the templatezone. The templatezone should be in an installed state. If it's running, shut it down with the zoneadm -z templatezone halt command.

root@wlsdva00:/sanpool/templates> zoneadm -z wlsdva01 clone templatezone
/sanpool/wlsdva01_root must not be group readable.
/sanpool/wlsdva01_root must not be group executable.
/sanpool/wlsdva01_root must not be world readable.
/sanpool/wlsdva01_root must not be world executable.
could not verify zonepath /sanpool/wlsdva01_root because of the above errors.
zoneadm: zone wlsdva01 failed to verify
root@wlsdva00:/sanpool/templates> chmod 700 /sanpool/wlsdva01_root/
root@wlsdva00:/sanpool/templates> zoneadm -z wlsdva01 clone templatezone
Cloning zonepath /sanpool/templatezone_root...
root@wlsdva00:/sanpool/templates> zoneadm list -cv
  ID NAME             STATUS     PATH
   0 global           running    /
   2 wlsdva02         running    /sanpool/wlsdva02_root
   3 wlsdva03         running    /sanpool/wlsdva03_root
   4 wlsdva04         running    /sanpool/wlsdva04_root
   5 wlsdva05         running    /sanpool/wlsdva05_root
   6 wlsdva06         running    /sanpool/wlsdva06_root
   7 wlsdva07         running    /sanpool/wlsdva07_root
   8 wlsdva08         running    /sanpool/wlsdva08_root
   9 wlsdva09         running    /sanpool/wlsdva09_root
  10 wlsdva10         running    /sanpool/wlsdva10_root
   - templatezone     installed  /sanpool/templatezone_root
   - wlsdva01         installed  /sanpool/wlsdva01_root
root@wlsdva00:/sanpool/templates>

I always forget to chmod the _root filesystem to 700, so I left the error in. The cloning takes between 5 and 10 minutes, depending on whether you're using SAN or internal disks and on the server activity at the time. Notice that the container is now in an installed state. It's ready to boot and configure.
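To avoid that verification error in the first place, the zonepath can be locked down before the first clone attempt; a two-line sketch using the same names as above:

# zoneadm verify requires the zonepath to be mode 700 and owned by root
chmod 700 /sanpool/wlsdva01_root
zoneadm -z wlsdva01 clone templatezone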

Step 7 - Boot the new container and configure it

You'll need to get a console on the new container to complete the configuration. Use zlogin -C zonename for console access, and ~. to exit. You'll be completing the DNS and timezone configuration. Use the values specified below and AST for the time zone.

root@wlsdva00:/sanpool/templates> zoneadm -z wlsdva01 boot
root@wlsdva00:/sanpool/templates> zlogin -C wlsdva01
[Connected to zone 'wlsdva01' console]

Hostname: wlsdva01

What type of terminal are you using?
 1) ANSI Standard CRT
 2) DEC VT52
 3) DEC VT100
 4) Heathkit 19
 5) Lear Siegler ADM31
 6) PC Console
 7) Sun Command Tool
 8) Sun Workstation
 9) Televideo 910
 10) Televideo 925
 11) Wyse Model 50
 12) X Terminal Emulator (xterms)
 13) CDE Terminal Emulator (dtterm)
 14) Other
Type the number of your choice and press Return: 1

Creating new rsa public/private host key pair
Creating new dsa public/private host key pair

---- many config screens, just use the config info below ----

Domain name: aliant.icn
Server address(es): 142.134.188.87 142.134.188.177
Search domain(s): aliant.icn nbtel.nb.ca aliantnm.private amc.aliant.ca

---- many config screens, just use the config info above ----
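From a separate global-zone session you can confirm at this point that the container is up; a supplementary check, not part of the captured session:

# The rebuilt container should now show a STATUS of running
zoneadm list -v | grep wlsdva01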

Step 8 - Fix the mountpoint property for the /data filesystem in the container

The new container will have the sanpool/zonename_data filesystem mounted as /sanpool/zonename_data. You'll have to change that to /data. The change is immediate.

wlsdva01 console login: root
Password:
Last login: Fri Feb 29 13:13:55 on pts/6
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
You have new mail.
# df -k | grep data
sanpool/wlsdva01_data 1048576      24 1048551     1%    /sanpool/wlsdva01_data
# zfs set mountpoint=/data sanpool/wlsdva01_data
# df -k | grep data
sanpool/wlsdva01_data 1048576      24 1048551     1%    /data
# exit

wlsdva01 console login: ~.
[Connection to zone 'wlsdva01' console closed]
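If you want to double-check the property itself rather than relying on df, zfs get reports it directly; a supplementary check (not part of the captured session) that can be run inside the container or from the global zone:

# MOUNTPOINT should now read /data for the delegated dataset
zfs get mountpoint sanpool/wlsdva01_data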

Applications running on zones and steps to install applications on zones
