If Virtualization Is Free, It Can't Be Any Good

Right?
A Consolidation Case Study on Oracle SuperCluster
by Thierry Manfé, with contributions from Orgad Kimchi, Maria Frendberg, and Mike Gerdts
Best practices and hands-on instructions for using Oracle Solaris Zones to consolidate existing physical
servers and their applications onto Oracle SuperCluster using the P2V migration process, including a
step-by-step example of how to consolidate an Oracle Solaris 8 server running Oracle Database 10g.

Published September 2013


Table of Contents
Introduction
Good Questions to Start With
SuperCluster Domains for Oracle Solaris Zones
Network Setup
Oracle Licensing
P2V Migration Step by Step
Performance Tuning
Example: Consolidating an Oracle Solaris 8 Server Running Oracle Database 10g
Conclusion
See Also
About the Author

Introduction
A growing number of companies are looking at virtualization to consolidate many physical servers on a single
platform. As a general-purpose engineered system, Oracle SuperCluster has the following characteristics, which
are required to address consolidation needs:

Scalable computing resources with strong throughput capacities, which are critical for the execution of
many virtual machines and applications in parallel.

Fully redundant compute, storage, and networking resources, which deliver the availability required for
running many application layers on a single virtualized platform.

Native virtualization technologies with Oracle VM Server for SPARC and lightweight Oracle Solaris
Zones. These two technologies can be combined for maximum flexibility while reducing the virtualization
overhead.

A powerful and versatile database machine that can accommodate the load of many applications.

Tools to facilitate migration from a physical server to a virtual machine (P2V migration). The content of a physical server, including applications, can be captured and redeployed in an Oracle Solaris Zone.

Support for a wide range of Oracle Solaris releases, simplifying the consolidation of legacy servers.
This article provides guidance, best practices, and hands-on instructions for using Oracle Solaris Zones to
consolidate existing servers onto SuperCluster. It focuses on the operating system and virtualization layers, on
the P2V migration process, and on the associated tools for facilitating this migration.
This article is intended to help system administrators, architects, and project managers who have some
understanding of SuperCluster and want to familiarize themselves with P2V migration and evaluate the possibility
of conducting such a transition.
Good Questions to Start With

SuperCluster offers a lot of flexibility and numerous options for virtualization and consolidation. The following
sections discuss some questions that should help you to quickly identify the best options for consolidating your
physical servers on SuperCluster.

Is My Server Eligible for P2V Consolidation?


Oracle SuperCluster T5-8 runs Oracle Solaris releases 10 and 11 natively. SPARC-based servers running these
releases can be consolidated on SuperCluster using native Oracle Solaris Zones. SPARC servers running Oracle
Solaris releases 8 and 9 can also be consolidated using solaris8 and solaris9 branded zones.
Note: Native zones run the same Oracle Solaris release as the global zone that is hosting them. This is the
preferred configuration for performance: a native zone uses an up-to-date Oracle Solaris release, while a
non-native zone runs older Oracle Solaris binaries that might not take advantage of the latest hardware
platforms and processors. Native zones use the native brand on Oracle Solaris 10 and the solaris brand on
Oracle Solaris 11.
Here are the required updates for each of these Oracle Solaris releases:

Oracle Solaris 11 SRU 5.5 or later.

Oracle Solaris 10 update 8 or later; 64-bit.

Oracle Solaris 9: No restrictions on updates; 64-bit required. See System Administration Guide: Oracle
Solaris 9 Containers for more details on solaris9 branded zones. Use isainfo -b to check the number of
bits (32 or 64) in the address space of the server to be consolidated.

Oracle Solaris 8 2/04 or later; 64-bit. If you are using a previous update, applications
using libthread might experience some problems. See Chapter 8 of the System Administration Guide: Oracle
Solaris 8 Containers for more details on solaris8 branded zones. Use isainfo -b to check the number of
bits (32 or 64) in the address space of the server to be consolidated.
Also look at the Oracle Solaris Support Life Cycle Policy (referenced on this page) to check whether your Oracle
Solaris release is supported.
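For example, a quick check on the server to be consolidated shows the width of its address space:
# isainfo -b
64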
Non-SPARC servers (such as x86 servers) cannot be consolidated on SuperCluster using P2V migration.
However, applications using non-native code, such as Java, Python, and others, can be migrated from
non-SPARC servers to SuperCluster.
Note: For consolidating an application that was developed with a compiled language (such as C, C++, or
FORTRAN) and that runs on a non-SPARC server, the application must first be recompiled on SPARC, which
means you need access to the source code.

Are You Planning a Long-Term or Short-Term Migration?


Are you planning to consolidate a server running a business-critical application that you want to update with
future releases over upcoming years, or are you trying to get rid of an old server running a legacy application that
will not be updated anymore?
In the first scenario, long-term migration of a critical application, native Oracle Solaris 10 (that
is, native brand) zones and native Oracle Solaris 11 (that is, solaris brand) zones are preferred, because
they provide native performance and are maintained by the SuperCluster quarterly full stack download patch
(QFSDP) delivered by Oracle.
Non-native zones, such as an Oracle Solaris 10 zone running in an Oracle Solaris 11 global zone, should be
limited to applications and operating systems that are not planned to be updated on a regular basis and for which
performance is not critical (though, even with non-native zones, migrating from obsolete hardware to
SuperCluster can provide a performance improvement). The QFSDP does not apply to non-native zones.
Patching non-native zones is the responsibility of the operator. Within these constraints, non-native zones offer
the required flexibility for consolidating legacy servers running Oracle Solaris releases 8 and 9.

How Critical Are Performance and Manageability?


Oracle SuperCluster T5-8 offers two options for the zone's storage location:

Oracle's Sun ZFS Storage 7320 appliance, which is integrated into SuperCluster

Local hard-disk drives (HDDs)

The two main criteria to be considered when choosing between these options are I/O performance and
manageability.
I/O Performance
The first thing to check on the source server to be consolidated is whether the data that generates the I/O load is
located on the root file system. If not, which is typically the case with data located on SAN storage, the data will
likely not be transferred to the zonepath as part of the P2V migration, and the location of the resulting zone won't
have much impact on I/O performance. In such a case, the zone should be installed on the Sun ZFS Storage
7320 appliance. If the data is located on the root file system, it will be transferred to the zonepath and the zone's
location will impact I/O performance. In this case, and if a large number of zones have their zonepaths on the
Sun ZFS Storage 7320 appliance, local HDDs can be a good alternative to dedicate I/O bandwidth to a limited
number of zones.
Manageability
If you want to migrate zones between the SuperCluster domains, installing zones on the Sun ZFS Storage 7320
appliance is the best option. Creating a dedicated ZFS pool (zpool) from an iSCSI LUN exported by the Sun ZFS
Storage 7320 appliance for each zone greatly simplifies the migration, because it becomes a matter of
transferring the zpool from the source to the target domain using zpool export and zpool import. No data
migration or copy is required.
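A minimal sketch of such a migration, assuming a zone named s10P2V whose zonepath lives in a dedicated
s10p2vpool zpool (both names hypothetical). On the source domain:
# zoneadm -z s10P2V halt
# zoneadm -z s10P2V detach
# zpool export s10p2vpool
And on the target domain:
# zpool import s10p2vpool
# zonecfg -z s10P2V "create -a /s10p2vpool/s10P2V"
# zoneadm -z s10P2V attach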
Note: A SuperCluster domain is simply an Oracle VM Server for SPARC virtual machine (aka a logical domain).
On SuperCluster, zones are hosted in domains.
In addition, one zpool per zone matches the default zone provisioning schema from Oracle Enterprise Manager
Ops Center, and if the zpool is located on the Sun ZFS Storage 7320 appliance, Oracle Enterprise Manager Ops
Center can be used to perform the migration between domains.
Finally, zones installed on the Sun ZFS Storage 7320 appliance immediately benefit from the high availability
designed into the SuperCluster: the storage and the network access to it are fully redundant.
SuperCluster Domains for Oracle Solaris Zones
Oracle SuperCluster T5-8 comes with a set of logical domains (Oracle VM Server for SPARC virtual machines).
These domains are configured on the SPARC T5-8 servers during the initial configuration of SuperCluster and
they can be of two different types:

Database Domains dedicated to Oracle Database 11g Release 2


Application Domains dedicated to any software, including databases other than Oracle Database
11g Release 2
Oracle Solaris Zones resulting from a P2V consolidation must be hosted in Application Domains.
Note: P2V zones are not supported in Database Domains because all the computing resources in these domains
are dedicated to Oracle Database 11g Release 2 to deliver the expected performance on the database side.
If the server to be consolidated runs Oracle Solaris release 8 or 9, you need an Application Domain that boots
Oracle Solaris 10. If it runs Oracle Solaris 11, you need an Application Domain that boots Oracle Solaris 11. If it
runs Oracle Solaris 10, an Application Domain that boots Oracle Solaris 10 is preferred; however, an Oracle
Solaris 11 domain remains a valid option.
If you plan to install zones on the Sun ZFS Storage 7320 appliance, you can skip the rest of this section.
If you plan to install zones on local HDDs, the domain must have spare HDDs to create an extra ZFS pool
dedicated to zones. This ZFS pool must be mirrored to provide redundancy. When a SPARC T-Series server
hosts more than two domains, some of them do not have enough spare HDDs to create this extra ZFS pool. In
this case, the /u01 file system available in the domain can be used to install zones.
Network Setup

Application Domains running Oracle Solaris 11 offer the highest flexibility for zones' network configuration. Virtual
network interfaces and InfiniBand (IB) interfaces can be created at will and dedicated to zones. Each zone can
be connected to multiple VLANs, enabling a seamless integration with the data center network infrastructure.
Each zone can also be connected to multiple IB partitions. In each zone, IP Network Multipathing (IPMP) can be
configured on each VLAN and IB partition to provide network redundancy.
Oracle SuperCluster T5-8 comes with three networks:

The 10-GbE client access network connects SuperCluster to the data center. The servers to be
consolidated are located on this network.

The management network (10 GbE on Oracle SuperCluster T5-8) is dedicated to administration tasks.

The InfiniBand network interconnects the different SuperCluster domains.


This section focuses on connecting the zones to the client access network. If need be, and using the features
described for the client access network, zones can also be connected to the InfiniBand network. Typically, a zone
that hosts an application that uses an Oracle Database 11g Release 2 instance is connected to InfiniBand.
By default, zones are connected to the management network. VLAN tagging can be used to improve
network-based segregation between zones.

Network Redundancy with IP Network Multipathing


Each Application Domain is provisioned with a minimum of two network interfaces connected to the client access
network that can be used for zones.
If network redundancy is required, IPMP can be used. With shared-IP zones, IPMP is set up in the domain's
global zone and the different zones can benefit from it. With exclusive-IP zones, IPMP is set up in each zone as it
would be in the global zone (as described for Oracle Solaris releases 10 and 11).
Note: Shared-IP zones share the network interface and the IP stack. The separation between zones is
implemented in the IP stack. This type is useful when there is a shortage of network interfaces available for
zones. With shared-IP zones, the network configuration is achieved in the global zone. Exclusive-IP zones have
an exclusive access to the network interface and to a dedicated IP stack. This type is preferred when strong
network segregation between zones is required. Some applications might also require the use of an exclusive-IP
stack.
Configuring IPMP in an exclusive-IP zone hosted in an Application Domain running Oracle Solaris 10 requires
two network interfaces (NICs) that are not already in use in the global zone or in another zone, but the number of
NICs per domain can be limited to two. In this case, and if many zones must run in the same domain, VLAN
tagging can be used to create more NIC instances in the global zone. These additional instances can be used in
exclusive-IP zones. For more details on how to use VLAN tagging with exclusive-IP zones on Oracle Solaris 10,
check the following blog: "Solaris 10 Zones and Networking - Common Considerations".
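On Oracle Solaris 10, a VLAN-tagged instance is named by the formula VLAN ID x 1000 + device instance, so
VLAN 123 on ixgbe1 is ixgbe123001. As a sketch, assuming VLAN ID 123 and a hypothetical zone named
appzone, the tagged instance can be assigned to the zone as follows:
# zonecfg -z appzone
zonecfg:appzone> set ip-type=exclusive
zonecfg:appzone> add net
zonecfg:appzone> set physical=ixgbe123001
zonecfg:appzone> end
zonecfg:appzone> exit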
Configuring IPMP in an exclusive-IP zone hosted in an Application Domain running Oracle Solaris 11 puts less
constraint on network interfaces. If the NICs are already in use or must be shared, it is possible to create Oracle
Solaris virtual network interfaces (VNICs) that can be used by exclusive-IP zones. VNICs are created on top of
NICs, from the Application Domain, without stopping it. Access to the control domain is not required.
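As a minimal sketch, assuming two hypothetical VNIC names and a placeholder address, create one VNIC per
physical NIC in the global zone:
# dladm create-vnic -l ixgbe0 vnic0
# dladm create-vnic -l ixgbe1 vnic1
Then, inside the exclusive-IP zone, group the two VNICs with IPMP:
# ipadm create-ip vnic0
# ipadm create-ip vnic1
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i vnic0 -i vnic1 ipmp0
# ipadm create-addr -T static -a 10.129.184.70/24 ipmp0/v4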
NICs can be listed using dladm show-link from an Application Domain running Oracle Solaris 10:
# dladm show-link
vnet0     type: non-vlan  mtu: 1500   device: vnet0
ixgbe0    type: non-vlan  mtu: 1500   device: ixgbe0
ixgbe1    type: non-vlan  mtu: 1500   device: ixgbe1
ibd2      type: non-vlan  mtu: 65520  device: ibd2
ibd0      type: non-vlan  mtu: 65520  device: ibd0
ibd1      type: non-vlan  mtu: 65520  device: ibd1

Here, one vnet (vnet0), two NICs (ixgbe0 and ixgbe1), and three InfiniBand interfaces are available in the
domain. A NIC that is not used by the global zone does not appear in the ifconfig -a output, which is shown
in Listing 1:
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
ixgbe0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 10.129.184.66 netmask ffffff00 broadcast 10.129.184.255
        ether 0:1b:21:c8:5e:b0
ixgbe1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
        inet 0.0.0.0 netmask 0
        ether 0:1b:21:c8:5e:b1
vnet0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
        inet 10.129.183.119 netmask ffffff00 broadcast 10.129.183.255
        ether 0:14:4f:f8:4e:3
Listing 1

VLAN Setup
VLAN setup in an Application Domain running Oracle Solaris 11 on the 10-GbE client access network is
straightforward for both Oracle Solaris release 10 and 11 zones. Use an exclusive-IP zone, and create virtual
network interfaces (VNICs) in the global zone on top of the required 10-GbE NIC using the dladm create-vnic
command, with the -v option to set the VLAN ID. From there, all the packets leaving the
zone through the VNIC are tagged with the VLAN ID. Also, as soon as a VNIC is created, a virtual switch is
instantiated in the global zone, which ensures packet filtering: only the packets tagged with the VLAN ID are
forwarded to the zone through the VNIC. The virtual switch also guarantees that two zones running in the same
Application Domain but connected to different VLANs cannot communicate with each other.
Similarly, for Application Domains running Oracle Solaris 10, VLAN-tagged interfaces are created in the global
zone and assigned to an exclusive-IP zone. For Oracle Solaris 10, more details can be found on this blog.
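As an illustration of the Oracle Solaris 11 case, assuming VLAN ID 123 on ixgbe0 and hypothetical VNIC and
zone names:
# dladm create-vnic -l ixgbe0 -v 123 vnic123
# zonecfg -z appzone
zonecfg:appzone> add net
zonecfg:appzone> set physical=vnic123
zonecfg:appzone> end
zonecfg:appzone> exit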
Oracle Licensing
Oracle recognizes capped Oracle Solaris Zones as licensable entities, known as hard partitions. This means that
zones created as part of the P2V migration process can be capped to optimize the cost of your Oracle software
license.
To create a zone that fits the licensing requirements set by Oracle, you first need to create a resource pool with
the desired number of cores and bind the zone to this resource pool.
First create and edit a licensePool.cmd file as follows:
create pset license-pset ( uint pset.min = 16; uint pset.max = 16 )
create pool license-pool
associate pool license-pool ( pset license-pset )

The first line defines a processor set with sixteen virtual CPUs. On Oracle SuperCluster T5-8, since each core
has eight virtual CPUs, this is equivalent to two cores. As a result, the actual number of cores considered for the
licensing is two. It is worth noting that the number of virtual CPUs in the processor set should always be a
multiple of eight.
When you are done editing the file, create the license-pool pool:
# pooladm -s
# poolcfg -f licensePool.cmd
# pooladm -c

The new resource pool configuration can be checked using pooladm, as shown in Listing 2:
# pooladm
...
pool license-pool
        int     pool.sys_id 2
        boolean pool.active true
        boolean pool.default false
        int     pool.importance 1
        string  pool.comment
        pset    license-pset

pset license-pset
        int     pset.sys_id 1
        boolean pset.default false
        uint    pset.min 16
        uint    pset.max 16
        string  pset.units population
        uint    pset.load 31
        uint    pset.size 16
        string  pset.comment
...
Listing 2
Modify the zone configuration using the zonecfg command and add the following attribute to the
zone: pool=license-pool. The zone is now bound to the license-pool cores.
Finally, reboot the zone. You can now connect to it and check the number of CPUs that are visible using
the psrinfo command.
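A sketch of this sequence, assuming a zone named s10P2V; psrinfo prints one line per visible virtual CPU, so
the count should match the processor set size:
# zonecfg -z s10P2V "set pool=license-pool"
# zoneadm -z s10P2V reboot
# zlogin s10P2V psrinfo | wc -l
16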
Note that many zones can use the license-pool cores without increasing the license cost, which remains
based on two cores. All the zones associated with license-pool share the two cores.
P2V Migration Step by Step
This section goes through the main steps of a P2V migration to SuperCluster. The example given is the
consolidation of an Oracle Solaris 10 server into an Oracle Solaris 10 zone in an Application Domain running
Oracle Solaris 10.
Note: General information about P2V migration, outside of the SuperCluster context, can be found in
the Oracle Solaris Administration guide.

Performing a Sanity Check on the Source Server


zonep2vchk is a tool that helps in assessing the source server before performing a P2V migration. It provides a
good overview of the modifications between the source server and the target zone as a result of the P2V
migration process. It is useful for identifying potential problems and performing remediation ahead of the
migration.
zonep2vchk is shipped with Oracle Solaris 11 and, being a script, it can be directly copied and executed on an
Oracle Solaris 10 system.
Note: zonep2vchk does not run on Oracle Solaris release 8 or 9.
zonep2vchk should first be executed with the -T option, which specifies the Oracle Solaris release of the
domain that will host the zone (Oracle Solaris 10, in our example), as shown in Listing 3. In that mode, the tool
lists services running on the server that are not available in a zone, identifies existing zones and additional zpools
that won't be migrated, identifies any NFS-shared file system that can't be shared in an Oracle Solaris 10 zone,
and performs many other checks:
# ./zonep2vchk -T S10
- Source System: t5240-250-01
    Solaris Version: Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
    Solaris Kernel:  5.10 Generic_147440-01
    Platform:        sun4v SUNW,T5240
- Target System:
    Solaris_Version: Solaris 10
    Zone Brand:      native (default)
    IP type:         shared
--Executing basic checks
- The system is sharing file systems with NFS. This is not possible in
the destination zone. The shares should be evaluated to determine if
they are necessary. If so, this system may not be suitable for
consolidation into a zone, or an alternate approach for supporting the
shares will need to be used, such as sharing via the global zone or
another host. Use "zonep2vchk -P" to get a list of the shared
filesystems.
- The following SMF services will not work in a zone:
svc:/network/iscsi/initiator:default
svc:/network/nfs/server:default
svc:/system/iscsitgt:default
- The following zones will be unusable. Each zone should be migrated
  separately to the target host using detach and attach. See zoneadm(1M),
  solaris(5) and solaris10(5):
        Zone                  State
        s10zone               running
- The following SMF services require ip-type "exclusive" to work in
  a zone. If they are needed to support communication after migrating
  to a shared-IP zone, configure them in the destination system's global
  zone instead:
        svc:/network/ipsec/ipsecalgs:default
        svc:/network/ipsec/policy:default
        svc:/network/routing-setup:default
- The system is configured with the following non-root ZFS pools.
Pools cannot be configured inside a zone, but a zone can be configured
to use a pool that was set up in the global zone:
cpool
- When migrating to an exclusive-IP zone, the target system must have an
available physical interface for each of the following source system
interfaces:
nxge0
- When migrating to an exclusive-IP zone, interface name changes may
impact the following configuration files:
/etc/hostname.nxge0
Basic checks complete, 11 issue(s) detected
Listing 3
Executed with the -r option, zonep2vchk performs runtime checks and detects programs that use privileges not
available in zones. In the example shown in Listing 4, the runtime check is performed for only five minutes. A
longer period is recommended when the server is hosting a real application:
# ./zonep2vchk -r 5m -T S10
- Source System: t5240-250-01
    Solaris Version: Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
    Solaris Kernel:  5.10 Generic_147440-01
    Platform:        sun4v SUNW,T5240
- Target System:
    Solaris_Version: Solaris 10
    Zone Brand:      native (default)
    IP type:         shared
--Executing run-time checks for 5m
- The following programs were found using privileges that cannot be added
  to a zone. The use of these privileges may be related to the program
  command line options or configuration.
        Program                      Disallowed Privilege
        /usr/lib/fm/fmd/fmd          sys_config

Run-time checks complete, 1 issue(s) detected


Listing 4
Executed with the -s option, zonep2vchk performs a static binary analysis of the file system or directory
specified on the command line. It detects binaries that are statically linked, because they cannot execute in
zones. In the example shown in Listing 5, the check is performed on the root directory; however, when the source
server is hosting an application, it makes more sense to specify the application's home directory:
# ./zonep2vchk -s /
- Source System: t5240-250-01
    Solaris Version: Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
    Solaris Kernel:  5.10 Generic_147440-01
    Platform:        sun4v SUNW,T5240
- Target System:
    Solaris_Version: Solaris 10
    Zone Brand:      native (default)
    IP type:         shared

--Executing static binary checks
Static binary checks complete, 0 issue(s) detected
Listing 5
Finally, executed with -c, zonep2vchk generates a template file based on the source server configuration,
which can be used when configuring the zone on SuperCluster:
# ./zonep2vchk -c
create -b
set zonepath=/zones/t5240-250-01
add attr
set name="zonep2vchk-info"
set type=string
set value="p2v of host t5240-250-01"
end
set ip-type=shared
# Uncomment the following to retain original host hostid:
# set hostid=853c66cc
# Max lwps based on max_uproc/v_proc
set max-lwps=40000
add attr
set name=num-cpus
set type=string
set value="original system had 128 cpus"
end
# Only one of dedicated or capped cpu can be used.
# Uncomment the following to use cpu caps:
#   add capped-cpu
#       set ncpus=128.0
#   end
# Uncomment the following to use dedicated cpu:
#   add dedicated-cpu
#       set ncpus=128
#   end
# Uncomment the following to use memory caps.
# Values based on physical memory plus swap devices:
#   add capped-memory
#       set physical=65312M
#       set swap=69426M
#   end
# Original nxge0 interface configuration:
#   Statically defined 10.140.250.120 (t5240-250-01)
#   Factory assigned MAC address 0:21:28:3c:66:cc
add net
set address=t5240-250-01
set physical=change-me
end
exit
Listing 6

Creating a FLAR Image of the Source System


This step consists of storing the data of the source server in a single Flash Archive (FLAR) file. This is
accomplished using a single command:
# /usr/sbin/flarcreate -S -n s10P2V -L cpio /var/tmp/s10P2V.flar

The flarcreate command creates the /var/tmp/s10P2V.flar image file. The -L cpio option enforces the use
of the CPIO format for the archive. This format should be used when the source system has a ZFS root file
system. The -S option skips a disk-space checking step for a faster process, and -n specifies the internal name
of the FLAR image file. You can use the -x option to exclude a file, a directory, or a ZFS pool from the FLAR
image file.
Note: flarcreate is not available on Oracle Solaris 11. A ZFS stream from the global zone's rpool is used
instead of a FLAR image file. More information can be found here.
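A rough sketch of that Oracle Solaris 11 flow, in which the snapshot name and archive destination are
assumptions, is a recursive snapshot of the root pool sent to an archive file:
# zfs snapshot -r rpool@p2v
# zfs send -R rpool@p2v | gzip > /net/archive/s11-system.zfs.gz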
If the source server has multiple boot environments (BEs), only the active one is included in the FLAR image file.
With an Oracle Solaris 10 native zone, the different BEs can be listed but only one can be used: the active one.
With a solaris10 branded zone (running in an Oracle Solaris 11 global zone), the Live Upgrade
commands are not available and the different BEs cannot be listed. As much as possible, BEs should be deleted
from the source server before the P2V migration.
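On an Oracle Solaris 10 source, the Live Upgrade commands can be used for this; here s10u9 is a hypothetical
inactive BE name:
# lustatus
# ludelete s10u9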
The FLAR image file is now ready to be copied to SuperCluster.

Creating a ZFS Pool for the Zone


Before performing this step, you must decide whether you want to install the zone on local HDDs or on the Sun
ZFS Storage 7320 appliance.
ZFS Pool on Local HDDs
If you are installing the zone on local HDDs, check whether the SuperCluster configuration includes more than
two domains per SPARC T-Series server. If there are more than two domains, the /u01 file system remains an
option for installing the zone on local HDDs.
The following example is based on a configuration with one Database Domain and one Application Domain per
SPARC T-Series server.
First, use format to identify the disks in the domain, as shown in Listing 7:
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t5000C500429EFABFd0 <SUN600G cyl 64986 alt 2 hd 27 sec 668> solaris
          /scsi_vhci/disk@g5000c500429efabf
       1. c1t5000C500429F6D87d0 <SUN600G cyl 64986 alt 2 hd 27 sec 668> solaris
          /scsi_vhci/disk@g5000c500429f6d87
       2. c1t50015179596885CBd0 <ATA-INTELSSDSA2BZ30-0362 cyl 35769 alt 2 hd 128 sec 128> solaris
          /scsi_vhci/disk@g50015179596885cb
       3. c1t50015179596667E2d0 <ATA-INTEL SSDSA2BZ30-0362-279.46GB>
          /scsi_vhci/disk@g50015179596667e2
Listing 7
In this case, we have two HDDs and two SSDs. Each HDD is divided into two slices: s0 and s1. The s0 slices are
already in use by the rpool, which can be checked with zpool status, as shown in Listing 8:
# zpool status BIrpool-1
  pool: BIrpool-1
 state: ONLINE
  scan: resilvered 8.93G in 0h2m with 0 errors on Thu May 17 17:02:12 2012
config:

        NAME                         STATE   READ WRITE CKSUM
        BIrpool-1                    ONLINE     0     0     0
          mirror-0                   ONLINE     0     0     0
            c1t5000C500429F6D87d0s0  ONLINE     0     0     0
            c1t5000C500429EFABFd0s0  ONLINE     0     0     0
Listing 8
The two s1 slices can be used to create an s10p2vpool pool dedicated to the zone:
# zpool create s10p2vpool mirror c1t5000C500429F6D87d0s1 c1t5000C500429EFABFd0s1

Note: If SuperCluster is configured with more than two domains per server node, the zpool create command
can abort with a message saying that it cannot open one of the slices because the device is busy. It is likely that
the slice is shared by the domain virtual disk server. You can check this by connecting to the control domain and
running ldm list-domain -l on the Application Domain. The busy device should appear in the VDS section
of the output. The solution is to use other slices to create the ZFS pool. This example uses the SSDs.
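For example, from the control domain, where ssccn1-app1 is a placeholder for the Application Domain's name,
look for the slice under the VDS section of the output:
# ldm list-domain -l ssccn1-app1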

ZFS Pool on the Sun ZFS Storage 7320 Appliance


To create the pool on the Sun ZFS Storage 7320 appliance, perform the following steps:

1. Using the Sun ZFS Storage 7320 appliance user interface, create an iSCSI LUN and get the associated
   target number. To get the target, use the Configuration->SAN tab and select ISCSI Targets.

2. Mount the iSCSI LUN in the domain hosting the zone (that is, in the global zone):

   a. First add a static iSCSI configuration:

      # iscsiadm add static-config iqn.1986-03.com.sun:02:847ad5ff-eff5-4bd7-8310-999756b3d568,192.168.30.5:3260

      Where:
      iqn.1986-03.com.sun:... is the target number associated with the LUN collected in Step 1.
      192.168.30.5 is the IP address of the Sun ZFS Storage 7320 appliance on the InfiniBand network.
      3260 is the standard iSCSI port.

   b. Then enable static discovery:

      # iscsiadm modify discovery -s enable

3. At this point, the LUN is mounted. Find its associated device name using iscsiadm list target:

   # iscsiadm list target -S iqn.1986-03.com.sun:02:78af61c2-953
   Target: iqn.1986-03.com.sun:02:847ad5ff-eff5-4bd7-8310-999756b3d568
        Alias: -
        TPGT: 1
        ISID: 4000002a0000
        Connections: 1
        LUN: 0
             Vendor:  SUN
             Product: COMSTAR
             OS Device Name: /dev/rdsk/c0t600144F000212834B1BE50A60A010001d0s2

4. Using the device name, create the ZFS pool:

   # zpool create s10p2vpool c0t600144F000212834B1BE50A60A010001d0

Creating and Booting the Zone


Once the pool is created, create a ZFS file system on it for the zonepath. Before actually installing the zone, turn
on ZFS compression. For optimal performance, also consider adjusting the ZFS recordsize (see the
"Performance Tuning" section):
# zfs create s10p2vpool/s10P2V
# zfs set compression=on s10p2vpool/s10P2V
# chmod 700 /s10p2vpool/s10P2V/

Configure the zone using zonecfg with the configuration file created with zonep2vchk -c. In the configuration
file, zonepath is set to /s10p2vpool/s10P2V, ip-type is set to exclusive, and the net resource's physical
property is set to ixgbe1:
# zonecfg -z s10P2V -f s10P2V.cfg

When the zone is configured, install it using the zoneadm command and the FLAR image file. If you want to use
the same OS configuration as the source server, including the same IP address, use the -p option. Be aware
that this can create an address conflict: the source server should be shut down or its IP address should be
modified before booting the zone. If you want to use a different configuration, use the -u option instead, which
unconfigures the zone upon install:
# zoneadm -z s10P2V install -a /s10P2V.flar -u
cannot create ZFS dataset nfspool/s10P2V: dataset already exists
Log File: /var/tmp/s10P2V.install_log.AzaGfB
Installing: This may take several minutes...
Postprocessing: This may take a while...
Postprocess: Updating the zone software to match the global zone...
Postprocess: Zone software update complete
Postprocess: Updating the image to run within a zone
Result: Installation completed successfully.
Log File: /nfspool/s10P2V/root/var/log/s10P2V.install13814.log

At this point, the zone is ready to be booted but its Oracle Solaris instance is not configured. If the zone is booted
as such, connect to its console using zlogin -C in order to provide the configuration parameters interactively.
Alternatively, you can copy a sysidcfg file to the zonepath to avoid interactive configuration:
# cp sysidcfg /nfspool/s10P2V/root/etc/sysidcfg
# zoneadm -z s10P2V boot
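The sysidcfg file copied above could look like this minimal sketch; all values, including the encrypted root
password hash, are placeholders to adapt:
system_locale=C
terminal=vt100
network_interface=PRIMARY {hostname=s10P2V}
security_policy=NONE
name_service=NONE
nfs4_domain=dynamic
timezone=US/Pacific
root_password=ABCDEFGHIJKLM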

Then you can connect to the zone:


# zlogin s10P2V

Performance Tuning
This section focuses on tuning the I/O performance of a zone resulting from a P2V migration. From a CPU and
network point of view, a zone behaves like any other Oracle Solaris image, so there is no zone-specific tuning to
be performed. However, on the I/O side, because a zone sits on a file system, performance can benefit from file
system tuning.
In the case of a P2V migration to SuperCluster, the most important parameters for I/O performance are the
ZFS recordsize and compression for the zonepath and, if the zone is located on the Sun ZFS Storage 7320
appliance, the NFS rsize and wsize.
If the source server hosts a database or an application that performs fixed-size access to files, the
ZFS recordsize should be tuned to match this size. For example, if it is hosting a database with a 4k record
size, set the ZFS recordsize to 4k.
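For example, for the zonepath dataset created earlier (keep in mind that a recordsize change only affects files
written after the change):
# zfs set recordsize=4k s10p2vpool/s10P2V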
Regardless of whether the zonepath is located on a local HDD or on the Sun ZFS Storage 7320 appliance, ZFS
compression improves performance for synchronous I/O operations. It also improves asynchronous I/O
operations when the zonepath is on the Sun ZFS Storage 7320 appliance. For these types of workloads, the
recommendation is to set the zonepath's compression to on. The improvement is not that important for
asynchronous I/O operations with the zonepath on local HDDs.
Example: Consolidating an Oracle Solaris 8 Server Running Oracle Database 10g
This section describes a P2V migration from an Oracle Solaris 8 server running Oracle Database 10.2.0.5 to an
Oracle Solaris 8 zone hosted in an Application Domain running Oracle Solaris 10. The database data is located
on attached storage connected through Fibre Channel on which an Oracle Automatic Storage Management file
system has been created.
On the source system, as user oracle and before creating the FLAR image file, stop the database, the Oracle
Automatic Storage Management instance, and the listener, as shown in Listing 9:
$ sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:19:48 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> shutdown immediate
$ export ORACLE_SID=+ASM
$ sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:21:38 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> shutdown
ASM diskgroups dismounted
ASM instance shutdown
$ lsnrctl stop
LSNRCTL for Solaris: Version 10.2.0.5.0 - Production on 26-AUG-2012 13:23:49
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully
Listing 9
Once the database and Oracle Automatic Storage Management are stopped, create the FLAR image file
as root and copy it to the Application Domain. In the following example, -S specifies that disk-space checking is
skipped and that the archive size is not written to the archive, which significantly reduces the archive creation
time. The -n option specifies the image name, and -L specifies the archive format.
# flarcreate -S -n s8-system -L cpio /var/tmp/s8-system.flar

At this point, move the SAN storage and connect it to the SPARC T-Series server. Then, from the control domain,
make the LUN (/dev/dsk/c5t40d0s6) available in Application Domain s10u10-EIS2-1:
# ldm add-vdsdev /dev/dsk/c5t40d0s6 oradata@primary-vds0
# ldm add-vdisk oradata oradata@primary-vds0 s10u10-EIS2-1

Note: LUNs often appear with different names on different servers.


In the Application Domain, the LUN is now visible:
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
...
1. c0d2 <SUN-DiskSlice-408GB cyl 52483 alt 2 hd 64 sec 255>
/virtual-devices@100/channel-devices@200/disk@2

Now you can create the zone.


Note: Prior to creating the zone, check that the Oracle Solaris Legacy Containers software is installed in the
domain.
Configure the solaris8 branded zone (s8-10gr2) using zonecfg. Listing 10 shows the output of
zonecfg -z s8-10gr2 info after the configuration is complete:
zonename: s8-10gr2
zonepath: /cpool/s8-10gr2
brand: solaris8
autoboot: true
bootargs:
pool:
limitpriv: default,proc_priocntl,proc_lock_memory
scheduling-class: FSS
ip-type: exclusive
hostid:
net:
        address not specified
        physical: ixgbe1
        defrouter not specified
device
        match: /dev/rdsk/c0d2s0
attr:
        name: machine
        type: string
        value: sun4u
Listing 10
Still in the Application Domain, install the zone and boot it using zoneadm. With -p, the configuration of the
Oracle Solaris 8 image is preserved, and -a specifies the archive location:
# zoneadm -z s8-10gr2 install -p -a /var/temp/s8-system.flar
...
# zoneadm -z s8-10gr2 boot

Now that the zone is booted, it is possible to connect to it using zlogin s8-10gr2. As root, change the
ownership of the raw device and, as oracle, start Oracle Automatic Storage Management and the database, as
shown in Listing 11:
# chown oracle:dba /dev/rdsk/c0d2s0
# su - oracle
$ lsnrctl start
...
$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:36:44 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup
ASM instance started

Total System Global Area  130023424 bytes
Fixed Size                  2050360 bytes
Variable Size             102807240 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
$ export ORACLE_SID=ORA10
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:37:13 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size                  2052448 bytes
Variable Size             385879712 bytes
Database Buffers         1207959552 bytes
Redo Buffers               14721024 bytes
Database mounted.
Database opened.
Listing 11
Conclusion

Oracle Solaris Zones are fully supported and integrated with Oracle SuperCluster T5-8. In addition, the P2V
migration tools provided with Oracle Solaris greatly simplify the consolidation of physical servers to virtual
machines on Oracle SuperCluster T5-8.
As an engineered system, SuperCluster offers a lot of flexibility in terms of configuration: Oracle Solaris Zones
are the perfect receptacle for P2V migration. They provide strong segregation, including network segregation,
and can be used to optimize the licensing cost of the platform. Meanwhile, with Oracle Solaris Zones, the
virtualization overhead is minimized.
Native Oracle Solaris Zones are patched and updated by the quarterly full stack download patch (QFSDP) for
SuperCluster.
Oracle's Sun ZFS Storage 7320 appliance, which is included in SuperCluster, provides a large amount of
redundant storage for Oracle Solaris Zones. Once installed on this shared storage, zones can be swiftly migrated
between the different domains and computing nodes of SuperCluster. Oracle Solaris Zones can be connected to
the 10-GbE client access network and to the InfiniBand I/O fabric. Network redundancy is available through IP
Network Multipathing, and VLANs are available for a seamless integration in the existing data center network.
The highly scalable computing resources of Oracle SuperCluster T5-8 ensure that many Oracle Solaris Zones
can run on this platform, while the powerful database can sustain the load of many applications running
concurrently.
All these integrated features make Oracle SuperCluster T5-8 the platform of choice for server consolidation.

See Also
System Administration Guide: Oracle Solaris 9 Containers
System Administration Guide: Oracle Solaris 8 Containers
Oracle Solaris Support Life Cycle Policy on the Oracle Solaris Releases web page
"Solaris 10 Zones and NetworkingCommon Considerations"
Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource
Management
About the Author
Thierry Manfé has been working at Oracle and Sun Microsystems for more than 15 years. He currently holds the
position of principal engineer in the Oracle SuperCluster Engineering group.
