
SUN Virtualization Technologies

Training Document

Document Version: 1.0


Document Date : 30-06-2009
Prepared by: D Clement Titus
Purpose: SUN Virtualization Training
Implemented by: Accel Frontline Ltd

Table of Contents
Preface......................................................................................................................................................5
ZFS.......................................................................................................................................................5
Procedure for creating ZFS RAID Z..........................................................................................5
Solaris10 Zones........................................................................................................................................6
Zones Features.....................................................................................................................................6
Zones Advantages................................................................................................................................7
Zones Concepts....................................................................................................................................7
Sparse Root Zone............................................................................................................................7
Whole Root Zone............................................................................................................................8
Zone States...........................................................................................................................................8
Zone Commands..................................................................................................................................8
Zones Configuration............................................................................................................................9
zonecfg Global Properties...............................................................................................................9
zonecfg Resources...........................................................................................................................9
Inherited Package Directories.........................................................................................................9
Zone Administration......................................................................................................................10
Configure Zone (Sparse)..........................................................................................................10
Install Zone...............................................................................................................................10
List and Verify..........................................................................................................................11
Sparse Root Configuration.......................................................................................................11
FullRoot Configuration............................................................................................................12
Zone Access...................................................................................................................................12
Zone Console.................................................................................................................................13
Boot a Zone ( Initial )....................................................................................................................13
Solaris10 Containers...............................................................................................................................14
1, Setting the CPU constraints in the zone.........................................................................................14
a, Enable the pools facility permanently in Solaris 10.............................................................14
b, Commit the settings to a default configuration file.............................................................14
c, Create a new pools configuration file, which defines the CPU allocated for each pool......14
d, Use the configuration file to setup pools..............................................................................15
e, Activate the pools and save it...............................................................................................15
f, Check the status of the new pools configuration..................................................................15
g, Add the pool into the zone configuration.............................................................................15
h, Check the CPU's available inside the zone...........................................................................16
2, Setting the Memory Constraints....................................................................................................16
a, Enable the rcap (daemon) services permanently..................................................................16
b, Setting Physical Memory constraints into the existing zone ( rss and swap )......................16
c, Set additional Shared Memory tunables per zone. ( For Database requirements like Oracle
& Sybase )................................................................................................................................17
Etude - Solaris8 Zones............................................................................................................................18
Branded Zones...................................................................................................................................18
Project Etude .....................................................................................................................................18

Archiver :.......................................................................................................................................18
Updater:.........................................................................................................................................18
Solaris 8 Container:.......................................................................................................................18
Procedure for creating Solaris 8 Zone......................................................................................19
Solaris10 LDOMS..................................................................................................................................21
1, LDOMS DOMAIN Concepts........................................................................................................21
a, -Control Domain -.................................................................................................................21
B, -Guest Domain -...................................................................................................................21
2, Setting Up LDOMS.......................................................................................................................21
a, Create Default Services........................................................................................................22
b, Set Up the Control Domain..................................................................................................22
c, Configure the Virtual Switch................................................................................................23
d, Create and Start a Guest Domain.........................................................................................23
3, LDOMS Installation.......................................................................................................................24
A, SUPPORTING HARDWARE.............................................................................................24
B, OS REQUIREMENT............................................................................................24
C, FIRMWARE Upgrade..........................................................................................................24
D, INSTALLATION OF LOGICAL DOMAIN MANAGER 1.0............................................25
E, VERIFY INSTALLATION OF LOGICAL DOMAIN MANAGER 1.0...........................25
SUN xVM ..............................................................................................................................................25
What is SUN xVM.............................................................................................................................26
a, Sun xVM Server...................................................................................................................26
B, Sun xVM VirtualBox...........................................................................................................26
C, Sun xVM Ops Center...........................................................................................................26
D, Sun xVM VDI......................................................................................................................27
A, xVM Server...................................................................................................................................27
A, Virtual Machine Types.........................................................................................................27
B, Why Sun xVM versus VMware or Linux+Xen...................................................................27
C, Windows XP on Sun xVM...................................................................................................28
D, Fedora8 on Sun xVM..........................................................................................................28
E, How to install Sun xVM ....................................................................................................28
G, DomU Startup Config..........................................................................................................29
H, Steps to clone domU OS using zfs......................................................................................29
B, xVM VirtualBox............................................................................................................................30
VirtualBox Features ......................................................................................................................30
Installation of VirtualBox on Solaris & OpenSolaris Host................................................................31
Un-Installation of VirtualBox on Solaris & OpenSolaris Host..........................................................32
Creating VirtualBox Guests...............................................................................................................33
Reference:...............................................................................................................................................34
Sample Solaris 10 Configuration..............................................................................................34
Sample Solaris 8 Configuration ...............................................................................................35
Reference Links........................................................................................................................36

Preface
The Solaris 10 Operating Environment from SUN introduced lots of new
technologies like the ZFS File-system, Solaris10 Zones & Containers, Dtrace
System Observation & Analysis tool, Predictive Self Healing, SMF services.
Additionally the Solaris Code was made Open Source and a new
development model evolved the OpenSolaris OS, which included open source
technologies and the latest developments in Virtualization.
Some of these technologies which are related to Virtualization are
introduced below.

ZFS
ZFS serves the role of a software volume manager and a file system combined. The different
RAID levels that can be created are:
RAID 1 ( RAID Mirror )
RAID-Z ( Single-Parity RAID )
RAID-Z2 ( Double-Parity RAID )
RAID 1+0 ( Striped Mirrors )
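For quick reference, the following are hedged examples of how each layout could be created with zpool ( the pool and disk names are placeholders, not part of the procedure below ):
# zpool create mypool mirror c1t2d0 c1t3d0                        ( RAID 1 )
# zpool create mypool raidz c1t2d0 c1t3d0 c1t4d0                  ( RAID-Z )
# zpool create mypool raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0          ( RAID-Z2 )
# zpool create mypool mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0   ( RAID 1+0 )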

PROCEDURE FOR CREATING ZFS RAID Z


1. Create the RAID-Z pool which will hold the ZFS file systems
#zpool create <global-hostname> raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0
2. Add cache devices for faster read operations. ( Only applicable to the systems
which come with Flash Disks )
#zpool add <global-hostname> cache c2t0d0 c2t1d0

Note: This functionality will only be available once Solaris 10 Update 8 is
released.
3. Create the ZFS file systems as per the required Server Instances.
#zfs create <global-hostname>/dev01
#zfs create <global-hostname>/dev02
#zfs create <global-hostname>/dev03
#zfs create <global-hostname>/dev04
#zfs create <global-hostname>/dev05
4. Set the disk quotas as per the Server Instances.
# zfs set quota=145gb <global-hostname>/dev01
# zfs set quota=145gb <global-hostname>/dev02
# zfs set quota=125gb <global-hostname>/dev03
# zfs set quota=125gb <global-hostname>/dev04
# zfs set quota=125gb <global-hostname>/dev05
5. Create the ZFS file systems required within each Server Instance.
#zfs create <global-hostname>/dev01/<zone-hostname>
#zfs create <global-hostname>/dev01/fs1
#zfs create <global-hostname>/dev01/fs2
#zfs create <global-hostname>/dev01/fs3...
#zfs create <global-hostname>/dev02/<zone-hostname>
#zfs create <global-hostname>/dev02/fs1
#zfs create <global-hostname>/dev02/fs2
#zfs create <global-hostname>/dev02/fs3...
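6. To confirm the layout, the pool and datasets can be checked ( a hedged sketch; the pool name follows the <global-hostname> placeholder used above ):
#zpool status <global-hostname>
#zfs list -r <global-hostname>
#zfs get quota <global-hostname>/dev01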

Solaris10 Zones
ZONES FEATURES

Namespace isolation
Virtualized OS
Sharing for better utilization
Application fault containment

ZONES ADVANTAGES
The OS is virtualized, not the machine.
Runs on any hardware Solaris supports.
Private name, IP address and port range
Private process lists and authentication (file, NIS, LDAP,...)
Can boot, reboot a zone, run rc.N scripts, SMF services
Can create a new zone in a few minutes
Private file systems, with two flavors whole root & sparse root
Separate security, resource management, and failure scopes
Global zone administrator can give devices, UFS mount points, loopback
filesystems, etc, to zone for its disk assets

ZONES CONCEPTS
SPARSE ROOT ZONE

WHOLE ROOT ZONE

ZONE STATES

Configured: Configuration completely specified and committed to stable storage


Installed: Packages have been installed under the zone's root file system
Ready: Virtual platform has been established
Running: User processes are executing in the zone application environment

ZONE COMMANDS
Zone Configuration - zonecfg
Creates and removes zone configurations
Zone Access - zlogin
Interactive, non-interactive and console access
Zone Administration - zoneadm
Install, Boot, Restart, Stop, List, Verify, Uninstall

ZONES CONFIGURATION
ZONECFG GLOBAL PROPERTIES

zonepath: path in global zone to root directory under which zone will be installed
autoboot: to boot or not to boot when global zone boots
pool: which resource pool zone should be bound to
ZONECFG RESOURCES

fs: file system
inherit-pkg-dir: directory which should have its associated packages inherited from the
global zone
net: network interface
device: device
rctl: resource control
attr: generic attribute

INHERITED PACKAGE DIRECTORIES

Four default inherit-pkg-dir resources are provided:

/lib
/platform
/sbin
/usr

Implemented via a read-only loopback file system mount, which provides security as well as
storage and virtual memory efficiencies.
/opt is a good addition to this list, unless it will be configured differently than in the
global zone.

ZONE ADMINISTRATION
zoneadm(1M) is used by the global zone administrator to:
install a new root file system for a configured zone
list zones and optionally their state
verify whether the configuration of an installed zone is semantically complete and
ready to be booted
boot or ready an installed zone
halt or reboot a running zone
uninstall the root file system of an installed zone
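For reference, the halt, uninstall and delete operations look like this ( a hedged sketch using the workzone1 example configured below; -F forces the action without confirmation ):
global# zoneadm -z workzone1 halt
global# zoneadm -z workzone1 uninstall -F
global# zonecfg -z workzone1 delete -F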
CONFIGURE ZONE (SPARSE)
global# zonecfg -z workzone1
workzone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:workzone1> create
zonecfg:workzone1> set zonepath=/export/home/zones/zone1
zonecfg:workzone1> set autoboot=false
zonecfg:workzone1> add net
zonecfg:workzone1:net> set physical=e1000g0
zonecfg:workzone1:net> set address=192.168.100.11/24
zonecfg:workzone1:net> end
zonecfg:workzone1> verify
zonecfg:workzone1> commit
zonecfg:workzone1> ^D
global# zonecfg -z workzone1 info zonepath
zonepath: /export/home/zones/zone1

INSTALL ZONE
global# zoneadm list -cv
ID NAME STATUS PATH
0 global running /
- workzone1 configured /export/home/zones/zone1
global# zoneadm -z workzone1 install
Preparing to install zone <workzone1>.

Creating list of files to copy from the global zone.


Copying <2144> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <804> packages on the zone.
Initialized <804> packages on zone.
Zone <workzone1> is initialized.
LIST AND VERIFY

Listing zones
global% zoneadm list -cv
ID NAME STATUS PATH
0 global running /
- workzone1 installed /export/home/zones/zone1

Verifying a zone
global# zoneadm -z workzone1 verify

SPARSE ROOT CONFIGURATION


# zonecfg -z small-zone
small-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:small-zone> create
zonecfg:small-zone> set autoboot=true
zonecfg:small-zone> set zonepath=/export/small-zone
zonecfg:small-zone> add net
zonecfg:small-zone:net> set address=192.168.2.101
zonecfg:small-zone:net> set physical=hme0
zonecfg:small-zone:net> end
zonecfg:small-zone> info
zonepath: /export/small-zone
autoboot: true
pool:
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin

inherit-pkg-dir:
dir: /usr
net:
address: 192.168.2.101
physical: hme0
zonecfg:small-zone> verify
zonecfg:small-zone> commit
zonecfg:small-zone> exit
FULLROOT CONFIGURATION
# zonecfg -z big-zone
big-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:big-zone> create -b
zonecfg:big-zone> set autoboot=true
zonecfg:big-zone> set zonepath=/export/big-zone
zonecfg:big-zone> add net
zonecfg:big-zone:net> set address=192.168.2.201
zonecfg:big-zone:net> set physical=hme0
zonecfg:big-zone:net> end
zonecfg:big-zone> info
zonepath: /export/big-zone
autoboot: true
pool:
net:
address: 192.168.2.201
physical: hme0
zonecfg:big-zone> verify
zonecfg:big-zone> commit
zonecfg:big-zone> exit

ZONE ACCESS
zlogin is used to enter the zone

Interactive mode
A lot like rlogin
global# zlogin workzone1

Non-interactive mode
A lot like rsh
global# zlogin -l jpb workzone1 ps -ef
Exit status is preserved, so it is useful for shell scripting ( a sketch follows below )

Safe mode
Very minimal mode useful for repairing badly mis-configured zones
global# zlogin -S workzone1
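
Because the exit status of the remote command is preserved, non-interactive zlogin fits naturally into scripts. A minimal sketch ( the process name is only an example ):
global# zlogin workzone1 pgrep -x sshd > /dev/null && echo "sshd is running in workzone1"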

ZONE CONSOLE
Zone pseudo-console available for each zone
Mimics a hardware console
Accessible via zlogin -C
Available prior to zone boot
global# zlogin -C workzone1
[Connected to zone 'workzone1' console]
twilight#
~.
[Connection to zone 'workzone1' console closed]
Publishes zone state change messages
[Notice: zone halted]

BOOT A ZONE ( INITIAL )


Open another terminal window
global$ su
global# zlogin -C workzone1
[Connected to zone 'workzone1' console]
In original window
global# zoneadm -z workzone1 boot
In the workzone1 window, select the terminal type (xterms, option 12),
no name service, the timezone, and the root password.

Solaris10 Containers

1, SETTING THE CPU CONSTRAINTS IN THE ZONE


A, ENABLE THE POOLS FACILITY PERMANENTLY IN SOLARIS 10.


#svcadm enable system/pools:default
Check the activation:
#poolcfg -c info
Note the following....
pool pool_default ( default pool created)
pset pset_default ( default pset created)

B, COMMIT THE SETTINGS TO A DEFAULT CONFIGURATION FILE.

#pooladm -s
#pooladm -c /etc/pooladm.conf
Check the changes:
#pooladm
C, CREATE A NEW POOLS CONFIGURATION FILE, WHICH DEFINES THE CPUS ALLOCATED FOR EACH POOL.
(Note: for transaction-intensive applications use the TS CPU scheduler; for other, less
intensive applications use the FSS scheduler.)
#vi /pool.host
create system host
create pset <zone-hostname>_pset (uint pset.min = 2; uint pset.max = 4)
create pset <zone-hostname>_pset (uint pset.min = 2; uint pset.max = 4)
create pset <zone-hostname>_pset (uint pset.min = 2; uint pset.max = 4)
create pset <zone-hostname>_pset (uint pset.min = 2; uint pset.max = 4)
create pset <zone-hostname>_pset (uint pset.min = 2; uint pset.max = 4)

create pool <zone-hostname>_pool (string pool.scheduler="TS")


create pool <zone-hostname>_pool (string pool.scheduler="TS")
create pool <zone-hostname>_pool (string pool.scheduler="TS")
create pool <zone-hostname>_pool (string pool.scheduler="TS")
create pool <zone-hostname>_pool (string pool.scheduler="TS")
associate pool <zone-hostname>_pool (pset <zone-hostname>_pset)
associate pool <zone-hostname>_pool (pset <zone-hostname>_pset)
associate pool <zone-hostname>_pool (pset <zone-hostname>_pset)
associate pool <zone-hostname>_pool (pset <zone-hostname>_pset)
associate pool <zone-hostname>_pool (pset <zone-hostname>_pset)
D, USE THE CONFIGURATION FILE TO SET UP POOLS


#poolcfg -f /pool.host

E, ACTIVATE THE POOLS AND SAVE IT


#pooladm -c /etc/pooladm.conf

F, CHECK THE STATUS OF THE NEW POOLS CONFIGURATION


# pooladm | more

G, ADD THE POOL INTO THE ZONE CONFIGURATION.


#zonecfg -z <zone-hostname>
zonecfg:<zone-hostname>> set pool=<zone-hostname>_pool
zonecfg:<zone-hostname>> verify
zonecfg:<zone-hostname>> commit
zonecfg:<zone-hostname>> exit
#zoneadm -z <zone-hostname> reboot

H, CHECK THE CPUS AVAILABLE INSIDE THE ZONE.


#zlogin <zone-hostname>
sol8>psrinfo
0 on-line since 05/25/2009 01:25:41
1 on-line since 05/25/2009 01:25:45
( shows 2 CPUs, as per the minimum CPUs set )
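
From the global zone, pool and processor-set utilization can also be observed with poolstat ( a hedged sketch; the 5-second interval is arbitrary ):
#poolstat 5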

2, SETTING THE MEMORY CONSTRAINTS

A, ENABLE THE RCAP (DAEMON) SERVICES PERMANENTLY.


# rcapadm -E
(Check service availability)
#svcs -a | grep rcap
svc:/system/rcap:default
#ps -ef | grep rcapd
( Show rcap parameters)
#rcapadm
memory cap enforcement threshold: 0%
process scan rate (sec): 15
reconfiguration rate (sec): 60
report rate (sec): 5
RSS sampling rate (sec): 5

B, SET PHYSICAL MEMORY CONSTRAINTS ON THE EXISTING ZONE ( RSS AND SWAP )
RSS=6GB, SWAP=512MB
(Note: The physical property of the capped-memory resource is used by rcapd as the max-rss
value for the zone, i.e. paging will take place once the 6GB limit is reached.)

# zonecfg -z solaris_srv1
add capped-memory
set physical=6G
end
add rctl
set name=zone.max-swap
add value (priv=privileged,limit=536870912,action=deny)
end
add rctl
set name=zone.max-locked-memory
add value (priv=privileged,limit=268435456,action=deny)
end
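Once the zone is running with the cap in place, enforcement can be watched from the global zone with rcapstat ( a hedged sketch; the interval and count are arbitrary ):
# rcapstat -z 5 5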
C, SET ADDITIONAL SHARED MEMORY TUNABLES PER ZONE. ( FOR DATABASE REQUIREMENTS
LIKE ORACLE & SYBASE )
# zonecfg -z <zone-hostname>
add rctl
set name=zone.max-sem-ids
add value (priv=privileged,limit=256,action=deny)
end
add rctl
set name=zone.max-shm-ids
add value (priv=privileged,limit=100,action=deny)
end
add rctl
set name=zone.max-shm-memory
add value (priv=privileged,limit=4294967296,action=deny)
end
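These limits can be verified against the running zone from the global zone ( a hedged sketch ):
# prctl -n zone.max-shm-memory -i zone <zone-hostname>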

Etude - Solaris8 Zones


BRANDED ZONES
Branded Zones are zones that can run special brands of operating systems inside specially
modified zones. Branded Zones were introduced in Solaris 10 Update 4 ( Solaris 10 08/07 ).
The brands can include other operating systems specially ported ( modified ) to run in these
zones, such as some versions of Linux on x86 servers. The brands can also include older versions of
the Solaris Operating System ported to run on SPARC servers. Since the introduction of Branded Zones,
the Global Zone OS is known as the native brand.

PROJECT ETUDE
The Solaris 8 Migration Assistant ( S8MA ) is a migration tool known as Project Etude.
Solaris 8 and Solaris 9, which have reached End Of Life, have to be migrated to Solaris 10. Instead of a
direct migration, which might require more skill, SUN provides an indirect migration path using Project
Etude. Project Etude migrates Solaris 8 & Solaris 9 onto special Branded Zones, for SUN SPARC
servers only.
There are 3 software components associated with this tool:

ARCHIVER:
The Archiver tool is a P2V (physical to virtual) tool which archives the Solaris 8 image
and moves it to the target Solaris 10 system.

UPDATER:
The Updater tool "massages" the Solaris 8 images to run in the Solaris 8 Container.

SOLARIS 8 CONTAINER:
The environment which runs the Solaris 8 environment as if it were still on the original
system.

The S8MA works best for migrating user-land applications. Kernel level applications cannot be run
on the Solaris8 Containers.
There are three basic steps in S8MA
1. On the Solaris 8 host, create a system archive using flarcreate, or another preferred method,
such as cpio, ufsdump, etc.
2. Copy the archive to a Solaris 10 host.
3. On the Solaris 10 host, configure and install a Solaris 8 Container using the archive.
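For step 1, a hedged example of creating the flash archive on the Solaris 8 host ( the archive name and path are placeholders ):
s8-host# flarcreate -S -n s8-system /export/s8-system.flar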

PROCEDURE FOR CREATING SOLARIS 8 ZONE


Note: <zone-hostname> uses the actual assigned Zone's hostname
<global-hostname> uses the actual global Zone's hostname
1. Install the Solaris8 Container software.
#gunzip s8containers-bundle.tar.gz
#tar -xvf s8containers-bundle.tar
#cd s8containers-bundle/1.0/Product
#pkgadd -d .
2. Create the Solaris8 Brand Zone
#zonecfg -z <zone-hostname>
<zone-hostname>: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:server1> create -t SUNWsolaris8
zonecfg:server1> set zonepath=/<global-hostname>/dev01/<zone-hostname>
zonecfg:server1> set autoboot=true
zonecfg:server1> info
zonename: server1
zonepath: /<global-hostname>/dev01/<zone-hostname>
brand: solaris8
autoboot: true
bootargs:
pool:

limitpriv:
scheduling-class:
ip-type: shared
zonecfg:server1> verify
zonecfg:server1> exit
3. Check the configured Zone
# zoneadm list -cv
  ID NAME              STATUS       PATH                                        BRAND     IP
   0 global            running      /                                           native    shared
   - <zone-hostname>   configured   /<global-hostname>/dev01/<zone-hostname>    solaris8  shared

4. Install the Solaris 8 Zone


#zoneadm -z <zone-hostname> install -a /temp/solaris8.flar
5. Boot and check the created Zone
#zoneadm -z <zone-hostname> boot
# zoneadm list -cv
  ID NAME              STATUS       PATH                                        BRAND     IP
   0 global            running      /                                           native    shared
   1 <zone-hostname>   running      /<global-hostname>/dev01/<zone-hostname>    solaris8  shared

6. Add ZFS file systems


#zonecfg -z <zone-hostname>
zonecfg:s8-zone> add fs
zonecfg:s8-zone:fs> set type=zfs
zonecfg:s8-zone:fs> set special=<global-hostname>/dev01/fs1
zonecfg:s8-zone:fs> set dir=/usr/local
zonecfg:s8-zone:fs> end
7. Add Network Interfaces
#zonecfg -z <zone-hostname>
zonecfg:s8-zone> set ip-type=exclusive
zonecfg:s8-zone> add net
zonecfg:s8-zone:net> set physical=nxge1
zonecfg:s8-zone:net> end

8. Set the 'HostID' to be the hostid of the source system


#zonecfg -z <zone-hostname>
zonecfg:s8-zone> add attr
zonecfg:s8-zone:attr> set name=hostid
zonecfg:s8-zone:attr> set type=string
zonecfg:s8-zone:attr> set value=8325f14d
zonecfg:s8-zone:attr> end

Solaris10 LDOMS

1, LDOMS DOMAIN CONCEPTS

A, CONTROL DOMAIN
The Logical Domains Manager is used to create and manage logical domains, and it maps logical
domains to physical resources. The Control Domain is the domain in which the Logical Domains
Manager runs, allowing you to create and manage other logical domains and allocate virtual resources
to them. There can be only one Control Domain per server. The initial domain created when installing
the Logical Domains software is a control domain and is named primary.

B, GUEST DOMAIN
A Guest Domain is a domain that is managed by the Control Domain.

2, SETTING UP LDOMS

A, CREATE DEFAULT SERVICES

1. Connect to the console for the operating system, if you are not already connected. #console -f
2. Verify the ldmd service is enabled. #svcs -a | grep ldmd
3. If needed, enable the ldmd service. #svcadm -v enable ldmd
4. Use the ldm command to list the current configuration. #ldm list
5. Create a virtual disk server (vds) to allow importing virtual disks into a Logical Domain. #ldm
add-vds primary-vds0 primary
6. Create a virtual console concentrator service (vcc) for use by the virtual network terminal server
daemon (vntsd) and as a concentrator for all Logical Domain consoles. #ldm add-vcc port-range=5001-5100 primary-vcc0 primary
7. Create a virtual switch service (vsw) to enable networking between virtual network (vnet) devices in
Logical Domains. Assign a network adapter to the virtual switch if each of the Logical Domains
needs to communicate outside the box through the virtual switch. #ldm add-vsw net-dev=e1000g1
primary-vsw0 primary
8. Verify the services have been created by using the list-services subcommand. #ldm list-services
primary
B, SET UP THE CONTROL DOMAIN

1. Assign cryptographic resources to the Control Domain. #ldm set-mau 1 primary


2. Assign four virtual CPUs to the Control Domain. #ldm set-vcpu 4 primary
3. Assign 4 Gbytes of memory to the Control Domain. #ldm set-memory 4G primary
4. Add a Logical Domain machine configuration to the service processor, called initial. #ldm add-config initial
5. Verify that the configuration is ready to be used at the next reboot. #ldm list-config
6. You must reboot the Control/Service Domain for the preceding changes to take effect and the
server resources to be released for other Logical Domains to use. #shutdown -y -g 0 -i 6

C, CONFIGURE THE VIRTUAL SWITCH

1. Print out the addressing information for all interfaces. #ifconfig -a


2. Plumb the virtual switch vsw0. #ifconfig vsw0 plumb
3. Unplumb the physical network device e1000g1. #ifconfig e1000g1 down unplumb
4. Assign the properties of the physical network device e1000g1 to the virtual switch (vsw0) device.
#ifconfig vsw0 IP_of_e1000g1 netmask netmask_of_e1000g1 broadcast + up
5. Make the required configuration file modifications to make this change permanent. #mv
/etc/hostname.e1000g1 /etc/hostname.vsw0
6. Enable the virtual network terminal server daemon vntsd #svcadm enable vntsd
D, CREATE AND START A GUEST DOMAIN

1. Create a Logical Guest Domain called ldg1 #ldm add-domain ldg1


2. Add four CPUs to the Guest Domain ldg1 #ldm add-vcpu 4 ldg1
3. Add 512 Mbytes of memory to the Guest Domain ldg1 #ldm add-memory 512m ldg1
4. Add a virtual network device called vnet1 to the Guest Domain ldg1 #ldm add-vnet vnet1
primary-vsw0 ldg1
5. Specify the device /dev/dsk/c1t1d0s2 to be exported by the virtual disk server as a virtual disk
named vol1

#ldm add-vdsdev /dev/dsk/c1t1d0s2 vol1@primary-vds0


6. Add a virtual disk named vdisk1 to the Guest Domain ldg1 #ldm add-vdisk vdisk1
vol1@primary-vds0 ldg1
7. Set the auto-boot variable for the Guest Domain ldg1 to false #ldm set-var auto-boot\?=false
ldg1

8. Set the boot-device variable for the Guest Domain ldg1 to vdisk1 #ldm set-var boot-device=vdisk1
ldg1
9. Bind resources to the Guest Domain ldg1 #ldm bind-domain ldg1
10. List the domain to verify that it is bound. #ldm list-domain ldg1
11. Start the Guest Domain ldg1 #ldm start-domain ldg1
12. Connect to the console of Guest Domain ldg1 #telnet localhost 5001
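13. Optionally, inspect the detailed resource bindings of the Guest Domain ldg1 ( a hedged addition, not part of the original procedure ). #ldm list-bindings ldg1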

3, LDOMS INSTALLATION
A, SUPPORTING HARDWARE

Sun Fire T1000 Server, Sun Fire T2000 Server
SPARC Enterprise T1000 Server, T2000 Server
SPARC Enterprise T5120 Server, T5140 Server
SPARC Enterprise T5220 Server, T5240 Server
SPARC Enterprise T5440 Server
Netra T2000 Server
Netra CP3060 Blade
Sun Blade T6300 Server Module
Sun Blade T6320 Server Module
Sun Blade T6340 Server Module

B, OS REQUIREMENT
Solaris 10 release 11/06
Patches 124921-02
125043-01
C, FIRMWARE UPGRADE

Download the firmware upgrade version (Sun_System_firmware_6_5.5) from the SUN website


#cd <firmware_location>
#./sysfwdownload Sun_System_firmware_6_5.5_Sun_Fire_T2000.bin

#shutdown -i5 -g0 -y


sc>poweroff -fy
sc>flashupdate -s 127.0.0.1
sc>resetsc -y
sc>showhost (for checking firmware version)
sc>poweron

D, INSTALLATION OF LOGICAL DOMAIN MANAGER 1.0

Download the tar file (LDoms_Manager_1.0.tar.gz) from the Sun website:


http://www.sun.com/download
#gunzip -c LDoms_Manager_1.0.tar.gz | tar xvf -
Note: it will extract the following files
1- SUNWldm
2- install-ldm
#cd LDoms_Manager_1.0
#Install/install-ldm

E, VERIFY INSTALLATION OF LOGICAL DOMAIN MANAGER 1.0

#cd /opt/SUNWldm/bin
#/opt/SUNWldm/bin/ldm list (To Show Domain List)
Service stop/start (Ldom Daemon)
#svcadm enable ldmd

SETTING PATH AND MANPATH

#PATH=$PATH:/opt/SUNWldm/bin; export PATH


#MANPATH=$MANPATH:/opt/SUNWldm/man; export MANPATH

SUN xVM

WHAT IS SUN XVM


Sun xVM is a family of technologies that addresses both desktop and server virtualization. It
leverages work from open source communities like Xen and is being built on proven Sun technology.
The first products in the Sun xVM family will include:

A, SUN XVM SERVER

A cross-platform, high efficiency, open source hypervisor capable of hosting multiple operating
systems.

Host Windows, Linux and Solaris guest operating systems

Built using technology from the Xen open source project as well as Sun's Logical Domains

High availability and scalability

Advanced CPU and memory handling capabilities

Access features previously only available on the Solaris 10 OS, such as Predictive Self-Healing (FMA) and Solaris ZFS.

B, SUN XVM VIRTUALBOX
A general-purpose full virtualizer for x86 hardware, targeted at server, desktop, laptop and embedded
use (covered in detail in the xVM VirtualBox section below).

C, SUN XVM OPS CENTER
A complete, highly scalable datacenter automation tool that will simplify discovery, provisioning,
updates and management of physical and virtualized assets in cross-platform Linux and Solaris OS-based x86 and SPARC environments.

Better manage datacenter consolidation

Keep guest operating systems up-to-date and monitor for virtual assets on a network

Automate provisioning and updating of both Linux and Solaris OS instances to increase
availability and utilization and minimize downtime

More effectively deploy, manage and monitor security and compliance in IT operations, either
locally or remotely

D, SUN XVM VDI

A, XVM SERVER
A, VIRTUAL MACHINE TYPES
There are two types of virtual machine:
1. HVM - Hardware Virtual Machine. Intel and AMD have independently developed extensions
to the x86 architecture that provide hardware support for virtualization. These extensions
enable a VMM to provide full virtualization to a VM, and support the running of unmodified
guest operating systems on a VM. You can run a Windows OS using HVM, but not all processors
support HVM; check the Intel & AMD websites to see whether your processor supports VT.
2. PVM - Para-Virtual Machine. Requires a modified operating system that supports the Xen
architecture. A paravirtualized OS reduces overhead and runs at almost the native speed
of the hardware.

B, WHY SUN XVM VERSUS VMWARE OR LINUX+XEN.

Solaris is a highly multi-threaded and scalable OS, supporting up to 100 CPUs. In a
consolidation & virtualization environment, the scalability of dom0 is critical to managing
many domUs running on the same machine.

Using ZFS, Solaris is able to provide excellent I/O virtualization to guest OSes.
Using ZFS zvol raw devices, a domU gets almost native performance, coupled with
unique features such as unlimited snapshots, clones, compression, encryption (in the
future), etc.

A DTrace provider for xVM makes it possible to observe domUs better than
on other host OSes.

FMA is integrated into xVM to provide better availability, and a Sun Cluster
agent is provided for xVM.

C, WINDOWS XP ON SUN XVM


How do you install a new domU?
There are two ways to install a domU: using the virt-install command, or using xm create with a config file.
To install a Fedora8 PVM using virt-install:
virt-install -n FEDORA8_PV -p -r 1024 --nographics -x "console=hvc0" -f /disk2/fedora8/fedora8.img -l
CIFS server demo
If you are running a Microsoft Windows domU, it is useful to demo the CIFS server alongside the
Windows domU.
Steps to set up the CIFS server:
* Disable idmap daemon from performing auto-discovery:
# svccfg -s idmap
# setprop config/domain_name=astring:""
# setprop config/forest_name=astring:""
# setprop config/site_name=astring:""
# setprop config/domain_controller=astring:""
# setprop config/global_catalog=astring:""
* Start CIFS services: # svcadm enable -r smb/server
* Join the workgroup # smbadm join -w workgroup-name
* Install the PAM module, add the following line to the end of the /etc/pam.conf file
other password required pam_smb_passwd.so.1 nowarn
* Create local user passwords: # passwd username
Now the CIFS services should be running, and you are ready to demo CIFS.
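To have something to show over CIFS, a dataset can be shared via SMB ( a hedged sketch; the dataset name and share name are only examples, not part of the original demo ):
# zfs create -o casesensitivity=mixed rpool/export/demo
# zfs set sharesmb=name=demo rpool/export/demo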

D, FEDORA8 ON SUN XVM


To install a Fedora8 PVM using virt-install:
virt-install -n FEDORA8_PV -p -r 1024 --nographics -x "console=hvc0" -f /disk2/fedora8/fedora8.img -l
E, HOW TO INSTALL SUN XVM

Download the OpenSolaris 2009.06 CD.

Install OpenSolaris onto your x86 server.

Boot the Solaris dom0 using the xVM boot option.

Please note that to run an unmodified OS such as Windows you need a processor that supports
AMD Secure Virtual Machine (SVM), code name Pacifica, or Intel VT-x. This is known as
HVM, or hardware-assisted virtual machines.
F, HARDWARE REQUIREMENT
CPU: preferably 2 dual-core CPUs. Minimum dom0 memory is 1GB; minimum domU memory is
512MB (depending on the guest OS).
You need a VT-capable processor to run an unmodified OS on HVM; refer to the processor manufacturer.
A typical config: a laptop with an Intel Centrino Core 2 Duo T7100 1.8GHz, 2GB RAM, 160GB
HDD.
All file systems (including root) on ZFS, for easy administration.

G, DOMU STARTUP CONFIG


You can start a domU using the command: # xm create -c config_file_name
You need to customize the config file (memory size, network config, disk config); a sample sketch follows below.
* Solaris Express B80 domainU config file
* Fedora8 domainU config file
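
For illustration, a minimal sketch of what a PVM domU config file might look like ( all values, the image path and the pygrub path are assumptions, not taken from this document; xm config files use Python syntax ):
name = "fedora8"                                    # domU name shown by xm list
memory = 1024                                       # MB of RAM for the domU
vcpus = 1                                           # number of virtual CPUs
disk = ['file:/disk2/fedora8/fedora8.img,xvda,w']   # file-backed virtual disk
vif = ['']                                          # one default virtual NIC
bootloader = '/usr/lib/xen/bin/pygrub'              # boots the PV guest's own kernel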

H, STEPS TO CLONE DOMU OS USING ZFS.

Make sure you have shut down the domU OS.

Snapshot the ZFS filesystem where the domU file image or zvol resides: # zfs snapshot disk2/winxp_pro@now

Clone the snapshot: # zfs clone disk2/winxp_pro@now disk2/winxp_pro_clone1

Enable compression to save disk space: # zfs set compression=on disk2/winxp_pro_clone1

Edit the disk parameter in the config.py file

Start up the cloned domU

You can clone a domU in 1 minute!

B, XVM VIRTUALBOX
VirtualBox is a general-purpose full virtualizer for x86 hardware, targeted at server, desktop, laptop
and embedded use.
It is now the only professional-quality virtualization solution that is also Open Source
Software.

VIRTUALBOX FEATURES
Modularity. VirtualBox has an extremely modular design with well-defined internal
programming interfaces and a client/server design. This makes it easy to control it from
several interfaces at once: for example, you can start a virtual machine in a typical virtual
machine GUI and then control that machine from the command line, or possibly remotely.
VirtualBox also comes with a full Software Development Kit: even though it is Open Source
Software, you don't have to hack the source to write a new interface for VirtualBox.
Virtual machine descriptions in XML. The configuration settings of virtual machines are
stored entirely in XML and are independent of the local machines. Virtual machine definitions
can therefore easily be ported to other computers.
Guest Additions for Windows and Linux. VirtualBox has special software that can be
installed inside Windows and Linux virtual machines to improve performance and make
integration much more seamless. Among the features provided by these Guest Additions are
mouse pointer integration and arbitrary screen resolutions (e.g. by resizing the guest window).
Shared folders. Like many other virtualization solutions, for easy data exchange between
hosts and guests, VirtualBox allows for declaring certain host directories as "shared folders",
which can then be accessed from within virtual machines.
A number of extra features are available with the full VirtualBox release
Virtual USB Controllers. VirtualBox implements a virtual USB controller and allows you to
connect arbitrary USB devices to your virtual machines without having to install device
specific drivers on the host.
Remote Desktop Protocol. Unlike any other virtualization software, VirtualBox fully
supports the standard Remote Desktop Protocol (RDP). A virtual machine can act as an RDP
server, allowing you to "run" the virtual machine remotely on some thin client that merely
displays the RDP data.
USB over RDP. With this unique feature, a virtual machine that acts as an RDP server can still
access arbitrary USB devices that are connected on the RDP client. This way, a powerful
server machine can virtualize a lot of thin clients that merely need to display RDP data and
have USB devices plugged in.

INSTALLATION OF VIRTUALBOX ON SOLARIS & OPENSOLARIS HOST


After downloading and extracting the contents of the tar.gz file perform the following steps:

1. Login as root using the "su" command.


2. Install the packages (in this order):

First, the VirtualBox kernel interface package:


pkgadd -G -d VirtualBoxKern-3.0.0-SunOS-r49315.pkg

Next, the main VirtualBox package:


pkgadd -d VirtualBox-3.0.0-SunOS-r49315.pkg

3. For each package the installer will ask you to "Select package(s) you wish to process".
For this, type "1" or "all".
4. Then type "y" when asked about continuing with the installation.
Now all the necessary files will be installed on your system. Start VirtualBox by typing
VirtualBox in a terminal or from the Desktop icon.

UN-INSTALLATION OF VIRTUALBOX ON SOLARIS & OPENSOLARIS HOST

To remove VirtualBox from your system perform the following steps:


1. Login as root using the "su" command.
2. Run the command:
pkgrm SUNWvbox
To remove the VirtualBox kernel interface module run the command:
pkgrm SUNWvboxkern

CREATING VIRTUALBOX GUESTS
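
Guests can be created from the VirtualBox GUI, or from the command line with VBoxManage. A minimal hedged sketch for a recent VirtualBox release ( the VM name, memory size, disk size, paths and ISO are assumptions for illustration; option names may differ between VirtualBox versions ):

# VBoxManage createvm --name "WinXP" --register
# VBoxManage modifyvm "WinXP" --memory 512 --boot1 dvd
# VBoxManage createhd --filename /zpool/vbox/WinXP.vdi --size 10000
# VBoxManage storagectl "WinXP" --name "IDE" --add ide
# VBoxManage storageattach "WinXP" --storagectl "IDE" --port 0 --device 0 --type hdd --medium /zpool/vbox/WinXP.vdi
# VBoxManage storageattach "WinXP" --storagectl "IDE" --port 1 --device 0 --type dvddrive --medium /iso/winxp.iso
# VBoxManage startvm "WinXP"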

Reference:
SAMPLE SOLARIS 10 CONFIGURATION
# zonecfg -z s10-zone
set zonepath=/zpool/zones/sol10zone
set autoboot=false
set pool=sol10_pool
set scheduling-class=FSS
set ip-type=shared
add fs
set dir=/export/shared
set special=zpool/vol1/s10-zone
set type=zfs
end
add fs
set dir=/mnt
set special=/cdrom
set type=lofs
add options ro
add options nodevices
end
add net
set address=10.1.1.12
set physical=bge0
end
add rctl
set name=zone.max-swap
add value (priv=privileged,limit=536870912,action=deny)
end
add rctl
set name=zone.max-locked-memory
add value (priv=privileged,limit=268435456,action=deny)
end
add rctl
set name=zone.cpu-shares
add value (priv=privileged,limit=4,action=none)
end
add rctl
set name=zone.max-sem-ids
add value (priv=privileged,limit=256,action=deny)
end

add rctl
set name=zone.max-shm-ids
add value (priv=privileged,limit=100,action=deny)
end
add rctl
set name=zone.max-shm-memory
add value (priv=privileged,limit=4294967296,action=deny)
end
add capped-memory
set physical=6G
end
SAMPLE SOLARIS 8 CONFIGURATION

# zonecfg -z s8-zone
set zonepath=/mypool/zones/sol8zone
set brand=solaris8
set autoboot=false
set pool=<zone-hostname>_pool
set scheduling-class=FSS
set ip-type=shared
add fs
set dir=/export/shared
set special=mypool/vol1/s8-zone
set type=zfs
end
add fs
set dir=/mnt
set special=/cdrom
set type=lofs
add options ro
add options nodevices
end
add net
set address=10.1.1.12
set physical=bge0
end
add rctl
set name=zone.max-swap
add value (priv=privileged,limit=536870912,action=deny)
end

add rctl
set name=zone.max-locked-memory
add value (priv=privileged,limit=268435456,action=deny)
end
add rctl
set name=zone.cpu-shares
add value (priv=privileged,limit=4,action=none)
end
add rctl
set name=zone.max-sem-ids
add value (priv=privileged,limit=256,action=deny)
end
add rctl
set name=zone.max-shm-ids
add value (priv=privileged,limit=100,action=deny)
end
add rctl
set name=zone.max-shm-memory
add value (priv=privileged,limit=4294967296,action=deny)
end
add attr
set name=hostid
set type=string
set value=8325f14d
end
add capped-memory
set physical=6G
end
REFERENCE LINKS
More information on Sun xVM
* Sun xVM BluePrint
* Sun xVM Presentation
* Open xVM Website
* OpenSolaris xVM Website
* xVM ClusterAgent
* Onestop xVM Ops Center

* http://opensolaris.org/os/community/xen/docs/specs/
* http://opensolaris.org/os/community/xen/docs/virtinstall/
* http://opensolaris.org/os/community/xen/docs/windowsguest/
* http://opensolaris.org/os/community/xen/docs/changing-boot-flags/
