
Upgrade to POWER9 Planning Session

Bob Foster
Senior Technical Staff Member
and
Turgut Genc
Senior IT Consultant
IBM – Power Systems Lab Services

2018 IBM Systems Technical University
October 2018, Rome
Session Objectives
• Outline the process for migrating your current environment to a POWER9 environment
▪ OS and Application Compatibility
▪ POWER9 performance and sizing your workloads
▪ Using new features on POWER9
▪ Building new servers
▪ Migrating workloads to new servers



OS and Application Compatibility
• SMT8 should be the default for most workloads
• Check IBM FLRT for recommendations for VIOS/AIX/System i/Linux/HMC
• Contact your software vendors for compatibility with POWER9 and SMT8
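A quick way to confirm or change the SMT level on an AIX LPAR is the smtctl command; a minimal sketch (output format varies by AIX level, and a boot-time change needs the boot image rebuilt):

# Show the current SMT mode and the status of each logical CPU
smtctl

# Switch the partition to SMT8 immediately
smtctl -t 8 -w now

# Make SMT8 the setting for future boots, then rebuild the boot image
smtctl -t 8 -w boot
bosboot -a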



POWER9 SMT8 exploitation
Starting with AIX 7.2 TL3, the default SMT level is SMT8
§ AIX 7.2 TL3 on POWER9 LPARs will boot in SMT8 regardless of the processor compatibility mode set
§ AIX 7.2 TL3 on POWER8 LPARs will boot in SMT4
§ LPM will migrate the partition with whatever the active SMT mode is
POWER9's performance benefits in SMT8 mode are noted across a wide variety of workloads
§ Hardware performance is extremely robust in SMT8 mode
The last major AIX transition in default SMT levels was POWER6 -> POWER7
§ During that transition, 4 PowerVM and 12 AIX performance improvements occurred, based on customer experiences
§ Unlike the POWER7 transition, which was abrupt, POWER9 is positioned to take advantage of 4 years of SMT8 learning on POWER8
§ Known customer transitions to SMT8 on POWER8 have tended to go quite smoothly
§ Most major applications and middleware have adapted to increasing cores and threads as a technological norm
Performance testing across a variety of workloads and middleware on AIX has shown POWER9 SMT8 to have significant performance advantages over SMT4
§ SAP recommended SMT8 on POWER8 and POWER9
§ IBM's experience with DB2 & WebSphere
§ AIX POWER9 Performance Best Practices
§ IBM's experience with Oracle 11g and 12c
IBM FLRT Product Compatibility
https://www14.software.ibm.com/webapp/set2/flrt/home

• This website can analyze your desired HMC, VIOS, server, server firmware, and OS levels for your P9 and suggest the appropriate levels you should use for all of these components.

IBM FLRT webpage for Hiper APARs
https://www-304.ibm.com/webapp/set2/flrt/doc?page=hiper

IBM FLRT webpage for Security and HIPER script
https://www14.software.ibm.com/webapp/set2/flrt/sas?page=flrtvc

Application Compatibility
• Contact your software vendors for compatibility with POWER9 and SMT8
– Each customer has their own set of applications, and they will need to discuss compatibility with their software vendors.

• IBM does have documents on some of the products; a web search is your best friend here, for example "oracle support on power9"
• https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102750



Hardware and sizing your workloads

• General rules
• Energy Scale
• Example of rPerf sizing
• Right-sizing Experiment results and conclusions



Performance Overview

• The rPerf advantage kicks in when using SMT4 and SMT8 at similar frequencies. SMT8 gains can be as high as 40-60% when compared to similar POWER8 models.

• SMT levels play no role with IBM i, as CPW always exploits the maximum number of threads.

• Single-thread performance is driven by processor frequency. If the P8 frequency is higher than the P9 frequency, that workload will run slower, and vice versa. Frequency is the key for single thread.

EnergyScale Overview

POWER9 models introduced new features for EnergyScale™, including new variable processor frequency modes that provide a significant performance boost beyond the static nominal frequency. There are three modes.

For example, Maximum Performance mode allows the system to reach the maximum frequency under more conditions, thus providing maximum performance (the maximum frequency is approximately 20% better than nominal).

• All servers default to Maximum Performance mode except the S914, which defaults to Dynamic Performance mode for lower acoustics. All servers can run in all modes, and the change is dynamic.

For more details:
https://www.ibm.com/developerworks/community/wikis/home?lang=en_th#!/wiki/Power%20Systems/page/POWER9%20EnergyScale%20Introduction
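To see the effect of these modes from inside an AIX LPAR, you can check the clock speed the processors are currently running at; an illustrative sketch, assuming the bos.pmapi.tools fileset is installed:

# Report the current processor clock speed (reflects any EnergyScale frequency boost)
pmcycles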
rPerf – Sizing and migration E850 -> E950 example

P8 vs P9 system perf ratio

  Target E950:                    3.6-3.8GHz 4skt/32c    3.4-3.8GHz 4skt/40c    3.15-3.8GHz 4skt/48c
                                  SMT4       SMT8        SMT4       SMT8        SMT4       SMT8
  E950 rPerf                      690.8      870.4       820.7      1034.1      909.9      1146.4

  Source E850 (rPerf)
  3.72GHz 4skt/32c  SMT4 (522.8)  1.32x      1.66x       1.57x      -           1.74x      -
                    SMT8 (559.4)  -          1.56x       -          1.85x       -          2.05x
  3.35GHz 4skt/40c  SMT4 (597.1)  1.16x      -           1.37x      1.73x       1.52x      -
                    SMT8 (639)    -          1.36x       -          1.62x       -          1.79x
  3.02GHz 4skt/48c  SMT4 (657.6)  1.05x      -           1.25x      -           1.38x      1.74x
                    SMT8 (703.6)  -          1.24x       -          1.47x       -          1.63x

P8 vs P9 core perf ratio

  Target E950:                    3.6-3.8GHz 4skt/32c    3.4-3.8GHz 4skt/40c    3.15-3.8GHz 4skt/48c
                                  SMT4       SMT8        SMT4       SMT8        SMT4       SMT8

  Source E850
  3.72GHz 4skt/32c  SMT4          1.32x      1.66x       1.26x      -           1.16x      -
                    SMT8          -          1.56x       -          1.48x       -          1.37x
  3.35GHz 4skt/40c  SMT4          1.45x      -           1.37x      1.73x       1.27x      -
                    SMT8          -          1.7x        -          1.62x       -          1.5x
  3.02GHz 4skt/48c  SMT4          1.58x      -           1.5x       -           1.38x      1.74x
                    SMT8          -          1.86x       -          1.76x       -          1.63x

Note: rPerf ratings assume all cores busy and the system default power management mode.

Full report at https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=POO03017USEN
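Reading the tables: each cell is the target E950 rPerf (or per-core rPerf) divided by the source E850 value. For example, an E850 4skt/32c at 3.72GHz in SMT4 (rPerf 522.8) moving to the E950 4skt/32c in SMT4 (rPerf 690.8) gives 690.8 / 522.8 ≈ 1.32x, and moving to the same E950 in SMT8 (rPerf 870.4) gives 870.4 / 522.8 ≈ 1.66x.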


Right-sizing experiments and conclusions

The following migration analysis is estimated based on IBM internal measurements on an application server workload:
§ Example: S824 24c 3.52GHz -> S924 24c, AIX
§ rPerf POWER9/POWER8 ratios: SMT4 1.25x and SMT8 1.47x
§ Customer migration experience may vary by workload

SMT8

P8 SMT8 6 vcpus -> P9 SMT8 6 vcpus
  Source Utilization   Source PC   Estimated Target Utilization   Target PC   PC Improvement
  20                   2.29        12                             2.03        13%
  40                   3.35        26                             2.76        21%
  60                   4.41        39                             3.49        26%
  80                   5.46        52                             4.23        29%

P8 SMT8 6 vcpus -> P9 SMT8 5 vcpus
  Source Utilization   Source PC   Estimated Target Utilization   Target PC   PC Improvement
  20                   2.29        14                             1.87        22%
  40                   3.35        30                             2.55        31%
  60                   4.41        45                             3.30        34%
  80                   5.46        61                             4.01        36%

P8 SMT8 6 vcpus -> P9 SMT8 4 vcpus
  Source Utilization   Source PC   Estimated Target Utilization   Target PC   PC Improvement
  20                   2.29        17                             1.60        43%
  40                   3.35        36                             2.30        46%
  60                   4.41        56                             3.00        47%
  80                   5.46        75                             3.70        48%

SMT4

P8 SMT4 -> P9 SMT4, both 6 vcpus
  Source Utilization   Source PC   Estimated Target Utilization   Target PC   PC Improvement
  20                   2.06        17                             1.94        6%
  40                   3.18        32                             2.78        14%
  60                   4.32        46                             3.63        19%
  80                   5.45        61                             4.47        22%

P8 SMT4 6 vcpus -> P9 SMT4 5 vcpus
  Source Utilization   Source PC   Estimated Target Utilization   Target PC   PC Improvement
  20                   2.06        19                             1.69        22%
  40                   3.18        36                             2.52        26%
  60                   4.32        53                             3.35        29%
  80                   5.45        71                             4.18        30%

• Maximum POWER9 benefit will be obtained by reducing virtual processors/entitlement when migrating from POWER7 or POWER8, based on rPerf ratios
Exploiting new features on Power9

• Enable LPM
• Enable SRR
• Using SR-IOV
• Power Enterprise Pools
• Use AIX MPIO and NPIV



Live Partition Mobility (LPM)
• LPM allows you to move your partitions from one server to another server while the partition is running (you can also move a partition if it is shut down). The workload is unaffected during LPM.
• This technology has been around since 2008, starting with POWER6.
• All of a partition's I/O must be hosted by VIO Servers (VIOS) for the partition to be LPM-able.

• Customers use this for


▪ Migrating workloads from p7 or p8 to p9
▪ Evacuating a server so that hw/sw maintenance can be done with no impact
to partitions
▪ Workload balancing
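From the HMC command line, an LPM operation can be validated and then run with the migrlpar command; a minimal sketch (the frame and partition names are placeholders):

# Validate that the partition can move from the POWER8 frame to the POWER9 frame
migrlpar -o v -m SOURCE_P8_FRAME -t TARGET_P9_FRAME -p my_lpar

# Perform the active migration once validation is clean
migrlpar -o m -m SOURCE_P8_FRAME -t TARGET_P9_FRAME -p my_lpar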

(Diagram: movement to a different server with no loss of service, over a virtualized SAN and network infrastructure)
Simplified Remote Restart (SRR) Overview

Move partitions from one POWER8/9 server to another POWER8/9 server when the source server has crashed (an unplanned outage). You just need to check a box on the HMC GUI to enable this for a partition.

Released in December 2014.
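The same capability can also be driven from the HMC command line; a hedged sketch (frame and partition names are placeholders, and on some HMC/firmware levels the partition must be inactive to change the attribute):

# Enable simplified remote restart for a partition
chsyscfg -r lpar -m SOURCE_FRAME -i "name=my_lpar,simplified_remote_restart_capable=1"

# After the source server fails, restart the partition on another server
rrstartlpar -o restart -m SOURCE_FRAME -t TARGET_FRAME -p my_lpar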

(Image: this is your server on fire!)


Using SR-IOV

• In 2015, PowerVM introduced support for Single Root I/O Virtualization (SR-IOV) on POWER8 systems. PowerVM SR-IOV support allows multiple partitions to share an SR-IOV adapter for their Ethernet traffic.
• This technology can also be used as a virtual NIC (vNIC), backed by the VIOS, to replace the Shared Ethernet Adapter.



Power Enterprise Pools (PEP)

Power Enterprise Pools (PEP) provides the ability to move processor and memory resources from one server to another at any time, with no physical movement of hardware, using simple operator commands on the HMC.

Customers are using this for workload balancing, LPM, firmware maintenance, and
DR sites.



AIX MPIO and NPIV

AIX MPIO

• Many customers no longer use vendor multipathing software in their VIOS partitions or in their client partitions.
• The default AIX MPIO that is part of VIOS/AIX is used instead of products like PowerPath.
• Customers say upgrades of VIOS/AIX are much simpler with AIX MPIO than with a vendor product.
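To confirm that an LPAR is using the default AIX PCM and that all paths are healthy, something like the following can be used (a sketch; lsmpio needs a reasonably current AIX level, and hdisk0 is just an example device):

# List the state of every MPIO path on the system
lspath

# Show detailed path and path-selection statistics for one disk (AIX PCM only)
lsmpio -l hdisk0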

VSCSI and NPIV

• When PowerVM was first released in 2005, VSCSI was the only technology for disk virtualization. Early adopters used this.
• NPIV support was released in 2008. Even though NPIV is easier to manage than VSCSI, along with other benefits, the conversion process is non-trivial.
• There is a tool from Lab Services that automates this conversion.



TechU sessions this week for LPM, SR-IOV, PEP

• There is one (1) session on the LPM/SRR Automation tool

• There are eight (8) sessions on SR-IOV technology

• There is one (1) session on the Power Enterprise Pools



Building new servers
• VIOS design
• Building partitions
• Adapter placement



VIOS design
• Once you have decided which features you want to use, your VIOS design will be crucial to a successful migration.
• Many customers use one pair of VIO Servers to handle both production and non-production client LPARs on the low-end servers.
• Very, very few customers separate their SEA onto one pair of VIOS and their VSCSI/NPIV onto a different pair of VIOS.
• More and more customers are designing in a high-speed LPM Ethernet channel to their VIOS that is separate from the client LPAR Ethernet channel.
• Decide whether the VIOS will use Shared Ethernet Adapters (SEA) or vNIC technology.
• Use a repeatable process to install your VIOS. Manually building a VIOS is complicated, and with many advanced features in use it becomes hard to repeat without issues.



Building partitions
• Whether you are using VIOS or not, the client partitions that you want to migrate from your P7/P8 servers need to be built/replicated on the P9.
• If you use LPM to move your partitions, there is no need to build the client partitions; they are created automatically as part of the LPM process.
• Many customers aren't LPM capable, so they will have to manually move their partitions using mksysb technology or other methods. In this case, they will have to replicate their current partition's configuration on the new servers, which is a very manual process.
• Using a repeatable process to read the current partition's configuration and build it on the new server is highly recommended (see the sketch below).
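One way to make that repeatable from the HMC command line is to dump the existing profile and replay an edited copy on the new server; a hedged sketch (frame and partition names are placeholders, and the dumped attribute list normally needs editing, such as adapter slots and IDs, before it is replayed):

# Dump the partition profile from the source (P7/P8) managed system
lssyscfg -r prof -m SOURCE_FRAME --filter "lpar_names=my_lpar" > my_lpar_prof.txt

# After editing the attributes, create the partition on the new P9
mksyscfg -r lpar -m TARGET_P9_FRAME -f my_lpar_prof.txt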



Adapter Placement
Each system has different placement rules. IBM documents this very well
so please use their guides.

Here is a sample of the E980 rules


• All of the PCIe slots are Generation4 (Gen4) and low-profile (Half-height, Half-length). The
slots also support Gen1 to Gen3 PCIe adapters.
• Coherent Accelerator Processor Interface (CAPI) cards are supported in all of the PCIe slots.
• IBM recommends that PCIe adapters that do not require high bandwidth be placed in an
EMX0 PCIe3 expansion drawer. Populating a low latency, high-bandwidth slot with a low-
bandwidth adapter is not the best use of system resources. From a system resource
perspective, the best location for a low-bandwidth adapter is in a PCIe slot within the EMX0
PCIe3 expansion drawer.

Here is a sample of the E950 rules


• P1-C9 and P1-C12 are general-purpose slots that are also designated as serial-attached
SCSI (SAS) controller slots for controlling the internal disk bays.
• All of the x16 PCIe slots are coherent accelerator processor interface (CAPI) enabled.
• Four of the x16 PCIe slots support NVLink or OpenCAPI 25 Gb/s cable cards.
TechU sessions this week for building VIOS and
partitions

The second session listed below uses features of the IBM PowerVM
Advanced Provisioning toolkit which is used by over 300 customers
worldwide to build VIOS and partitions.

The first session is a follow-on tool that originated from the provisioning
toolkit



Migrating workloads to new servers

• AIX levels for POWER9
• There are various ways to migrate AIX systems, both online and offline.
• Some of these techniques include LPM, Enterprise Pools, mksysb, and MES upgrades.
• This section also covers upgrade methods for AIX, including in-place upgrades and upgrades via NIM.



AIX Level Details for Power S9XX Systems

New AIX Levels supporting any I/O configuration available at P9 hardware


GA
▪ AIX Version 7.2 TL2 SP02 (7200-02-02-1810) or later
▪ AIX Version 7.1 TL5 SP02 (7100-05-02-1810) or later
▪ AIX Version 6.1 TL 9 SP11 (6100-09-11-1810) or later (AIX 6.1 service extension required)

Planned updates to existing AIX levels to support any P9 I/O configuration


▪ AIX Version 7.2 TL0 SP06 (7200-00-06-1806) or later (planned avail. 5/4/2018)
▪ AIX Version 7.2 TL1 SP04 (7200-01-04-1806) or later (planned avail. 5/4/2018)
▪ AIX Version 7.1 TL4 SP06 (7100-04-06-1806) or later (planned avail. 5/4/2018)

Existing AIX levels supported in LPM capable partitions


▪ AIX Version 7.2 TL2 SP01 (7200-02-01-1732) or later
▪ AIX Version 7.1 TL5 SP01 (7100-05-01-1731) or later
▪ AIX Version 7.2 TL1 SP01 (7200-01-01-1642) or later
▪ AIX Version 7.2 TL0 SP01 (7200-00-01-1543) or later
▪ AIX Version 7.1 TL4 SP01 (7100-04-01-1543) or later
▪ AIX Version 6.1 TL9 SP06 (6100-09-06-1543) or later



Migration Paths

A little terminology:

Concurrent migrations occur without an outage to the LPAR:
- Live Partition Mobility (LPM)
- Live Partition Mobility with Enterprise Pools

Non-concurrent migrations occur with a scheduled outage:
- In-place MES upgrades
- mksysb restore
- LPM (inactive)
- alt_disk_copy
- alt_disk_mksysb
- "Swing" disks
LPM (Active and Inactive)
• "Active" LPM allows you to move LPARs without any downtime. The LPAR is up and running and is moved to the P9 without the application/LPAR having to be stopped. This is a concurrent migration.
• Sometimes you cannot move LPARs via "Active" LPM because of issues such as VLANs that don't match, or because you are going to a DR site and don't want the system to be up upon arrival. In these cases, you can do an LPM inactive migration. This migration is done with the LPAR shut down, so it is a non-concurrent migration.
LPM (Inactive)
• Advantages
▪ Creates the profile for you.
▪ Does the "swing" disk for you.

• Disadvantages
▪ The LPAR requires downtime when migrating to the new server.
MES Upgrades - Definition
• What is a MES Upgrade?
• Wikipedia states:
• MES is an acronym used by IBM which stands for Miscellaneous Equipment Specification: any server hardware change, which can be an addition, improvement, removal, or any combination of these. The serial number of the server does not change (in some cases).
• Specific types include the following:
▪ Customer-installable feature (CIF) MES and Install-by-IBM (IBI) MES
▪ Return-Parts MES (RPMES) is a special MES that is an IBI MES and requires the return of selected parts to IBM on completion of the MES.
▪ Since the MES process involves replacing large parts such as CPU or feature cards, or even the entire server itself, but without a system serial number change, it is sometimes referred to as a Machine Equipment Swap.
MES Upgrade – General Procedure
• This procedure is a "general procedure" for MES'd servers (e.g. P8 to P9)

• Presteps:
− Gather server properties – you will need these to set them on your P9
− Collect HMC serial numbers
− Back up the HMC configuration
− Back up the VIOS and client configurations
− Remove the CD/DVD from any managed system (important to do before viosbr)
− Run viosbr on each VIOS
− Run invscout on your new P9 server (a perfect time to update the microcode of the devices if need be)
− Collect source CEC serial numbers matched to the target CEC serial numbers (you need these for the viosbr restore)
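An illustrative sketch of the viosbr prestep (run as padmin on each VIOS; the file name is just an example, and the backup lands under /home/padmin/cfgbackups by default):

# Back up the virtual and logical device configuration of this VIOS
viosbr -backup -file p8_vios1_premes

# Review what the backup contains before the hardware change
viosbr -view -file p8_vios1_premes.tar.gz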
• CE Installs Hardware

MES Upgrade – General Procedure
• Post Steps
• Change the VIOS profiles to remove the old CEC devices (the old VIOS profiles will still be on the MES'd server)
• Add the new CEC devices to the VIOS profiles
• Boot the VIOS; you may have to boot into SMS and choose the boot device
• Remove the Defined devices left by the removal of the old CEC, and any Available devices that come after the Defined devices, so that they will be renumbered
▪ Ex: SAS drives, FC adapters, network adapters
fcs0 Defined PCIe2 2-Port 16Gb FC Adapter (df1000e21410f103)
fcs1 Defined PCIe2 2-Port 16Gb FC Adapter (df1000e21410f103)
fcs2 Defined PCIe2 2-Port 16Gb FC Adapter (df1000e21410f103)
fcs3 Defined PCIe2 2-Port 16Gb FC Adapter (df1000e21410f103)
fcs4 Available PCIe2 2-Port 16Gb FC Adapter (df1000e21410f103)
fcs5 Available PCIe2 2-Port 16Gb FC Adapter (df1000e21410f103)
fcs6 Available PCIe2 2-Port 16Gb FC Adapter (df1000e21410f103)
fcs7 Available PCIe2 2-Port 16Gb FC Adapter (df1000e21410f103)
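A hedged sketch of that cleanup on one VIOS (device names are examples; run from the root shell via oem_setup_env, then rediscover as padmin with cfgdev in the next step):

# Delete the stale Defined adapters left behind by the old CEC
rmdev -dl fcs0 -R
rmdev -dl fcs1 -R
rmdev -dl fcs2 -R
rmdev -dl fcs3 -R

# Delete the Available adapters numbered after them so they come back as fcs0-fcs3
rmdev -dl fcs4 -R
rmdev -dl fcs5 -R
rmdev -dl fcs6 -R
rmdev -dl fcs7 -R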

MES Upgrade – General Procedure
• Post Steps (cont'd)
▪ Run cfgdev
▪ Edit the viosbr XML file and change the bus IDs and the CEC serial numbers, for example:

<name>ent0</name>
<state>AVAILABLE</state>
<locCode>U78C0.001.DBJF101-P2-C2-T1</locCode>
<unique_type>adapter/pciex/e41457162004000</unique_type>

<locCode>U78C0.001.DBJF101-P2-C2-T2</locCode>
<unique_type>adapter/pciex/e41457162004000</unique_type>
<type>FCP</type>
<CuAtcount>3</CuAtcount>
<CuAt name="jumbo_frames" value="yes" action="TRUE" />
<CuAt name="busmem" value="0xffc70000" action="FALSE" />
<CuAt name="busintr" value="259585" action="FALSE" />

• A global substitution in vi makes this quick: :%s/search_string/replacement_string/g

MES Upgrade – General Procedure
• Post Steps (cont'd)
− Now that you have edited and fixed the viosbr XML file, you can restore it with viosbr -restore
− Boot the clients; you may have to boot into SMS and choose the boot device
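A minimal sketch of that restore step, assuming the edited backup file from earlier (run as padmin on the VIOS):

# Restore the virtual device mappings onto the MES'd VIOS from the edited backup
viosbr -restore -file p8_vios1_premes.tar.gz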

“Swing Disks”
• "Swing" disks means virtually moving the disks to another LPAR: the same SAN LUNs are re-zoned and re-mapped to a new LPAR on the target server.
"Swing" Disks
▪ Description:
− Pre-work before the outage:
  Verify all of the prerequisites exist (very important)
  Create the new LPAR profile, either VSCSI or NPIV
  Collect the WWPNs
  Create VSCSI or VFC mappings on the new P9 VIOS
  Zone the disks to the new frame
  (The order will change based on VSCSI or VFC)
− Outage:
  Shut down the source LPAR
  Boot the target system
− Post outage:
  Clean up the old LPAR and VIOS mappings
“Swing” Disks – Prerequisites
• Prereqs:
▪ Both CECs are connected to the same network or subnet.

▪ The slot numbers of the virtual Ethernet and virtual SCSI client adapters for the AIX client partition (on the partition profile) must match on both CECs. The virtual SCSI client and virtual SCSI server adapter mappings have to be the same on both CECs.

▪ The Virtual I/O Server versions on the two CECs have to be the same.

▪ All software on the Virtual I/O Servers must run at the same levels, for instance SDDPCM or PowerPath. All the disks that are visible on the AIX client partition have to be virtual disks (exported from VIOS using virtual SCSI).
“Swing” Disks- Prerequisites (cont’d)
▪ All the disks that are exported to the AIX client partition from VIOS have to be SAN disks.

▪ The reserve_policy attribute for all the SAN disks on VIOS should be set to no_reserve for VSCSI (see the sketch after this list). If no_reserve is not set, then the VIOS on the original CEC should be shut down before the switch-over is done.

▪ On the Virtual I/O Server, a Shared Ethernet Adapter or vNIC has to be created so that layer-2 bridging is available for the virtual Ethernet assigned to the AIX client partition. Ensure the VLAN configuration is done appropriately for the VIOS partitions on both CECs.

▪ rootvg must be on full LUNs; it cannot be on logical volumes.

▪ Both the source server and the target server must see the same SAN fabric.
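A short sketch of checking and setting that attribute on the VIOS (hdisk10 is just an example backing disk, and it must not be in active use when the attribute is changed):

# Check the current SCSI reservation policy of the backing disk
lsattr -El hdisk10 -a reserve_policy

# Set it to no_reserve so the LUN can also be opened from the other CEC's VIOS
chdev -l hdisk10 -a reserve_policy=no_reserve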
Mksysb Restore
• Mksysb restore is the "granddaddy" of restoring rootvg. This is the most trusted and most utilized process for restoring rootvg.

• There are many variations of a mksysb restore. You can:

▪ Restore the mksysb for rootvg, and do a "swing" disk for the datavg.
− This may be used if your rootvg isn't on SAN and your datavg is on SAN.

▪ Restore the mksysb for rootvg and restore the other VGs with savevg.
− This can be used if your rootvg isn't on SAN, your datavg is not on SAN, and you don't have a backup solution. (Can require a great deal of space for the savevg.)

▪ Restore the mksysb for rootvg and restore the LVM skeleton with savevg, then restore the data with a backup solution such as TSM.
− This is used if your rootvg isn't on SAN and your datavgs are not on SAN. You can create a savevg without the data, so once your rootvg is restored you can create the filesystem structure for the datavgs with the savevg. You would first restore your rootvg, then restore your savevg, then do a TSM restore.
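As an illustrative sketch of the basic building blocks on the source system (paths and volume group names are placeholders; the restore side is normally driven from NIM, tape/DVD, or the install menus):

# Create a rootvg mksysb image to a file, regenerating /image.data first (-i)
mksysb -i /backup/mylpar_rootvg.mksysb

# Save a data volume group to a file for a later restvg on the target
savevg -i -f /backup/datavg.savevg datavg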
Alt_disk_copy

• alt_disk_copy -BOd hdiskx

▪ The -B option tells alt_disk_copy not to change the bootlist to this new copy of rootvg; the -O option removes devices from your customized ODM database.
− From the alt_disk_copy man page: "-O Performs a device reset on the target altinst_rootvg. This causes the alternate disk install to not retain any user-defined device configurations. This flag is useful if the target disk or disks become the rootvg of a different system (such as in the case of logical partitioning or system disk swap)."

• Then zone this disk to the new system and boot.


▪ When the disks containing this altinst_rootvg are moved to another host and then
booted from, AIX will run cfgmgr and probe for any hardware, adding ODM
information at that time.
Alt_disk_mksysb
• Use alt_disk_mksysb to install a mksysb image on another disk. With this technique, a mksysb image is first created, either to a file, CD/DVD, or tape.

• Then that mksysb image is restored to unused disks in the current system using alt_disk_mksysb, again using the -O option to perform a device reset.

• After this, the disks can be removed and placed in a new system, or rezoned via fibre to a new system, and the rootvg booted up.
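A hedged one-liner for that restore step (image and disk names are examples):

# Restore the mksysb image onto a spare disk, with -O resetting device customization
alt_disk_mksysb -m /backup/mylpar_rootvg.mksysb -d hdisk2 -O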
Why unsupported methods don’t work
• The reason for this is that there are many objects in an AIX system that are unique to it: hardware location codes, World Wide Port Names, partition identifiers, and Vital Product Data (VPD), to name a few.
• Most of these objects or identifiers are stored in the ODM and used by AIX commands. If a disk containing the AIX rootvg in one system is copied bit-for-bit (or removed) and then inserted in another system, the firmware in the second system will describe an entirely different device tree than the AIX ODM expects to find, because it is operating on different hardware.
• Devices that were previously seen will show as missing or removed, and the system will typically fail to boot with LED 554 (unknown boot disk).
• Supported methods link:
▪ http://www-01.ibm.com/support/docview.wss?uid=isg3T1012273

•Software Upgrades – you may need to update
your OS before moving to P9

AIX Upgrade Options
• AIX Live Update
• Alt_disk_copy
• Update/Upgrade Current Version
• Mksysb Migration
• Multibos
• Nim Alt disk Migration

AIX Live Update

AIX Upgrade Options
Alt_disk_copy:
▪ Make a copy of rootvg and update or upgrade the copy, then boot to it when ready:
− alt_disk_copy
− alt_disk_install (or smitty alt_disk_install)
− Change the bootlist when ready to boot to the new level
− Reboot

▪ Benefits
− You can boot back and forth between the two rootvgs. During testing, if issues with the new OS level arise, you can quickly reboot back to the previous working version.
− Uptime while migrating; only an outage during the reboot.
AIX Upgrade Options
• Update/upgrade current version (in place)
▪ This is the original update or upgrade path: updating/upgrading the current rootvg. An outage is needed for an update or upgrade.
▪ CONS:
− In order to back out of a failed upgrade, a mksysb restore will need to take place.
− A longer outage needs to occur.
− Updates can be difficult to back out.

• Mksysb migration
▪ This type of migration is used when you are trying to go to a new type of server and aren't at the correct OS level to get there. It is not intended for an update from one technology level to another, but for a full migration, for example from AIX 6.1 to AIX 7.1.
▪ Example of use: AIX 5.3 needing to move to POWER8, where the minimum is AIX 6.1 TL8.

▪ More information on this can be found in the Redbook "NIM from A to Z in AIX 5L", Chapter 4.5, NIM mksysb migration and nim_move_up POWER5 tools.

AIX Upgrade Options
• NIM Alternate Disk Migration (nimadm)
▪ The nimadm utility offers several advantages over a conventional migration. For example, a system administrator can use nimadm to create a copy of a NIM client's rootvg (on a spare disk on the client, similar to a standard alternate disk install with alt_disk_install) and migrate that copy to a newer version or release of AIX. All of this can be done without disruption to the client (there is no outage required to perform the migration). After the migration is finished, the only downtime required is a scheduled reboot of the system (see the sketch below).

▪ Another advantage is that the actual migration process occurs on the NIM master, taking the load off the client LPAR. This reduces the processing overhead on the LPAR and minimizes the performance impact to the running applications.
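A hedged sketch of a nimadm invocation from the NIM master (all resource, client, and disk names are placeholders for objects that must already be defined in NIM):

# Migrate the client's rootvg to the AIX level in lpp_72, using spare client disk hdisk1;
# nimadm caches its work in volume group nimadmvg on the master, -Y accepts licenses
nimadm -j nimadmvg -c my_client -s spot_72 -l lpp_72 -d hdisk1 -Y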

Lab-Services Assistance Available for migrations
• Lab Services has been helping customers with migrations since 2010. Over the last few years, we have designed engagements to help customers better plan for these migrations, along with multiple tools that can be used to build the new Power servers, size the workloads, and move the workloads. These tools make the migration process easier and faster.

▪ For the overall planning, ask for the POWER9 Migration Planning Workshop
▪ For sizing of workloads, ask for the Capacity Planning Tool
▪ For building new VIOS/infrastructure, ask for the Lab Services IBM Advanced PowerVM Provisioning Toolkit
▪ For moving from VSCSI to NPIV, ask for the VSCSI to NPIV Tool
▪ For LPM'ing workloads, ask for the PowerVM LPM/SRR Automation Tool (ibm.biz/lpm_srr_tool)



BONUS SLIDE - Enable this on all servers where you have LPM set up – very few customers have set this!

This capability allows you to LPM from a server where the VIOS has crashed or is sick. If this is NOT set before your VIOS gets sick, you will not be able to LPM from that frame and will need to fix the VIOS or shut down all of your partitions.
Power to Cloud Worldwide Offerings

Power to Cloud
§ IBM Cloud Design Workshop
§ PowerVC Enablement
§ VMware vRealize Integration
§ Automation for DevOps Enablement
§ Power Enterprise Pools Enablement
§ PowerVM Provisioning and Mobility Automation
§ Database as a Service

POWER9 Migration
§ POWER9 Migration Planning
§ POWER9 Migration Automation
§ POWER9 Migration Validation

Power AI
§ Power AI Workshop

SAP HANA
§ SAP HANA: Install
§ SAP HANA: Health Check
§ SAP HANA: Performance Assessment
§ SAP HANA: Migration Workshop
§ SAP HANA: Power Advanced Features Deployment
§ SAP HANA: Linux Security Assessment

Linux
§ Linux Installation and Optimization

Security
§ Security Assessment
§ PowerSC Enablement
§ AIX Role Based Access Control Workshop
§ BigFix Patch Management

Power Systems Availability
§ Power Systems Availability Optimization
§ Power Systems Health Check
§ PowerHA SystemMirror
§ VM Recovery Manager

Performance
§ Systems Performance Assessment
§ Oracle Licensing Optimization Assessment
§ Oracle Performance Optimization Assessment

Select private technical training courses
Lab-Services Contacts – US and EMEA

• Lab Services Europe Delivery Manager:


Virginie Cohen VirginieCohen@fr.ibm.com

• Lab Services NA Opportunity Manager:


Stephen Brandenburg sbranden@us.ibm.com

• Other regions: ibmsls@us.ibm.com


IBM Systems Lab Services
Proven expertise to help leaders plan, design and implement IT infrastructure for what comes next

Visit Lab Services in the Solution Center at Booth #40 for some cool demos!

Call on our team of 1100+ consultants engaging worldwide for:

§ Power Systems
§ Storage and Software Defined Infrastructure
§ IBM Z and LinuxONE
§ Systems Consulting
§ Migration Factory
§ Technical Training and Events

ibmsls@us.ibm.com
www.ibm.com/it-infrastructure/services/lab-services

Thank you!

Bob Foster
Senior Technical Staff Member

bobf@us.ibm.com

Turgut Genc
STG Lab Services Consultant - Power Systems

TurgutGenc@uk.ibm.com

Please complete the Session Evaluation!



Notices and disclaimers
• © 2018 International Business Machines Corporation. No part of this document may be reproduced or transmitted in any form without written permission from IBM.

• U.S. Government Users Restricted Rights — use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM.

• Information in these presentations (including information relating to products that have not yet been announced by IBM) has been reviewed for accuracy as of the date of initial publication and could include unintentional technical or typographical errors. IBM shall have no responsibility to update this information. This document is distributed "as is" without any warranty, either express or implied. In no event shall IBM be liable for any damage arising from the use of this information, including but not limited to, loss of data, business interruption, loss of profit or loss of opportunity. IBM products and services are warranted per the terms and conditions of the agreements under which they are provided.

• IBM products are manufactured from new parts or new and used parts. In some cases, a product may not be new and may have been previously installed. Regardless, our warranty terms apply.

• Any statements regarding IBM's future direction, intent or product plans are subject to change or withdrawal without notice.

• Performance data contained herein was generally obtained in controlled, isolated environments. Customer examples are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual performance, cost, savings or other results in other operating environments may vary.

• References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business.

• Workshops, sessions and associated materials may have been prepared by independent session speakers, and do not necessarily reflect the views of IBM. All materials and discussions are provided for informational purposes only, and are neither intended to, nor shall constitute legal or other guidance or advice to any individual participant or their specific situation.

• It is the customer's responsibility to ensure its own compliance with legal requirements and to obtain advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulatory requirements that may affect the customer's business and any actions the customer may need to take to comply with such laws. IBM does not provide legal advice or represent or warrant that its services or products will ensure that the customer follows any law.



Notices and disclaimers continued
• Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products about this publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to interoperate with IBM's products. IBM expressly disclaims all warranties, expressed or implied, including but not limited to, the implied warranties of merchantability and fitness for a purpose.

• The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights, trademarks or other intellectual property right.

• IBM, the IBM logo, ibm.com and [names of other referenced IBM products and services used in the presentation] are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml.

