
Add or remove an adapter to a link aggregation (802.3ad) on a VI/O Server, as padmin:

cfglnagg -add -parent ent5 ent4 ent1


cfglnagg -rm -parent ent5 ent4 ent1
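
To confirm the change, you can inspect the aggregation with entstat; a quick check, assuming ent4 is the link aggregation device as in the commands above:

entstat -all ent4 | grep -i "802.3ad"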

How to check missing filesets after a technology level update

UPDATE: it seems it is much simpler with the following option:

#oslevel -rl [your expected oslevel]
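
For example, to list the filesets known to be below a given level (the target levels here are hypothetical):

#oslevel -rl 6100-06              # filesets below technology level 6100-06
#oslevel -sl 6100-06-01-1043      # the same check at service pack granularity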

Sometimes after a system update (say, a technology level upgrade), you don't get the right TL version with
oslevel. For example, I tried to update my AIX 6.1 TL5 to TL6, but here is what I got:

#> oslevel -s
6100-05-02-1034

So let's check what is missing:

#> instfix -i |grep ML


All filesets for 6100-00_AIX_ML were found.
All filesets for 6.1.0.0_AIX_ML were found.
All filesets for 6100-01_AIX_ML were found.
All filesets for 6100-02_AIX_ML were found.
All filesets for 6100-03_AIX_ML were found.
All filesets for 6100-04_AIX_ML were found.
All filesets for 6100-05_AIX_ML were found.
Not all filesets for 6100-06_AIX_ML were found.

Not all filesets were found. OK, but which ones?

root@lpar /root#> instfix -cik 6100-06_AIX_ML | grep -v -e ":=:" -e ":+:"

Keyword:Fileset:ReqLevel:InstLevel:Status:Abstract

6100-06_AIX_ML:Java6.sdk:6.0.0.215:6.0.0.200:-:AIX 6100-06 Update

We've got our answer! We need to update the Java SDK in order to get the right oslevel output.

First, which SDK do I need, 32- or 64-bit?

root@lpar /mnt/AIX610/ExpansionPack/installp/ppc#> lslpp -l Java6.sdk


Fileset Level State Description

----------------------------------------------------------------------------

Path: /usr/lib/objrepos

Java6.sdk 6.0.0.200 COMMITTED Java SDK 32-bit

Path: /etc/objrepos

Java6.sdk 6.0.0.200 COMMITTED Java SDK 32-bit

Let's update it!
smitty install_latest (yeah I know, smit)

root@lpar /mnt/AIX610/ExpansionPack/installp/ppc#> instfix -cik 6100-06_AIX_ML | grep -v -e ":=:" -e ":+:"
#Keyword:Fileset:ReqLevel:InstLevel:Status:Abstract

root@lpar /mnt/AIX610/ExpansionPack/installp/ppc#> oslevel -s
6100-06-01-1043

And voilà! We are now at the right level.

PS: if your NIM master is nicely configured, you can even run the update with the nimclient command on your client:

root@LPAR # nimclient -o cust -a lpp_source=6100-06-01-1043-lpp_source -a installp_flags=agXYv -a filesets="Java6.sdk"

Live partition mobility CLI commands

Perform a validation:

migrlpar -o v -m [source_managed_system] -t [target_managed_system] -p [lpar]

Perform a migration:

migrlpar -o m -m [source_managed_system] -t [target_managed_system] -p [lpar] -i \
'source_msp_id=15,dest_msp_id=15,shared_proc_pool_name=shp_test,\
virtual_fc_mappings="5/VIOS/1,6/VIOS3/3"' -v

virtual_fc_mappings:

Comma-separated list of virtual fibre channel adapters. Each item in this list has the format
slot_num/vios_lpar_name/vios_lpar_id. For example, 4/vios2/3 specifies a virtual fibre channel adapter with a
virtual slot number of 4 on the client logical partition, a destination VIOS partition name of vios2, and a
destination VIOS partition ID of 3.
IBM Systems Director: Regenerate Tivguid

root@lpar /root #> cp /etc/ibm/director/twgagent/twgagent.uid /etc/ibm/director/twgagent/twgagent.uid.ORIG

root@lpar /root #> /opt/ibm/director/bin/genuid

UID Already exist.

root@lpar /root #> rm /etc/ibm/director/twgagent/twgagent.uid

rm: remove /etc/ibm/director/twgagent/twgagent.uid? y

root@lpar /root #> /opt/ibm/director/bin/genuid

time of day 1332937051

MAC Address is 102 - 994 - 105 - 239 - 999 - 999

Host name is NIM1

Generated UID is 7f-35-61-28-86-0e-38-6d

root@lpar /root #> stopsrc -s cas_agent

0513-044 The cas_agent Subsystem was requested to stop.

root@lpar /root #> lssrc -s cas_agent

Subsystem Group PID Status

cas_agent inoperative

root@lpar /root #> /usr/tivoli/guid/tivguid -Write -New

Tivoli GUID utility - Version 1 , Release 3 , Level 4 .

(C) Copyright IBM Corporation 2002, 2009 All Rights Reserved.

Guid:4a.96.a7.78.78.d0.55.e1.b4.a4.66.7c.73.ef.46.07

root@lpar /root #> startsrc -s cas_agent

0513-059 The cas_agent Subsystem has been started. Subsystem PID is 23003600.

root@lpar /root #> /usr/tivoli/guid/tivguid -show

Tivoli GUID utility - Version 1 , Release 3 , Level 4 .

(C) Copyright IBM Corporation 2002, 2009 All Rights Reserved.


Guid:4a.96.a7.78.78.d0.11.e1.99.a4.66.7c.73.ef.46.07
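
For convenience, the whole procedure (useful after cloning an LPAR, when two machines end up with the same GUIDs) can be wrapped in a small script. A minimal sketch of the steps above, assuming the same paths:

#!/bin/ksh
# Regenerate the Director UID and the Tivoli GUID (sketch of the steps above)
UIDFILE=/etc/ibm/director/twgagent/twgagent.uid
cp $UIDFILE $UIDFILE.ORIG              # keep a copy of the old UID
rm -f $UIDFILE                         # genuid refuses to overwrite an existing UID
/opt/ibm/director/bin/genuid           # generate a new Director UID
stopsrc -s cas_agent                   # stop the common agent first
/usr/tivoli/guid/tivguid -Write -New   # write a new Tivoli GUID
startsrc -s cas_agent                  # restart the common agent
/usr/tivoli/guid/tivguid -show         # verify the new GUID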

Machstat: Checking the power supplies' health

Use this command:

machstat -f

which generally gives the following output:

0 0 0

There is no documentation about that command. By looking into /usr/lib/boot/bin/rc.powerfail_chrp, we can
assume this:

The first digit means:

0) => System under normal power conditions


1) => System facing non-critical cooling problem
2) => System facing non-critical power problem
3) => System facing severe power problems which require system shutdown
4) => System facing critical problems and the system will be halted immediately.
5) => We cannot handle this situation. Just issue syncs to ensure filesystem consistency and break. Continue to
wait out till system shutdown.
7) => We cannot handle this situation. Just issue syncs to ensure filesystem consistency and break. Continue to
wait out till system shutdown.

The second digit means:

1) => Immediate shutdown of the system


2) => Shutdown will proceed after the wait time

The third digit seems to be a version number (version of what exactly? The EPOW?)
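
Based on that guess, here is a minimal monitoring sketch; remember the decoding above is inferred from rc.powerfail_chrp, not documented, so treat it accordingly:

#!/bin/ksh
# Warn when machstat -f reports anything other than normal power conditions
set -- $(machstat -f)          # e.g. "0 0 0" -> power status, shutdown mode, version
if [ "$1" != "0" ]; then
    print "WARNING: power/cooling status is $1 on $(hostname)"
fi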

Creating an LACP adapter on AIX

As padmin:

rmtcpip -all

mkvdev -lnagg ent0,ent1 -attr mode=8023ad

mkvdev -sea ent4 -vadapter ent3 -default ent3 -defaultid 999 -attr ha_mode=auto ctl_chan=ent2

mkvdev -vlan ent5 -tagid 11

mktcpip -hostname [vio_name] -inetaddr 10.10.10.57 -interface en6 -netmask 255.255.255.0 -gateway 10.10.10.254 -start

Don't forget to get in touch with the network guys, so that they can configure the switch ports
for port aggregation (with the portfast option enabled!).
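
Once the switch side is done, you can check that LACP actually negotiated; as padmin, assuming ent4 is the EtherChannel created above (grep -p prints the whole matching paragraph):

entstat -all ent4 | grep -p "IEEE 802.3ad"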

Update the microcode of an FCoE adapter on a VI/O Server

Let's check our adapter's current firmware:

lsmcode -rcd ent0

771001801410af03.010506

Whoops, we are about two years behind! We can check that on IBM's Fix Central, where we need to download the latest version.

First, commit all uncommitted updates on the Virtual I/O Server:

updateios -commit

Then updateios will apply any fix present in the specified directory, where we put the previously downloaded fixes:

updateios -dev /home/padmin/ -accept -install

Let's go root!

oem_setup_env

Now we need to unconfigure any logical devices attached to the physical adapter that we want to update:

ifconfig en6 down detach   # detach the IP interface

rmdev -l ent6              # set the VLAN interface to Defined (because we are in trunk mode)
ent6 Defined

rmdev -l ent5              # set the SEA to Defined
ent5 Defined

rmdev -l ent4              # set the EtherChannel (802.3ad) interface to Defined
ent4 Defined

Now that the physical adapter is freed, we can update its microcode.

diag -d ent0 -T download

====================================================================================

INSTALL MICROCODE 802118

ent0 10 Gb Ethernet-SR PCI Express Dual Port Adapter (771000801410b003)

Microcode has been successfully updated to level 010522

on the following resources:

fcs2 fcs3 ent0 ent1

Please run diagnostics on the listed resources to

ensure that the adapter is functioning properly.

Use Enter to continue.


F3=Cancel F10=Exit Enter

====================================================================================

We need to reboot now!

#shutdown -Fr
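
After the reboot, it is worth confirming that the devices came back and that the new level is active; a quick check, assuming the same device numbering:

lsmcode -rcd ent0               # should now report the 010522 level
lsdev -Cc adapter | grep ent    # adapters should be back to Available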

lpar_netboot: power on a virtual server and test its connectivity

This command will boot up an LPAR (it will power it off first if it's on, so be careful), set up an IP address, and
try to ping another IP (like the NIM master server) to test the network connectivity (and thus to know whether
we can launch an install/restore on it).

ssh -l hscroot ${HMC} -- lpar_netboot -M -A -n -T off -t ent -D -s auto -d auto -S ${SERVER_IP} -G ${GATEWAY_IP} -C ${MAC_IP} -K ${NETMASK} "${LPAR}" "${MAC}" "${FRAME}"

# Connecting to infvio2.

# Connected

# Checking for power off.

# Power off complete.

# Power on vio2 to Open Firmware.

# Power on complete.

# Client IP address is 10.240.122.62.

# Server IP address is 10.240.122.16.

# Gateway IP address is 10.240.122.254.

# Subnetmask IP address is 255.255.255.0.

# Getting adapter location codes.

# /vdevice/l-lan@300000fd ping unsuccessful

# Type Location Code MAC Address Full Path Name Ping Result Device Type

ent U9119.FHB.999999-V13-C253-T1 42fddfbf74fd /vdevice/l-lan@300000fd unsuccessful virtual

# /vdevice/l-lan@300000fe ping unsuccessful

ent U9119.FHB.999999-V13-C254-T1 42fddfbf74fe /vdevice/l-lan@300000fe unsuccessful virtual

lpar_netboot: auto/auto settings are not supported on this adapter

ent U5803.001.999999-P1-C3-T1 00215ee297d0 /pci@800000020000292/ethernet@0 unsuccessful physical

lpar_netboot: auto/auto settings are not supported on this adapter

ent U5803.001.99999-P1-C3-T2 00215ee297d2 /pci@800000020000292/ethernet@0,1 unsuccessful physical

lpar_netboot: auto/auto settings are not supported on this adapter

ent U5803.001.99999-P1-C4-T1 00215ee29770 /pci@800000020000294/ethernet@0 unsuccessful physical

lpar_netboot: auto/auto settings are not supported on this adapter

ent U5803.001.99999-P1-C4-T2 00215ee29772 /pci@800000020000294/ethernet@0,1 unsuccessful physical
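
For reference, the variables in the command above would be filled in along these lines (all values hypothetical; in standard lpar_netboot syntax the three trailing arguments are the partition name, the partition profile, and the managed system):

HMC=hmc01                   # HMC hostname
SERVER_IP=10.240.122.16     # NIM master / install server (-S)
GATEWAY_IP=10.240.122.254   # default gateway (-G)
MAC_IP=10.240.122.62        # client IP address (-C, despite the variable name)
NETMASK=255.255.255.0       # subnet mask (-K)
LPAR=vio2                   # partition name
MAC=vio2_profile            # partition profile name
FRAME=p795-frame1           # managed system name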

Determining firmware level of an adapter


# lsmcode -rcd entX
or

# lscfg -vl ent0 |grep "ROM"

ROM Level.(alterable).......010506

> We need to update the firmware of the 5708 FCoE adapter, which is two levels behind (the latest is 010522).

Backing up HMC data on an NFS-mounted directory

To back up critical HMC data, use this:

bkconsdata -r nfs -h MyNFSserver -l /tmp/HMC_backup_dir

To back up profile data, use this:

bkprofdata -m ${managed_system} -f backup_profiles_${managed_system}.prof --force
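
If you want to save the profiles of every managed system in one go, a small loop on the HMC does it; a sketch, assuming your HMC shell allows command substitution:

for ms in $(lssyscfg -r sys -F name); do
    bkprofdata -m $ms -f backup_profiles_$ms.prof --force
done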

HDLM: change the default reserve policy on all disks

In order to get LPM to work properly, we need to set the reserve_policy attribute to no_reserve on all disks (here
with the Hitachi disk example):

/usr/DynamicLinkManager/bin/dlmchpdattr -a reserve_policy=no_reserve -s

We can check the new default parameters:

/usr/DynamicLinkManager/bin/dlmchpdattr -o

uniquetype = disk/fcp/Hitachi

reserve_policy : no_reserve

max_transfer : 0x40000

queue_depth : 8

rw_timeout : 60

Display the reserves on all hdisks:

dlmpr -k

Clear the reserve on all hdisks except the rootvg disk:

dlmpr -c
Clear the reserve on the rootvg disk (I'm actually not so sure about this one, given that I had some
problems during the boot of some LPARs > boot error code 555, so please be very cautious!):

dlmpr -c hdisk0

Extra command (free of charge): changing the queue_depth parameter on all disks managed by HDLM:

/usr/DynamicLinkManager/bin/dlmchpdattr -a queue_depth=8 -s

Do not forget to perform a reboot after that!
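
After the reboot, you can verify that every disk actually picked up the new defaults; a quick loop, assuming the usual reserve_policy and queue_depth ODM attributes on the hdisks:

for d in $(lsdev -Cc disk -F name); do
    echo "$d: $(lsattr -El $d -a reserve_policy -a queue_depth -F value | tr '\n' ' ')"
done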

IBM Systems Director: how to check the availability of a common agent

In order to function properly, a common agent needs these ports to be open:

22 TCP Inbound

427 TCP, UDP Outbound and Inbound

5988, 5989 TCP Outbound and Inbound

6988, 6989 TCP Outbound and Inbound

9510 TCP Inbound

9511-9515 TCP Outbound

If the following command doesn't show you anything, then you either have a problem with the agent you're
trying to contact, or with the ports listed above:

root@ISD_server /root #> /usr/bin/slp_query --type="*" --address=10.246.70.11

12

62

URL: service:management-software.IBM:platform-agent://10.240.122.11

URL: service:management-software.IBM:platform-agent://10.246.58.21

URL: service:management-software.IBM:platform-agent://10.246.70.11

URL: service:wbem:http://10.246.70.11:5988

URL: service:wbem:https://10.246.58.21:5989

URL: service:wbem:https://10.246.70.11:5989

URL: service:wbem:https://10.240.122.11:5989

URL: service:wbem:http://10.240.122.11:5988

URL: service:wbem:http://10.246.58.21:5988
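
On the agent side, you can at least confirm that the agent is listening on its main ports (port numbers as listed above):

netstat -an | grep LISTEN | egrep "9510|5988|5989"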

IBM Systems Director: Updating from 6.3 to 6.3.1 with smcli

A new update for Systems Director has been out for a few days (here is what's new:
http://publib.boulder.ibm.com/infocenter/director/pubs/index.jsp?topic=%2Fcom.ibm.director.main.helps.doc%2Ffqm0_r_whats_new_in_release_631.html)
and the update method is quite simple!

root@ISD_server /tmp #> smcli installneeded -v /opt/SysDir6_3_1_AIX.zip

ATKUPD489I Collecting inventory for one or more systems.

ATKUSC206I Generating SDDs for path: "/opt/SysDir6_3_1_AIX.zip".

ATKUSC206I Generating SDDs for path: "/opt/ibm/director/data/updateslib/TEMP0786411300026836".

ATKUPD293I Update "agentmanager.feature_6.3.1" was successfully imported to the library.

[blah blah blah...]

ATKUPD293I Update "com.ibm.vmcontrol.vim.help.doc.feature_2.4.1" was successfully imported


to the library.

ATKUPD293I Update "org.sblim.cim.client.feature_2.1.2" was successfully imported to the


library.

ATKUPD573I Running compliance for all new updates that were found.

ATKUPD286I The import updates task has completed successfully.

ATKUSC209I The install needed task found updates that need to be installed for system "ISD_server":

com.ibm.director.core.manager.aix_6.3.1

com.ibm.lwi.activemq.feature_8.1.1.1-LWI

com.ibm.lwi.eclipse.help.feature_8.1.1.1-LWI

ATKUSC210I This operation will install the updates listed above. To continue, type "1" for yes or "0" for no.

ATKUPD725I The update install task has started.

ATKUPD487I The download task has finished successfully.

ATKUPD629I Installation staging will be performed to 1 systems.

ATKUPD632I The Installation Staging task is starting to process system "ISD_server".

ATKUPD633I The Installation Staging task has finished processing system "ISD_server".

ATKUPD630I The update installation staging has completed.

ATKUPD760I Start processing update "com.ibm.lwi.activemq.feature_8.1.1.1-LWI" and system "ISD_server".

ATKUPD764I Update "com.ibm.lwi.activemq.feature_8.1.1.1-LWI" was installed on system "ISD_server" successfully.

ATKUPD760I Start processing update "com.ibm.director.core.manager.aix_6.3.1" and system "ISD_server".

ATKUPD764I Update "com.ibm.director.core.manager.aix_6.3.1" was installed on system "ISD_server" successfully.

ATKUPD760I Start processing update "com.ibm.lwi.eclipse.help.feature_8.1.1.1-LWI" and system "ISD_server".

ATKUPD764I Update "com.ibm.lwi.eclipse.help.feature_8.1.1.1-LWI" was installed on system "ISD_server" successfully.

ATKUPD795I You must manually restart the IBM Systems Director management server after this install completes for the updates to take effect.

ATKUPD739I Collecting inventory on system "ISD_server".

ATKUPD572I Running compliance on system "ISD_server".

ATKUPD727I The update install task has finished successfully.

ATKUPD288I The install needed updates task has completed successfully.

Now we need to restart ISD in order to apply the update:

root@ISD_server /root #> smstop && smstart && smstatus -r

Shutting down IBM Director...

Starting IBM Director...

The starting process may take a while. Please use smstatus to check if the server is
active.

Error

Starting

Updating

Starting

Active

Getting the VIO host of your virtual server

Unfortunately this doesn't work with AIX 5.3; you should check this out instead:
https://www.ibm.com/developerworks/community/blogs/glukacs/entry/zoning_info_script_to_follow_the_vscsi_mapped_luns1?lang=en

NPIV
# echo "vfcs" |kdb

START END <name>

0000000000001000 0000000005750000 start+000FD8

F00000002FF47600 F00000002FFDF9C0 __ublock+000000

000000002FF22FF4 000000002FF22FF8 environ+000000

000000002FF22FF8 000000002FF22FFC errno+000000

F1000F0A00000000 F1000F0A10000000 pvproc+000000


F1000F0A10000000 F1000F0A18000000 pvthread+000000

read vscsi_scsi_ptrs OK, ptr = 0x0

(0)> vfcs

NAME ADDRESS STATE HOST HOST_ADAP OPENED NUM_ACTIVE

fcs4 0xF1000A0180146000 0x0008 VIO_1 vfchost0 0x01 0x0000

fcs6 0xF1000A0180148000 0x0008 VIO_2 vfchost1 0x01 0x0000

VSCSI
# echo "cvai" | kdb | grep -E "NAME|vscsi"

read vscsi_scsi_ptrs OK, ptr = 0xF1000000C02F1290

NAME STATE CMDS_ACTIVE ACTIVE_QUEUE HOST

vscsi0 0x000007 0x0000000000 0x0 vio1->vhost0

vscsi1 0x000007 0x0000000000 0x0 vio2->vhost0

IBM Systems Director: increase log verbosity (very useful sometimes)

The lwilog.sh script dynamically enables or disables the trace utility in the running application.
On the ISD server, try this:

# cd /opt/ibm/director/lwi/bin/

# ./lwilog.sh -addlogger -name $LOGGERNAME -level $VERBOSITY_LEVEL

The verbosity levels are:

[ERROR|WARNING|INFO|VERBOSE|FINE|FINER|FINEST]

The logger name can be found in the following log file:

/opt/ibm/director/lwi/logs/error-log.html

Example:

/opt/ibm/director/lwi/bin/lwilog.sh -addlogger -name com.ibm.director.updates.aix -level FINEST

VIO: Forcing WWNs on a virtual FC adapter

Unique generated WWNs?


With the HMC GUI, it seems that the WWNs created with a virtual FC adapter cannot be changed, unless you
delete the virtual FC adapter and recreate it, but then you will have a tiny problem: the WWNs will change. Hence,
if you do this, the zoning/masking of your LUNs will be lost, and you will have to call your fellow friends from
the SAN team (they love to allocate/deallocate WWNs, just for fun; it will only cost you a few cups of coffee).

On a virtual client, this command changes the profile by adding to it an adapter with WWN1 (NPIV) and
WWN2 (used for LPM). We start from the partition profile.
Be careful when you want to include these commands in a shell script: double check your double/triple
quotes around the WWNs, it is a real pain in the ass when you write the command through ssh. For some
unknown reason, IBM decided to use the same field separator (the comma) both for the WWN specification
AND for every attribute you can set within the same command, including the attribute setting the WWNs.
Weird.

Anyway, here is the command:

chsyscfg -m [managed_system] -r prof -i 'name=[profile_name],lpar_id=[id],"virtual_fc_adapters+=""14/Client/13/[VIOServer lparname]/[Server VFC id]/WWN1,WWN2/0"""'

Be VERY careful when doing this.
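
To double check the result, you can read the adapter definition back from the profile (same placeholders as above):

lssyscfg -r prof -m [managed_system] --filter "lpar_names=[lpar_name]" -F virtual_fc_adapters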

MPIO: A nice and easy way to check that all paths are consistent and available

On a server that has had some fibre channel difficulties, I always check whether there are missing or
failed paths by issuing this command (sometimes the counts are not equal, and we have to reconfigure the
missing paths):

root@lpar:/root# lspath | awk '{print $1,$NF}' |sort |uniq -c

18 Enabled fscsi0

6 Enabled fscsi1

12 Failed fscsi1

If there are failed paths, you should try to re-enable them (quick and painless, it can't do any harm)
with this one-liner:

root@lpar:/root# lspath|grep Failed | awk '{print "chpath -l "$2" -s enable -p "$3}'|ksh

paths Changed

paths Changed

paths Changed

paths Changed

paths Changed

paths Changed

paths Changed

paths Changed

paths Changed

paths Changed

paths Changed

paths Changed

root@lpar:/root# lspath | awk '{print $1" " $NF}' |sort |uniq -c


18 Enabled fscsi0

18 Enabled fscsi1

And voilà, the failed paths are back online.

You can also delete some paths before rediscovering them, if you have some Missing or Defined paths:

lspath -F "name:connection:parent:path_status:status" | egrep "Defined|Missing" | awk -F: '{print "rmpath -l "$1" -p "$3" -w "$2" -d"}' | ksh

Can't achieve concurrent live partition migrations? Check your max_virtual_slots value!

Context
Here are my latest discoveries about concurrent LPM:

At work, we had a situation where we could not perform concurrent live partition migrations between our two
p795s, whatever we tried (multiple HMCs/SDMCs, multiple MSPs, cross-site migrations), and we opened a
PMR at IBM about it.

The IBM guy told me about the max_virtual_slots value (which is set in the partition profile), which can
cause problems if greater than 1000. A fix is on the way, according to IBM:

http://www-01.ibm.com/support/docview.wss?uid=isg1IV20409

Indeed, for various internal reasons (our adapter ID numbering policy), we had set this value to 2048 or even
4096 on our VIO servers. Big mistake: no concurrent migration could be performed.

Workaround
First of all, we need to decrease the max_virtual_slots value to, let's say, 256 on all our VIOS profiles:

chsyscfg -r prof -m managed_system -i "name=VIO1,lpar_name=VIO1-prod,max_virtual_slots=256"

But we have to stop the partition in order to load the profile with the new max_virtual_slots value.

More important, we need to change all the adapter IDs greater than 1000 to smaller values (we numbered our
FC adapter IDs as the client ID times 100, which grows rapidly on a p795: a client ID of 44 could give an FC
adapter ID of 4400, and to allow that you need to increase the max_virtual_slots value). This is a pain in the
ass, because you have to delete/recreate the virtual FC adapter on the VIO (system and profile), do a cfgmgr,
remap the vfchost to the physical fcs, and modify the profile on the VIO AND on the virtual server to get
things proper. And pray that your multipathing is fully working. Good luck.

So we came up with a different method, using fake virtual adapters and LPM, to get smaller device IDs for
our Fibre Channel virtual adapters and to shut down/restart the VIOS (one by one), without any downtime at all:

To modify the max_virtual_slots, I have to change the IDs on the client side (e.g. 2101 and 2102 as server
adapter IDs), then on the VIOS side.

But without stopping the partition and altering the profile, there is only one solution:

- I created fake virtual FC adapters on the VIO servers of the other frame (the target frame), with the same
IDs (2101 and 2102) and a partner adapter ID of 99, just to keep an eye on them (see the sketch after this
list).
- I migrate the virtual server (LPM).
- On arrival on the target frame, there is a cfgmgr and a check: if the adapter ID is not already used, the
adapter keeps the same ID.
- If it is actually already used, it takes the first IDs available (let's say 5 and 6 are available).
- Migration is complete; my client is now connected to VIO server adapter IDs 5 and 6, instead of 2101
and 2102.
- I can now migrate back with my new tiny IDs (even if, on the source side, 5 and 6 are taken, it won't go
back to 2101; it will again be set to the next IDs available, like 7, 10, or 23, whatever).
- Now (and only now that the virtual server is gone to the other side) I can change the profile on the target
VIO servers, and set max_virtual_slots to a more IBM-bug-free-compliant value, like 256, instead of the
4096 it used to be.
- I can also delete the fake adapters used to spoof the server adapter IDs (preceded by an rmdev on the
vfchost discovered by the cfgmgr executed on the VIOS when the partition migrated).
- The last thing I need to do is shut down each VIO server (one after another, of course) and restart it,
loading the new profile I just modified. With fully working redundancy between my VIO servers, it
should not be a problem.
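
For the first step, the fake server adapter can be added dynamically with chhwres; a sketch, where VIO3, the target managed system, and the remote lpar/slot values (99) are placeholders of my own, so adapt them to your setup:

chhwres -r virtualio -m [target_managed_system] -o a -p VIO3 --rsubtype fc \
-s 2101 -a "adapter_type=server,remote_lpar_id=99,remote_slot_num=99"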

> I also changed max_virtual_slots for all the virtual servers; it was set to 255 and I changed it to 32
(the default value; it shouldn't be higher anyway).

Results
Indeed, it changes everything: we can now have 8 concurrent migrations on each p795 (4 per MSP, which is
the current limitation; I'm confident it will grow one day), as it was expected to in the first place.
It also speeds up the migrations (before, each migration took 10-15 minutes; now it is closer to 2-6 minutes).

I also discovered that even if the source/target VIO servers (not the MSPs) are set with adapter IDs > 1024,
BUT the MSPs have good values (256), concurrent migrations are possible. So the concurrent migration
problem is only caused by the MSPs' max_virtual_slots value (I had thought every VIOS had to be lower for
this problem, not only the MSPs).

Besides, I also discovered that high IDs on the VIOS side (not the MSPs) affect the duration of the migration,
even when doing concurrent migrations.

So with our high-ID policy, we had two problems in one: same cause, multiple consequences!

I hope it will help somebody, someday!

Hints:
If you need to know whether your frame is LPM capable:

# lssyscfg -r sys -m mypseries -Fname,active_lpar_mobility_capable,inactive_lpar_mobility_capable
mypseries,1,1

> Here, we can achieve LPM migrations (active AND inactive, which means even if the LPAR is shut down).

If you want more information about the LPM capabilities of your pSeries:

# lslparmigr -r sys -m mypseries
inactive_lpar_mobility_capable=1,num_inactive_migrations_supported=4,\
num_inactive_migrations_in_progress=0,active_lpar_mobility_capable=1,\
num_active_migrations_supported=8,num_active_migrations_in_progress=0,\
inactive_prof_policy=config

If you want to know which of your VIOS are MSPs (sorry for the ugliest grep I've ever done):

# lssyscfg -r lpar -m mypseries -Fname,msp | grep ",1"
VIO1,1
VIO2,1

Checking the migration state of an LPAR on the HMC:

# lslparmigr -r lpar -m mypseries -Fname,migration_state --filter lpar_names="my_migrating_lpar"
my_migrating_lpar,Not Migrating
