
CPU Pinning [Tested on Mirantis 7.0]
If you are about to configure CPU pinning on a Mirantis OpenStack deployment, make sure to upgrade QEMU to 2.2.0 on your compute nodes first (QEMU and libvirt run on the nodes that host instances).
Steps to upgrade QEMU on a compute node:

add-apt-repository cloud-archive:kilo
apt-get update
apt-get dist-upgrade
virsh version

Compiled against library: libvirt 1.2.9
Using library: libvirt 1.2.9
Using API: QEMU 1.2.9
Running hypervisor: QEMU 2.2.0
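To confirm that the QEMU package itself was upgraded (and not just what libvirt reports), you can also query the emulator binary directly; this assumes an x86 node where the binary is qemu-system-x86_64:

qemu-system-x86_64 --version

The reported version should be 2.2.0 or later.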

Compute configuration:

vim /etc/nova/nova.conf
Edit/add the following two lines in nova.conf:
vcpu_pin_set=2,3,6,7
reserved_host_memory_mb=512
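For reference, both options live in the [DEFAULT] section of nova.conf; a minimal sketch (the CPU IDs 2,3,6,7 are just this example's dedicated cores, matching the isolcpus list used below):

[DEFAULT]
# physical CPU cores that nova may hand out to pinned guests
vcpu_pin_set=2,3,6,7
# memory (in MB) held back for the host OS and services
reserved_host_memory_mb=512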

restart nova-compute
vim /etc/default/grub

Append isolcpus=2,3,6,7 to GRUB_CMDLINE_LINUX, keeping your node's existing parameters (including its own root=UUID), for example:

GRUB_CMDLINE_LINUX="console=tty0 net.ifnames=0 biosdevname=0 rootdelay=90 nomodeset root=UUID=c915c726-1e0b-4436-b228-3d456b81648e isolcpus=2,3,6,7"
update-grub
Reboot the compute node.
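Once the node is back up, a quick sanity check that the kernel picked up the isolation (assuming the same core list as above):

cat /proc/cmdline

The output should include isolcpus=2,3,6,7. On newer kernels you can also check cat /sys/devices/system/cpu/isolated.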

Controller Configuration:
vim /etc/nova/nova.conf

Search for scheduler_default_filters and add the following two filters to the list:

NUMATopologyFilter,AggregateInstanceExtraSpecsFilter
restart nova-scheduler
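For reference, the resulting line might look something like the following; the filters already present depend on your deployment (the list shown before the two new entries is the Kilo default set), so only append the two new filters to whatever is already there:

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter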

Create the performance host aggregate for hosts that will receive pinning requests:
$ nova aggregate-create performance
+----+-------------+-------------------+-------+----------+
| Id | Name        | Availability Zone | Hosts | Metadata |
+----+-------------+-------------------+-------+----------+
| 1  | performance |                   |       |          |
+----+-------------+-------------------+-------+----------+
$ nova aggregate-set-metadata 1 pinned=true
Metadata has been successfully updated for aggregate 1.
+----+-------------+-------------------+-------+---------------+
| Id | Name        | Availability Zone | Hosts | Metadata      |
+----+-------------+-------------------+-------+---------------+
| 1  | performance |                   |       | 'pinned=true' |
+----+-------------+-------------------+-------+---------------+

Create the normal aggregate for all other hosts:


$ nova aggregate-create normal
+----+--------+-------------------+-------+----------+
| Id | Name   | Availability Zone | Hosts | Metadata |
+----+--------+-------------------+-------+----------+
| 2  | normal |                   |       |          |
+----+--------+-------------------+-------+----------+

Set metadata on the normal aggregate; this will be used to match all existing normal flavors. Here we are using the same key as before and setting it to false.
$ nova aggregate-set-metadata 2 pinned=false
Metadata has been successfully updated for aggregate 2.
+----+--------+-------------------+-------+----------------+
| Id | Name   | Availability Zone | Hosts | Metadata       |
+----+--------+-------------------+-------+----------------+
| 2  | normal |                   |       | 'pinned=false' |
+----+--------+-------------------+-------+----------------+

Before creating the new flavor for performance-intensive instances, update all existing flavors so that their extra specifications match them to the compute hosts in the normal aggregate:
$ for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o '[0-9]*'`; \
do nova flavor-key ${FLAVOR} set \
"aggregate_instance_extra_specs:pinned"="false"; \
done

If the above command returns an error (for example, because some of your flavor IDs are UUIDs rather than plain integers), use the following script instead:
for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep "\([0-9]\+\|[0-9a-z]\{8\}-[0-9a-z]\{4\}-[0-9a-z]\{4\}-[0-9a-z]\{4\}-[0-9a-z]\{12\}\)"`; \
do nova flavor-key ${FLAVOR} set \
"aggregate_instance_extra_specs:pinned"="false"; \
done
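To spot-check that the key was applied, inspect any existing flavor (m1.small here is just an example name); its extra_specs should now include aggregate_instance_extra_specs:pinned set to false:

$ nova flavor-show m1.small | grep extra_specs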

Create a new flavor for performance-intensive instances. Here we are creating the m1.small.performance flavor, based on the values used in the existing m1.small flavor. The differences in behaviour between the two will be the result of the metadata we add to the new flavor shortly.
$ nova flavor-create m1.small.performance 6 2048 20 2
+----+----------------------+-----------+------+-----------+------+-------+
| ID | Name                 | Memory_MB | Disk | Ephemeral | Swap | VCPUs |
+----+----------------------+-----------+------+-----------+------+-------+
| 6  | m1.small.performance | 2048      | 20   | 0         |      | 2     |
+----+----------------------+-----------+------+-----------+------+-------+

Set the hw:cpu_policy flavor extra specification to dedicated. This denotes that all instances
created using this flavor will require dedicated compute resources and be pinned accordingly.
$ nova flavor-key 6 set hw:cpu_policy=dedicated
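The Red Hat article listed in the references also ties the new flavor to the performance aggregate by setting the same key used in the aggregate metadata; without it, the AggregateInstanceExtraSpecsFilter has nothing in the flavor to match the performance hosts against:

$ nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true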

Finally, we must add some hosts to our performance host aggregate. Hosts that are not
intended to be targets for pinned instances should be added to the normal host aggregate:
$ nova aggregate-add-host 1 node-5.domain.tld
Host node-5.domain.tld has been successfully added for aggregate 1
+----+-------------+-------------------+-------------------+---------------+
| Id | Name        | Availability Zone | Hosts             | Metadata      |
+----+-------------+-------------------+-------------------+---------------+
| 1  | performance |                   | node-5.domain.tld | 'pinned=true' |
+----+-------------+-------------------+-------------------+---------------+

$ nova aggregate-add-host 2 node-4.domain.tld
Host node-4.domain.tld has been successfully added for aggregate 2
+----+--------+-------------------+-------------------+----------------+
| Id | Name   | Availability Zone | Hosts             | Metadata       |
+----+--------+-------------------+-------------------+----------------+
| 2  | normal |                   | node-4.domain.tld | 'pinned=false' |
+----+--------+-------------------+-------------------+----------------+
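At this point you can double-check host membership and metadata for both aggregates (aggregate IDs 1 and 2 as created above):

$ nova aggregate-details 1
$ nova aggregate-details 2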

Verifying the Configuration


Now that we've completed all the configuration, we need to verify that all is well with the world. First, we launch a guest using our newly created flavor:
$ nova boot --image rhel-guest-image-7.1-20150224 --flavor m1.small.performance test-instance

Assuming the instance launches, we can verify where it was placed by checking the OS-EXT-SRV-ATTR:hypervisor_hostname attribute in the output of the nova show test-instance command.
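For example, one quick way to pull just that attribute (test-instance being the name used above):

$ nova show test-instance | grep hypervisor_hostname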

After logging into the returned hypervisor directly using SSH, we can use the virsh tool, which is part of Libvirt, to extract the XML of the running guest:
# virsh list
 Id    Name                 State
----------------------------------------------------
 1     instance-00000001    running
# virsh dumpxml instance-00000001
...

The resultant output will be quite long, but there are some key elements related to NUMA layout
and vCPU pinning to focus on:

As you might expect, the vCPU placement for the 2 vCPUs remains static, though a cpuset range is no longer specified alongside it; instead, the more specific placement definitions defined later on are used:

<vcpu placement='static'>2</vcpu>

The vcpupin and emulatorpin elements have been added. These pin the virtual machine instance's vCPU cores and the associated emulator threads, respectively, to physical host CPU cores. In the current implementation the emulator threads are pinned to the union of all physical CPU cores associated with the guest (physical CPU cores 2-3).

<cputune>
<vcpupin vcpu='0' cpuset='2'/>
<vcpupin vcpu='1' cpuset='3'/>
<emulatorpin cpuset='2-3'/>
</cputune>

The numatune element, and the associated memory and memnode elements, have been added, in this case resulting in the guest memory being taken strictly from node 0.

<numatune>
<memory mode='strict' nodeset='0'/>
<memnode cellid='0' mode='strict' nodeset='0'/>
</numatune>

The cpu element contains updated information about the NUMA topology exposed to the guest itself; this is the topology that the guest operating system will see:

<cpu>
<topology sockets='2' cores='1' threads='1'/>
<numa>
<cell id='0' cpus='0-1' memory='2097152'/>
</numa>
</cpu>
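If you log into the guest itself, the same topology should be visible from the guest operating system; a quick check (numactl is assumed to be installed in the guest image):

# lscpu | grep -i numa
# numactl --hardware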

References:
1. http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
2. Direct: Peter Ciolfi
3. Indirect: Gary Mussar
