****************
The exam objectives for Red Hat OpenStack Platform 10 are as follows:
o Use Red Hat OpenStack director to deploy additional nodes in an existing overcloud
o Configure flavors
o Set quotas
• Manage instances
o Launch instances
To become a Red Hat Certified System Administrator in Red Hat OpenStack, you will validate your ability
to perform these tasks:
o Use template files, environment files, and other resources to obtain information about
an OpenStack environment
o Create projects
o Create groups
o Create users
o Manage quotas
• Create resources
• Configure networking
• Manage instances
o Launch instances
Reference :
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/
https://www.tuxfixer.com/create-tenant-in-openstack-newton-using-command-line-interface/#more-3226
https://www.golinuxhub.com/2018/08/openstack-tripleo-architecture-step-guide-install-undercloud-overcloud-heat-template.html#Introspection
1) Create a tenant (project) and 4 users: 2 in Project A and 2 in Project B (one admin and one regular member each); each user has a username and email ---> CL110 (Chapter 2, Organizing People and Resources)
*********************************************************************
https://www.tuxfixer.com/create-tenant-in-openstack-newton-using-command-line-interface/#more-3226
[root@allinone ~(keystone_admin)]#
****************************************************
# source ~/admin-openrc
# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Upload the image to the Image service using the QCOW2 disk format, bare container format, and public
visibility so all projects can access it:
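A minimal sketch of that upload command (the image name cirros is an assumption; the flags follow the description above):

```shell
# Upload the Cirros image: QCOW2 disk format, bare container
# format, public visibility (image name is an assumption).
openstack image create --disk-format qcow2 --container-format bare \
    --public --file cirros-0.3.4-x86_64-disk.img cirros
```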
************************************
Security Groups control network access to / from instances inside the tenant.
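The group shown in the output below would be created along these lines (a sketch; the project and description are inferred from the output fields):

```shell
# Create the tux_sec security group in the tuxfixer project
# (names inferred from the output that follows).
openstack security group create --project tuxfixer \
    --description tux_sec tux_sec
```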
+-----------------+----------------------------------+
| Field | Value |
+-----------------+----------------------------------+
| created_at | 2017-01-06T18:31:27Z |
| description | tux_sec |
Add rule for sigma_sec group to permit incoming ICMP ECHO (ping):
[root@allinone ~(keystone_admin)]# openstack security group rule create \
    --protocol icmp --ingress --project sigma sigma_sec
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-01-06T20:02:17Z |
[root@allinone ~(keystone_admin)]# openstack security group rule create \
    --protocol tcp --dst-port 22 --ingress --project sigma sigma_sec
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-01-06T20:04:12Z |
****************************
1. Click “Compute” under the “Project” option in the Horizon left-hand menu.
2. Select “Access & Security”.
3. Click the “Key Pairs” tab.
4. Click “+Create Key Pair”.
5. Name your new key pair and click “Create Key Pair”.
6. The new key pair will automatically download to your local machine; make sure you don’t lose
it, or you won’t be able to access the new instance.
7. Click Access & Security again to see your new key pair.
You can also create a key pair manually and import it, or import an existing public key, by clicking the “Import Key Pair” button and adding it to the form.
Add a key pair to an instance
To add a key pair to an instance, you need to specify it when you’re launching the instance.
4. Now you can use the key pair to connect to the instances created using this key pair:
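For example, assuming the downloaded private key file is named myKey.pem and the instance's floating IP is 192.168.2.71 (both assumptions), and that the image's default login user is cirros:

```shell
# Restrict key permissions, then log in with the key pair.
chmod 600 myKey.pem
ssh -i myKey.pem cirros@192.168.2.71
```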
******************************
flavor — 300
image — afa49adf-2831-4a00-9c57-afe1624d5557
keypair — myKey
security group — 29acef25-b59f-43a0-babf-6a5bb5cc7bed
servername — You can name it anything you like, but in this example myNewServer will be used.
[user@localhost]$ openstack server create --flavor 300 \
    --image afa49adf-2831-4a00-9c57-afe1624d5557 --key-name myKey \
    --security-group 29acef25-b59f-43a0-babf-6a5bb5cc7bed myNewServer
************************************************************
*********************
Create public / provider network subnet named pub_subnet with specified CIDR, Gateway and IP pool
range:
Note: we specified here an allocation pool of OpenStack IP addresses (192.168.2.70 – 192.168.2.80) for the public network, because we can’t use the whole IP range.
[root@allinone ~(keystone_admin)]# openstack subnet create \
    --subnet-range 192.168.2.0/24 --no-dhcp --gateway 192.168.2.1 \
    --network pub_net --allocation-pool start=192.168.2.70,end=192.168.2.80 pub_subnet
3.3 ) - Create a router in one project
************************
Now we need to create a router named sigma_router to connect the tenant network with the public network:
Now set the gateway for sigma_router to our public / provider network pub_net (connect sigma_router to pub_net).
Note: if you have problems with the above command (openstack router set: error: unrecognized arguments: --external-gateway), use the neutron command instead:
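Put together, the router steps might look like this (the tenant subnet name sigma_subnet is an assumption):

```shell
# Create the router, attach the tenant subnet, set the external gateway.
openstack router create sigma_router
openstack router add subnet sigma_router sigma_subnet
openstack router set --external-gateway pub_net sigma_router
# Fallback for clients that reject --external-gateway:
neutron router-gateway-set sigma_router pub_net
```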
...output omitted...
+---------------+---------------------+---------------+
| ID | Name | Subnets |
+---------------+---------------------+---------------+
| b0b7(...)0db4 | finance-network1 | a29f(...)855e |
1.6. Verify that the developer1-keypair1 key pair, and its associated file located at
+---------------------+-----------------+
| Name | Fingerprint |
+---------------------+-----------------+
| developer1-keypair1 | cc:59(...)0f:f9 |
+---------------------+-----------------+
...output omitted...
1.8. Verify the status of the finance-web1 instance. The instance status will be Active.
+--------+--------------+
| Field | Value |
+--------+--------------+
| name | finance-web1 |
| status | Active |
+--------+--------------+
4.2 ) - Create the network and security group for the instance we create in one project
**************************************************************
************************************************
4.4 ) - The instance must be reachable over SSH (requires a PEM file, key pair, and floating IP)
***********************************************************************
Steps:
[root@allinone ~(keystone_admin)]#
+-------------+---------------------------------------+
| Field | Value |
+-------------+---------------------------------------+
Now create user named sigma with password sigma123 and assign it to project sigma:
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| email | admin@sigma.com |
| enabled | True |
unset OS_SERVICE_TOKEN
export OS_USERNAME=sigma
export OS_PASSWORD=sigma123
export OS_AUTH_URL=http://192.168.2.26:5000/v2.0
export OS_TENANT_NAME=sigma
export OS_REGION_NAME=RegionOne
Create public / provider network pub_net (hence external flag) available for all tenants including
admin (shared network):
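The create command for such a network might look like this (the provider network type flat and physical network name extnet are assumptions about the underlying setup):

```shell
# External, shared provider network visible to all tenants.
openstack network create --external --share \
    --provider-network-type flat \
    --provider-physical-network extnet pub_net
```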
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | 192.168.2.70-192.168.2.80 |
Note: we specified here the allocation pool of OpenStack IP addresses (192.168.2.70 – 192.168.2.80)
for public network
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
Create private / tenant network subnet named tux_subnet for project tuxfixer with specified CIDR, Gateway, and DHCP:
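A sketch of the subnet command (the tenant network name tux_net is an assumption; the CIDR matches the allocation pool shown below):

```shell
# DHCP-enabled tenant subnet; the gateway defaults to 192.168.20.1.
openstack subnet create --subnet-range 192.168.20.0/24 --dhcp \
    --network tux_net --project tuxfixer tux_subnet
```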
| Field | Value |
+-------------------+---------------------------------------------------------+
| allocation_pools | 192.168.20.2-192.168.20.254 |
Now we need to create a router named tux_router to connect the tenant network with the public network:
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
Now set the gateway for tux_router to our public / provider network pub_net (connect tux_router to pub_net).
Note: if you have problems with the above command (openstack router set: error: unrecognized arguments: --external-gateway), use the neutron command instead:
OpenStack by default comes with a couple of predefined flavors for use with newly created instances:
...output omitted...
In many cases these flavors are sufficient, but we will create our ultra small flavor named m2.tiny (1
vCPU, 128MB RAM, 1GB Disk) for use with Cirros images:
[root@allinone ~(keystone_admin)]# openstack flavor create --public \
    --vcpus 1 --ram 128 --disk 1 --id 6 m2.tiny
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
...output omitted...
8. Create OpenStack image
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
We need to assign floating IPs for the new instances to be accessible from the public / external network. Unlike the previous commands, which we could execute as the admin user, to assign floating IPs in the tuxfixer project we need to source the keystonerc_tuxfixer file:
[root@allinone ~(keystone_tuxfixer)]#
Create / assign two floating IPs for the tuxfixer project:
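The two floating IPs shown below can be requested from the external network, for example:

```shell
# Each call allocates one floating IP from pub_net to the
# currently sourced project.
openstack floating ip create pub_net
openstack floating ip create pub_net
```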
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-01-06T19:46:29Z |
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-01-06T19:47:05Z |
We now have everything needed to launch two instances (cirros_inst_1, cirros_inst_2) based on the Cirros image and the m2.tiny flavor inside the tuxfixer project:
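The launch commands might look like this (the key pair name tux_key and image name cirros are assumptions; the flavor, security group, and network names come from the steps above):

```shell
openstack server create --flavor m2.tiny --image cirros \
    --security-group tux_sec --key-name tux_key \
    --nic net-id=tux_net cirros_inst_1
openstack server create --flavor m2.tiny --image cirros \
    --security-group tux_sec --key-name tux_key \
    --nic net-id=tux_net cirros_inst_2
```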
+--------------------------------------+-----------------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
+--------------------------------------+-----------------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
Now it’s time to test our instances. We need to connect to both instances from public / external
network (i.e. from some machine in external network) to test their connectivity via floating IPs.
...
...
Connect to the floating IP of cirros_inst_1 instance (192.168.2.71) from computer in public network:
cirros@192.168.2.71's password:
$ hostname
cirros-inst-1
cirros@192.168.2.78's password:
$ hostname
cirros-inst-2
************************************************************
****************************************
2. Select the volume’s Edit Attachments action. If the volume is not attached to an instance, the Attach
To Instance drop-down list is visible.
3. From the Attach To Instance list, select the instance to which you wish to attach the volume.
=====================================
3. Provide a Snapshot Name for the snapshot and click Create a Volume Snapshot. The Volume
Snapshots tab displays all snapshots.
8. Troubleshoot a stack from an existing stack that has already been created
******************************************************
Make sure the service and RPM (for example, sftpd) are installed and running → CL210 Chapter 3, page 95
*****************************************************************
This ensures that the required packages are installed on workstation, and provisions the
environment with a public network, a private network, a private key, and security rules to
access the instance.
Steps
1. From workstation, retrieve the osp-small.qcow2 image from http://materials.example.com/osp-small.qcow2 and save it as /home/student/finance-rhel-db.qcow2.
5. Because there was no output, ensure the mariadb service was enabled.
><fs> command "systemctl is-enabled mariadb"
enabled
6. Ensure the SELinux contexts for all affected files are correct.
Important
Files modified from inside the guestfish tool are written without a valid SELinux context. Failure to relabel critical modified files, such as /etc/passwd, will result in an unusable image, since SELinux properly denies access to files with an improper context during the boot process.
Although a relabel can be configured using touch /.autorelabel from within guestfish, this would be persistent on the image, resulting in a relabel being performed on every boot for every instance deployed using this image. Instead, the following step performs the relabel just once, right now.
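One way to perform that one-time relabel from the host, assuming libguestfs-tools is installed and the image file is the one retrieved earlier, is virt-customize:

```shell
# Relabel the whole file system once, offline, without leaving
# /.autorelabel on the image.
virt-customize -a /home/student/finance-rhel-db.qcow2 --selinux-relabel
```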
11. Use ssh to connect to the finance-db1 instance. Ensure the mariadb-server package is
installed, and that the mariadb service is enabled and running.
11.3. Confirm that the mariadb service is enabled and running, and then log out.
16. List the available floating IP addresses, and allocate one to finance-mail1.
17. Use ssh to connect to the finance-mail1 instance. Ensure the postfix service is
running, that postfix is listening on all interfaces, and that the relay_host directive is
correct.
17.1. Log in to the finance-mail1 instance using ~/developer1-keypair1.pem with
ssh.
17.6. Return to workstation. Use the mail command to confirm that the test email arrived.
[cloud-user@finance-mail1 ~]$ exit
[student@workstation ~]$ mail
Heirloom Mail version 12.5 7/5/10. Type ? for help.
"/var/spool/mail/student": 1 message 1 new
>N 1 Cloud User Mon May 29 01:18 22/979 "Test"
&q
Building and Customizing Images
------------------------------------------------
This ensures that the required packages are installed on workstation, and provisions the
environment with a public network, a private network, a key pair, and security rules to access
the instance.
Steps
2. Create a copy of the diskimage-builder elements directory to work with in the /home/student/ directory.
3. Create a post-install.d directory under the working copy of the rhel7 element.
4. Add a script under the rhel7/post-install.d directory to enable the httpd service.
ELEMENTS_PATH /home/student/elements
10. List the available floating IP addresses, and then allocate one to production-web1.
12. From workstation, confirm that the custom web page, displayed from production-web1, contains the text production-rhel-web.
Evaluation
Solution
In this lab, you will build a disk image using diskimage-builder, and then modify it using
guestfish.
Outcomes
You will be able to:
• Build an image using diskimage-builder.
• Customize the image using the guestfish command.
• Upload the image to the OpenStack image service.
• Spawn an instance using the customized image.
Steps
1. From workstation, retrieve the osp-small.qcow2 image from http://materials.example.com/osp-small.qcow2 and save it in the /home/student/ directory.
2. Create a copy of the diskimage-builder elements directory to work with in the /home/student/ directory.
3. Create a post-install.d directory under the working copy of the rhel7 element.
4. Add a script under the rhel7/post-install.d directory to enable the httpd service.
[student@workstation post-install.d]$ cd
[student@workstation ~]$
Variable Content
NODE_DIST rhel7
DIB_LOCAL_IMAGE /home/student/osp-small.qcow2
DIB_YUM_REPO_CONF "/etc/yum.repos.d/openstack.repo"
ELEMENTS_PATH /home/student/elements
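With those variables exported, the image build itself could be sketched as follows (the element list vm rhel7 and the output name are assumptions):

```shell
export NODE_DIST=rhel7
export DIB_LOCAL_IMAGE=/home/student/osp-small.qcow2
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/openstack.repo"
export ELEMENTS_PATH=/home/student/elements
# Build a qcow2 image from the rhel7 element tree.
disk-image-create vm rhel7 -o /home/student/production-rhel-web
```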
7.3. Edit the /var/www/html/index.html file and include the required key words.
7.4. To ensure the new index page works with SELinux in enforcing mode, restore the /var/www/ directory context (including the index.html file).
name production-web1
10. List the available floating IP addresses, and then allocate one to production-web1.
10.1. List the floating IPs. Available IP addresses have the Port attribute set to None.
[student@workstation ~(operator1-production)]$ openstack floating ip list -c "Floating IP Address" -c Port
+---------------------+------+
| Floating IP Address | Port |
+---------------------+------+
| 172.25.250.P        | None |
+---------------------+------+
12. From workstation, confirm that the custom web page, displayed from production-web1, contains the text production-rhel-web.
*******************************************************
https://access.redhat.com/articles/1167113
To work around this, manually create the two required RabbitMQ configuration files. These files, along
with their required default contents, are as follows:
/etc/rabbitmq/rabbitmq.config
]}
].
% EOF
/etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_PORT=5672
1. Create a RabbitMQ user account for the Block Storage, Compute, OpenStack Networking,
Orchestration, Image, and Telemetry services:
2. Next, grant each of these RabbitMQ users read/write permissions to all resources:
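For a single service user, the pair of commands looks like this (the password is a placeholder; repeat for nova, neutron, glance, heat, and ceilometer):

```shell
rabbitmqctl add_user cinder CINDER_PASS
# Grant configure, write, and read permissions on all resources.
rabbitmqctl set_permissions cinder ".*" ".*" ".*"
```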
3. The OpenStack services require a restart to apply the new permissions. This step is
performed later in Section 12, “Finalize Migration to RabbitMQ”. Once the OpenStack
services have been restarted, you can verify that the permissions were correctly applied
using the list_permissions subcommand on the Messaging server:
# rabbitmqctl list_permissions
Listing permissions in vhost "/" ...
ceilometer .* .* .*
cinder .* .* .*
glance .* .* .*
guest .* .* .*
heat .* .* .*
neutron .* .* .*
nova .* .* .*
11 ) Create a Swift container in one project --> Managing Object Storage, CL210 Chapter 4, page 135
***************************************************************************
• Upload an object to the OpenStack object storage service.
• Download an object from the OpenStack object storage service to an instance.
Outcomes
You should be able to:
• Upload an object to the OpenStack object storage service.
• Download an object from the OpenStack object storage service to an instance.
Steps
1. Create a 10MB file named dataset.dat. As the developer1 user, create a container called
container1 in the OpenStack object storage service. Upload the dataset.dat file to this
container.
1.2. Load the credentials for the developer1 user. This user has been configured by the lab
script with the role swiftoperator.
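The sequence up to this point might be sketched as follows (file creation via dd is an assumption; container and object names come from the step above):

```shell
# Create a 10 MB file, then create the container and upload.
dd if=/dev/zero of=dataset.dat bs=1M count=10
openstack container create container1
openstack object create container1 dataset.dat
```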
+-------------+------------+----------------------------------+
| object      | container  | etag                             |
+-------------+------------+----------------------------------+
| dataset.dat | container1 | f1c9645dbc14efddc7d8a322685f26eb |
+-------------+------------+----------------------------------+
2. Download the dataset.dat object to the finance-web1 instance created by the lab script.
2.1. Verify that the finance-web1 instance's status is ACTIVE. Verify the floating IP address associated
with the instance.
2.2. Copy the credentials file for the developer1 user to the finance-web1 instance. Use the cloud-user
user and the /home/student/developer1-keypair1.pem key file.
2.3. Log in to the finance-web1 instance using cloud-user as the user and the
/home/student/developer1-keypair1.pem key file.
2.5. Download the dataset.dat object from the object storage service.
2.6. Verify that the dataset.dat object has been downloaded. When done, log out from the instance.
12 ) (Scaling Compute Nodes) --> Deploy Second Compute, Chapter 6, pages 264 and 294
************************************************************
********************************************************
https://www.golinuxhub.com/2018/08/openstack-tripleo-architecture-step-guide-install-undercloud-
overcloud-heat-template.html#Introspection
Solution
In this lab, you will add compute nodes, manage shared storage, and perform instance live
migration.
Outcomes
You should be able to:
• Add a compute node.
• Configure shared storage.
• Live migrate an instance using shared storage.
Steps
1. Use SSH to connect to director as the user stack and source the stackrc credentials file.
[stack@director ~]$ openstack baremetal node list -c Name \
    -c 'Power State' -c 'Provisioning State' -c Maintenance
+-------------+--------------------+-------------+-------------+
| Name | Provisioning State | Power State | Maintenance |
+-------------+--------------------+-------------+-------------+
| controller0 | active | power on | False |
| compute0 | active | power on | False |
| ceph0 | active | power on | False |
| compute1 | available | power off | False |
+-------------+--------------------+-------------+-------------+
2.4. Prior to starting introspection, set the provisioning state for compute1 to manageable.
4. Update the node profile for compute1 to use the compute profile.
[stack@director ~]$ openstack baremetal node set compute1 \
    --property "capabilities=profile:compute,boot_option:local"
5. Edit /home/stack/templates/cl210-environment/00-node-info.yaml to scale to two compute nodes by updating the ComputeCount line as follows.
ComputeCount: 2
If the deploy command does not respond when deploying the new compute node, use this command to add the new compute node:
********output trimmed*********
The overcloud is being deployed with virtual machines in a nested virtual environment
rather than on physical hardware. Race conditions have been observed, which can cause
elements of the deployment to hang inconsistently. During the deploying stage, the image
is uploaded to Glance and transferred to the bare metal Ironic nodes. After it completes, the
node will reboot and the file system is resized by cloud-init. It then moves into the active
provisioning state. The cloud-init issue can cause the overcloud deployment to hang due
to the node's network being unreachable. An automated solution for this issue was not
available when the course was released. However, the following procedure allows you to
manually correct the deployment.
6.1. Open a new terminal on workstation and use SSH to log in to director as the user
stack with redhat as the password. Watch the Bare Metal nodes transition from
available to deploying to wait call-back to deploying to active by using
the openstack baremetal node list command.
6.2. After compute1 has become active, navigate to the Online Lab. Select OPEN
CONSOLE for compute1, and log in as the user root with the password redhat.
8.1. Log into controller0 as heat-admin and switch to the root user.
8.4. Edit /etc/exports with vi to export /var/lib/nova/instances via NFS to compute0 and compute1. Add the following lines to the bottom of the file.
/var/lib/nova/instances 172.25.250.2(rw,sync,fsid=0,no_root_squash)
/var/lib/nova/instances 172.25.250.12(rw,sync,fsid=0,no_root_squash)
9.1. Log into compute0 as heat-admin and switch to the root user.
9.2. Edit /etc/fstab with vi to mount the directory /var/lib/nova/instances exported from controller0. Add the following line to the bottom of the file. Confirm that the entry is on a single line in the file; the two-line display here in the book is due to insufficient width.
9.5. Configure user, group, and vnc_listen in /etc/libvirt/qemu.conf. Add the following lines to the bottom of the file.
user="root"
group="root"
vnc_listen="0.0.0.0"
9.6. Configure /etc/nova/nova.conf virtual disk storage and other properties for live migration. Use the NFS-mounted /var/lib/nova/instances directory to store instance virtual disks.
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf \
libvirt images_type default
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT instances_path /var/lib/nova/instances
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT novncproxy_base_url http://172.25.250.1:6080/vnc_auto.html
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT vncserver_listen 0.0.0.0
[root@overcloud-compute-0 ~]# openstack-config --set /etc/nova/nova.conf \
DEFAULT live_migration_flag \
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
10.1. Log into compute1 as heat-admin and switch to the root user.
10.5. Configure user, group, and vnc_listen in /etc/libvirt/qemu.conf. Add the following lines to the bottom of the file.
user="root"
group="root"
vnc_listen="0.0.0.0"
10.6. Configure /etc/nova/nova.conf virtual disk storage and other properties for live migration. Use the NFS-mounted /var/lib/nova/instances directory to store instance virtual disks.
11. Launch an instance named production1 as the user operator1 using the following
attributes:
Instance Attributes
Attribute Value
flavor m1.web
key pair operator1-keypair1
network production-network1
image rhel7
security group Production
name production1
12. List the available floating IP addresses, then allocate one to the production1 instance.
12.1. List the floating IPs. An available one has the Port attribute set to None.
[student@workstation ~(operator1-production)]$ openstack floating ip list -c "Floating IP Address" -c Port
+---------------------+------+
| Floating IP Address | Port |
+---------------------+------+
| 172.25.250.P | None |
+---------------------+------+
14.2. Determine whether the instance is currently running on compute0 or compute1. In the example below, the instance is running on compute0, but your instance may be running on compute1. Source the /home/student/operator1-production-rc file to export the operator1 user credentials.
14.3. Prior to migration, ensure compute1 has sufficient resources to host the instance. The example below uses compute1, however you may need to use compute0. The compute node should contain 2 VCPUs, a 56 GB disk, and 2048 MB of available RAM.
14.4. Migrate the instance production1 using shared storage. In the example below, the instance is migrated from compute0 to compute1, but you may need to migrate the instance from compute1 to compute0.
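A hedged sketch of such a migration (the target hostname is an assumption based on the overcloud naming used above):

```shell
# Live migrate over shared (NFS) storage to the named compute node.
openstack server migrate production1 --live \
    overcloud-compute-1.localdomain --shared-migration
```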
15. Verify that the migration of production1 using shared storage was successful.
15.1. Verify that the migration of production1 using shared storage was successful. The example below
displays compute1, but your output may display compute0.
Display metrics (collect information from all OpenStack resources) --> Chapter 8, pages 373 and 382
*****************************************************************************
Solution
In this lab, you will analyze the Telemetry metric data and create an Aodh alarm. You will also set
the alarm to trigger when the maximum CPU utilization of an instance exceeds a threshold value.
Outcomes
• Search and list the metrics available with the Telemetry service for a particular user.
• Create an alarm based on aggregated usage data of a metric, and trigger it.
Steps
1. List all of the instance type telemetry resources accessible by the user operator1.
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| enabled | True |
| id | 4301d0dfcbfb4c50a085d4e8ce7330f6 |
| name | operator1 |
| project_id | a8129485db844db898b8c8f45ddeb258 |
+------------+----------------------------------+
1.2. Use the retrieved user ID to search the resources accessible by the operator1 user.
"user_id": "4301d0dfcbfb4c50a085d4e8ce7330f6",
"type": "instance",
"id": "969b5215-61d0-47c4-aa3d-b9fc89fcd46c"
1.3. Observe that the ID of the resource in the previous output matches the instance ID of the production-
rhel7 instance.
+--------+--------------------------------------+
| Field | Value |
+--------+--------------------------------------+
| id | 969b5215-61d0-47c4-aa3d-b9fc89fcd46c |
| name | production-rhel7 |
| status | ACTIVE |
+--------+--------------------------------------+
2.1. Use the production-rhel7 instance resource ID to list the available metrics. Verify that the cpu_util
metric is listed.
[student@workstation ~(operator1-production)]$ openstack metric resource show 969b5215-61d0-47c4-aa3d-b9fc89fcd46c --type instance
+--------------+---------------------------------------------------------------+
|Field | Value |
+--------------+---------------------------------------------------------------+
|id | 969b5215-61d0-47c4-aa3d-b9fc89fcd46c |
|image_ref | 280887fa-8ca4-43ab-b9b0-eea9bfc6174c |
| | cpu: e410ce36-0dac-4503-8a94-323cf78e7b96 |
| | cpu_util: 6804b83c-aec0-46de-bed5-9cdfd72e9145 |
+--------------+---------------------------------------------------------------+
3. List the available archive policies. Verify that the cpu_util metric of the production-rhel7 instance uses the archive policy named low.
3.1. List the available archive policies and their supported aggregation methods.
+--------+------------------------------------------------+
| name | aggregation_methods |
+--------+------------------------------------------------+
+--------+------------------------------------------------+
3.2. View the definition of the low archive policy.
+------------+---------------------------------------------------------------+
| Field | Value |
+------------+---------------------------------------------------------------+
| name | low |
+------------+---------------------------------------------------------------+
3.3. Use the resource ID of the production-rhel7 instance to check which archive policy is in use for the
cpu_util metric.
+---------------------+-------+
| Field | Value |
+---------------------+-------+
| archive_policy/name | low |
+---------------------+-------+
3.4. View the measures collected for the cpu_util metric associated with the production-rhel7 instance to
ensure that it uses granularities according to the definition of the low archive policy.
+---------------------------+-------------+----------------+
| timestamp | granularity | value |
+---------------------------+-------------+----------------+
+---------------------------+-------------+----------------+
4. Add new measures to the cpu_util metric. Observe that the newly added measures are available using the min and max aggregation methods. Use the values from the following table. The measures must be added using the architect1 user's credentials, because manipulating data points requires an account with the admin role.
Measures Parameters
Credentials: the architect1 user
Measure values: 30 and 42 (manual data values added to the cpu_util metric)
4.1. Source architect1 user's credential file. Add 30 and 42 as new measure values.
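A sketch of adding those measures (the timestamp expression is an assumption; the resource ID and metric name come from the earlier steps):

```shell
NOW=$(date -u +%Y-%m-%dT%H:%M:%S)
openstack metric measures add \
    --resource-id 969b5215-61d0-47c4-aa3d-b9fc89fcd46c \
    -m "${NOW}@30" cpu_util
openstack metric measures add \
    --resource-id 969b5215-61d0-47c4-aa3d-b9fc89fcd46c \
    -m "${NOW}@42" cpu_util
```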
4.2. Verify that the new measures have been successfully added for the cpu_util metric.
Force the aggregation of all known measures. The default aggregation method is mean,
so you will see a value of 36 (the mean of 30 and 42). The number of records and their
...output omitted...
4.3. Display the maximum and minimum values for the cpu_util metric measure.
...output omitted...
[student@workstation ~(architect1-production)]$ openstack metric measures show --resource-id 969b5215-61d0-47c4-aa3d-b9fc89fcd46c cpu_util --refresh --aggregation min
...output omitted...
resources. Set the alarm to trigger when maximum CPU utilization for the production-rhel7 instance exceeds 50% for two consecutive 5-minute periods.
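The alarm whose fields appear below could be created along these lines (a sketch; the flag values are taken from the output fields and earlier steps):

```shell
openstack alarm create --name cputhreshold-alarm \
    --type gnocchi_resources_threshold \
    --metric cpu_util --threshold 50 \
    --aggregation-method max --comparison-operator ge \
    --granularity 300 --evaluation-periods 2 \
    --resource-type instance \
    --resource-id 969b5215-61d0-47c4-aa3d-b9fc89fcd46c \
    --alarm-action 'log://'
```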
+--------------------+-------------------------------------------------------+
| Field | Value |
+--------------------+-------------------------------------------------------+
| aggregation_method | max |
| alarm_actions | [u'log://'] |
| alarm_id | f93a2bdc-1ac6-4640-bea8-88195c74fb45 |
| comparison_operator| ge |
| evaluation_periods | 2 |
| granularity | 300 |
| metric | cpu_util |
| name | cputhreshold-alarm |
| ok_actions | [] |
| project_id | ba5b8069596541f2966738ee0fee37de |
+--------------------+-------------------------------------------------------+
5.2. View the newly created alarm. Verify that the state of the alarm is either ok or
insufficient data. According to the alarm definition, data is insufficient until two
evaluation periods have been recorded. Continue with the next step if the state is ok or
insufficient data.
+--------------------+-------+---------+
| name               | state | enabled |
+--------------------+-------+---------+
| cputhreshold-alarm | ok    | True    |
+--------------------+-------+---------+
6. Simulate a high CPU utilization scenario by manually adding new measures to the cpu_util metric.
6.1. Open two terminal windows, either stacked vertically or side by side. The second terminal will be used in subsequent steps to add data points until the alarm triggers. In the first window, use the watch command to repeatedly display the alarm state.
[student@workstation ~(architect-production)]$ watch openstack alarm list -c alarm_id -c name -c state
+--------------------------------------+--------------------+-------+
| alarm_id                             | name               | state |
+--------------------------------------+--------------------+-------+
| 82f0b4b6-5955-4acd-9d2e-2ae4811b8479 | cputhreshold-alarm | ok    |
+--------------------------------------+--------------------+-------+
6.2. In the second terminal window, use the watch command to add new measures to the cpu_util metric to simulate high CPU utilization, since the alarm is set to trigger at 50%. Repeat this command once per minute. Continue to add manual data points at a rate of about one of these commands per minute. Be patient, as the trigger must detect a
6.3. The alarm-evaluator service will detect the new manually added measures. Within
...output omitted...
6.4. After stopping the watch and closing the second terminal, view the alarm history to
analyze when the alarm transitioned from the ok state to the alarm state. The output
"timestamp": "2017-06-08T14:05:53.477088",
},
"timestamp": "2017-06-08T13:18:53.356979",
},
"timestamp": "2017-06-08T13:15:53.338924",
\"insufficient data\"}"
},