
OpenStack Reference Architecture
4/11/2013
SolidFire Legal Notices
The software described in this user guide is furnished under a license agreement and may be used only in accordance
with the terms of the agreement.

Copyright Notice
Copyright © 2013 SolidFire, Inc.

All Rights Reserved.

Any technical documentation that is made available by SolidFire is the copyrighted work of SolidFire, Inc. and is
owned by SolidFire, Inc.

NO WARRANTY: The technical documentation that is made available by SolidFire, Inc. is delivered AS-IS, and
SolidFire, Inc. makes no warranty as to its accuracy or use.

Trademarks
SolidFire is a U.S. registered trademark.

SolidFire 3010, SolidFire 6010, Element, and SolidFire Helix are U.S. trademarks of SolidFire, Inc.

Patent pending, U.S. Patent and Trademark Office.



Table of Contents
SolidFire Legal Notices
    Copyright Notice
    Trademarks
Goal
Audience
Prerequisites
Summary of Key Findings
Solution Overview
    OpenStack
    The Cinder Service
    SolidFire Storage
    Solution
Infrastructure Components & Design
    OpenStack Keystone Identity Service
        Access Control with Keystone
        Service Users
        Service Catalog
    OpenStack Glance Image Service
    OpenStack Nova Compute Service
    OpenStack Cinder Block Storage Service
Installing and Configuring OpenStack Software
    Preparing OpenStack Service Hosts
        Configuring Network Time Protocol (NTP)
        Configuring the Ubuntu Cloud Archive Aptitude Repository
        Prerequisite Packages
    Cloud Controller Host Components
        MySQL Database & RabbitMQ AMQP Servers
        Keystone
        Glance
        Cinder
        Nova
        OpenStack Horizon (Web Dashboard Service)
    Compute Server Components (nova-compute, nova-network, nova-api-metadata)
Configuration Validation
    Provisioning Volumes
    Testing Hardware Configuration Summary
    Testing Overview
    Success Criteria
    Test Results
        Interoperability Test Results
        Performance Test Results


Goal
The purpose of this document is to outline the configuration of SolidFire storage in an OpenStack Infrastructure-as-a-Service (IaaS) environment, demonstrate basic interoperability, and validate SolidFire Quality of Service (QoS) capabilities.

Audience
This document is intended to assist a solution architect, sales engineer, consultant or IT administrator with basic
configuration and proof of concept efforts. The document assumes the reader has an architectural understanding of
OpenStack and has reviewed related content in the OpenStack documentation.

The OpenStack documentation center can be found at:

http://docs.openstack.org/

Prerequisites
This document assumes that a SolidFire storage solution has already been installed and configured with network
access to the OpenStack compute and service hosts. Additional documentation on the architecture and deployment
steps for a SolidFire storage solution can be obtained by contacting SolidFire or requested through our online support
portal at the following URL:

http://solidfire.com/support/

Summary of Key Findings


From our testing, the SolidFire storage system proved to be fully compatible with OpenStack. The integration was
simple and straightforward, requiring no complex tuning or complicated workarounds.
Key Highlights

 Full SolidFire driver integration with OpenStack Folsom (2012.2) software release

 Configuration with OpenStack Block Storage can be done in less than a minute and requires only four entries in
the cinder.conf file

 Ability to set and maintain true QoS levels on a per-volume basis; performance was consistent across 216
volumes running in parallel

 Scaling of volumes had no negative performance impact



 Able to create, snapshot, and manage SolidFire volumes using OpenStack clients and APIs

 Able to attach SolidFire volumes to Nova Instances

Solution Overview
OpenStack
Launched in 2010, OpenStack is open source software for building clouds. Created to drive industry standards,
accelerate cloud adoption, and put an end to cloud “lock-in”, OpenStack is a common, open platform for both public
and private clouds. The open source cloud operating system enables businesses to manage compute, storage and
networking resources via a self-service portal and APIs at massive scale.

The Cinder Service


Architected to provide traditional block-level storage resources to other OpenStack services, Cinder is ideal for
applications with performance-sensitive workloads. Unlike the Swift object storage service, Cinder presents
persistent block-level storage volumes for use with OpenStack Nova compute instances. The Cinder block storage
service manages the creation, attachment, and detachment of these volumes between a storage system like SolidFire
and different host servers.

SolidFire Storage
Current storage solutions were not designed for the unique challenges presented by large-scale multi-tenant cloud
environments, where performance, quality-of-service, and scale requirements differ from traditional
enterprise settings. A SolidFire storage system is architected specifically to address these issues.

In this clustered, scale-out architecture, the resources of independent nodes are aggregated. Capacity and
performance scale linearly with the addition of each node to the system. The result is a single storage system that
enables cloud providers to scale efficiently, on demand, and without downtime or impact to application performance.

Solution
To build an OpenStack-powered cloud infrastructure, there is only one choice for block storage: SolidFire. No other
vendor can combine the comprehensive integration behind a production ready OpenStack Block Storage deployment,
with the guaranteed performance, high availability and scale necessary for customers to confidently host performance
sensitive applications in their cloud infrastructure.



Infrastructure Components & Design
The OpenStack cloud platform is composed of many services that provide abstracted access to pools of networking,
compute, and storage resources. These services, often functionally independent but complementary to each
other, provide the core for constructing a public or private OpenStack cloud.

The relationships between each core OpenStack component can be visualized per the diagram below.

Figure 1: OpenStack Component Diagram

(Source: http://docs.openstack.org/folsom/openstack-compute/admin/content/conceptual-architecture.html)

Additional documentation for the logical architecture of the OpenStack cloud platform can be found at the following
URL:

http://docs.openstack.org/folsom/openstack-compute/admin/content/logical-architecture.html



OpenStack Keystone Identity Service
The OpenStack Keystone service will provide centralized authentication, identity, and service catalog functionality for
our cloud environment.

Access Control with Keystone

The fundamental structure of access designation in Keystone consists of ‘users’ that access resources (compute
instances, block storage volumes, images, etc.) belonging to a ‘tenant’ organization.

To implement broader access assignments, optional Member and Admin user roles can be used to grant a user
access to resources in additional tenant organizations (e.g. a user with access to more than one tenant
organization, or an admin user with access to all resources across the cloud).

Service Users

In addition to providing administrative and end-user access, Keystone provides user accounts that other
OpenStack services use to interact with it and query user permissions. For example, the Cinder service keeps track
of which tenant organizations own which volumes, but it relies on Keystone to determine whether a given user is
allowed to access the volumes belonging to that tenant organization.

Service Catalog

The Keystone service also provides a catalog of available ‘services’ (e.g. compute, image, volume, etc.) that usually
have one or more ‘endpoints’ registered to provide network access.
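As a sketch of how a client consumes this catalog, the snippet below locates the ‘volume’ endpoint in a Keystone v2.0-style catalog structure. The sample entries, ports, and URLs are illustrative assumptions, not output from a live deployment:

```python
# Hypothetical illustration: locate a service endpoint in a Keystone v2.0
# style service catalog. The catalog entries below are sample data.
def find_endpoint(catalog, service_type, url_kind="publicURL"):
    """Return the first endpoint URL registered for the given service type."""
    for service in catalog:
        if service["type"] == service_type:
            return service["endpoints"][0][url_kind]
    raise LookupError("no endpoint for service type: " + service_type)

sample_catalog = [
    {"type": "compute", "name": "nova",
     "endpoints": [{"publicURL": "http://172.25.100.43:8774/v2/tenant"}]},
    {"type": "volume", "name": "cinder",
     "endpoints": [{"publicURL": "http://172.25.100.43:8776/v1/tenant"}]},
]

print(find_endpoint(sample_catalog, "volume"))
# http://172.25.100.43:8776/v1/tenant
```

Clients such as the nova and cinder command line tools perform an equivalent lookup after authenticating, which is why service endpoints must be registered during the Keystone bootstrap.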

OpenStack Glance Image Service


OpenStack Glance provides an image service for the storage of instance templates (typically a basic operating system
installation or a server preconfigured for a specific application role).

For the purposes of this architecture, standard file system storage for images was sufficient; however, Glance can be
easily reconfigured to use the S3-compatible OpenStack Swift object storage service in larger configurations requiring
more robust image storage.

OpenStack Nova Compute Service


Virtual machine or ‘instance’ deployment will be automated across a pool of hypervisors with the OpenStack Nova
service. This service will additionally manage network resources in our configuration using the nova-network service.

Management Components

The Nova project provides a number of daemons that will be used for a variety of functions. At the core, the nova-api
and nova-scheduler daemons will work with the nova-compute daemons to provide essential compute functions.



The nova-cert daemon will additionally be used to provide certificate management, while the nova-consoleauth and
nova-novncproxy daemons will provide compute instance console connectivity.

The nova-objectstore daemon provides a single-node (not highly available) S3-compatible object store. This
component is considered optional, but an object store is required in order to upload images using the EC2-compatible
endpoint for Nova. This requirement can be satisfied by nova-objectstore, an OpenStack Swift object storage cluster,
or any other S3-compatible object store.

Compute

The nova-compute service is capable of managing compute resources provided by a number of hypervisors. Our own
documented configuration will use the libvirt compute driver to communicate with the KVM hypervisor on each
compute host.

Network

Like a number of the other OpenStack services, the nova-network daemon can leverage a number of different drivers
to implement network deployment features. The default network driver for nova-network, VlanManager, will be used.
VlanManager uses IEEE 802.1Q VLAN tagging to provide layer-2 isolation between tenant networks in the
switching fabric.

Cloud networking configuration scenarios requiring connectivity between tenant networks and to publicly accessible
address spaces can also be satisfied with the Security Groups and Floating IP features included in the nova-network
service.

OpenStack Cinder Block Storage Service


The OpenStack Cinder block storage service provides dynamic provisioning and portability of block storage devices for
Nova instances.

Cinder can utilize a number of storage solutions to provide block storage volumes, but for the purposes of this
architecture the SolidFire volume driver will be used. The SolidFire volume driver also exposes the quality of service
features inherent to SolidFire block storage volumes, providing prescribed performance for every volume.



Installing and Configuring OpenStack Software
Preparing OpenStack Service Hosts

Configuring Network Time Protocol (NTP)

The NTP daemon for Ubuntu 12.04 can be installed from the default aptitude repositories by issuing the following
command at an appropriately privileged prompt:

apt-get install ntp

The default configuration file for the NTP daemon includes a default pool of five Internet NTP servers. While these are
functional for a test or small deployment scenario, it is advantageous to configure and use local network NTP time
sources. Beyond the other network design and security considerations that may justify internal NTP time sources,
without them the aggregate number of NTP requests (and the overall network traffic) sent to the Internet grows
linearly with the number of hosts included in any given architecture.

Alternative NTP servers can be configured by editing the service’s configuration file (/etc/ntp.conf). In our
configuration example, the default servers were removed and two internal servers added via the following
configuration lines:

server 172.25.100.252

server 172.25.100.253

Configuring the Ubuntu Cloud Archive Aptitude Repository

OpenStack components sourced for this reference architecture are provided from the Canonical/Ubuntu Cloud Archive
package repository. This repository provides new OpenStack release content (Folsom and later releases) to Ubuntu
12.04 LTS.

The Ubuntu Cloud Archive can be configured for the Ubuntu 12.04 aptitude package manager by running the following
commands at an appropriately privileged command shell:

apt-get install ubuntu-cloud-keyring

echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main" > /etc/apt/sources.list.d/cloud.list

apt-get update

Alternatively, the Ubuntu Cloud Archive repository can be configured by following the steps listed in the associated
Canonical documentation (https://wiki.ubuntu.com/ServerTeam/CloudArchive). This package repository should be
configured for any host that will be used to run OpenStack components.



Prerequisite Packages

For this documented architecture, the OpenStack services have been configured to connect to a central MySQL
database server. Accordingly, all of the hosts (controller and compute nodes) will need to have the ‘python-mysqldb’
package installed. This package provides the prerequisite Python library that allows most OpenStack services to
connect to a MySQL database server.

This package is installed from the aptitude package manager with the following command:

apt-get install python-mysqldb

Cloud Controller Host Components

MySQL Database & RabbitMQ AMQP Servers

Many of the OpenStack services require messaging and database connections. In this configuration, these
requirements are fulfilled by RabbitMQ and MySQL. These services can be installed from aptitude with the following command:

apt-get install rabbitmq-server mysql-server

Keystone

Installing Software Packages

The Keystone service is contained in a single aptitude package, and was installed with the following command:

apt-get install keystone

keystone.conf (/etc/keystone/keystone.conf)

Configuration Changes

 The admin_token parameter was configured to allow access to the Keystone Service Endpoint

 The connection parameter was configured for a MySQL database connection.

The Keystone service will need to be bootstrapped with information for users, tenants, roles, and services/service
endpoints before other OpenStack services will function as intended. Further detail on this process and overall
Keystone installation steps can be found in the OpenStack documentation at:

http://docs.openstack.org/folsom/openstack-compute/install/apt/content/install-keystone.html

Glance

Installing Software Packages

The Glance service packages can be installed from aptitude with the following command:



apt-get install glance

glance-api.conf (/etc/glance/glance-api.conf)

Configuration Changes

 The sql_connection parameter was configured for a MySQL database connection.

 The [keystone_authtoken] configuration section should be updated with appropriate reference to the
Keystone authentication endpoint and the Glance service user created as part of the Keystone bootstrap
sequence.

glance-registry.conf (/etc/glance/glance-registry.conf)

Configuration Changes

 The ‘sql_connection’ parameter was configured for a MySQL database connection. This should be configured to
use the same database as the glance-api daemon.

 The [keystone_authtoken] configuration section should be updated with appropriate reference to the Keystone
authentication endpoint and the Glance service user created as part of the Keystone bootstrap sequence.

api-paste.ini (/etc/glance/api-paste.ini)

Configuration Changes

 The [filter:authtoken] configuration section should be updated with appropriate reference to the Keystone
authentication endpoint and the Glance service user created as part of the Keystone bootstrap sequence.

Operating system images will need to be uploaded to Glance before Nova compute instances will function as intended.
While SolidFire used operating system images specific to I/O simulation tasks, additional documentation for uploading
images can be found in the OpenStack documentation:

http://docs.openstack.org/folsom/openstack-compute/admin/content/adding-images.html

Cinder

Installing Software Packages

The OpenStack Cinder Block Storage service is composed of three core components: cinder-api, cinder-scheduler,
and cinder-volume. These components can be installed on Ubuntu 12.04 by issuing the following command at a shell
prompt and following the aptitude package manager prompts:

apt-get install cinder-api cinder-scheduler cinder-volume

cinder.conf (/etc/cinder/cinder.conf)

The Cinder configuration file (/etc/cinder/cinder.conf) for the test environment is a comprehensive replacement for the
provided default file. The file is outlined from start to finish in the following configuration examples.



Boilerplate Configuration

[DEFAULT]

rootwrap_config = /etc/cinder/rootwrap.conf

api_paste_config = /etc/cinder/api-paste.ini

state_path = /var/lib/cinder

volumes_dir = /var/lib/cinder/volumes

# Set auth strategy to Keystone

auth_strategy = keystone

# Set MySQL database connection info
# (mysql://username:password@databasehost/databasename)

sql_connection = mysql://cinder:NddyuGUpr9nTGwXF@172.25.100.43/cinder

# Set AMQP messaging bus connection info

rabbit_host=172.25.100.43

Configuring the SolidFire cinder-volume Driver

The volume_driver configuration option enables the cinder-volume service to utilize a number of different storage
solutions to provide block storage in an OpenStack infrastructure. By default, this option leverages a driver for LVM,
provisioning new logical volumes to distribute and virtualize tenant data in a storage pool (LVM volume group).

Before adding the configuration parameters for the SolidFire volume driver, an admin account will need to be created
on the SolidFire storage system. This cluster admin account will be used by the cinder-volume daemon to send API
calls to the SolidFire system to manage volumes.

For the purposes of this reference architecture, the SolidFire volume driver is configured for the cinder-volume
service. As illustrated below, most volume drivers have configuration options of their own; the SolidFire volume driver
requires the options san_ip, san_login, and san_password to be set in order to communicate with the SolidFire API
endpoint. These parameters should be added to the cinder.conf configuration file in order to configure and use the
SolidFire volume driver for OpenStack Cinder.

# Set SolidFire volume driver for cinder-volume

volume_driver=cinder.volume.solidfire.SolidFire

# Set connection information to SolidFire API endpoint for SolidFire volume driver

san_ip=172.25.100.50

san_login=admin

san_password=solidfire



# Allow tenants to set QoS on ‘cinder create’ API

sf_allow_tenant_qos=True
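Behind these options, the cinder-volume driver talks to the cluster over the SolidFire JSON-RPC API using the san_ip, san_login, and san_password values. The sketch below (an illustration, not the driver source) shows the approximate shape of a CreateVolume request with per-volume QoS; the account ID, endpoint path, and enable512e flag are assumptions for the example:

```python
# Hypothetical sketch of the JSON-RPC body the SolidFire volume driver
# builds for a CreateVolume call with per-volume QoS. Parameter names
# follow the SolidFire Element API; values here are illustrative.
import json

def build_create_volume_request(name, account_id, size_gb,
                                min_iops, max_iops, burst_iops):
    """Build a JSON-RPC request body for CreateVolume with a QoS setting."""
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_gb * 1024 ** 3,   # the API takes bytes
            "enable512e": True,                  # 512-byte sector emulation
            "qos": {"minIOPS": min_iops,
                    "maxIOPS": max_iops,
                    "burstIOPS": burst_iops},
        },
        "id": 1,
    }

body = build_create_volume_request("vol-test", 1, 25, 100, 1100, 1200)
# The driver POSTs a body like this to the API endpoint at san_ip,
# authenticating with san_login/san_password.
print(json.dumps(body, indent=2))
```

The qos dictionary in this request is what enforces the per-volume performance levels validated later in this document.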

Nova

Installing Packages

A number of Nova components will be installed on the cloud controller to provide compute resource access as well as
edge functionality such as object storage (for a functional EC2 endpoint) and instance VNC console connectivity.

The Nova components used in our own configuration were installed from the aptitude package manager with the
following command:

apt-get install nova-api nova-cert nova-consoleauth nova-novncproxy nova-objectstore nova-scheduler

nova.conf (/etc/nova/nova.conf)

The reference configuration file used for the Nova services on the cloud controller host is provided below. For context,
this configuration references the cloud controller IP (172.25.100.43) in several locations.

[DEFAULT]

state_path=/var/lib/nova

lock_path=/var/lock/nova

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

ec2_private_dns_show_ip=True

api_paste_config=/etc/nova/api-paste.ini

# LOGS

verbose=True

logdir=/var/log/nova

# SCHEDULER

compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# DATABASE

sql_connection=mysql://nova:8AxNCfBVmBbqdw4S@172.25.100.43/nova

# APIS



enabled_apis=ec2,osapi_compute,metadata

osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions

ec2_dmz_host=172.25.100.43

s3_host=172.25.100.43

# RABBITMQ

rabbit_host=172.25.100.43

rabbit_password=guest

# GLANCE

image_service=nova.image.glance.GlanceImageService

glance_api_servers=172.25.100.43:9292

# NOVNC CONSOLE

novncproxy_base_url=http://172.25.100.43:6080/vnc_auto.html

vncserver_proxyclient_address=172.25.100.43

vncserver_listen=172.25.100.43

# VOLUMES

volume_api_class=nova.volume.cinder.API

auth_strategy=keystone

api-paste.ini (/etc/nova/api-paste.ini)

Configuration Changes

The [filter:authtoken] configuration section should be updated with appropriate reference to the Keystone
authentication endpoint and the Nova service user created as part of the Keystone bootstrap sequence.

OpenStack Horizon (Web Dashboard Service)

The OpenStack Horizon dashboard provides a stateless Django web interface for navigating an OpenStack
infrastructure.

This service can be considered optional for meeting the functional requirements of most OpenStack cloud deployments;
however, it is included here as a convenient interface for accessing VM consoles and performing other basic
administration tasks.

The Horizon dashboard can be installed from aptitude with the following command:

apt-get install openstack-dashboard



Compute Server Components (nova-compute, nova-network, nova-api-metadata)
Three core components will be installed on each ‘compute’ node: nova-api-metadata, nova-compute, and
nova-network. The first will provide configuration metadata for instances (e.g. hostname, SSH key injection), and the
latter two will provide compute resource and network resource management respectively.

Installing Packages

The Nova packages for each of the compute components mentioned earlier, as well as the KVM hypervisor and libvirt
virtualization APIs, can be installed from the package manager with the following command:

apt-get install kvm libvirt-bin nova-compute nova-compute-kvm nova-network nova-api-metadata

nova.conf (/etc/nova/nova.conf)

The nova.conf configuration file for each compute host is provided below. For context, this will reference a number of
the network interfaces on the compute host to direct network traffic appropriately. The configuration will also
reference daemons running on the ‘cloud controller’ host, as well as the IP address that should be used for the local
metadata service and VNC connections on the compute host.

In our own environment the cloud controller is located at 172.25.100.43. The compute host IP address will change
with each compute node, but the compute host IP used in our reference configuration is 172.25.100.28.

[DEFAULT]

state_path=/var/lib/nova

lock_path=/var/lock/nova

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

ec2_private_dns_show_ip=True

api_paste_config=/etc/nova/api-paste.ini

# LOGS

logdir=/var/log/nova

# SCHEDULER

compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

enabled_apis=metadata

# DATABASE



sql_connection=mysql://nova:8AxNCfBVmBbqdw4S@172.25.100.43/nova

# RABBITMQ

rabbit_host=172.25.100.43

rabbit_password=guest

# GLANCE

image_service=nova.image.glance.GlanceImageService

glance_api_servers=172.25.100.43:9292

# NOVNC CONSOLE

novncproxy_base_url=http://172.25.100.43:6080/vnc_auto.html

vncserver_proxyclient_address=172.25.100.28

vncserver_listen=172.25.100.28

# VOLUMES

volume_api_class=nova.volume.cinder.API

auth_strategy=keystone

compute_driver=libvirt.LibvirtDriver

libvirt_type=kvm

metadata_host=172.25.100.28

# NETWORK

network_manager=nova.network.manager.VlanManager

force_dhcp_release=True

dhcpbridge_flagfile=/etc/nova/nova.conf

dhcpbridge=/usr/bin/nova-dhcpbridge

firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

vlan_interface=bond0

public_interface=bond0.1100

network_size=256

flat_injected=False

connection_type=libvirt

multi_host=True



Networking Configuration

An overview of each compute host’s network configuration is displayed in Figure 2. This diagram outlines the data
flow for tenant virtual machines, storage traffic, management traffic, and public network traffic. Each of these
networks will operate independently, with segregated access between them.

Figure 2: Compute Host’s Network Configuration



Configuration Validation
All validation and testing was conducted in-house using the configuration defined in this document. The testing was
focused on demonstrating basic interoperability between OpenStack and SolidFire, as well as validating SolidFire’s QoS
capabilities in an IaaS environment by capturing storage performance metrics during instance creation and under
various levels of load.

Provisioning Volumes
The SolidFire volume driver for the OpenStack Cinder block storage service enables direct access to the underlying
Quality of Service functionality in the SolidFire storage system. This functionality can be accessed by providing the
appropriate metadata and volume information to the Cinder command line client or API.

The SolidFire volume driver in the OpenStack Folsom release provides two methods for defining volume quality of
service settings.

Using pre-defined volume service offerings

The default driver includes four pre-defined quality of service presets as documented below. The existing presets can
be modified or additional presets added by modifying the SolidFire volume driver.

sf-qos            slow    medium    fast    performant

minIOPS (4k)       100       200     500          2000

maxIOPS (4k)       200       400    1000          4000

burstIOPS (4k)     200       400    1000          4000

Volumes can be created with these preset quality of service settings by specifying the appropriate key value pair in
the volume metadata. This is shown below with the Cinder command line client (e.g. 10GB volume, ‘fast’):

cinder create 10 --metadata sf-qos=fast

Using On-the-Fly Quality of Service Definitions

Volumes can also be created by specifying minimum, maximum, and burst IOPS metadata values at volume creation,
as shown below:

cinder create 20 --metadata minIOPS=1000 maxIOPS=10000 burstIOPS=12000

Testing Hardware Configuration Summary


 1 Cloud Controller Host



o 2x Intel E5645 (6 cores @ 2.40GHz, Westmere)

o 24GB RAM

o 2x 1GbE connections for management, tenant, and public network communication

o Configured with Management Server Components

 9 Nova Compute Hosts

o 2x Intel E5645 (6 cores @ 2.40GHz, Westmere)

o 48GB RAM

o 2x 1GbE connections for management, tenant, and public network communication

o 2x 10GbE connections for storage network communication

o Configured with Compute Server Components

 5-Node SolidFire SF3010 Cluster

o 2x 1GbE connections per node for management network communication

o 2x 10GbE connections per node for storage network communication

o ~60TB Usable Capacity

o 15TB Raw Capacity

o 250,000 IOPS
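The cluster figures above follow the linear scale-out model described earlier: each SF3010 node contributes an equal share of raw capacity and IOPS (3TB and 50,000 IOPS per node, derived here from the 5-node totals). A small sketch of that arithmetic:

```python
# Linear scale-out arithmetic implied by the 5-node figures above: each
# SF3010 node contributes an equal share of raw capacity and IOPS.
# Per-node values are derived from the stated cluster totals.
NODES = 5
RAW_TB_TOTAL = 15
IOPS_TOTAL = 250_000

raw_per_node = RAW_TB_TOTAL / NODES     # 3.0 TB raw per node
iops_per_node = IOPS_TOTAL // NODES     # 50000 IOPS per node

def cluster_capacity(nodes):
    """Project raw capacity (TB) and IOPS for a given node count."""
    return nodes * raw_per_node, nodes * iops_per_node

print(cluster_capacity(5))    # (15.0, 250000)
print(cluster_capacity(10))   # (30.0, 500000)
```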

Testing Overview

Interoperability Test Suite

Interoperability testing for the SolidFire volume driver and all associated functionality is performed for every major
OpenStack release, and at a number of other milestones throughout the development of OpenStack Cinder and the
SolidFire storage solution.

These integration tests are intended primarily to function as a basic integration verification and regression check.
Additional white-box testing is performed when functional modifications are made to the SolidFire volume driver,
with emphasis placed on evaluating new or modified functionality.

Interoperability testing is primarily performed using the devstack development toolkit for OpenStack, which includes
some rudimentary testing scripts in addition to the Tempest integration test suite.

 devstack (http://devstack.org/)

 Tempest (https://github.com/openstack/tempest)



Performance Test Suite

Instance Build Cycle Performance Test

• Duration: ~2 hours

• Run Instance Build Routine 216 times

o Create 1 Instance (‘nova boot’)

- 1 vCPU, 1GB RAM, Ubuntu 12.04

o Create 1 Volume (‘cinder create’)

- minIOPS=100, maxIOPS=1100, burstIOPS=1200

- 25GB

o Attach Volume to Instance (‘nova volume-attach’)

o Run Workload

- vdbench 5.03 RC11

- 4k Block Size

- 100% Random

- 80% Read, 20% Write

- Target 900 IOPS
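The build routine above implies a steady provisioning cadence. Assuming the 216 iterations are spread evenly across the ~2-hour window, the arithmetic works out as:

```python
# Average interval between instance/volume creations during the
# build cycle test (216 iterations over a ~2-hour window).
duration_s = 2 * 60 * 60   # ~2 hours in seconds
iterations = 216

interval = duration_s / iterations
print(round(interval, 1))  # 33.3 -> roughly one new instance+volume every 33 s
```

In other words, the cluster absorbs a new instance boot, volume creation, volume attach, and workload start roughly every half minute for two hours straight.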

Sustained Performance Test

• Duration: ~2 hours

• 216 Instances (same instances from the instance build cycle test)

o 1 vCPU, 1GB RAM, Ubuntu 12.04

o 1 attached volume (/dev/vdb)

- 25GB, minIOPS=100, maxIOPS=1100, burstIOPS=1200

o vdbench 5.03 RC11

- 4k Block Size, 80% Read, 20% Write, 100% Random, Target 900 IOPS
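To put the per-volume vdbench workload in throughput terms, the sketch below converts the configuration above (4 KiB blocks at the 900 IOPS target, 80/20 read/write mix) into per-volume rates:

```python
# Per-volume I/O and throughput implied by the vdbench configuration
# above: 4 KiB blocks, 900 IOPS target, 80% read / 20% write.
iops_target = 900
block_kib = 4
read_pct = 80

reads = iops_target * read_pct // 100   # read IOPS per volume
writes = iops_target - reads            # write IOPS per volume
mib_s = iops_target * block_kib / 1024  # per-volume throughput in MiB/s
print(reads, writes, round(mib_s, 2))   # 720 180 3.52
```

Across all 216 volumes this corresponds to roughly 759 MiB/s of aggregate small-block throughput at the target rate.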

Success Criteria

Interoperability

The following volume actions are tested during volume driver integration testing:

• Create Cinder Volume

• Attach Cinder Volume to Nova Instance

• Create Filesystem on Attached Volume & Test Data Transfer

• Create Snapshot

• Create Volume from Snapshot

• Delete Snapshot

• Delete Volume

Performance

1. Performance Consistency - Average volume performance remains within 5% (45 IOPS) of the target (900
IOPS) for the duration of the test cycle.

2. Performance Scalability - Individual sample volume performance remains within 15% (135 IOPS) of the target
(900 IOPS) regardless of the current volume count in the test cycle (between 1 and 216 workloads).
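The tolerance bands in the criteria above follow directly from the 900 IOPS target; a quick sketch verifying the stated absolute bounds and the resulting acceptance windows:

```python
# Derive the absolute tolerance bands stated in the success criteria
# from the 900 IOPS per-volume target.
target_iops = 900

consistency_band = target_iops * 5 // 100    # criterion 1: 5% of target
scalability_band = target_iops * 15 // 100   # criterion 2: 15% of target

print(consistency_band, scalability_band)    # 45 135

# Acceptance windows implied by each criterion:
print(target_iops - consistency_band, target_iops + consistency_band)  # 855 945
print(target_iops - scalability_band, target_iops + scalability_band)  # 765 1035
```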

Test Results

Interoperability Test Results

Test                                       Result
--------------------------------------     ------
Create Cinder Volume                       PASS
Attach Cinder Volume to Nova Instance      PASS
Create Filesystem & Test Data Transfer     PASS
Create Snapshot                            PASS
Create Volume from Snapshot                PASS
Delete Snapshot                            PASS
Delete Volume                              PASS

Performance Test Results

I/O statistics for the entire duration of the instance build cycle and sustained performance tests can be viewed below.
Statistics were collected for all volumes approximately every 2 seconds and stored at a resolution of 10 seconds
(roughly 5 samples averaged into each stored data point).

Data is displayed in the graphs below in two contexts. The first is a performance overview that provides minimum,
maximum, and average IO statistics collected from all test volumes. The second is a more detailed view that shows
the full I/O statistics throughout the test cycle for three volumes. These three sample volumes were strategically
chosen to display the relative performance of volumes created in differing load scenarios throughout the instance build
cycle.

Additionally, data is displayed as an overview of all tests (full 4 hours), as well as snapshots at early and late points in
both the instance build cycle and sustained performance evaluations.

Instance Build Cycle & Sustained Performance Test Summary – All Volumes
The following display (Figure 3) provides a visual representation of average performance activity over the course of
the performance test suite execution. The instance build cycle test can be viewed in the period from ~0:00 to ~2:00,
and the sustained performance test continues afterward from ~2:00 to ~4:00.

During these periods, the maximum, minimum, and average volume performance metrics are displayed. All 216 test
volumes are considered in the calculation of each metric.

Figure 3: SolidFire OpenStack Sustained Performance Test – 216 Volumes – Performance Summary

Instance Build Cycle & Sustained Performance Test Summary – Sample
Volumes Detail
The following display (Figure 4) provides a visual representation of detailed performance activity over the course of
the performance test suite execution. The instance build cycle test can be viewed in the period from ~0:00 to ~2:00,
and the sustained performance test continues afterward from ~2:00 to ~4:00.

During these periods, real-time performance metrics are displayed for three volumes created at the beginning, middle,
and end of the instance build cycle, respectively. This allows the reader to visualize the performance consistency
displayed by each volume despite the presence of other workloads and aggregate system load fluctuations throughout
the instance build cycle. This volume performance consistency can be attributed to SolidFire’s Quality of Service
controls, which maintain per-volume performance independent of these external factors.

Figure 4: SolidFire OpenStack Sustained Performance Test – Individual Sample Volumes – Performance Detail

Performance Test Summary

Test                                                                                    Result
------------------------------------------------------------------------------------    ------
Average volume performance within 5% (45 IOPS) of target (900 IOPS) for test duration    PASS
Sample volume performance within 15% (135 IOPS) of target (900 IOPS) for test duration   PASS

1620 Pearl Street, Suite 200
Boulder, Colorado 80302

Phone: 720.523.3278
Email: info@solidfire.com

www.solidfire.com

4/11/2013
