
Intel Open Network Platform Server

Reference Architecture (Release 1.2)


NFV/SDN Solutions with Intel Open Network Platform Server

Document Revision 1.2


December 2014

Intel ONP Server Reference Architecture Solutions Guide

Revision History

Revision 1.2, December 15, 2014:
  Document prepared for release 1.2 of Intel Open Network Platform Server.
Revision 1.1.1, October 29, 2014:
  Changed two links to the following:
  https://01.org/sites/default/files/page/vbng-scripts.tgz
  https://01.org/sites/default/files/page/qat_patches_netkeyshim.zip
Revision 1.1, September 18, 2014:
  Minor edits throughout document.
Revision 1.0, August 21, 2014:
  Initial document for release of Intel Open Network Platform Server 1.1.


Contents
1.0 Audience and Purpose
2.0 Summary
    2.1 Network Services Examples
        2.1.1 Suricata (Next Generation IDS/IPS engine)
        2.1.2 vBNG (Broadband Network Gateway)
3.0 Hardware Components
4.0 Software Versions
    4.1 Obtaining Software Ingredients
5.0 Installation and Configuration Guide
    5.1 Instructions Common to Compute and Controller Nodes
        5.1.1 BIOS Settings
        5.1.2 Operating System Installation and Configuration
    5.2 Controller Node Setup
        5.2.1 OpenStack (Juno)
    5.3 Compute Node Setup
        5.3.1 Host Configuration
    5.4 vIPS
        5.4.1 Network Configuration for non-vIPS Guests
6.0 Testing the Setup
    6.1 Preparation with OpenStack
        6.1.1 Deploying Virtual Machines
        6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
    6.2 Using OpenDaylight
        6.2.1 Preparing the OpenDaylight Controller
    6.3 Border Network Gateway
        6.3.1 Installation and Configuration Inside the VM
        6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)
        6.3.3 Extra Preparations on the Compute Node
Appendix A Additional OpenDaylight Information
    A.1 Create VMs using DevStack Horizon GUI
Appendix B BNG as an Appliance
Appendix C Glossary
Appendix D References


1.0 Audience and Purpose

The primary audiences for this document are architects and engineers implementing the Intel Open
Network Platform Server Reference Architecture using Open Source software. Software ingredients
include:
DevStack*
OpenStack*
OpenDaylight*
Data Plane Development Kit (DPDK)*
Intel DPDK Accelerated vSwitch
Open vSwitch*
Fedora 20*
This document provides a guide for integration and performance characterization using the Intel Open
Network Platform Server (Intel ONP Server). Content includes high-level architecture, setup and
configuration procedures, integration learnings, and a set of baseline performance data. This
information is intended to help architects and engineers evaluate Network Function Virtualization (NFV)
and Software Defined Network (SDN) solutions.
An understanding of system performance is required to develop solutions that meet the demanding
requirements of the telecom industry and transform telecom networks. Workload examples are
described and are useful for evaluating other NFV workloads.
Ingredient versions, integration procedures, configuration parameters, and test methodologies all
influence performance. The performance data provided here does not represent best possible
performance, but rather provides a baseline of what is possible using out-of-box open source software
ingredients.
The purpose of documenting configurations is not to imply any preferred methods; however, providing
a baseline configuration of well-tested procedures can help to achieve optimal system performance
when developing an NFV/SDN solution.


2.0 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization
with the latest Intel Architecture Communications Platform.
This document describes how to set up and configure the controller and compute nodes for evaluating
and developing NFV/SDN solutions using the Intel Open Network Platform ingredients.
Platform hardware is based on an Intel Xeon DP Server with the following:
Intel Xeon Processor Series E5-2697 V3
Intel 82599 10 GbE Controller
The host operating system is Fedora* 20 with Qemu-kvm virtualization technology. Software
ingredients include Data Plane Development Kit (DPDK), Open vSwitch, Intel DPDK Accelerated
vSwitch, OpenStack, and OpenDaylight.

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients


Figure 2-2 shows a generic SDN/NFV setup. In this configuration, the Orchestrator and Controller
(management and control plane) and the compute nodes (data plane) run on different server nodes.
Note that many variations of this setup can be deployed.

Figure 2-2 Generic Setup with Controller and Two Compute Nodes

The test cases described in this document were designed to illustrate certain baseline performance and
functionality using the specified ingredients, configurations, and specific test methodology. A simple
network topology was used, as shown in Figure 2-2.
Test cases are designed to:
Baseline packet processing (such as data plane) performance with host and VM configurations.
Verify communication between controller and compute nodes.
Validate basic controller functionality.


2.1 Network Services Examples

The following examples of network services are included as use-cases that have been tested with the
Intel Open Network Platform Server Reference Architecture.

2.1.1 Suricata (Next Generation IDS/IPS engine)

Suricata is a high performance Network IDS, IPS, and Network Security Monitoring engine developed
by the OISF, its supporting vendors, and the community.
http://suricata-ids.org/

2.1.2 vBNG (Broadband Network Gateway)

Intel Data Plane Performance Demonstrators Border Network Gateway (BNG) using DPDK.
https://01.org/intel-data-plane-performance-demonstrators/downloads/bng-application-v013
A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server
(BRAS) and routes traffic to and from broadband remote access devices, such as digital subscriber line
access multiplexers (DSLAM). This network function is included as an example of a workload that can
be virtualized on the Intel ONP Server.
Additional information on the performance characterization of this vBNG implementation can be found
at:
http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf
Refer to Section 6.3, Border Network Gateway, for information on setting up and testing the vBNG
application with the Intel DPDK Accelerated vSwitch, or to Appendix B for more information on running
the BNG as an appliance.


3.0 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Platform
  Description: Intel Server Board 2U 8x3.5 SATA 2x750W 2xHS Rails, Intel R2308GZ4GC
  Notes: Grizzly Pass Xeon DP Server (2 CPU sockets); 240 GB SSD 2.5in SATA 6 Gb/s Intel Wolfsville SSDSC2BB240G401 DC S3500 Series; long product availability.

Processors
  Description: Intel Xeon Processor E5-2680 v2, LGA2011, 2.8 GHz, 25 MB, 115 W, 10 cores
  Notes: Ivy Bridge Socket-R (EP), 10 cores, 2.8 GHz, 115 W, 2.5 MB per-core LLC, 8.0 GT/s QPI, DDR3-1867, HT, turbo.

Cores
  Description: 10 physical cores per CPU
  Notes: 20 hyper-threaded cores per CPU, 40 cores in total.

Memory
  Description: 8 GB 1600 Reg ECC 1.5 V DDR3, Kingston KVR16R11S4/8I (Romley)
  Notes: 64 GB RAM (8x 8 GB).

NICs (82599)
  Description: 2x Intel 82599 10 GbE Controller (Niantic)
  Notes: NICs are on socket zero (3 PCIe slots available on socket 0).

BIOS
  Description: SE5C600.86B.02.01.0002.082220131453, Release Date: 08/22/2013, BIOS Revision: 4.6
  Notes: Intel Virtualization Technology for Directed I/O (Intel VT-d) enabled; hyper-threading enabled.

Table 3-2 Hardware Ingredients (Wildcat Pass)

Platform
  Description: Intel Server Board S2600WTT, 1100 W power supply
  Notes: Wildcat Pass Xeon DP Server (2 CPU sockets); 120 GB SSD 2.5in SATA 6 Gb/s Intel Wolfsville SSDSC2BB120G4.

Processors
  Description: Intel Xeon Processor E5-2697 v3, 2.6 GHz, 35 MB, 145 W, 14 cores
  Notes: Haswell, 14 cores, 2.6 GHz, 145 W, 35 MB total cache per processor, 9.6 GT/s QPI, DDR4-1600/1866/2133.

Cores
  Description: 14 physical cores per CPU
  Notes: 28 hyper-threaded cores per CPU, 56 cores in total.

Memory
  Description: 8 GB DDR4 RDIMM, Crucial CT8G4RFS423
  Notes: 64 GB RAM (8x 8 GB).

NICs (82599)
  Description: 2x Intel 82599 10 GbE Controller (Niantic)
  Notes: NICs are on socket zero.

BIOS
  Description: GRNDSDP1.86B.0038.R01.1409040644, Release Date: 09/04/2014
  Notes: Intel Virtualization Technology for Directed I/O (Intel VT-d) enabled only for the SR-IOV PCI pass-through tests; hyper-threading enabled, but disabled for benchmark testing.


4.0 Software Versions

Table 4-1 Software Versions

Fedora 20 x86_64
  Function: Host OS
  Version/Configuration: 3.15.6-200.fc20.x86_64

Qemu-kvm
  Function: Virtualization technology
  Version/Configuration: Modified QEMU 1.6.2 (bundled with Intel DPDK Accelerated vSwitch)

Data Plane Development Kit (DPDK)
  Function: Network stack bypass and libraries for packet processing; includes user space poll mode drivers
  Version/Configuration: 1.7.1

Intel DPDK Accelerated vSwitch
  Function: vSwitch
  Version/Configuration: v1.2.0 (commit id 6210bb0a6139b20283de115f87aa7a381b04670f)

Open vSwitch
  Function: vSwitch
  Version/Configuration: Open vSwitch 2.3 (commit id b35839f3855e3b812709c6ad1c9278f498aa9935)

OpenStack
  Function: SDN orchestrator
  Version/Configuration: Juno release + Intel patches (openstack_ovdk.l.0.2-907.zip)

DevStack
  Function: Tool for OpenStack deployment
  Version/Configuration: https://github.com/openstack-dev/devstack.git, commit id d6f700db33aeab68916156a98971aef8cfa53a2e

OpenDaylight
  Function: SDN controller
  Version/Configuration: Helium-SR1

Suricata
  Function: IPS application
  Version/Configuration: Suricata v2.0.4 (current Fedora 20 package)

BNG DPPD
  Function: Broadband Network Gateway DPDK Performance Demonstrator application
  Version/Configuration: DPPD v013 (https://01.org/intel-data-plane-performance-demonstrators/downloads)

PktGen
  Function: Software network packet generator
  Version/Configuration: v2.7.7


4.1 Obtaining Software Ingredients

Table 4-2 Software Ingredients

Fedora 20
  Location: http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso
  Comments: Standard Fedora 20 ISO image.

Data Plane Development Kit (DPDK)
  Sub-components: DPDK poll mode driver, sample apps (bundled)
  Location: http://dpdk.org/git/dpdk, commit id 99213f3827bad956d74e2259d06844012ba287a4

Intel DPDK Accelerated vSwitch (OVDK)
  Sub-components: dpdk-ovs, qemu, ovs-db, vswitchd, ovs_client (bundled)
  Location: https://github.com/01org/dpdk-ovs.git, v1.2.0, commit id 6210bb0a6139b20283de115f87aa7a381b04670f
  Comments: All sub-components in one zip file.

Open vSwitch
  Location: https://github.com/openvswitch/ovs.git, commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack
  Comments: Juno release. To be deployed using DevStack (see the following row).

DevStack
  Patches: Patches for DevStack and Nova
  Location: https://github.com/openstack-dev/devstack.git, commit id d6f700db33aeab68916156a98971aef8cfa53a2e. Then apply to that commit the patches in: https://download.01.org/packet-processing/ONPS1.2/openstack_ovdk.l.0.2-907.zip
  Comments: Three patches downloaded as one tarball; then follow the instructions to deploy the nodes.

OpenDaylight
  Location: http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz

Intel ONP Server Release 1.2 Script
  Sub-components: Helper scripts to set up SRT 1.2 using DevStack
  Location: https://download.01.org/packet-processing/ONPS1.2/onps_server_1_2.tar.gz

BNG DPPD
  Sub-components: Broadband Network Gateway DPDK Performance Demonstrator
  Location: https://01.org/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

PktGen
  Sub-components: Software network packet generator
  Location: https://github.com/Pktgen/Pktgen-DPDK.git, commit id 5e8633c99e9771467dc26b64a4ff232c7e9fba2a

BNG Helper scripts
  Sub-components: Configuration scripts for vBNG
  Location: https://01.org/sites/default/files/page/vbng-scripts.tgz

Suricata
  Comments: Package from Fedora 20: yum install suricata


5.0 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and
compute nodes.

5.1 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS
settings and operating system installation. The preferred operating system is Fedora 20, although it
should be relatively straightforward to adapt this solutions guide to other Linux distributions.

5.1.1 BIOS Settings

Table 5-1 BIOS Settings

Configuration | Setting for Controller Node | Setting for Compute Node
Enhanced Intel SpeedStep | Enabled | Disabled
Processor C3 | Disabled | Disabled
Processor C6 | Disabled | Disabled
Intel Virtualization Technology for Directed I/O (Intel VT-d) | Disabled | Enabled (OpenStack NUMA placement only)
Intel Hyper-Threading Technology (HTT) | Enabled | Disabled
MLC Streamer | Enabled | Enabled
MLC Spatial Prefetcher | Enabled | Enabled
DCU Instruction Prefetcher | Enabled | Enabled
Direct Cache Access (DCA) | Enabled | Enabled
CPU Power and Performance Policy | Performance | Performance
Intel Turbo Boost | Enabled | Off
Memory RAS and Performance Configuration -> NUMA Optimized | Enabled | Enabled


5.1.2 Operating System Installation and Configuration

The following are generic instructions for installing and configuring the operating system. Other
installation methods, such as network installation, PXE boot installation, or USB key installation, are
not described in this solutions guide.

5.1.2.1 Getting the Fedora 20 DVD

1. Download the 64-bit Fedora 20 DVD (not Fedora 20 Live Media) from the following site:
http://fedoraproject.org/en/get-fedora#formats
or from the direct URL:
http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso
2. Burn the ISO file to DVD and create an installation disk.

5.1.2.2 Fedora 20 Installation

Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the
following:
1. C Development Tool and Libraries
2. Development Tools
Also create a user named stack and check the box Make this user administrator during the installation.
The stack user is used in the OpenStack installation.
Note:

Please make sure to download and use the onps_server_1_2.tar.gz tarball and start with its
README file. It gives instructions on how to use Intel's scripts to automate most of the
installation steps described in this section, which saves time. When using Intel's scripts, you
can jump to Section 5.4 after installing OpenDaylight as described in Section 6.2.1.

5.1.2.3 Additional Packages Installation and Upgrade

Some packages are not installed with the standard Fedora 20 installation but are required by the Intel
Open Network Platform Software (ONPS) components. The user should install the following packages
(see the example command below):
git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff
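A one-line way to install them with yum (a sketch; the package names are exactly those listed above, and network access or working proxies are assumed):

yum install -y git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff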
ONPS supports Fedora kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10. To upgrade
to 3.15.6, follow these steps:
1. Download the kernel packages.
wget https://kojipkgs.fedoraproject.org//packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm
wget https://kojipkgs.fedoraproject.org//packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm
wget https://kojipkgs.fedoraproject.org//packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm


2. Install kernel packages


rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm
3. Reboot system to allow booting into 3.15.6 kernel
Note:

ONPS depends on libraries provided by your Linux distribution. As such, it is recommended


that you regularly update your Linux distribution with the latest bug fixes and security
patches to reduce the risk of security vulnerabilities in your systems.

After installing the required packages, the operating system should be updated with the following
command:
yum update -y
This command also upgrades to the latest kernel that Fedora supports. To keep kernel version 3.15.6,
the yum configuration file needs to be modified with the following command before running yum update:
echo "exclude=kernel*" >> /etc/yum.conf
After the update completes, reboot the system.
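Putting these steps together, a minimal sketch of the sequence (run as root) is:

echo "exclude=kernel*" >> /etc/yum.conf
yum update -y
reboot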

5.1.2.4 Disable and Enable Services

For OpenStack, the following services were disabled: SELinux, firewalld, and NetworkManager. Run the
following commands:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable NetworkManager.service
The following services should be enabled: ntp, sshd, and network. Run the following commands:
systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the time synchronized between all nodes, and it is also necessary to use a known
NTP server for all nodes. Users can edit /etc/ntp.conf to add a new server and remove the default servers.
The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments
out the other default servers:
sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.0.12/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/# server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/# server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/# server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the
operating system installation and configuration sections.
Note:

Make sure to download and use the onps_server_1_2.tar.gz tarball and start with its README
file. It gives instructions on how to use Intel's scripts to automate most of the installation
steps described in this section, which saves time.

5.2.1 OpenStack (Juno)

This section documents features and limitations that are supported with the Intel DPDK Accelerated
vSwitch and OpenStack Juno.

5.2.1.1 Network Requirements

General
At least two networks are required to build OpenStack infrastructure in a lab environment. One network
is used to connect all nodes for OpenStack management (management network), and the other one is
a private network, exclusively for an OpenStack internal connection (tenant network) between
instances (or virtual machines).
One additional network is required for Internet connectivity, as installing OpenStack requires pulling
packages from various sources/repositories on the Internet.
Some users might want to have Internet and/or external connectivity for OpenStack instances (virtual
machines). In this case, an optional network can be used.
The assumption is that the target OpenStack infrastructure contains multiple nodes: one controller
node and one or more compute nodes.
Network Configuration Example
The following is an example of how to configure networks for OpenStack infrastructure. The example
uses four network interfaces as follows:
ens2f1: For Internet network - Used to pull all necessary packages/patches from repositories on the
Internet; configured to obtain a DHCP address.
ens2f0: For Management network - Used to connect all nodes for OpenStack management;
configured to use network 10.11.0.0/16.
p1p1: For Tenant network - Used for OpenStack internal connections for virtual machines;
configured with no IP address.
p1p2: For Optional External network - Used for virtual machine Internet/external connectivity;
configured with no IP address. This interface is only in the Controller node if external network is
configured. For Compute node, this interface is not needed.
Note that, among these interfaces, the interface for the tenant (virtual) network (in this example, p1p1)
must be an 82599 port because it is used by DPDK and the Intel DPDK Accelerated vSwitch. Also note
that a static IP address should be used for the management network interface.
In Fedora 20, the network configuration files are located at:
/etc/sysconfig/network-scripts/


To configure a network on the host system, edit the following network configuration files:
ifcfg-ens2f1
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp
ifcfg-ens2f0
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0
ifcfg-p1p1
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
ifcfg-p1p2
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
Note:

Do not configure the IP address for p1p1 (10 Gb/s interface); otherwise, DPDK does not
work when binding the driver during OpenStack Neutron installation.

Note:

10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management
network. A static IP address is required on this subnet; the address 10.11.12.11 is just an
example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not
specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes
is the name of the volume group, not of a single volume.
The following example shows how to use spare local disks, /dev/sdb and /dev/sdc, to form stack-volumes
on a controller node by running the following commands:
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
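To confirm that the volume group spans both disks, the standard LVM tools can be used (an optional check, not part of the original procedure):

pvs
vgdisplay stack-volumes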

5.2.1.3 OpenStack Installation Procedures

General
DevStack is used to deploy OpenStack in this example. The following procedure uses an actual example
of an installation performed in an Intel test lab, consisting of one controller node (controller) and one
compute node (compute).
Controller Node Installation Procedures
The following example uses a host for controller node installation with the following:
Hostname: sdnlab-k01
Internet network IP address: Obtained from DHCP server


OpenStack Management IP address: 10.11.12.1


User/password: stack/stack
Root User Actions
Log in as root (or switch to the root user with su) and perform the following:
1. Add stack user to sudoer list
echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
2. Edit /etc/libvirt/qemu.conf, add or modify with the following lines:
cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset","cpuacct" ]
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun",
"/mnt/huge", "/dev/vhost-net"]
hugetlbfs_mount = "/mnt/huge"
3. Restart the libvirt service and make sure libvirtd is active.
systemctl restart libvirtd.service
systemctl status libvirtd.service
Stack User Actions
1. Login as a stack user.
2. Configure the appropriate proxies (yum, http, https, and git) for package installation, and make
sure these proxies are functional. Note that on the controller node, localhost and its IP address
should be included in the no_proxy setup (for example, export no_proxy=localhost,10.11.12.1).
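As an illustration only (the proxy host below is a placeholder for your site's proxy, not part of this reference setup), the proxies can be exported in the shell and configured for git as follows:

export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,10.11.12.1
git config --global http.proxy http://proxy.example.com:8080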
3. Download the Intel DPDK Accelerated vSwitch patches for OpenStack.
The file openstack_ovdk.l.0.2-907.zip contains the necessary patches for OpenStack; they are not
currently part of upstream OpenStack. The file can be downloaded from:
https://01.org/sites/default/files/page/openstack_ovdk.l.0.2-907.zip
Place the file in the /home/stack/ directory and unzip it. Three patch files (devstack.patch,
nova.patch, and neutron.patch) will be present after unzipping.
cd /home/stack
wget https://01.org/sites/default/files/page/openstack_ovdk.l.0.2-907.zip
unzip openstack_ovdk.l.0.2-907.zip
4. Download DevStack source.
git clone https://github.com/openstack-dev/devstack.git
5. Check out DevStack with Intel DPDK Accelerated vSwitch and patch.
cd /home/stack/devstack/
git checkout d6f700db33aeab68916156a98971aef8cfa53a2e
patch -p1 < /home/stack/devstack.patch


6. Download and patch Nova and Neutron.
sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack/
git clone https://github.com/openstack/nova.git
git clone https://github.com/openstack/neutron.git
cd /opt/stack/nova/
git checkout b7738bfb6c2f271d047e8f20c0b74ef647367111
patch -p1 < /home/stack/nova.patch
7. Create local.conf file in /home/stack/devstack/.
8. Pay attention to the following in the local.conf file:
a. Use Rabbit for messaging services (Rabbit is on by default). In the past, Fedora only supported
QPID for OpenStack. Now it only supports Rabbit.
b. Explicitly disable Nova compute service on the controller. This is because by default, Nova
compute service is enabled.
disable_service n-cpu
c. To use Open vSwitch, specify in configuration for ML2 plug-in.
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
d. Explicitly disable tenant tunneling and enable tenant VLANs, because tunneling is used by default.
ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
e. A sample local.conf files for controller node is as follows:
# Controller_node
[[local|localrc]]
FORCE=yes
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password
disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon
DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1
HOST_IP_IFACE=ens2f0
PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1
MULTI_HOST=True
[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
9. Install DevStack
cd /home/stack/devstack/
./stack.sh
10. For a successful installation, the following appears at the end of the screen output:
stack.sh completed in XXX seconds
where XXX is the number of seconds the installation took.
11. For the controller node only, add the physical port(s) to the bridge(s) created by the DevStack
installation. The following example configures the two bridges br-p1p1 (for the virtual network) and
br-ex (for the external network).
sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2
12. Make sure proper VLANs are created in the switch connecting physical port p1p1. For example, the
previous local.conf specifies VLAN range of 1000-1010; therefore matching VLANs 1000 to 1010
should be configured in the switch.


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has
successfully completed the BIOS settings and operating system installation and configuration sections.
Note:

Please make sure to download and use the onps_server_1_2.tar.gz tarball and start with its
README file. It gives instructions on how to use Intel's scripts to automate most of the
installation steps described in this section, which saves time.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General
Deploying OpenStack and Intel DPDK Accelerated vSwitch using DevStack on a compute node follows
the same procedures as on the controller node. Differences include:
Required services are nova compute, neutron agent, and Rabbit.
Intel DPDK Accelerated vSwitch is used in place of Open vSwitch for neutron agent.
Compute Node Installation Example
The following example uses a host for compute node installation with the following:
Hostname: sdnlab-k02
Lab network IP address: Obtained from DHCP server
OpenStack Management IP address: 10.11.12.2
User/password: stack/stack
Note the following:
No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition,
hostname and IP address of the controller node should also be included. For example:
export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1
Differences in the local.conf file:
The service host is the controller, which also runs the other OpenStack services such as MySQL,
Rabbit, Keystone, and Image; therefore, these should be spelled out explicitly. Using the controller
node example in the previous section, the service host and its IP address should be:
SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1
The only OpenStack services required in compute nodes are messaging, nova compute, and
neutron agent, so the local.conf might look like:
disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


The user has option to use ovdk or openvswitch for neutron agent:
Q_AGENT=ovdk
or
Q_AGENT=openvswitch

Note:

For openvswitch, the user can specify regular or accelerated Open vSwitch (accelerated OVS).
If accelerated OVS is used, the following setting should be added:
OVS_DATAPATH_TYPE=netdev

Note:

If both are specified in the same local.conf file, the one specified later overwrites the
earlier one.

For the OVDK and accelerated OVS huge page settings, specify the number of huge pages to be
allocated and the mount point (default is /mnt/huge/).
OVDK_NUM_HUGEPAGES=8192
or
OVS_NUM_HUGEPAGES=8192
For this version, Intel uses specific versions for OVDK or Accelerated OVS from their respective
repositories. Specify the following in the local.conf file if OVDK or accelerated OVS is used:
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935
Binding the physical port to the bridge is through the following line in local.conf. For example,
to bind port p1p1 to bridge br-p1p1, use:
OVS_PHYSICAL_BRIDGE=br-p1p1
A sample local.conf file for compute node with ovdk agent follows:
# Compute node
[[local|localrc]]
FORCE=yes
MULTI_HOST=True
HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password
disable_all_services
enable_service rabbit


enable_service n-cpu
enable_service q-agt
DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1
Q_AGENT=ovdk
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVDK_NUM_HUGEPAGES=8192
OVDK_GIT_TAG=6210bb0a6139b20283de115f87aa7a381b04670f
ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1
[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
A sample local.conf file for compute node with accelerated ovs agent follows.
# Compute node
#
[[local|localrc]]
FORCE=yes
MULTI_HOST=True
HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password
disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt
DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935
ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-p1p1
[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP

5.4 vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package, as previously described.
To configure it to run in inline mode (IPS), use the following:
1. Turn on IP forwarding.
# sysctl -w net.ipv4.ip_forward=1
2. Mangle all traffic from one vPort to the other using a netfilter queue.
# iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
# iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE
3. Have Suricata run in inline mode using the netfilter queue.
# suricata -c /etc/suricata/suricata.yaml -q 0
4. Enable ARP proxying.
# echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
# echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
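As a quick sanity check (not part of the original procedure), the NFQUEUE rules and their packet counters can be inspected with:

# iptables -vnL FORWARD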

5.4.1 Network Configuration for non-vIPS Guests

1. Turn on IP forwarding.
# sysctl -w net.ipv4.ip_forward=1
2. In the source, add the route to the sink.
# route add -net 192.168.200.0/24 eth1
3. At the sink, add the route to the source.
# route add -net 192.168.100.0/24 eth1


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual
network(s), and verify the functionality.
Note:

Currently, it is not possible to have more than one virtual network in a multi-compute-node
setup. It is, however, possible to have more than one virtual network in a single-compute-node
setup.

6.1 Preparation with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:


Tenant (Project): admin, demo
Network:
Private network (virtual network): 10.0.0.0/24
Public network (external network): 172.24.4.0/24
Image: cirros-0.3.1-x86_64
Flavor: nano, micro, tiny, small, medium, large, xlarge
To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network)
users must create their own. See below for details of how to create them.
To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer or others) and the
controller's IP address (management network). For example:
http://10.11.12.1/
Login information is defined in the local.conf file. In the examples that follow, password is the
password for both admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability
zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.
1. Create a credential file, admin-cred, for the admin user. The file contains the following lines:
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0/

2. Source admin-cred to the shell environment for actions of creating glance image, aggregate/
availability zone, and flavor.
source admin-cred
3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by
OpenStack.
glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>
The following example assumes the image file fedora20-x86_64-basic.qcow2 is located on an NFS
share mounted at /mnt/nfs/openstack/images/ on the controller host. The following command
creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant
can use this glance image).
glance image-create --name fedora-basic --is-public=true --container-format=bare
--disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2
4. Create host aggregate and availability zone:
First, find out the available hypervisors and then use the information for creating aggregate/
availability zone.
nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>
The following example creates an aggregate named aggr-g06 with one availability zone named
zone-g06, and the aggregate contains one hypervisor named sdnlab-g06.
nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06
5. Create flavor. Flavor is a virtual hardware configuration for the VMs; it defines the number of virtual
CPUs, size of virtual memory and disk space, among others.
The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual
memory, 4 GB of virtual disk space, and 1 virtual CPU.
nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a
VM for the demo tenant, using OpenStack commands. Again, the example assumes the IP address of
the controller is 10.11.12.1.
1. Create a credential file, demo-cred, for the demo user. The file contains the following lines:
export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0/

2. Source demo-cred to the shell environment for actions of creating tenant network and instance
(VM).
source demo-cred
3. Create network for tenant demo. Take the following steps:
a. Get tenant demo.
keystone tenant-list | grep -Fw demo
The following creates a network with a name of net-demo for tenant with ID
10618268adb64f17b266fd8fb83c960d:
neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo
b. Create subnet.
neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name>
<network-name> <net-ip-range>
The following creates a subnet named sub-demo with CIDR 192.168.2.0/24 for the network net-demo.
neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d
--name sub-demo net-demo 192.168.2.0/24
4. Create instance (VM) for tenant demo. Take the following steps:
a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating
instance.
glance image-list
nova flavor-list
nova aggregate-list
neutron net-list
b. Launch an instance (VM) using the information obtained from the previous step (a concrete example follows step c).
nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>
c. The new VM should be up and running in a few minutes.
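For example, using the fedora-basic image, onps-flavor flavor, and zone-g06 availability zone created earlier (the instance name demo-vm1 is arbitrary, and the net-demo network ID comes from neutron net-list):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-network-id> demo-vm1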
5. Log into the OpenStack dashboard using the demo user credentials and click Instances under Project
in the left pane; the new VM should appear in the right pane. Click the instance name to open the
Instance Details view, then click Console in the top menu to access the VM.


6.1.1.4 Local vIPS

Figure 6-1 Local vIPS

Configuration
1. OpenStack brings up the VMs and connects them to the vSwitch.
2. IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3
to a different one. VM2 has ports on both subnets.
3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).
Data Path (Numbers Matching Red Circles)
1. VM1 sends a flow to VM3 through the vSwitch.
2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).
3. The IPS receives the flow, inspects it and (if not malicious) sends it out through its second vPort.
4. The vSwitch forwards it to VM3.


6.1.1.5 Remote vIPS

Figure 6-2 Remote vIPS

Configuration
1. OpenStack brings up the VMs and connects them to the vSwitch.
2. The IP addresses of the VMs get configured using the DHCP server.
Data Path (Numbers Matching Red Circles)
1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.
2. The vSwitch forwards the flow out of the first 82599 port to the first 82599 port of compute node 2.
3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic
gets consumed by VM1.
4. The IPS receives the flow, inspects it, and (provided it is not malicious) sends it out through its
second port of the vHost into the vSwitch of compute node 2.
5. The vSwitch forwards the flow out of the second 82599 port of compute node 2 into the second port
of the 82599 in compute node 1.
6. The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow
gets terminated.


6.1.2 Non-uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an
OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV-enabled
network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack
SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Prepare Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:
1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run
the following command; the output should show IOMMU entries.
dmesg | grep -e IOMMU
Note:

IOMMU can be enabled/disabled through a BIOS setting, under Advanced and then
Processor.

2. Enable kernel IOMMU in grub. For Fedora 20, run the following commands:
sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
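After the next reboot, the IOMMU setting can be verified (an optional check); the kernel command line should contain intel_iommu=on:

cat /proc/cmdline
dmesg | grep -i iommu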
3. Install necessary packages:
yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel
dbus-devel numactl-devel python-devel
4. Install Libvirt to v1.2.8 or newer. The following example uses v1.2.9:
systemctl stop libvirtd
yum remove libvirt
yum remove libvirtd
wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz
cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install
systemctl start libvirtd
Make sure libvirtd is running v1.2.9:
libvirtd --version
5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version.
yum remove libvirt-python
wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz
cd libvirt-python-1.2.9
python setup.py install


6. Modify /etc/libvirt/qemu.conf, add:


"/dev/vfio/vfio"
to
cgroup_device_acl list
An example follows:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun",
"/dev/vfio/vfio"]
7. Enable the SR-IOV virtual function for an 82599 interface. The following example enables 2 VFs for
interface p1p1
echo 2 > /sys/class/net/p1p1/device/sriov_numvfs
To check that virtual functions are enabled:
lspci -nn | grep 82599
The screen output should display the physical function and two virtual functions.
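The VFs can also be checked from the network side (an optional alternative check); the output should list vf 0 and vf 1 entries under the physical interface:

ip link show p1p1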

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node
with IP address 10.11.12.4. The PCI vendor ID (8086) and the product IDs of the 82599 (10fb for the
physical function and 10ed for the VF) can be obtained from the output of:
lspci -nn | grep 82599
On Controller node:
1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here;
add the following:
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch
[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,
ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,
NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed
2. Run ./stack.sh
On Compute node:
1. Edit /opt/stack/nova/requirements.txt, add libvirt-python>=1.2.8.
echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt
2. Edit Compute local.conf for accelerated OVS. Note that the same local.conf file of Section 5.3.1.1 is
used here.


3. Add the following:
[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}
4. Remove (or comment out) the following. Note that currently, SR-IOV pass-through is only
supported with standard OVS:
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935
Run ./stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Create a VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through
device(s) are in the OpenStack database.
mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'
2. The output should show entry(ies) of PCI device(s) similar to the following:
| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3. Next, to create a flavor, for example:


nova flavor-create numa-flavor 1001 1024 4 1
where:
flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1
4. Modify flavor for numa placement with PCI pass-through.
nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1
hw:numa_cpus.0=0 hw:numa_mem.0=1024
5. To show detailed information of the flavor:
nova flavor-show 1001
6. Create a VM, numa-vm1, with the flavor numa-flavor under the default project demo. Note that the
following example assumes an image (fedora-basic) and an availability zone (zone-04) are already in
place (see Section 6.1.1.2), and that private is the default network for the demo project.
nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name>
--nic <network-id> numa-vm1
where numa-vm1 is the name of instance of the VM to be booted.


Access the VM from OpenStack Horizon; the new VM shows two virtual network interfaces. The
interface with an SR-IOV VF should show a name of ensX, where X is a number (for example, ens5).
If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP
address automatically; otherwise, users can assign an IP address to the interface the same way as for a
standard network interface.
To verify network connectivity through a VF, users can set up two compute hosts and create a VM on
each node. After obtaining IP addresses, the VMs should communicate with each other as over a normal
network.
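For example, from one of the VMs (a simple connectivity check; substitute the IP address the peer VM obtained):

ping -c 4 <peer-vm-ip-address>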

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1 distribution.
wget http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1/distribution-karaf-0.2.1-Helium-SR1.tar.gz
2. Extract the archive and cd into it.
tar xf distribution-karaf-0.2.1-Helium-SR1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1
3. Use the bin/karaf executable to start the Karaf shell.


4. Install the required features.
Karaf might take a long time to start, and the feature install might fail if the host does not have
network access; you'll need to set up the appropriate proxy settings.
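The exact feature list depends on the deployment and release, so the following is only an illustration (not the definitive list for this setup): OVSDB-based OpenStack integration on Helium is typically enabled from the Karaf shell with a command such as:

feature:install odl-ovsdb-openstack

feature:list -i can then be used in the Karaf shell to confirm which features are installed.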

6.3 Border Network Gateway

This section describes how to install and run a Border Network Gateway on a compute node that has
been prepared as described in Section 5.1 and Section 5.3. The example interface names from those
sections are used here as well. For simplicity, the BNG uses the handle_none configuration, which makes
it work as an L2 forwarding engine. The BNG is more complex than this, and users who are interested in
exploring more of its capabilities should read
https://01.org/intel-data-plane-performance-demonstrators/quick-overview.
The setup used to test the functionality of the vBNG follows:


6.3.1 Installation and Configuration Inside the VM

1. Execute the following command:


#yum -y update
2. Disable SELinux.
# setenforce 0
# vi /etc/selinux/config
and change the setting to SELINUX=disabled.
3. Disable the firewall and reboot.
systemctl disable firewalld.service
reboot
4. Edit the grub default configuration.
vi /etc/default/grub
Add hugepage and CPU-isolation parameters to the kernel command line (GRUB_CMDLINE_LINUX):
noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4
5. Rebuild grub config and reboot the system.
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
6. Verify that hugepages are available in the VM.
cat /proc/meminfo
...
HugePages_Total:2
HugePages_Free:2
Hugepagesize:1048576 kB
...
7. Add the following to the end of the ~/.bashrc file:
# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs
export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------
8. Re-login or source that file:
. .bashrc
9. Install DPDK.
git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko


10. Check the PCI addresses of the 82599 cards:
lspci | grep Network
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:05.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
00:07.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

11. Make sure that the correct PCI addresses are listed in the script bind_to_igb_uio.sh (a direct binding sketch follows).
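If the script needs adjusting, the same binding can be done directly with the DPDK tool (a sketch using the example PCI addresses shown above):

$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0 00:06.0 00:07.0
$RTE_SDK/tools/dpdk_nic_bind.py --status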
12. Download the BNG packages.
wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip
13. Extract DPPD BNG sources.
unzip dppd-bng-v013.zip
14. Build BNG DPPD application.
yum -y install ncurses-devel
cd dppd-BNG-v013
make
15. Refer to Section 6.3.3, Extra Preparations on the Compute Node, before running the BNG
application in the VM inside the compute node.
16. Make sure that the application starts.
./build/dppd -f config/handle_none.cfg
The handle_none configuration passes all traffic straight through between ports, which is essentially
similar to the L2 forwarding test. The config directory contains additional, more complex BNG
configurations and Pktgen scripts; additional BNG-specific workloads can be found in the Pktgen scripts
shipped with dppd-BNG-v013.
Following is a sample graphic of the BNG running in a VM with 2 ports:


Exit the application by pressing ESC or CTRL-C.


Refer to Section 6.3.2 regarding installation and running the software traffic generator.
For a sanity check, users can run PktGen (on its dedicated server) with the wrapper script
onps_pktgen-64bytes-UDP-2ports.sh, which invokes pktgen-64bytes.sh, to test handle_none throughput
across two physical and two virtual ports. You'll need to update the PKTGEN_DIR variable at the top of
the file so that it points to the right directory which, following Section 6.3.2, is:
PKTGEN_DIR=/home/stack/git/Pktgen-DPDK

6.3.2 Installation and Configuration of the Back-to-Back Host (Packet Generator)

The back-to-back host can be any Intel Xeon processor-based system, or it can be any compute
node that has been prepared using the instructions in Section 5.1 and Section 5.3. For simplicity, Intel
assumes the latter is the case. Also assume that the git directory for the stack user is /home/stack/git/.
1. In the git directory, get the source from GitHub.
git clone https://github.com/Pktgen/Pktgen-DPDK.git
cd Pktgen-DPDK
2. An extra package must be installed for Pktgen to compile correctly.
yum -y install libpcap-devel
Pktgen comes with its own distribution of the DPDK sources. This bundled version of DPDK must be used. Note that it contains some Wind River*-specific helper libraries, on which Pktgen depends, that are not in the default DPDK distribution.
3. The $RTE_TARGET variable must be set to a specific value. Otherwise, these libraries will not build.
cd
vi .bashrc
Add the following three lines to the end:
export RTE_SDK=$HOME/Pktgen-DPDK/dpdk
export RTE_TARGET=x86_64-pktgen-linuxapp-gcc
export PKTGEN_DIR=$HOME/Pktgen-DPDK
4. Re-login, or execute the following command:
. .bashrc
5. Build the basic DPDK libraries and extra helpers.
cd $RTE_SDK
make install T=$RTE_TARGET
6. Build Pktgen.
cd examples/pktgen
make
7. Adapt the dpdk_nic_bind.py invocation to the actual NICs in use so that both interfaces are bound to igb_uio and DPDK can use them. Check the current binding status with the following command:
tools/dpdk_nic_bind.py --status
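If the two test interfaces are still bound to a kernel driver, they can be moved to igb_uio with the --bind option. The PCI addresses below are placeholders; use the ones reported by --status for your NICs, and make sure the igb_uio module is loaded first (modprobe uio; insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko):
tools/dpdk_nic_bind.py --bind=igb_uio 0000:04:00.0 0000:04:00.1
tools/dpdk_nic_bind.py --status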
8. Use onps_pktgen-64-bytes-UDP-2ports.sh from onps_server_1_2.tar.gz.

9. Run the script as root after the compute node has been set up as in Section 6.3.3, the VM has been prepared as in Section 6.3.1, and the BNG is running inside the VM.

6.3.3 Extra Preparations on the Compute Node

1. Do the following as the stack user:
cd /home/stack/devstack
vi local.conf
2. Comment out the following:
#PHYSICAL_NETWORK=physnet1
#OVS_PHYSICAL_BRIDGE=br-p1p1
Then add the following line directly below the commented lines:
OVS_BRIDGE_MAPPINGS=default:br-p1p1,physnet1:br-p1p2
3. Run again as stack user:
./unstack.sh
./stack.sh
This brings up both physical interfaces and binds them to DPDK. A bridge is also created on top of each interface:
# ovs-vsctl show
b52bd3ed-0f6c-45b9-ace1-846d901bed64
    Bridge "br-p1p1"
        Port "br-p1p1"
            Interface "br-p1p1"
                type: internal
        Port "p1p1"
            Interface "p1p1"
                type: dpdkphy
                options: {port="0"}
        Port "phy-br-p1p1"
            Interface "phy-br-p1p1"
                type: patch
                options: {peer="int-br-p1p1"}
    Bridge br-int
        fail_mode: secure
        Port "int-br-p1p2"
            Interface "int-br-p1p2"
                type: patch
                options: {peer="phy-br-p1p2"}
        Port "int-br-p1p1"
            Interface "int-br-p1p1"
                type: patch
                options: {peer="phy-br-p1p1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-p1p2"
        Port "phy-br-p1p2"
            Interface "phy-br-p1p2"
                type: patch
                options: {peer="int-br-p1p2"}
        Port "p1p2"
            Interface "p1p2"
                type: dpdkphy
                options: {port="1"}
        Port "br-p1p2"
            Interface "br-p1p2"
                type: internal
4. Move the p1p2 physical port under the same bridge as p1p1.
#ovs-vsctl del-port p1p2
#ovs-vsctl add-port br-p1p1 p1p2 -- set Interface p1p2 type=dpdkphy options:port=1
5. Stop the OpenStack agent from the DevStack screen session:
rejoin-stack.sh
ctrl-a 1
ctrl-c
ctrl-a d
6. Add the dpdkvhost interfaces for the VM:
ovs-vsctl --no-wait add-port br-p1p1 port3 -- set Interface port3 type=dpdkvhost ofport_request=3
ovs-vsctl --no-wait add-port br-p1p1 port4 -- set Interface port4 type=dpdkvhost ofport_request=4
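A quick way to confirm that the ports were created (a standard OVS command, not part of the original steps):
# port3 and port4 should appear alongside p1p1 and p1p2
ovs-vsctl list-ports br-p1p1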
7. Find the OpenFlow port numbers of the attached interfaces:
ovs-ofctl show br-p1p1
The output should be similar to the following. Note the number to the left of each interface name; it is the OpenFlow port number.
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000286031010000
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_TP_SRC SET_TP_DST
 1(phy-br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(p1p2): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 3(port3): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(port4): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 16(p1p1): addr:49:04:ff:7f:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-p1p1): addr:9e:ae:92:25:3c:c1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
8. Clean up the flow table of the bridge.
ovs-ofctl del-flows br-p1p1
9. Program the flows so each physical interface forwards the packets to a dpdkvhost interface and the
other way round.
ovs-ofctl add-flow br-p1p1 in_port=16,dl_type=0x0800,idle_timeout=0,action=output:3
ovs-ofctl add-flow br-p1p1 in_port=3,dl_type=0x0800,idle_timeout=0,action=output:16
ovs-ofctl add-flow br-p1p1 in_port=4,dl_type=0x0800,idle_timeout=0,action=output:2
ovs-ofctl add-flow br-p1p1 in_port=2,dl_type=0x0800,idle_timeout=0,action=output:4
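The installed flows can be verified before starting the VM (illustrative check):
# Four forwarding entries between the physical and dpdkvhost ports should be listed
ovs-ofctl dump-flows br-p1p1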

10. Users can now spawn their vBNG.


qemu-kvm -cpu host -enable-kvm -m 4096 -smp 4,cores=4,threads=1,sockets=1 \
 -name VM1 -hda <path to the VM image file> -mem-path /dev/hugepages -mem-prealloc \
 -vnc :2 -daemonize \
 -net nic,model=virtio,macaddr=00:1e:77:68:09:fd \
 -net tap,ifname=tap1,script=no,downscript=no \
 -netdev type=tap,id=net1,script=no,downscript=no,ifname=port3,vhost=on \
 -device virtio-net-pci,netdev=net1,mac=00:00:01:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
 -netdev type=tap,id=net2,script=no,downscript=no,ifname=port4,vhost=on \
 -device virtio-net-pci,netdev=net2,mac=00:00:01:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
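Once the VM is up, the two virtio interfaces attached to port3 and port4 must be bound to igb_uio inside the guest, as prepared in Section 6.3.1, before launching the BNG. A minimal sketch, assuming the virtio devices appear at the PCI addresses below and that the DPPD sources were extracted under /root:
# Inside the VM: bind the two data-plane interfaces to igb_uio (addresses are examples)
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 0000:00:06.0 0000:00:07.0
cd /root/dppd-BNG-v013
./build/dppd -f config/handle_none.cfg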

Appendix A Additional OpenDaylight Information
This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller + compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.
Note: Due to a known defect in ODL (https://bugs.opendaylight.org/show_bug.cgi?id=2469), the multi-node setup could not be verified.
Following is a sample local.conf for the OpenDaylight host.
[[local|localrc]]
FORCE=yes
HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<mgmt. ip, isolated from internet>
PUBLIC_INTERFACE=<isolated IP, could be same as HOST_IP_IFACE>
VLAN_INTERFACE=
FLAT_INTERFACE=
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password
disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1
#ODL start
Q_HOST=$HOST_IP
enable_service odl-server
enable_service odl-compute
ODL_MGR_IP=10.11.10.7
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
#Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vxlan
ENABLE_TENANT_TUNNELS=True
#ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
#OVS_PHYSICAL_BRIDGE=br
MULTI_HOST=True
[[post-config|$NOVA_CONF]]
#disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
Here is a sample local.conf for the Compute Node.
[[local|localrc]]
FORCE=yes
MULTI_HOST=True
HOST_NAME=<name of this machine>
HOST_IP=<ip of this machine>
HOST_IP_IFACE=<isolated interface>
SERVICE_HOST_NAME=<name of the controller machine>
SERVICE_HOST=<ip of controller machine>
Q_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=<ip of controller machine>
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password
disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt
enable_service odl-compute
DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

ODL_MGR_IP=<ip of controller machine>


Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
#OVS_NUM_HUGEPAGES=8192
#OVS_DATAPATH_TYPE=netdev
#OVDK_OVS_GIT_TAG=
ENABLE_TENANT_TUNNELS=True
#ENABLE_TENANT_VLANS=True
Q_ML2_TENANT_NETWORK_TYPE=vxlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
#OVS_PHYSICAL_BRIDGE=br-p1p1
[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
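After both nodes are stacked, one way to confirm that OpenDaylight has created the VXLAN tunnel is to inspect OVS on either host. This is an illustrative check; the exact manager address and tunnel port names depend on the deployment:
ovs-vsctl show
# Expect a Manager entry pointing at the ODL controller and a Port/Interface of type vxlan
# whose options list the peer host as remote_ip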

A.1 Create VMs Using the DevStack Horizon GUI
After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.
Log in to http://<control node ip address>:8080 to start the Horizon GUI.
Verify that the compute node shows up in the GUI.

Create a new VXLAN network:
1. Click on the Networks tab.
2. Click on the Create Network button.
3. Enter the Network name then click Next.

4. Enter the subnet information then click Next.

5. Add additional information then click Next.

6. Click the Create button.


7. Create a VM instance by clicking the Launch Instances button.

8. Click on the Details tab to enter VM details.

9. Click on the Networking tab, then enter the network information.
The VMs will now be created.
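The same network and instance can also be created from the command line instead of the Horizon GUI. A minimal sketch using the standard Neutron and Nova clients; the network name, subnet, image, and flavor are placeholders:
# Create a VXLAN tenant network and subnet (names are examples)
neutron net-create vxlan-net
neutron subnet-create vxlan-net 10.0.0.0/24 --name vxlan-subnet
# Boot a VM attached to that network; replace image, flavor, and net-id with real values
nova boot --image <image name> --flavor m1.small --nic net-id=<id of vxlan-net> VM1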

Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is
possible to disable the bundle from the OSGi console. However, there does not appear to be a way to
make this persistent, so it must be done each time the controller restarts.

Once the controller is up and running, connect to the OSGi console. The ss command displays all installed bundles and their status; adding a string filters the list. List the OVSDB bundles:
osgi> ss ovs
"Framework is launched."
id      State      Bundle
106     ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE     org.opendaylight.ovsdb_0.5.0
262     ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0
Note that there are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in
this case).
Disable the OVSDB neutron bundle and then list the OVSDB bundles again:
osgi> stop 262
osgi> ss ovs
"Framework is launched."
id      State      Bundle
106     ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE     org.opendaylight.ovsdb_0.5.0
262     RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0
Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.

Appendix B BNG as an Appliance
Download the latest BNG application from https://01.org/intel-data-plane-performance-demonstrators/downloads. More details about how the BNG works can be found at https://01.org/intel-data-plane-performance-demonstrators/quick-overview.

Appendix C Glossary

Acronym    Description
ATR        Application Targeted Routing
COTS       Commercial Off-The-Shelf
DPI        Deep Packet Inspection
FCS        Frame Check Sequence
GRE        Generic Routing Encapsulation
GRO        Generic Receive Offload
IOMMU      Input/Output Memory Management Unit
Kpps       Kilo packets per second
KVM        Kernel-based Virtual Machine
LRO        Large Receive Offload
MSI        Message Signaled Interrupt
MPLS       Multi-protocol Label Switching
Mpps       Million packets per second
NIC        Network Interface Card
pps        Packets per second
QAT        Quick Assist Technology
QinQ       VLAN stacking (802.1ad)
RA         Reference Architecture
RSC        Receive Side Coalescing
RSS        Receive Side Scaling
SP         Service Provider
SR-IOV     Single Root I/O Virtualization
TCO        Total Cost of Ownership
TSO        TCP Segmentation Offload

Appendix D References

Internet Protocol version 4
    http://www.ietf.org/rfc/rfc791.txt
Internet Protocol version 6
    http://www.faqs.org/rfc/rfc2460.txt
Intel 82599 10 Gigabit Ethernet Controller Datasheet
    http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html
Intel DDIO
    https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html
Bandwidth Sharing Fairness
    http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html
Design Considerations for Efficient Network Applications with Intel Multi-core Processor-based Systems on Linux
    http://download.intel.com/design/intarch/papers/324176.pdf
OpenFlow with Intel 82599
    http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop2011-03-31/Openflow_1103031.pdf
Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012.
    http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf
Why Does Flow Director Cause Packet Reordering?
    http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf
IA Packet Processing
    http://www.intel.com/p/en_US/embedded/hwsw/technology/packetprocessing
High Performance Packet Processing on Cloud Platforms Using Linux* with Intel Architecture
    http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf
Packet Processing Performance of Virtualized Platforms with Linux* and Intel Architecture
    http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf
DPDK
    http://www.intel.com/go/dpdk
Intel DPDK Accelerated vSwitch
    https://01.org/packet-processing

LEGAL
By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.
You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel
products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which
includes subject matter disclosed herein.
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY
ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN
INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL
DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR
WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT,
COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software,
operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and
performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when
combined with other products.
The products described in this document may contain design defects or errors known as errata which may cause the product to
deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or
your distributor to obtain the latest specifications and before placing your product order.
Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or
retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or
configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your
purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
All products, computer systems, dates and figures specified are preliminary based on current expectations, and are subject to change
without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and
provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your
actual performance.
No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages
resulting from such losses.
Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm
whether referenced data are accurate.
Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that
relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any
license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property
rights.
© 2014 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon and others are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.
